DBA Blogs

2013 Chevrolet Spark LT for sale

Ameed Taylor - Wed, 2014-09-17 21:36
The 2013 Chevrolet Spark LT minicar is the smallest Chevy offered in the U.S. in several years. Aimed at Millennial buyers, it is meant to be low-cost, colorful, and digitally connected for first-time car buyers who find their cell phones more interesting than new vehicles.

The compact five-door hatchback competes with a growing field of minicars that includes the MINI Cooper that launched the segment in the U.S., the new Fiat 500, the Mitsubishi i electric car, and a pair of two-seaters, the aging Smart Fortwo and the new Scion iQ.

Even though its 12-foot-1-inch length is exactly three feet shorter than a Chevrolet Cruze compact four-door sedan, the Spark doesn't look as small as it is. Only in scale (parked beside a sport-utility vehicle, for example) is its actual size apparent.

The designers have done all they could to disguise the Spark's proportions. It's a tall car perched on small 15-inch wheels, but character lines, black plastic panels that continue the window line, and a large Chevy twin-port grille help to hide the slab-sided, snub-nosed box-on-wheels look.

The handles for the rear doors are hidden in the black plastic trim, which Chevy says gives the car a "coupe-like" look. We don't really buy that, but visually the rear doors don't immediately read as doors at first glance. While the rear end is all but vertical, a long roof spoiler extends to frame the rear window opening, adding depth to the back and improving the Spark's aerodynamics too.

Inside, the textured hard-plastic dash makes no attempt to disguise what it is by imitating another material. Likewise, the seats are unashamedly synthetic fabric. But body-color inserts on the door trim, inside the door bins, and on the dashboard add a cheerful note. They recall the painted interior steel found in economy cars of the Sixties, although in the Spark they are all body-color plastic added for effect.

Like its larger sibling the Sonic, the Spark has a "motorcycle style" instrument cluster mounted on the steering column. It holds a speedometer and a small car-information display. The base Spark has a small monochrome center display, but all LT trim levels feature a 7-inch color touchscreen in the center stack.

There is only one engine in the Spark, an 84-horsepower 1.2-liter four-cylinder Ecotec that puts out 83 lb-ft of torque. The Spark comes standard with a five-speed manual gearbox, and a four-speed automatic transmission can be ordered for an extra $925. Forceful driving with the stick can hustle the little Spark through town acceptably, but the automatic is, honestly, a dog. Its first gear is tall enough that accelerating away from stoplights feels excruciatingly slow, and it doesn't seem to offer much punch at any speed or in any gear. This is one car for which the manual gearbox may be the only sensible choice.

The 2013 Chevrolet Spark LT with the manual transmission is EPA-rated at 32 mpg city, 38 mpg highway, for a combined rating of 34 mpg, comparable to the rest of the class but noticeably lower than quite a few larger compact sedans, which benefit from better aerodynamics. The automatic model comes in lower, at a combined 32 mpg (28 mpg city, 37 mpg highway). The Spark runs on regular-grade gas, though; the Fiat 500 recommends premium.

The Spark's ride is somewhat firm, but it handled broken New York City streets with aplomb, even though riders were aware of every bump, pothole, ridge, and swell. It's tall enough that passengers will definitely feel the Spark leaning in hard corners, but its modest wheels and tires worked hard to hold the road. The electric power steering offers some highway feel, and though it's hardly as entertaining to toss around as the MINI Cooper, the Spark can change lanes into spaces other cars couldn't fit into, and it is a delight to park. Our initial drive gave us no chance to test the Spark at 75-mph freeway speeds.

Inside, the Spark's seats are manually adjustable but comfortable, if a bit narrow for broad American passengers. The rear seatback is upright, but with some cooperation between front- and rear-seat riders, four adult males can occupy the Spark and travel in reasonable comfort. That is more than can be said for the MINI Cooper or Fiat 500, even if the Mitsubishi i electric car, the only other five-door hatchback in this group, offers similarly surprising rear-seat space too.

With the rear seat up, the Spark has 11.4 cubic feet of cargo space, enough to hold 10 to 12 full paper grocery bags without trouble, which grows to 31.2 cubic feet when the 60/40 split rear seat is flipped and folded down. A cargo net is available as part of the higher trim levels, as are rails to mount luggage carriers on the roof.

The 2013 Chevrolet Spark LT has been designed to meet all current and future U.S. safety requirements, and comes standard with no fewer than 10 airbags. It has not yet been rated for crash safety by either the NHTSA or the IIHS. The Spark has not only electronic stability control and anti-lock brakes, but also front seat-belt pretensioners and standard hill start assist, a good feature for young drivers. Outward visibility is good, including decent rear three-quarter visibility over the driver's shoulder for reversing (no rearview camera is available). All Sparks come standard with GM's OnStar system and six free months of service.

Unusually for a minicar, all Sparks come standard with air conditioning, power windows, a rear window wiper, and a trip computer. And Chevy reasoned that one way to make the Spark seem considerably less appliance-like
Categories: DBA Blogs

Oracle EMEA Customer Support Services Excellence Award 2014

The Oracle Instructor - Wed, 2014-09-17 13:54

The corporation announced today that I received the Customer Services Excellence Award 2014 in the category ‘Customer Champion’ for the EMEA region. It is an honor to be listed there together with such excellent professionals, whom I proudly call colleagues.

CSS Excellence Award 2014


Categories: DBA Blogs

Using the ILOM for Troubleshooting on ODA

Pythian Group - Wed, 2014-09-17 09:25

I worked on root cause analysis for a strange node reboot on a client’s Oracle Database Appliance yesterday. The case was quite interesting in that none of the logs contained any information related to the cause of the reboot. I could only see the log entries for normal activities and then – BOOM! – the start-up sequence! It looked like someone had just power cycled the node. I also observed the heartbeat timeouts followed by the node eviction on the remaining node. There was still one place I hadn’t checked, and it revealed the cause of the issue.

One of the cool things about ODA is its service processor (SP), called the Integrated Lights Out Manager (ILOM), which allows you to do many things that you’d normally do being physically located in the data center – power cycle the node, change the BIOS settings, choose boot devices, and … (the drum-roll) … see the console output from the server node! And it doesn’t only show the current console output, it keeps logging it too. Each ODA server has its own ILOM, so I found the IP address for the ILOM of the failed node and connected to it using SSH.

$ ssh pythian@oda01a-mgmt
Password:

Oracle(R) Integrated Lights Out Manager

Version 3.0.14.13.a r70764

Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.

->
-> ls

 /
    Targets:
        HOST
        STORAGE
        SYS
        SP

    Properties:

    Commands:
        cd
        show

ILOM can be browsed as if it were a directory structure. Here the “Targets” are different components of the system. When you “cd” into a target you see its sub-components, and so on. Each target can have properties, which are displayed as variable=value pairs under the “Properties” section. There is also a list of “Commands” that you can execute for the current target. The “ls” command shows the sub-targets, the properties, and the commands for the current target. Here’s how I found the console output from the failed node:

-> cd HOST
/HOST

-> ls

 /HOST
    Targets:
        console
        diag

    Properties:
        boot_device = default
        generate_host_nmi = (Cannot show property)

    Commands:
        cd
        set
        show

-> cd console
/HOST/console

-> ls

 /HOST/console
    Targets:
        history

    Properties:
        line_count = 0
        pause_count = 0
        start_from = end

    Commands:
        cd
        show
        start
        stop

-> cd history
/HOST/console/history

-> ls

The last “ls” command started printing the entire history of console output to my screen, and look what I found just before the startup sequence (I removed some lines to make this shorter; the most interesting lines are the divide error at the top and the panic at the bottom):

divide error: 0000 [#1] SMP
last sysfs file: /sys/devices/pci0000:00/0000:00:09.0/0000:1f:00.0/host7/port-7:1/expander-7:1/port-7:1:2/end_device-7:1:2/target7:0:15/7:0:15:0/timeout
CPU 3
Modules linked in: iptable_filter(U) ip_tables(U) x_tables(U) oracleacfs(P)(U) oracleadvm(P)(U) oracleoks(P)(U) mptctl(U) mptbase(U) autofs4(U) hidp(U) l2cap(U) bluetooth(U) rfkill(U) nfs(U) fscache(U) nfs_acl(U) auth_rpcgss(U) lockd(U) sunrpc(U) bonding(U) be2iscsi(U) ib_iser(U) rdma_cm(U) ib_cm(U) iw_cm(U) ib_sa(U) ib_mad(U) ib_core(U) ib_addr(U) iscsi_tcp(U) bnx2i(U) cnic(U) uio(U) dm_round_robin(U) ipv6(U) cxgb3i(U) libcxgbi(U) cxgb3(U) mdio(U) libiscsi_tcp(U) libiscsi(U) scsi_transport_iscsi(U) video(U
) output(U) sbs(U) sbshc(U) parport_pc(U) lp(U) parport(U) ipmi_si(U) ipmi_devintf(U) ipmi_msghandler(U) igb(U) ixgbe(U) joydev(U) ses(U) enclosure(U) e1000e(U) snd_seq_dummy(U) snd_seq_oss(U) snd_seq_midi_event(U) snd_seq(U) snd_seq_device(U) snd_pcm_oss(U) snd_mixer_oss(U) snd_pcm(U) snd_timer(U) snd(U) soundcore(U) snd_page_alloc(U) iTCO_wdt(U) iTCO_vendor_support(U) shpchp(U) i2c_i801(U) i2c_core(U) ioatdma(U) dca(U) pcspkr(U) dm_multipath(U) usb_storage(U) mpt2sas(U) scsi_transport_sas(U) raid_class(U)
 ahci(U) raid1(U) [last unloaded: microcode]
Pid: 29478, comm: top Tainted: P        W  2.6.32-300.11.1.el5uek #1 SUN FIRE X4370 M2 SERVER
RIP: 0010:[<ffffffff8104b3e8>]  [<ffffffff8104b3e8>] thread_group_times+0x5b/0xab
...
Kernel panic - not syncing: Fatal exception
Pid: 29478, comm: top Tainted: P      D W  2.6.32-300.11.1.el5uek #1
Call Trace:
 [<ffffffff8105797e>] panic+0xa5/0x162
 [<ffffffff8107ae09>] ? up+0x39/0x3e
 [<ffffffff810580d1>] ? release_console_sem+0x194/0x19d
 [<ffffffff8105839a>] ? console_unblank+0x6a/0x6f
 [<ffffffff8105764b>] ? print_oops_end_marker+0x23/0x25
 [<ffffffff81456ea6>] oops_end+0xb7/0xc7
 [<ffffffff8101565d>] die+0x5a/0x63
 [<ffffffff8145677c>] do_trap+0x115/0x124
 [<ffffffff81013674>] do_divide_error+0x96/0x9f
 [<ffffffff8104b3e8>] ? thread_group_times+0x5b/0xab
 [<ffffffff810dd2f8>] ? get_page_from_freelist+0x4be/0x65e
 [<ffffffff81012b1b>] divide_error+0x1b/0x20
 [<ffffffff8104b3e8>] ? thread_group_times+0x5b/0xab
 [<ffffffff8104b3d4>] ? thread_group_times+0x47/0xab
 [<ffffffff8116ee13>] ? collect_sigign_sigcatch+0x46/0x5e
 [<ffffffff8116f366>] do_task_stat+0x354/0x8c3
 [<ffffffff81238267>] ? put_dec+0xcf/0xd2
 [<ffffffff81238396>] ? number+0x12c/0x244
 [<ffffffff8107419b>] ? get_pid_task+0xe/0x19
 [<ffffffff811eac34>] ? security_task_to_inode+0x16/0x18
 [<ffffffff8116a77b>] ? task_lock+0x15/0x17
 [<ffffffff8116add1>] ? task_dumpable+0x29/0x3c
 [<ffffffff8116c1c6>] ? pid_revalidate+0x80/0x99
 [<ffffffff81135992>] ? seq_open+0x25/0xba
 [<ffffffff81135a08>] ? seq_open+0x9b/0xba
 [<ffffffff8116d147>] ? proc_single_show+0x0/0x7a
 [<ffffffff81135b2e>] ? single_open+0x8f/0xb8
 [<ffffffff8116aa0e>] ? proc_single_open+0x23/0x3b
 [<ffffffff81127cc1>] ? do_filp_open+0x4f8/0x92d
 [<ffffffff8116f8e9>] proc_tgid_stat+0x14/0x16
 [<ffffffff8116d1a6>] proc_single_show+0x5f/0x7a
 [<ffffffff81135e73>] seq_read+0x193/0x350
 [<ffffffff811ea88c>] ? security_file_permission+0x16/0x18
 [<ffffffff8111a797>] vfs_read+0xad/0x107
 [<ffffffff8111b24b>] sys_read+0x4c/0x70
 [<ffffffff81011db2>] system_call_fastpath+0x16/0x1b
Rebooting in 60 seconds..???

A quick search on My Oracle Support quickly found a match: Kernel Panic at “thread_group_times+0x5b/0xab” (Doc ID 1620097.1). The call stack and the messages are a 100% match, and the root cause is a kernel bug that’s fixed in more recent versions.
I’m not sure how I would have gotten to the root cause if this system had not been an ODA and the server had just bounced without logging the kernel panic anywhere. ODA’s ILOM definitely made the troubleshooting effort less painful, and it probably saved us from a couple more incidents caused by this bug in the future, as we were able to troubleshoot it quickly and will be able to implement the fix sooner.
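
On a side note, the console target’s command list above also includes start and stop, which attach you to the live console instead of the history. A minimal session looks roughly like this (the confirmation prompt and escape sequence may vary between ILOM firmware versions):

-> cd /HOST/console
-> start /HOST/console
Are you sure you want to start /HOST/console (y/n)? y
... live console output streams here; ESC ( returns to the ILOM prompt ...
-> stop /HOST/console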

Categories: DBA Blogs

Oracle Java Compute Cloud Service Now Available!

Today Oracle added exciting new services to our existing Public Cloud offerings. First things first: it all begins with the Oracle Compute Cloud service. It offers Elastic Compute Capacity, where...

We share our skills to maximize your revenue!
Categories: DBA Blogs

ps and top differences with HugePages

Bobby Durrett's DBA Blog - Tue, 2014-09-16 18:09

The Unix utilities ps and top report memory differently with HugePages than without.

Without HugePages ps -eF seems to include the SGA memory under the SZ column:

UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
... 
oracle    1822     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d000_orcl
oracle    1824     1  0 846155 16228  0 07:19 ?        00:00:00 ora_d001_orcl
oracle    1826     1  0 846155 16236  0 07:19 ?        00:00:00 ora_d002_orcl
oracle    1828     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d003_orcl
oracle    1830     1  0 846155 16224  0 07:19 ?        00:00:00 ora_d004_orcl
oracle    1832     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d005_orcl
oracle    1834     1  0 846155 16236  0 07:19 ?        00:00:00 ora_d006_orcl
oracle    1836     1  0 846155 16228  0 07:19 ?        00:00:00 ora_d007_orcl
oracle    1838     1  0 846155 16224  0 07:19 ?        00:00:00 ora_d008_orcl
oracle    1840     1  0 846155 16232  0 07:19 ?        00:00:00 ora_d009_orcl
oracle    1842     1  0 846155 16240  0 07:19 ?        00:00:00 ora_d010_orcl
oracle    1844     1  0 846155 16228  0 07:19 ?        00:00:00 ora_d011_orcl
...

Here SZ = 846155 kilobytes = 826 megabytes.  If you add up all the SZ values it comes to 81 gigabytes, which won't fit in my 4 gig of memory and 4 gig of swap.  It seems to include the amount of the SGA actually used, not the full 3 gigabyte max SGA size, otherwise the total would have been hundreds of gigabytes.

Doing the same exercise with 3 gigabytes of huge pages ps looks like this:

UID        PID  PPID  C    SZ   RSS PSR STIME TTY          TIME CMD
...
oracle    1809     1  0 59211 15552   0 07:52 ?        00:00:00 ora_d000_orcl
oracle    1811     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d001_orcl
oracle    1813     1  0 59211 15548   0 07:52 ?        00:00:00 ora_d002_orcl
oracle    1815     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d003_orcl
oracle    1817     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d004_orcl
oracle    1819     1  0 59211 15548   0 07:52 ?        00:00:00 ora_d005_orcl
oracle    1821     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d006_orcl
oracle    1823     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d007_orcl
oracle    1825     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d008_orcl
oracle    1827     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d009_orcl
oracle    1829     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d010_orcl
oracle    1831     1  0 59211 15544   0 07:52 ?        00:00:00 ora_d011_orcl
...

SZ = 59211 KB = 57 meg.  Total SZ = 5.89 gigabytes.  This is still bigger than total memory, but closer to the 4 gig of memory available.  It's just a guess, but I'm pretty sure that with HugePages this total no longer includes each process's share of the SGA in its SZ value, as it did without HugePages.
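
If you want to cross-check this from the kernel's side rather than through ps, /proc/meminfo keeps its own HugePages accounting.  A quick check (standard Linux field names; the explanations are my reading of them):

$ grep Huge /proc/meminfo

HugePages_Total is the number of pages reserved through vm.nr_hugepages, HugePages_Free counts pages the SGA has not yet touched, HugePages_Rsvd counts pages committed to the SGA but not yet written to, and Hugepagesize is normally 2048 kB on x86_64 Linux.  Comparing the Free and Rsvd numbers before and after instance startup shows whether the whole SGA really landed in huge pages.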

The other weird thing is how different top looks with HugePages.  Here is top with the database having just come up without HugePages:

top - 07:20:16 up 3 min,  2 users,  load average: 1.06, 0.33, 0.13
Tasks: 187 total,   1 running, 186 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.6%us,  6.3%sy,  0.0%ni, 77.8%id, 14.2%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   4050836k total,   984444k used,  3066392k free,    14460k buffers
Swap:  4095996k total,        0k used,  4095996k free,   450128k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 2010 oracle    20   0 3310m  51m  44m D  7.6  1.3   0:00.21 oracle             
 1988 oracle    20   0 3307m  50m  45m D  3.8  1.3   0:00.21 oracle             
 1794 oracle    -2   0 3303m  15m  13m S  1.9  0.4   0:01.07 oracle

Notice that we have about 3 gigabytes free – 3066392k and nothing in swap.

Here is the same system  with 3 gig of HugePages:

top - 07:53:21 up 2 min,  2 users,  load average: 0.81, 0.29, 0.11
Tasks: 179 total,   1 running, 178 sleeping,   0 stopped,   0 zombie
Cpu(s):  2.0%us,  8.6%sy,  0.0%ni, 69.2%id, 20.1%wa,  0.0%hi,  0.1%si,  0.0%st
Mem:   4050836k total,  3860100k used,   190736k free,    14332k buffers
Swap:  4095996k total,        0k used,  4095996k free,   239104k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND            
 1781 oracle    -2   0 3303m  15m  13m S  3.5  0.4   0:01.02 oracle             
    1 root      20   0 19400 1520 1220 S  0.0  0.0   0:01.43 init               
    2 root      20   0     0    0    0 S  0.0  0.0   0:00.00 kthreadd

Now only 190736k is free.  But, note that in both cases top lists the oracle processes with 3300 meg of virtual memory, which is consistent with the 3 gig max SGA.

I’ve still got a lot to learn about HugePages but I thought I would pass along these observations.  This article on Oracle’s support site helped me learn about HugePages:

HugePages on Oracle Linux 64-bit (Doc ID 361468.1)

I ended up sizing the HugePages down to 2 gig on my 4 gig test system and reducing sga max size to 2 gig as well.  My system was sluggish with so little free memory when I was using a 3 gig SGA and HugePages.  It was much snappier with only 2 gig tied up in HugePages and dedicated to the SGA, leaving 2 gig for everything else.
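
For reference, the arithmetic behind that sizing is simple: with the usual 2 MB huge page size on 64 bit Linux, a 2 gig SGA needs 2048 MB / 2 MB = 1024 pages.  Here is a sketch of the two settings involved, using the standard Linux parameter names and values matching my 2 gig setup:

# /etc/sysctl.conf -- reserve 1024 x 2 MB huge pages = 2 gig for the SGA
vm.nr_hugepages = 1024

# /etc/security/limits.conf -- allow the oracle user to lock that much
# memory (value is in KB)
oracle soft memlock 2097152
oracle hard memlock 2097152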

This was all done with Oracle’s 64 bit version of Linux and 11.2.0.3 database.

– Bobby

Categories: DBA Blogs

Top 7 Reasons Why Oracle Conferences Are A Waste Of Time

Want to turn a lame Oracle Database conference experience into a great one? You may not believe this, but I think Oracle conferences are a waste of time. That is, unless I take action. I've been to hundreds of Oracle conferences, so I'm kind of an expert in this field.

Here is my "Top 7" list about why Oracle conferences are time suckers AND how you can turn them into a GREAT educational, social and networking experience.

Number 7. Sleeper Presentations. You forgot to read that session abstract before you sat down? You're not alone! Here's my secret: Sit by the door. But when you walk out, remember that the speaker probably knows how you feel but is doing the best they can. Out of respect for them and the poor souls who are actually enjoying the session, be quiet when you leave.

Number 6. My System Is Bigger Than Your System. How many times have we all heard someone talking about their, "50 node RAC global system with 5 million active users." Really? Is that even possible? Here are four proven options. Option 1 is to ask a question to shut them up. For example, "So how do you deal with the enqueue contention?" Option 2 is to simply walk away. If you feel that's rude, then Option 3 is to suddenly grab your phone to answer "that" call and walk away...never to return. If you're feeling feisty then Option 4, combining Options 1 and 3, is wonderful. Ask the question "So how do you deal with upgrading 50 nodes?" AND THEN suddenly grab your phone and walk away.

Number 5. Not Another New Feature Presentation! Oracle Corporation and their marketing minions love to talk about futures. If you're depressed from hearing about new features you won't get to touch for the next five years, here's what to do. First, remember that Oracle Corporation's sponsorship is important to user group conferences. Without the help from Oracle, most conferences would not occur. Second, the more Oracle sponsors an event, the more they influence the content. The money source always drives the content. The solution is to look at the number of presentations given by non-Oracle employees versus Oracle employees... before you register.

Number 4. Can't Afford It Unless Oracle Sales Pays. Yes you can! I know training and traveling budgets have been cut. What you need is a free pass, right? I figured this one out while still working for Oracle. Submit an abstract to speak. If speaking freaks you out, click HERE. If you won't speak, then volunteer. Believe me when I say, EVERY CONFERENCE NEEDS VOLUNTEERS.

Number 3. Boring Conference. I totally get that. Want to know the secret about boring conferences? Avoid them...unless you take action. My conference of choice is Collaborate/IOUG. I love that conference... because I'm really involved, have been for years and look forward to working with the conference team and seeing them each year. It's kind of like family to me.

Number 2. I Feel Like A Dork. Because I speak a lot, people don't realize that much of the time I feel like a social idiot. Strange, but put me on stage or in a mentoring situation and I'm fine. It's taken me 20 years of conferences to figure this one out. Here's what to do: Go with a colleague or volunteer so you'll always have someone to hang out with. If that doesn't work and you're a guy, look for the stupid games because that's where you'll find DBAs just like you (and me).

Number 1. Stupid Drunks. I get so tired of drunk people yelling in my face! Especially when I can feel the spit from their mouths poke me in the eye. It only took a few years to figure this one out. The solution? Step back a couple of feet. And if that doesn't work, then walk away. Don't worry about offending them, because they won't remember you anyways.

Make It A Great Conference!

There you have it, my "Top 7" list about why Oracle conferences are time suckers AND how you can turn them into a GREAT educational, social and networking experience.

See you at IOUG this April in Las Vegas!

All the best in your Oracle performance work,

Craig.
Categories: DBA Blogs

Benchmark: TokuDB vs. MariaDB / MySQL InnoDB Compression

Pythian Group - Mon, 2014-09-15 09:55

As the amount of data companies are interested in collecting grows, life becomes all the more difficult for IT staff at all levels within an organization. SAS enterprise storage devices that were once considered giants are now being phased out in favor of SSD arrays with features such as de-duplication; tape storage has pretty much been abandoned; and the same evolution applies to database engines.

For many customers, just storing data is not enough because of the CAPEX and OPEX involved; smarter ways of storing the same data are required, especially since databases generally account for the greatest portion of storage requirements across an application stack. Lately they are used not only for storing data but, in many cases, for storing logs as well. IT managers, developers and system administrators very often turn to the DBA and pose the age-old question “is there a way we can cut down on the space the database is taking up?” and this question seems to be asked all the more frequently as time goes by.

This is a dilemma that cannot easily be solved by a MySQL DBA. What would the best way to resolve this issue be? Should I cut down on binary logging? Hmm… I need the binary logs in case I need to track down the transactions that have been executed and perform point-in-time recovery. Perhaps I should have a look at archiving data to disk and then compressing it using tar and gzip? Heck, if I do that I’ll have to manage and track multiple files and perform countless imports to re-generate the dataset when a report is needed from historical data. Maybe, just maybe, I should look into compressing the data files? That way I can keep all my data, and I can just take advantage of a few extra CPU cycles to keep my data at a reasonable size. This seems like a good idea – or does it?

Inspired by this age-old dilemma, I decided to take the latest version of TokuDB for a test run and compare it to InnoDB compression, which has been around a while. Both technologies promise a great reduction in disk usage and even performance benefits – naturally, if data resides on a smaller portion of the disk, access time and seek time will decrease, though this is less applicable to the SSD disks that are generally used in the industry today. So I put together a test system using an HP ProLiant Gen8 server with 4x Intel® Xeon® E3 processors, 4GB ECC RAM and a Samsung EVO SATA III SSD rated at 6Gb/s, and installed the latest version of Ubuntu 14.04 to run some benchmarks. I used the standard innodb-heavy configuration from the support-files directory, adding one change – innodb_file_per_table = ON. The reason for this is that TokuDB will not compress the shared tablespace, hence this would affect the results of the benchmarks. To be objective I ran the benchmarks both on MySQL and MariaDB using 5.5.38, which is the latest version bundled with TokuDB.
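
For anyone who wants to reproduce the schema side of this setup, the table options involved look roughly as follows. This is only a sketch (the table itself is a made-up example), but it shows the two compression mechanisms being compared; on 5.5, compressed InnoDB tables also require innodb_file_format = Barracuda along with innodb_file_per_table:

-- InnoDB compressed table (8 KB compressed page size)
CREATE TABLE orders_innodb_comp (
  id INT PRIMARY KEY,
  details VARCHAR(100)
) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

-- TokuDB table with its default zlib-based compression; row formats
-- such as TOKUDB_LZMA compress harder at the cost of more CPU
CREATE TABLE orders_tokudb (
  id INT PRIMARY KEY,
  details VARCHAR(100)
) ENGINE=TokuDB;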

The databases were benchmarked for speed and also for the space consumed by the tpcc-mysql dataset, generated with 20 warehouses. So let's first have a look at how much space was needed by TokuDB vs. InnoDB (using both compressed and uncompressed tables):

 

Configuration                 GB
TokuDB                        2.7
InnoDB Compressed Tables      4.2
InnoDB Regular Tables         4.8

 

TokuDB was a clear winner here; of course, the space savings depend on the type of data stored in the database, but with the same dataset TokuDB is in the lead. Seeing such a gain in storage requirements will naturally make you wonder how much overhead is incurred in reading and writing this data, so let's have a look at the “tpm-C” to understand how many orders can be processed per minute on each. Here I have also included results for MariaDB vs. MySQL. The first graph shows the number of orders that were processed per 10-second interval, and the second graph shows the total “tpm-C” after the tests were run for 120 seconds:

 


Figure 1 – Orders processed @ 10 sec interval

 

Interval     (1)     (2)     (3)     (4)     (5)     (6)
10          5300     529    5140    5667      83    5477
20          5743     590    5112    5513     767    5935
30          5322     596    4784    5267     792    5931
40          4536     616    4215    5627     774    6107
50          5206     724    5472    5770     489    6020
60          5827     584    5527    5956     402    6211
70          5588     464    5450    6061     761    5999
80          5679     424    5474    5775     789    6029
90          5759     649    5490    6258     788    5998
100         5288     611    5584    6044     765    6026
110         4637     575    4948    5753     720    5314
120         3696     512    4459     930     472     292

(1) MariaDB 5.5.38   (2) MariaDB 5.5.38 InnoDB Compressed   (3) TokuDB on MariaDB 5.5.38
(4) MySQL 5.5.38     (5) MySQL 5.5.38 InnoDB Compressed     (6) TokuDB on MySQL 5.5.38

Figure 2 – “tpm-C” for the 120-second test run

MySQL Edition                                “tpm-C”
TokuDB on MySQL 5.5.38                       32669.5
MySQL 5.5.38                                 32310.5
MariaDB 5.5.38                               31290.5
TokuDB on MariaDB 5.5.38                     30827.5
MySQL 5.5.38 InnoDB Compressed Tables         4151
MariaDB 5.5.38 InnoDB Compressed Tables       3437

 

Surprisingly enough, however, the InnoDB table compression results were very low – perhaps compression would have looked better relative to the others on traditional rotating SAS / SATA disks. The impact on performance was incredibly high and the savings on disk space were marginal compared to those of TokuDB, so once again it seems we have a clear winner! TokuDB on MySQL outperformed both MySQL and MariaDB with uncompressed tables. The findings are interesting because in previous benchmarks of older versions of MariaDB and MySQL, MariaDB would generally outperform MySQL; however, there are many factors that should be considered.

These tests were performed on Ubuntu 14.04, while the previous tests I mentioned were performed on CentOS 6.5, and the hardware was also slightly different (Corsair SSD 128GB vs. Samsung EVO 256GB). Please keep in mind these benchmarks reflect the performance of a specific configuration, and there are many factors that should be considered when choosing the MySQL / MariaDB edition to use in production.

As per this benchmark, the results for TokuDB were nothing less than impressive and it will be very interesting to see the results on the newer versions of MySQL (5.6) and MariaDB (10)!

Categories: DBA Blogs

Change unknown SYSMAN password on #EM12c

DBASolved - Fri, 2014-09-12 17:52

When I start work on a new EM 12c environment, I would normally request to have a userid created; however, I don’t have a userid in this environment and I need to access EM 12c as SYSMAN.  Without knowing the password for SYSMAN, how can I access the EM 12c interface?  The short answer is that I can change the SYSMAN password from the OS where EM 12c is running.

Note:
Before changing the SYSMAN password for EM 12c, make sure to understand the following:

  1. SYSMAN is used by the OMS to log in to the OMR to store and query all activity
  2. The SYSMAN password has to be changed at both the OMS and the OMR for EM 12c to work correctly
  3. Do not modify the SYSMAN or any other repository user directly at the OMR level (not recommended)

The steps to change an unknown SYSMAN password are as follows:

Tip: Make sure you know what the SYS password is for the OMR.  It will be needed to reset SYSMAN.

1. Stop all OMS processes

cd <oms home>/bin
emctl stop oms 

Image 1:
sysman_pwd_stop_oms.png

2. Change the SYSMAN password

cd <oms home>/bin
emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd <sys password> -new_pwd <new sysman password>

In Image 2, notice that I didn’t pass the password for SYS or SYSMAN on the command line.  EMCTL will ask you to provide the password if you don’t put it on the command line.

Image 2:
sysman_pwd_change_pwd.png

3. Stop the Admin Server on the primary OMS and restart OMS

cd <oms home>/bin
emctl stop oms -all
emctl start oms

Image 3:
sysman_pwd_start_oms.png

4. Verify that all of OMS is up and running

cd <oms home>/bin
emctl status oms -details

Image 4:

sysman_pwd_oms_status.png

After verifying that the OMS is back up, I can now try to log in to the OMS interface.

Image 5:
sysman_pwd_oem_access.png

As we can see, I’m able to access OEM as SYSMAN now with the new SYSMAN password.
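
As an extra sanity check, the new password can also be verified directly against the repository database.  This only logs in as SYSMAN; it does not modify anything at the OMR level (the connect string here is a placeholder):

sqlplus sysman@<omr connect string>

Supply the new SYSMAN password at the prompt and run a trivial query such as "select sysdate from dual;" to confirm the login works.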

Enjoy!!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

Watch: 5 Best Practices for Launching Your Online Video Game

Pythian Group - Fri, 2014-09-12 07:24

Warner Chaves, Principal Consultant at Pythian, has had the privilege of working with several companies on their video game launches, and is best known for his work with the highly anticipated release of an action-adventure video game back in 2013. Through his experience, he’s developed a set of best practices for launching an online video game.

“You don’t want to have angry gamers on the launch of the game because they lost progress in the game,” he says. “Usually at launch, you will have really high peaks of volume, and there might be some pieces of the infrastructure that are not as prepared for that kind of load. There also might be some parts of the game that are actually more popular than what you expected.”

Watch his latest video below, 5 Best Practices for Launching Your Online Video Game.

Categories: DBA Blogs

Log Buffer #388, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-09-12 07:22

Blogs are indispensable these days for expanding your knowledge of database features of any kind. Whether it's Oracle, MySQL, or SQL Server, blog writers are contributing like never before, and this Log Buffer edition skims some of it.

Oracle:

The Oracle Utilities family of products use Oracle standard technology such as the Oracle Database and Oracle Fusion Middleware (a.k.a. Oracle WebLogic).

OBIEE SampleApp in The Cloud: Importing VirtualBox Machines to AWS EC2.

The default value for the INMEMORY_MAX_POPULATE_SERVERS parameter is derived from the PGA_AGGREGATE_LIMIT parameter.

Most customers of Oracle Enterprise Manager using JVM Diagnostics use the tool to monitor their Java Applications servers like Weblogic, Websphere, Tomcat, etc.

Taking Enterprise File Exchange to the Next Level with Oracle Managed File Transfer 12c.

SQL Server:

The concept of a synonym was introduced in SQL Server 2005. Synonyms are very simple database objects, but have the potential to save a lot of time and work if implemented with a little bit of thought.

This article summarizes the factors to consider and provides an overview of various options for HA and DR in cloud-based SQL Server deployments.

Chris Date is famous for his writings on relational theory. Chris took on the role of communicating and teaching Codd’s relational theory, and reluctantly admits to a role in establishing SQL as the dominant relational language.

Introduction of how to design a star schema dimensional model for new BI developers.

Have you ever wondered why the transaction log file grows bigger and bigger? What caused it to happen? How do you control it? How does the recovery model of a database control the growing size of the transaction log? Read on to learn the answers.

MySQL:

A common migration path from standalone MySQL/Percona Server to a Percona XtraDB Cluster (PXC) environment involves some measure of time where one node in the new cluster has been configured as a slave of the production master that the cluster is slated to replace.

How to shrink the ibdata file by transporting tables with Trite.

OpenStack users shed light on Percona XtraDB Cluster deadlock issues.

There are a lot of tools that generate test data.  Many of them have complex XML scripts or GUI interfaces that let you identify characteristics about the data. For testing query performance and many other applications, however, a simple quick and dirty data generator which can be constructed at the MySQL command line is useful.

How to calculate the correct size of Percona XtraDB Cluster’s gcache.

Categories: DBA Blogs

Virtual Circuit Wait

Bobby Durrett's DBA Blog - Thu, 2014-09-11 15:28

On Monday we had some performance problems on a system that includes a database which uses shared servers.  The top wait was “virtual circuit wait”.  Here are the top 5 events for a 52 minute time frame:

Top 5 Timed Foreground Events

Event                          Waits      Time(s)  Avg wait (ms)  % DB time  Wait Class
virtual circuit wait           1,388,199   17,917             13      50.98  Network
db file sequential read        1,186,933    9,252              8      26.33  User I/O
log file sync                  1,185,620    6,429              5      18.29  Commit
DB CPU                                      5,964                     16.97
enq: TX – row lock contention        391      586           1499       1.67  Application

From other monitoring tools there was no sign of poor performance from the database end but virtual circuit wait is not normally the top wait during peak times.  Overall for the time period of this AWR report the shared servers didn’t seem busy:

Shared Servers Utilization

Total Server Time (s)  %Busy  %Idle  Incoming Net %  Outgoing Net %
              111,963  38.49  61.51           15.99            0.01

We have seen virtual circuit waits ever since we upgraded to 11g on this system so I wanted to learn more about what causes it.  These two Oracle support documents were the most helpful:

Troubleshooting: Virtual Circuit Waits (Doc ID 1415999.1)

Bug 5689608: INACTIVE SESSION IS NOT RELEASING SHARED SERVER PROCESS (closed as not bug)

Evidently, when you return a cursor from a package and the cursor includes a sort step, a shared server will be hung up in a virtual circuit wait state from the time the cursor is first fetched until the application closes the cursor.  Our application uses cursors in this way, so it stands to reason that the virtual circuit wait times we saw in our AWR report represent the time it took for our web servers to fetch from the cursors and close them, at least for the cursors that included sort steps.  So, if our web servers slow down due to some other issue, they could potentially take longer to fetch from and close the affected cursors, and this could result in higher virtual circuit wait times.
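
One way to see these waits in real time, rather than after the fact in an AWR report, is to look for sessions currently sitting in the wait.  Here is a simple query against standard v$session columns (each row it returns is a circuit currently parked on a shared server):

select sid, username, event, state, seconds_in_wait
from v$session
where event = 'virtual circuit wait';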

Here is a zip of a test script I ran and its output: zip

I took the test case documented in bug 5689608 and added queries to v$session_event to show the current session’s virtual circuit waits.

Here are the first steps of the test case:

CREATE TABLE TEST AS SELECT * FROM DBA_OBJECTS; 
     
create or replace package cursor_package as
cursor mycursor is select * from test order by object_name;
end;
/
       
begin
 open cursor_package.mycursor;
end;
/
 
create or replace procedure test_case is
l_row TEST%rowtype;
begin
if cursor_package.mycursor%isopen then
fetch cursor_package.mycursor into l_row;
end if;
end;
/

These steps do the following:

  1. Create a test table
  2. Create a package with a cursor that includes an order by to force a sort
  3. Open the cursor
  4. Create a procedure to fetch the first row from the cursor

At this point I queried v$session_event and found no waits:

SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID'))
  4  and event='virtual circuit wait';

no rows selected

The next step of the test case fetched the first row and then I queried and found the first wait:

SQL> exec test_case;

SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID'))
  4  and event='virtual circuit wait';

       SID EVENT                          TIME_WAITED
---------- --------------------------------------------------------
       783 virtual circuit wait           0

Note that time_waited is 0 which means the time was less than one hundredth of a second.  Next I made my sqlplus client sleep for five seconds using a host command and looked at the wait again:

SQL> host sleep 5

SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID'))
  4  and event='virtual circuit wait';

       SID EVENT                             TIME_WAITED
---------- --------------------------------------------------------
       783 virtual circuit wait              507

Total time is now 507 centiseconds = 5 seconds, same as the sleep time.  So, the time for the virtual circuit wait includes the time after the client does the first fetch, even if the client is idle.  Next I closed the cursor and slept another 5 seconds:

SQL> begin
  2   close cursor_package.mycursor;
  3  end;
  4  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> 
SQL> host sleep 5

SQL> 
SQL> select * from v$session_event
  2  where sid=
  3  (SELECT sid from v$session where audsid=USERENV('SESSIONID'))
  4  and event='virtual circuit wait';

       SID EVENT                                 TIME_WAITED
---------- --------------------------------------------------------
       783 virtual circuit wait                  509

The time waited is still just about 5 seconds so the clock stops on the virtual circuit wait after the sqlplus script closes the cursor.  If the session was still waiting on virtual circuit wait after the close of the cursor the time would have been 10 seconds.

This was all new to me.  Even though we have plenty of shared servers to handle the active sessions we still see virtual circuit waits.  These waits correspond to time on the clients fetching from and closing cursors from called packages.  As a result, these wait times represent time outside of the database and not time spent within the database.  These waits tie up shared servers but as long as they are short enough and you have shared servers free they don’t represent a problem.
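
A related check I find useful is to look at the shared servers themselves.  V$SHARED_SERVER reports each server's cumulative busy and idle time in hundredths of a second, so a query like this (the percentage calculation is just my own convenience) shows whether the pool itself is under pressure:

select name, status, requests, busy, idle,
       round(busy / nullif(busy + idle, 0) * 100, 2) pct_busy
from v$shared_server;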

– Bobby

p.s. This is on hp-ux 11.31 ia64, Oracle 11.2.0.3

Categories: DBA Blogs

2002 Honda Passport timing belt replacement

Ameed Taylor - Wed, 2014-09-10 19:14
The Honda Passport was a sport-utility vehicle sold by the Japanese maker from 1994 through 2002. It was replaced in 2003 by the Honda Pilot, a crossover utility vehicle that shared some of the underpinnings of the Honda Odyssey minivan. Unlike the Pilot, which followed the lead of the Toyota Highlander in putting a mid-size crossover body on the underpinnings of what was essentially a car, the Passport was built on a rear-wheel-drive truck chassis with all-wheel drive as an option. The ride quality and handling reflected its truck origins, so the Pilot was a striking step forward when it replaced the Passport.

The Passport was actually a re-badged Isuzu Rodeo, a truck-based SUV built in Indiana at a plant that Subaru and Isuzu shared at the time. The first-generation Passport, sold from 1994 through 1997, offered a choice of a 120-horsepower 2.6-liter four-cylinder engine paired with a five-speed manual gearbox, or a 175-hp 3.2-liter V-6 with an available four-speed automatic transmission. Rear-wheel drive was standard, and all-wheel drive could be ordered as an option. Trim levels were base and EX.

In 1998, a second-generation Passport was introduced. It was still based on a truck chassis, but it came with more comfort and safety features than the earlier version and was considerably more refined. The four-door sport-utility vehicle came standard with a 205-hp 3.2-liter V-6, matched with a five-speed manual gearbox on base versions, though a four-speed automatic transmission was also available.

The second Passport was offered in two trim levels: the LX could be ordered with the five-speed manual, with four-wheel drive as an option, and the more upscale EX came with the four-speed automatic, again with either drive option. While the spare tire on the base LX was mounted on a swinging bracket on the tailgate, the EX relocated it to a carrier beneath the cargo area. For the 2000 model year, the Honda Passport received a handful of updates, including optional 16-inch wheels on the LX and available two-tone paint treatments.

When considering the Passport as a used car, buyers should know that the 1998-2002 models were recalled in October 2010 because of frame corrosion in the general area where the rear suspension was mounted. Any vehicles without visible corrosion were treated with a rust-resistant compound, but reinforcement brackets were to be installed in those with more severe rust. In some cases, the damage was so severe that Honda simply repurchased the vehicles from their owners. Used-car shoppers looking at Passports should be sure to find out whether the car had been through the recall, and what, if anything, was done.

Categories: DBA Blogs

Index Growing Larger Than The Table

Hemant K Chitale - Wed, 2014-09-10 08:52
Here is a very simple demonstration of a case where an Index can grow larger than the table.  This happens because the pattern of data deleted and inserted doesn't allow deleted entries to be reused.  For every 10 rows that are inserted, 7 rows are subsequently deleted after their status is changed to "Processed".  But the space for the deleted entries from the index cannot be reused.

SQL>
SQL>REM Demo Index growth larger than table !
SQL>
SQL>drop table hkc_process_list purge;

Table dropped.

SQL>
SQL>create table hkc_process_list
2 (transaction_id number,
3 status_flag varchar2(1),
4 last_update_date date,
5 transaction_type number,
6 details varchar2(25))
7 /

Table created.

SQL>
SQL>create index hkc_process_list_ndx
2 on hkc_process_list
3 (transaction_id, status_flag)
4 /

Index created.

SQL>
SQL>
SQL>REM Cycle 1 -------------------------------------
> -- create first 1000 transactions
SQL>insert into hkc_process_list
2 select rownum, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 3
Table HKC_PROCESS_LIST 5

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>REM Cycle 2 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
2 select rownum+1000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 7
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Cycle 3 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
2 select rownum+2000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 11
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Cycle 4 -------------------------------------
> -- insert another 1000 rows
SQL>insert into hkc_process_list
2 select rownum+3000, 'N', sysdate, mod(rownum,4)+1, dbms_random.string('X',10)
3 from dual
4 connect by level < 1001
5 /

1000 rows created.

SQL>commit;

Commit complete.

SQL>
SQL>-- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 15
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>-- change status flag for 70% of the transactions to 'P'
SQL>update hkc_process_list
2 set status_flag='P'
3 where mod(transaction_id,10) < 7
4 /

700 rows updated.

SQL>commit;

Commit complete.

SQL>
SQL>-- delete processed rows
SQL>delete hkc_process_list
2 where status_flag='P'
3 /

700 rows deleted.

SQL>commit;

Commit complete.

SQL>
SQL>
SQL>REM Latest State size -------------------------
> -- get sizes of table and index
SQL>exec dbms_stats.gather_table_stats('','HKC_PROCESS_LIST',estimate_percent=>100,cascade=>TRUE);

PL/SQL procedure successfully completed.

SQL>select 'Table' Obj_Type, table_name, blocks Blocks
2 from user_tables
3 where table_name like 'HKC_PROCE%'
4 union
5 select 'Index', index_name, leaf_blocks
6 from user_indexes
7 where index_name like 'HKC_PROCE%'
8 order by 1
9 /

OBJ_T TABLE_NAME BLOCKS
----- ------------------------------ ----------
Index HKC_PROCESS_LIST_NDX 17
Table HKC_PROCESS_LIST 13

2 rows selected.

SQL>
SQL>

Note how the Index grew from 3 blocks to 17 blocks, larger than the table that grew to 13 and seemed to have reached a "steady-state" at 13 blocks.

The Index is built on only 2 of the 5 columns of the table and these two columns are also "narrow" in that they are a number and a single character.  Yet it grows faster through the INSERT - DELETE - INSERT cycles.

Note the difference between the Index definition (built on TRANSACTION_ID as the leading column) and the pattern of DELETEs (which is on STATUS_FLAG).

Deleted rows leave "holes" in the index but these are entries that cannot be reused by subsequent
Inserts.  The Index is ordered on TRANSACTION_ID.  So if an Index entry for TRANSACTION_ID = n is deleted, the entry can be reused only for the same (or very close) TRANSACTION_ID.

Assume that an Index Leaf Block contains entries for TRANSACTION_IDs 1, 2, 3, 4 and so on upto 10.  If rows for TRANSACTION_IDs 2,3,5,6,8 and 9 are deleted but 1,4,7 and 10  are not deleted then the Leaf Block has "free" space for new rows only with TRANSACTION_IDs 2,3,5,6,8 and 9.  New rows with TRANSACTION_IDs 11 and above will take a new Index Leaf Block and not re-use the "free" space in the first Index Leaf Block.  The first Leaf Block remains with deleted entries that are not reused.
On the other hand, when rows are deleted from a Table Block, new rows can be reinserted into the same Table Block.  The Table is Heap Organised, not Ordered like the Index.  Therefore, new rows are permitted to be inserted into any Block(s) that have space for them -- e.g. blocks from which rows have been deleted.  Therefore, after deleting TRANSACTION_IDs 2,3,5,6 from a Table Block, new TRANSACTION_IDs 11,12,13,14 can be re-inserted into the *same* Block.
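
If you suspect that an Index in your own system suffers from this pattern, one standard way to measure it (not shown in the demo above) is ANALYZE ... VALIDATE STRUCTURE, which populates the session-private INDEX_STATS view.  Note that it takes a DML lock on the table while it runs.  A COALESCE then merges adjacent, sparsely-populated Leaf Blocks without the cost of a full rebuild:

ANALYZE INDEX hkc_process_list_ndx VALIDATE STRUCTURE;

SELECT lf_rows, del_lf_rows,
       ROUND(del_lf_rows*100/lf_rows,2) pct_deleted
FROM index_stats;

ALTER INDEX hkc_process_list_ndx COALESCE;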

.
.
.
Categories: DBA Blogs

Getting Started with Windows VDI by Andrew Fryer

Surachart Opun - Wed, 2014-09-10 05:55
Virtual desktop infrastructure (VDI) is the practice of hosting a desktop operating system within a virtual machine (VM) running on a centralized server. VDI is a variation on the client/server computing model, sometimes referred to as server-based computing.
VDI is a new technology that offers many benefits:
• Efficient use of CPU and memory resources
• Reduced desktop downtime and increased availability
• Patches and upgrades performed in data center
• New users can be up and running quickly
• Data and applications reside in secure data centers
• Centralized management reduces operational expenses
Reference
Additionally, VDI can be deployed with Microsoft Windows, and I suggest learning What’s New in VDI for Windows Server 2012 R2 and 8.1
Anyway, that was a long preamble before getting to a book written by Andrew Fryer: Getting Started with Windows VDI. The book guides readers through building VDI with Windows Server 2012 R2 and 8.1 quickly, and each chapter is easy to follow.

What Readers Will Learn:
  • Explore the various server roles and features that provide Microsoft's VDI solution
  • Virtualize desktops and the other infrastructure servers required for VDI using server virtualization in Windows Server Hyper-V
  • Build high availability clusters for VDI with techniques such as failover clustering and load balancing
  • Provide secure VDI to remote users over the Internet
  • Use Microsoft's Deployment Toolkit and Windows Server Update Services to automate the creation and maintenance of virtual desktops
  • Carry out performance tuning and monitoring
  • Understand the complexities of VDI licensing irrespective of the VDI solution you have opted for
  • Deploy PowerShell to automate all of the above techniques

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Partner Webcast – Managing Exadata with Oracle Enterprise Manager 12c

Oracle Enterprise Manager 12c is system management software that delivers centralized monitoring, administration, and life cycle management functionality for the complete Oracle IT infrastructure,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

2009 Honda S2000 Ultimate Edition for sale

Ameed Taylor - Tue, 2014-09-09 18:40
Drive the S2000 gently and you probably won't be pleased with the buzzy powertrain and busy ride. Tuned to perform on tight circuits, the S2000 can feel stiff and jittery on open roads. Wind out the engine and push its limits in corners, though, and you're in for a completely different, grin-inducing experience; that is what the Honda S2000 is all about.

Mazda's Miata feels almost large in comparison to the S2000. The cockpit is cramped no matter how small the occupants. The high shoulders of the S2000 hem in the driver and passenger, and the steering wheel sits low even at its highest adjustment point. Unusually for Honda, the controls aren't laid out neatly (there's not a lot of dash space to do so), and the big red Start button seems more like a gimmick. There's a lot of black plastic, too, in the name of saving weight.

The 2009 Honda S2000 is one of the least practical mass-production cars on the planet. There's practically no interior or trunk storage, the cockpit's more cramped than the coach seats on a Boeing 757, and it's priced above $30,000. It is a classic roadster sports car with rear-wheel drive, a ragtop to open on sunny days, a six-speed manual transmission, and a rev-happy four-cylinder engine.
Last year Honda introduced the S2000 CR, the club-racer version of the standard S2000. The CR gets a full-body aerodynamic kit, high-performance Bridgestone tires, firmer suspension settings, a thicker anti-roll bar, and new wheels. A lightweight aluminum hardtop that cuts weight by around 90 pounds replaces the soft-top mechanism. Inside, the CR gets different fabric seats with yellow stitching, a new aluminum shifter knob, and carbon-fiber-look trim panels.

Standard equipment on the 2009 Honda S2000 includes electronic stability control and anti-lock brakes; however, side airbags, a feature now found on almost all new vehicles, aren't available.

Although the 2009 Honda S2000 has a dated design, the base edition stands out for its spectacular mix of style and performance, regardless of the heavy-handed additions on the CR.

Cars.com reports that other exterior highlights include the standard "high-intensity-discharge headlamps and 17-inch alloy wheels" on the 2009 Honda S2000. Edmunds lodges the most prominent criticism of the exterior of the 2009 Honda S2000, noting that while the new aerodynamic pieces on the CR "reduce high-speed aerodynamic lift by about 70 percent," they also "reduce the car's overall visual appeal by, oh, 79 percent." Reviews read by ebizsig.blogspot.com show that the exterior styling of the 2009 Honda S2000 is a big success, and Kelley Blue Book says the Honda S2000 "strikes a very un-Honda-like, slightly wicked pose" that can "resemble an angry cobra about to strike."
Kelley Blue Book notes that "CR models include an aerodynamic body kit," together with "lift-reducing front and rear spoilers and a removable aluminum hard top in place of the traditional cloth" top on the standard Honda S2000.
According to the reviewers at Edmunds, the "2009 Honda S2000 is a compact two-seat roadster that's offered in two trims: standard and CR." Both trims share the same general shape, which Cars.com calls a "wedge-shaped profile that stands apart from other roadsters."

ConsumerGuide approves of the interior layout of the 2009 Honda S2000, claiming that the "S2000 has a snug cockpit, so everything is close at hand," and while the "digital bar-graph tachometer and digital speedometer are not the sports-car norm," they are "easy to read." Edmunds chimes in, noting that "just about all the controls you'll ever need are set up within a finger's extension of the steering wheel." One of the cooler interior features to find its way into a production car is the new peak-power indicator on the 2009 Honda S2000 CR, a feature that Cars.com says will flash "a green light when peak power is reached." Kelley Blue Book gushes that the 2009 Honda S2000's "interior is full of pleasant surprises," including a "giant red start button on the dash" and "the long center console [that] sits up high, affording you the perfect perch on which to rest your arm."
The 2009 Honda S2000 enjoys better handling thanks to a quicker steering ratio and new tires, and the CR variant is a track-ready contender that can hold its own against more expensive European and American competition.

The EPA estimates that the 2009 Honda S2000, whether in standard or CR form, will get 18 mpg in the city and 25 on the highway. Most cars as powerful as the 2009 Honda S2000 pay a big penalty at the gas pump, but the small engine combined with lightweight construction makes the Honda S2000 a comparatively frugal performance machine.

Reviews read by ebizsig.blogspot.com show that the engine is happiest when running flat-out. Cars.com notes that "once it reaches 5,000 rpm or so, the S2000 lunges ahead like a rocket," and Edmunds adds that "piloting the 2009 Honda S2000 takes some getting used to, since peak power is delivered at nearly 8,000 rpm." ConsumerGuide reviewers love the engine and find the Honda S2000 "offers a surprising supply of usable power across a wide rpm range, mixed with ultrahigh-revving excitement." Although two different versions of the 2009 Honda S2000 are available, Edmunds reports that the only engine offered is a "2.2-liter four-cylinder that churns out 237 hp at a lofty 7,800 rpm and 162 pound-feet of torque at 6,800 rpm." Honda has tuned the engine on the Honda S2000 almost to the breaking point, with Car and Driver commenting that "the S2000's 2.2-liter four is basically maxed out."
Reviews read by ebizsig.blogspot.com also compliment the S2000's transmission for its smooth shifts and short throws. Kelley Blue Book claims that the engine and transmission combination makes for "startlingly quick performance," while the chassis adds "outstanding nimbleness" to the 2009 Honda S2000 package. Cars.com states that the four-cylinder engine on the Honda S2000 "mates with a six-speed manual transmission" that ConsumerGuide says delivers "manageable clutch action" and a "slick, quick-throw gearbox."

As good as the engine/transmission combination is, handling remains a hallmark of the 2009 S2000. Cars.com holds nothing back in praising the "razor-sharp steering, disciplined handling and athletic cornering ability" of the 2009 Honda S2000. Kelley Blue Book reviewers rave about the "nearly flat cornering behavior and extremely crisp response that allows" the 2009 Honda S2000 "to negotiate the corners with sure-footed tenacity." The Club Racer is even more impressive, with Car and Driver reporting it "is simply harder and sharper, with less body roll and tire scrubbing and more corner composure and stability under braking." Sadly, the price for all that performance is a punishing ride, and ConsumerGuide points out that "nearly every small bump and tar strip registers through the seats." On the positive side, ConsumerGuide also comments that "braking is swift and easily modulated" whether you are driving on the street or the track.
Categories: DBA Blogs

Adding additional agents to OEM12c

DBASolved - Mon, 2014-09-08 07:52

One question I get asked a lot is, "How can I add additional agent software to OEM 12c?"  The answer is pretty easy: just download it and apply it to the Software Library.  Now what does that mean?  In this post, I'll explain how to download additional agents for later deployment to other platforms.

After logging into OEM 12c, go to Setup -> Extensibility -> Self Update (Image 1).

Image 1:

SelfUpdate_Menu.png

Once on the Self Update page (Image 2), there are a few things to notice.  The first is that under Status, the Connection Mode is Online.  This is an indicator that OEM has been configured and connected to My Oracle Support (MOS).  The Status area also shows the time of the last refresh, the last download time, and the last download type.  Right under the Status section there is a menu bar with actions that can be performed on this page.  Clicking the Check Updates button will check for new updates across all the Types listed.  Since we want to focus on agents, click on the Agent Software folder.

Image 2:

SelfUpdate_Page.png

Clicking on the Agent Software folder takes us to the Agent Software Updates page for Self Update (Image 3).  This page makes it clear that there is a lot of agent software available.  It also includes a Past Activities section showing which actions have been performed against a particular version of the agent.

Image 3:
AgentSoftwareUpdatePage.png

On the menu bar (Image 4), we can search the agent software either by description or by example.  These search options take text search terms.  If we know there is a new release, we can search for it by simply entering text like '12.1.0.4'.

Image 4:
SelfUpdate_AgentUpdate_bar.png

As we can see in Image 5, searching for agents with version '12.1.0.4' returns a list of available agents at that version.  Notice the Status column of the table.  Two of the possible statuses appear in Image 5: Applied and Available.  The full set of statuses an agent can have is described below.

Image 5:
AgentUpdateSearch.png

Let’s define the Agent Software Update Statuses a bit more.  They are as follows:

  1. Available = This version of the agent is available for the OS Platform and can be downloaded
  2. Download in progress = This version of the agent is being downloaded to the OMS
  3. Downloaded = This version of the agent has been downloaded to the OMS
  4. Applied = This version of the agent has been applied to the Software Library and ready to use for agent deployments

Now that we know what the Status column means, how can an agent be downloaded?

While on the Agent Software Updates page, select and highlight the OS Platform for which an agent is needed.  In this example, let's use "Microsoft Windows x64 (64-bit)" (Image 6).  Notice the Status column and Past Activities section: this agent is available for download.  Download the agent by clicking the Download button in the menu bar.

Image 6:
AgentUpdate_Win64.png

After clicking the Download button, OEM will ask you when to run the job (Image 7).  Normally running it immediately is fine.

Image 7:
AgentDownloadJob.png

Once the Status is set to Downloaded, the agent software needs to be applied to the Software Library before it can be used (Image 8).  Highlight the agent that was just downloaded and click the Apply button.  This will apply the binaries to the Software Library.  Also notice the Past Activities section; here we can clearly see what has been done with these agent binaries.

Image 8:
AgentSoftwareDownloaded.png

Once the Apply button has been clicked, OEM presents a message letting you know that the Apply operation will store the agent software in the Software Library (Image 9).  Click OK when ready.

Image 9:
AgentUpdateApplyMsg.png

The agent software is finally applied to the Software Library and ready to use (Image 10).

Image 10:
AgentAppliedtoSWLib.png

With the agent now applied to the Software Library, it can be used to deploy agents, via push or pull, to Microsoft Windows hosts.

Note: In my experience, most deployments to Microsoft Windows hosts have to be done either with Cygwin or with a silent install.  If you would like more information on the silent-install approach, I wrote a post on it here.
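
As an aside, once the agent software has been applied to the Software Library, the same image can also be pulled from the command line for silent installs.  The sketch below is from memory, so treat the verb and parameters as assumptions to verify against the EMCLI documentation; the destination path is just an example:

$ emcli login -username=sysman
$ emcli get_agentimage -destination=/tmp/agentimage -platform="Microsoft Windows x64 (64-bit)" -version=12.1.0.4.0

The resulting agent image can then be staged on the target host and installed silently, which sidesteps the Cygwin requirement for push deployments.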

Enjoy!!

about.me: http://about.me/dbasolved


Filed under: OEM
Categories: DBA Blogs

API Integration with Zapier (Gmail to Salesforce)

Kubilay Çilkara - Sun, 2014-09-07 11:42
Recently I attended a training session with +General Assembly in London titled What and Why of APIs. It was a session focusing on the usage of APIs, and it was not technical at all. I find this type of training session very useful, as it describes the concepts and controlling ideas behind technologies rather than the hands-on, involved implementation details.

What grabbed my attention among the many different and very useful public and private API tools, 'thingies', introduced in this training session was Zapier (www.zapier.com).

Zapier looked to me like a platform for integrating APIs with clicks rather than code, with declarative programming. It is a way of automating the internet. What you get when you sign up with them is the ability to use 'Zaps', or create your own Zaps. Zaps are integrations of endpoints, like connecting Foursquare to Facebook or Gmail to Salesforce and syncing them. One of the available Zaps does exactly that: it connects your Gmail emails to Salesforce using the Gmail and Salesforce APIs and lets you sync between them. Not only that, but Zapier Zaps also put triggers on the endpoints, which allow you to sync only when certain conditions are true. For example, the Gmail to Salesforce Zap can push your email into a Salesforce Lead only when an email with a certain subject arrives in your Gmail inbox. This is what the Zapier platform looks like:


An individual Zap looks like this and is nothing more than a mapping of the Endpoints with some trigger actions and filters.


The environment is self-documenting and very easy to use. All you do is drag and drop Gmail fields and match them with the Lead, or other Salesforce custom object, fields. Then you configure the sync to happen only under certain conditions/filters. It is really easy to set up. The free version runs the sync every 5 hours, good enough for me; the paid version runs the sync every 5 minutes.
There is even the capability to track historical runs and trigger a manual run via the Zap menu; see the 'Run' command below, which runs a Zap whenever you like.

In my case I used the tool to create a Zap to do exactly what I just described. My Zap creates a Salesforce Lead automatically in my Salesforce org whenever a 'special' email is sent to me. Great automation!
This is a taste of the 'platform cloud' tools out there to do API to API and App to App integrations with clicks and not code. With tools like Zapier, all you really need is imagination!
Categories: DBA Blogs

Watch Oracle DB Elapsed Time and Wall Time With Parallel Query

In my recent postings I wrote that, when using Oracle Database parallel query, a SQL statement's wall time should be equal to its elapsed time divided by the number of parallel query slaves, plus some overhead.

That may seem correct, but is it really true? To check I ran an experiment and posted the results here. The results are both obvious and illuminating.

If you don't want to read but would rather just sit on the couch, have a beer, and watch TV, you're in luck! I took a clip from my Tuning Oracle Using An AWR Report online video seminar and put it on YouTube. You can watch the video clip on YouTube HERE or simply click on the movie below.


The Math, For Review Purposes
In my recent postings I detailed the key time parameters: DB Time, DB CPU, non-idle wait time, elapsed time, parallelism, and effective parallelism. To save you some clicking, the key parameters and their relationships are shown below.

DB Time = DB CPU + NIWT

Elapsed Time = Sum of DB Time

Wall Time = ( Elapsed Time / Parallelism ) + Parallelism Overhead

Wall Time = Elapsed Time / Effective Parallelism
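
To check these numbers on your own system, the query below is a reasonable starting point. It is only a minimal sketch: it assumes the statement is still in the library cache, uses v$sql's px_servers_executions column to approximate the number of slaves per execution, and '&sql_id' is a placeholder for the real SQL_ID. (Note that statements using two parallel slave sets will report roughly twice the DOP here.)

-- estimate wall time per execution from v$sql (times are in microseconds)
select sql_id,
       round(elapsed_time/1e6/nullif(executions,0),1)      as avg_elapsed_s,
       round(cpu_time/1e6/nullif(executions,0),1)          as avg_cpu_s,
       round(px_servers_executions/nullif(executions,0),1) as px_per_exec,
       round((elapsed_time/1e6/nullif(executions,0))
             / greatest(px_servers_executions/nullif(executions,0),1),1) as est_wall_s
from   v$sql
where  sql_id = '&sql_id';

The est_wall_s column divides the average elapsed time by the approximate parallelism (floored at 1), so it ignores the overhead term and will slightly understate the true wall time.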


Test Results: When Oracle Parallel Query was NOT involved.
If you want to see my notes, snippets, etc. they can be found in this text file HERE.

Here is the non-parallel SQL statement.

select /*+ FULL(big2) NOPARALLEL (big2) */ count(*)
into   i_var
from   big2 
where  rownum < 9000000

When the SQL statement was running, I was monitoring the session using my Realtime Session Sampler OSM tool, rss.sql. Since I knew the server process session ID, wanted to sample every second, and wanted to see everything just for this session, this is the rss.sql syntax:
SQL>@rss.sql 16 16 827 827 % % 1
For details on any OSM tool syntax, run the OSM menu script, osmi.sql. You can download my OSM Toolkit HERE.

The rss.sql tool output is written to a text file, which I was doing a "tail -f" on. Here is a very small snippet of the output. The columns are sample number, sample time, session SID, session serial#, Oracle username, CPU or WAIT, SQL_ID, OraPub wait category, wait event, [p1,p2,p3].


We can see the session is consuming CPU and waiting. When waiting, the wait event is "direct path read", which represents asynchronous (we hope) block read requests to the IO subsystem whose results will NOT be buffered in the Oracle buffer cache.

Now for the timing results, which are shown in the table below. I took five samples.  It's VERY important to know that the wait time (WAIT_TIME_S), DB CPU (DB_CPU_S), and DB Time (DB_TIME_S) values are related ONLY to server process SID 16. In blazing contrast, the wall time (WALL_S), elapsed time (EL_VSQL_S), and SQL statement CPU consumption (CPU_VSQL_S) values are related to the entire SQL_ID statement execution.

Here are the "no parallel" experimental results.
SQL> select * from op_results;

 SAMPLE_NO     WALL_S  EL_VSQL_S CPU_VSQL_S WAIT_TIME_S   DB_CPU_S  DB_TIME_S
---------- ---------- ---------- ---------- ----------- ---------- ----------
         1  35.480252  35.470015   9.764407       24.97   9.428506  34.152294
         2  35.670021  35.659748   9.778554       25.15   9.774984  35.541861
         3  35.749926  35.739473   9.774375       25.12    9.31266  34.126285
         4  35.868076  35.857752   9.772321       25.32   9.345398  34.273479
         5  36.193062   36.18378   9.712962       25.46   9.548465  35.499693
Let's check the math. For simplicity and clarity, please allow me to round and use only sample 5.

DB_TIME_S = DB_CPU_S + WAIT_TIME_S
35.5 vs 9.5 + 25.5 = 35.0

The DB Time is pretty close (35.5 vs 35.0). Close enough to demonstrate the time statistic relationships.

Elapsed Time (EL_VSQL_S) = DB_TIME_S
35.5 vs 34.2

The Elapsed Time is off by around 4% (35.5 vs 34.2), but still close enough to demonstrate the time statistic relationships.

Wall Time (WALL_S) = Elapsed Time (EL_VSQL_S) / Effective Parallelism
35.5 = 35.5 / 1

Nice! The Wall Time results matched perfectly (35.5 vs 35.5).

To summarize in a non parallel query (i.e., single server process) situation, the time math results are what we expected! (and hoped for)

Test Results: When Oracle Parallel Query WAS involved.
The only difference between the "non parallel" SQL statement above and the SQL statement below is the parallel hint. Below is the "parallel" SQL statement.

select /*+ FULL(big2) PARALLEL(big2,3) */ count(*)
into   i_var
from   big2
where  rownum < 9000000

When the "parallel" SQL statement was running, because Oracle parallel query was involved (resulting in multiple related Oracle sessions), I needed to widen the session ID (and serial#) range in my rss.sql monitoring to include all sessions. I still sampled every second. Here is the rss.sql syntax:
SQL>@rss.sql 0 9999 0 9999 % % 1
The tool output is written to a text file, which I was doing a "tail -f" on. Here is a very small snippet of the output. I manually inserted the blank lines to make it easier to see the different sample periods.


There is only one SQL statement being run on this idle test system. And because there is no DML involved, we don't see much background process activity. If you look closely above, session 168 (see third column) must be the log writer process because the wait event is "log file parallel write". I checked, and session 6 is a background process as well.

It's no surprise to typically see only four sessions involved: one parallel query coordinator and three parallel query slaves! Interestingly, the main server process session from which I executed the query is session number 16. It never showed up in any of my samples! I suspect it was "waiting" on an idle wait event, and I'm only showing processes consuming CPU or waiting on a non-idle wait event. Very cool.
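
By the way, if you want to see the coordinator/slave relationships for yourself while the query is running, the v$px_session view maps each slave session back to its query coordinator. A minimal sketch, assuming the standard view columns:

-- PX sessions (slaves and coordinator); QCSID is the coordinator's session ID
select qcsid, sid, serial#, server_set, degree, req_degree
from   v$px_session
order  by qcsid, server_set, sid;

For my run I would expect the three slave rows to point back at the coordinator session via QCSID.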

Now for the timing results. I took five samples.  Again, it's VERY important to know that the wait time (WAIT_TIME_S), DB CPU (DB_CPU_S), and DB Time (DB_TIME_S) values are related ONLY to the calling server process, which in this case is session 16. In blazing contrast, the wall time (WALL_S), elapsed time (EL_VSQL_S), and SQL statement CPU consumption (CPU_VSQL_S) values are related to the entire SQL statement execution.

Here are the "parallel" experimental results.
 SQL>  select * from op_results;

 SAMPLE_NO     WALL_S  EL_VSQL_S CPU_VSQL_S WAIT_TIME_S   DB_CPU_S  DB_TIME_S
---------- ---------- ---------- ---------- ----------- ---------- ----------
         1  46.305951 132.174453   19.53818         .01   4.069579   4.664083
         2  46.982111 132.797536  19.371063         .02   3.809439   4.959602
         3   47.79761 134.338069  19.739735         .02   4.170921   4.555491
         4   45.97324 131.809249  19.397557         .01   3.790226   4.159572
         5  46.053922 131.765983  19.754143         .01   4.062703   4.461175
Let's check the math. For simplicity and clarity, please allow me to round and use sample 5.

DB_TIME_S = DB_CPU_S + WAIT_TIME_S
4.5 vs 4.1 + 0.0 = 4.1

The DB Time shown above is kind of close... about 10% off (4.5 vs 4.1). There is surely some timing error in my collection script, but I take the position that this is close enough to demonstrate the time statistic relationships. Now look below.

Elapsed Time (EL_VSQL_S) = DB_TIME_S
131.7 != 4.5

Whoa! What happened here? (131.7 vs 4.5) Actually, everything is OK, because the DB Time is related only to the one session (Session ID 16), whereas the elapsed time is ALL the DB Time for ALL the processes involved in the SQL statement. Since parallel query is involved, resulting in four additional sessions (1 coordinator, 3 slaves), we would expect the elapsed time to be greater than the DB Time. Now let's look at the wall time.

Wall Time (WALL_S) = ( Elapsed Time (EL_VSQL_S) / Parallelism ) + overhead
46.1 = ( 131.8 / 3 ) + 2.2

Nice! The effective parallelism is actually a bit less than 3 (131.8 / 46.1 is about 2.9) because of the overhead (2.2 seconds). The numbers make sense because:

1. The wall time is less than the elapsed time because parallel query is involved.

2. The wall time is close to the elapsed time divided by the parallelism. And we can even see the parallelism overhead.

So it looks like our time math is correct!


Reality And The AWR Or Statspack Report
This is really important. In the SQL Statement section of any AWR or Statspack Report, you will see the total elapsed time over the snapshot interval and perhaps the average SQL ID elapsed time per execution. So what is the wall time? What are users experiencing? The short answer is, we do NOT have enough information.

To know the wall time, we need to know the parallelism situation. If you are NOT using parallel query, then, based on the time math demonstrated above, the elapsed time per execution will be close to what the user is experiencing (unless there is an issue outside of Oracle). However, if parallelism is involved, you can expect the wall time (i.e., the user's experience) to be much less than the elapsed time per execution shown in the AWR or Statspack report.

Another way of looking at this is: if a user is reporting a query is taking 10 seconds, but the average elapsed time is showing as 60 seconds, parallel query is probably involved. Also, as I mentioned above, never forget the average value is not always the typical value. (More? Check out my video seminar entitled Using Skewed Data To Your Advantage HERE.)
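
And if the statement has already aged out of the library cache, AWR retains similar counters. Here is a hedged sketch (assuming the Diagnostics Pack license and the usual dba_hist_sqlstat columns) to check whether parallel query slaves were involved for a given SQL_ID:

-- non-zero px_execs suggests elapsed time per exec overstates the wall time
select snap_id,
       executions_delta                                           as execs,
       px_servers_execs_delta                                     as px_execs,
       round(elapsed_time_delta/1e6/nullif(executions_delta,0),1) as avg_elapsed_s
from   dba_hist_sqlstat
where  sql_id = '&sql_id'
order  by snap_id;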

Thanks for reading!

Craig.
Categories: DBA Blogs

RAC Database Backups

Hemant K Chitale - Sun, 2014-09-07 08:20
In 11gR2 Grid Infrastructure and RAC


UPDATE : 13-Sep-14 : How to run the RMAN Backup using server sessions concurrently on each node.  Please scroll down to the update.


In a RAC environment, the database backups can be executed from any one node or distributed across multiple nodes of the cluster.

In my two-node environment, I have backups configured to go to an FRA.  This is defined by the instance parameter "db_recovery_file_dest" (and "db_recovery_file_dest_size").  This can be a shared location -- e.g. an ASM DiskGroup or a ClusterFileSystem.  Therefore, the parameter should ideally be the same across all nodes so that backups may be executed from any or multiple nodes without changing the backup location.
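
For completeness, both parameters can be set identically across all instances in one step by using sid='*'. A minimal sketch, assuming the +FRA DiskGroup already exists and 4000M is the size you want (the size must be set before the destination if the destination is not yet configured):

SQL> alter system set db_recovery_file_dest_size=4000M scope=both sid='*';
SQL> alter system set db_recovery_file_dest='+FRA' scope=both sid='*';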

Running the RMAN commands from node1 :
[root@node1 ~]# su - oracle
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 21:56:46 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter db_recovery_file

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string +FRA
db_recovery_file_dest_size big integer 4000M
SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 21:57:49 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> list backup summary;

using target database control file instead of recovery catalog

List of Backups
===============
Key TY LV S Device Type Completion Time #Pieces #Copies Compressed Tag
------- -- -- - ----------- --------------- ------- ------- ---------- ---
12 B F A DISK 26-NOV-11 1 1 YES TAG20111126T224849
13 B A A DISK 26-NOV-11 1 1 YES TAG20111126T230108
16 B A A DISK 16-JUN-14 1 1 YES TAG20140616T222340
18 B A A DISK 16-JUN-14 1 1 YES TAG20140616T222738
19 B F A DISK 16-JUN-14 1 1 NO TAG20140616T222742
20 B F A DISK 05-JUL-14 1 1 NO TAG20140705T173046
21 B F A DISK 16-AUG-14 1 1 NO TAG20140816T231412
22 B F A DISK 17-AUG-14 1 1 NO TAG20140817T002340

RMAN>
RMAN> backup as compressed backupset database plus archivelog delete input;


Starting backup at 07-SEP-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=111 RECID=77 STAMP=857685630
input archived log thread=2 sequence=37 RECID=76 STAMP=857685626
input archived log thread=2 sequence=38 RECID=79 STAMP=857685684
input archived log thread=1 sequence=112 RECID=78 STAMP=857685681
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220131_0.288.857685699 tag=TAG20140907T220131 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:09
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_111.307.857685623 RECID=77 STAMP=857685630
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_37.309.857685623 RECID=76 STAMP=857685626
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_38.277.857685685 RECID=79 STAMP=857685684
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_112.270.857685681 RECID=78 STAMP=857685681
Finished backup at 07-SEP-14

Starting backup at 07-SEP-14
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709 tag=TAG20140907T220145 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:06:15
Finished backup at 07-SEP-14

Starting backup at 07-SEP-14
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=113 RECID=81 STAMP=857686085
input archived log thread=2 sequence=39 RECID=80 STAMP=857686083
channel ORA_DISK_1: starting piece 1 at 07-SEP-14
channel ORA_DISK_1: finished piece 1 at 07-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_07/annnf0_tag20140907t220807_0.307.857686087 tag=TAG20140907T220807 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_1_seq_113.309.857686085 RECID=81 STAMP=857686085
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_39.277.857686083 RECID=80 STAMP=857686083
Finished backup at 07-SEP-14

Starting Control File and SPFILE Autobackup at 07-SEP-14
piece handle=+FRA/racdb/autobackup/2014_09_07/s_857686089.277.857686097 comment=NONE
Finished Control File and SPFILE Autobackup at 07-SEP-14

RMAN>

Note how the "PLUS ARCHIVELOG" specification also included archivelogs from both threads (instances) of the database.

Let's verify these details from the instance on node2 :

[root@node2 ~]# su - oracle
-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sun Sep 7 22:11:00 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN>

RMAN> list backup of database completed after 'trunc(sysdate)-1';

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
24 Full 258.21M DISK 00:06:12 07-SEP-14
BP Key: 24 Status: AVAILABLE Compressed: YES Tag: TAG20140907T220145
Piece Name: +FRA/racdb/backupset/2014_09_07/nnndf0_tag20140907t220145_0.270.857685709
List of Datafiles in backup set 24
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
1 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/system.257.765499365
2 Full 1160228 07-SEP-14 +DATA2/racdb/datafile/sysaux.256.765502307
3 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/undotbs1.259.765500033
4 Full 1160228 07-SEP-14 +DATA2/racdb/datafile/undotbs2.257.765503281
5 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/users.261.765500215
6 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/partition_test.265.809628399
7 Full 1160228 07-SEP-14 +DATA1/racdb/datafile/hemant_tbs.266.852139375
8 Full 1160228 07-SEP-14 +DATA3/racdb/datafile/new_tbs.256.855792859

RMAN>

Yes, today's backup is visible from node2 as it retrieves the information from the controlfile that is common across all the instances of the database.

How are the archivelogs configured ?

RMAN> exit


Recovery Manager complete.
-sh-3.2$
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Sep 7 22:15:51 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 39
Next log sequence to archive 40
Current log sequence 40
SQL>
SQL> show parameter db_recovery_file_dest

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string +FRA
db_recovery_file_dest_size big integer 4000M
SQL>

Both instances have the same destination configured for archivelogs and backups.
.
.
.
=======================================================
UPDATE : 13-Sep-14 :  Running the backup concurrently from both nodes 

There are two ways to have the RMAN Backup run from both nodes.
A.   Issue a separate RMAN BACKUP DATAFILE or BACKUP TABLESPACE command from each node, such that the two nodes have an independent list of Datafiles / Tablespaces

B.  Issue a BACKUP DATABASE command from one node but with two channels open, one against each node.

Here, method A is easy to do but difficult to control as you add Tablespaces and Datafiles.  So, I will demonstrate method B.

I begin with ensuring that
a.  I have REMOTE_LOGIN_PASSWORDFILE configured so that I can make a SQL*Net connection from node1 to node2 (RMAN requires connecting AS SYSDBA in 11g)
b.  I have a TNSNAMES.ORA entry configured for the instance on node2 (note that the service name is common across all [both] instances in the Cluster)

-sh-3.2$ hostname
node1.mydomain.com
-sh-3.2$ id
uid=800(oracle) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba),1021(dba)
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sat Sep 13 23:22:09 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> show parameter remote_login_passwordfile;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
remote_login_passwordfile string EXCLUSIVE
SQL> quit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
-sh-3.2$ cat $ORACLE_HOME/network/admin/tnsnames.ora
# tnsnames.ora.node1 Network Configuration File: /u01/app/oracle/rdbms/11.2.0/network/admin/tnsnames.ora.node1
# Generated by Oracle configuration tools.

RACDB_1 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node1)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = RACDB)
)
)

RACDB_2 =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = node2)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = RACDB)
)
)

-sh-3.2$

Next, I start RMAN and allocate two Channels, one for each Instance (on each Node in the Cluster) and issue a BACKUP DATABASE that is automatically executed across both Channels.

-sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Sep 13 23:23:24 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: RACDB (DBID=762767011)

RMAN> run
2> {allocate channel ch1 device type disk connect 'sys/manager@RACDB_1';
3> allocate channel ch2 device type disk connect 'sys/manager@RACDB_2';
4> backup as compressed backupset database plus archivelog delete input;
5> }

using target database control file instead of recovery catalog
allocated channel: ch1
channel ch1: SID=61 instance=RACDB_1 device type=DISK

allocated channel: ch2
channel ch2: SID=61 instance=RACDB_2 device type=DISK


Starting backup at 13-SEP-14
current log archived
channel ch1: starting compressed archived log backup set
channel ch1: specifying archived log(s) in backup set
input archived log thread=2 sequence=40 RECID=82 STAMP=857687640
input archived log thread=1 sequence=114 RECID=84 STAMP=858204801
input archived log thread=2 sequence=41 RECID=83 STAMP=857687641
input archived log thread=1 sequence=115 RECID=86 STAMP=858208025
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed archived log backup set
channel ch2: specifying archived log(s) in backup set
input archived log thread=2 sequence=42 RECID=85 STAMP=858208000
input archived log thread=1 sequence=116 RECID=87 STAMP=858209078
input archived log thread=2 sequence=43 RECID=88 STAMP=858209079
channel ch2: starting piece 1 at 13-SEP-14
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.279.858209109 tag=TAG20140913T232445 comment=NONE
channel ch2: backup set complete, elapsed time: 00:00:26
channel ch2: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_42.296.858207997 RECID=85 STAMP=858208000
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_116.263.858209079 RECID=87 STAMP=858209078
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_43.265.858209079 RECID=88 STAMP=858209079
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t232445_0.275.858209099 tag=TAG20140913T232445 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:56
channel ch1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_40.309.857687641 RECID=82 STAMP=857687640
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_114.295.858204777 RECID=84 STAMP=858204801
archived log file name=+FRA/racdb/archivelog/2014_09_07/thread_2_seq_41.293.857687641 RECID=83 STAMP=857687641
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_115.305.858208001 RECID=86 STAMP=858208025
Finished backup at 13-SEP-14

Starting backup at 13-SEP-14
channel ch1: starting compressed full datafile backup set
channel ch1: specifying datafile(s) in backup set
input datafile file number=00001 name=+DATA1/racdb/datafile/system.257.765499365
input datafile file number=00004 name=+DATA2/racdb/datafile/undotbs2.257.765503281
input datafile file number=00007 name=+DATA1/racdb/datafile/hemant_tbs.266.852139375
input datafile file number=00008 name=+DATA3/racdb/datafile/new_tbs.256.855792859
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed full datafile backup set
channel ch2: specifying datafile(s) in backup set
input datafile file number=00002 name=+DATA2/racdb/datafile/sysaux.256.765502307
input datafile file number=00003 name=+DATA1/racdb/datafile/undotbs1.259.765500033
input datafile file number=00006 name=+DATA1/racdb/datafile/partition_test.265.809628399
input datafile file number=00005 name=+DATA1/racdb/datafile/users.261.765500215
channel ch2: starting piece 1 at 13-SEP-14
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.293.858209175 tag=TAG20140913T232557 comment=NONE
channel ch2: backup set complete, elapsed time: 00:12:02
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/nnndf0_tag20140913t232557_0.305.858209163 tag=TAG20140913T232557 comment=NONE
channel ch1: backup set complete, elapsed time: 00:13:06
Finished backup at 13-SEP-14

Starting backup at 13-SEP-14
current log archived
channel ch1: starting compressed archived log backup set
channel ch1: specifying archived log(s) in backup set
input archived log thread=1 sequence=117 RECID=90 STAMP=858209954
channel ch1: starting piece 1 at 13-SEP-14
channel ch2: starting compressed archived log backup set
channel ch2: specifying archived log(s) in backup set
input archived log thread=2 sequence=44 RECID=89 STAMP=858209952
channel ch2: starting piece 1 at 13-SEP-14
channel ch1: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.265.858209957 tag=TAG20140913T233915 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:03
channel ch1: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_1_seq_117.309.858209953 RECID=90 STAMP=858209954
channel ch2: finished piece 1 at 13-SEP-14
piece handle=+FRA/racdb/backupset/2014_09_13/annnf0_tag20140913t233915_0.263.858209957 tag=TAG20140913T233915 comment=NONE
channel ch2: backup set complete, elapsed time: 00:00:03
channel ch2: deleting archived log(s)
archived log file name=+FRA/racdb/archivelog/2014_09_13/thread_2_seq_44.295.858209951 RECID=89 STAMP=858209952
Finished backup at 13-SEP-14

Starting Control File and SPFILE Autobackup at 13-SEP-14
piece handle=+FRA/racdb/autobackup/2014_09_13/s_858209961.295.858209967 comment=NONE
Finished Control File and SPFILE Autobackup at 13-SEP-14
released channel: ch1
released channel: ch2

RMAN>

We can see that Channel ch1 was connected to Instance RACDB_1 and ch2 was connected to RACDB_2. Also, the messages indicate that both channels were running concurrently.
I also verified that the Channels did connect to each instance :

[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle 11205 1 1 23:24 ? 00:00:00 oracleRACDB_1 (LOCAL=NO)
You have new mail in /var/spool/mail/root
[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle 11205 1 3 23:24 ? 00:00:04 oracleRACDB_1 (LOCAL=NO)
[root@node1 ~]# ps -ef |grep RACDB_1 |grep LOCAL=NO
oracle 11205 1 4 23:24 ? 00:00:49 oracleRACDB_1 (LOCAL=NO)
[root@node1 ~]#
[root@node2 ~]# ps -ef |grep RACDB_2 | grep LOCAL=NO
oracle 6233 1 0 23:24 ? 00:00:00 oracleRACDB_2 (LOCAL=NO)
You have new mail in /var/spool/mail/root
[root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
oracle 6233 1 0 23:24 ? 00:00:00 oracleRACDB_2 (LOCAL=NO)
[root@node2 ~]# ps -ef |grep RACDB_2 |grep LOCAL=NO
oracle 6233 1 2 23:24 ? 00:00:24 oracleRACDB_2 (LOCAL=NO)
[root@node2 ~]#

As soon as I closed the RMAN (client) session, the two server processes also terminated.

This method (Method B) allows me to run an RMAN client session from any node in the Cluster and have RMAN server sessions running concurrently across all or some nodes of the Cluster, even if I have not designated a single, specific node as my RMAN Backups node.

Edit : I have demonstrated using ALLOCATE CHANNEL to run an ad hoc, interactive backup.  If you want to create a persistent script, you might want to use CONFIGURE CHANNEL and have the SYS password persisted in the configuration (saved in the controlfile) so that it is not in "plain text" in a script.
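
For example, a persistent configuration along these lines (shown only as a sketch, reusing the same connect strings as above) would let a plain BACKUP command load-balance across both instances without a RUN block:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE DISK CONNECT 'sys/manager@RACDB_1';
RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE DISK CONNECT 'sys/manager@RACDB_2';
RMAN> BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG DELETE INPUT;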

.
.
.

Categories: DBA Blogs