
Feed aggregator

Supplement: Backing up an instance to the Oracle Public Cloud

Jean-Philippe Pinte - Thu, 2014-09-18 03:09
Since April 2014, Oracle has offered a Cloud backup solution called Oracle Database Backup Service:

https://cloud.oracle.com/database_backup?lmResID=1394421723860&tabID=1394421714895 


The Oracle Enterprise Manager 12c console lets you manage your backups and restores directly against Oracle Database Backup Service.
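
Backups to the service are also typically driven straight from RMAN once the Oracle Database Cloud Backup Module is installed. A minimal sketch of that configuration; the library and configuration file paths below are illustrative, not taken from the post:

rman target / <<'EOF'
# Point the SBT channel at the cloud backup module (paths are examples)
CONFIGURE CHANNEL DEVICE TYPE 'SBT_TAPE' PARMS 'SBT_LIBRARY=/u01/app/oracle/lib/libopc.so, ENV=(OPC_PFILE=/u01/app/oracle/config/opc_config.ora)';
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
# The cloud backup service requires encrypted backups
CONFIGURE ENCRYPTION FOR DATABASE ON;
BACKUP DATABASE;
EOF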


More information:


2013 Chevrolet Spark LT for sale

Ameed Taylor - Wed, 2014-09-17 21:36
The 2013 Chevrolet Spark LT minicar is the smallest Chevy offered in the U.S. in years. Aimed at Millennial buyers, it is designed to be low-cost, colorful, and digitally connected for first-time car buyers who find their cell phones more interesting than new vehicles.

The compact five-door hatchback competes with a growing field of minicars that includes the MINI Cooper that launched the segment in the U.S., the new Fiat 500, the Mitsubishi i electric car, and a pair of two-seat entries, the aging Smart Fortwo and the new Scion iQ.

Even though its 12-foot-1-inch length is a full three feet shorter than a Chevrolet Cruze compact four-door sedan, the Spark does not look as small as it is. Only in scale -- parked beside a sport utility vehicle, for example -- is its true size apparent.

The designers have done everything they could to disguise the Spark's proportions. It is a tall car perched on small 15-inch wheels, but character lines, black plastic panels that continue the window line, and a large Chevy twin-port grille help to hide the slab-sided, snub-nosed box-on-wheels look.

The handles for the rear doors are hidden in the black plastic trim, which Chevy says gives the car a "coupe-like" look. We don't really buy that, but visually the rear doors don't immediately read as doors at first glance. While the rear end is all but vertical, a long roof spoiler extends back to frame the rear window opening, adding depth to the tail and improving the Spark's aerodynamics too.

Inside, the textured hard-plastic dash makes no attempt to disguise what it is by imitating another material. Similarly, the seats are unashamedly synthetic fabric. But body-color inserts on the door trim, inside the door bins, and on the dashboard add a sprightly note. They recall the painted interior steel found in economy cars of the Sixties, although in the Spark they are all body-color plastic added for effect.

Like its larger sibling the Sonic, the Spark has a "motorcycle-style" instrument pod mounted on the steering column. It houses a speedometer and a small vehicle-information display. The base Spark has a small monochrome center display, but all LT trim levels feature a 7-inch color touchscreen in the center stack.

There is only one engine in the Spark, an 84-horsepower 1.2-liter four-cylinder Ecotec that puts out 83 lb-ft of torque. The Spark comes standard with a five-speed manual gearbox, and a four-speed automatic transmission can be ordered for an extra $925. Aggressive driving with the manual can hustle the little Spark through town acceptably, but the automatic is, honestly, a dog. Its first gear is tall enough that accelerating away from stoplights feels excruciatingly slow, and it doesn't seem to offer much punch at any speed or in any gear. This is one car for which the manual gearbox may be the only sensible choice.

The 2013 Chevrolet Spark LT with the manual transmission is EPA-rated at 32 mpg city, 38 mpg highway, for a combined rating of 34 mpg -- comparable to the rest of the class, but noticeably lower than quite a few larger compact cars, which benefit from better aerodynamics. The automatic model comes in lower, at a combined 32 mpg (28 mpg city, 37 mpg highway). The Spark runs on regular-grade gas, though; the Fiat 500 recommends premium.

The Spark's ride is somewhat firm, but it handled broken New York City streets with aplomb -- even though riders were aware of every bump, pothole, ridge, and swell. It is tall enough that passengers will definitely feel the Spark leaning in hard corners, but its fairly small wheels and tires worked hard to hold the road. The electric power steering offers some highway feel, and though it is hardly as entertaining to toss around as the MINI Cooper, the Spark can dart into gaps other cars couldn't fit into, and it is a delight to park. Our initial drive gave us no chance to test the Spark at 75-mph freeway speeds.

Inside, the Spark's seats are manually adjustable but comfortable -- if a bit narrow for broader American travelers. The rear seatback is upright, but with some accommodation between front- and rear-seat riders, four adult males can occupy the Spark and travel in reasonable comfort. That is more than can be said for the MINI Cooper or Fiat 500; even the Mitsubishi i electric car -- the only other five-door hatchback in this segment -- offers similarly tight rear-seat space.

With the rear seat up, the Spark has 11.4 cubic feet of load space -- enough to hold 10 to 12 full paper grocery bags without trouble -- which grows to 31.2 cubic feet when the 60/40 split rear seat is flipped and folded down. A cargo net is available on the higher trim levels, as are rails to mount luggage carriers on the roof.

The 2013 Chevrolet Spark LT has been designed to meet all current and future U.S. safety requirements, and comes standard with no fewer than 10 airbags. It has not yet been rated for crash safety by either the NHTSA or the IIHS. The Spark has not only electronic stability control and anti-lock brakes, but also front seat-belt pretensioners and standard hill-start assist -- a good feature for young drivers. Outward visibility is good, including decent rear three-quarter visibility over the driver's shoulder for reversing (no rearview camera is available). All Sparks come standard with GM's OnStar system and six free months of service.

Unusually for a minicar, all Sparks come standard with air conditioning, power windows, a rear window wiper, and a trip computer. And Chevy figured that one way to make the Spark seem much less appliance-like
Categories: DBA Blogs


Our Very Own Wearable

Oracle AppsLab - Wed, 2014-09-17 16:33

Noel (@noelportugal) and Raymond have been hard at work building a custom wearable, a.k.a. the secret OpenWorld project. The finished product is ready for a closeup.


The components are:

The Bean is an amazingly little board: Arduino-compatible, with a Bluetooth Low Energy module, plus an RGB LED and a 3-axis accelerometer.

I can’t tell you what we’re doing with this custom wearable, yet, but it will happen during OpenWorld. If you’ll be at the big show, OpenWorld or JavaOne, you’ll have a chance to see it in action and chat with the guys who built it.

Oh, and Noel will be writing up the details of the build, the story behind it and the journey, as well as all the nerdy bits. Stay tuned for that.

Autonomous Quadcopters Playing Some Catch

Oracle AppsLab - Wed, 2014-09-17 16:04

Tony recently went to a talk by Salim Ismail (@salimismail), the Founding Executive Director of Singularity University. He may or may not post his thoughts on the talk, which sounds fascinating, but this video is worth sharing either way, and not just because we have quadcopter fever.

Yeah, that’s autonomous flight. So refer to the list of horrifying things that should not be allowed.

Filler or Curated Content?

Oracle AppsLab - Wed, 2014-09-17 15:30

I consider these types of posts to be filler, but I suppose you could look at it as curated content or something highbrow like that. Take your pick.

10 Horrifying Technologies That Should Never Be Allowed

I scanned this post first, thought it would be interesting and left it to read later. Then I read it, and now, I’m terrified. Here’s the list, make sure to hit the link and read all about the sci-fi horrors that aren’t really sci-fi anymore.

  • Weaponized Nanotechnology
  • Conscious Machines
  • Artificial Superintelligence
  • Time Travel
  • Mind Reading Devices
  • Brain Hacking Devices
  • Autonomous Robots Designed to Kill Humans
  • Weaponized Pathogens
  • Virtual Prisons and Punishment
  • Hell Engineering

xkcd on watches

This is exactly how I feel about watches.

This is Phil Fish

I only know who Phil Fish is because I watched Indie Game: The Movie. This short documentary by Ian Danskin is quite good and is newsworthy this week thanks to Markus Persson’s reference to it in his post (h/t Laurie for sharing) about why he’s leaving Mojang, the makers of Minecraft, after Microsoft completes its acquisition of the company.

I have often wondered why so many people hate Nickelback, and now I have a much better understanding of why, thanks to Ian. Embedded here for your viewing pleasure.

https://www.youtube.com/watch?v=PmTUW-owa2w

No Write Permission on ACFS Mount Point

Sabdar Syed - Wed, 2014-09-17 15:24


Last night I managed to create the ACFS mount point after resolving the issue "ACFS-9459: ASVM/ACFS is not supported on this os version". But after creating the ACFS mount point, I was unable to create or touch any files under it.

Although I tried, as both the oracle OS user and root, to touch a file under this ACFS mount point, it kept failing with the following error:

"touch: cannot touch `x': Permission denied"

Here are the steps I tried and got the error:

The ACFS mount point "/oracle/prd" was created on a Linux 6.5 server using the Oracle ASMCA tool, and the "/oracle/prd" mount point has 775 permissions.

As Oracle User:

[oracle@Linux01 ~]# df -m|grep -i asm
/dev/asm/oracle_prd-77   35840    148     35693   1% /oracle/prd
[oracle@Linux01 ~]# cd /oracle/prd
[oracle@Linux01 prd]# pwd
/oracle/prd
[oracle@Linux01 prd]# ls -ld /oracle/prd
drwxrwxr-x. 4 oracle dba 4096 Sep 15 19:29 /oracle/prd
[oracle@Linux01 prd]# ls
lost+found
[oracle@Linux01 prd]# touch abc
touch: cannot touch `abc': Permission denied
[oracle@Linux01 prd]#

As Root user:

[root@Linux01 ~]# df -m|grep -i asm
/dev/asm/oracle_prd-77   35840    148     35693   1% /oracle/prd
[root@Linux01 ~]# cd /oracle/prd
[root@Linux01 prd]# pwd
/oracle/prd
[root@Linux01 prd]# ls -ld /oracle/prd
drwxrwxr-x. 4 oracle dba 4096 Sep 15 19:29 /oracle/prd
[root@Linux01 prd]# ls
lost+found
[root@Linux01 prd]# touch abc
touch: cannot touch `abc': Permission denied
[root@Linux01 prd]#

The problem was that SELinux was enabled on the Linux system.

To check whether SELinux is enabled or disabled on the system, cat the file "/etc/selinux/config".
Note: The root login was used for the following steps:

[root@Linux01]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Or use the sestatus command to check the status.

[root@geprdb850 prd]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   enforcing
Mode from config file:          disabled
Policy version:                 28
Policy from config file:        targeted

Here is how to disable SElinux:

Method 1- Edit "/etc/selinux/config" and set the SELINUX variable to 'disabled'
Method 2- Use the setenforce command to turn enforcement off (permissive mode) on the fly

If you go with Method 1, your change is permanent but only takes effect after you reboot the machine.

If you go with Method 2, your change is NOT permanent but takes effect immediately.
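
If you need both, the two methods can be combined so the change takes effect immediately and also survives a reboot. A minimal sketch as root (sed's -i.bkp flag keeps a backup copy of the config file, similar to the manual backup step below):

# Effective immediately, but only until the next reboot (permissive mode)
setenforce 0

# Permanent: back up the config, then set SELINUX=disabled for the next boot
sed -i.bkp 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config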

Method 1: (Permanent Change)

Take the backup of "/etc/selinux/config" file.

[root@Linux01]# cp /etc/selinux/config /etc/selinux/config.bkp

Then edit the "/etc/selinux/config" file and set the SELINUX variable to 'disabled':

[root@Linux01]# cat /etc/selinux/config

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Then reboot the server!!

Method 2: (On-the-fly)

[root@Linux01]# getenforce
Enforcing

[root@Linux01]# setenforce
usage:  setenforce [ Enforcing | Permissive | 1 | 0 ]

[root@Linux01 prd]# setenforce 0

[root@Linux01 prd]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   permissive
Mode from config file:          disabled
Policy version:                 28
Policy from config file:        targeted

[root@Linux01]# getenforce
Permissive

After SELinux was disabled, creating files under the ACFS mount point succeeded.
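
With SELinux no longer enforcing, the earlier failing test can be repeated to confirm the fix (same mount point as in the transcripts above):

# This previously failed with "Permission denied"
touch /oracle/prd/abc && ls -l /oracle/prd/abc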

Note: The above commands must be run as the root user, and should be carried out under system administrator supervision.

Regards,
Sabdar Syed.

http://sabdarsyed.blogspot.com

Oracle EMEA Customer Support Services Excellence Award 2014

The Oracle Instructor - Wed, 2014-09-17 13:54

The corporation announced today that I received the Customer Services Excellence Award 2014 in the category ‘Customer Champion’ for the EMEA region. It is an honor to be listed there together with these excellent professionals whom I proudly call colleagues.

CSS Excellence Award 2014


Categories: DBA Blogs

Business Transformation: Getting from Here to There

Chris Warticki - Wed, 2014-09-17 12:00

New Oracle OpenWorld Sessions

Nearly every industry is undergoing some type of transformation. Businesses strive to innovate, gain market share, and stay ahead of the competition.

Six new Oracle OpenWorld conference sessions--specifically designed for organizations embarking on major transformation initiatives such as Cloud, Big Data and Analytics, and Engineered Systems--will help answer how to get from here to there.

During Oracle OpenWorld, Thought Leaders from Oracle Consulting will share insights, best-in-class methodologies, and critical lessons learned from helping customers transform their business with new solutions built on Oracle technology.

Do you need to hear first-hand how customers are successfully moving to the Cloud (HCM, ERP and CX) in weeks with a proven, practical approach that creates value, drives down cost, and reduces risk? Are you trying to maximize what you’re getting from your current analytic solution? Would you like to enhance your customer experience as well as improve the productivity of employees with mobility? Is a private cloud initiative on your short-list? If so, you won’t want to miss new conference sessions dedicated to these leading transformational themes.

Learn More.


So many choices! 149 Oracle ACE-led sessions at Oracle OpenWorld

OTN TechBlog - Wed, 2014-09-17 10:54

Have you been searching the Oracle OpenWorld content catalog trying to decide which sessions to add to your schedule? We suggest you spend some time looking over the Oracle ACE session Focus On Document that lists all 149 sessions that will be presented by an Oracle ACE, ACE Associate or ACE Director.

Participants of the Oracle ACE Program are recognized for frequently sharing their technical insight, knowledge, and real world experience with the Oracle Community. We hope this handy list saves you some time preparing your session schedule and gives you some more time to think about what kind of cool t-shirt you are going to design while you're hanging out in the OTN Lounge. ;-)

Using the ILOM for Troubleshooting on ODA

Pythian Group - Wed, 2014-09-17 09:25

I worked on root cause analysis for a strange node reboot on client’s Oracle Database Appliance yesterday. The case was quite interesting from the perspective that none of the logs contained any information related to the cause of the reboot. I could only see the log entries for normal activities and then – BOOM! – the start-up sequence! It looked like someone just power cycled the node. I also observed the heartbeat timeouts followed by the node eviction on the remaining node. There was still one place I hadn’t checked and it revealed the cause of the issue.

One of the cool things about ODA is its service processor (SP), called Integrated Lights Out Manager (ILOM), which allows you to do many things that you’d normally have to be physically present in the data center for – power cycle the node, change the BIOS settings, choose boot devices, and … (the drum-roll) … see the console output from the server node! And it doesn’t only show the current console output; it keeps logging it too. Each ODA server has its own ILOM, so I found the IP address for the ILOM of the node that failed and connected to it using SSH.

$ ssh pythian@oda01a-mgmt
Password:

Oracle(R) Integrated Lights Out Manager

Version 3.0.14.13.a r70764

Copyright (c) 2011, Oracle and/or its affiliates. All rights reserved.

->
-> ls

 /
    Targets:
        HOST
        STORAGE
        SYS
        SP

    Properties:

    Commands:
        cd
        show

ILOM can be browsed as if it were a directory structure. Here the “Targets” are different components of the system. When you “cd” into a target you see its sub-components, and so on. Each target can have properties, which are displayed as variable=value pairs under the “Properties” section. There is also a list of “Commands” that you can execute for the current target. The “ls” command shows the sub-targets, the properties and the commands for the current target. Here’s how I found the console outputs from the failed node:

-> cd HOST
/HOST

-> ls

 /HOST
    Targets:
        console
        diag

    Properties:
        boot_device = default
        generate_host_nmi = (Cannot show property)

    Commands:
        cd
        set
        show

-> cd console
/HOST/console

-> ls

 /HOST/console
    Targets:
        history

    Properties:
        line_count = 0
        pause_count = 0
        start_from = end

    Commands:
        cd
        show
        start
        stop

-> cd history
/HOST/console/history

-> ls

The last “ls” command started printing all the history of console outputs on my screen and look what I found just before the startup sequence (I removed some lines to make this shorter and I also highlighted the most interesting lines):

divide error: 0000 [#1] SMP
last sysfs file: /sys/devices/pci0000:00/0000:00:09.0/0000:1f:00.0/host7/port-7:1/expander-7:1/port-7:1:2/end_device-7:1:2/target7:0:15/7:0:15:0/timeout
CPU 3
Modules linked in: iptable_filter(U) ip_tables(U) x_tables(U) oracleacfs(P)(U) oracleadvm(P)(U) oracleoks(P)(U) mptctl(U) mptbase(U) autofs4(U) hidp(U) l2cap(U) bluetooth(U) rfkill(U) nfs(U) fscache(U) nfs_acl(U) auth_rpcgss(U) lockd(U) sunrpc(U) bonding(U) be2iscsi(U) ib_iser(U) rdma_cm(U) ib_cm(U) iw_cm(U) ib_sa(U) ib_mad(U) ib_core(U) ib_addr(U) iscsi_tcp(U) bnx2i(U) cnic(U) uio(U) dm_round_robin(U) ipv6(U) cxgb3i(U) libcxgbi(U) cxgb3(U) mdio(U) libiscsi_tcp(U) libiscsi(U) scsi_transport_iscsi(U) video(U
) output(U) sbs(U) sbshc(U) parport_pc(U) lp(U) parport(U) ipmi_si(U) ipmi_devintf(U) ipmi_msghandler(U) igb(U) ixgbe(U) joydev(U) ses(U) enclosure(U) e1000e(U) snd_seq_dummy(U) snd_seq_oss(U) snd_seq_midi_event(U) snd_seq(U) snd_seq_device(U) snd_pcm_oss(U) snd_mixer_oss(U) snd_pcm(U) snd_timer(U) snd(U) soundcore(U) snd_page_alloc(U) iTCO_wdt(U) iTCO_vendor_support(U) shpchp(U) i2c_i801(U) i2c_core(U) ioatdma(U) dca(U) pcspkr(U) dm_multipath(U) usb_storage(U) mpt2sas(U) scsi_transport_sas(U) raid_class(U)
 ahci(U) raid1(U) [last unloaded: microcode]
Pid: 29478, comm: top Tainted: P        W  2.6.32-300.11.1.el5uek #1 SUN FIRE X4370 M2 SERVER
RIP: 0010:[<ffffffff8104b3e8>]  [<ffffffff8104b3e8>] thread_group_times+0x5b/0xab
...
Kernel panic - not syncing: Fatal exception
Pid: 29478, comm: top Tainted: P      D W  2.6.32-300.11.1.el5uek #1
Call Trace:
 [<ffffffff8105797e>] panic+0xa5/0x162
 [<ffffffff8107ae09>] ? up+0x39/0x3e
 [<ffffffff810580d1>] ? release_console_sem+0x194/0x19d
 [<ffffffff8105839a>] ? console_unblank+0x6a/0x6f
 [<ffffffff8105764b>] ? print_oops_end_marker+0x23/0x25
 [<ffffffff81456ea6>] oops_end+0xb7/0xc7
 [<ffffffff8101565d>] die+0x5a/0x63
 [<ffffffff8145677c>] do_trap+0x115/0x124
 [<ffffffff81013674>] do_divide_error+0x96/0x9f
 [<ffffffff8104b3e8>] ? thread_group_times+0x5b/0xab
 [<ffffffff810dd2f8>] ? get_page_from_freelist+0x4be/0x65e
 [<ffffffff81012b1b>] divide_error+0x1b/0x20
 [<ffffffff8104b3e8>] ? thread_group_times+0x5b/0xab
 [<ffffffff8104b3d4>] ? thread_group_times+0x47/0xab
 [<ffffffff8116ee13>] ? collect_sigign_sigcatch+0x46/0x5e
 [<ffffffff8116f366>] do_task_stat+0x354/0x8c3
 [<ffffffff81238267>] ? put_dec+0xcf/0xd2
 [<ffffffff81238396>] ? number+0x12c/0x244
 [<ffffffff8107419b>] ? get_pid_task+0xe/0x19
 [<ffffffff811eac34>] ? security_task_to_inode+0x16/0x18
 [<ffffffff8116a77b>] ? task_lock+0x15/0x17
 [<ffffffff8116add1>] ? task_dumpable+0x29/0x3c
 [<ffffffff8116c1c6>] ? pid_revalidate+0x80/0x99
 [<ffffffff81135992>] ? seq_open+0x25/0xba
 [<ffffffff81135a08>] ? seq_open+0x9b/0xba
 [<ffffffff8116d147>] ? proc_single_show+0x0/0x7a
 [<ffffffff81135b2e>] ? single_open+0x8f/0xb8
 [<ffffffff8116aa0e>] ? proc_single_open+0x23/0x3b
 [<ffffffff81127cc1>] ? do_filp_open+0x4f8/0x92d
 [<ffffffff8116f8e9>] proc_tgid_stat+0x14/0x16
 [<ffffffff8116d1a6>] proc_single_show+0x5f/0x7a
 [<ffffffff81135e73>] seq_read+0x193/0x350
 [<ffffffff811ea88c>] ? security_file_permission+0x16/0x18
 [<ffffffff8111a797>] vfs_read+0xad/0x107
 [<ffffffff8111b24b>] sys_read+0x4c/0x70
 [<ffffffff81011db2>] system_call_fastpath+0x16/0x1b
Rebooting in 60 seconds..???

A quick search on My Oracle Support found a match: “Kernel Panic at ‘thread_group_times+0x5b/0xab’ (Doc ID 1620097.1)”. The call stack and the messages are a 100% match, and the root cause is a kernel bug that’s fixed in more recent versions.
I’m not sure how I would have gotten to the root cause if this system had not been an ODA and the server had just bounced without logging the kernel panic anywhere. ODA’s ILOM definitely made the troubleshooting effort less painful, and it probably saved us from a couple more incidents caused by this bug in the future, as we were able to troubleshoot it quickly and we’ll be able to implement the fix sooner.
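
As a side note, the same console history can be captured non-interactively, which is handy for archiving it during an incident. A sketch, assuming the ILOM's SSH interface accepts a command argument (recent ILOM firmware does) and using the hostname from the example above:

# Dump the host console history from the ILOM to a local file for later analysis
ssh pythian@oda01a-mgmt 'show /HOST/console/history' > oda01a_console_history.txt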

Categories: DBA Blogs

Opening Up the LMS Walled Garden

Michael Feldstein - Wed, 2014-09-17 08:45

In yesterday’s post I described where I (and many others) see the LMS market heading in terms of interoperability.

At the same time, the LMS does a very poor job at providing a lot of the learning technologies desired by faculty and students. There is no way that a monolithic LMS can keep up with the market – it cannot match functionality of open internet tools especially without adding feature bloat.

I would add that part of the cause of the “false binary position” that D’Arcy points out is that much of the public commentary focuses on where the LMS has been rather than where it is going. There is a significant movement based on interoperability that is leading, perhaps painfully and slowly, to a world where the LMS can coexist with open educational tools, with even end users (faculty and students) eventually having the ability to select their tools that can share rosters and data with the institutional LMS.

Coexistence and interoperability, however, should not imply merely having links from the LMS to external tools as is too often the case.

The Walled Garden

The LMS (which George Station rightly points out was really called the Course Management System in the early years) started out as a walled garden with basic functionality of syllabus sharing, announcements, gradebook, email, and a few other tools.

[Image: walledgarden - the LMS as a walled garden of basic course tools]

Over time, as Jared Stein points out in his blog post:

Flash forward to 2005(ish), when “Web 2.0” was on many educators’ minds as a new wave of services that made it easier for anyone to express themselves to anyone who was interested in participating. New web services and social media made the legacy LMS look like what it was: A slow-moving cruise ship that locked passengers in their cabins. It didn’t care about user experience. It didn’t care about integrating with social media. It didn’t care about encouraging novel practices or experimentation. But those were really just symptoms; the sickness was that the LMS vendors didn’t care about what was happening in our culture and in our communities as connectivity and multimedia exploded through the open web.

The LMS vendors did not simply ignore these new services, however; they tried to have their cake and eat it too by creating poor imitations of the external tools and stuffing them inside the LMS.

[Image: walledgarden2 - the walled garden with imitation Web 2.0 tools added inside]

As Web 2.0 tools proliferated, this approach of maintaining the walled garden was one of the primary causes of feature bloat and poorly-designed learning tools within the LMS.

[Image: walledgarden3 - feature bloat inside the walled garden as external tools multiply]

False Binary – A Choice

This situation – a walled garden LMS with feature bloat and inelegant tools while ever-multiplying external tools become available – represents the bad side of the ed tech market as it has existed. Despite the weakness of this design approach, the vendors themselves were not the only ones at fault. As Mike Caulfield points out in his description of the “elegant and extensible” Prometheus:

A number of years later I asked a person I knew who worked at Prometheus why Prometheus failed. Did Blackboard crush them?

His answer was interesting. No, it wasn’t Blackboard at all. It was the educational institutions. With the slow, resource-intensive and state-mandated RFP processes, the interminable faculty committees, and the way that even after the deal was signed the institution would delay payment and implementation as long as possible (or suddenly throw it into an unanticipated ‘final review’) it was just not possible to grow a stable business. The process institutions followed was supposed to ensure equitable access to contracts, but what it did was made it impossible for any company not sitting on a pile of cash to stay in business. (I’m extrapolating a bit here, but not much).

I would add that the RFP process also encourages a feature checklist mentality, elevating the importance of being able to say “we have that feature” and minimizing the ability to say “this design doesn’t suck”.

Many institutions have reacted slowly to the proliferation of tools and officially support only the enterprise LMS – often due to FERPA / student privacy concerns, but also due to the perceived inability of central units to provide support to faculty and students on multiple tools.

But this is a choice, even in the current market with limited interoperability. There are other institutions that support not only the official enterprise LMS but also multiple learning tools. While institutions have a responsibility to provide baseline LMS services for faculty, there is a strong argument that they also have a responsibility to support the innovators and early adopters who want to experiment with different learning tools, whether or not those tools integrate with the LMS within a course.

Moving Beyond the Wall

But can the market progress such that the enterprise LMS can coexist with open tools even at the course level? The answer in my mind is yes, and the work to move in this direction has been in progress for years. Thanks to the LTI specification, and in the future the Caliper interoperability framework, the vision that George Kroner describes is getting closer and closer.

But the LMSs today won’t be the LMSs of tomorrow. Rather than being a “dumping ground” for content, maybe one possible future for LMSs is as Learning Management Scaffolding – metaphorically supporting learning no matter its shape or form – with content being viewed and activities taking place inside and outside of the LMS. Maybe content will be seamlessly navigable around the LMS and the web – and perhaps in other types of systems like LCMSs – Learning Content Management Systems. Maybe learning tools of all types and sizes – but external to the LMS – will support every long-tail instructional desire imaginable while assessment results feed back into the LMS gradebook. Maybe the LMS will be the storage mechanism for learning analytics as well, but it is more likely that it will become only one source of data feeding into another system better-suited for the task. But try as I might, I fail to imagine a future without some centrally-managed, instructor-accessible system that stores rosters and grades, enforces privacy and security policies, and provides some form of starting-off point for students.

In this developing future market, coexistence of the LMS and Open will include not just links or grudging institutional support; it will also include sharing of rosters, data, and context: open tools that start with the class roster in place, user-activity data shared between apps, and the ability for external apps to run in the context of the course design and recent class activities.

[Image: walledgarden5 - the LMS and open tools coexisting, sharing rosters, data, and context]

There will be painful implementations – caused both by LMS vendors and by institutions – that will prevent a smooth transition to this breakdown of the walled garden, but it will become increasingly difficult for LMS solutions to survive over time if they don’t adapt. There will also be market niches (e.g. specific online programs) that will retain the walled garden LMS approach, but in general the markets should change.

I personally see the realistic future as having more of a choice of tools rather than a minimal LMS. LMS vendors will continue to have reasons to develop (or acquire) their own internal tools, and there will even be cases where the tight integration and focused development will lead to better tools in the LMS than outside. The key change will be the ability for integration decisions – which tools to use in specific classes or in specific institutions – to be made closer to the faculty and student end users. From LMS vendor to central IT to academic program to even individual faculty – moving closer to those who know the specific needs of the class. Central IT and the institution will remain important in setting policies and permissions to protect student privacy and provide guidance to faculty and course designers who are more conservative in their ed tech usage. But either way (minimal LMS or swappable tool LMS), I think the long-term trend is moving in this direction of LMS and Open tool coexistence.

Update 9/19: Updated graphics to add LMS label, CC license and logo to facilitate sharing outside of blog.

The post Opening Up the LMS Walled Garden appeared first on e-Literate.

Ignoring outliers in aggregate function

Tony Andrews - Wed, 2014-09-17 08:22
This is another aide-memoire for myself really. I want to calculate the average load times per page for an application from timings stored in the database, and see which pages need attention. However, the stats can be skewed by the odd exceptional load that takes much longer than a typical load, for reasons that are probably irrelevant to me. Here is a fictitious example:

create table timings (
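
The example is cut off in this feed, but the general idea can be sketched. One common approach (not necessarily the author's) is to exclude the slowest few percent of loads per page before averaging; the sketch below assumes a hypothetical timings table with page and elapsed_ms columns, and placeholder credentials:

sqlplus -s scott/tiger <<'EOF'
-- Average only the loads at or below each page's 95th percentile,
-- so the odd pathological load doesn't skew the result
SELECT page, ROUND(AVG(elapsed_ms)) AS avg_typical_ms
FROM (
  SELECT page, elapsed_ms,
         PERCENT_RANK() OVER (PARTITION BY page ORDER BY elapsed_ms) AS pr
  FROM   timings
)
WHERE  pr <= 0.95
GROUP  BY page
ORDER  BY avg_typical_ms DESC;
EOF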

OOW - Focus On Support and Services for PeopleSoft

Chris Warticki - Wed, 2014-09-17 08:00
Focus On Support and Services for PeopleSoft

Conference Sessions

Monday, Sep 29, 2014

Integrating PeopleSoft for Seamless IT Service Delivery: Tips from UCF
Robert Yanckello, CTO, UCF
Sastry Vempati, Director ACS Customer Service Management, Oracle
2:45 PM - 3:30 PM, Moscone West - 2024, CON2541

Wednesday, Oct 01, 2014

Leading Practices in a PeopleSoft 9.2 Upgrade
Bryan Moore, OMES
Michael Widell, Deputy Business Segment Director, OMES
Rick Humphress, Project Manager, Oracle
11:30 AM - 12:15 PM, Intercontinental - Intercontinental B, CON2871

Thursday, Oct 02, 2014

PeopleSoft: Support in the Age of Update Manager
Ganesan Sankaran, Principal Support Engineer, Oracle
12:00 PM - 12:45 PM, Moscone West - 2020, CON8325

Is Your Organization Trying to Focus on an HCM Cloud Strategy?
Rich Isola, Sr. Practice Director, Oracle
Heather Mcaninch, Senior Client Executive HCM, Oracle
Ken Thompson, Sr. Practice Director, Oracle
1:15 PM - 2:00 PM, Palace - Grand Ballroom, CON7574

Parallel Upgrade of PeopleSoft Applications and Oracle Database: Tips from MetLife
Gopi Kotha, Software Systems Specialist, MetLife
Asha Santosh, Lead PeopleSoft DBA, Metropolitan Life Insurance Company (inc)
Navin Lobo, Principal Advanced Support Engineer, Oracle
1:15 PM - 2:00 PM, Moscone West - 2020, CON6106

My Oracle Support Monday Mix

Monday, Sep 29

Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year’s gathering is Monday, September 29 from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company – just a 3 minute walk from Moscone Center. Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our web site for more details: http://www.oracle.com/goto/mondaymix
6:00 PM - 8:00 PM, ThirstyBear Brewing Company

Oracle Support Stars Bar & Mini Briefing Center

Monday, Sep 29 and Tuesday, Sep 30: 9:45 AM - 6:00 PM
Wednesday, Oct 01: 9:45 AM - 3:45 PM
Moscone West Exhibition Hall, 3461 and 3908

Ask the stars of Oracle Support your toughest questions, learn about proactive support tools and advanced support offerings, and win a prize at one of our 10-minute mini-briefings where you are sure to leave with valuable tips and best practices based on our experience supporting Oracle customers around the globe.

To secure a seat in a session, please use Schedule Builder to add to your Schedule.

Oracle Java Compute Cloud Service Now Available!

Today Oracle added exciting new services to our existing Public Cloud offerings. First things first: it all begins with the Oracle Compute Cloud service. It offers Elastic Compute Capacity, where...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Getting The Users’ Trust – Part 1

Rittman Mead Consulting - Wed, 2014-09-17 03:02

Looking back over some of my truly ancient Rittman Mead blogs (so old in fact that they came with me when I joined the company soon after Rittman Mead was launched), I see recurrent themes on why people “do” BI and what makes for successful implementations. After all, why would an organisation wish to invest serious money in a project if it does not give value, either in terms of cost reduction or of increased profitability through smart decisions? This requires technology that provides answers and a workforce that is both able to use this technology and has faith that the answers returned allow them to do their jobs better. Giving users this trust in the BI platform generally boils down to resolving three issues: ease of use of the reporting tool, speed of data return, and “accuracy” or validity of the response. These last two issues are a fundamental part of my work here at Rittman Mead and underpin all that I do in terms of BI architecture, performance, and data quality. Even today, as we adapt our BI systems to include Big Data and Advanced Analytics, I follow the same sound approaches to ensure usable, reliable data and the ability to analyse it in a reasonable time.

Storage is cheap so don’t aggregate away your knowledge. If my raw data feed is sales by item by store by customer by day and I only store it in my data warehouse as sales by month by state I can’t go back to do any analysis on my customers, my stores, my products. Remember that the UNGROUP BY only existed in my April Fools’ post. Where you choose to store your ‘unaggregated’ data may well be different these days; Hadoop and schema on read paradigms often being a sensible approach. Mark Rittman has been looking at architectures where both the traditional DWH and Big Data happily co-exist.

When improving performance I tend to avoid tuning specific queries, instead I aim to make frequent access patterns work well. Tuning individual queries is almost always not a sustainable approach in BI; this week’s hot, ‘we need the answer immediately’ query may have no business focus next week. Indexes that we create to make a specific query fly may have no positive effect on other queries; indeed, indexes may degrade other aspects of BI performance such as increased data load times and have subtle effects such as changing a query plan cost so that groups of materialized views are no longer candidates in query re-write (this is especially true when you use nested views and the base view is no longer accessed).

My favoured performance improvement techniques are: correctly placing the data, be it clustering, partitioning, compressing, table pinning, in-memory or whatever, and making sure that the query optimiser knows all about the nature of the data; again and again, “right” optimiser information is key to good performance. Right is not just about running DBMS_STATS.gather_XXX over tables or schemas every now and then; it is also about telling the optimiser about relationships between data items. Constraints describe the data, for example which columns allow NULL values, and which columns are part of parent-child relationships (foreign keys). Extended table statistics can help describe relationships between columns in a single table: in a product dimension table, for example, the product sub-category and the product category columns will have an interdependence; without that knowledge, cardinality estimates can be very wrong and favour nested-loop-style plans that can perform very poorly on large data sets.
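
As a concrete sketch of that last point, Oracle's DBMS_STATS package can create an extension on a column group so the optimiser learns about the dependency. The schema, table and column names below are hypothetical, and the credentials are placeholders:

sqlplus -s dwh_owner/secret <<'EOF'
-- Tell the optimiser that sub-category and category are correlated
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => USER,
         tabname   => 'PRODUCT_DIM',
         extension => '(PROD_SUBCATEGORY, PROD_CATEGORY)') AS extension_name
FROM   dual;

-- Re-gather stats so the new column group gets its own statistics
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'PRODUCT_DIM', method_opt => 'FOR ALL COLUMNS SIZE AUTO')
EOF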

Sometimes we will need to create aggregates to answer queries quickly; I tend to build ‘generic’ aggregates, those that can be used by many queries. Often I find that although data is loaded frequently, even near-real-time, many business users wish to look at larger time windows such as week, month, or quarter; in practice I see little need for day-level aggregates over the whole data warehouse timespan, although there will always be specific cases that require day-level summaries. If I build summary tables or use materialized views I aim to make tables that are at least 80% smaller than the base table, and to avoid aggregates that partially roll up many dimensional hierarchies; customer category by product category by store region by month would probably not be the ideal aggregate for most real-user queries. That said, Oracle does allow us to use fancy grouping semantics in the building of aggregates (grouping sets, GROUP BY ROLLUP, and GROUP BY CUBE). The in-database Oracle OLAP cube functionality is also still alive and well (and was given a performance boost in Oracle 12c); it may be more appropriate to aggregate in a cube (or its relational look-alike) rather than in individual summaries.
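
To illustrate the ‘generic aggregate’ idea, a single materialized view built with ROLLUP can serve month, quarter and grand-total queries at once. A sketch with hypothetical table and column names (a sales fact joined to a times dimension) and placeholder credentials:

sqlplus -s dwh_owner/secret <<'EOF'
-- One aggregate covering month, quarter and grand-total levels;
-- GROUPING_ID lets query rewrite tell the levels apart
CREATE MATERIALIZED VIEW sales_time_mv
ENABLE QUERY REWRITE AS
SELECT t.calendar_quarter,
       t.calendar_month,
       GROUPING_ID(t.calendar_quarter, t.calendar_month) AS gid,
       SUM(s.amount_sold) AS amount_sold,
       COUNT(*)           AS row_count
FROM   sales s
JOIN   times t ON t.time_id = s.time_id
GROUP  BY ROLLUP (t.calendar_quarter, t.calendar_month);
EOF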

Getting the wrong results quickly is no good; we must be sure that the results we display are correct. As professional developers we test to prove that we are not losing or gaining data through incorrect joins and filters, but ETL coding is often the smallest factor in “incorrect results”, and this brings me to part 2, Data Quality.

Categories: BI & Warehousing

About index range scans, disk re-reads and how your new car can go 600 miles per hour!

Tanel Poder - Wed, 2014-09-17 02:56

Despite the title, this is actually a technical post about Oracle, disk I/O and Exadata & Oracle In-Memory Database Option performance. Read on :)

If a car dealer tells you that this fancy new car on display goes 10 times (or 100 or 1000) faster than any of your previous ones, then either the salesman is lying or this new car is doing something radically different from all the old ones. You don’t just get orders of magnitude performance improvements by making small changes.

Perhaps the car bends space around it instead of moving – or perhaps it has a jet engine built on it (like the one below :-) :

Anyway, this blog entry is a prelude to my upcoming Oracle In-Memory Database Option series and here I’ll explain one of the radical differences between the old way of thinking and modern (In-Memory / Smart Scan) thinking that allow such performance improvements.

To set the scope and clarify what I mean by the “old way of thinking”: I am talking about reporting, analytics and batch workloads here – and the decades-old mantra “if you want more speed, use more indexes”.

I’m actually not going to talk about the In-Memory DB option here – but I am going to walk you through the performance numbers of one index range scan. It’s a deliberately simple and synthetic example executed on my laptop, but it should be enough to demonstrate one important point.

Let’s say we have a report that requires me to visit 20% of rows in an orders table and I’m using an index range scan to retrieve these rows (let’s not discuss whether that’s wise or not just yet). First, I’ll give you some background information about the table and index involved.

My test server’s buffer cache is currently about 650 MB:

SQL> show sga

Total System Global Area 2147483648 bytes
Fixed Size                  2926472 bytes
Variable Size             369100920 bytes
Database Buffers          687865856 bytes
Redo Buffers               13848576 bytes
In-Memory Area           1073741824 bytes

The table I am accessing is a bit less than 800 MB in size, about 100k blocks:

SQL> @seg soe.orders

    SEG_MB OWNER  SEGMENT_NAME   SEGMENT_TYPE    BLOCKS 
---------- ------ -------------  ------------- -------- 
       793 SOE    ORDERS         TABLE           101504 

I have removed some irrelevant output from the output below, I will be using the ORD_WAREHOUSE_IX index for my demo:

SQL> @ind soe.orders
Display indexes where table or index name matches %soe.orders%...

TABLE_OWNER  TABLE_NAME  INDEX_NAME         POS# COLUMN_NAME     DSC
------------ ----------- ------------------ ---- --------------- ----
SOE          ORDERS      ORDER_PK              1 ORDER_ID
                         ORD_WAREHOUSE_IX      1 WAREHOUSE_ID
                                               2 ORDER_STATUS

INDEX_OWNER  TABLE_NAME  INDEX_NAME        IDXTYPE    UNIQ STATUS   PART TEMP  H  LFBLKS       NDK   NUM_ROWS      CLUF LAST_ANALYZED     DEGREE VISIBILIT
------------ ----------- ----------------- ---------- ---- -------- ---- ---- -- ------- --------- ---------- --------- ----------------- ------ ---------
SOE          ORDERS      ORDER_PK          NORMAL/REV YES  VALID    NO   N     3   15801   7148950    7148950   7148948 20140913 16:17:29 16     VISIBLE
             ORDERS      ORD_WAREHOUSE_IX  NORMAL     NO   VALID    NO   N     3   17860      8685    7148950   7082149 20140913 16:18:03 16     VISIBLE

I am going to do an index range scan on the WAREHOUSE_ID column:

SQL> @descxx soe.orders

Col# Column Name                    Null?      Type                      NUM_DISTINCT        Density  NUM_NULLS HISTOGRAM       NUM_BUCKETS Low Value                        High Value
---- ------------------------------ ---------- ------------------------- ------------ -------------- ---------- --------------- ----------- -------------------------------- --------------------------------
   1 ORDER_ID                       NOT NULL   NUMBER(12,0)                   7148950   .00000013988          0                           1 1                                7148950
...
   9 WAREHOUSE_ID                              NUMBER(6,0)                        999   .00100100100          0                           1 1                                999
...

Also, I enabled SQL trace and event 10298 – “ORA-10298: ksfd i/o tracing”, more about that later:

SQL> ALTER SESSION SET EVENTS '10298 trace name context forever, level 1';

Session altered.

SQL> EXEC SYS.DBMS_MONITOR.SESSION_TRACE_ENABLE(waits=>TRUE);

PL/SQL procedure successfully completed.

SQL> SET AUTOTRACE ON STAT

Ok, now we are ready to run the query! (It’s slightly formatted):

SQL> SELECT /*+ MONITOR INDEX(o, o(warehouse_id)) */ 
         SUM(order_total) 
     FROM 
         soe.orders o 
     WHERE 
         warehouse_id BETWEEN 400 AND 599;

Let’s check the basic autotrace figures:

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
    1423335  consistent gets
     351950  physical reads
          0  redo size
        347  bytes sent via SQL*Net to client
        357  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

What?! We have done 351,950 physical reads?! That is 351,950 blocks read via physical read operations, about 2.7 GB worth of IOs done just for this query! Our entire table size was under 800 MB and the index size under 150 MB. Shouldn’t indexes allow us to visit fewer blocks than the table size?!

Let’s dig deeper – by breaking down this IO number by execution plan line (using a SQL Monitoring report in this case):

Global Stats
================================================================
| Elapsed |   Cpu   |    IO    | Fetch | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) | Calls |  Gets  | Reqs | Bytes |
================================================================
|      48 |      25 |       23 |     1 |     1M | 352K |   3GB |
================================================================

SQL Plan Monitoring Details (Plan Hash Value=16715356)
=============================================================================================================================================
| Id |               Operation                |       Name       | Execs |   Rows   | Read | Read  | Activity |       Activity Detail       |
|    |                                        |                  |       | (Actual) | Reqs | Bytes |   (%)    |         (# samples)         |
=============================================================================================================================================
|  0 | SELECT STATEMENT                       |                  |     1 |        1 |      |       |          |                             |
|  1 |   SORT AGGREGATE                       |                  |     1 |        1 |      |       |          |                             |
|  2 |    TABLE ACCESS BY INDEX ROWID BATCHED | ORDERS           |     1 |       1M | 348K |   3GB |    96.30 | Cpu (1)                     |
|    |                                        |                  |       |          |      |       |          | db file parallel read (25)  |
|  3 |     INDEX RANGE SCAN                   | ORD_WAREHOUSE_IX |     1 |       1M | 3600 |  28MB |     3.70 | db file sequential read (1) |
=============================================================================================================================================

So, most of these IOs come from accessing the table (after fetching relevant ROWIDs from the index). 96% of the response time of this query was also spent in that table access line. We have done roughly 348,000 IO requests for fetching blocks from this table. This is over 3x more blocks than the entire table size! So we must be re-reading some blocks from disk again and again for some reason.

Let’s confirm if we are having re-reads. This is why I enabled the SQL trace and event 10298. I can just post-process the tracefile and see if IO operations with the same file# and block# combination do show up.

However, using just SQL trace isn’t enough: multiblock read wait events don’t show all blocks read (you’d have to infer this from the starting block# and count#), and the “db file parallel read” event doesn’t show any block#/file# info at all in SQL Trace (as this “vector read” wait event encompasses multiple different block reads under a single wait event).

The classic single block read has the file#/block# info:

WAIT #139789045903344: nam='db file sequential read' ela= 448 file#=2 block#=1182073 blocks=1 obj#=93732 tim=156953721029

The parallel read wait events don’t have individual file#/block# info (just total number of files/blocks involved):

WAIT #139789045903344: nam='db file parallel read' ela= 7558 files=1 blocks=127 requests=127 obj#=93696 tim=156953729450

Anyway, because we had plenty of db file parallel read waits that don’t show all the detail in SQL Trace, I also enabled the event 10298 that gives us following details below (only tiny excerpt below):

...
ksfd_osdrqfil:fob=0xce726160 bufp=0xbd2be000 blkno=1119019 nbyt=8192 flags=0x4
ksfdbio:rq=0x7f232c4edb00 fob=0xce726160 aiopend=126
ksfd_osdrqfil:fob=0xce726160 bufp=0x9e61a000 blkno=1120039 nbyt=8192 flags=0x4
ksfdbio:rq=0x7f232c4edd80 fob=0xce726160 aiopend=127
ksfdwtio:count=127 aioflags=0x500 timeout=2147483647 posted=(nil)
...
ksfdchkio:ksfdrq=0x7f232c4edb00 completed=1
ksfdchkio:ksfdrq=0x7f232c4edd80 completed=0
WAIT #139789045903344: nam='db file parallel read' ela= 6872 files=1 blocks=127 requests=127 obj#=93696 tim=156953739197

So, on Oracle 12.1.0.2 on Linux x86_64, on an xfs filesystem with async IO enabled and filesystemio_options = SETALL, the “ksfd_osdrqfil” trace entries show us the block# Oracle read from a datafile. They don’t show the file# itself, but they do show the address of the accessed file state object (FOB) in the SGA. As that address was always the same throughout the tracefile, I know the duplicate block numbers listed in the trace are all for the same datafile (and not for blocks with the same block# in some other datafile). Besides, the tablespace I used for my test had a single datafile anyway.

Anyway, I wrote a simple one-liner to summarize whether there were any disk re-reads in this tracefile (from a single execution of the SELECT statement):

$ grep ^ksfd_osdrqfil LIN121_ora_11406.trc | awk '{ print $3 }' | sort | uniq -c | sort -nr | head -20
     10 blkno=348827
     10 blkno=317708
      9 blkno=90493
      9 blkno=90476
      9 blkno=85171
      9 blkno=82023
      9 blkno=81014
      9 blkno=80954
      9 blkno=74703
      9 blkno=65222
      9 blkno=63899
      9 blkno=62977
      9 blkno=62488
      9 blkno=59663
      9 blkno=557215
      9 blkno=556581
      9 blkno=555412
      9 blkno=555357
      9 blkno=554070
      9 blkno=551593
...

Indeed! The “worst” blocks have been read in 10 times – all that for a single query execution.

I only showed the top 20 blocks here, but even when I used “head -10000” and “head -50000” above, I still saw blocks that had been read into the buffer cache 8 and 4 times respectively.
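Note that I grouped by blkno alone only because the FOB was constant throughout this tracefile. On a tablespace with multiple datafiles you’d want to include the FOB in the grouping key too – something like this (the fob= value is embedded in the first field of each ksfd_osdrqfil line):

$ grep ^ksfd_osdrqfil LIN121_ora_11406.trc | awk '{ print $1, $3 }' | \
    sort | uniq -c | sort -nr | head -20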

Looking back at the earlier autotrace metrics: my simple index range scan query read in over 3x more blocks than the table and index sizes combined (~350k blocks read, while the table itself had only 100k blocks)! Some blocks got kicked out of the buffer cache and were re-read back into it later, multiple times.
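To put a single number on this re-read overhead, you could also compare the total read requests against the distinct blocks touched – again just a quick sketch against this tracefile:

$ grep ^ksfd_osdrqfil LIN121_ora_11406.trc | awk '{ print $3 }' | sort | uniq -c |
    awk '{ reads += $1; blocks++ }
         END { printf "%d reads of %d distinct blocks (%.1f reads/block)\n", reads, blocks, reads/blocks }'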

Hmm, let’s think further: we are accessing only about 20% of an 800 MB table + 150 MB index, so the “working set” of datablocks used by my query should be well under my 650 MB buffer cache, right? And as I am the only user in this database, everything should nicely fit and stay in the buffer cache, right?

Actually, both of the arguments above are flawed:

  1. Accessing 20% of the rows in a table doesn’t automatically mean that we need to visit only 20% of that table’s blocks! Maybe every block in the table contains a few of the rows this index range scan needs? In that case we might need to visit all (or most) of the table’s blocks and extract only a few matching rows from each one. Nevertheless, the “working set” of blocks required by this query would include almost all of the table’s blocks, not just 20% of them – we must read all of them in at some point during the range scan. In other words, the matching rows are not tightly packed in table blocks in correspondence with the index range scan’s driving order, but are potentially “randomly” scattered all over the table. This means that an index range scan may come back and access a data block again and again to get yet another row from it, whenever the ROWID entries in the index leaf blocks point there. This is what I call buffer re-visits. (Now scroll back up and see what that index’s clustering factor is – or check it with the query sketched right after this list :-)

  2. So what – all the buffer re-visits should be really fast, as the previously read block is going to be in the buffer cache, right? Well, not really. Especially when the working set of blocks read is bigger than the buffer cache. But even if it is smaller, the Oracle buffer cache hasn’t been managed with basic LRU replacement logic since 8.1.6. New blocks read into the buffer cache are put into the middle of the “LRU” list, and they work their way up to the “hot” end only if they are touched enough times before someone manages to flush them out. So even if you are the single user of the buffer cache, there’s a chance that some recently read blocks get aged out – by the same query still running – before they get hot enough, and your next buffer re-visit turns into one of the disk block re-reads we saw in the tracefiles. If you combine this with the reality of production systems, where a thousand more users are trying to do what you’re doing at the same time, it becomes clear that you’ll be able to use only a small portion of the total buffer cache for your needs. This is why people sometimes configure KEEP pools – not that the KEEP pool is somehow able to keep more blocks in memory for longer per GB of RAM, but simply to segregate the less important troublemakers from the more important… troublemakers :)
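Speaking of the clustering factor mentioned in point 1, here’s a quick way to eyeball it (a sketch again, with soe/soe as a placeholder login). If clustering_factor is close to the number of table blocks, the rows are well clustered in the index’s order; if it approaches num_rows, the matching rows are scattered all over the table and you can expect lots of buffer re-visits:

$ sqlplus -s soe/soe <<'EOF'
-- clustering_factor near table blocks = well clustered,
-- near num_rows = scattered (expect many buffer re-visits)
SELECT index_name, clustering_factor, leaf_blocks, num_rows
FROM   user_indexes
WHERE  index_name = 'ORD_WAREHOUSE_IX';
EOF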

 

So what’s my point here – in the context of this blog post’s title?

Let’s start with Exadata – over the last few years it has given many customers order(s) of magnitude better analytics, reporting and batch performance compared to their old systems, if done right of course. In other words, instead of indexing even more and performing wide index range scans with millions of random block reads and re-reads, they ditched many indexes and started doing full table scans. Full table scans do not have the kind of “scaling problems” a wide index range scan has (or a “wide” nested loop join driving access to another table). In addition you get all the cool stuff that goes really well with full scans: multiblock reads, deep prefetching, partition-wise hash joins, partition pruning and of course all the throughput and Smart Scan magic on Exadata.

An untuned complex SQL statement on a complex schema with lots of non-ideal indexes may end up causing a lot of “waste IO” (I don’t have a better term) and wasted CPU time too. And often it’s not simple to actually fix the query, as it may require a significant schema adjustment/redesign that would also mean changing the application code in many different places (ain’t gonna happen). By defaulting reporting to full table scans, you can actually eliminate a lot of such waste, assuming that you have a high-throughput – and ideally smart – IO subsystem. (Yes, there are always exceptions and special cases.)

We had a customer whose reporting job ran almost 2000x faster after moving to Exadata (from 17 hours to 30 seconds or something like that). Their first reaction was: “It didn’t run!” Indeed it did run, and it ran correctly. Such a radical improvement came from the fact that the new system – compared to the old one – was doing multiple things radically better. It wasn’t just an incremental tweak of adding a hint or yet another index without daring to do more significant changes.

In this post I demoed just one of the problems plaguing many old-school Oracle DW and reporting systems. While favoring full table scans has always been counterintuitive for most Oracle shops out there, it was Exadata’s hardware, software and also the geek excitement surrounding it that allowed customers to take the leap and switch from the old mindset to the new one. I expect the same from the Oracle In-Memory Database Option. More about this in a following post.

 


Reading: Oracle Magazine September / October 2014

Jean-Philippe Pinte - Wed, 2014-09-17 00:06
The September / October 2014 issue of Oracle Magazine is available.

Heart Walk at Oracle HQ

David Haimes - Tue, 2014-09-16 23:19

Tomorrow Oracle is hosting a Bay Area Heart Walk at its HQ campus and it should be a huge event, not only raising valuable funds but also raising awareness of the importance of a heart-healthy lifestyle.  Since the beginning of this year I have been doing regular walking meetings and have enjoyed the benefits to both my health and my productivity.  I have also found many other people already doing walking meetings, and others who have been inspired to start them.  I’ve really enjoyed the overwhelmingly positive responses and encouragement.

Highlights of the event include the semi-official Oracle house band, Scope Creep – check out this YouTube clip and you may recognize some of the faces, including my boss on drums, who I can confirm does not appreciate my joke: what do you call somebody with no talent who hangs with musicians? Answers in the comment section, please.

I will be walking with ‘Team Erb’ led by the energetic and enthusiastic (except when it comes to twitter) Janine Erb, who is one of the singers in the YouTube clip above.  We have close to 100 people in the team and I’m looking forward to walking with and talking to people I have known and worked with for a long time and also making some new friends.  If you are there tomorrow, come and say hi.  I’ll take some pictures and probably tweet during the event.

Finally, if you want to donate you can follow this link, but probably more importantly, spend some time educating yourself – the American Heart Association is one place to start.


Categories: APPS Blogs

Behind the Screen with Oracle Support

Joshua Solomin - Tue, 2014-09-16 22:43


Get beyond the support interface screen to meet the experts from Oracle Support at the Oracle Support Stars Bar. Have a tough question about supporting or upgrading your Oracle products? Looking for best practices for problem prevention, rapid resolution, and product upgrades? Stop by the Stars Bar and speak directly with an expert who can help.

While you’re there, check out one of our 10-minute briefing sessions on the hottest support topics. Here are just a few of the high-impact briefings you can see at this year’s Stars Bar:

  • Proactive Support Best Practices
  • Oracle Platinum Services
  • My Oracle Support Tips & Tricks
  • And many more!

The Support Stars Bar is open Monday, Tuesday and Wednesday in the Moscone West Exhibition Hall (Booths 3461 and 3908). More details here.

Visit the Services and Support Oracle OpenWorld Website to discover how you can take advantage of all Oracle OpenWorld has to offer. See you there!