
Feed aggregator

The Amazing Spider-Man 2

Tim Hall - Sat, 2014-04-19 13:26

I’ve just got back from watching The Amazing Spider-Man 2.

Wow, that is one seriously long film! At 2 hours and 22 minutes, it’s a good 1 hour and 22 minutes too long…

I guess there are two sides to this film:

  1. Action Scenes: During the action scenes this film is brilliant. Really over-the-top stuff. Bright, flashy and really cool.
  2. Everything Else: I don’t give a crap about character development in an action film. This re-imagining of the franchise is turning out to be even more whiny than the Tobey Maguire version.

The film really could have been edited down massively and I would have come out agreeing with the film title. As it was, it’s “The Amazingly Long and Whiny Spider-Man Too!”

Cheers

Tim…


Running scripts in CDBs and PDBs in Oracle Database 12c

Tim Hall - Sat, 2014-04-19 09:23

You’ve been sold on the whole concept of the multitenant option in Oracle 12c and you are launching full steam ahead. Your first database gets upgraded and converted to a PDB, so you start testing your shell scripts and bang! Broken! Your company uses CRON and shell scripting all over the place and the multitenant architecture has just gone and broken the lot in one fell swoop! I think this will end up being a big shock to many people.
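
To make the breakage concrete, here is a minimal sketch of my own (not from the article); the service name pdb1, the credentials and the paths are all assumptions for illustration. A script that used to set ORACLE_SID and connect with a bare “/ as sysdba” now lands in the root container, so it either has to connect straight to the PDB service or switch containers before doing any work.

#!/bin/bash
# A pre-12c script typically set ORACLE_SID and connected locally with
# "/ as sysdba" - on a converted system that now lands in CDB$ROOT,
# not in the application PDB.
export ORACLE_SID=cdb1
export ORAENV_ASK=NO
. oraenv > /dev/null

# Option 1: connect straight to the PDB service through the listener (EZConnect).
sqlplus -s system/password@//localhost:1521/pdb1 <<'EOF'
select sys_context('userenv', 'con_name') from dual;
exit
EOF

# Option 2: keep the local SYSDBA connection, but switch container before any work.
sqlplus -s / as sysdba <<'EOF'
alter session set container = pdb1;
show con_name
exit
EOF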

I’ve been talking about this issue with a number of people since the release of Oracle 12c. Bryn Llewellyn did a session on “Self-Provisioning Pluggable Databases Using PL/SQL” at last year’s UKOUG, which covered some of these issues. More recently, I spent some time speaking to Hans Forbrich about this when we were on the OTN Yathra 2014 Tour.

Today, I put down some of my thoughts on the matter in this article.

Like most things to do with Oracle 12c, I’m sure my thoughts on the subject will evolve as I keep using it. As my thoughts evolve, so will the article. :)

Cheers

Tim…


Coincidences

Jonathan Lewis - Sat, 2014-04-19 02:22

I had another of those odd timing events today that make me think that Larry Ellison has access to a time machine. I found (yet another example of a) bug that had been reported on MoS just a few days before it appeared on an instance I was running. How is it possible that someone keeps doing things that I’m doing, but just a few days before I do them!

For no good reason I happened to browse through a load of background trace files on an 11.2.0.4 instance and found the following in an “m000” file:

*** SERVICE NAME:(SYS$BACKGROUND) 2014-04-19 08:55:20.617
*** MODULE NAME:(MMON_SLAVE) 2014-04-19 08:55:20.617
*** ACTION NAME:(Auto-Purge Slave Action) 2014-04-19 08:55:20.617

*** KEWROCISTMTEXEC - encountered error: (ORA-06525: Length Mismatch for CHAR or RAW data
ORA-06512: at "SYS.DBMS_STATS", line 29022
ORA-06512: at line 1
)
  *** SQLSTR: total-len=93, dump-len=93,
      STR={begin dbms_stats.copy_table_stats('SYS', :bind1, :bind2, :bind3, flags=>1, force=>TRUE); end;}

Before trying to track down what had gone wrong I did a quick check on MoS, searching for “copy_table_stats” and “29022” and found bug 17079301 – fixed in 12.2, and 12.1.0.2, with a patch for 12.1.0.1 (and some back-ports for 11.2.0.4). The description of the bug in the note was basically: “it happens”.

I may get around to looking more closely at what’s gone wrong but as an initial thought I’m guessing that, even though the action name is “auto-purge slave action”, this may be something to do with adding a partition to some of the AWR history tables and rolling stats forward – so at some point I’ll probably start by checking for partitions with missing stats in the SYS schema.
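
For what it’s worth, a first-pass check of the kind I have in mind might look something like this (my own sketch, not from the original post), hunting for WRH$ partitions owned by SYS that have never been analysed:

select table_name, partition_name, last_analyzed
from   dba_tab_partitions
where  table_owner = 'SYS'
and    table_name like 'WRH$%'
and    last_analyzed is null
order by
       table_name, partition_name
;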

The bug note, by the way, was published (last updated, on second thoughts) on 14th April 2014 – just 5 days before I first happened to spot the occurrence of the bug.


Necessary complexity

DBMS2 - Sat, 2014-04-19 02:17

When I’m asked to talk to academics, the requested subject is usually a version of “What should we know about what’s happening in the actual market/real world?” I then try to figure out what the scholars could stand to hear that they perhaps don’t already know.

In the current case (Berkeley next Tuesday), I’m using the title “Necessary complexity”. I actually mean three different but related things by that, namely:

  1. No matter how cool an improvement you have in some particular area of technology, it’s not very useful until you add a whole bunch of me-too features and capabilities as well.
  2. Even beyond that, however, the simple(r) stuff has already been built. Most new opportunities are in the creation of complex integrated stacks, in part because …
  3. … users are doing ever more complex things.

While everybody on some level already knows all this, I think it bears calling out even so.

I previously encapsulated the first point in the cardinal rules of DBMS development:

Rule 1: Developing a good DBMS requires 5-7 years and tens of millions of dollars.

That’s if things go extremely well.

Rule 2: You aren’t an exception to Rule 1. 

In particular:

  • Concurrent workloads benchmarked in the lab are poor predictors of concurrent performance in real life.
  • Mixed workload management is harder than you’re assuming it is.
  • Those minor edge cases in which your Version 1 product works poorly aren’t minor after all.

My recent post about MongoDB is just one example of same.

Examples of the second point include but are hardly limited to:

BDAS and Spark make a splendid example as well. :)

As to the third point:

Bottom line: Serious software has been built for over 50 years. Very little of it is simple any more.

Related links

Categories: Other

Big Data Oracle NoSQL in No Time - It is time to Load Data for a Simple Use Case

Senthil Rajendran - Fri, 2014-04-18 22:05
Big Data Oracle NoSQL in No Time - It is time to Load Data for a simple Use Case

Index
Big Data Oracle NoSQL in No Time - Getting Started Part 1
Big Data Oracle NoSQL in No Time - Startup & Shutdown Part 2
Big Data Oracle NoSQL in No Time - Setting up 1x1 Topology Part 3
Big Data Oracle NoSQL in No Time - Expanding 1x1 to 3x1 Topology Part 4
Big Data Oracle NoSQL in No Time - From 3x1 to 3x3 Topology Part 5
Big Data Oracle NoSQL in No Time - Smoke Testing Part 6
Big Data Oracle NoSQL in No Time - Increasing Throughput Read/Write Part 7
Big Data Oracle NoSQL in No Time - It is time to Upgrade
Big Data Oracle NoSQL in No Time - It is time to Load Data for a Simple Use Case

There are a lot of references to NoSQL use cases out there, but I wanted to make this one simple. I am not a developer, but my Unix scripting skills come to the rescue.

So here is what I am planning to build:

  • create a schema for storing server cpu details from mpstat command
  • storing it every minute
  • on 4 nodes
  • then some dashboards
AVRO Schema Design
Here I am creating an avro schema that can hold the date and time with the values from mpstat
cpudata.avsc

{
  "type": "record",
  "name": "cpudata",
  "namespace": "avro",
  "fields": [
    {"name": "yyyy", "type": "int", "default": 0},
    {"name": "mm", "type": "int", "default": 0},
    {"name": "dd", "type": "int", "default": 0},
    {"name": "hh", "type": "int", "default": 0},
    {"name": "mi", "type": "int", "default": 0},
    {"name": "user", "type": "float", "default": 0},
    {"name": "nice", "type": "float", "default": 0},
    {"name": "sys", "type": "float", "default": 0},
    {"name": "iowait", "type": "float", "default": 0},
    {"name": "irq", "type": "float", "default": 0},
    {"name": "soft", "type": "float", "default": 0},
    {"name": "steal", "type": "float", "default": 0},
    {"name": "idle", "type": "float", "default": 0},
    {"name": "intr", "type": "float", "default": 0}
  ]
}
Now I am adding the schema to the store
$ java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000
kv-> ddl add-schema -file cpudata.avsc
Added schema: avro.cpudata.1
kv-> show schema
avro.cpudata
  ID: 1  Modified: 2014-04-18 00:29:58 UTC, From: server1
kv->
To load the data I am creating a shell script which writes the put kv -key command into a temporary file. I then immediately load the temporary file into the store. This is automated via a crontab job entry that runs every minute, so this program captures the server CPU metrics every minute.
$ cat cpuload.sh
export KVHOME=$KVBASE/server2/oraclesoftware/kv-3.0.5
echo `hostname` `date +"%d-%m-%Y-%H-%M-%S"` `date +"%-d"` `date +"%-m"` `date +"%Y"` `date +"%-H"` `date +"%-M"` `mpstat|tail -1`|awk '{print "put kv -key /cpudata/"$1"/"$2" -value \"{\\\"yyyy\\\":"$5",\\\"mm\\\":"$4",\\\"dd\\\":"$3",\\\"hh\\\":"$6",\\\"mi\\\":"$7",\\\"user\\\":"$10",\\\"nice\\\":"$11",\\\"sys\\\":"$12",\\\"iowait\\\":"$13",\\\"irq\\\":"$14",\\\"soft\\\":"$15",\\\"steal\\\":"$16",\\\"idle\\\":"$17",\\\"intr\\\":"$18" }\" -json avro.cpudata"}' > /tmp/1.load
java -jar $KVHOME/lib/kvcli.jar -host server1 -port 5000 -store mystore load -file /tmp/1.load
$

$ crontab -l
* * * * * /oraclenosql/work/cpuload.sh
$
Now that the job has been scheduled, I am checking whether the records are getting loaded
kv-> get kv -key /cpudata -all -keyonly
/cpudata/server1/18-04-2014-03-35-02
/cpudata/server1/18-04-2014-03-36-02
2 Keys returned.
kv->
Since the program has just started it has two records now
kv-> aggregate -count -key /cpudata
count: 2
kv->
A detailed listing of the two records
kv-> get kv -key /cpudata/server1 -all
/cpudata/server1/18-04-2014-03-37-02
{
  "yyyy" : 2014,
  "mm" : 4,
  "dd" : 18,
  "hh" : 3,
  "mi" : 37,
  "user" : 0.8799999952316284,
  "nice" : 1.350000023841858,
  "sys" : 0.38999998569488525,
  "iowait" : 1.0399999618530273,
  "irq" : 0.0,
  "soft" : 0.009999999776482582,
  "steal" : 0.03999999910593033,
  "idle" : 96.30000305175781,
  "intr" : 713.0399780273438
}
/cpudata/server1/18-04-2014-03-35-02
{
  "yyyy" : 2014,
  "mm" : 4,
  "dd" : 18,
  "hh" : 3,
  "mi" : 35,
  "user" : 0.8799999952316284,
  "nice" : 1.350000023841858,
  "sys" : 0.38999998569488525,
  "iowait" : 1.0399999618530273,
  "irq" : 0.0,
  "soft" : 0.009999999776482582,
  "steal" : 0.03999999910593033,
  "idle" : 96.30000305175781,
  "intr" : 713.0399780273438
}
Now I am going to sleep; the fun starts the next day. With 24 hours completed, the store has the CPU metrics for the whole day. Let me try some aggregate commands.
Average CPU usage

kv-> aggregate -key /cpudata/server1 -avg user
avg(user): 0.8799999952316284
kv-> aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr
avg(user): 0.8799999952316284
avg(nice): 1.350000023841858
avg(sys): 0.38999998569488525
avg(iowait): 1.0399999618530273
avg(irq): 0.0
avg(soft): 0.009999999776482582
avg(steal): 0.03999999910593033
avg(idle): 96.30000305175781
avg(intr): 713.0599822998047
kv->
Let me apply a range and look at the hourly usage
kv-> aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-04 -end 18-04-2014-05
avg(user): 0.8799999952316284
avg(nice): 1.350000023841858
avg(sys): 0.38999998569488525
avg(iowait): 1.0399999618530273
avg(irq): 0.0
avg(soft): 0.009999999776482582
avg(steal): 0.03999999910593033
avg(idle): 96.30000305175781
avg(intr): 713.0399780273438
kv-> aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-03-35-02 -end 18-04-2014-03-40-02
avg(user): 0.8799999952316284
avg(nice): 1.350000023841858
avg(sys): 0.38999998569488525
avg(iowait): 1.0399999618530273
avg(irq): 0.0
avg(soft): 0.009999999776482582
avg(steal): 0.03999999910593033
avg(idle): 96.30000305175781
avg(intr): 713.0849914550781
kv->

Interesting?
Time for some dashboards

Hourly CPU Idle Metric 
$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> do
> echo "connect store -host server1 -port 5000 -name mystore" > /tmp/1.lst
> echo "aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-"$i" -end 18-04-2014-"$i >> /tmp/1.lst
> echo "18-04-2014-"$i" - "`java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000 load -file /tmp/1.lst|grep -i idle|awk '{print $2 }'`
> done
18-04-2014-01 - 96.27333068847656
18-04-2014-02 - 96.27999877929688
18-04-2014-03 - 96.30000305175781
18-04-2014-04 - 96.30000305175781
18-04-2014-05 - 96.30000305175781
18-04-2014-06 - 96.30000305175781
18-04-2014-07 - 96.28433303833008
18-04-2014-08 - 96.2699966430664
18-04-2014-09 - 96.2699966430664
18-04-2014-10 - 96.27333068847656
18-04-2014-11 - 96.27999877929688
18-04-2014-12 - 96.2870002746582
18-04-2014-13 - 96.29016761779785
18-04-2014-14 - 96.29683570861816
18-04-2014-15 - 96.302001953125
18-04-2014-16 - 96.30999755859375
18-04-2014-17 - 96.31849937438965
18-04-2014-18 - 96.32483406066895
18-04-2014-19 - 96.33000183105469
18-04-2014-20 - 96.3331667582194
18-04-2014-21 - 96.28135165652714
18-04-2014-22 - 96.27333068847656
18-04-2014-23 - 96.27999877929688
18-04-2014-24 - 96.27333068847656
$



Hourly CPU User Metric 
$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> do
> echo "connect store -host server1 -port 5000 -name mystore" > /tmp/1.lst
> echo "aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-"$i" -end 18-04-2014-"$i >> /tmp/1.lst
> echo "18-04-2014-"$i" - "`java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000 load -file /tmp/1.lst|grep -i user|awk '{print $2 }'`
> done
18-04-2014-01 - 0.8899999856948853
18-04-2014-02 - 0.8899999856948853
18-04-2014-03 - 0.8799999952316284
18-04-2014-04 - 0.8799999952316284
18-04-2014-05 - 0.8799999952316284
18-04-2014-06 - 0.8799999952316284
18-04-2014-07 - 0.8819999933242798
18-04-2014-08 - 0.8906666517257691
18-04-2014-09 - 0.8899999856948853
18-04-2014-10 - 0.8899999856948853
18-04-2014-11 - 0.8899999856948853
18-04-2014-12 - 0.8899999856948853
18-04-2014-13 - 0.8899999856948853
18-04-2014-14 - 0.8899999856948853
18-04-2014-15 - 0.8899999856948853
18-04-2014-16 - 0.8899999856948853
18-04-2014-17 - 0.8899999856948853
18-04-2014-18 - 0.8899999856948853
18-04-2014-19 - 0.8899999856948853
18-04-2014-20 - 0.8899999856948853
18-04-2014-21 - 0.8921276432402591
18-04-2014-22 - 0.8799999952316284
18-04-2014-23 - 0.8799999952316284
18-04-2014-24 - 0.8899999856948853
$





Hourly CPU IOWAIT Metric
$ for i in 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
> do
> echo "connect store -host server1 -port 5000 -name mystore" > /tmp/1.lst
> echo "aggregate -key /cpudata/server1 -avg user,nice,sys,iowait,irq,soft,steal,idle,intr -start 18-04-2014-"$i" -end 18-04-2014-"$i >> /tmp/1.lst
> echo "18-04-2014-"$i" - "`java -jar $KVHOME/lib/kvstore.jar runadmin -host server1 -port 5000 load -file /tmp/1.lst|grep -i iowait|awk '{print $2 }'`
> done
18-04-2014-01 - 1.0907692328477516
18-04-2014-02 - 1.0499999523162842
18-04-2014-03 - 1.0399999618530273
18-04-2014-04 - 1.0399999618530273
18-04-2014-05 - 1.0373332977294922
18-04-2014-06 - 1.0299999713897705
18-04-2014-07 - 1.0403332948684691
18-04-2014-08 - 1.0499999523162842
18-04-2014-09 - 1.0499999523162842
18-04-2014-10 - 1.0499999523162842
18-04-2014-11 - 1.0499999523162842
18-04-2014-12 - 1.0499999523162842
18-04-2014-13 - 1.0481666207313538
18-04-2014-14 - 1.0499999523162842
18-04-2014-15 - 1.0449999570846558
18-04-2014-16 - 1.0399999618530273
18-04-2014-17 - 1.0399999618530273
18-04-2014-18 - 1.0399999618530273
18-04-2014-19 - 1.0399999618530273
18-04-2014-20 - 1.0398332953453064
18-04-2014-21 - 1.0907692328477516
18-04-2014-22 - 1.0499999523162842
18-04-2014-23 - 1.0907692328477516
18-04-2014-24 - 1.0499999523162842
$


So this NoSQL use case is very simple. I have scheduled the jobs to run on another couple of servers, so my store can be used to analyze CPU metrics for all my hosted servers. The Avro schema can be expanded to hold much more information.

missing my local conference makes me a little grumpy

Grumpy old DBA - Fri, 2014-04-18 18:11
Our big event here, a 2 1/2 day conference (Great Lakes Oracle Conference aka GLOC), is coming up soon in mid May and I am going to miss the first two days of it.  Not happy exactly, but I do have a reasonable excuse.

My oldest daughter finishes her sophomore year at college (Fordham in NYC) and needs to get picked up (and her dorm room packed up) and carted back home to Ohio.  So it is drive there Sunday ... pick her up Monday and pack ... drive back Tuesday ...

I should probably be able to make the networking event Tuesday night (well, traffic permitting) but will miss the main activities that day as well as the workshops on Monday.

I should at least be onsite to attend Wednesday and introduce Steven Feuerstein.

Colleges tend to end their years at roughly the same time: we get to use Cleveland State for the conference that week because they are finished, while Fordham has its last two days of final exams on Monday and Tuesday.  My wife and my mother-in-law did the trip last year to get my daughter, so it is only fair that this is my year.

Still, it makes me a little grumpy ... no surprises, right?
Categories: DBA Blogs

Security Alert CVE-2014-0160 (‘Heartbleed’) Released

Oracle Security Team - Fri, 2014-04-18 13:38

Hi, this is Eric Maurice.

Oracle just released Security Alert CVE-2014-0160 to address the publicly disclosed ‘Heartbleed’ vulnerability which affects a number of versions of the OpenSSL library.  Due to the severity of this vulnerability, and the fact that active exploitation of this vulnerability is reported “in the wild,” Oracle recommends that customers of affected Oracle products apply the necessary patches as soon as they are released by Oracle.

The CVSS Base Score for this vulnerability is 5.0.  This relatively low score reflects the difficulty of coming up with a scoring system that can rate the severity of all types of vulnerabilities, including those that constitute blended threats.

Vulnerability CVE-2014-0160 is easy to exploit with relative impunity, as it is remotely exploitable without authentication over the Internet.  However, a successful exploit can only result in compromising the confidentiality of some of the data contained in the targeted systems.  An active exploitation of the bug allows the malicious perpetrator to read the memory of the targeted system on which the vulnerable versions of the OpenSSL library reside.  The vulnerability, on its own, does not allow a compromise of the availability (e.g., denial of service attack) or integrity of the targeted system (e.g., deletion of sensitive log files).

Unfortunately, this vulnerability is very serious in that it is contained in a widely used security package, which enables the use of SSL/TLS, and the compromise of that memory can have serious follow-on consequences.  According to http://heartbleed.com the compromised data may contain passwords, private keys, and other sensitive information.  In some instances, this information could be used by a malicious perpetrator to decrypt private information that was sent months or years ago, or to log into systems with a stolen identity.  As a result, this vulnerability creates very significant risks, including unauthorized access to systems with full user rights.

 

For more information:

 

The Advisory for Security Alert CVE-2014-0160 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2014-0160-2190703.html

The ‘OpenSSL Security Bug - Heartbleed / CVE-2014-0160’ page on OTN is located at http://www.oracle.com/technetwork/topics/security/opensslheartbleedcve-2014-0160-2188454.html

The ‘Heartbleed’ web site is located at http://www.heartbleed.com.  Note that this site is not affiliated with Oracle.

 

 

 

 

Case Insensitive Search in LOV - Effective and Generic

Andrejus Baranovski - Fri, 2014-04-18 12:50
Search in the LOV dialog window is not case insensitive by default. You could define a View Criteria for the LOV VO with case insensitivity and select this criteria to be displayed in the LOV dialog. You could do this for one or two LOV's, maybe for ten - but I'm sure you are going to get tired pretty soon. Much more effective is to implement a generic solution that converts the LOV search criteria to UPPER case automatically.

I'm using the sample application from my previous post about dynamic ADF BC and the new dynamic ADF UI component in ADF 12c - ADF Dynamic ADF BC - Loading Multiple Instances (Nr. 100 in 2013). The same technique described below can also be applied to design time ADF BC, across different ADF versions. Download the sample application, updated for this post - ADFDynamicReportUI_v6.zip.

The default search dialog is case sensitive; you can test this quite easily. Try to search for a lower case value when you know there are matching upper case values - there will be no results:


The SQL query is generated with a WHERE clause as it should be; it tries to search for the matching values - but there are no such records in the DB:


We can solve this with a generic class - our custom View Object implementation class. My sample uses dynamic ADF BC, so I register this custom class programmatically with the VO instance (typically you would do it through the wizard for design time ADF BC):


As I mentioned above, the sample application uses the dynamic ADF UI component; however, the same works with regular ADF UI as well - it doesn't matter:


Here is the main trick for making search from the LOV dialog case insensitive. You must override the buildViewCriteriaClauses method in the View Object implementation class. If the current VO instance represents an LOV (if you don't want to rely on a naming convention, you could create an additional View Object implementation class intended to be used only for LOV's), we invoke the setUpperColumns method on the View Criteria. This converts the entire View Criteria clause to use UPPER for both the criteria item and the bind variable:


Now with automatic conversion to UPPER case, try to execute the same search - you should see results:


The View Criteria clause is constructed with UPPER, and this is why it allows a case insensitive search to be performed. Of course, for runtime DB performance optimisation, you need to make sure there is a function-based index created for the searchable columns:
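
As an illustration only (the table and column names below are mine, borrowed from an HR-style schema, not from the post itself), the supporting index would be a function-based index on the uppercased column, so the UPPER(...) predicate generated by the View Criteria can still be resolved through an index:

create index jobs_job_title_upper_i on jobs (upper(job_title));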


The same works for any number of View Criteria items. Here I search using both attributes:


View Criteria clause contains both attributes and both are changed to use UPPER - this is perfect:


Case insensitive auto-completion also works with the technique described above. Try to type a value that exists in the LOV, but in lower case (it_prog):


The value is located and is automatically changed to use the case in which it is originally stored in the DB (IT_PROG):


The View Criteria clause is constructed with UPPER in the case of auto-completion, just as with regular search in the LOV dialog:

Best Of OTN - Week of April 13th

OTN TechBlog - Fri, 2014-04-18 12:14
Systems Community

Interview- Which Type of Virtualization Should I Use? - I routinely ask techies which type of virtualization they'd recommend for which type of job. I seldom get an answer as crystal clear as Brian Bream's.

Database Community

Hot: Check out the Oracle Critical Patch Update for April 15, 2014 - Over a hundred patches for Oracle products and technologies...including Oracle Database 12c. Get it here: http://ora.cl/6G6

Got Big Data?  Here's a new collection of Technology Deep Dives on the OTN Database YouTube channel, organized in a handy playlist - Subscribe today.

Oracle Support publishes the Oracle Enterprise Manager Bundle Patch Master Note. Updates apply to:

  • Enterprise Manager for Cloud
  • Enterprise Manager Base Platform - Version 12.1.0.3.0 and later
  • Enterprise Manager for Fusion Applications - Version 12.1.0.4.0 and later
  • Enterprise Manager for Oracle Database - Version 12.1.0.4.0 and later
  • Enterprise Manager for Fusion Middleware - Version 12.1.0.4.0 and later

Information in this document applies to any platform. Get it here: http://ora.cl/1f8

Friday Funny from OTN Database Community Manager, Laura Ramsey - Famous Oracle ACE Selfie  :)  Taken at Collaborate 2014.

Java Community 

Video: Board Buffet - IoT, Java and Raspberry Pi - Java expert Vinicius Senger and Oracle engineer Gary Collins, discuss IoT and show a bunch of different types of boards, single board computers, and plug computers. 

Free Java Virtual Developer Day - Next month, Oracle will host a Virtual Developer Day covering Java SE 8, Java EE 7 and Java Embedded. The VDD is free to attend, just make sure to register. The complete agenda and the registration details can be found here

A classic video: How To Design A Good API and Why it Matters by Josh Bloch 

Friday Funny - with apologies to experts everywhere


Watch "Moments of Engagement" Webcast On-Demand Now

WebCenter Team - Fri, 2014-04-18 12:00

Digital Business Thought Leadership Webcast Series

Delivering Moments of Engagement
Across the Enterprise

Five Steps for Mobilizing Digital Experiences

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

How Do You Deliver High-Value Moments of Engagement?

The web and mobile have become primary channels for engaging with customers today. To compete effectively, companies need to deliver multiple digital experiences that are contextually relevant to customers and valuable for the business—across various channels and on a global scale. But doing so is a great challenge without the right strategies and architectures in place.

As the kickoff of the new Digital Business Thought Leadership Series, noted industry analyst Geoffrey Bock investigated what some of Oracle’s customers are already doing, and how they are rapidly mobilizing the capabilities of their enterprise ecosystems.

Join us for a conversation about building your digital roadmap for the engaging enterprise. In this webcast you’ll have an opportunity to learn:

  • How leading organizations are extending and mobilizing digital experiences for their customers, partners, and employees
  • The key best practices for powering the high-value moments of engagement that deliver business value
  • Business opportunities and challenges that exist for enterprise wide mobility to fuel multichannel experiences

Now Available to Watch On-Demand!

Watch Now







Presented by:

Geoffrey Bock, Principal, Bock & Company

Michael Snow, Senior Product Marketing Director, Oracle WebCenter





Follow-up: DBaaS Survey

Jean-Philippe Pinte - Fri, 2014-04-18 08:12
Take 30 seconds ... to answer the DBaaS survey (published in the navigation bar)

Bitmap loading

Jonathan Lewis - Fri, 2014-04-18 05:43

Everyone “knows” that bitmap indexes are a disaster (compared to B-tree indexes) when it comes to DML. But at an event I spoke at recently someone made the point that they had observed that their data loading operations were faster when the table being loaded had bitmap indexes on it than when it had the equivalent B-tree indexes in place.

There’s a good reason why this can be the case.  No prizes for working out what it is – and I’ll supply an answer in a couple of days time.  (Hint – it may also be the reason why Oracle doesn’t use bitmap indexes to avoid the “foreign key locking” problem).

Answer

As Martin (comment 3) points out, there’s a lot of interesting information in the statistics once you start doing the experiment. So here’s some demonstration code, first we create a table with one of two possible indexes:


create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	rownum			id,
	mod(rownum,1000)	btree_col,
	mod(rownum,1000)	bitmap_col,
	rpad('x',100)		padding
from
	generator	v1,
	generator	v2
where
	rownum <= 1e6
;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 =>'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

create        index t1_btree on t1(btree_col) nologging;
-- create bitmap index t1_bitmap on t1(bitmap_col) nologging;

You’ll note that the two columns I’m going to build indexes on hold the same data in the same order – and it’s an order with maximum scatter because of the mod() function I’ve used to create it. It’s also very repetitive data, having 1000 distinct values over 1,000,000 rows. With the data and (one of) the indexes in place I’m going to insert another 10,000 rows:

execute snap_my_stats.start_snap

insert /* append */ into t1
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	1e6 + rownum		id,
	mod(rownum,1000)	btree_col,
	mod(rownum,1000)	bitmap_col,
	rpad('x',100)		padding
from
	generator
;

execute snap_my_stats.end_snap

You’ll note that I’ve got an incomplete append hint in the code – I’ve tested the mechanism about eight different ways, and left the append in as a convenience, but the results I want to talk about (first) are with the hint disabled so that the insert is a standard insert. The snap_my_stats calls are my standard mechanism to capture deltas of my session statistics (v$mystat) – one day I’ll probably get around to using Tanel’s snapper routine everywhere – and here are some of the key results produced in the two tests:


11.2.0.4 with btree
===================
Name                                                                     Value
----                                                                     -----
session logical reads                                                   31,403
DB time                                                                     64
db block gets                                                           31,195
consistent gets                                                            208
db block changes                                                        21,511
redo entries                                                            10,873
redo size                                                            3,591,820
undo change vector size                                                897,608
sorts (memory)                                                               2
sorts (rows)                                                                 1

11.2.0.4 with bitmap
====================
Name                                                                     Value
----                                                                     -----
session logical reads                                                   13,204
DB time                                                                     42
db block gets                                                            8,001
consistent gets                                                          5,203
db block changes                                                         5,911
redo entries                                                             2,880
redo size                                                            4,955,896
undo change vector size                                              3,269,932
sorts (memory)                                                               3
sorts (rows)                                                            10,001

As Martin has pointed out, there are a number of statistics that show large differences between the B-tree and bitmap approaches, but the one he didn’t mention was the key: sorts (rows). What is this telling us, and why could it matter so much? If the B-tree index exists when the insert takes place Oracle locates the correct place for the new index entry as each row is inserted, which is why you end up with so many redo entries, block gets and block changes; if the bitmap index exists, Oracle postpones index maintenance until the table insert is complete, but accumulates the keys and rowids as it goes, then sorts them to optimize the rowid to bitmap conversion and walks the index in order, updating each modified key just once.

The performance consequences of the two different strategies depend on the number of indexes affected, the number of rows modified, the typical number of rows per key value, and the ordering of the new data as it arrives; but it’s possible that the most significant impact could come from ordering.  As each row arrives, the relevant B-tree indexes are modified – but if you’re unlucky, or have too many indexes on the table, then each index maintenance operation could result in a random disk I/O to read the necessary block (how many times have you seen complaints like: “we’re only inserting 2M rows but it’s taking 45 minutes and we’re always waiting on db file sequential reads”). If Oracle sorts the index entries before doing the updates it minimises the random I/O because it need only update each index leaf block once and doesn’t run the risk of re-reading many leaf blocks many times for a big insert.

Further Observations

The delayed maintenance for bitmap indexes (probably) explains why they aren’t used to avoid the foreign key locking problem.  On a large insert, the table data will be arriving, the b-tree indexes will be maintained in real time, but a new child row of some parent won’t appear in the bitmap index until the entire insert is complete – so another session could delete the parent of a row that exists, is not yet committed, but is not yet visible. Try working out a generic strategy to deal with that type of problem.

It’s worth noting, of course, that when you add the /*+ append */ hint to the insert then Oracle uses exactly the same optimization strategy for B-trees as it does for bitmaps – i.e. postpone the index maintenance, remember all the keys and rowids, then sort and bulk insert them.  And when you’ve remembered that, you may also remember that the hint is (has to be) ignored if there are any enabled foreign key constraints on the table. The argument for why the hint has to be ignored and why bitmap indexes don’t avoid the locking problem is (probably) the same argument.

You may also recall, by the way, that when you have B-tree indexes on a table you can choose the optimal update or delete strategy by selecting a tablescan or index range scan as the execution path.  If you update or delete through an index range scan the same “delayed maintenance” trick is used to optimize the index updates … except for any indexes being used to support foreign key constraints, and they are maintained row by row.

In passing, while checking the results for this note I re-ran some tests that I had originally done in 2006 and added one more test that I hadn’t considered at the time; as a result I can also point out that an index will see delayed maintenance if you drive the update or delete with an index() hint, but not if you drive it with an index_desc() hint.

 


OBIEE Security: Catalogs, Access Control Lists and Permission Reports

The presentation catalog (Web Catalog) stores the content that users create within OBIEE. While the Catalog uses the presentation layer objects, do not confuse the presentation layer within the RPD with the presentation catalog. The presentation catalog includes objects such as folders, shortcuts, filters, KPIs and dashboards. These objects are built using the presentation layer within the RPD.

The difference between RPD and Catalog security is that repository-level restrictions give the most flexibility, as they can be either coarse-grained or fine-grained based on the data. Catalog-level restrictions are more coarse-grained, as they are applied to entire subject areas and/or objects.

To access an object in the catalog, users must have the appropriate permissions and can use either the BI client or the web user interface. The BI client for the Web Catalog is installed along with the BI Admin client.

Access Control Lists (ACL)

Access Control Lists (ACLs) are defined for each object in the catalog. Within the file system, the ACLs are stored in the *.ATR files, which may be viewed through a HEX editor. A 16-digit binary representation is used, similar to UNIX permissions (e.g. 777). There are six different types of permissions for each object:

  • Full control
  • Modify
  • Open
  • Traverse
  • No Access
  • Custom

In 11g the catalog is located here:

$ORACLE_INSTANCE/bifoundation/OracleBIPresentationServicesComponent/catalog
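
If you want to poke at the raw ACLs yourself, something like the following works (a sketch; the catalog sub-path and file name are assumed purely for illustration): list the *.atr files under the catalog root and dump one with a hex viewer.

$ cd $ORACLE_INSTANCE/bifoundation/OracleBIPresentationServicesComponent/catalog
$ find . -name '*.atr' | head -5
$ xxd ./SampleAppLite/root/shared/sample+dashboard.atr | head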

Catalog Permission Reports

From a security perspective, the permission reports that can be generated from the Web Catalog client tool are very valuable and can be exported to Excel for further analysis. For example, these reports can show who is able to bypass OBIEE security and issue Direct SQL, or who has rights to Write Back to the database.  The security ACL will report on who has such administration privileges.

(Screenshots: OBIEE Administration Privileges, BI Catalog Client, Catalog Report)

If you have questions, please contact us at info@integrigy.com

 -Michael Miller, CISSP-ISSMP

References

Tags: Reference, Oracle Business Intelligence (OBIEE), IT Security
Categories: APPS Blogs, Security Blogs

I Love Logs

Gary Myers - Thu, 2014-04-17 20:08
It occurred to me a few days ago, as I was reading this article on DevOps, that I might actually be a DevOps.

I think of myself as a developer, but my current role is in a small team running a small system. And by running, I mean that we are 

  • 'root' and 'Administrator' on our Linux and Windows servers
  • 'oracle / sysdba' on the database side, 
  • the apex administrator account and the apex workspace administrators,
  • the developers and testers, 
  • the people who set up (and revoke) application users and 
  • the people on the receiving end of the support email
Flashback to Jeff Smith's article on Developers in Prod. But the truth is that there are a lot of people wearing multiple hats out there, and the job titles of old are getting a bit thin. 
The advantage of having all those hats, or at least all those passwords, is that when I'm looking at issues, I get to look pretty much EVERYWHERE. 
I look at the SSH, FTP and mailserver logs owned by root. The SSH logs generally tell me who logged on where and from where. Some of that is for file transfers (some are SFTP, some are still FTP), some of it is the other members of the team logging on to run jobs. The system sends out lots of mail notifications, and occasionally they don't arrive so I check that log to see that it was sent (and if it may have been too big, or rejected by the gateway).
Also on the server are the Apache logs. We've got these on daily rotate going back a couple of years because it is a small enough system that the log sizes don't matter. But Apex stuffs most of those field values into the URL as a GET, so they all get logged by Apache. I can get a good idea of what IP address was inquiring about a particular location or order by grepping the logs for the period in question.
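For example (the log location, application ID, page number and order number below are all made up for the sketch), tracking down who was looking at a particular order is usually a quick grep over the rotated access logs, pulling out the client IP, timestamp and requested URL for each hit:

$ zgrep "f?p=100:12" /var/log/httpd/access_log-2014-04-1* | grep 4711 | awk '{print $1, $4, $7}'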
I haven't often had the need to look in the Oracle alert logs or dump directories, but they are there if I want to run a trace on some code. 
In contrast, I'm often looking at the V$ (and DBA_) views and tables. The database has some audit trail settings so we can track DDL and (some) logons. Most of the database access is via the Apex component, so there's only a connection pool there.
The SELECT ANY TABLE privilege also gives us access to the underlying Apex tables that tell us the 'private' session state of variables, collections etc. (Scott Wesley blogged on this a while back). Oh, and it's amazing how many people DON'T log out of an application, but just shut their browser (or computer) down. At least it amazed me. 
The apex workspace logs stick around for a couple of weeks too, so they can be handy to see who was looking at which pages (because sometimes people email us a screenshot of an error message without telling us how or where it popped up). Luckily error messages are logged in that workspace log. 
We have internal application logs too. Emails sent, batch jobs run, people logging on, navigation menu items clicked. And some of our tables include columns with a DEFAULT from SYS_CONTEXT/USERENV (Module, Action, Client Identifier/Info) so we can automatically pick up details when a row is inserted.
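As a rough illustration of that last point (the table and column names are mine, not from the post), the defaults look something like this, so every insert quietly records which module, action and client identifier the connection-pool session was tagged with at the time:

create table app_event_log (
  logged_at    date           default sysdate,
  module       varchar2(64)   default sys_context('userenv', 'module'),
  action       varchar2(64)   default sys_context('userenv', 'action'),
  client_id    varchar2(64)   default sys_context('userenv', 'client_identifier'),
  client_info  varchar2(64)   default sys_context('userenv', 'client_info'),
  message      varchar2(4000)
);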

All this metadata makes it a lot easier to find the cause of problems. It isn't voyeurism or spying. Honest. 

Oracle E-Business Suite R12 Pre-Install RPM available for Oracle Linux 5 and 6

Wim Coekaerts - Thu, 2014-04-17 17:44
One of the things we have been focusing on with Oracle Linux for quite some time now, is making it easy to install and deploy Oracle products on top of it without having to worry about which RPMs to install and what the basic OS configuration needs to be.

A minimal Oracle Linux install contains a really small set of RPMs, typically not enough for a product to install on, and a full/complete install contains way more packages than you need. While a full install is convenient, it also means that the likelihood of having to install an errata for a package is higher, and as such the cost of patching and updating/maintaining systems increases.

In an effort to make it as easy as possible, we have created a number of pre-install RPM packages which don't really contain actual programs; they're more or less dummy packages plus a few configuration scripts. They are built around the concept that you have a minimal OL installation (configured to point to a yum repository), and all the RPMs/packages which the specific Oracle product requires to install cleanly and pass the pre-requisites are declared as dependencies of the pre-install RPM.

When you install the pre-install RPM, yum will calculate the dependencies, figure out which additional RPMs are needed beyond what's installed, download them and install them. The configuration scripts in the RPM will also set up a number of sysctl options, create the default user, etc. After installation of this pre-install RPM, you can confidently start the Oracle product installer.

We have released pre-install RPMs in the past for the Oracle Database (11g, 12c, ...) and the Oracle Enterprise Manager 12c agent, and we have now also released a similar RPM for E-Business Suite R12.

This RPM is available on both ULN and public-yum in the addons channel.
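
For example (a sketch only; the package name is from memory, so verify it against the addons channel listing), on an Oracle Linux 6 box already pointed at public-yum the whole process boils down to enabling the addons repository and running one yum command, after which the E-Business Suite installer's OS prerequisite checks should have what they need:

$ sudo yum --enablerepo=ol6_addons install oracle-ebs-server-R12-preinstall

# For comparison, the database equivalent mentioned above:
$ sudo yum install oracle-rdbms-server-12cR1-preinstall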

Oracle Priority Service Infogram for 17-APR-2014

Oracle Infogram - Thu, 2014-04-17 17:26

RDBMS and Performance
Frequently Misused Metrics in Oracle, from The Oracle Alchemist.
Some notes from Tyler Muth on Oracle Database 10.2 De-Supported.
Also from Tyler Muth, a quick posting on Create Bigfile Tablespace – Oracle Managed Files (OMF).
Recovering a standby over the network in 12c, from Martin's Blog.
Data Pump Enhancements in Oracle Database 12c, from the ORACLE-BASE Blog.
DBAAS
From the dbi services blog: Implementing Oracle Database as a Service (DBAAS).
GoldenGate
MONITORING ORACLE GOLDEN GATE FROM SQL DEVELOPER, from the DBASOLVED blog.
SOAP
How to restrict data coming back from a SOAP Call, from Angelo Santagata's Blog.
Oracle Internet Expenses
Oracle Fusion Expenses - Mobile app for R12, from the Oracle Internet Expenses blog.
APEX
APEX 5 first peek - Themes & Templates, from grassroots oracle.
EPM Mobile
A new video introducing EPM Mobile is available on YouTube. You can find this video, and other EPM videos, here: http://www.youtube.com/user/OracleEPMWebcasts
Security

CVE-2013-5211 Input Validation vulnerability in NTP, from the Third Party Vulnerability Resolution Blog.
...and Finally
From the ever-useful LifeHacker: This Tipping Infographic Shows Who Expects Tips, and How Much.

The Art Of Easy: Easy Decisions For Complex Problems (Part 3 of 6)

Linda Fishman Hoyle - Thu, 2014-04-17 12:58

A Guest Post by  Heike Lorenz, Director of Global Product Marketing, Policy Automation

Making complex decisions EASY by automating your service policies allows your organization to efficiently ensure the correct decisions are being applied to the right people.

Like the hit British TV series Little Britain suggests, when “Computer Says No”, you can be left wondering why?

It’s not easy to automate your Customer Service policies, let alone do it in a way that is transparent, consistent and cost effective. Especially if you are working within environments where market conditions and regulations change frequently. Get it wrong and you are left with compliance problems and customer complaints—and that’s a costly outcome!

So while you may not be striving to change the decision from a “NO” to a “YES” for your customer, you should be looking to get to that answer quicker for them, with a complete explanation as to why it’s a “NO”, have the traceability of what answer was given at that time, have the peace of mind that the answer is accurate, AND do it all at the lowest cost to your business. Simple right?!

So how do you achieve this? There are three core areas of consideration: 1) Centralize & Automate, 2) Personalize & Optimize, and 3) Analyze & Adapt.

1) Centralize & Automate

One method is to grab all of your policy documents, throw them at a team of costly developers to move into a database, code the logic around them, and hope what comes out is maintainable, accurate and accessible to the right audiences. Or, maybe not.

A simpler method is to take your original policy documents and import them into a policy automation tool that will help a business user through a step-by-step process to model the rules. Once developed, they can be tested, modified, published and updated within a few clicks. The result is a solution that can empower your agents with dynamic interviewing tools, and your customers with a self-service approach, across channels, in any language, and on any device.

But that is only part of the whole picture.

2) Personalize & Optimize

A simple decision process could be easily managed by one or two published FAQs, whereas a complex decision process requires you to take into account many personal attributes of that specific customer—and by definition those attributes can’t be applied through static views. Getting a customer to repeat information, or worse, not taking into consideration critical information that is provided during the interaction and would personalize the response, is a fast way to get them to abandon the process, or worse, leave you!

You must ensure that your automated policies can be optimized to dynamically adapt to every customer’s unique situation and attributes—be that channel, device, location, language, or other more personal characteristics that are shared prior and during the interaction. After all, each answer should be uniquely theirs, explaining in detail why the decision was reached, with everything taken into consideration.

3) Analyze & Adapt

The saying “data rich and insight poor” is one that often fits with the word “compliance”—businesses can easily be more focused on capturing volumes of data for compliance, and less on making the data actionable. The flip side of that is “data poor” when businesses must scramble to get the data needed to ensure compliance, as an afterthought! And we all know that having insight without ability for timely action is a missed opportunity to improve, avoid, or sustain compliance.

As your policies change, or you introduce new policies, often the requirements to capture data can change too. Adapting to environmental or organizational changes requires you to gather the right data to deliver the right insight for action. The right tools are required in order to apply that insight in a timely, measurable, and effective manner. The right volume of accessible data is also needed to remain compliant with regulatory business or industry Customer Service standards during periodic audits. So you must have a solution that can adapt with scale, demand, change, and archive—a solution that can actually automate your service policies for insight, compliance, and agility—making it easy.

Putting all these pieces together lets you truly automate the nurturing of trusted relationships with your customers during complex decision-making processes, through transparent and personalized interactions. Giving your business confidence that in even the most demanding markets, you are remaining compliant, in a cost-effective and efficient way.

The Oracle Service Cloud empowers your business to care, take action and succeed in your Web Customer Service initiatives and become a Modern Customer Service organization.

Lotus Notes Support Deprecation

PeopleSoft Technology Blog - Thu, 2014-04-17 11:39

In the next release of the PeopleSoft Interaction Hub (9.1/FP3), we will be deprecating direct Lotus Notes support as an email option.  Customers that wish to use Lotus Notes in the future can still use our IMAP support.

Here is information on configuring your email system to use IMAP.  Guidance on working with the email pagelet is located here.