
Feed aggregator

Clarity In The Avalanche

Floyd Teter - Mon, 2014-10-06 10:04
So I've spent the days since Oracle OpenWorld 14 decompressing...puttering in the garden, BBQing for family, running errands.  The idea was to give my mind time to process all the things I saw and heard at OOW this year.  Big year - it was like trying to take a sip from a firehose.  Developing any clarity around the avalanche of news has been tough.

If you average out all of Oracle's new product development, it comes to a rate of one new product release every working day of the year.  And I think they saved up bunches for OOW. It was difficult to keep up.

It was also difficult to physically keep up with things at OOW, as Oracle utilized the concept of product centers and spread things out over even more of downtown San Francisco this year. For example, Cloud ERP products were centered in the Westin on Market Street.  Cloud HCM was located at the Palace Hotel.  Sales Cloud took over the 2nd floor of Moscone West.  Higher Education focused around the Marriott Marquis. Anything UX, as well as many other hands-on labs, happened at the InterContinental Hotel.  And, of course, JavaOne took place at the Hilton on Union Square along with the surrounding area.  The geographical separation required even more in the way of making tough choices about where to be and when to be there.

With all that, I think I've figured out a way to organize my own take on the highlights from OOW - with a tip o' the hat to Oracle's Thomas Kurian.  Thomas sees Oracle as based around five product lines:  engineered systems, database, middleware, packaged applications, and cloud services. The more I consider this framework, the more it makes sense to me.  So my plan is to organize the news from OOW around these five product lines over the next few posts here.  We'll see if we can't find some clarity in the avalanche.

rsyslog: Send logs to Flume

Surachart Opun - Mon, 2014-10-06 04:12
Good day for learning something new. After reading a Flume book, something popped up in my head: I wanted to test "rsyslog" => Flume => HDFS. As we know, rsyslog can forward logs to other systems. We can set rsyslog like this:
*.* @YOURSERVERADDRESS:YOURSERVERPORT ## for UDP
*.* @@YOURSERVERADDRESS:YOURSERVERPORT ## for TCP
In my rsyslog.conf:
[root@centos01 ~]# grep centos /etc/rsyslog.conf
*.* @centos01:7777
Coming back to Flume, I used the Simple Example as a reference and changed it a bit, because I wanted it to write to HDFS.
[root@centos01 ~]# grep "^FLUME_AGENT_NAME\="  /etc/default/flume-agent
FLUME_AGENT_NAME=a1
[root@centos01 ~]# cat /etc/flume/conf/flume.conf
# example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
#a1.sources.r1.type = netcat
a1.sources.r1.type = syslogudp
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 7777
# Describe the sink
#a1.sinks.k1.type = logger
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://localhost:8020/user/flume/syslog/%Y/%m/%d/%H/
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.batchSize = 10000
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 10000
a1.sinks.k1.hdfs.filePrefix = syslog
a1.sinks.k1.hdfs.round = true


# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
[root@centos01 ~]# /etc/init.d/flume-agent start
Flume NG agent is not running                              [FAILED]
Starting Flume NG agent daemon (flume-agent):              [  OK  ]
Then I tested by logging in over ssh.
[root@centos01 ~]#  tail -0f  /var/log/flume/flume.log
06 Oct 2014 16:35:40,601 INFO  [hdfs-k1-call-runner-0] (org.apache.flume.sink.hdfs.BucketWriter.doOpen:208)  - Creating hdfs://localhost:8020/user/flume/syslog/2014/10/06/16//syslog.1412588139067.tmp
06 Oct 2014 16:36:10,957 INFO  [hdfs-k1-roll-timer-0] (org.apache.flume.sink.hdfs.BucketWriter.renameBucket:427)  - Renaming hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067.tmp to hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]# hadoop fs -ls hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   1 flume supergroup        299 2014-10-06 16:36 hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]#
[root@centos01 ~]#
[root@centos01 ~]# hadoop fs -cat hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
sshd[20235]: Accepted password for surachart from 192.168.111.16 port 65068 ssh2
sshd[20235]: pam_unix(sshd:session): session opened for user surachart by (uid=0)
su: pam_unix(su-l:session): session opened for user root by surachart(uid=500)
su: pam_unix(su-l:session): session closed for user root
Looks good... Anyway, it still needs more adaptation...
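For a quicker end-to-end check than waiting for real ssh logins, something along these lines should work; it's just a rough sketch that reuses the hdfs.path pattern from the flume.conf above, with the sleep only there to give the HDFS sink time to roll its .tmp file.

# generate a test syslog message locally
logger "flume-hdfs test $(date +%s)"
# give the HDFS sink time to roll the current .tmp file
sleep 60
# path layout follows the hdfs.path pattern configured in flume.conf
hadoop fs -ls hdfs://localhost:8020/user/flume/syslog/$(date +%Y/%m/%d/%H)/
hadoop fs -cat "hdfs://localhost:8020/user/flume/syslog/$(date +%Y/%m/%d/%H)/syslog.*" | grep "flume-hdfs test"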



Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Why the In-Memory Column Store is not used (II)

Karl Reitschuster - Mon, 2014-10-06 03:10
Now, after some research, I detected one simple rule for provoking In-Memory scans:

Oracle In-Memory Column Store Internals – Part 1 – Which SIMD extensions are getting used?

Tanel Poder - Sun, 2014-10-05 23:51

This is the first entry in a series of random articles about some useful internals-to-know of the awesome Oracle Database In-Memory column store. I intend to write about Oracle’s IM stuff that’s not already covered somewhere else and also about some general CPU topics (that are well covered elsewhere, but not always so well known in the Oracle DBA/developer world).

Before going into further details, you might want to review the Part 0 of this series and also our recent Oracle Database In-Memory Option in Action presentation with some examples. And then read this doc by Intel if you want more info on how the SIMD registers and instructions get used.

There’s a lot of talk about the use of your CPUs’ SIMD vector processing capabilities in the Oracle inmemory module, let’s start by checking if it’s enabled in your database at all. We’ll look into Linux/Intel examples here.

The first generation of SIMD extensions in the Intel Pentium world was called MMX. It added eight new 64-bit MMn registers. Over time the registers got widened and more registers and new features were added: the later extensions were called Streaming SIMD Extensions (SSE, SSE2, SSSE3, SSE4.1, SSE4.2), which introduced the 128-bit XMMn registers, and Advanced Vector Extensions (AVX and AVX2).

The currently available AVX2 extensions provide 16 x 256-bit YMMn registers, and AVX-512 in the upcoming Knights Landing microarchitecture (year 2015) will provide 32 x 512-bit ZMMn registers for vector processing.

So how do you check which extensions your CPU supports? On Linux, the "flags" column in /proc/cpuinfo easily provides this info.

Let’s check the Exadatas in our research lab:

Exadata V2:

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Xeon(R) CPU           E5540  @ 2.53GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

So the highest SIMD extension support on this Exadata V2 is SSE4.2 (No AVX!)

Exadata X2:

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Xeon(R) CPU           X5670  @ 2.93GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

Exadata X2 also has SSE4.2 but no AVX.

Exadata X3:

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
avx
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

The Exadata X3 supports the newer AVX too.

My laptop (Macbook Pro late 2013):
The Exadata X4 has not yet arrived in our lab, so I'm using my laptop as an example of a recent CPU with AVX2:

Update: Jason Arneil commented that the X4 does not have AVX2-capable CPUs (but the X5 will).

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Core(TM) i7-4960HQ CPU @ 2.60GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
avx
avx2
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

The Core-i7 generation supports everything up to the current AVX2 extension set.

So, which extensions is Oracle actually using? Let’s check!

As Oracle needs to run different binary code on CPUs with different capabilities, some of the In-Memory Data (kdm) layer code has been duplicated into separate external libraries – and then gets dynamically loaded into Oracle executable address space as needed. You can run pmap on one of your Oracle server processes and grep for libshpk:

$ pmap 21401 | grep libshpk
00007f0368594000   1604K r-x--  /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libshpksse4212.so
00007f0368725000   2044K -----  /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libshpksse4212.so
00007f0368924000     72K rw---  /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libshpksse4212.so

My (educated) guess is that the "shpk" in libshpk above stands for oS dependent High Performance [K]ompression. The "s" prefix normally means platform-dependent (OSD) code, and this low-level SIMD code sure is platform- and CPU-microarchitecture-dependent stuff.

Anyway, the above output from an Exadata X2 shows that SSE4.2 SIMD HPK libraries are used on this platform (and indeed, X2 CPUs do support SSE4.2, but not AVX).

Let’s list similar files from $ORACLE_HOME/lib:

$ cd $ORACLE_HOME/lib
$ ls -l libshpk*.so
-rw-r--r-- 1 oracle oinstall 1818445 Jul  7 04:16 libshpkavx12.so
-rw-r--r-- 1 oracle oinstall    8813 Jul  7 04:16 libshpkavx212.so
-rw-r--r-- 1 oracle oinstall 1863576 Jul  7 04:16 libshpksse4212.so

So, there are libraries for AVX and AVX2 in the lib directory too (the “12” suffix for all file names just means Oracle version 12). The AVX2 library is almost empty though (and the nm/objdump commands don’t show any Oracle functions in it, unlike in the other files).
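A quick way to see that for yourself (just a sketch of mine, not a required step) is to count the defined dynamic symbols in each library; the near-empty AVX2 stub should report far fewer than the SSE4.2 and AVX ones:

cd $ORACLE_HOME/lib
# print each libshpk library with the number of symbols it actually defines
for f in libshpk*.so; do
  printf '%-22s %6d\n' "$f" "$(nm -D --defined-only "$f" | wc -l)"
done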

Let’s run pmap on a process in my new laptop (which supports AVX and AVX2 ) to see if the AVX2 library gets used:

$ pmap 18969 | grep libshpk     
00007f85741b1000   1560K r-x-- libshpkavx12.so
00007f8574337000   2044K ----- libshpkavx12.so
00007f8574536000     72K rw--- libshpkavx12.so

Despite my new laptop supporting AVX2, only the AVX library is used (the AVX2 library is named libshpkavx212.so). So it looks like the AVX2 extensions are not used yet in this version (it’s the first Oracle 12.1.0.2 GA release without any patches). I’m sure this will be added soon, along with more features and bugfixes.
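If you'd rather survey every Oracle process at once instead of picking a single PID, a rough one-liner like this (my sketch, nothing official) summarises which libshpk variant is mapped where:

# count the libshpk variants mapped across all local oracle processes
for pid in $(pgrep -f oracle); do
  pmap "$pid" 2>/dev/null | grep -o 'libshpk[a-z0-9]*\.so'
done | sort | uniq -c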

To be continued …


Adding Oracle Big Data SQL to ODI12c to Enhance Hive Data Transformations

Rittman Mead Consulting - Sun, 2014-10-05 15:29

An updated version of the Oracle BigDataLite VM came out a couple of weeks ago, and as well as updating the core Cloudera CDH software to the latest release it also included Oracle Big Data SQL, the SQL access layer over Hadoop that I covered on the blog a few months ago (here and here). Big Data SQL takes the SmartScan technology from Exadata and extends it to Hadoop, presenting Hive tables and HDFS files as Oracle external tables and pushing down the filtering and column-selection of data to individual Hadoop nodes. Any table registered in the Hive metastore can be exposed as an external table in Oracle, and a BigDataSQL agent installed on each Hadoop node gives them the ability to understand full Oracle SQL syntax rather than the cut-down SQL dialect that you get with Hive.


There’s two immediate use-cases that come to mind when you think about Big Data SQL in the context of BI and data warehousing; you can use Big Data SQL to include Hive tables in regular Oracle set-based ETL transformations, giving you the ability to reference Hive data during part of your data load; and you can also use Big Data SQL as a way to access Hive tables from OBIEE, rather than having to go through Hive or Impala ODBC drivers. Let’s start off in this post by looking at the ETL scenario using ODI12c as the data integration environment, and I’ll come back to the BI example later in the week.

You may recall from a couple of earlier posts this year on ETL and data integration on Hadoop that I looked at a scenario where I wanted to geo-code web server log transactions using an IP address range lookup file from a company called MaxMind. To determine the country for a given IP address you need to locate the IP address of interest within the ranges listed in the lookup file, something that's easy to do with a full SQL dialect such as the one Oracle provides:

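As a minimal sketch of that kind of range lookup (the table, column and login names below are purely hypothetical placeholders), it boils down to a join on a BETWEEN predicate:

# run the range join through SQL*Plus; everything named here is a placeholder
sqlplus -s scott/tiger <<'SQL'
select l.host, g.country_name
from   weblog_entries l
       join geoip_country_ranges g
       on   l.ip_integer between g.start_ip_integer and g.end_ip_integer
where  rownum <= 10;
exit
SQL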

In my case, I'd want to join my Hive table of server log entries with a Hive table containing the IP address ranges, using the BETWEEN operator – except that Hive doesn't support any type of join other than an equi-join. You can use Impala and a BETWEEN clause there, but in my testing anything other than a relatively small log-file Hive table took massive amounts of memory to do the join, as Impala works in-memory, which effectively ruled out doing the geo-lookup set-based. I then went on to do the lookup using Pig and a Python API into the geocoding database, but then you've got to learn Pig. I finally came up with my best solution using Hive streaming and a Python script that called that same API, but each of these approaches is fairly involved and requires a bit of skill and experience from the developer.

But this of course is where Big Data SQL could be useful. If I could expose the Hive table containing my log file entries as an Oracle external table and then join that within Oracle to an Oracle-native lookup table, I could do my join using the BETWEEN operator and then output the join results to a temporary Oracle table; once that's done I could then use ODI12c's Sqoop functionality to copy the results back down to Hive for the rest of the ETL process. Looking at my Hive database using SQL*Developer 4.0.3's new ability to work with Hive tables, I can see the table I'm interested in listed there, and I can also see it listed in the DBA_HIVE_TABLES static view that comes with Big Data SQL on Oracle Database 12c:

SQL> select database_name, table_name, location
  2  from dba_hive_tables
  3  where table_name like 'access_per_post%';

DATABASE_N TABLE_NAME             LOCATION
---------- ------------------------------ --------------------------------------------------
default    access_per_post        hdfs://bigdatalite.localdomain:8020/user/hive/ware
                      house/access_per_post

default    access_per_post_categories     hdfs://bigdatalite.localdomain:8020/user/hive/ware
                      house/access_per_post_categories

default    access_per_post_full       hdfs://bigdatalite.localdomain:8020/user/hive/ware
                      house/access_per_post_full

There are various ways to create the Oracle external tables over Hive tables in the linked Hadoop cluster, including using the new DBMS_HADOOP package to create the Oracle DDL from the Hive metastore table definitions, or using SQL*Developer Data Modeler to generate the DDL from modelled Hive tables; but if you know the Hive table definition and it's not too complicated, you might as well just write the DDL statement yourself using the new ORACLE_HIVE external table access driver. In my case, to create the corresponding external table for the Hive table I want to geo-code, it looks like this:

CREATE TABLE access_per_post_categories(
  hostname varchar2(100), 
  request_date varchar2(100), 
  post_id varchar2(10), 
  title varchar2(200), 
  author varchar2(100), 
  category varchar2(100),
  ip_integer number)
organization external
(type oracle_hive
 default directory default_dir
 access parameters(com.oracle.bigdata.tablename=default.access_per_post_categories));

Then it’s just a case of importing the metadata for the external table over Hive, and the tables I’m going to join to and then load the results into, into ODI’s repository and then create a mapping to bring them all together.


Importantly, I can create the join between the tables using the BETWEEN clause, something I just couldn’t do when working with Hive tables on their own.


Running the mapping then joins the webserver log lookup table to the geocoding IP address range lookup table through the Oracle SQL engine, removing all the complexity of using Hive streaming, Pig or the other workaround solutions I used before. What I can then do is add a further step to the mapping to take the output of my join and use that to load the results back into Hive.

I'll then use the IKM SQL to Hive-HBase-File (SQOOP) knowledge module to set up the export from Oracle into Hive.


Now, when I run the mapping I can see the initial table join taking place between the Oracle native table and the Hive-sourced external table, and the results then being exported back into Hadoop at the end using the Sqoop KM.


Finally, I can view the contents of the downstream Hive table loaded via Sqoop, and see that it does in-fact contain the country name for each of the page accesses.


Oracle Big Data SQL isn’t a solution suitable for everyone; it only runs on the BDA and requires Exadata for the database access, and it’s an additional license cost on top of the base BDA software bundle. But if you’ve got it available it’s an excellent way to blend Hive and Oracle data, and a great way around some of the restrictions around HiveQL and the Hive JDBC/ODBC drivers. More on this topic later next week, when I’ll look at using Big Data SQL in conjunction with OBIEE 11g.

Categories: BI & Warehousing

What I like best about myself

FeuerThoughts - Sun, 2014-10-05 08:23
What could be more self-centered?

Why should anyone else in the world care what I like best about myself?
I have no idea. That is for sure. But, hey, what can I say? This is the world we live in (I mean: the artificial environment humans have created, mainly to avoid actually living in and on our amazing world).
It is an age of, ahem, sharing. And, ahem, advertising. Actually, first and foremost, advertising.
Anyway, screw all that. Here's what I like best about myself:
I love to be with kids. And I am, to put it stupidly but perhaps clearly, a kid whisperer.
Given the choice between spending time with an adult or spending time with a child, there is no contest. None at all. It's a bit of a compulsion, I suppose, but....
If there is a child in the room, I pay them all of my attention, I cannot stop myself from doing this. It just happens. Adults, for the most part, disappear. I engage with a child as a peer, another whole human. And usually children respond to me instantly and with great enthusiasm. 
Chances are, if your child is between, say, three months and five years old, we will be fast friends within minutes. Your cranky baby might fall asleep in my arms, as I sing Moonshadow to her or whisper nonsense words in her ear. Your shy three-year-old son might find himself talking excitedly about a snake he saw on a trail that day (he hadn't mentioned it to you). Your teenage daughter might be telling me about playing games on her phone and how she doesn't think her dad realizes how much she is doing it.
I have the most amazing discussions with children. And though I bet this will sound strange to you: some of my favorite and most memorable conversations have been with five-month-old babies. How is this possible, you might wonder. They can't even talk. Well, you can find out. Just try this at home with your baby:
Hold her about a foot away from your face, cradled in your arms. Look deeply and fully into her eyes. Smile deeply. And then say something along these lines, moving your mouth slowly: "Ooooh. Aaaaah. Maaaaa. Paaaaa." And then she will (sometimes) answer back, eyes never leaving yours....and you have a conversation. Your very first game of verbal Ping Pong. 
I suppose I could try to explain the feeling of pure happiness I experience at moments like this. I don't think, though, that written language is good for stuff like that. It's better for recording knowledge needed to destroy more and more of our planet to make humans comfortable.
And with my granddaughter, oh, don't even get me started. Sometimes I will be talking to her, our heads close together, and realize her face has gone into this kind of open, relaxed state in which she is rapt, almost in a trance, absorbing everything I am saying, the sound of my voice, my mouth moving. Just taking it all in. You'd better believe that I put some thought into what I am saying to this incredibly smart and observant "big girl." (who turns three in three weeks)
Here's another "try this at home" with your three year old (or two or four): talk about shadows. Where do they come from? How do they relate to your body? Why does their shape change as the day goes on? Loey and I have had fun with shadows several times.
I have always been this way. I have no idea why. I have this funny feeling that it might actually be at least in some small way the result of a genetic mutation. I have a nephew who resembles me in several different, seemingly unconnected ways, including this love of and deep affinity for children.
I don't think that many people understand what I am doing when I spend time with children. I am called a "doting" grandfather. It offends me, though I certainly understand that no offense was intended.
I don't dote on Loey. Instead, I seek out every opportunity to share my wonder of our world and life with her, to help her understand and live in the world as effectively as possible. What this has meant lately is that I talk with her a lot about trees, how much I love them, how amazing they are.
One day at the park, as we walked past the entrance to the playground, I noticed a very small oak sapling - in essence, a baby oak tree.
When we got inside the park, there was a mature oak towering over our stroller. I asked Loey if she wanted to see a baby tree. She said yes, so I picked her up to get close to the mature oak's leaf. I showed her the shape of the leaf, and the big tree to which it was attached.
Then I took her outside and we looked at the sapling. I showed her how the leaves on this tiny baby tree were the same shape and size as those on the big tree. That's how we knew it was a baby of that big tree. And it certainly was interesting that the leaves would be the same size on the tiny sapling. Held her attention throughout. That was deeply satisfying.
Mostly what I do is look children directly in the eyes, give them my full attention, smile with great joy at seeing them. Babies are deeply hard-wired to read faces. They can see in the wrinkles around my widened eyes and the smile that is stretching across my face that I love them, accept them fully. And with that more or less physical connection established, they seem to relax, melt, soften with trust. They know they can trust me, and they are absolutely correct. 
In that moment, I would do anything for them.
This wisdom (that's how I see it) to accept the primacy of our young, my willingness to appear to adults as absolutely foolish, but to a child appear as a bright light, making them glow right back at me:
That is what I like best about me. 
Categories: Development

Spark vs. Tez, revisited

DBMS2 - Sun, 2014-10-05 02:59

I’m on record as noting and agreeing with an industry near-consensus that Spark, rather than Tez, will be the replacement for Hadoop MapReduce. I presumed that Hortonworks, which is pushing Tez, disagreed. But Shaun Connolly of Hortonworks suggested a more nuanced view. Specifically, Shaun tweeted thoughts including:

Tez vs Spark = Apples vs Oranges.

Spark is general-purpose engine with elegant APIs for app devs creating modern data-driven apps, analytics, and ML algos.

Tez is a framework for expressing purpose-built YARN-based DAGs; its APIs are for ISVs & engine/tool builders who embed it

[For example], Hive embeds Tez to convert its SQL needs into purpose-built DAGs expressed optimally and leveraging YARN

That said, I haven’t yet had a chance to understand what advantages Tez might have over Spark in the use cases that Shaun relegates it to.


Categories: Other

How to setup passwordless ssh in Exadata using dcli

Alejandro Vargas - Sun, 2014-10-05 02:57

 





Setting up passwordless ssh root connections using dcli is fast and simple, and it will later make it easy to execute commands on all servers using this utility.


In order to do that you should have either:


DNS resolution to all Database and Storage nodes OR have them registered in /etc/hosts


1) Create a parameter file that contains all the server names you want to reach via dcli. Typically we have a cell_group for the storage cells, a dbs_group for the database servers, and an all_group for both of them.


The parameter files contain only the server names, in short format.


For example, on an Exadata quarter rack, all_group will contain:


dbnode1
dbnode2
cell1
cell2
cell3


2) As the root user, generate the ssh key pair for the equivalence:


ssh-keygen -t rsa


3) Distribute the key to all servers


dcli -g ./all_group -l root -k -s '-o StrictHostKeyChecking=no'


4) Check that it works:


dcli -g all_group -l root hostname 
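Once the equivalence is in place, any command can be fanned out the same way; a couple of illustrative examples (adapt them to your own needs):

dcli -g dbs_group -l root uptime
dcli -g cell_group -l root "cellcli -e list cell attributes name,status"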



 

Categories: DBA Blogs

Streaming for Hadoop

DBMS2 - Sun, 2014-10-05 02:56

The genesis of this post is that:

  • Hortonworks is trying to revitalize the Apache Storm project, after Storm lost momentum; indeed, Hortonworks is referring to Storm as a component of Hadoop.
  • Cloudera is talking up what I would call its human real-time strategy, which includes but is not limited to Flume, Kafka, and Spark Streaming. Cloudera also sees a few use cases for Storm.
  • This all fits with my view that the Current Hot Subject is human real-time data freshness — for analytics, of course, since we’ve always had low latencies in short-request processing.
  • This also all fits with the importance I place on log analysis.
  • Cloudera reached out to talk to me about all this.

Of course, we should hardly assume that what the Hadoop distro vendors favor will be the be-all and end-all of streaming. But they are likely to at least be influential players in the area.

In the parts of the problem that Cloudera emphasizes, the main tasks that need to be addressed are:

  • Getting data into the plumbing from whatever systems it’s being generated in. This is the province of Flume, one of Cloudera’s earliest projects. I’d add that this is also one of the core competencies of Splunk.
  • Getting data where it needs to go. Flume can do this. Kafka, a publish/subscribe messaging system, can do it in a more general way, because streams are sent to a Kafka broker, which then re-streams them to their ultimate destination.
  • Processing data in flight. Storm can do this. Spark Streaming can do it more easily. Spark Streaming is or soon will be a part of every serious Hadoop distribution. Flume can do some lightweight processing as well.
  • Serving up data for further query. Cloudera would like you to do this via HBase or Impala. But Oracle is a fine choice too, and indeed a popular choice among Cloudera customers.

I guess there’s also a step of receiving data out of the plumbing system. Cloudera and I glossed over that aspect when we talked, but I’ll say:

  • Spark commonly lives over HDFS (Hadoop Distributed File System).
  • Flume feeds HDFS. Flume was also hacked years ago — rah-rah open source! — to feed Kafka instead, and also to be fed by it.

Cloudera has not yet decided whether to make Kafka part of CDH (which stands for Cloudera Distribution yada yada Hadoop). Considerations in that probably include:

  • Kafka has impressive adoption among high-profile internet companies, but not so much among conventional enterprises.
  • Surely not coincidentally, Kafka is missing features in areas such as security (e.g. it lacks Kerberos integration).
  • Kafka lacks cool capabilities to let you configure rather than code, although Cloudera thinks that in some cases you can work around this problem by marrying Kafka and Flume.

I still find it bizarre that a messaging system would be named after an author famous for writing about depressingly inescapable situations. Also, I wish that:

  • Kafka had something to do with transformations.
  • The name Kafka had been used by a commercial software company, which could offer product trials.

Highlights from the Storm vs. Spark Streaming vs. Samza part of my discussion with Cloudera include:

  • Storm has a companion project Trident that makes Storm somewhat easier to program and/or configure. But Trident only has some of the usability advantages of Spark Streaming.
  • Cloudera sees no advantages to Samza, a Kafka companion project, when compared with whichever of Spark Streaming or Storm + Trident is better suited to a particular use case.
  • Cloudera likes the rich set of primitives that Spark Streaming inherits from Spark. Cloudera also notes that, if you learn to program over Spark for any reason, then you will in particular have learned how to program over Spark Streaming.
  • Spark Streaming lets you join Spark Streaming data to other data that Spark can get access to. I agree with Cloudera that this is an important advantage.
  • Cloudera sees Storm’s main advantages as being in latency. If you need 10-200 millisecond latency, Storm can give you that today while Spark Streaming can’t. However, Cloudera notes that to write efficiently to your persistent store — which Cloudera fondly hopes but does not insist will be HBase or Impala — you may need to micro-batch your writes anyway.

Also, Spark Streaming has a major advantage over bare Storm in whether you have to manually configure your topology, but I wasn’t clear as to how far Trident closes that particular gap.

Cloudera and I didn’t particularly talk about data-consuming technologies such as BI, predictive analytics, or analytic applications, but we did review use cases a bit. Nothing too surprising jumped out. Indeed, the discussion reminded me of a 2007 list I did of applications — other than extreme low-latency ones — for CEP (Complex Event Processing).

  • Top-of-mind were things that fit into one or more of the buckets “internet”, “retail”, “recommendation/personalization”, “security” or “anti-fraud”.
  • Transportation/logistics got mentioned, to which I replied that the CEP vendors had all seemed to have one trucking/logistics client each.
  • At least in theory, there are potentially huge future applications in health care.

In general, candidate application areas for streaming-to-Hadoop match those that involve large volumes of machine-generated data.

Edit: Shortly after I posted this, Storm creator Nathan Marz put up a detailed and optimistic post about the history and state of Storm.

Categories: Other

Bash security fix made available for Exadata

Alejandro Vargas - Sun, 2014-10-05 02:29

Complete information about the security fix availability should be reviewed in the following MOS document before applying the fix:


 Responses to common Exadata security scan findings (Doc ID 1405320.1)


The security fix is available for download from:


http://public-yum.oracle.com/repo/OracleLinux/OL5/latest/x86_64/getPackage/bash-3.2-33.el5_11.4.x86_64.rpm


The summary installation instructions are as follows:


1) Download getPackage/bash-3.2-33.el5_11.4.x86_64.rpm


2) Copy bash-3.2-33.el5_11.4.x86_64.rpm into /tmp at both database and storage nodes.
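If you already have the dcli groups described in the passwordless ssh post above, one way to do the copy in a single command (my suggestion, not part of the official instructions) is:

# push the rpm from the driving node to /tmp on every database and storage node
dcli -g all_group -l root -f /tmp/bash-3.2-33.el5_11.4.x86_64.rpm -d /tmp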


3) Remove the exadata-sun-computenode-exact rpm:



rpm -e exadata-sun-computenode-exact



4) On compute nodes install bash-3.2-33.el5_11.4.x86_64.rpm using this command:



 rpm -Uvh /tmp/bash-3.2-33.el5_11.4.x86_64.rpm



5) On storage nodes  install bash-3.2-33.el5_11.4.x86_64.rpm using this command:




rpm -Uvh --nodeps /tmp/bash-3.2-33.el5_11.4.x86_64.rpm


6) Remove /tmp/bash-3.2-33.el5_11.4.x86_64.rpm from all nodes
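As an optional sanity check (again just a suggestion, not from the MOS note), confirm that every node now reports the fixed package:

dcli -g all_group -l root "rpm -q bash"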


As a side effect of applying this fix, during future upgrades on the database nodes a warning will appear informing that:



The "exact package" was not found and it will use minimal instead.


That's a normal and expected message and will not interfere with the upgrade. 







Categories: DBA Blogs

OOW 2014: Day 1

Doug Burns - Sat, 2014-10-04 23:26
Disclosure: I'm attending Openworld at the invitation of the OTN ACE Director program who are paying for my flights, hotel and conference fee. My employer has helpfully let me attend on work time, as well as sending other team mates because they recognise the educational value of attending. Despite that, all of the opinions expressed in these posts are, as usual, all my own.
After the very welcome tradition of breakfast at Lori's Diner, I had time to register and then get myself down to Moscone South for my first session of the day. I'd planned to listen to Paul Vallee's security talk because I'd been unable to register for Gwen Shapira's Analyzing Twitter data with Hadoop session but noticed spare seats as I passed the room, so switched. I love listening to Gwen talk on any subject because her enthusiasm is contagious. A few of the demos went a little wrong but I still got a nice overview of the various components of a Hadoop solution (which is an area I've never really looked at much) so the session flew by. Good stuff.
Next up was Yet-another-Oracle-ACE-Director Arup Nanda's presentation on Demystifying Cache Buffer Chains. The main reason I attended was to see how he presented the subject and wasn't expecting to learn too much but it's an important subject, particularly now I'm working with RAC more often and consolidated environments. CBC latch waits are on my radar once more!
Next up was 12 things about 12c, a session of 12 speakers given 5 minutes to talk about, well, 12c stuff. Debra Lilley organised this and despite all her concerns that she'd expressed leading up to it, it went very smoothly, so hats off to Debra and to the speakers for behaving themselves with the timing! I was particularly concerned that we kicked off with Jonathan Lewis ;-) Big problem with putting him on first - will he actually be able to stay within the time constraints? Because he'll get too excited and want to talk about things in more depth. He did do it, but it was tough as he raced towards the finishing line ;-)
The only thing that bugged me about this was that I hadn't realised it was two session slots (makes complete sense if I'd performed some simple maths!) but it was very annoying when they kicked everyone out of the room at half-time before readmitting them. Yes, there are rules, but this was one of the more stupid. It annoyed me enough that I decided to skip the second half and attend the Enkitec panel session instead.
What an amazing line-up of Exadata geek talent they had on one stage for Expert Oracle Exadata: Then and Now ....
Enkitec Panel

Including most of the authors of the original book as well as the authors who are writing the next edition which should be out before the end of the year.

From left-to-right : Karl Arao, Martin Bach, Kerry Osborne, Andy Colvin, Tanel Poder and Frits Hoogland.

They talked a little about the original version of the book (largely based on V2) and how far Exadata had come since then, but it was a pretty open session with questions shooting around all over the place and great fun. Nice way for me to wrap up my user group conference activities for the day and head out into the sun for Larry's Opening Keynote. 
First we had the traditional vendor blah-blah-blah about stuff I couldn't care less about but, in shocking news, I actually enjoyed it! Maybe it's because it was Intel and so I'm probably more interested in what they're doing, but it was pretty ok stuff. All the keynotes are available online here.
Then it was LarryTime. Seemed on pretty good form by recent standards, although I can summarise it simply as Cloud, Cloud and more Cloud. There's no getting away from the fact that it's been quite the about-turn from him in his attitude towards the Cloud. I did appreciate the "we're only just getting started" message, and I suppose I've become inured to how accurate the actual facts are in his presentations and to the attacks on competitors, so I sort of enjoy his keynotes more than most.
At this stage, the jetlag was biting *hard* and I ended up missing yet another ACE dinner, but from all the reports I heard it was the best ever by some distance, so I was gutted to miss out on it. But when your body is saying sleep whilst you're walking, sometimes you have to listen to it! Then again, when it decides to wake you up again at 2:30, perhaps you should tell it to go and take a running jump!

Upgrading PeopleTools with Zero Downtime (1/3)

Javier Delgado - Sat, 2014-10-04 21:35
A few months ago, BNB concluded a PeopleTools upgrade with quite a curious approach. Our customer, a leading Spanish financial institution, had a PeopleSoft CRM 8.4 installation running under PeopleTools 8.42. Their CRM application was being used to provide support to their 24x7 call centres, and the only reason they had to perform the PeopleTools upgrade was to be able to update their database and WebLogic releases, as the existing ones were already out of support.

Now, the organisation was going through a major structural change, so the customer wanted to perform the PeopleTools upgrade with minimal disruption to their activities, as it was difficult at that time to obtain the needed sponsorship from higher managerial levels. In other words, they wanted to perform the upgrade as silently as possible. This translated into two particular requirements:
  • Ability to perform the PeopleTools change with zero downtime, in order to avoid any impact on the users.
  • Ability to gradually move users from the old PeopleTools release to the new one, practically limiting the impact of any product issue related to the upgrade. In case anything failed, they wanted to be able to move the users back to the old release.
Having performed quite a few PeopleTools upgrades in the past, I knew that following the standard procedures would not help us provide a satisfactory answer to the client. So, after some discussions, the customer agreed to try a non-standard way of upgrading PeopleTools. We agreed to build a prototype, test it, and if everything went well, move to Production. If it did not work out, we would do it the standard way. As it finally turned out, the suggested approach worked.

I cannot say it would work for any other combination of PeopleTools and application versions, nor for a different customer usage of the application. Anyhow, I thought it might be useful to share it with you, in case any of you can enrich the approach with your feedback. In the next post I will describe the approach, and in the third and final one I will describe the issues we faced during the implementation. So... stay tuned ;).

OOW14 Day 5 - not only Oracle OpenWorld

Yann Neuhaus - Sat, 2014-10-04 11:45

Oracle's OpenWorld has ended. It was the first time I attended this great event, and it really is a "great" event:

  • 60000 attendees from 145 countries
  • 500 partners or customers in the exhibit hall
  • 400 demos in the DEMOgrounds
  • 2500 sessions

11g Adaptive Cursor Sharing --- does it work only for SELECT statements ? Using the BIND_AWARE Hint for DML

Hemant K Chitale - Sat, 2014-10-04 08:52
Test run in 11.2.0.2

UPDATE 07-Oct-14 :  I have been able to get the DML statement also to demonstrate Adaptive Cursor Sharing with the "BIND_AWARE" hint as suggested by Stefan Koehler and Dominic Brooks.

Some of you may be familiar with Adaptive Cursor Sharing.

This is an 11g improvement over the "bind peek once and execute repeatedly without evaluating the true cost of execution" behaviour that we see in 10g. Thus, if the predicate is skewed and the bind value is changed, 10g does not "re-peek" and re-evaluate the execution plan. 11g doesn't "re-peek" at the first execution with a new bind, but if it finds the true cardinality returned by the execution at significant variance, it decides to "re-peek" at a subsequent execution. This behaviour is determined by the new attributes "IS_BIND_SENSITIVE" and "IS_BIND_AWARE" for the SQL cursor.

If a column is highly skewed, as determined by the presence of a Histogram, the Optimizer, when parsing an SQL with a bind against the column as a predicate, marks the SQL as BIND_SENSITIVE. If two executions with two different bind values return very different counts of rows for the predicate, the SQL is marked BIND_AWARE. The Optimizer "re-peeks" the bind and generates a new Child Cursor that is marked as BIND_AWARE.

Here is a demo.


SQL> -- create and populate table
SQL> drop table demo_ACS purge;

Table dropped.

SQL>
SQL> create table demo_ACS
2 as
3 select * from dba_objects
4 where 1=2
5 /

Table created.

SQL>
SQL> -- populate the table
SQL> insert /*+ APPEND */ into demo_ACS
2 select * from dba_objects
3 /

75043 rows created.

SQL>
SQL> -- create index on single column
SQL> create index demo_ACS_ndx
2 on demo_ACS (owner) nologging
3 /

Index created.

SQL>
SQL> select count(distinct(owner))
2 from demo_ACS
3 /

COUNT(DISTINCT(OWNER))
----------------------
42

SQL>
SQL> select owner, count(*)
2 from demo_ACS
3 where owner in ('HEMANT','SYS')
4 group by owner
5 /

OWNER COUNT(*)
-------- ----------
HEMANT 55
SYS 31165

SQL>
SQL> -- create a histogram on the OWNER column
SQL> exec dbms_stats.gather_table_stats('','DEMO_ACS',estimate_percent=>100,method_opt=>'FOR COLUMNS OWNER SIZE 250');

PL/SQL procedure successfully completed.

SQL> select column_name, histogram, num_distinct, num_buckets
2 from user_tab_columns
3 where table_name = 'DEMO_ACS'
4 and column_name = 'OWNER'
5 /

COLUMN_NAME HISTOGRAM NUM_DISTINCT NUM_BUCKETS
------------------------------ --------------- ------------ -----------
OWNER FREQUENCY 42 42

SQL>

So, I now have a table that has very different row counts for 'HEMANT' and 'SYS'. The data is skewed. The Execution Plan for queries on 'HEMANT' would not be optimal for queries on 'SYS'.

Let's see a query executing for 'HEMANT'.

SQL> -- define bind variable
SQL> variable target_owner varchar2(30);
SQL>
SQL> -- setup first SQL for 'HEMANT'
SQL> exec :target_owner := 'HEMANT';

PL/SQL procedure successfully completed.

SQL>
SQL> -- run SQL
SQL> select owner, object_name
2 from demo_ACS
3 where owner = :target_owner
4 /

OWNER OBJECT_NAME
-------- ------------------------------
.....
.....

55 rows selected.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1820xq3ggh6p6, child number 0
-------------------------------------
select owner, object_name from demo_ACS where owner = :target_owner

Plan hash value: 805812326

--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"=:TARGET_OWNER)


19 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '1820xq3ggh6p6'
4 order by child_number
5 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
1820xq3ggh6p6 0 805812326 Y N 1 55

SQL> commit;

Commit complete.

SQL>

We see one execution of the SQL Cursor with an Index Range Scan and Plan_Hash_Value 805812326. The SQL is marked BIND_SENSITIVE because of the presence of a Histogram indicating skew.

Now, let's change the bind value from 'HEMANT' to 'SYS' and re-execute exactly the same query.

SQL> -- setup second SQL for 'SYS'
SQL> exec :target_owner := 'SYS';

PL/SQL procedure successfully completed.

SQL>
SQL> -- run SQL
SQL> select owner, object_name
2 from demo_ACS
3 where owner = :target_owner
4 /

OWNER OBJECT_NAME
-------- ------------------------------
.....
.....

31165 rows selected.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1820xq3ggh6p6, child number 0
-------------------------------------
select owner, object_name from demo_ACS where owner = :target_owner

Plan hash value: 805812326

--------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 3 (100)| |
| 1 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("OWNER"=:TARGET_OWNER)


19 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '1820xq3ggh6p6'
4 order by child_number
5 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
1820xq3ggh6p6 0 805812326 Y N 2 31220

SQL> commit;

Commit complete.

SQL>

This time, for 31,165 rows (instead of 55 rows), Oracle has used the same Execution Plan -- the same Plan_Hash_Value and the same expected cardinality of 55 rows. However, the Optimizer is now "aware" that the 55-row Execution Plan actually returned 31,165 rows.

The next execution will see a re-parse because of this awareness.

SQL> -- rerun second SQL
SQL> select owner, object_name
2 from demo_ACS
3 where owner = :target_owner
4 /

OWNER OBJECT_NAME
-------- ------------------------------
.....
.....

31165 rows selected.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 1820xq3ggh6p6, child number 1
-------------------------------------
select owner, object_name from demo_ACS where owner = :target_owner

Plan hash value: 1893049797

------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 299 (100)| |
|* 1 | TABLE ACCESS FULL| DEMO_ACS | 31165 | 2191K| 299 (1)| 00:00:04 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"=:TARGET_OWNER)


18 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '1820xq3ggh6p6'
4 order by child_number
5 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
1820xq3ggh6p6 0 805812326 Y N 2 31220
1820xq3ggh6p6 1 1893049797 Y Y 1 31165

SQL> commit;

Commit complete.

SQL>

Aha ! This time we have a new Plan_Hash_Value (1893049797) for a Full Table Scan, being represented as a new Child Cursor (Child 1) that is now BIND_AWARE.
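If you want to see the numbers that drove this decision, the V$SQL_CS_% views expose them; this is an extra check of mine, not part of the demo script:

# selectivity ranges and row-count buckets recorded for the cursor
sqlplus -s / as sysdba <<'EOF'
select * from v$sql_cs_histogram   where sql_id = '1820xq3ggh6p6';
select * from v$sql_cs_selectivity where sql_id = '1820xq3ggh6p6';
exit
EOF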






Now, here's the catch I see.  If I change the "SELECT ....." statement to an "INSERT .... SELECT ....", I do NOT see this behaviour.  I do NOT see the cursor becoming BIND_AWARE as a new Child Cursor.
Thus, the 3rd pass of an "INSERT ..... SELECT ....." (being the second pass with the Bind Value 'SYS') is correctly BIND_SENSITIVE but not BIND_AWARE. This is what it shows:


SQL> -- rerun second SQL
SQL> insert into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

31165 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID cqyhjz5a5xyu4, child number 0
-------------------------------------
insert into target_tbl ( select owner, object_name from demo_ACS where
owner = :target_owner )

Plan hash value: 805812326

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 3 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("OWNER"=:TARGET_OWNER)


21 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = 'cqyhjz5a5xyu4'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
cqyhjz5a5xyu4 0 805812326 Y N 3 62385

SQL> commit;

Commit complete.

SQL>

Three executions -- one with 'HEMANT' and the second and third with 'SYS' as the Bind Value -- all use the *same* Execution Plan.

So, does this mean that I cannot expect ACS for DML ?


UPDATE 07-Oct-14 :  I have been able to get the DML statement to also demonstrate Adaptive Cursor Sharing, using the "BIND_AWARE" hint as suggested by Stefan Koehler and Dominic Brooks.

SQL> -- run SQL
SQL> insert /*+ BIND_AWARE */ into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

55 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0cca9xusptauj, child number 0
-------------------------------------
insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
from demo_ACS where owner = :target_owner )

Plan hash value: 805812326

---------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 3 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID| DEMO_ACS | 55 | 3960 | 3 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | DEMO_ACS_NDX | 55 | | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("OWNER"=:TARGET_OWNER)


21 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '0cca9xusptauj'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
0cca9xusptauj 0 805812326 Y Y 1 55

SQL> commit;

Commit complete.

SQL>
SQL> -- setup second SQL for 'SYS'
SQL> exec :target_owner := 'SYS';

PL/SQL procedure successfully completed.

SQL>
SQL> -- run SQL
SQL> insert /*+ BIND_AWARE */ into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

31165 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0cca9xusptauj, child number 1
-------------------------------------
insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
from demo_ACS where owner = :target_owner )

Plan hash value: 1893049797

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 299 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
|* 2 | TABLE ACCESS FULL | DEMO_ACS | 31165 | 2191K| 299 (1)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("OWNER"=:TARGET_OWNER)


20 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '0cca9xusptauj'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
0cca9xusptauj 0 805812326 Y Y 1 55
0cca9xusptauj 1 1893049797 Y Y 1 31165

SQL> commit;

Commit complete.

SQL>
SQL> -- rerun second SQL
SQL> insert /*+ BIND_AWARE */ into target_tbl
2 (
3 select owner, object_name
4 from demo_ACS
5 where owner = :target_owner
6 )
7 /

31165 rows created.

SQL>
SQL> -- get execution plan
SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 0cca9xusptauj, child number 1
-------------------------------------
insert /*+ BIND_AWARE */ into target_tbl ( select owner, object_name
from demo_ACS where owner = :target_owner )

Plan hash value: 1893049797

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | | | 299 (100)| |
| 1 | LOAD TABLE CONVENTIONAL | | | | | |
|* 2 | TABLE ACCESS FULL | DEMO_ACS | 31165 | 2191K| 299 (1)| 00:00:04 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("OWNER"=:TARGET_OWNER)


20 rows selected.

SQL>
SQL> -- get SQL query info
SQL> select sql_id, child_number, plan_hash_value, is_bind_sensitive, is_bind_aware, executions, rows_processed
2 from v$SQL
3 where sql_id = '0cca9xusptauj'
4 /

SQL_ID CHILD_NUMBER PLAN_HASH_VALUE I I EXECUTIONS ROWS_PROCESSED
------------- ------------ --------------- - - ---------- --------------
0cca9xusptauj 0 805812326 Y Y 1 55
0cca9xusptauj 1 1893049797 Y Y 2 62330

SQL> commit;

Commit complete.

SQL>

However, there is a noticeable difference.  With the BIND_AWARE Hint, the SQL is Bind Aware right from the first execution (for :target_owner='HEMANT').  So, even at the second execution (for the first run of :target_owner='SYS'), it re-peeks and generates a new Execution Plan (the Full Table Scan) for a new Child (Child 1).
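For completeness : the row-count buckets that Adaptive Cursor Sharing maintains for each child cursor are visible in V$SQL_CS_HISTOGRAM. A minimal sketch of such a query (not captured in the test above; the exact meaning of the buckets is not formally documented) :

select child_number, bucket_id, "COUNT"
from v$sql_cs_histogram
where sql_id = '0cca9xusptauj'
order by child_number, bucket_id
/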
.
.
.
Categories: DBA Blogs

News and Updates from Oracle Openworld 2014

Rittman Mead Consulting - Sat, 2014-10-04 08:48

It’s the Saturday after Oracle Openworld 2014, and I’m now home from San Francisco and back in the UK. It’s been a great week as usual, with lots of product announcements and updates to the BI, DW and Big Data products we use on current projects. Here’s my take on what was announced this last week.

New Products Announced

From a BI and DW perspective, the most significant product announcements were around Hadoop and Big Data. Up to this point most parts of an analytics-focused big data project required you to code the solution yourself, with the diagram below showing the typical three steps in a big data project – data ingestion, analysis and sharing the results.

[Image: the typical three steps in a big data project - data ingestion, analysis and sharing the results]

At the moment, all of these steps are typically performed from the command-line using languages such as Python, R, Pig, Hive and so on, with tools like Apache Flume and Apache Sqoop used to bring data into and out of the Hadoop cluster. Under the covers, these tools use technologies such as MapReduce or Spark to do their work, automatically running jobs in parallel across the cluster and making use of the easy scalability of Hadoop and NoSQL databases.

You can also neatly divide the work up on a big data project into two phases: the "discovery" phase typically performed by a data scientist where data is loaded, analysed, correlated and otherwise "understood" to provide the initial insights, and then an "exploitation" phase where we apply governance, provide the output data in a format usable by BI tools and otherwise share the results with the wider corporate audience. The updated Information Management Reference Architecture we collaborated on with Oracle and launched back in June this year had distinct discovery and exploitation phases, and the architecture itself made a clear distinction between the Innovation part that enabled the discovery phase of a project and the Execution part that delivered the insights and data in a more governed, production setting.

This was the theme of the product announcements around analytics, BI, data warehousing and big data during Openworld 2014, with Oracle’s Omri Traub in the photo below taking us through Oracle’s big data product strategy. What Oracle are doing here is productising and “democratising” big data, putting it clearly in context of their existing database, engineered systems and BI products and linking them all together into an overall information management architecture and delivery process.

[Photo: Oracle's Omri Traub presenting Oracle's big data product strategy]

So working through from ingestion through to data analysis, these steps have typically been performed by data scientists using scripting tools and rudimentary data visualisation engines, making them labour-intensive and reliant on a small set of people conversant with these tools and processes. Oracle Big Data Discovery is aimed squarely at these steps, and combines Apache Spark-based data preparation and transformation capabilities with an analysis and visualisation engine based on Endeca Server.

Key features of Big Data Discovery include:

  • Ability to analyse, parse, explore and “wrangle” data using graphical tools and a Spark-based transformation engine
  • Create a catalog of the data on your Hadoop cluster, and then search that catalog using Endeca Server search technologies
  • Create recommendations of other datasets that might interest you, based on what you’re looking at now
  • Visualize your datasets to help understand what they contain, and discover new insights

Under the covers it comprises two parts: the data loading, transformation and profiling part that uses Apache Spark to do its work in parallel across all the nodes in the cluster, and the analysis part, which takes data prepared by Apache Spark and loads it into the Endeca Server in-memory engine to perform the analysis, aggregation and data visualisation. Unlike the Spark part, the Endeca Server element runs on just one node and limits the size of the analysis dataset to what can run in-memory in the Endeca Server engine, but in practice you're going to work with a sample of the data rather than the entire dataset at that stage (in time the assumption is that the Endeca Server engine will be unbundled and run natively on YARN, giving it the same scalability as the Spark-based data ingestion and transformation part). Initially Big Data Discovery will run on-premise with a cloud version later on, and it's not dependent on Big Data Appliance – expect to see something later this year / early next year.

Another new product that addresses the discovery phase and discovery lab part of a big data project is Oracle Data Enrichment Cloud Service, from the Oracle Data Integration team and designed to complement ODI and Oracle EDQ. Whilst Oracle positioned ODECS as something you’d use as well as Big Data Discovery and typically upstream from BDD, to me there seemed to be a fair bit of overlap between the products, with both tools doing data profiling and transformation but BDD being more focused on the exploration and discovery part, and ODECS being more focused on early-stage data profiling and transformation.

ODECS is clearly more of an ETL tool complement and runs natively in the cloud, right from the start. It’s most probably aimed at customers with their Hadoop dataset already in the cloud, maybe using Amazon Elastic MapReduce or Oracle’s new Hadoop-as-a-Service and has more in common with the old Data Quality Option for Oracle Warehouse Builder than Endeca’s search-first analytic interface. It’s got a very nice interface including a mobile-enabled website and the ability to include and merge in external datasets, including Oracle’s own Data as a Service platform offering. Along with the new Metadata Management tool Oracle also launched at Openworld it’s a great addition to the Oracle Data Integration product suite, but I can’t help thinking that its initial availability only on Oracle’s public cloud platform is going to limit its use with Oracle’s typical customers – we’ll have to just wait and see.

The other major product that addresses big data projects was Oracle Big Data SQL. Partly addressing the discovery phase of big data projects but mostly (to my mind) addressing the exploitation phase, and the execution part of the information management architecture, Big Data SQL gives Oracle Exadata the ability to return data from Hive and NoSQL on the Big Data Appliance as well as data from its normal relational store. I covered Big Data SQL on the blog a few weeks ago and I’ll be posting some more in-depth articles on it next week, but the other main technical innovation with the product is its bringing of Exadata’s SmartScan feature to Hadoop, projecting and filtering data at the Hadoop storage node level and also giving Hadoop the ability to understand regular Oracle SQL, rather than the cut-down version you get with HiveQL.

Where this then leaves us is with the ability to do most of a big data project using (Oracle) tools, bringing big data analysis within reach of organisations with Oracle-style budgets but without access to rare data scientist-type resources. Going back to my diagram earlier, a post-OOW big data project using the new products launched in this last week could look something like this:

[Image: how a post-OOW big data project using the new products launched this week could look]

Big Data SQL is out now and depends on BDA and Exadata for its use; Big Data Discovery should be out in a few months' time, runs on-premise but doesn't require BDA, whilst ODECS is cloud-only and runs on a BDA in the background. Expect more news and more integration/alignment from the products as 2014 ends and 2015 starts, and we're looking forward to using them on Oracle-centric Hadoop projects in the near future.

Product Updates for BI, Data Integration, Exalytics, BI Applications and OBIEE

Other news announced over the week for products we more commonly use on projects include:

Finally, something that we were particularly pleased to see was the updated Oracle Information Management Architecture I mentioned earlier referenced in most of the analytics sessions, with Oracle’s Balaji Yelamanchili for example introducing it in his big data and business analytics general session mid-way through the week. 

We love the way this brings together the big data components and puts them in the context of the wider data warehouse and analytic processes, and compared to a few years ago when Hadoop and big data was considered completely separate to data warehousing and BI and done by staff completely different to the core business analytics team, this new reference architecture puts it squarely within the world of BI and analytics we work in. It also emphasises the new abilities Hadoop, NoSQL databases and big data can bring us – support for wider sets of data sources with dynamic schemas, the ability to economically work with and analyse much larger datasets, and support for discovery-type upfront analysis work. Finally, it recognises that to get true value out of the analysis you start on Hadoop, you eventually need to add proper data governance, make the results more widely available using full SQL tools, and use the right tools – relational databases, OLAP servers and the like – to analyse the data once it's in a more structured form.

If you missed our write-up on the updated Information Management Reference Architecture you can read our two-part blog post here and here, read the Oracle white paper, or listen to the podcast with OTN Archbeat's Bob Rhubart. For now though I'm looking forward to seeing the family after a week and a half away in San Francisco – thanks to OTN and the Oracle ACE Director Program for sponsoring my visit over to SF for Openworld, and we'll post our conference presentation slides later next week when we're back in the UK and US offices.

Categories: BI & Warehousing

Error unzipping PeopleSoft Images

Duncan Davies - Fri, 2014-10-03 18:14

The new PUM images are a boon for anyone wanting to get a PeopleSoft instance up and running quickly. Once you've downloaded the zip archives, however, you might find that the delivered extraction script doesn't work by default for everyone.

The line:

unzip HCM-920-UPD-008_OVA_2of11.zip

gives me the following error:

'unzip' is not recognized as an internal or external command

I’m not sure where the unzip utility is supposed to be from, but it’s not delivered as part of Windows 8.1. I typically use the excellent 7zip utility for my zip/archiving needs, so I need to amend the script slightly.

I add the following line near the top:

set PATH=%PATH%;C:\Program Files\7-Zip\

so that I can reference the extraction tool with just the filename, then I change each archive line to use 7zip instead, thus:

7z e HCM-920-UPD-008_OVA_2of11.zip

PeopleSoft Interaction Hub 9.1/Revision 3 Now Available

PeopleSoft Technology Blog - Fri, 2014-10-03 17:29

Peoplesoft is proud to announce that the PeopleSoft Interaction Hub 9.1/Revision 3 is now Generally Available for new installations. The release is available for download on OSDC or physical shipment through Customer Care.

Here are a few highlights of Revision 3.  See the Release Value Proposition and Release Notes for more detail on what's in this important release.

Branding

In PeopleTools 8.54, the Branding feature has been migrated from Interaction Hub to PeopleTools along with several enhancements. However, there are still useful branding capabilities provided by the Interaction Hub. In a cluster environment, the Interaction Hub will be used to brand across the cluster. The Interaction Hub provides a new Branding WorkCenter that can be used to create, manage, and assign role-specific themes. We've also delivered an "out-of-the-box" Hub similar to the one that is typically shown in our demo examples.

Fluid UX Support

The 8.54 PeopleTools release represents a landmark for PeopleTools and the PeopleSoft user experience. With this release, PeopleSoft introduces the PeopleSoft Fluid User Interface. The Interaction Hub provides some Fluid content, like the Company News (news publications) pagelet.

Interaction Hub Cluster Setup Improvements

The Interaction Hub includes the Unified Navigation WorkCenter, which makes it easier for administrators to install, set up, and monitor the Interaction Hub cluster with other PeopleSoft applications. The Unified Navigation WorkCenter also has a diagnostics page that provides information on the In Network nodes and the SSO setup. This provides a central location for diagnostics and troubleshooting for the PeopleSoft cluster.

Content Management

Content Management is a powerful and popular feature in the Interaction Hub. Revision 3 delivers a new WorkCenter that provides a simple mechanism to create and publish content. The WorkCenter guides the user through the content creation and publishing process. Revision 3 also has an enhancement that enables the creation of Rich Text Editor (RTE) templates.

WCAG 2.0 Support

Enterprises and public sector institutions globally are conforming to accessibility regulations. Revision 3 enhancements make it possible for customers to deliver accessible content in the Interaction Hub. PeopleTools 8.54 makes it possible for PeopleSoft applications to conform to the WCAG 2.0 standards. In Revision 3, the Interaction Hub product conforms to WCAG 2.0 AA standards.

Where to Go For More Information

There is a lot of information available on this new release.  The following documentation is available on the Oracle Technology Network and my.oracle.support:

PeopleSoft Portal Solutions Interaction Hub 9.1 Documentation Home Page.  Pretty much all related documentation is available here.
Here are some links to particularly useful items:
Release Notes
Revision 3 Installation Home Page
Revision 3 Hardware and Software Requirements
Revision 3 Upgrade Home Page
PeopleTools 8.54 Licensing Notes with the new content:

Health care an open target for hackers [VIDEO]

Chris Foot - Fri, 2014-10-03 13:31

Transcript

Think hackers are only after your credit card numbers? Think again.

Hi, welcome to RDX. While the U.S. health care industry is required by law to secure patient information, many organizations are only taking basic protective measures.

According to Reuters, the FBI stated Chinese cybercriminals had broken into a health care organization's database and stolen personal information on about 4.5 million patients. Names, birth dates, policy numbers, billing information and other data can be easily accessed by persistent hackers.

Databases holding this information need to employ active monitoring and automated surveillance tools to ensure unrestricted access isn't allowed. In addition, encrypting patient files is a critical next step.

Thanks for watching. For more security tips, be sure to check in frequently.  

The post Health care an open target for hackers [VIDEO] appeared first on Remote DBA Experts.

An OOW Summary from the ADF and MAF perspective

Shay Shmeltzer - Fri, 2014-10-03 12:39

Another Oracle OpenWorld is behind us, and it was certainly a busy one for us. In case you didn't have a chance to attend, or follow the twitter frenzy during the week, here are the key takeaways that you should be aware of if you are developing with either Oracle ADF or Oracle MAF.

 Oracle Alta UI

We released our design patterns for building modern applications for multiple channels. This includes a new skin and many samples that show you how to create the type of UIs that we are now using for our modern cloud-based interfaces.

All the resources are at http://bit.ly/oraclealta

The nice thing is that you can start using it today in both Oracle ADF Faces and Oracle MAF - just switch the skin to get the basic color scheme. Instructions here.

Note however that Alta is much more than just a color change, if you really want an Alta type UI you need to start designing your UI differently - take a look at some of the screen samples or our demo application for ideas.

Cloud Based Development

A few weeks before OOW we released our Developer Cloud Service in production, and our booth and sessions showing this were quite popular. For those who are not familiar, the Developer Cloud Service gives you a hosted environment for managing your code life cycle (git version management, Hudson continuous integration, and easy cloud deployment), and it also gives you a way to track your requirements and manage team work.

While this would be relevant to any Java developing team, for ADF developers there are specific templates in place to make things even easier.

You can get to experience this in a trial mode by getting a trial Java service account here.

Another developer-oriented cloud service that got a lot of focus this year was the upcoming Oracle Mobile Cloud Service - which includes everything your team will need in order to build mobile backends (APIs, Connectors, Notification, Storage and more). We ran multiple hands-on labs and sessions covering this, and it was featured in many keynotes too.

In the Application development tools general session we also announced that in the future we'll provide a capability called Oracle Mobile Application Accelerator (which we call Oracle MAX for short) which will allow power users to build on-device mobile applications easily through a web interface. The applications will leverage MAF as the framework, and as a MAF developer you'll be able to provide additional templates, components and functionality for those.

Another capability we showed in the same session was a cloud based development environment that we are planning to add to both the Developer Cloud Service and the Mobile Cloud Service - for developers to be able to code in the cloud with the usual functions that you would expect from a modern code editor.

The Developer Community is Alive and Kicking

The ADF and MAF sessions were quite full this year, and additional community activities were successful as well, starting with a set of ADF/MAF sessions by users on the Sunday courtesy of ODTUG and the ADF EMG. In one of the sessions, members of the community announced a new ADF data control for XML. Check out the work they did!

ODTUG also hosted a nice meet up for ADF/MAF developers, and announced their upcoming mobile conference in December. They also have their upcoming KScope15 summer conference that is looking for your abstract right now!

Coding Competition

Want to earn some money on the side? Check out the Oracle MAF Developer Challenge - build a mobile app and you can earn prizes that range from $6,000 to $1,000.

Sessions

With so many events taking place it's sometimes hard to hit all the sessions that you are interested in. And while the best experience is to be in the room, you might get some mileage from just looking at the slides. You can find the slides for many sessions in the session catalog here. And a list of the ADF/MAF sessions here.

See you next year. 

Categories: Development