
Feed aggregator

New Alta UI for ADF UI Shell Application

Andrejus Baranovski - Tue, 2014-10-07 23:14
I have applied the new Alta UI to a customised ADF UI Shell application. The customised version of the ADF UI Shell is taken from my previous blog post - ADF UI Shell Usability Improvement - Tab Contextual Menu. The old application with the new Alta UI looks fresh and clean. Runtime performance is improved too - ADF transfers less content to the browser, which makes the application load and run faster.

Here you can download my sample application with Alta UI applied to ADF UI Shell - MultiTaskFlowApp_v12c.zip.

All three ADF UI Shell tabs are opened and Master-Detail data is displayed in this example:


The new style is applied to the LOV component and buttons, making all buttons and controls more visible and natural:


The customised ADF UI Shell supports a tab menu - the user can close the current tab or other tabs:


There was a change in 12c related to the tab menu: we need to set the align ID property differently. You can see this change in the ADF UI Shell template file - the JavaScript function gets the tab ID to align directly from the component client ID property:


Alta UI is applied simply by changing the skin name in the trinidad-config.xml file:
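For reference, here is a minimal sketch of that change, assuming the application's trinidad-config.xml under WEB-INF and the skin names shipped with JDeveloper 12.1.3:

<?xml version="1.0" encoding="UTF-8"?>
<trinidad-config xmlns="http://myfaces.apache.org/trinidad/config">
  <!-- switch the skin family from the previous default (skyros) to Alta -->
  <skin-family>alta</skin-family>
  <skin-version>v1</skin-version>
</trinidad-config>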


This hidden gem is packaged with the current JDEV 12.1.3 release; you don't need to download anything extra.

Bringing Clarity To The Avalanche Part 1 - OOW14

Floyd Teter - Tue, 2014-10-07 15:50
Since the prior post here, I've had some people ask why I compared Oracle OpenWorld this year to an avalanche.  Well, to be honest, there are two reasons.  First, it was certainly an avalanche of news. You can check all the Oracle press releases related to the conference here (warning: it's pages and pages of information).  Second, I'm tired of using the analogy of sipping or drinking from a firehose...time to try something new.

So let's talk about some User Experience highlights from the conference.  Why am I starting with UX?  Because I like it and it's my blog ;)

Alta UI

OK, let's be clear.  Alta is more of a user interface standard than a full UX, as it focuses strictly on UI rather than the entire user experience.  That being said, it's pretty cool.  It's a very clean and simplified look, and applies many lessons learned through Oracle's (separate) UX efforts.  I could blab on and on about Oracle Alta, but you can learn about it for yourself here.

Beacons

We all love gadgets.  I had the opportunity to get a sneak peek at some of the "projects that aren't quite products yet" in the works at the Oracle UX Labs.  Beacons are a big part of that work.  Turns out that the work has already progressed beyond mere gadgetry.  The beacons were used to help guide me from station to station within the event space - "this booth is ready for you now."  The AppsLab team talks about beacons on a regular basis.  I'm much more sold now on the usefulness of beacon technology than I was before OOW.  This was one of the better applications I've seen at the intersection of Wearables and the Internet of Things.

Simplified UI

I like the concepts behind Simplified UI because well-designed UX drives user acceptance and increases productivity.  Simplified UI was originally introduced for Oracle Cloud Applications back when they were known as Fusion Applications.  But now we're seeing Simplified UI propagating out to other Oracle Applications.  We now see Simplified UI patterns applied to the E-Business Suite, JD Edwards and PeopleSoft.  Different underlying technology for each, but the same look and feel.  Very cool to see the understanding growing within Oracle development that user experience is not only important, but is a value-add product in and of itself.

Simplified UI Rapid Development Kit

Simplified UI is great for Oracle products, but what if I want to extend those products?  Or, even better, what if I want to custom-build products with the same look and feel?  Well, Oracle has made it easy for me to literally steal...in fact, they want me to steal...their secret sauce with the Simplified UI Rapid Development Kit.  Yeah, I'm cheating a bit.  This was actually released before OOW.  But most folks, especially Oracle partners, were unaware prior to the conference.  If I had a nickel for every time I saw a developer's eyes light up over this at OOW, I could buy my own yacht and race Larry across San Francisco Bay.  Worth checking out if you haven't already.

Student Cloud

I'll probably get hauled off to the special prison Oracle keeps for people who toy with the limits of their NDA for this, but it's too cool to keep to myself.  I had the opportunity to work hands-on with an early semi-functional prototype of the in-development Student Cloud application for managing Higher Education continuing education students.  The part that's cool:  you can see great UX design throughout the application.  Very few clicks, even fewer icons, a search-based navigation architecture, and very, very simple business processes for very specific use cases.  I can't wait to see and hear reactions when this app rolls out to the Higher Education market.

More cool stuff next post...

Little script for finding tables for which dynamic sampling was used

XTended Oracle SQL - Tue, 2014-10-07 14:42

You can always download latest version here: http://github.com/xtender/xt_scripts/blob/master/dynamic_sampling_used_for.sql
Current source code:

col owner         for a30;
col tab_name      for a30;
col top_sql_id    for a13;
col temporary     for a9;
col last_analyzed for a30;
col partitioned   for a11;
col nested        for a6;
col IOT_TYPE      for a15;
with tabs as (
      select 
         to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,1))  owner
        ,to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,2))  tab_name
        ,count(*)                                                                    cnt
        ,sum(executions)                                                             execs
        ,round(sum(elapsed_time/1e6),3)                                              elapsed
        ,max(sql_id) keep(dense_rank first order by elapsed_time desc)               top_sql_id
      from v$sqlarea a
      where a.sql_text like 'SELECT /* OPT_DYN_SAMP */%'
      group by
         to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,1))
        ,to_char(regexp_substr(sql_fulltext,'FROM "([^"]+)"."([^"]+)"',1,1,null,2))
)
select tabs.* 
      ,t.temporary
      ,t.last_analyzed
      ,t.partitioned
      ,t.nested
      ,t.IOT_TYPE
from tabs
    ,dba_tables t
where 
     tabs.owner    = t.owner(+)
 and tabs.tab_name = t.table_name(+)
order by elapsed desc
/
col owner         clear;
col tab_name      clear;
col top_sql_id    clear;
col temporary     clear;
col last_analyzed clear;
col partitioned   clear;
col nested        clear;
col IOT_TYPE      clear;

P.S. If you want to find the queries that used dynamic sampling, you can use a query like this:

select s.*
from v$sql s
where 
  s.sql_id in (select p.sql_id 
               from v$sql_plan p
               where p.id=1
                 and p.other_xml like '%dynamic_sampling%'
              )
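As a usage sketch (assuming the script is saved under the file name from the repository above), you can simply run it from SQL*Plus as a user with access to v$sqlarea and dba_tables:

SQL> @dynamic_sampling_used_for.sql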
Categories: Development

OOW : Edition 2015

Jean-Philippe Pinte - Tue, 2014-10-07 14:22
Mark your calendar with the dates of the 2015 edition!
October 25 to 29, 2015

Presentations Available from OpenWorld

Anthony Shorten - Tue, 2014-10-07 11:38

Last week I conducted three sessions on a number of topics. The presentations used in those sessions are now available from the Sessions --> Content Catalog on the Oracle OpenWorld site. Just search for my name (Anthony Shorten) to download the presentations in PDF format.

The sessions available are:

I know a few customers and partners came to me after each session to get a copy of the presentation. They are now available as I pointed out.

Objects versus Insert Statements

Anthony Shorten - Tue, 2014-10-07 11:06

A few times I have encountered issues and problems at customers that can defy explanation. After investigation I usually find out the cause and in some cases it is the way the implementation has created the data in the first place. In the majority of these types of issues, I find that interfaces or even people are using direct INSERT statements against the product database to create data. This is inherently dangerous for a number of reasons and therefore strongly discouraged:

  • Direct INSERT statements frequently miss important data in the object.
  • Direct INSERT statements ignore any product business logic which means the data is potentially inconsistent from the definition of the object. This can cause the product processing to misinterpret the data and may even cause data corruption in extreme cases.
  • Direct INSERT statements ignore product managed referential integrity. We do not use the referential integrity of the data within the database as we allow extensions to augment the behavior of the object and determine the optimal point of checking data integrity. The object has inbuilt referential integrity rules.

To avoid this situation we highly recommend that you only insert data through the object and NOT use direct INSERT statements. The interface to the object can be direct within the product or via Web Services (either directly or through your favorite middleware) to create data from an external source. Running through the object interface ensures not only that the data is complete, but also that it takes into account product referential integrity and conforms to the business rules that you configure for your data.

Take care and create data through the objects.

12c Upgrade and Concurrent Stats Gathering

Jason Arneil - Tue, 2014-10-07 07:50

I was upgrading an Exadata test database from 11.2.0.4 to 12.1.0.2 and I came across a failure scenario I had not encountered before. I’ve upgraded a few databases to both 12.1.0.1 and 12.1.0.2 for test purposes, but this was the first one I’d done on Exadata. And the first time I’d encountered such a failure.

I started the upgrade after checking with the pre upgrade script that everything was ready to upgrade. And I ran with the maximum amount of parallelism:

$ORACLE_HOME/perl/bin/perl catctl.pl -n 8 catupgrd.sql
.
.
.
Serial Phase #:81 Files: 1 A process terminated prior to completion.

Died at catcon.pm line 5084.

That was both annoying and surprising. The line in catcon.pm is of no assistance:

   5080   sub catcon_HandleSigchld () {
   5081     print CATCONOUT "A process terminated prior to completion.\n";
   5082     print CATCONOUT "Review the ${catcon_LogFilePathBase}*.log files to identify the failure.\n";
   5083     $SIG{CHLD} = 'IGNORE';  # now ignore any child processes
   5084     die;
   5085   }

But of more use was the bottom of a catupgrd.log file:

11:12:35 269  /
catrequtlmg: b_StatEvt     = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
declare
*
ERROR at line 1:
ORA-20000: Unable to gather statistics concurrently: Resource Manager is not
enabled.
ORA-06512: at "SYS.DBMS_STATS", line 34567
ORA-06512: at line 152

This error is coming from catrequtlmg.sql. My first thought was to check whether the resource_manager_plan parameter was set, and it turned out it wasn't. However, setting it to default_plan and running this piece of SQL by itself produced the same error:

SQL> @catrequtlmg.sql

PL/SQL procedure successfully completed.

catrequtlmg: b_StatEvt	   = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
declare
*
ERROR at line 1:
ORA-20000: Unable to gather statistics concurrently: Resource Manager is not
enabled.
ORA-06512: at "SYS.DBMS_STATS", line 34567
ORA-06512: at line 152



PL/SQL procedure successfully completed.

I then started thinking about what it meant by gathering statistics concurrently, and I noticed that I had indeed set this database to gather stats concurrently (it's off by default):

SQL> select dbms_stats.get_prefs('concurrent') from dual;

DBMS_STATS.GET_PREFS('CONCURRENT')
--------------------------------------------------------------------------------
TRUE

I then proceeded to turn off this concurrent gathering and rerun the failing SQL:


SQL> exec dbms_stats.set_global_prefs('CONCURRENT','FALSE');

PL/SQL procedure successfully completed.

SQL> select dbms_stats.get_prefs('concurrent') from dual;

DBMS_STATS.GET_PREFS('CONCURRENT')
--------------------------------------------------------------------------------
FALSE


SQL> @catrequtlmg.sql

PL/SQL procedure successfully completed.

catrequtlmg: b_StatEvt	   = TRUE
catrequtlmg: b_SelProps    = FALSE
catrequtlmg: b_UpgradeMode = TRUE
catrequtlmg: b_InUtlMig    = TRUE
catrequtlmg: Deleting table stats
catrequtlmg: Gathering Table Stats OBJ$MIG
catrequtlmg: Gathering Table Stats USER$MIG
catrequtlmg: Gathering Table Stats COL$MIG
catrequtlmg: Gathering Table Stats CLU$MIG
catrequtlmg: Gathering Table Stats CON$MIG
catrequtlmg: Gathering Table Stats TAB$MIG
catrequtlmg: Gathering Table Stats IND$MIG
catrequtlmg: Gathering Table Stats ICOL$MIG
catrequtlmg: Gathering Table Stats LOB$MIG
catrequtlmg: Gathering Table Stats COLTYPE$MIG
catrequtlmg: Gathering Table Stats SUBCOLTYPE$MIG
catrequtlmg: Gathering Table Stats NTAB$MIG
catrequtlmg: Gathering Table Stats REFCON$MIG
catrequtlmg: Gathering Table Stats OPQTYPE$MIG
catrequtlmg: Gathering Table Stats ICOLDEP$MIG
catrequtlmg: Gathering Table Stats TSQ$MIG
catrequtlmg: Gathering Table Stats VIEWTRCOL$MIG
catrequtlmg: Gathering Table Stats ATTRCOL$MIG
catrequtlmg: Gathering Table Stats TYPE_MISC$MIG
catrequtlmg: Gathering Table Stats LIBRARY$MIG
catrequtlmg: Gathering Table Stats ASSEMBLY$MIG
catrequtlmg: delete_props_data: No Props Data

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.

It worked! I was able to upgrade my database in the end.

I wish the preupgrade.sql script would check for this, or indeed that catrequtlmg.sql would disable concurrent gathering during the upgrade.

I would advise checking for this before any upgrade to 12c and turning it off if you find it enabled in one of your about-to-be-upgraded databases.
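As a quick recap of the commands used above, a pre-upgrade check and fix could look like this (a sketch, to be run as SYSDBA in the database you are about to upgrade):

-- check whether concurrent statistics gathering is enabled
select dbms_stats.get_prefs('CONCURRENT') from dual;

-- if it returns TRUE, disable it for the duration of the upgrade
exec dbms_stats.set_global_prefs('CONCURRENT','FALSE');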


iBeacons or The Physical Web?

Oracle AppsLab - Tue, 2014-10-07 06:55

For the past year at the AppsLab we have been exploring the possibilities of advanced user interactions using BLE beacons. A couple of days ago, Google (unofficially) announced that one of their Chrome teams is working on what I’m calling the gBeacon. They are calling it the Physical Web.
This is how they describe it:

“The Physical Web is an approach to unleash the core superpower of the web: interaction on demand. People should be able to walk up to any smart device – a vending machine, a poster, a toy, a bus stop, a rental car – and not have to download an app first. Everything should be just a tap away.

“The Physical Web is not shipping yet nor is it a Google product. This is an early-stage experimental project and we’re developing it out in the open as we do all things related to the web. This should only be of interest to developers looking to test out this feature and provide us feedback.”

Here is a short rundown of how iBeacon works versus the Physical Web beacons:

iBeacon

The iBeacon profile advertises a 30-byte packet containing three values that combined make a unique identifier: UUID, Major and Minor. The mobile device will actively listen for these packets. When it gets close to one of them it will query a database (cloud) or use hard-coded values to determine what it needs to do or show for that beacon. Generally the UUID is set to identify a common organization, the Major value identifies an asset within that organization, and the Minor value identifies a subset of assets belonging to the Major.
For example, if I’m close to the Oracle campus, and I have an Oracle application that is actively listening for beacons, then as I get within reach of any beacon my app can trigger certain interactions related to the whole organization (“Hello Noel, Welcome to Oracle.”) The application had to query a database to know what that UUID represents. As I reach building 200, my application picks up another beacon that contains a Major value of lets say 200. Then my app will do the same and query to see what it represents (“You are in building 200.”) Finally when I get close to our new Cloud UX Lab, a beacon inside the lab will broadcast a Minor ID that represents the lab (“This is the Cloud UX lab, want to learn more?”)

iBeacons are designed to work as a fully closed ecosystem where only the deployed devices (app+beacons+db) will know what a beacon represents. Today I can walk into the Apple store and use a Bluetooth app to “sniff” BLE devices, but unless I know what their UUID/Major/Minor values represent I cannot do anything with that information. Only the official Apple Store app will know what to do when it is near the beacons around the store (“Looks like you are looking for a new iPhone case.”)

As you can see, the iBeacon approach is a “push” method where the device will proactively push actions to you. In contrast, the Physical Web beacon proposes to act as a “pull”, or on-demand, method.

Physical Web

The Physical Web gBeacon will advertise a 28-byte packet containing an encoded URL. Google wants to use the familiar and established method of URLs to tell an application, or an OS, where to find information about physical objects. They plan to use context (physical and virtual) to top-rank what might be most important to you at the current time and display it.


Image from https://github.com/google/physical-web/blob/master/documentation/introduction.md

The Physical Web approach is designed to be a “pull” discovery service where most likely the user will initiate the interaction. For example, when I arrive at the Oracle campus, I can start an application that will scan for nearby gBeacons, or I can open my Chrome browser and do a search.  The application or browser will use context to top-rank nearby objects combined with results. It can also use calendar data, email or Google Now to narrow down interests.  A background process with “push” capabilities could also be implemented. This process could have filters that alert the user to nearby objects of interest.  These interest rules could be predefined or inferred by using Google’s intelligence gathering systems like Google Now.

The main difference between the two approaches is that iBeacon is a closed ecosystem (app+beacons+db), while the Physical Web is intended to be a public, self-discoverable (app/os+beacons+www) physical extension of the web. That said, the Physical Web could also be restricted by using protected websites and encrypted URLs.

Both approaches take steps to prevent the common misconception about these technologies: “Am I going to be spammed as soon as I walk inside a mall?”  The answer is NO. iBeacon is an opt-in service within an app, and Physical Web beacons will mostly work on-demand or will have filter subscriptions.

So there you have it. Which method do you prefer?

Oracle OpenWorld 2014 Highlights

WebCenter Team - Tue, 2014-10-07 06:28

As Oracle OpenWorld 2014 comes to a close, we wanted to reflect on the week and provide some highlights for you all!

We say this every year, but this year's event was one of the best ones yet. We had more than 35 scheduled sessions, plus user group sessions, 10 live product demos, and 7 hands-on labs devoted to Oracle WebCenter and Oracle Business Process Management (Oracle BPM) solutions. This year's Oracle OpenWorld provided broad and deep insight into next-generation solutions that increase business agility, improve performance, and drive personal, contextual, and multichannel interactions. 

Oracle WebCenter & BPM Customer Appreciation Reception

Our 8th annual Oracle WebCenter & BPM Customer Appreciation Reception was held for the second year at San Francisco’s Old Mint, a National Historic Landmark. This was a great evening of networking and relationship building, where the Oracle WebCenter & BPM community had the chance to mingle and make new connections. Many thanks to our partners Aurionpro, AVIO Consulting, Bezzotech, Fishbowl Solutions, Keste, Redstone Content Solutions, TekStream & VASSIT for sponsoring!

Oracle Fusion Middleware Innovation Awards 

Oracle Fusion Middleware Innovation honors Oracle customers for their cutting-edge solutions using Oracle Fusion Middleware. Winners were selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. This year’s winners for WebCenter were Bank of Lebanon and McAfee.


This year’s winners for the BPM category were State Revenue Office, Victoria and Vertafore.


Congratulations winners! 

Oracle Appreciation Event at Treasure Island

We stayed up past our bedtimes rocking to Aerosmith and hip-hopping to Macklemore & Ryan Lewis and Spacehog at the Oracle Appreciation Event. These award-winners—plus free-flowing networking, food, and drink—made Wednesday evening magical at Treasure Island. Once we arrived on Treasure Island, we saw that it had been transformed and we were wowed by the 360-degree views of Bay Area skylines (with an even better view from the top of the Ferris wheel). We tested our skills playing arcade games between acts, and relaxed and enjoyed ourselves after a busy couple of days.

Cloud

Cloud was one of the shining spotlights at OOW this year. For WebCenter and BPM, we had dedicated hands-on labs for Documents Cloud Service and Process Cloud Service @ the Intercontinental. In addition, we had live demos including Documents Cloud Service, Process Cloud Service and Oracle Social Network (OSN) throughout the week. Documents Cloud Service and OSN were featured prominently in the Thomas Kurian OOW Keynote (from the 46-minute mark) and the FMW General Session (from the 40-minute mark).

The Oracle WebCenter & BPM Community

Oracle OpenWorld is unmatched in providing you with opportunities to interact and engage with other WebCenter & BPM customers and experts from among our partner and employee communities. It was great to see everyone, make new connections and reconnect with old friends. We look forward to seeing you all again next year!

BI Applications in Cloud

Dylan's BI Notes - Mon, 2014-10-06 18:28
Prepackaged analytics applications are available as cloud services. The idea is that the client company does not need to use their own hardware and does not need to install the software or apply patches themselves. All they need is a browser. For the end users, there should not be much difference. The BI apps built […]
Categories: BI & Warehousing

Comparing SQL Execution Times From Different Systems

Suppose it's your job to identify SQL that may run slower in the about-to-be-upgraded Oracle Database. It's tricky because no two systems are alike. Just because the SQL run time is faster in the test environment doesn't mean the decision to upgrade is a good one. In fact, it could be disastrous.

For example: if a SQL statement runs in 10 seconds in production and in 20 seconds in QAT, but the production system is twice as fast as QAT, is that a problem? It's difficult to compare SQL run times when the same SQL resides in different environments.

In this posting, I present a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to more correctly detect risky SQL that may be placed into the upgraded production system.

And, there is a cool, free, downloadable tool involved!

Why SQL Can Run Slower In Different Environments
There are a number of reasons why a SQL's run time is different in different systems. An obvious reason is a different execution plan. A less obvious and much more complex reason is a workload intensity or type difference. In this posting, I will focus on CPU speed differences. Actually, what I'll show you is how to remove the CPU speed differences so you can appropriately compare two SQL statements. It's pretty cool.

The Mental Gymnastics
If a SQL statement's elapsed time in production is 10 seconds and 20 seconds in QAT, that’s NOT an issue IF the production system is twice as fast.

If this makes sense to you, then what you did was mentally adjust one of the systems so it could be appropriately compared. This is how I did it:

10 seconds in production * production is 2 times as fast as QA  = 20 seconds 
And in QA the SQL ran in 20 seconds… so really they ran "the same" in both environments. If I am considering placing the SQL from the test environment into the production environment, then this scenario does not raise any risk flags. The "trick" is determining that "production is 2 times as fast as QA" and then creatively using that information.
Determining The "Speed Value"
Fortunately, there are many ways to determine a system's "speed value." Basing the speed value on Oracle's ability to process buffers in memory has many advantages: a real load is not required or even desired, real Oracle code is being run at a particular version, real operating systems are being run and the processing of an Oracle buffer highly correlates with CPU consumption.
Keep in mind, this type of CPU speed test is not an indicator of scalability (the benefit of adding additional CPUs) in any way, shape or form. It is simply a measure of brute force Oracle buffer cache logical IO processing speed based on a number of factors. If you are architecting a system, other tests will be required.
As you might expect, I have a free tool you can download to determine the "true speed" rating. I recently updated it to be more accurate, require fewer Oracle privileges, and also show the execution plan of the speed test tool SQL. (A special thanks to Steve for the execution plan enhancement!) If the execution plan used in the speed tool is different on the various systems, then obviously we can't expect the "true speeds" to be comparable.
You can download the tool HERE.
How To Analyze The Risk
Before we can analyze the risk, we need the "speed value" for both systems. Suppose a faster system means its speed rating is larger. If the production system speed rating is 600 and the QAT system speed rating is 300, then production is deemed "twice as fast."
Now let's put this all together and quickly go through three examples.
This is the core math:
standardized elapsed time = sql elapsed time * system speed value
So if the SQL elapsed time is 25 seconds and the system speed value is 200, then the standardized "apples-to-apples" elapsed time is 5000, which is 25*200. The "standardized elapsed time" is simply a way to compare SQL elapsed times, not what users will feel and not the true SQL elapsed time.
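As a minimal sketch of this arithmetic (using the sample figures from the scenarios below, not real measurements), the standardized times are just a multiplication you could even do in SQL:

select 20*300 as qat_standardized, 10*600 as prd_standardized from dual;

QAT_STANDARDIZED PRD_STANDARDIZED
---------------- ----------------
            6000             6000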
To make this a little more interesting, I'll quickly go through three scenarios focusing on identifying risk.
1. The SQL truly runs the same in both systems.
Here is the math:
QAT standardized elapsed time = 20 seconds X 300 = 6000 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the true speed situation is, QAT = PRD. This means, the SQL effectively runs just as fast in QAT as in production. If someone says the SQL is running slower in QAT and therefore this presents a risk to the upgrade, you can confidently say it's because the PRD system is twice as fast! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.
2. The SQL runs faster in production.
Now suppose the SQL runs for 30 seconds in QAT and for 10 seconds in PRD. If someone was to say, "Well of course it runs slower in QAT because QAT is slower than the PRD system." Really? Everything is OK? Again, to make a fair comparison, we must compare the systems using a standardizing metric, which I have been calling the "standardized elapsed time."
Here are the scenario numbers:
QAT standardized elapsed time = 30 seconds X 300 = 9000 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the QAT standardized elapsed time is greater than the PRD standardized elapsed time. This means the QAT SQL is truly running slower in QAT compared to PRD. Specifically, this means the slower SQL in QAT cannot be fully explained by the slower QAT system. Said another way, while we expect the SQL in QAT to run slower than in the PRD system, we didn't expect it to be quite so slow in QAT. There must be another reason for this slowness, which we are not accounting for. In this scenario, the QAT SQL should be flagged as presenting a significant risk when upgrading from QAT to PRD.
3. The SQL runs faster in QAT.
In this final scenario, the SQL runs for 15 seconds in QAT and for 10 seconds in PRD. Suppose someone was to say, "Well of course the SQL runs slower in QAT. So everything is OK." Really? Everything is OK? To get a better understanding of the true situation, we need to look at their standardized elapsed times.
QAT standardized elapsed time = 15 seconds X 300 = 4500 seconds
PRD standardized elapsed time = 10 seconds X 600 = 6000 seconds
In this scenario, the QAT standardized elapsed time is less than the PRD standardized elapsed time. This means the QAT SQL is actually running faster in QAT, even though the QAT wall time is 15 seconds and the PRD wall time is only 10 seconds. So while most people would flag this QAT SQL as "high risk," we know better! We know the QAT SQL is actually running faster in QAT than in production! In this scenario, the QAT SQL will not be flagged as presenting a significant risk when upgrading from QAT to PRD.
In Summary...
Identifying risk is extremely important while planning for an upgrade. It is unlikely the QAT and production systems will be identical in every way. This mismatch makes identifying risk more difficult. One of the common differences between systems is their CPU processing speed. What I demonstrated was a way to remove the CPU speed differences, so an appropriate "apples to apples" SQL elapsed time comparison can be made, thereby improving our ability to more correctly detect risky SQL that may be placed into the upgraded production system.
What's Next?
Looking at the "standardized elapsed time" based on Oracle LIO processing is important, but it's just one reason why a SQL may have a different elapsed time in a different environment. One of the big "gotchas" in load testing is comparing production performance to a QAT environment with a different workload. Creating an equivalent workload on different systems is extremely difficult to do. But with some very cool math and a clear understanding of performance analysis, we can also create a more "apples-to-apples" comparison, just like we have done with CPU speeds. But I'll save that for another posting.

All the best in your Oracle performance work!

Craig.




Categories: DBA Blogs

New ADF Alta UI for ADF UI Shell

Andrejus Baranovski - Mon, 2014-10-06 15:55
The new skin for ADF in 12c looks great. I have applied it to one of my sample applications with the ADF UI Shell and it works smoothly. Check the Oracle documentation for how to apply Alta UI - it's really easy.

ADF UI Shell with Alta UI - clean and light:


Upcoming Webinar Series: Using Google Search with your Oracle WebCenter or Liferay Portal

Fishbowl will host a series of webinars this month about integrating the Google Search Appliance with an Oracle WebCenter or Liferay Portal. Our new product, the GSA Portal Search Suite, fully exposes Google features within portals while also maintaining the existing look and feel.

The first webinar, “The Benefits of Google Search for your Oracle WebCenter or Liferay Portal”, will be held on Wednesday, October 15 from 12:00-1:00 PM CST. This webinar will focus on the benefits of using the Google Search Appliance, which has the best-in-class relevancy and impressive search features, such as spell check and document preview, that Google users are used to.

Register now

The second webinar, “Integrating the Google Search Appliance and Oracle WebCenter or Liferay Portal”, further explains how Fishbowl’s GSA Portal Search Suite helps improve the process of setting up a GSA with a WebCenter or Liferay Portal. This product uses configurable portlets so users can choose which Google features to enable and provides single sign-on between the portal and the GSA. The webinar will be held on Wednesday, October 22 from 12:00-1:00 PM CST.

Register now

For more information on the GSA Portal Search Suite, read our previous blog post on the topic.

The post Upcoming Webinar Series: Using Google Search with your Oracle WebCenter or Liferay Portal appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Microsoft Hadoop: Taming the Big Challenge of Big Data – Part Three

Pythian Group - Mon, 2014-10-06 11:57

Today’s blog post completes our three-part series with excerpts from our latest white paper, Microsoft Hadoop: Taming the Big Challenge of Big Data. In the first two posts, we discussed the impact of big data on today’s organizations, and its challenges.

Today, we’ll be sharing what organizations can accomplish by using the Microsoft Hadoop solution:

  1. Improve agility. Because companies now have the ability to collect and analyze data essentially in real time, they can more quickly discover which business strategies are working and which are not, and make adjustments as necessary.
  2. Increase innovation. By integrating structured and unstructured data sources, the solution provides decision makers with greater insight into all the factors affecting the business and encouraging new ways of thinking about opportunities and challenges.
  3. Reduce inefficiencies. Data that currently resides in conventional data management systems can be migrated into Parallel Data Warehouse (PDW) for faster information delivery.
  4. Better allocate IT resources. The Microsoft Hadoop solution includes a powerful, intuitive interface for installing, configuring, and managing the technology, freeing up IT staff to work on projects that provide higher value to the organization.
  5. Decrease costs. Previously, because of the inability to effectively analyze big data, much of it was dumped into data warehouses on commodity hardware, which is no longer required thanks to Hadoop.

Download our full white paper to learn which companies are currently benefiting from Hadoop, and how you can achieve the maximum ROI from the Microsoft Hadoop solution.

Don’t forget to check out part one and part two of our Microsoft Hadoop blog series.

Categories: DBA Blogs

Clarity In The Avalanche

Floyd Teter - Mon, 2014-10-06 10:04
So I've spent the days since Oracle OpenWorld 14 decompressing...puttering in the garden, BBQing for family, running errands.  The idea was to give my mind time to process all the things I saw and heard at OOW this year.  Big year - it was like trying to take a sip from a firehose.  Developing any clarity around the avalanche of news has been tough.

If you average out all of Oracle's new product development, it comes to a rate of one new product release every working day of the year.  And I think they saved up bunches for OOW. It was difficult to keep up.

It was also difficult to physically keep up with things at OOW, as Oracle utilized the concept of product centers and spread things out over even more of downtown San Francisco this year. For example, Cloud ERP products were centered in the Westin on Market Street.  Cloud HCM was located at the Palace Hotel.  Sales Cloud took over the 2nd floor of Moscone West.  Higher Education focused around the Marriott Marquis. Anything UX, as well as many other hands-on labs, happened at the InterContinental Hotel.  And, of course, JavaOne took place at the Hilton on Union Square along with the surrounding area.  The geographical separation required even more in the way of making tough choices about where to be and when to be there.

With all that, I think I've figured out a way to organize my own take on the highlights from OOW - with a tip o' the hat to Oracle's Thomas Kurian.  Thomas sees Oracle as based around five product lines:  engineered systems, database, middleware, packaged applications, and cloud services. The more I consider this framework, the more it makes sense to me.  So my plan is to organize the news from OOW around these five product lines over the next few posts here.  We'll see if we can't find some clarity in the avalanche.

rsyslog: Send logs to Flume

Surachart Opun - Mon, 2014-10-06 04:12
Good day for learning something new. After reading a Flume book, something popped into my head: I wanted to test "rsyslog" => Flume => HDFS. As we know, rsyslog can forward logs to other systems. We can set it up like this:
*.* @YOURSERVERADDRESS:YOURSERVERPORT ## for UDP
*.* @@YOURSERVERADDRESS:YOURSERVERPORT ## for TCP

In my rsyslog.conf:
[root@centos01 ~]# grep centos /etc/rsyslog.conf
*.* @centos01:7777

Coming back to Flume, I used the Simple Example for reference and changed it a bit, because I wanted it to write to HDFS.
[root@centos01 ~]# grep "^FLUME_AGENT_NAME\="  /etc/default/flume-agent
FLUME_AGENT_NAME=a1
[root@centos01 ~]# cat /etc/flume/conf/flume.conf
# example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
#a1.sources.r1.type = netcat
a1.sources.r1.type = syslogudp
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 7777
# Describe the sink
#a1.sinks.k1.type = logger
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://localhost:8020/user/flume/syslog/%Y/%m/%d/%H/
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.batchSize = 10000
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 10000
a1.sinks.k1.hdfs.filePrefix = syslog
a1.sinks.k1.hdfs.round = true


# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
[root@centos01 ~]# /etc/init.d/flume-agent start
Flume NG agent is not running                              [FAILED]
Starting Flume NG agent daemon (flume-agent):              [  OK  ]

I then tested it by logging in via ssh and watching the Flume log:
[root@centos01 ~]#  tail -0f  /var/log/flume/flume.log
06 Oct 2014 16:35:40,601 INFO  [hdfs-k1-call-runner-0] (org.apache.flume.sink.hdfs.BucketWriter.doOpen:208)  - Creating hdfs://localhost:8020/user/flume/syslog/2014/10/06/16//syslog.1412588139067.tmp
06 Oct 2014 16:36:10,957 INFO  [hdfs-k1-roll-timer-0] (org.apache.flume.sink.hdfs.BucketWriter.renameBucket:427)  - Renaming hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067.tmp to hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]# hadoop fs -ls hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r--   1 flume supergroup        299 2014-10-06 16:36 hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
[root@centos01 ~]#
[root@centos01 ~]#
[root@centos01 ~]# hadoop fs -cat hdfs://localhost:8020/user/flume/syslog/2014/10/06/16/syslog.1412588139067
14/10/06 16:37:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
sshd[20235]: Accepted password for surachart from 192.168.111.16 port 65068 ssh2
sshd[20235]: pam_unix(sshd:session): session opened for user surachart by (uid=0)
su: pam_unix(su-l:session): session opened for user root by surachart(uid=500)
su: pam_unix(su-l:session): session closed for user root

Looks good... Anyway, it still needs more adaptation...



Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Why the In-Memory Column Store is not used (II)

Karl Reitschuster - Mon, 2014-10-06 03:10
Now, after some research, I detected one simple rule for provoking In-Memory scans:

Oracle In-Memory Column Store Internals – Part 1 – Which SIMD extensions are getting used?

Tanel Poder - Sun, 2014-10-05 23:51

This is the first entry in a series of random articles about some useful internals-to-know of the awesome Oracle Database In-Memory column store. I intend to write about Oracle’s IM stuff that’s not already covered somewhere else and also about some general CPU topics (that are well covered elsewhere, but not always so well known in the Oracle DBA/developer world).

Before going into further details, you might want to review the Part 0 of this series and also our recent Oracle Database In-Memory Option in Action presentation with some examples. And then read this doc by Intel if you want more info on how the SIMD registers and instructions get used.

There’s a lot of talk about the use of your CPUs’ SIMD vector processing capabilities in the Oracle inmemory module, let’s start by checking if it’s enabled in your database at all. We’ll look into Linux/Intel examples here.

The first generation of SIMD extensions in the Intel Pentium world was called MMX. It added 8 new 64-bit MMn registers (the wider XMMn registers came later with SSE). Over time the registers got wider, and more registers and new features were added. The later extension sets were called Streaming SIMD Extensions (SSE, SSE2, SSSE3, SSE4.1, SSE4.2) and Advanced Vector Extensions (AVX and AVX2).

The currently available AVX2 extensions provide 16 x 256 bit YMMn registers, and AVX-512 in the upcoming Knights Landing microarchitecture (year 2015) will provide 32 x 512 bit ZMMn registers for vector processing.

So how do you check which extensions your CPU supports? On Linux, the "flags" field in /proc/cpuinfo easily provides this info.

Let’s check the Exadatas in our research lab:

Exadata V2:

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Xeon(R) CPU           E5540  @ 2.53GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

So the highest SIMD extension support on this Exadata V2 is SSE4.2 (No AVX!)

Exadata X2:

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Xeon(R) CPU           X5670  @ 2.93GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

Exadata X2 also has SSE4.2 but no AVX.

Exadata X3:

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
avx
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

The Exadata X3 supports the newer AVX too.

My laptop (Macbook Pro late 2013):
The Exadata X4 has not yet arrived in our lab, so I'm using my laptop as an example of the latest available CPU generation with AVX2:

Update: Jason Arneil commented that the X4 does not have AVX2 capable CPUs (but the X5 will)

$ grep "^model name" /proc/cpuinfo | sort | uniq
model name	: Intel(R) Core(TM) i7-4960HQ CPU @ 2.60GHz

$ grep ^flags /proc/cpuinfo | egrep "avx|sse|popcnt" | sed 's/ /\n/g' | egrep "avx|sse|popcnt" | sort | uniq
avx
avx2
popcnt
sse
sse2
sse4_1
sse4_2
ssse3

The Core-i7 generation supports everything up to the current AVX2 extension set.

So, which extensions is Oracle actually using? Let’s check!

As Oracle needs to run different binary code on CPUs with different capabilities, some of the In-Memory Data (kdm) layer code has been duplicated into separate external libraries – and then gets dynamically loaded into Oracle executable address space as needed. You can run pmap on one of your Oracle server processes and grep for libshpk:

$ pmap 21401 | grep libshpk
00007f0368594000   1604K r-x--  /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libshpksse4212.so
00007f0368725000   2044K -----  /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libshpksse4212.so
00007f0368924000     72K rw---  /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libshpksse4212.so

My (educated) guess is that the “shpk” in libshpk above stands for oS dependent High Performance [K]ompression. “s” prefix normally means platform dependent (OSD) code and this low-level SIMD code sure is platform and CPU microarchitecture version dependent stuff.

Anyway, the above output from an Exadata X2 shows that SSE4.2 SIMD HPK libraries are used on this platform (and indeed, X2 CPUs do support SSE4.2, but not AVX).

Let’s list similar files from $ORACLE_HOME/lib:

$ cd $ORACLE_HOME/lib
$ ls -l libshpk*.so
-rw-r--r-- 1 oracle oinstall 1818445 Jul  7 04:16 libshpkavx12.so
-rw-r--r-- 1 oracle oinstall    8813 Jul  7 04:16 libshpkavx212.so
-rw-r--r-- 1 oracle oinstall 1863576 Jul  7 04:16 libshpksse4212.so

So, there are libraries for AVX and AVX2 in the lib directory too (the “12” suffix for all file names just means Oracle version 12). The AVX2 library is almost empty though (and the nm/objdump commands don’t show any Oracle functions in it, unlike in the other files).

Let’s run pmap on a process in my new laptop (which supports AVX and AVX2 ) to see if the AVX2 library gets used:

$ pmap 18969 | grep libshpk     
00007f85741b1000   1560K r-x-- libshpkavx12.so
00007f8574337000   2044K ----- libshpkavx12.so
00007f8574536000     72K rw--- libshpkavx12.so

Despite my new laptop supporting AVX2, only the AVX library is used (the AVX2 library is named libshpkavx212.so). So it looks like the AVX2 extensions are not used yet in this version (it’s the first Oracle 12.1.0.2 GA release without any patches). I’m sure this will be added soon, along with more features and bugfixes.

To be continued …


Adding Oracle Big Data SQL to ODI12c to Enhance Hive Data Transformations

Rittman Mead Consulting - Sun, 2014-10-05 15:29

An updated version of the Oracle BigDataLite VM came out a couple of weeks ago, and as well as updating the core Cloudera CDH software to the latest release it also included Oracle Big Data SQL, the SQL access layer over Hadoop that I covered on the blog a few months ago (here and here). Big Data SQL takes the SmartScan technology from Exadata and extends it to Hadoop, presenting Hive tables and HDFS files as Oracle external tables and pushing down the filtering and column-selection of data to individual Hadoop nodes. Any table registered in the Hive metastore can be exposed as an external table in Oracle, and a BigDataSQL agent installed on each Hadoop node gives them the ability to understand full Oracle SQL syntax rather than the cut-down SQL dialect that you get with Hive.


There’s two immediate use-cases that come to mind when you think about Big Data SQL in the context of BI and data warehousing; you can use Big Data SQL to include Hive tables in regular Oracle set-based ETL transformations, giving you the ability to reference Hive data during part of your data load; and you can also use Big Data SQL as a way to access Hive tables from OBIEE, rather than having to go through Hive or Impala ODBC drivers. Let’s start off in this post by looking at the ETL scenario using ODI12c as the data integration environment, and I’ll come back to the BI example later in the week.

You may recall in a couple of earlier posts earlier in the year on ETL and data integration on Hadoop, I looked at a scenario where I wanted to geo-code web server log transactions using an IP address range lookup file from a company called MaxMind. To determine the country for a given IP address you need to locate the IP address of interest within ranges listed in the lookup file, something that’s easy to do with a full SQL dialect such as that provided by Oracle:


In my case, I’d want to join my Hive table of server log entries with a Hive table containing the IP address ranges, using the BETWEEN operator – except that Hive doesn’t support any type of join other than an equi-join. You can use Impala and a BETWEEN clause there, but in my testing anything other than a relatively small log file Hive table took massive amounts of memory to do the join as Impala works in-memory which effectively ruled-out doing the geo-lookup set-based. I then went on to do the lookup using Pig and a Python API into the geocoding database but then you’ve got to learn Pig, and I finally came up with my best solution using Hive streaming and a Python script that called that same API, but each of these are fairly involved and require a bit of skill and experience from the developer.

But this of course is where Big Data SQL could be useful. If I could expose the Hive table containing my log file entries as an Oracle external table and then join that within Oracle to an Oracle-native lookup table, I could do my join using the BETWEEN operator and then output the join results to a temporary Oracle table; once that’s done I could then use ODI12c’s sqoop functionality to copy the results back down to Hive for the rest of the ETL process. Looking at my Hive database using SQL*Developer 4.0.3’s new ability to work with Hive tables I can see the table I’m interested in listed there:


and I can also see it listed in the DBA_HIVE_TABLES static view that comes with Big Data SQL on Oracle Database 12c:

SQL> select database_name, table_name, location
  2  from dba_hive_tables
  3  where table_name like 'access_per_post%';

DATABASE_N TABLE_NAME             LOCATION
---------- ------------------------------ --------------------------------------------------
default    access_per_post        hdfs://bigdatalite.localdomain:8020/user/hive/ware
                      house/access_per_post

default    access_per_post_categories     hdfs://bigdatalite.localdomain:8020/user/hive/ware
                      house/access_per_post_categories

default    access_per_post_full       hdfs://bigdatalite.localdomain:8020/user/hive/ware
                      house/access_per_post_full

There are various ways to create the Oracle external tables over Hive tables in the linked Hadoop cluster, including using the new DBMS_HADOOP package to create the Oracle DDL from the Hive metastore table definitions, or using SQL*Developer Data Modeler to generate the DDL from modelled Hive tables, but if you know the Hive table definition and it’s not too complicated, you might as well just write the DDL statement yourself using the new ORACLE_HIVE external table access driver. In my case, to create the corresponding external table for the Hive table I want to geo-code, it looks like this:

CREATE TABLE access_per_post_categories(
  hostname varchar2(100), 
  request_date varchar2(100), 
  post_id varchar2(10), 
  title varchar2(200), 
  author varchar2(100), 
  category varchar2(100),
  ip_integer number)
organization external
(type oracle_hive
 default directory default_dir
 access parameters(com.oracle.bigdata.tablename=default.access_per_post_categories));

Then it’s just a case of importing the metadata for the external table over Hive, and the tables I’m going to join to and then load the results into, into ODI’s repository and then create a mapping to bring them all together.


Importantly, I can create the join between the tables using the BETWEEN clause, something I just couldn’t do when working with Hive tables on their own.
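To give a feel for the SQL behind that join, here is a minimal sketch written as plain SQL (the geocoding lookup table and its column names are hypothetical; only access_per_post_categories and its ip_integer column come from the DDL above):

select a.hostname, a.request_date, g.country_name
from   access_per_post_categories a
       join geoip_country_ranges g
       on a.ip_integer between g.start_ip_integer and g.end_ip_integer;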


Running the mapping then joins the webserver log table to the geocoding IP address range lookup table through the Oracle SQL engine, removing all the complexity of using Hive streaming, Pig or the other workaround solutions I used before. What I can then do is add a further step to the mapping to take the output of my join and use that to load the results back into Hive, like this:


I’ll then use IKM SQL to to Hive-HBase-File (SQOOP) knowledge module to set up the export from Oracle into Hive.


Now, when I run the mapping I can see the initial table join taking place between the Oracle native table and the Hive-sourced external table, and the results then being exported back into Hadoop at the end using the Sqoop KM.


Finally, I can view the contents of the downstream Hive table loaded via Sqoop, and see that it does in fact contain the country name for each of the page accesses.


Oracle Big Data SQL isn’t a solution suitable for everyone; it only runs on the BDA and requires Exadata for the database access, and it’s an additional license cost on top of the base BDA software bundle. But if you’ve got it available it’s an excellent way to blend Hive and Oracle data, and a great way around some of the restrictions around HiveQL and the Hive JDBC/ODBC drivers. More on this topic later next week, when I’ll look at using Big Data SQL in conjunction with OBIEE 11g.

Categories: BI & Warehousing