
Feed aggregator

Magical Links for a Tuesday in December

Oracle AppsLab - Tue, 2014-12-02 13:28

It’s difficult to make a link post seem interesting. Anyway, I have some nuggets from the Applications User Experience desk plus bonus robot video because it’s Tuesday.

Back to Basics. Helping You Phrase that Alta UI versus UX Question

Always entertaining bloke and longtime Friend of the ‘Lab, Ultan (@ultan) answers a question we get a lot: what’s the difference between UI and UX?

From Coffee Table to Cloud at a Glance: Free Oracle Applications Cloud UX eBook Available

Next up, another byte from Ultan on a new and free eBook (registration required) produced by OAUX called “Oracle Applications Cloud User Experiences: Trends and Strategy.” If you’ve seen our fearless leader, Jeremy Ashley (@jrwashley), present recently, you might recognize some of the slides.


Oh and if you like eBooks and UX, make sure to download the Oracle Applications Cloud Simplified User Interface Rapid Development Kit.

Today, We Are All Partners: Oracle UX Design Lab for PaaS

And hey, another post from Ultan about an event he ran a couple weeks ago, the UX Design Lab for PaaS.

Friend of the ‘Lab, David Haimes (@dhaimes), and several of our team members, Anthony (@anthonyslai), Mark (@mvilrokx), and Tony, participated in this PaaS4SaaS extravaganza, and although I can’t discuss details, they built some cool stuff and had oodles of fun. Yes, that’s a specific unit of fun measurement.


Mark (@mvilrokx) and Anthony (@anthonyslai) juggle balloons for science.

Amazon’s robotic fulfillment army

From kottke.org, a wonderful ballet of Amazon’s fulfillment robots.

https://www.youtube.com/watch?v=tMpsMt7ETi8

Trends in Big Data, Hadoop, Business Intelligence, Analytics and Dashboards

Nilesh Jethwa - Tue, 2014-12-02 10:34

How has the interest in Big Data, Hadoop, Business Intelligence, Analytics and Dashboards changed over the years?

One easy way to gauge interest is to measure how much news is generated for each term, and Google Trends allows you to do that very easily.

Plugging all of the above terms into Google Trends and analyzing the results leads to the following visualizations.

Aggregating the results by year


It is quite remarkable that the stream representing Dashboards has remained constant throughout the years.

The streams for Analytics and Business Intelligence in general exhibit a similar trend.

The Analytics stream widens as we move forward, helped by the fact that terms such as Hadoop, Big Data and Analytics are increasingly used together.

Now check the line chart below.


It looks like the trend for Dashboards defines the lower bound and the trend for Business Intelligence defines the upper bound. The trend for Hadoop started around the first quarter of 2007, the trend for Big Data around the third quarter of 2008, and both have been rising rapidly ever since. It remains to be seen whether they will cross “Business Intelligence” in popularity or merge and find a stable position somewhere in the middle.

Before Big Data and Hadoop came into the picture, the term “Analytics” held stable ground closer to Dashboards, but now the trend for Analytics seems to be following Big Data and Hadoop.

Let us take a deeper look at each week since 2004.


Look at the downward spikes occurring around Christmas time. Nobody wants to hear about Big Data or Dashboards during the holidays.

And finally, here is a quarterly cyclical view


Click here to view the full interactive Visualizations

anonymous cypher suites for SSL (and a 12c pitfall)

Laurent Schneider - Tue, 2014-12-02 08:21

If you configure your listener for encryption only, you do not really need authentication.

It works fine up to 11.2.0.2; I have written multiple posts on SSL.

You add SSL_CLIENT_AUTHENTICATION=FALSE to your server sqlnet.ora and listener.ora and specify an “anon” cipher suite in your client. You do not need to validate the certificate, so a default wallet will do.


orapki wallet create -wallet . -auto_login_only

sqlnet.ora

WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=.)))
ssl_cipher_suites=(SSL_DH_anon_WITH_3DES_EDE_CBC_SHA)
NAMES.DIRECTORY_PATH=(TNSNAMES)

tnsnames.ora

DB01=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=srv01.example.com)(PORT=1521))(CONNECT_DATA=(SID=DB01)))

Or, if you use Java, the default truststore - usually located in $JAVA_HOME/jre/lib/security/cacerts - will also do.


    System.setProperty("oracle.net.ssl_cipher_suites", "SSL_DH_anon_WITH_DES_CBC_SHA");

On some platforms, however, you may get something like: IBM’s Client TrustManager does not allow anonymous cipher suites.

So far so good, but if you upgrade your listener to 11.2.0.3/4 or 12c, the anonymous suites won’t be accepted if they are not explicitly set up in sqlnet.ora. This is documented in Note 1434966.1.

You will get something like “ORA-28860: Fatal SSL error”, “TNS-12560: TNS:protocol adapter error” in Oracle or “SSLHandshakeException: Received fatal alert: handshake_failure”, “SQLRecoverableException: I/O-Error: Received fatal alert: handshake_failure” in java.

There are two obvious ways to fix this. The preferred approach is to not use anonymous suites (they seem to have disappeared from the supported cipher suites in the documentation).

For this task, you use another cipher suite. The easiest way is to not specify any or just use one like TLS_RSA_WITH_AES_128_CBC_SHA (java) / SSL_RSA_WITH_AES_128_CBC_SHA (sqlnet). Even if you do not use client authentication, you will then have to authenticate the server, and import the root ca in the wallet or the keystore.
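
If you do need to do that import, the commands are the usual wallet and keystore ones. As a rough sketch (rootca.crt stands in for your root CA certificate, and the exact password options depend on how the wallet and keystore were created):

# add the root CA to the Oracle wallet
orapki wallet add -wallet . -trusted_cert -cert rootca.crt -pwd "***"

# add the root CA to the java keystore
keytool -importcert -alias rootca -file rootca.crt -keystore keystore.jks -storepass "***"
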
sqlnet.ora


# comment out ssl_cipher_suites=(SSL_DH_anon_WITH_3DES_EDE_CBC_SHA)

java

// comment out : System.setProperty("oracle.net.ssl_cipher_suites", "SSL_DH_anon_WITH_DES_CBC_SHA");
System.setProperty("javax.net.ssl.trustStore","keystore.jks");
System.setProperty("javax.net.ssl.trustStoreType","JKS");
System.setProperty("javax.net.ssl.trustStorePassword","***");

Or, as documented in metalink, define the suite in sqlnet.ora and listener.ora if you use 11.2.0.3 or 11.2.0.4.
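
For that second approach, the server-side configuration would be along these lines (a sketch only; as per the note, the suite goes in both sqlnet.ora and listener.ora on the server):

SSL_CLIENT_AUTHENTICATION=FALSE
SSL_CIPHER_SUITES=(SSL_DH_anon_WITH_3DES_EDE_CBC_SHA)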

StatsPack and AWR Reports -- Bits and Pieces -- 4

Hemant K Chitale - Tue, 2014-12-02 08:05
This is the fourth post in a series.

Post 1 is here.
Post 2 is here.
Post 3 is here.

Buffer Cache Hit Ratios

Many novice DBAs may use Hit Ratios as indicators of performance.  However, these can be misleading or incomplete.
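
For reference, the ratio reported in these sections is essentially the classic V$SYSSTAT formula. A quick sketch of how you could compute it yourself, shown purely for illustration:

select round(1 - phy.value / (cur.value + con.value), 4) as buffer_cache_hit_ratio
from v$sysstat cur, v$sysstat con, v$sysstat phy
where cur.name = 'db block gets'
and con.name = 'consistent gets'
and phy.name = 'physical reads'
/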

Here are two examples :

Extract A: 9i StatsPack

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer  Hit   %:   99.06

It would seem that with only 0.94% of reads being physical reads, the database is performing optimally.  So, the DBA doesn't need to look any further.  
Or so it seems.
If the DBA spends some time reading the report, he then comes across this:
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                                   % Total
Event                                               Waits    Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
db file sequential read                           837,955       4,107    67.36
CPU time                                                        1,018    16.70
db file scattered read                             43,281         549     9.00


                                                                    Avg
                                                      Total Wait   wait    Waits
Event                               Waits   Timeouts    Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
db file sequential read           837,955          0      4,107      5    403.3
db file scattered read             43,281          0        549     13     20.8
Physical I/O is a significant proportion (76%) of total database time.  88% of the physical I/O is single-block  reads ("db file sequential read").  This is where the DBA must identify that tuning *is* required.
Considering the single-block access pattern, it is likely that a significant proportion of these reads are against index blocks as well. Increasing the buffer cache might help cache the index blocks.
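
To check whether a larger buffer cache is actually predicted to reduce physical reads, V$DB_CACHE_ADVICE gives an estimate. A sketch, assuming db_cache_advice is set to ON:

select size_for_estimate, size_factor, estd_physical_read_factor, estd_physical_reads
from v$db_cache_advice
where name = 'DEFAULT'
order by size_for_estimate
/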


Extract B : 10.2 AWR
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %:              99.98      Redo NoWait %:             100.00
Buffer Hit %:                 96.43      In-memory Sort %:           99.99
Library Hit %:                97.16      Soft Parse %:               98.16
Execute to Parse %:           25.09      Latch Hit %:                99.85
Parse CPU to Parse Elapsd %:  89.96      % Non-Parse CPU:            96.00
The Buffer Hit Ratio is very good.  Does that mean that I/O is not an issue ?
Look again at the same report:
Top 5 Timed Events
Event                          Waits        Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                    147,593                     42.3
db file sequential read        31,776,678    87,659        3            25.1          User I/O
db file scattered read         19,568,220    79,142        4            22.7          User I/O
RMAN backup & recovery I/O      1,579,314    37,650       24            10.8          System I/O
read by other session           3,076,110    14,216        5             4.1          User I/O
User I/O is actually significant.  The SQLs with the highest logical I/O need to be reviewed for tuning.
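
One quick way to start that review, outside of the "SQL ordered by Gets" section of the report itself, is to pull the top statements by logical I/O from V$SQL. A sketch for 10g and later:

select *
from (select sql_id, buffer_gets, executions,
             round(buffer_gets / nullif(executions, 0)) as gets_per_exec
      from v$sql
      order by buffer_gets desc)
where rownum <= 10
/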


Categories: DBA Blogs

Engaging Digital Experiences

WebCenter Team - Tue, 2014-12-02 07:15
By Mitchell Palski, Oracle WebCenter Sales Consultant
This week we’re happy to have Oracle WebCenter expert Mitchell Palski join us for a Q&A around how you can Deliver Engaging Digital Experiences to your end users.
Q: So before we dive into the topic, it might be helpful for our readers if you could define what the terms Digital Business and Digital Experience mean? First let’s describe how we are defining those terms – Digital Business and Digital Experience. 
Digital business is the use of any technology to promote, sell and enable innovative products, services and experiences. Digital Business isn't about digitizing everything in sight; it's about leveraging digital technologies. A Digital Business is an enterprise that embraces technological advances in a way that:
  • Empowers their customers, citizens, employees, suppliers, and partners
  • Optimizes their business operations
  • Fuels innovation and increases business
In order to ensure you are meeting and anticipating customer expectations, your Digital Business needs to deliver a consistent and engaging Digital Experience to customers, citizens, employees and partners. 
To be successful in this area, you must deliver engaging digital experiences that are:
  1. Consistent across all channels to drive user loyalty
  2. Delivering relevant content (including links, documents, and promotions) to the right users at the right time

Q: It seems that today, everyone is trying to deliver these engaging digital experiences. Can you touch on the importance and why organizations should take action now if they aren’t already doing this?
Today’s consumers are “plugged in” 24/7. They demand instant access to information and transactional capabilities whenever they want them. They are savvy when it comes to making decisions and unafraid to make a change if a company no longer meets their expectations.  

It's not enough for your organization to just have an attractive look-and-feel; it's more important to deliver a positive customer experience and provide responsive customer service. Organizations have to differentiate themselves across all channels and interactions to not only engage customers but to also retain them in loyal, long-term relationships.

Q: What types of technologies are out there if an organization was looking to deliver digital experiences to their end users?
Oracle WebCenter can be used in a variety of ways to deliver engaging Digital Experiences:

  • Oracle WebCenter Sites is a Web Experience Management (WEM) tool that enables business users to easily create and manage contextually relevant social and interactive online experiences across multiple channels on a global scale to drive sales and loyalty.
  • Oracle WebCenter Portal is an enterprise Portal platform that can deliver role-based user experiences for use cases such as:
    • Employee intranets (e.g. HR portal, Finance portal)
    • Citizen self-service
    • Partner collaboration
    • Marketing website

Oracle Business Process Management (BPM) is a rules and workflow engine that delivers transactional engagement to a web interface. Oracle WebCenter can provide contextual user experiences while BPM drives efficient and convenient user interaction. For example, Oracle WebCenter would provide a citizen who has a bad driving record with links to pay fines to the Department of Transportation. Oracle BPM would provide that citizen with the actual tools to initiate the payment process, route a payment to the correct DOT employees, and keep all parties aware of the payment's status in the workflow.

Q: Before we close, do you have any examples of organizations that are already doing this and successfully delivering engaging digital experiences?
Panduit is a world-class developer and provider of leading-edge data center, connected building, and industrial automation solutions. Using Oracle WebCenter and Business Process Management, Panduit is now able to:
  • Support a growing global partner ecosystem with secure, multilingual online experience
  • Provide integrated role-based experiences for all customers and partners within a single portal (www.panduit.com).
  • Improve number and quality of sales leads through increased web and mobile customer interactions and registrations
  • Increase portal activity by 57% over the previous year, with 42,632 self-serve transactions per month on the portal site
Thank you, Mitchell for sharing your insight into how to Deliver Engaging Digital Experiences.  If you’d like to listen to a podcast on this topic, you can do so here! We are also doing an Oracle Day with Primitive Logic on Digital Disruption & Experience in Orange County on December 4 -- we hope you will join us!

UKOUG 2014

Jonathan Lewis - Tue, 2014-12-02 05:44

So it’s that time of year when I have to decide on my timetable for the UKOUG annual conference. Of course, I never manage to stick to it, but in principle here are the sessions I’ve highlighted:

Sunday
  • 12:30 – How to Avoid a Salted Banana – Lothar Flatz
  • 13:30 – Calculating Selectivity  – Me
  • 15:00 – Advanced Diagnostics Revisited – Julian Dyke
  • 16:00 – Testing Jumbo Frames for RAC – Neil Johnson
Monday
  • 9:00 – Oracle Indexes Q & A session – Richard Foote
  • 10:00 – How Oracle works in 50 minutes – Martin Widlake
  • 11:30 – Predictive Queries in 12c – Brendan Tierney
  • 14:30 – Oracle Database In-Memory DB and the Query Optimizer – Christian Antognini
  • 16:00 – Instrumenting, Analysing, & Tuning the Performance of Oracle ADF Applications – Frank Houweling
  • 17:00 – Techniques for Strategic Testing – Clive King
Tuesday
  • 9:30 – Top Five Things You Need To Know About Oracle Database In-Memory Option – Maria Colgan
  • 10:30 – How to Write Better PL/SQL – Andrew Clarke
  • 12:00 – Optimizer Round Table – Tony Hasler
  • 14:00 – What we had to Unlearn & Learn when Moving from M9000 to Super Cluster – Philippe Fierens
  • 15:00 – Maximum Availability Architecture: A Recipe for Disaster? – Julian Dyke
  • 16:30 – Chasing the Holy Grail of HA – Implementing Transaction Guard & Application Continuity in Oracle Database 12c – Mark Bobak
  • 17:30 – Five Hints for Efficient SQL – Me
Wednesday
  • 9:00 – Fundamentals of Troubleshooting (without graphics) pt.1 – Me
  • 10:00 – Fundamentals of Troubleshooting (without graphics) pt.2 – Me
  • 11:30 – Indexing in Exadata – Richard Foote

 

 


The Perfect Gift For The Oracle DBA: Top 5 DBA T-Shirts

It's that time of year again and I can already hear it, "Dad, what do you want for Christmas?" This year I'm taking action. Like forecasting Oracle performance, I'm taking proactive action.

If you're like most people reading this, you have a, let's say, unique sense of humor. I stumbled across the ultimate geek website that has an astonishing variety of t-shirts aimed at those rare individuals like us who get a rush from understanding the meaning of an otherwise cryptic message on a t-shirt.

I picked my Top 5 DBA Geek T-Shirts based on the challenges, conflicts and joys of being an Oracle DBA. With each t-shirt I saw, a story came to mind almost immediately. I suspect you will have a similar experience that rings strangely true.

So here they are—the Top 5 T-Shirts For The Oracle DBA:
Number 5: Change Your Password
According to SplashData, the top password is now "Password". I guess the upper-case "P" makes people feel secure, especially since last year's top password was "123456" and EVERYBODY knows that's a stupid password. Thanks to new and improved password requirements, the next most popular password is "12345678". Scary but not surprising.

As Oracle Database Administrators know, and as anyone who listened to Troy Ligon's presentation at last year's IOUG conference can confirm, passwords are clearly not safe. ANY passwords. Hopefully, in the coming years passwords will be a thing of the past.


Number 4: Show Your Work
Part of my job as a teacher and consultant is to stop behavior like this: I ask a DBA, "I want to understand why you want to make this change to improve performance." And the reply is something like one of these:

  1. Because it has worked on our other systems.
  2. I did a Google search and an expert recommended this.
  3. Because the box is out of CPU power, there are latching issues, so increasing spin_count will help.
  4. Because we have got to do something and quick!

I teach Oracle DBAs to think from the user experience to the CPU cycles developing a chain of cause and effect. If we can understand the cause and effect relationships, perhaps we can disrupt poor performance and turn it to our favor. "Showing your work" and actually writing it down can be really helpful.

Number 3: You Read My T-Shirt
Why do managers and users think their presence in close proximity to mine will improve performance or perhaps increase my productivity? Is that what they learn in Hawaii during "end user training"?

What's worse is when a user or manager wants to talk about it... while I'm obviously concentrating on a serious problem.

Perhaps if I wear this t-shirt, stand up, turn around and remain silent they will stop talking and get the point. We can only hope.

Number 2: I'm Here Because You Broke Something
Obnoxious but true. Why do users wonder why performance is "slow" when they do a blind query returning ten million rows and then scroll down looking for the one row they are interested in... Wow. The problem isn't always the technology... but you know that already.

Hint to Developers: Don't let users do a drop down or a lookup that returns millions or even thousands or even hundreds of rows... Please for the love of performance optimization!


Number 1 (drum roll): Stand Back! I'm Going To Try SCIENCE
One of my goals in optimizing Oracle Database performance is to be quantitative. And whenever possible, repeatable. Add some basic statistics and you've got science. But stand back because, as my family tells me, it does get a little strange sometimes.

But seriously, being a "Quantitative Oracle Performance Analyst" is always my goal because my work is quantifiable, reference-able and sets me up for advanced analysis.


So there you go! Five t-shirts for the serious and sometimes strange Oracle DBA. Not only will these t-shirts prove and reinforce your geeky reputation, but you'll get a small yet satisfying feeling your job is special...though a little strange at times.

All the best in your Oracle performance endeavors!

Craig.
Categories: DBA Blogs

Auto Sales Data Visualization by Manufacturer

Nilesh Jethwa - Mon, 2014-12-01 14:42

Data: Edmunds


Top Manufacturer


Quarterly breakdown of units sold by manufacturer


View the interactive visualizations

About Proofs and Stories, in 4 Parts

Oracle AppsLab - Mon, 2014-12-01 14:14

Editor's note: Here's another new post from a new team member. Shortly after the 'Lab expanded to include research and design, I attended a workshop on visualizations hosted by a few of our new team members: Joyce, John and Julia.

The event was excellent. John and Julia have done an enormous amount of critical thinking about visualizations, and I immediately started bugging them for blog posts. All the work and research they’ve done needs to be freed into the World so anyone can benefit from it. This post includes the first three installments, and I hope to get more. Enjoy.

Part 1

I still haven’t talked anyone into reading Proofs and Stories, and god knows I tried. If you read it, let me know. It is written by the author of Logicomix, Apostolos Doxiadis, if that makes the idea of reading Proofs and Stories more enticing. If not, I can offer you my summary:

H.C. Berann. Yellowstone National Park panorama

1. Problem solving is like a quest. As in a quest, you might set off thinking you are bound for Ithaka only to find yourself on Ogygia years later. Or, in Apostolos' example, you might set off to prove Fermat's Last Theorem only to find yourself studying elliptic curves for years. The seeker walks through many paths, wanders in circles, reverses steps, and encounters dead ends.

2. The quest has a starting point = what you know, the destination = the hypothesis you want to prove, and the points in between = statements of facts. A graph, in the mathematical sense, is a great way to represent this. A is a starting point, B is the destination, F is a transitive point, C is a choice.


A story is a path through the graph, defined by the choices a storyteller makes on behalf of his characters.


Frame P5 below shows Snowy's dilemma. Snowy's choice determines what happens to Tintin in Tibet. If only Snowy had not gone for the bone, the story would be different.


Image from Tintin in Tibet by Hergé

Even though its very nature dictates that a story be linear, there is always a notion of alternative paths. How to linearize the forks and branches of the path so that the story is most interesting is the art of storytelling.

3. A certain weight, or importance, can be assigned to a point based on the number of choices leading to it, or resulting from it.


When a story is summarized, each storyteller is likely to come up with a different outline. However, the most important points usually survive the majority of summarizations.

Stories can be similar. The practitioners of both narrative and problem solving rely on patterns to reduce choice and complexity.

So what does this have to do with anything?

Part 2

Another book I cannot make anyone but myself read is called "Interaction Design for Complex Problem Solving: Developing Useful and Usable Software" by Barbara Mirel. The book is as voluminous as its title suggests, 397 pages, of which I made it through page 232 in four years. This probably doesn't entice you to read the book. Luckily there is a one-page paper, "Visualizing complexity: Getting from here to there in ill-defined problem landscapes", from the same author on the very same subject. If this is still too much to read, may I offer you my summary?

Mainly, cut and paste from Mirel’s text:

1. Complex problem solving is an exploration across rugged and at times uncharted problem terrains. In that terrain, analysts have no way of knowing in advance all moves, conditions, constraints or consequences. Problem solvers take circuitous routes through "tracts" of tasks toward their goals, sometimes crisscrossing the landscape and jumping across foothills to explore distant knowledge, to recover from dead ends, or to reinvigorate inquiry.

2. Mountainscapes are effective ways to model and visualize complex inquiry. These models stress relationships among parts and do not reduce problem solving to linear and rule-based procedures or work flows. Mountainscapes represent the spaces as being as important to coherence as the paths. Selecting the right model affects the design of the software and whether complex problem solvers experience useful support. Models matter.

B. Mirel, L. Allmendinger. Analyzing sleep products visualized as a mountain climb

Complex problems can neither be solved nor supported with linear or pre-defined methods. Complex problems have many possible heuristics, indefinite parameters, and ranges of outcomes rather than one single right answer or stopping point.

3. Certain types of complex problems recur in various domains and, for each type, analysts across organizations perform similar patterns of inquiry. Patterns of inquiry are the regularly repeated sets of actions and knowledge that have a successful track record in resolving a class of problems in a specific domain
And so how does this have to do with anything?

Part 3

A colleague of mine, Dan Workman, once commented on a sales demo of a popular visual analytics tool. "Somehow," he said, "the presenter drills down here, pivots there, zooms out there, and, miraculously, arrives at that view of the report where the answer to his question lies. But how did he know to go there? How would anyone know where the insight hides?"

His words stuck with me.

Imagine a simple visualization that shows the revenue trend of a business by region, by product and by time. Let's pretend the business operates in 4 regions, sells 4 products, and has been in business for 4 years. The combination of these parameters results in 64 views of sales data. Now imagine that each region is made up of hundreds of countries. If the visualization allows the user to view sales by country, there will be thousands and thousands of additional views. In the real world, a business might also have lots more products. The number of possible views could easily exceed what a human being can manually look at, and only some views (alone or in combination) possibly contain insight. But which ones?

I have yet to see an application that supports users in finding the insightful views of a visualization. Often users won't even know where to start.

So here is the connection between Part 1, Part 2, and Part 3. It's the model. Visualization exploration can be represented as a graph (in the mathematical sense), where the points are the views and the connections are navigation between views. Users then trace a path through the graph as they explore new results.

J. Blyumen Navigating Interactive Visualizations

From here, a certain design research agenda comes to mind:

1. The world needs interfaces to navigate the problem mountainspaces: keeping track of places visited, representing branches and loops in the path, enabling the user to reverse steps, etc.

2. The world needs an interface for linearizing a completed quest into a story (research into presentation), and outlining stories.

3. The world needs software smarts that can collect the patterns of inquiry and use them to guide the problem solvers through the mountainspaces.

So I hope that from this agenda Part 4 will eventually follow…

Kilobytes, Kibibytes and DBMS_XPLAN undocumented functions

The Anti-Kyte - Mon, 2014-12-01 12:46

How many bytes in a Kilobyte ? The answer to this question is pretty obvious…and, apparently, wrong.
Yep, apparently we’ve had it wrong all these years for there are, officially, 1000 bytes in a Kilobyte, not 1024.
Never mind that 1000 is not a power of 2 and that, unless some earth-shattering breakthrough has happened whilst I wasn't paying attention, binary is still the fundamental basis of computing.
According to the IEEE, there are 1000 bytes in a kilobyte and we should all get used to talking about a collection of 1024 bytes as a Kibibyte.

Can you imagine dropping that into a conversation ? People might look at you in a strange way the first time “Kibibyte” passes your lips. If you then move on and start talking about Yobibytes, they may well conclude that you’re just being silly.

Let's face it, if you're going to be like that about things then C++ is actually an object-oriented language and the proof is not in the pudding – the proof of the pudding is in the eating.

All of which petulant pedantry brings me on to the point of this particular post – some rather helpful formatting functions that are hidden in, of all places, the DBMS_XPLAN package…

Function Signatures

If we happened to be strolling through the Data Dictionary and issued the following query…

select text
from dba_source
where owner = 'SYS'
and type = 'PACKAGE'
and name = 'DBMS_XPLAN'
order by line
/

we might be surprised at what we find….

***snip***
  ----------------------------------------------------------------------------
  -- ---------------------------------------------------------------------- --
  --                                                                        --
  -- The folloing section of this package contains functions and procedures --
  -- which are for INTERNAL use ONLY. PLEASE DO NO DOCUMENT THEM.           --
  --                                                                        --
  -- ---------------------------------------------------------------------- --
  ----------------------------------------------------------------------------
  -- private procedure, used internally

*** snip ***

  FUNCTION format_size(num number)
  RETURN varchar2;

  FUNCTION format_number(num number)
  RETURN varchar2;

  FUNCTION format_size2(num number)
  RETURN varchar2;

  FUNCTION format_number2(num number)
  RETURN varchar2;

  --
  -- formats a number representing time in seconds using the format HH:MM:SS.
  -- This function is internal to this package
  --
  function format_time_s(num number)
  return varchar2;

***snip***
Formatting a time in seconds

Let’s start with DBMS_XPLAN.FORMAT_TIME_S because we pretty much know what it does from the header comments.
To save myself a bit of typing, I’m just going to use the following SQL to see how the function copes with various values :

with actual_time as
(
    select &1 as my_secs
    from dual
)
select my_secs,
    dbms_xplan.format_time_s(my_secs) as formatted_time
from actual_time
/

Plug in a variety of numbers ( representing a time in seconds) and …

SQL> @format_time.sql 60
old   3:     select &1 as my_secs
new   3:     select 60 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
               60.00 00:01:00

SQL> @format_time.sql 3600
old   3:     select &1 as my_secs
new   3:     select 3600 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
             3600.00 01:00:00

SQL> @format_time.sql 86400
old   3:     select &1 as my_secs
new   3:     select 86400 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
            86400.00 24:00:00

SQL> @format_time.sql 129784
old   3:     select &1 as my_secs
new   3:     select 129784 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
           129784.00 36:03:04

SQL> 

I wonder how it treats fractions of a second ….

SQL> @format_time.sql  5.4
old   3:     select &1 as my_secs
new   3:     select 5.4 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
                5.40 00:00:05

SQL> @format_time.sql  5.5
old   3:     select &1 as my_secs
new   3:     select 5.5 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
                5.50 00:00:06

SQL> 

So, the function appears to round to the nearest second. Not great if you’re trying to list the times of the Olympic Finalists of the 100 metres, but OK for longer durations where maybe rounding to the nearest second is appropriate.
One minor quirk to be aware of :

SQL> @format_time.sql 119.5
old   3:     select &1 as my_secs
new   3:     select 119.5 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
              119.50 00:01:60

SQL> 

SQL> @format_time.sql 3599.5
old   3:     select &1 as my_secs
new   3:     select 3599.5 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
             3599.50 00:59:60

SQL> 


When the seconds component rounds up to 60, the function returns a value containing 60 seconds, rather than carrying it over into the minutes.

Formatting Numbers

Next on our list of functions to explore are FORMAT_NUMBER and FORMAT_NUMBER2. At first glance, it may appear that these functions are designed to represent sizes using the IEEE standard definitions…

with myval as
(
    select &1 as the_value
    from dual
)
select the_value, 
    dbms_xplan.format_number(the_value) as format_size, 
    dbms_xplan.format_number2(the_value) as format_size2
from myval
/

Run this with a variety of inputs and we get :

SQL> @format_number.sql 999
old   3:     select &1 as the_value
new   3:     select 999 as the_value

 THE_VALUE FORMAT_NUMBER                  FORMAT_NUMBER2
---------- ------------------------------ ------------------------------
       999 999                             999

SQL> @format_number.sql 1000
old   3:     select &1 as the_value
new   3:     select 1000 as the_value

 THE_VALUE FORMAT_NUMBER                  FORMAT_NUMBER2
---------- ------------------------------ ------------------------------
      1000 1000                              1K

SQL> @format_number.sql 1024
old   3:     select &1 as the_value
new   3:     select 1024 as the_value

 THE_VALUE FORMAT_NUMBER                  FORMAT_NUMBER2
---------- ------------------------------ ------------------------------
      1024 1024                              1K

SQL> @format_number.sql 1000000
old   3:     select &1 as the_value
new   3:     select 1000000 as the_value

 THE_VALUE FORMAT_NUMBER                  FORMAT_NUMBER2
---------- ------------------------------ ------------------------------
   1000000 1000K                             1M

SQL> 

SQL> @format_number.sql 1500
old   3:     select &1 as the_value
new   3:     select 1500 as the_value

 THE_VALUE FORMAT_NUMBER                  FORMAT_NUMBER2
---------- ------------------------------ ------------------------------
      1500 1500                              2K

SQL> 

The FORMAT_NUMBER2 function reports 1000 as 1K.
Furthermore, for numbers above 1000, it appears to round to the nearest 1000.
FORMAT_NUMBER on the other hand, doesn’t start rounding until you hit 1000000.

From this it seems reasonable to infer that these functions are designed to present large decimal numbers in an easily readable format rather than being an attempt to conform to the new-fangled definition of a Kilobyte ( or Megabyte…etc).

Using the following script, I’ve created the BIG_EMPLOYEES table and populated it with 100,000 or so rows…

create table big_employees as
    select * from hr.employees
/

begin
    for i in 1..1000 loop
        insert into big_employees
        select * from hr.employees;
    end loop;
    commit;
end;
/

If we now apply these functions to count the rows in the table, we get the following :

select count(*),
    dbms_xplan.format_number(count(*)) as format_number,
    dbms_xplan.format_number2(count(*)) as format_number2
from big_employees
/

  COUNT(*) FORMAT_NUMBER        FORMAT_NUMBER2
---------- -------------------- --------------------
    107107 107K                  107K

You can see from this, how these functions might be useful when you’re looking at the number of rows in a very large table ( perhaps several million).

Counting the Kilobytes properly

We now come to the other two functions we’ve identified – FORMAT_SIZE and FORMAT_SIZE2.

with myval as
(
    select &1 as the_value
    from dual
)
select the_value, 
    dbms_xplan.format_size(the_value) as format_size, 
    dbms_xplan.format_size2(the_value) as format_size2
from myval
/

Running this the results are :

SQL> @format_size.sql 999
old   3:     select &1 as the_value
new   3:     select 999 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
       999 999                   999

SQL> @format_size.sql 1000
old   3:     select &1 as the_value
new   3:     select 1000 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
      1000 1000                 1000

SQL> @format_size.sql 1024
old   3:     select &1 as the_value
new   3:     select 1024 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
      1024 1024                    1k

SQL> @format_size.sql 1000000
old   3:     select &1 as the_value
new   3:     select 1000000 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
   1000000 976K                  977k

SQL> @format_size.sql 1048576
old   3:     select &1 as the_value
new   3:     select 1048576 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
   1048576 1024K                   1m

SQL> @format_size.sql 2047.4
old   3:     select &1 as the_value
new   3:     select 2047.4 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
    2047.4 2047                    2k

SQL> @format_size.sql 2047.5
old   3:     select &1 as the_value
new   3:     select 2047.5 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
    2047.5 2047                    2k

SQL> 

Things to notice here include the fact that FORMAT_SIZE appears to FLOOR the value (1000000 bytes = 976.56 K), whereas FORMAT_SIZE2 rounds it up.
Additionally, once you pass in a value of over 1024, FORMAT_SIZE2 returns values in Kilobytes.

So, if we want to know the size of the BIG_EMPLOYEES table we’ve just created :

select bytes, 
    dbms_xplan.format_size(bytes) as format_size,
    dbms_xplan.format_size2(bytes) as format_size2
from user_segments
where segment_name = 'BIG_EMPLOYEES'
/

     BYTES FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
   9437184 9216K                   9m

If all you need is an approximate value, then FORMAT_SIZE2 could be considered a reasonable alternative to :

select bytes/1024/1024 as MB
from user_segments
where segment_name = 'BIG_EMPLOYEES'
/

As well as serving its primary purpose, DBMS_XPLAN does offer some fairly useful functions if you need a quick approximation of timings, counts or even sizes.
Fortunately, it adheres to the traditional definition of a Kilobyte as 1024 bytes rather than “Litebytes”.


Filed under: Oracle, PL/SQL, SQL Tagged: dbms_xplan, format_number, format_number2, format_size, format_size2, format_time_s

Watch: Hadoop vs. Cassandra

Pythian Group - Mon, 2014-12-01 10:53

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky—Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presents use cases to help you identify which ones will best suit your needs.

“Hadoop is generally deployed in a single data center, multi-RAC deployment, but they’re all reasonably geographically co-located with each other,” Alex explains. Cassandra on the other hand, “…is frequently deployed in a very distributed fashion… Somewhere in Asia, Europe, North America… So you end up with a very fault-tolerant environment.” Learn how the two platforms compare by watching Alex’s video Hadoop vs. Cassandra.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs

Speaking My Own Language for UKOUG Apps 14 Conference

David Haimes - Mon, 2014-12-01 10:27

Finally I will be at a conference where my British accent, specifically my North West of England accent will be understood.  This will be my first time presenting at the UK OUG Conference and what better place than Liverpool to do it?  Home of my beloved Everton F.C., hometown of my parents and less than 20 miles from where I grew up (People from Liverpool would call me a woollyback) just outside Wigan.  So I will try to remember to shift from the Californian drawl I have picked up over the last 14 years and into my finest scouse accent.

I'm going to be presenting two papers which will showcase not just the powerful features that can revolutionize how you run your business, but also the amazing user experience, mobile and social features available in our ERP Cloud.  Both are on Monday and one is right after the other, so I'm a little bit apprehensive about having 10 minutes to dash from one room to another, get set up and start again.

Here are the details of the sessions, or just search for ‘Haimes’ and you’ll find them. Add them to your agenda, because they are both ‘must not miss’ sessions.

First up, Monday December 8th, 2pm, Hall 11C

Oracle E-Business Suite Coexistence with Fusion Accounting Hub & Implementing a Global Chart of Accounts.

This is a great session with a lot of content to pack in but I know the area well and am very passionate about it and have seen first hand how big a deal this is for businesses.

Then 10 minutes to pack up and dash to Hall 1B for 3pm

Oracle ERP Cloud Service Social & Mobile Demonstrations.

Doing live demos, with multiple different devices to switch between and a live cloud environment on conference WiFi, makes this a logistical challenge.  However, when you have a phenomenal user experience, the best thing to do is show it live, so bear with me because we have some pretty cool features to show.


Categories: APPS Blogs

Oracle 12c: comparing TTS with noncdb_to_pdb

Yann Neuhaus - Mon, 2014-12-01 08:44

How to migrate from non-CDB to CDB? Of course all known migration methods work. But there is also another solution: upgrade to 12c if necessary and then convert the non-CDB to a PDB. This is done with the noncdb_to_pdb.sql script, which converts a non-CDB dictionary to a PDB one, with metadata and object links. But do you get a clean PDB after that? I tested it and compared the result with the same database migrated by transportable tablespaces.

The test case

In 12c I can use Full Transportable database, but here I've only one tablespace as I'm doing my comparison on an empty database with the EXAMPLE schemas.

Here is my database:

RMAN> report schema;

Report of database schema for database with db_unique_name NDB1

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    790      SYSTEM               YES     /u01/app/oracle/oradata/NDB1/system01.dbf
3    610      SYSAUX               NO      /u01/app/oracle/oradata/NDB1/sysaux01.dbf
4    275      UNDOTBS1             YES     /u01/app/oracle/oradata/NDB1/undotbs01.dbf
5    1243     EXAMPLE              NO      /u01/app/oracle/oradata/NDB1/example01.dbf
6    5        USERS                NO      /u01/app/oracle/oradata/NDB1/users01.dbf

It's a new database, created with dbca, all defaults, and having only the EXAMPLE tablespace. SYSTEM is 790MB and SYSAUX is 610MB. We can have a lot of small databases like that, where system size is larger than user size and this is a reason to go to multitenant.

I will compare the following:

  • the migration with transportable tablespaces (into pluggable database PDB_TTS)
  • the plug and run noncdb_to_pdb (into the pluggable database PDB_PLG)

Transportable tablespace

Transportable tablespaces will plug in only the non-system tablespaces; all the dictionary entries are recreated while importing the metadata. Here it is:

SQL> alter tablespace EXAMPLE read only;
Tablespace altered.
SQL> host expdp '"/ as sysdba"' transport_tablespaces='EXAMPLE'

The log gives me the dump file (containing the metadata) and the datafiles to copy:

Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/admin/NDB1/dpdump/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  /u01/app/oracle/oradata/NDB1/example01.dbf
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at ... elapsed 00:03:55

then on the destination I create an empty pluggable database:

SQL> create pluggable database PDB_TTS admin user pdbadmin identified by "oracle" file_name_convert=('/pdbseed/','/PDB_TTS/');
SQL> alter pluggable database PDB_TTS open;
SQL> alter session set container=PDB_TTS;

and import the metadata after having copied the datafile to /u03:

SQL> create or replace directory DATA_PUMP_DIR_NDB1 as '/u01/app/oracle/admin/NDB1/dpdump';
SQL> host impdp '"sys/oracle@//vm211:1666/pdb_tts as sysdba"' transport_datafiles=/u03/app/oracle/oradata/PDB_TTS/example01.dbf directory=DATA_PUMP_DIR_NDB1

which took only two minutes because I don't have a lot of objects. That's all. I have a brand new pluggable database where I've imported my tablespaces.

Here I used transportable tablespaces and had to pre-create the users. But in 12c you can do everything with Full Transportable Database.

noncdb_to_pdb.sql

The other solution is to plug the whole database, including the SYSTEM and SYSAUX tablespaces, and then run the noncdb_to_pdb.sql script to transform the dictionary to a multitenant one. First, we generate the xml describing the database, which is similar to the one generated when we unplug a PDB:

SQL> shutdown immediate
SQL> startup open read only;
SQL> exec dbms_pdb.describe('/tmp/NDB01.xml');

And then plug it:

SQL> CREATE PLUGGABLE DATABASE PDB_PLG USING '/tmp/NDB01.xml' COPY FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/NDB1', '/u03/app/oracle/oradata/PDB_PLG');

At that point I can open the PDB but it will act as a Non-CDB, with its own dictionary that is not linked to the root. For example, you have nothing when you query DBA_PDBS from the PDB:

SQL> show con_id
CON_ID
------------------------------
6
SQL> select * from dba_pdbs;
no rows selected

I put it on my todo list to test what we can do in that PDB which is not yet a PDB, before it raises lots of ORA-600 errors.

Now you have to migrate the dictionary to a PDB one. The noncdb_to_pdb.sql will do internal updates to transform the entries in OBJ$ to be metadata links.

SQL> alter session set container=PDB_PLG;
SQL> @?/rdbms/admin/noncdb_to_pdb;
SQL> alter pluggable database PDB_PLG open;

The update time depends on the number of dictionary objects, so it is roughly fixed for a given version. The remaining time goes into recompiling all objects, but that can be done in parallel. Here, I've run it in serial to see how long it takes (screenshot from Lighty):

[Screenshot from Lighty: noncdb_to_pdb.sql serial run]
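
If you do want that recompilation to run in parallel, a minimal sketch is to use UTL_RECOMP or the standard utlrp.sql script from within the PDB (the thread count here is just an example):

SQL> alter session set container=PDB_PLG;
SQL> exec utl_recomp.recomp_parallel(4)
SQL> @?/rdbms/admin/utlrp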

Comparison

My goal was to compare both methods. As I expected, the SYSTEM and SYSAUX tablespaces did not shrink when using noncdb_to_pdb, so if you want to go to multitenant to save space, the noncdb_to_pdb method is not the right one:

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1_SITE1

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    781      SYSTEM               YES     /u02/app/oracle/oradata/cdb1_site1/system01.dbf
3    691      SYSAUX               NO      /u02/app/oracle/oradata/cdb1_site1/sysaux01.dbf
4    870      UNDOTBS1             YES     /u02/app/oracle/oradata/cdb1_site1/undotbs01.dbf
5    260      PDB$SEED:SYSTEM      NO      /u02/app/oracle/oradata/cdb1_site1/pdbseed/system01.dbf
6    5        USERS                NO      /u02/app/oracle/oradata/cdb1_site1/users01.dbf
7    570      PDB$SEED:SYSAUX      NO      /u02/app/oracle/oradata/cdb1_site1/pdbseed/sysaux01.dbf
8    260      PDB1:SYSTEM          NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/system01.dbf
9    580      PDB1:SYSAUX          NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/sysaux01.dbf
10   10       PDB1:USERS           NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/pdb1_users01.dbf
14   270      PDB_TTS:SYSTEM       NO      /u02/app/oracle/oradata/cdb1_site1/PDB_TTS/system01.dbf
15   590      PDB_TTS:SYSAUX       NO      /u02/app/oracle/oradata/cdb1_site1/PDB_TTS/sysaux01.dbf
17   1243     PDB_TTS:EXAMPLE      NO      /u03/app/oracle/oradata/PDB_TTS/example01.dbf
22   790      PDB_PLG:SYSTEM       NO      /u03/app/oracle/oradata/PDB_PLG/system01.dbf
23   680      PDB_PLG:SYSAUX       NO      /u03/app/oracle/oradata/PDB_PLG/sysaux01.dbf
24   5        PDB_PLG:USERS        NO      /u03/app/oracle/oradata/PDB_PLG/users01.dbf
25   1243     PDB_PLG:EXAMPLE      NO      /u03/app/oracle/oradata/PDB_PLG/example01.dbf

The SYSTEM tablespace, which is supposed to contain mostly links (my user schemas don't have a lot of objects), is the same size as the root's. This is bad. Let's look at the details:

SQL> select *
  from (select nvl(pdb_name,'CDB$ROOT') pdb_name,owner,segment_type,bytes from cdb_segments 
  left outer join dba_pdbs using(con_id))
  pivot (sum(bytes/1024/1024) as "MB" for (pdb_name) 
  in ('CDB$ROOT' as "CDB$ROOT",'PDB_TTS' as PDB_TTS,'PDB_PLG' as PDB_PLG))
  order by greatest(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))-least(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0)) 
  desc fetch first 20 rows only;

OWNER                SEGMENT_TYPE       CDB$ROOT_MB PDB_TTS_MB PDB_PLG_MB
-------------------- ------------------ ----------- ---------- ----------
SYS                  TABLE                      539         96        540
SYS                  INDEX                      187        109        195
SYS                  LOBSEGMENT                 117        105        118
SYS                  TABLE PARTITION             17          1         12
SYSTEM               INDEX                       10          1         10
SYS                  SYSTEM STATISTICS                                  8
SYSTEM               TABLE                        8          1          8
SYS                  LOBINDEX                    12          7         13
SYS                  INDEX PARTITION              9          0          6
SYSTEM               LOBSEGMENT                   5          0          5
APEX_040200          LOBSEGMENT                  80         74         80
SYSTEM               INDEX PARTITION              4                     4
SYSTEM               TABLE PARTITION              3                     3
SYS                  TABLE SUBPARTITION           2                     2
SYS                  CLUSTER                     52         50         52
SYS                  LOB PARTITION                3          1          2
SYSTEM               LOBINDEX                     2          0          2
APEX_040200          TABLE                      100         99        100
XDB                  TABLE                        7          6          7
AUDSYS               LOB PARTITION                1          0          1

20 rows selected.

Here I've compared the dictionary sizes. While the PDB_TTS table segments are below 100MB, the PDB_PLG ones are the same size as the root's. The noncdb_to_pdb script has updated OBJ$ but did not reclaim the space from the other tables (see update 2).

Which tables?

SQL> select *
   from (select nvl(pdb_name,'CDB$ROOT') pdb_name,owner,segment_type,segment_name,bytes 
   from cdb_segments left outer join dba_pdbs using(con_id) 
   where owner='SYS' and segment_type in ('TABLE'))
   pivot (sum(bytes/1024/1024) as "MB" for (pdb_name) 
   in ('CDB$ROOT' as "CDB$ROOT",'PDB_TTS' as PDB_TTS,'PDB_PLG' as PDB_PLG))
   order by greatest(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))-least(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))
   desc fetch first 20 rows only;

OWNER             SEGMENT_TYPE       SEGMENT_NAME                   CDB$ROOT_MB PDB_TTS_MB PDB_PLG_MB
----------------- ------------------ ------------------------------ ----------- ---------- ----------
SYS               TABLE              IDL_UB1$                               288          3        288
SYS               TABLE              SOURCE$                                 51          2         52
SYS               TABLE              IDL_UB2$                                32         13         32
SYS               TABLE              ARGUMENT$                               13          0         13
SYS               TABLE              IDL_CHAR$                               11          3         11

IDL_UB1$ is the table that contains all the pcode for PL/SQL. All those wrapped DBMS_ packages are there. And we don't need them in the PDB: we have a link to the root, which has exactly the same version.

Conclusion

My conclusion is that I would not advise using noncdb_to_pdb. First, a script doing that much internal work scares me. I prefer to start this new implementation of the dictionary with a clean one.

But now that I have made this test, I have two additional reasons to avoid it. First, it's not faster, except if you have a lot of objects. Second, the main goal is to reduce the total space by having the Oracle packages stored only once, and this is clearly not achieved by noncdb_to_pdb.

However, that conclusion is only for small databases. If you have a database with a huge number of objects and PL/SQL packages, then the overhead of keeping the dictionary objects will not be very significant, and the TTS solution will take longer because it has to import all the metadata. So there is still a case for noncdb_to_pdb. But test it before. And be sure to have a large shared pool for the recompile step.

Update 1: I forgot to add another reason to be very careful with noncdb_to_pdb, from Bertrand Drouvot in his post about the huge negative impact of optimizer_adaptive_features on it.

Update 2: Following a comment on the OTN forum, I changed the sentence about deleted rows because it was wrong. In fact, rows are deleted when the objects are recompiled:

SQL> select name,count(*) from containers(IDL_UB1$) left outer join v$containers using(con_id) group by name order by 1;

NAME                             COUNT(*)
------------------------------ ----------
CDB$ROOT                            53298
PDB1                                 6457
PDB_PLG                              6432
PDB_TTS                              6354

SQL> select name,count(*) from containers(SOURCE$) left outer join v$containers using(con_id) group by name order by 1;

NAME                             COUNT(*)
------------------------------ ----------
CDB$ROOT                           327589
PDB1                                73055
PDB_PLG                             20306
PDB_TTS                             17753
The issue is only that space is still allocated. And you can't SHRINK those objects because SYSTEM is DMT, and anyway the large tables contain LONG, and finally:
SQL> alter table sys.IDL_UB1$ shrink space;
alter table sys.IDL_UB1$ shrink space
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database
Of course, the space can be reused, but do you expect to add 200MB of compiled pl/sql in future releases?

Pearson, Efficacy, and Research

Michael Feldstein - Mon, 2014-12-01 08:03

A while back, I mentioned that MindWires, the consulting company that Phil and I run, had been hired by Pearson in response to a post I wrote a while back expressing concerns about the possibility of the company trying to define “efficacy” in education for educators (or to them) rather than with them. The heart of the engagement was us facilitating conversations with different groups of educators about how they think about learning outcomes—how they define them, how they know whether students are achieving them, how the institution does or doesn’t support achieving them, and so on. As a rule, we don’t blog about our consulting work here on e-Literate. But since we think these conversations have broader implications for education, we asked for and received permission to blog about what we learn under the following conditions:

  • The blogging is not part of the paid engagement. We are not obliged to blog about anything in particular or, for that matter, to blog at all.
  • Pearson has no editorial input or prior review of anything we write.
  • If we write about specific schools or academics who participated in the discussions, we will seek their permission before blogging about them.

I honestly wasn’t sure what, if anything, would come out of these conversations that would be worth blogging about. But we got some interesting feedback. It seems to me that the aspect I’d like to cover in this post has implications not only for Pearson, and not only for ed tech vendors in general, but for open education and maybe for the future of education in general. It certainly is relevant to my recent post about why the LMS is the way it is and the follow-up post about fostering better campus conversations. It’s about the role of research in educational product design. It’s also about the relationship of faculty to the scholarship of teaching.

It turns out that one of the aspects about Pearson’s efficacy work that really got the attention of the folks we talked with was their research program. Pearson has about 40 PhDs doing educational research of different kinds throughout the company. They’ve completed about 300 studies and have about another 100 currently in progress. Given that educational researchers were heavily represented in the groups of academics we talked to, it wasn’t terribly surprising that the reactions of quite a few of them were variations of “Holy crap!” (That is a direct quote from one of the researchers.) And it turns out that the more our participants knew about learning outcomes research, the more they were interested in talking about how little we know about the topic. For example, even though we have had course design frameworks for a long time now, we don’t know a whole lot about which course design features will increase the likelihood of achieving particular types of learning outcomes. Also, while we know that helping students develop a sense of community in their first year at school increases the likelihood that they will stay on in school and complete their degrees, we know very little about which sorts of intra-course activities are most likely to help students develop that sense of connectedness in ways that will measurably increase their odds of completion. And to the degree that research on topics like these exists, it’s scattered throughout various disciplinary silos. There is very little in the way of a pool of common knowledge. So the idea of a well-funded organization conducting high volumes of basic research was exciting to a number of the folks that we talked to.

But how to trust that research? Every vendor out there is touting their solutions based on “brain science” and “big data.” How can the number of PhDs a vendor employs or the number of “studies” that it conducts yield more credible value than a bullet point in the marketing copy?

In part, the answer is surprisingly simple: Vendors can demonstrate the credibility and value of their research using the same mechanisms that any other researcher would. The first step is transparency. It turns out that Pearson already publishes a library of their studies on their “research and innovation network” site. Here is a sample of some of their more recent titles that will give you a sense of the range of topics:

Pearson also has a MyLabs- and Mastering-specific site that is more marketing-oriented but still has some research-based reports in it.

How good is this research? I don’t know. My guess is that, like any large body of research conducted by a reasonably large group of people, it probably varies in quality. Some of these studies have been published in academic journals or presented in academic conferences. Many have not. One thing we heard from a number of the folks we spoke to was that they’d like to see Pearson submit as much of their research as possible to blind peer-reviewed journals. Ultimately, how does an academic typically judge the quality of any research? The number of citations it gets is a good place to start. So the folks that we talked to wanted to see Pearson researchers participate as peers in the academic research community, including submitting their work to the same scrutiny that academic research undergoes.

This is approach isn’t perfect, of course. We’ve seen in industries like pharmaceuticals that deep-pocketed industry players can find various ways to warp the research process. But pharmaceuticals are particularly bad because (a) the research studies are incredibly expensive to conduct, and (b) they require access to the proprietary drugs being tested, which can be difficult in general and particularly so before the product is released to the market. Educational research is much less vulnerable to these problems, but it has one of its own. By and large, replicability of experiments (and therefore confirmation or disconfirmation of results) is highly difficult or even impossible in many educational situations for both logistical and ethical reasons. So evaluating vendor-conducted or vendor-sponsored educational research would have its challenges, even with blind peer review. That said, the opinions of many of the folks we talked to, particularly of those who are involved in conducting, reviewing, and publishing academic educational research, was that the challenges are manageable and the potential value generated could be considerable.

Even more interesting to me were the discussions about what to do with that research besides just publishing it. There was a lot of interest in getting faculty engaged with the scholarship of teaching, even in small ways. Take, for example, the case of an adjunct instructor, running from one school to the next to cobble together a living, spending many, many hours grading papers and exams. That person likely doesn’t have time to do a lot of reading on educational research, never mind conducting some. But she might appreciate some curricular materials that say, “there are at least three different ways to teach this topic, but the way we’re recommending is consistent with research on building a sense of class community that encourages students to feel like they belong at the school and reduces dropout rates.” She might even find some precious time to follow the link and read that research if it’s on a topic that’s important enough to her.

This is pretty much the opposite of how most educational technology and curricular materials products are currently designed. The emphasis has historically been on making things easier for the instructors by having them cede more control to the product and vendor. “Don’t worry. It’s brain science! It’s big data! You don’t have to understand it. Just buy it and the product will do the work for you.” Instead, these products could be educating and empowering faculty to try more sophisticated pedagogical approaches (without forcing them to do so). Even if most faculty pass up these opportunities most of the time, simply providing them with ready contextual access to relevant research could be transformative in the sense that it constantly affords them new opportunities to incorporate the scholarship of teaching into their daily professional lives. It also could encourage a fundamentally different relationship between the teachers and third-party curricular materials, whether they are vendor-provided or OER. Rather than being a solitary choice made behind closed doors, the choice of curricular materials could include, in part, the choice of a community of educational research and practice that the adopting faculty member wants to join. Personally, I think this is a much better selection criterion for curricular materials than the ones that are often employed by faculty today.

These ideas came out of conversations with just a couple of dozen people, but the themes were pretty strong and consistent. I’d be interested to hear what you all think.

The post Pearson, Efficacy, and Research appeared first on e-Literate.

Preparing for the end of Windows Server 2003

Chris Foot - Mon, 2014-12-01 07:24

Although support for Windows Server 2003 doesn't end until July of next year, enterprises that have used the operating system since its inception are transitioning to the solution's latest iteration, Windows Server 2012 R2.

Preliminary considerations
Before diving into the implications of transitioning from Server 2003 to Server 2012 R2, it's important to answer a valid question: Why not simply make the switch to Windows Server 2008 R2?

It's a conundrum that Windows IT Pro contributor Orin Thomas has ruminated on since the announcement of Microsoft's discontinuation of Server 2003. While he acknowledged various reasons why some professionals are hesitant to make the leap from Server 2003 to Server 2012 R2 (such as application compatibility issues and the "Windows 8-style interface"), he pointed to a key concern: time.

Basically, Server 2008 R2 will cease to receive updates and support on Jan. 14, 2020. Comparatively, Server 2012 R2's end of life is slated for Jan. 10, 2023.

In the event organizations have difficulty making the transition, there's always the option of seeking assistance from experts with certifications in Server 2012 R2. On top of migration and integration, these professionals can provide continued support throughout the duration of the solution's usage.

Key considerations
As companies using Windows Server 2003 will be moving to either Server 2008 R2 or Server 2012 R2, a number of implications must be taken into account. ZDNet contributor Ken Hess outlined several recommendations for those preparing for the migration:

  1. Identify how many Server 2003 systems you have in place.
  2. Aggregate and organize the hardware specifications for each system (CPU, memory, disk space, etc.).
  3. Assess how heavily these solutions were utilized over the years, then correlate them with projected growth and future workloads.
  4. Do away with systems that are no longer applicable to operations.
  5. Determine which applications running on top of Server 2003 are critical to the business model.
  6. Deduce how virtual machines can be leveraged to host underutilized processes.
  7. Collaborate with a database administration firm to outline and implement a migration plan (provide the partner with the data mentioned above).

These are just a few starting points on which to base a comprehensive migration plan. Also, it's important to be aware of unexpected spikes in server utilization. Although upsurges of 100 percent may occur infrequently, systems must be able to handle them effectively. As always, be sure to troubleshoot the renewed solution after implementation.

The post Preparing for the end of Windows Server 2003 appeared first on Remote DBA Experts.

Database active monitoring a strong defense against SQL injections

Chris Foot - Mon, 2014-12-01 07:24

SQL injections have been named as the culprits of many database security woes, including the infamous Target breach that occurred at the commencement of last year's holiday season.

Content management system compromised
One particular solution was recently flagged as vulnerable to such hacking techniques. Chris Duckett, a contributor to ZDNet, referenced a public service announcement released by Drupal, an open source content management solution used to power millions of websites and applications.

The developer noted that, unless users patched their sites against SQL injection attacks before October 15, "you should proceed under the assumption that every Drupal 7 website was compromised." Drupal expanded by asserting that updating to 7.32 will patch the vulnerability, but websites that have already been exposed are still compromised – the reason being that hackers have already obtained back-end information.

There is one measure that could have helped the websites that sustained attacks. Database monitoring, regardless of the system being used, can alert administrators to problems as they arise, giving them ample time to respond to breaches.

Why database monitoring works
Although access permissions, anti-malware tools and other assets are designed to dismantle and eradicate intrusions, some of their detection features leave something to be desired. Therefore, for programs capable of deterring SQL injections to operate to the best of their ability, they must work in conjunction with surveillance tools that constantly assess all database actions.
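As a concrete illustration of that kind of always-on surveillance (my own sketch, assuming an Oracle back end and a hypothetical APP.CUSTOMERS table, not the database behind any particular CMS), fine-grained auditing can record every statement that touches a sensitive table:

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'APP',            -- hypothetical schema
    object_name     => 'CUSTOMERS',      -- hypothetical table to watch
    policy_name     => 'WATCH_CUSTOMERS',
    audit_condition => NULL,             -- NULL records every access
    statement_types => 'SELECT,INSERT,UPDATE,DELETE');
END;
/

Audited statements then appear in DBA_FGA_AUDIT_TRAIL, which an alerting job can poll for anomalies such as a sudden burst of reads from an application account.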

The Ponemon Institute polled 595 database experts on the matter, asking them about the effectiveness of server monitoring tools. While Chairman Larry Ponemon acknowledged the importance of using continuous monitoring to look for anomalous behavior, Secure Ideas CEO Kevin Johnson said some tools can miss SQL injections because the attacks are designed to appear legitimate. Therefore, it's important for surveillance programs to also be directed toward identifying vulnerabilities. Paul Henry, senior instructor at the SANS Institute, also weighed in on the matter.

"I believe in a layered approach that perhaps should include a database firewall to mitigate the risk of SQL injection, combined with continuous monitoring of the database along with continuous monitoring of normalized network traffic flows," said Henry, as quoted by the source.

At the end of the day, having a team of professionals on standby to address SQL injections if and when they occur is the only way to keep these attacks from escalating into massive consequences.

The post Database active monitoring a strong defense against SQL injections appeared first on Remote DBA Experts.

WSFC with SQL Server: Manual or automatic failover? That is the question

Yann Neuhaus - Mon, 2014-12-01 00:05

During the night, you receive an alert concerning your SQL Server failover cluster or your availability groups. You're in a panic because the message displayed is "a failover has occurred .. see the log for more details" ...

So you try to keep calm and, after connecting to your environment, you are not able to find anything ... What has happened? Maybe someone triggered a failover manually and you are not aware of it. I'm sure that many of you will find this situation familiar, but the real question is: is it possible to distinguish a manual failover from an automatic failover with a Windows failover cluster?

The answer is yes, and one way to find it is to take a look at the cluster.log. In fact, there is a record entry that clearly identifies a manual failover of resources:

 

[RCM] rcm::RcmApi::MoveGroup: (, 1, 0, MoveType::Manual )

 

As a reminder, the resource control monitor [RCM] is responsible for performing actions according to the state of a resource. In fact, when you trigger a manual failover, the MoveGroup API is called with the parameter MoveType::Manual, which identifies the move as manual.

Let me know if you find another way of discovering a manual failover :-)

Happy failover (or not)!

Using runInstaller to check Prereqs with responseFile

Michael Dinh - Sun, 2014-11-30 21:38

The SILENT method

[grid@rac01:/media/sf_Linux/11.2.0.4/grid]
$ ./runInstaller -silent -executePrereqs -showProgress -waitforcompletion -force -responseFile /media/sf_Linux/11.2.0.4/grid/grid_crs_config.rsp
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 48807 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 8191 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-11-30_06-40-05PM. Please wait ...[grid@rac01:/media/sf_Linux/11.2.0.4/grid]
$ cd /u01/app/oraInventory/logs/
[grid@rac01:/u01/app/oraInventory/logs]
$ ls -lrt
total 84
-rw-r--r--. 1 grid oinstall    47 Nov 30 18:40 time2014-11-30_06-40-05PM.log
-rw-rw----. 1 grid oinstall 80257 Nov 30 18:41 installActions2014-11-30_06-40-05PM.log
-rw-r--r--. 1 grid oinstall     0 Nov 30 18:41 oraInstall2014-11-30_06-40-05PM.out
-rw-r--r--. 1 grid oinstall     0 Nov 30 18:41 oraInstall2014-11-30_06-40-05PM.err
[grid@rac01:/u01/app/oraInventory/logs]
$ tail installActions2014-11-30_06-40-05PM.log
INFO: Actual Value:0|2
INFO: -----------------------------------------------
INFO: -----------------------------------------------
INFO: Verification Result for Node:rac01
INFO: Expected Value:0|2
INFO: Actual Value:0|2
INFO: -----------------------------------------------
INFO: All forked task are completed at state init
INFO: Exit Status is 0
INFO: Shutdown Oracle Grid Infrastructure

The NOT SO SILENT method
executePrereqs


Upcoming EDUCAUSE Webinars on Dec 4th and Dec 8th

Michael Feldstein - Sun, 2014-11-30 20:39

Michael and I will be participating in two upcoming EDUCAUSE webinars.

Massive and Open: A Flipped Webinar about What We Are Learning

On Thursday, December 4th from 1:00–2:00 p.m. ET we will be joined by George Siemens for an EDUCAUSE Live! webinar:

In 2012, MOOCs burst into public consciousness with course rosters large enough to fill a stadium and grand promises that they would disrupt higher education. Two years later, after some disappointments, setbacks, and not a small amount of schadenfreude, MOOCs seem almost passé. And yet, away from the sound and the fury, researchers and teachers have been busy finding answers to some basic questions: What are MOOCs good for, and what can we learn from them? Phil Hill and Michael Feldstein will talk about what we’re learning so far.

This will be a flipped webinar. We strongly encourage you to watch the 14-minute video interview with MOOC Research Initiative (MRI) grantees before the webinar begins. If you would like more background on MOOCs, feel free to watch the other two videos (parts 2 and 3) in the series as well.

More information on this page. Registration is free.

UPDATE: Both the Adobe Connect recording and the chat transcript are now available.

Teaching and Learning: 2014 in Retrospect, 2015 in Prospect

On Monday, December 8th from 1:00 – 2:00 p.m. ET we will be joined by Audrey Watters for an EDUCAUSE Learning Initiative (ELI) webinar:

Join Malcolm Brown, EDUCAUSE Learning Initiative director, and Veronica Diaz, ELI associate director, as they moderate this webinar with Phil Hill, Michael Feldstein, and Audrey Watters.

The past year has been an eventful one for teaching and learning in higher education. There have been developments in all areas, including analytics, the LMS, online education, and learning spaces. What were the key developments in 2014, and what do they portend for us in 2015? For this ELI webinar, we will welcome a trio of thought leaders to help us understand the past year and what it portends for the coming year. Come and share your own insights, and join the discussion.

More information on this page. This webinar is available for ELI member institutions, but it will be publicly available 90 days later.

We hope you can join us for these discussions.

The post Upcoming EDUCAUSE Webinars on Dec 4th and Dec 8th appeared first on e-Literate.

Avoid false errors with runcluvfy.sh for VirtualBox

Michael Dinh - Sun, 2014-11-30 20:32

My typical VirtualBox network configuration is: eth0 (NAT), eth1 (Host Only), eth2(Internal)

runcluvfy.sh stage -pre crsinst -n rac01,rac02 -r 11gR2 -fixup -fixupdir /tmp – FAILED

Node connectivity passed for subnet "10.0.2.0" with node(s) rac02,rac01

ERROR:
PRVF-7617 : Node connectivity between "rac01 : 10.0.2.15" and "rac02 : 10.0.2.15" failed
TCP connectivity check failed for subnet "10.0.2.0"

Node connectivity passed for subnet "192.168.56.0" with node(s) rac02,rac01
TCP connectivity check passed for subnet "192.168.56.0"

Node connectivity passed for subnet "10.0.0.0" with node(s) rac02,rac01
TCP connectivity check passed for subnet "10.0.0.0"

Pre-check for cluster services setup was unsuccessful.
Checks did not pass for the following node(s):
        rac01
[grid@rac01:/media/sf_Linux/11.2.0.4/grid]
$

I understand the error is ignorable, but why create an ignorable error in the first place?
Also, I want the bottom line to tell me whether the check passed or failed; when a failure is a false positive, time is wasted investigating it.

A better alternative is to exclude eth0 from the check using -networks eth1,eth2, which results in a successful pre-check for cluster services setup.

Pre-check for cluster services setup was successful.

runcluvfy.sh stage -pre crsinst -n rac01,rac02 -r 11gR2 -fixup -fixupdir /tmp -networks eth1,eth2 -verbose

[grid@rac01:/media/sf_Linux/11.2.0.4/grid]
$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 08:00:27:02:B1:57
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe02:b157/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:578823 errors:0 dropped:0 overruns:0 frame:0
          TX packets:292879 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:477712790 (455.5 MiB)  TX bytes:15902513 (15.1 MiB)
[grid@rac01:/media/sf_Linux/11.2.0.4/grid]
$ ./runcluvfy.sh stage -pre crsinst -n rac01,rac02 -r 11gR2 -fixup -fixupdir /tmp -networks eth1,eth2 -verbose

Performing pre-checks for cluster services setup

Checking node reachability...

Check: Node reachability from node "rac01"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac02                                 yes
  rac01                                 yes
Result: Node reachability check passed from node "rac01"


Checking user equivalence...

Check: User equivalence for user "grid"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed
Result: User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed

Verification of the hosts config file successful


Interface information for node "rac02"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   10.0.2.15       10.0.2.0        0.0.0.0         10.0.2.2        08:00:27:87:91:11 1500
 eth1   192.168.56.12   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:4A:7B:27 1500
 eth2   10.10.10.12     10.0.0.0        0.0.0.0         10.0.2.2        08:00:27:E8:D6:21 1500


Interface information for node "rac01"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   10.0.2.15       10.0.2.0        0.0.0.0         10.0.2.2        08:00:27:02:B1:57 1500
 eth1   192.168.56.11   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:BD:66:A4 1500
 eth2   10.10.10.11     10.0.0.0        0.0.0.0         10.0.2.2        08:00:27:60:79:0F 1500


Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac02[192.168.56.12]            rac01[192.168.56.11]            yes
Result: Node connectivity passed for interface "eth1"


Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac01:192.168.56.11             rac02:192.168.56.12             passed
Result: TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity for interface "eth2"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac02[10.10.10.12]              rac01[10.10.10.11]              yes
Result: Node connectivity passed for interface "eth2"


Check: TCP connectivity of subnet "10.0.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac01:10.10.10.11               rac02:10.10.10.12               passed
Result: TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Checking ASMLib configuration.
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed
Result: Check for ASMLib configuration passed.

Check: Total memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         3.8674GB (4055296.0KB)    1.5GB (1572864.0KB)       passed
  rac01         3.8674GB (4055296.0KB)    1.5GB (1572864.0KB)       passed
Result: Total memory check passed

Check: Available memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         3.5583GB (3731180.0KB)    50MB (51200.0KB)          passed
  rac01         3.269GB (3427772.0KB)     50MB (51200.0KB)          passed
Result: Available memory check passed

Check: Swap space
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         8GB (8388604.0KB)         3.8674GB (4055296.0KB)    passed
  rac01         8GB (8388604.0KB)         3.8674GB (4055296.0KB)    passed
Result: Swap space check passed

Check: Free disk space for "rac02:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              rac02         /             52.2988GB     1GB           passed
Result: Free disk space check passed for "rac02:/tmp"

Check: Free disk space for "rac01:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /tmp              rac01         /             49.9507GB     1GB           passed
Result: Free disk space check passed for "rac01:/tmp"

Check: User existence for "grid"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac02         passed                    exists(54322)
  rac01         passed                    exists(54322)

Checking for multiple users with UID value 54322
Result: Check for multiple users with UID value 54322 passed
Result: User existence check passed for "grid"

Check: Group existence for "oinstall"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac02         passed                    exists
  rac01         passed                    exists
Result: Group existence check passed for "oinstall"

Check: Group existence for "dba"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac02         passed                    exists
  rac01         passed                    exists
Result: Group existence check passed for "dba"

Check: Membership of user "grid" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             yes           yes           yes           yes           passed
  rac01             yes           yes           yes           yes           passed
Result: Membership check for user "grid" in group "oinstall" [as Primary] passed

Check: Membership of user "grid" in group "dba"
  Node Name         User Exists   Group Exists  User in Group  Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             yes           yes           yes           passed
  rac01             yes           yes           yes           passed
Result: Membership check for user "grid" in group "dba" passed

Check: Run level
  Node Name     run level                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         5                         3,5                       passed
  rac01         5                         3,5                       passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             hard          65536         65536         passed
  rac01             hard          65536         65536         passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             soft          1024          1024          passed
  rac01             soft          1024          1024          passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             hard          16384         16384         passed
  rac01             hard          16384         16384         passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             soft          16384         2047          passed
  rac01             soft          16384         2047          passed
Result: Soft limits check passed for "maximum user processes"

Check: System architecture
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         x86_64                    x86_64                    passed
  rac01         x86_64                    x86_64                    passed
Result: System architecture check passed

Check: Kernel version
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         2.6.39-400.17.1.el6uek.x86_64  2.6.32                    passed
  rac01         2.6.39-400.17.1.el6uek.x86_64  2.6.32                    passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             250           250           250           passed
  rac01             250           250           250           passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             32000         32000         32000         passed
  rac01             32000         32000         32000         passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             100           100           100           passed
  rac01             100           100           100           passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             128           128           128           passed
  rac01             128           128           128           passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4398046511104  4398046511104  2076311552    passed
  rac01             4398046511104  4398046511104  2076311552    passed
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4096          4096          4096          passed
  rac01             4096          4096          4096          passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4294967296    4294967296    2097152       passed
  rac01             4294967296    4294967296    2097152       passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             6815744       6815744       6815744       passed
  rac01             6815744       6815744       6815744       passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
  rac01             between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             262144        262144        262144        passed
  rac01             262144        262144        262144        passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4194304       4194304       4194304       passed
  rac01             4194304       4194304       4194304       passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             262144        262144        262144        passed
  rac01             262144        262144        262144        passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             1048576       1048576       1048576       passed
  rac01             1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             1048576       1048576       1048576       passed
  rac01             1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "binutils"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed
  rac01         binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed
Result: Package existence check passed for "binutils"

Check: Package existence for "compat-libcap1"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         compat-libcap1-1.10-1     compat-libcap1-1.10       passed
  rac01         compat-libcap1-1.10-1     compat-libcap1-1.10       passed
Result: Package existence check passed for "compat-libcap1"

Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
  rac01         compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "libgcc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libgcc(x86_64)-4.4.7-11.el6  libgcc(x86_64)-4.4.4      passed
  rac01         libgcc(x86_64)-4.4.7-11.el6  libgcc(x86_64)-4.4.4      passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libstdc++(x86_64)-4.4.7-11.el6  libstdc++(x86_64)-4.4.4   passed
  rac01         libstdc++(x86_64)-4.4.7-11.el6  libstdc++(x86_64)-4.4.4   passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libstdc++-devel(x86_64)-4.4.7-11.el6  libstdc++-devel(x86_64)-4.4.4  passed
  rac01         libstdc++-devel(x86_64)-4.4.7-11.el6  libstdc++-devel(x86_64)-4.4.4  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         sysstat-9.0.4-20.el6      sysstat-9.0.4             passed
  rac01         sysstat-9.0.4-20.el6      sysstat-9.0.4             passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "gcc"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         gcc-4.4.7-11.el6          gcc-4.4.4                 passed
  rac01         gcc-4.4.7-11.el6          gcc-4.4.4                 passed
Result: Package existence check passed for "gcc"

Check: Package existence for "gcc-c++"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         gcc-c++-4.4.7-11.el6      gcc-c++-4.4.4             passed
  rac01         gcc-c++-4.4.7-11.el6      gcc-c++-4.4.4             passed
Result: Package existence check passed for "gcc-c++"

Check: Package existence for "ksh"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         ksh-20120801-21.el6.1     ksh-20100621              passed
  rac01         ksh-20120801-21.el6.1     ksh-20100621              passed
Result: Package existence check passed for "ksh"

Check: Package existence for "make"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         make-3.81-20.el6          make-3.81                 passed
  rac01         make-3.81-20.el6          make-3.81                 passed
Result: Package existence check passed for "make"

Check: Package existence for "glibc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         glibc(x86_64)-2.12-1.149.el6  glibc(x86_64)-2.12        passed
  rac01         glibc(x86_64)-2.12-1.149.el6  glibc(x86_64)-2.12        passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "glibc-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         glibc-devel(x86_64)-2.12-1.149.el6  glibc-devel(x86_64)-2.12  passed
  rac01         glibc-devel(x86_64)-2.12-1.149.el6  glibc-devel(x86_64)-2.12  passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "libaio(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed
  rac01         libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
  rac01         libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed

Check for consistency of root user's primary group passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
Network Time Protocol(NTP) configuration file not found on any of the nodes. Oracle Cluster Time Synchronization Service(CTSS) can be used instead of NTP for time synchronization on the cluster nodes
No NTP Daemons or Services were found to be running

Result: Clock synchronization check using Network Time Protocol(NTP) passed

Checking Core file name pattern consistency...
Core file name pattern consistency check passed.

Checking to make sure user "grid" is not in "root" group
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac02         passed                    does not exist
  rac01         passed                    does not exist
Result: User "grid" is not part of "root" group. Check passed

Check default user file creation mask
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac02         0022                      0022                      passed
  rac01         0022                      0022                      passed
Result: Default user file creation mask check passed
Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "rac02"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Check: Time zone consistency
Result: Time zone consistency check passed

Starting check for Reverse path filter setting ...
Reverse path filter setting is correct for all private interconnect network interfaces on node "rac02.localdomain".
Reverse path filter setting is correct for all private interconnect network interfaces on node "rac01.localdomain".

Check for Reverse path filter setting passed

Pre-check for cluster services setup was successful.
[grid@rac01:/media/sf_Linux/11.2.0.4/grid]
$