
Feed aggregator

Reminder: Fishbowl Solutions Webinar Tomorrow at 1 PM CST

Cole Orndorff

There’s still time to register for the webinar that Fishbowl Solutions and Oracle will be holding tomorrow from 1 PM-2 PM CST! Innovation in Managing the Chaos of Everyday Project Management will feature Fishbowl’s AEC Practice Director Cole Orndorff. Orndorff, who has a great deal of experience with enterprise information portals, said the following about the webinar:

“According to Psychology Today, the average employee can lose up to 40% of their productivity switching from task to task. The number of tasks executed across a disparate set of systems over the lifecycle of a complex project is overwhelming, and in most cases, 20% of each solution is utilized 80% of the time.

I am thrilled to have the opportunity to present on how improving workforce effectiveness can enhance your margins. This can be accomplished by providing a consistent, intuitive user experience across the diverse systems project teams use and by reusing the intellectual assets that already exist in your organization.”

To register for the webinar, visit Oracle’s website. To learn more about Fishbowl’s new Enterprise Information Portal for Project Management, visit our website.

The post Reminder: Fishbowl Solutions Webinar Tomorrow at 1 PM CST appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

From Zero to Hero....In About 2 Hours

Joel Kallman - Wed, 2014-12-03 11:23


This is an example of a real-world problem, an opportunistic one, being solved via a mobile application created with Oracle Application Express.

First, a brief bit of background.  Our son is 9 years old and is in the Cub Scouts.  Cub Scouts in the United States is an organization associated with the Boy Scouts of America.  It's essentially a club geared towards younger boys that teaches them many valuable skills - hiking, camping out, shooting a bow and arrow, tying different knots, nutrition, etc.  This club has a single fundraiser every year, where the boys go door-to-door selling popcorn, and the proceeds of the popcorn sale fund the activities of the local Cub Scout group for the next year.  There is a leader who organizes the sale of this popcorn for the local Cub Scout group, and this leader gets the unenviable title of "Popcorn Kernel".  For the past 2 years, I've been the "Popcorn Kernel" for our Cub Scout Pack (60 Scouts).

I was recently at the DOAG Konferenz in Nürnberg, Germany and it wasn't until my flight home that I began to think about how I was going to distribute the 1,000 items to 60 different Scouts.  My flight home from Germany was on a Sunday and I had pre-scheduled the distribution of all of this popcorn to all 60 families on that next day, Monday afternoon.  Jet lag would not be my friend.

The previous year, I had meticulously laid out 60 different orders across a large meeting room and let the parents and Scouts pick it up.  This year, I actually had 4 volunteer helpers, but I had no time.  All I had in my possession was an Excel spreadsheet which was used to tally the orders across all 60 Cub Scouts.   But I knew I could do better than 60 pieces of paper, which was the "solution" last year.

On my flight home, on my iPad, I sketched out the simple 4-page user interface to locate and manage the orders.  As well, I wrote the DDL on my iPad for a single table.  Normally, I would use SQL Developer Data Modeler as my starting point, but this application and design needed to be quick and simple, so a single denormalized table was more than sufficient.
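For illustration, here is the sort of single denormalized table I have in mind; the table and column names below are my sketch, not the actual DDL from the flight:

create table popcorn_orders
(
    id            number primary key,
    scout_name    varchar2(100),
    den_name      varchar2(30),     -- e.g. 'Wolf', 'Bear'
    item_name     varchar2(100),    -- one row per Scout/item combination
    item_count    number,
    delivered_yn  varchar2(1) default 'N'
);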



Bright and early on Monday morning, I logged into an existing workspace on apex.oracle.com.  I created my single table using the Object Browser, created a trigger on this table, uploaded the spreadsheet data into this table, and then massaged the data using some DML statements in SQL Commands.  Now that my table and data were complete, it was time for my mobile application!
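As a hedged sketch of what that trigger probably did (the object names are hypothetical, matching the table sketch above), something along these lines to fill in the primary key:

create sequence popcorn_orders_seq;

create or replace trigger popcorn_orders_bi
before insert on popcorn_orders
for each row
begin
    -- assumed: assigning the key was the trigger's only job
    if :new.id is null then
        :new.id := popcorn_orders_seq.nextval;
    end if;
end;
/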

I created a simple Mobile User Interface application with navigation links on the home page.  There are multiple "dens" that make up each group in a Cub Scout Pack, and these were navigation aids as people would come and pick up their popcorn ("Johnny is in the Wolf Den").  These ultimately went to the same report page but with different filters.



Once a list view report was accessed, it showed the Scout's name and the total item count for each Scout; a click drilled down to the actual number of items to be delivered to that Scout.  Once the items were handed over and verified, the user of this application had to click a button to complete the order.  This was the only DML update operation in the entire application.
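In sketch form, that single update probably looked something like this (the page item name is made up for illustration):

update popcorn_orders
set    delivered_yn = 'Y'
where  scout_name = :P3_SCOUT_NAME;   -- :P3_SCOUT_NAME is a hypothetical page item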



I also added a couple charts to the starting page, so we could keep track of how many orders for each den had already been delivered and how many were remaining.



I also added a chart page to show how many of each item was remaining, at least according to our records. This enabled us to do a quick "spot check" at any given point in time, and assess if the current inventory we had remaining was also accurately reflected in our system.  It was invaluable!  And remember - this entire application was all on a single table in the Oracle Database.  At one point in time, 8 people were all actively using this system - 5 to do updates and fulfill orders, and the rest to simply view and monitor the progress from their homes.  Concurrency was never even a consideration.  I didn't have to worry about it.
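For the curious, chart queries like these two sketches are all it takes (again using my hypothetical table and columns, not the real ones):

-- delivered vs. remaining orders per den
select den_name,
       count(distinct case when delivered_yn = 'Y' then scout_name end) as delivered,
       count(distinct case when delivered_yn = 'N' then scout_name end) as remaining
from   popcorn_orders
group  by den_name;

-- items still on hand, according to our records
select item_name,
       sum(item_count) as remaining_items
from   popcorn_orders
where  delivered_yn = 'N'
group  by item_name;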



Now some would say that this application:
  • isn't pixel perfect
  • doesn't have offline storage
  • isn't natively running on the device
  • can't capitalize on the native features of the phone
  • doesn't have a badge icon
  • isn't offered in a store

And they would be correct.  But guess what?  None of it mattered.  The application was used by 5 different people, all using different devices, and I didn't care what type of devices they were using.  They all thought it was rocket science.  It looked and felt close enough to a native application that none of them noticed nor cared.  The navigation and display were consistent with what they were accustomed to.  More importantly, it was a vast improvement over the alternative - consisting of either a piece of paper or, worse yet, 5 guys huddling around a single computer looking at a spreadsheet.  And this was something that I was able to produce, starting from nothing to completed solution, in about two hours.  If I hadn't been jet lagged, I might have been able to do it in an hour.

You might read this blog post and chuckle to yourself.  How could this trivial application for popcorn distribution to Cub Scouts possibly relate to a "real" mobile enterprise application?  Actually, it's enormously relevant.

  • For this application, I didn't have to know CSS, HTML or mobile user interfaces.
  • I only needed to know SQL.  I wrote no PL/SQL.  I only wrote a handful of SQL queries for the list views, charts, and the one DML statement to update the row.
  • It was immediately accessible to anyone with a Web browser and a smart phone (i.e., everyone).
  • Concurrency and scalability were never a concern.  This application easily could have been used by 1,000 people and I still would not have had any concern.  I let the Oracle Database do the heavy lifting and put an elegant mobile interface on it with Oracle Application Express.

This was a simple example of an opportunistic application.  It didn't necessarily have to start from a spreadsheet to be opportunistic.  Every enterprise on the planet (including Oracle) has a slew of application problems just like this, which today are going unsolved.  I went from zero to hero to rocket scientist in the span of two hours.  And so can you.

A demo version of this application (with fictitious names) is here.  I left the application as is - imperfect on the report page and the form (I should have used a read-only display).  Try it on your own mobile device.

ORA 700's make me grumpy ( well or at least confused )

Grumpy old DBA - Wed, 2014-12-03 09:16
Geez Louise I guess I coulda/shoulda known about this before now.

At some point ( 11.2-ish ? 11.1-ish ? ) the Oracle software started getting worried and kind of grumpy.

You apparently can get ORA 700's under certain circumstances.  It's not a huge problem apparently ( yet ) when the software decides to let you know ... ( but it might become one later maybe ? ).

I saw this one on a new test vm I am setting up ( Database 12.1.0.2 using ASM and also OEM 12.1.0.4 ).

ORA 700 [kskvmstatact: excessive swapping observed]

So anyway I started doing some looking around at the memory config stuff after seeing that ( but that's a longer story ).

Categories: DBA Blogs

Oracle Support Advisor Webcasts Series for December

Chris Warticki - Wed, 2014-12-03 08:41

Dear Valued Support Customer,
We are pleased to invite you to our Advisor Webcast series for December 2014. Subject matter experts prepare these presentations and deliver them through WebEx. Topics include information about Oracle support services and products. To learn more about the program or to access archived recordings, please follow the links.

There are currently two types of Advisor Webcasts.
If you prefer to read about current events, Oracle Premier Support News provides you with information, technical content, and technical updates from the various Oracle Support teams. For a full list of Premier Support News, go to My Oracle Support and enter Document ID 222.1 in the Knowledge Base search.

Sincerely,
Oracle Support

December Featured Webcasts by Product Area:

  • Database: Oracle Database 12c Patching New Features (December 17)
  • Database: Mutexes in Oracle Database (Mandarin only) (December 18)
  • E-Business Suite: Get Proactive with Doc ID 432.1 (December 9)
  • E-Business Suite: Discrete Costing Functional Changes and Bug Fixes for 12.2.3 and 12.2.4 (December 10)
  • E-Business Suite: Rapid Planning: Enabling Mass Updates to Demand Priorities and Background Processing (December 11)
  • E-Business Suite: Order Management Corrupt Data and Data Fixes (December 16)
  • E-Business Suite: AutoLockbox Validation: Case Studies for Customer Identification & Receipt Application (December 18)
  • E-Business Suite: eAM Mobile App Overview and Product Tour (December 18)
  • Fusion Applications: Want to Learn More About How to Set Up and Troubleshoot Email Notifications in Fusion Applications? (December 11)
  • Hyperion EPM: Getting Started with Essbase Aggregate Storage Option - ASO 101 (December 10)
  • JD Edwards EnterpriseOne: 2014 Year-End Processing for US Payroll - How to Have a Successful Year-End (December 4)
  • JD Edwards EnterpriseOne: Canadian Year-End Processing for 2014 (December 11)
  • JD Edwards EnterpriseOne: JD Edwards World to EnterpriseOne Migration: Migration Plan and Conversions (December 16)
  • JD Edwards EnterpriseOne: 2014 Year-End ESU Install for W2, T4 and 1099 (December 17)
  • JD Edwards World: 1099 Address Book Setup and Guidelines Refresher 2014 (December 2)
  • JD Edwards World: A/P Ledger Method Refresher 2014 (December 3)
  • JD Edwards World: 1099 G/L Method Refresher 2014 (December 4)
  • JD Edwards World: Preparing for the 2014 W-2 Year-End Processing Season (December 9)
  • JD Edwards World: Reviewing Encumbrance Rollover (P4317) & Program Changes in Release A9.3 Update 1 (December 9)
  • JD Edwards World: Preparing for the 2014 Canadian T4 & Releve Year-End Processing Season (December 10)
  • Middleware: Tuxedo GWTDOMAIN Basic Configuration and Common Issues (Mandarin only) (December 16)
  • Middleware: G1 Garbage Collector at a Glance (Japanese only) (December 17)
  • PeopleSoft Enterprise: PeopleSoft Payroll for North America 9.2 (Product Image 9): New Capabilities (December 2)
  • PeopleSoft Enterprise: Tax Update 14-F General Information Session (December 9)
  • PeopleSoft Enterprise: PeopleSoft HCM Time & Labor Mobile Application, In-Memory Rules & Troubleshooting (December 10)
  • PeopleSoft Enterprise: PeopleSoft Payroll for North America – 2014 Year-End, Special Topics (December 16)
  • PeopleSoft Enterprise: PeopleSoft Process Scheduler and Report Distribution PeopleTools 8.54 New Features (December 17)

Integrating BPM12c with BAM

Darwin IT - Wed, 2014-12-03 07:21
In BPM12c there is a tight integration with BAM, as there was in 11g. However, BAM is not automatically installed with BPM; you need to do that separately. And having done that, you still need to get BPM acquainted with BAM and instruct it to enable process analytics.

For the greater part, the configuration is similar to the 11g story. My former Oracle colleague wrote a blog about it: 'Configuration of BAM and BPM for process analytics'.
There is a little difference, because in 12c you won't find the mentioned property 'DisableActions' under 'oracle.as.soainfra.config' -> 'BPMNConfig'. Instead, you have to enable process analytics on the BPM server(s). The 12c docs tell you how: '11.1 Understanding Oracle BAM and BPM Integration'.
Taken from that document, here is a short step list:
  1. Log in to the Fusion Middleware Control (http://adminserver-host:port/em) console. 
  2. In the Target Navigation pane, expand the Weblogic Domain node. 
  3. Select the domain in which the Oracle BAM server is installed. 
  4. Open the MBean Browser by right-clicking on the domain and select System MBean Browser. 
  5. Expand the Application Defined MBeans node. 
  6. Navigate to oracle.as.soainfra.config node -> 'Server: server_name' -> AnalyticsConfig -> analytics.
  7. Enable process metrics by setting the 'DisableProcessMetrics' property to false. 
  8. You might want to do the same with the 'DisableMonitorExpress' property.
  9. Click Apply.

Deploying Spring Boot Applications to Pivotal Cloud Foundry from STS

Pas Apicella - Wed, 2014-12-03 05:14
The example below shows how to use STS (Spring Tool Suite) to deploy a Spring Boot web application directly from the IDE itself. I created a basic Spring Boot web application using the template engine Thymeleaf. The application isn't that fancy; it simply displays a products page of some mocked-up Products. This blog entry just shows how you could deploy it to Pivotal Cloud Foundry from the IDE.
1. First create a Pivotal Cloud Foundry Server connection. The image below shows the connection and one single application.


2. Right click on your Spring Boot application and select "Configure -> Enable as cloud foundry app"
3. Drag and Drop The project onto the Cloud Foundry Connection.
4. At this point a dialog appears asking for an application name as shown below.

5. Click Next
6. Select deployment options and click Next

7. Bind to existing services if you need to 

8. Click Next
9. Click Finish

At this point it will push the application to your Cloud Foundry instance.


Once complete the Console window in STS will show something as follows
Checking application - SpringBootWebCloudFoundry
Generating application archive
Creating application
Pushing application
Application successfully pushed
Starting and staging application
Got staging request for app with id bb3c63f5-c32d-4e27-a834-04076f2af35a
Updated app with guid bb3c63f5-c32d-4e27-a834-04076f2af35a ({"state"=>"STARTED"})
-----> Downloaded app package (12M)
-----> Java Buildpack Version: v2.4 (offline) | https://github.com/cloudfoundry/java-buildpack.git#7cdcf1a
-----> Downloading Open Jdk JRE 1.7.0_60 from http://download.run.pivotal.io/openjdk/lucid/x86_64/openjdk-1.7.0_60.tar.gz (found in cache)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (0.9s)
-----> Downloading Spring Auto Reconfiguration 1.4.0_RELEASE from http://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.4.0_RELEASE.jar (found in cache)
-----> Uploading droplet (43M)
Starting app instance (index 0) with guid bb3c63f5-c32d-4e27-a834-04076f2af35a

  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::        (v1.1.9.RELEASE)

2014-12-03 11:09:50.434  INFO 32 --- [           main] loudProfileApplicationContextInitializer : Adding 'cloud' to list of active profiles
2014-12-03 11:09:50.447  INFO 32 --- [           main] pertySourceApplicationContextInitializer : Adding 'cloud' PropertySource to ApplicationContext
2014-12-03 11:09:50.497  INFO 32 --- [           main] nfigurationApplicationContextInitializer : Adding cloud service auto-reconfiguration to ApplicationContext
2014-12-03 11:09:50.521  INFO 32 --- [           main] apples.sts.web.Application               : Starting Application on 187dfn5m5ve with PID 32 (/home/vcap/app started by vcap in /home/vcap/app)
2014-12-03 11:09:50.577  INFO 32 --- [           main] ationConfigEmbeddedWebApplicationContext : Refreshing org.springframework.boot.context.embedded.AnnotationConfigEmbeddedWebApplicationContext@374d2f77: startup date [Wed Dec 03 11:09:50 UTC 2014]; root of context hierarchy
2014-12-03 11:09:50.930  WARN 32 --- [           main] .i.s.PathMatchingResourcePatternResolver : Skipping [/home/vcap/app/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.4.0_RELEASE.jar] because it does not denote a directory
2014-12-03 11:09:51.600  WARN 32 --- [           main] .i.s.PathMatchingResourcePatternResolver : Skipping [/home/vcap/app/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.4.0_RELEASE.jar] because it does not denote a directory
2014-12-03 11:09:52.349  INFO 32 --- [           main] urceCloudServiceBeanFactoryPostProcessor : Auto-reconfiguring beans of type javax.sql.DataSource
2014-12-03 11:09:52.358  INFO 32 --- [           main] urceCloudServiceBeanFactoryPostProcessor : No beans of type javax.sql.DataSource found. Skipping auto-reconfiguration.
2014-12-03 11:09:53.109  INFO 32 --- [           main] .t.TomcatEmbeddedServletContainerFactory : Server initialized with port: 61097
2014-12-03 11:09:53.391  INFO 32 --- [           main] o.apache.catalina.core.StandardService   : Starting service Tomcat
2014-12-03 11:09:53.393  INFO 32 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet Engine: Apache Tomcat/7.0.56
2014-12-03 11:09:53.523  INFO 32 --- [ost-startStop-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2014-12-03 11:09:53.524  INFO 32 --- [ost-startStop-1] o.s.web.context.ContextLoader            : Root WebApplicationContext: initialization completed in 2950 ms
2014-12-03 11:09:54.201  INFO 32 --- [ost-startStop-1] o.s.b.c.e.ServletRegistrationBean        : Mapping servlet: 'dispatcherServlet' to [/]
2014-12-03 11:09:54.205  INFO 32 --- [ost-startStop-1] o.s.b.c.embedded.FilterRegistrationBean  : Mapping filter: 'hiddenHttpMethodFilter' to: [/*]
2014-12-03 11:09:54.521  INFO 32 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**/favicon.ico] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2014-12-03 11:09:54.611  INFO 32 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.String apples.sts.web.WelcomeController.welcome(org.springframework.ui.Model)
2014-12-03 11:09:54.612  INFO 32 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/products],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public java.lang.String apples.sts.web.ProductController.listProducts(org.springframework.ui.Model)
2014-12-03 11:09:54.615  INFO 32 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],methods=[],params=[],headers=[],consumes=[],produces=[text/html],custom=[]}" onto public org.springframework.web.servlet.ModelAndView org.springframework.boot.autoconfigure.web.BasicErrorController.errorHtml(javax.servlet.http.HttpServletRequest)
2014-12-03 11:09:54.616  INFO 32 --- [           main] s.w.s.m.m.a.RequestMappingHandlerMapping : Mapped "{[/error],methods=[],params=[],headers=[],consumes=[],produces=[],custom=[]}" onto public org.springframework.http.ResponseEntity> org.springframework.boot.autoconfigure.web.BasicErrorController.error(javax.servlet.http.HttpServletRequest)
2014-12-03 11:09:54.640  INFO 32 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2014-12-03 11:09:54.641  INFO 32 --- [           main] o.s.w.s.handler.SimpleUrlHandlerMapping  : Mapped URL path [/webjars/**] onto handler of type [class org.springframework.web.servlet.resource.ResourceHttpRequestHandler]
2014-12-03 11:09:55.077  INFO 32 --- [           main] o.s.j.e.a.AnnotationMBeanExporter        : Registering beans for JMX exposure on startup
2014-12-03 11:09:55.156  INFO 32 --- [           main] s.b.c.e.t.TomcatEmbeddedServletContainer : Tomcat started on port(s): 61097/http
2014-12-03 11:09:55.167  INFO 32 --- [           main] apples.sts.web.Application               : Started Application in 5.918 seconds (JVM running for 6.712)
You can also view the deployed application details in STS by double clicking on it as shown below.

Categories: Fusion Middleware

VirtualBox and Windows driver verifier

Amardeep Sidhu - Wed, 2014-12-03 04:34

I was troubleshooting some Windows hangs on my desktop system running Windows 8 and enabled Driver Verifier. Today when I tried to start VirtualBox, it failed with this error message.

Failed to load VMMR0.r0 (VERR_LDR_MISMATCH_NATIVE)

Most of the online forums suggested reinstalling VirtualBox to fix the issue. But one of the threads mentioned that it was being caused by Windows Driver Verifier. I disabled it, restarted Windows, and VirtualBox worked like a charm. I didn't have time to do more research as I quickly wanted to test something. Maybe we can exclude some particular drivers from Driver Verifier and VirtualBox can then work.
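For reference, and assuming the standard Windows tooling rather than anything VirtualBox-specific, Driver Verifier can be switched off from an elevated command prompt followed by a reboot:

verifier /reset

Individual drivers can also be left out by re-running verifier and choosing which drivers to verify in its settings screens.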

Categories: BI & Warehousing

Upgrades

Jonathan Lewis - Wed, 2014-12-03 02:24

I have a simple script that creates two identical tables, collects stats (with no histograms) on the pair of them, then executes a join. Here’s the SQL to create the first table:


create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(dbms_random.value(0,1000))	n_1000,
	trunc(dbms_random.value(0,750))		n_750,
	trunc(dbms_random.value(0,600))		n_600,
	trunc(dbms_random.value(0,400))		n_400,
	trunc(dbms_random.value(0,90))		n_90,
	trunc(dbms_random.value(0,72))		n_72,
	trunc(dbms_random.value(0,40))		n_40,
	trunc(dbms_random.value(0,3))		n_3
from
	generator	v1,
	generator	v2
where
	rownum <= 1e6
;

-- gather stats: no histograms

The two tables have 1,000,000 rows each and t2 is created from t1 with a simple “create as select”. The columns are all defined to be integers, and the naming convention is simple – n_400 holds 400 distinct values with uniform distribution from 0 – 399, n_750 holds 750 values from 0 – 749, and so on.

Here’s the simple query:


select
        t1.*, t2.*
from
        t1, t2
where
        t1.n_400 = 0
and     t2.n_72  = t1.n_90
and     t2.n_750 = t1.n_600
and     t2.n_400 = 1
;

Since I’ve created no indexes you might expect the query to do a couple of full tablescans and a hash join to get its result – and you’d be right; but what do you think the predicted cardinality would be?

Here are the results from running explain plan on the query and then reporting the execution plan – for three different versions of Oracle:



9.2.0.8
-------------------------------------------------------------------------
| Id  | Operation            |  Name       | Rows  | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |             |    96 |  4992 |  1230  (10)|
|*  1 |  HASH JOIN           |             |    96 |  4992 |  1230  (10)|
|*  2 |   TABLE ACCESS FULL  | T1          |  2500 | 65000 |   617  (11)|
|*  3 |   TABLE ACCESS FULL  | T2          |  2500 | 65000 |   613  (10)|
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_750"="T1"."N_600" AND "T2"."N_72"="T1"."N_90")
   2 - filter("T1"."N_400"=0)
   3 - filter("T2"."N_400"=1)

***************************************************************************

10.2.0.5
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |   116 |  6032 |  1229  (10)| 00:00:07 |
|*  1 |  HASH JOIN         |      |   116 |  6032 |  1229  (10)| 00:00:07 |
|*  2 |   TABLE ACCESS FULL| T1   |  2500 | 65000 |   616  (11)| 00:00:04 |
|*  3 |   TABLE ACCESS FULL| T2   |  2500 | 65000 |   612  (10)| 00:00:04 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_750"="T1"."N_600" AND "T2"."N_72"="T1"."N_90")
   2 - filter("T1"."N_400"=0)
   3 - filter("T2"."N_400"=1)

***************************************************************************

11.2.0.4
---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |  2554 |   139K|  1225  (10)| 00:00:07 |
|*  1 |  HASH JOIN         |      |  2554 |   139K|  1225  (10)| 00:00:07 |
|*  2 |   TABLE ACCESS FULL| T1   |  2500 | 70000 |   612  (10)| 00:00:04 |
|*  3 |   TABLE ACCESS FULL| T2   |  2500 | 70000 |   612  (10)| 00:00:04 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_72"="T1"."N_90" AND "T2"."N_750"="T1"."N_600")
   2 - filter("T1"."N_400"=0)
   3 - filter("T2"."N_400"=1)

The change for 11.2.0.4 (which is still there for 12.1.0.2; I didn’t check to see if it also appears in 11.1.0.7) is particularly worrying. When you see a simple query like this changing cardinality on the upgrade you can be fairly confident that some of your more complex queries will change their plans – even if there are no clever new optimizer transformations coming into play.
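For reference, the textbook first-cut arithmetic runs as follows; note that it doesn't exactly reproduce any of the three estimates above, which is part of the puzzle:

\[ \operatorname{card}(t1) = \operatorname{card}(t2) = 10^{6} \times \frac{1}{400} = 2500 \]

\[ \text{join cardinality} \approx \frac{2500 \times 2500}{\max(90,72) \times \max(750,600)} = \frac{6{,}250{,}000}{67{,}500} \approx 93 \]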

I’ll write up an explanation of how the optimizer has produced three different estimates some time over the next couple of weeks; but if you want an earlier answer this is one of the things I’ll be covering in my presentation on calculating selectivity at “Super Sunday” at UKOUG Tech 14.


Magical Links for a Tuesday in December

Oracle AppsLab - Tue, 2014-12-02 13:28

It’s difficult to make a link post seem interesting. Anyway, I have some nuggets from the Applications User Experience desk plus bonus robot video because it’s Tuesday.

Back to Basics. Helping You Phrase that Alta UI versus UX Question

Always entertaining bloke and longtime Friend of the ‘Lab, Ultan (@ultan) answers a question we get a lot, what’s the difference between UI and UX?

From Coffee Table to Cloud at a Glance: Free Oracle Applications Cloud UX eBook Available

Next up, another byte from Ultan on a new and free eBook (registration required) produced by OAUX called “Oracle Applications Cloud User Experiences: Trends and Strategy.” If you’ve seen our fearless leader, Jeremy Ashley (@jrwashley), present recently, you might recognize some of the slides.


Oh and if you like eBooks and UX, make sure to download the Oracle Applications Cloud Simplified User Interface Rapid Development Kit.

Today, We Are All Partners: Oracle UX Design Lab for PaaS

And hey, another post from Ultan about an event he ran a couple weeks ago, the UX Design Lab for PaaS.

Friend of the ‘Lab, David Haimes (@dhaimes), and several of our team members, Anthony (@anthonyslai), Mark (@mvilrokx), and Tony, participated in this PaaS4SaaS extravaganza, and although I can’t discuss details, they built some cool stuff and had oodles of fun. Yes, that’s a specific unit of fun measurement.


Mark (@mvilrokx) and Anthony (@anthonyslai) juggle balloons for science.

Amazon’s robotic fulfillment army

From kottke.org, a wonderful ballet of Amazon’s fulfillment robots.

https://www.youtube.com/watch?v=tMpsMt7ETi8

Trends in Big Data, Hadoop, Business Intelligence, Analytics and Dashboards

Nilesh Jethwa - Tue, 2014-12-02 10:34

How has the interest in Big Data, Hadoop, Business Intelligence, Analytics and Dashboards changed over the years?

One easy way to gauge interest is to measure how much news is generated for the related term, and Google Trends allows you to do that very easily.

Plugging all of the above terms into Google Trends and analyzing the results leads to the following visualizations.

Aggregating the results by year


It is quite amazing to see that the stream representing Dashboards has remained constant throughout the years.

The streams for Analytics and Business Intelligence in general exhibit a similar trend.

Analytics is kind of widening its mouth as we move forward, helped by the combination of terms such as Hadoop + Big Data + Analytics being used almost together.

Now check the line chart below


Looks like the trend for Dashboards defines the lower bound and the trend for Business Intelligence defines the upper bound. The trend for Hadoop started around the first quarter of 2007. The trend for Big Data started around the third quarter of 2008, and ever since, both have been rapidly increasing. It remains to be seen whether they will cross “Business Intelligence” in terms of popularity or kind of merge and find a stable position somewhere in the middle.

Before Big Data and Hadoop came into the picture, the term “Analytics” exhibited stable ground closer to Dashboards, but now the trend for Analytics seems to be following Big Data and Hadoop.

Let us take a deeper look into each week since 2004


Look at the downward spikes occurring around Christmas time. Nobody wants to hear about Big Data or Dashboards during the holidays.

And finally, here is a quarterly cyclical view


Click here to view the full interactive Visualizations

anonymous cypher suites for SSL (and a 12c pitfall)

Laurent Schneider - Tue, 2014-12-02 08:21

If you configure your listener for encryption only, you do not really need authentication.

It works pretty fine up to 11.2.0.2; I wrote multiple posts on SSL.

You add SSL_CLIENT_AUTHENTICATION=FALSE to your server sqlnet.ora and listener.ora and specify an “anon” cipher suite in your client. You do not need to validate the certificate, so a default wallet will do.


orapki wallet create -wallet . -auto_login_only

sqlnet.ora

WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=.)))
ssl_cipher_suites=(SSL_DH_anon_WITH_3DES_EDE_CBC_SHA)
NAMES.DIRECTORY_PATH=(TNSNAMES)

tnsnames.ora

DB01=(DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=srv01.example.com)(PORT=1521))(CONNECT_DATA=(SID=DB01)))

or, if you use Java, the default truststore (usually located in $JAVA_HOME/jre/lib/security/cacerts) will also do.


    System.setProperty("oracle.net.ssl_cipher_suites", "SSL_DH_anon_WITH_DES_CBC_SHA");

On some platforms, however, you may get something like: IBM’s Client TrustManager does not allow anonymous cipher suites.

So far so good, but if you upgrade your listener to 11.2.0.3/4 or 12c, the anonymous suites won’t be accepted if not explicitly set up in sqlnet.ora. This is documented in Note 1434966.1.

You will get something like “ORA-28860: Fatal SSL error”, “TNS-12560: TNS:protocol adapter error” in Oracle or “SSLHandshakeException: Received fatal alert: handshake_failure”, “SQLRecoverableException: I/O-Error: Received fatal alert: handshake_failure” in java.

There are two (obvious) ways to fix this. The preferred approach is to not use anonymous suites (they seem to have disappeared from the supported cipher suites in the doc).

For this task, you use another cipher suite. The easiest way is to not specify any or just use one like TLS_RSA_WITH_AES_128_CBC_SHA (java) / SSL_RSA_WITH_AES_128_CBC_SHA (sqlnet). Even if you do not use client authentication, you will then have to authenticate the server, and import the root ca in the wallet or the keystore.
sqlnet.ora


# comment out ssl_cipher_suites=(SSL_DH_anon_WITH_3DES_EDE_CBC_SHA)

java

// comment out : System.setProperty("oracle.net.ssl_cipher_suites", "SSL_DH_anon_WITH_DES_CBC_SHA");
System.setProperty("javax.net.ssl.trustStore","keystore.jks");
System.setProperty("javax.net.ssl.trustStoreType","JKS");
System.setProperty("javax.net.ssl.trustStorePassword","***");

Or, as documented in metalink, define the suite in sqlnet.ora and listener.ora if you use 11.2.0.3 or 11.2.0.4.
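As a sketch of that second approach, the server-side sqlnet.ora and listener.ora would both name the suite explicitly (using the same anonymous suite as the client example above):

SSL_CLIENT_AUTHENTICATION=FALSE
SSL_CIPHER_SUITES=(SSL_DH_anon_WITH_3DES_EDE_CBC_SHA)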

StatsPack and AWR Reports -- Bits and Pieces -- 4

Hemant K Chitale - Tue, 2014-12-02 08:05
This is the fourth post in a series.

Post 1 is here.
Post 2 is here.
Post 3 is here.

Buffer Cache Hit Ratios

Many novice DBAs may use Hit Ratios as indicators of performance.  However, these can be misleading or incomplete.

Here are two examples :

Extract A: 9i StatsPack

Instance Efficiency Percentages (Target 100%)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer  Hit   %:   99.06

It would seem that with only 0.94% of reads being physical reads, the database is performing optimally.  So, the DBA doesn't need to look any further.  
Or so it seems.
If he spends some time reading the report, he also then comes across this :
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~
                                                                     % Total
Event                                               Waits    Time (s) Ela Time
-------------------------------------------- ------------ ----------- --------
db file sequential read                           837,955       4,107    67.36
CPU time                                                        1,018    16.70
db file scattered read                             43,281         549     9.00


                                                                   Avg
                                                     Total Wait   wait    Waits
Event                               Waits   Timeouts   Time (s)   (ms)     /txn
---------------------------- ------------ ---------- ---------- ------ --------
db file sequential read           837,955          0      4,107      5    403.3
db file scattered read             43,281          0        549     13     20.8
Physical I/O is a significant proportion (76%) of total database time.  88% of the physical I/O time is spent on single-block reads ("db file sequential read").  This is where the DBA must identify that tuning *is* required.
Considering the single block access pattern it is likely that a significant proportion are index blocks as well.  Increasing the buffer cache might help cache the index blocks.


Extract B : 10.2 AWR
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %:              99.98     Redo NoWait %:      100.00
Buffer Hit %:                 96.43     In-memory Sort %:    99.99
Library Hit %:                97.16     Soft Parse %:        98.16
Execute to Parse %:           25.09     Latch Hit %:         99.85
Parse CPU to Parse Elapsd %:  89.96     % Non-Parse CPU:     96.00
The Buffer Hit Ratio is very good.  Does that mean that I/O is not an issue ?
Look again at the same report:
Top 5 Timed Events
Event                        Waits        Time(s)  Avg Wait(ms)  % Total Call Time  Wait Class
CPU time                                  147,593                             42.3
db file sequential read      31,776,678    87,659             3               25.1  User I/O
db file scattered read       19,568,220    79,142             4               22.7  User I/O
RMAN backup & recovery I/O    1,579,314    37,650            24               10.8  System I/O
read by other session         3,076,110    14,216             5                4.1  User I/O
User I/O is actually significant.  The SQLs with the highest logical I/O need to be reviewed for tuning.
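As a starting point, something like this sketch lists the heaviest statements (rank by buffer_gets for logical I/O, or swap in disk_reads for physical I/O; the column names assume the 9i/10g v$sql view):

select *
from  (
        select hash_value, module, buffer_gets, disk_reads, executions
        from   v$sql
        order  by buffer_gets desc
      )
where rownum <= 10;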

.
.
.

Categories: DBA Blogs

Engaging Digital Experiences

WebCenter Team - Tue, 2014-12-02 07:15
By Mitchell Palski, Oracle WebCenter Sales Consultant
This week we’re happy to have Oracle WebCenter expert Mitchell Palski join us for a Q&A around how you can Deliver Engaging Digital Experiences to your end users.
Q: So before we dive into the topic, it might be helpful for our readers if you could define what the terms Digital Business and Digital Experience mean?

First let’s describe how we are defining those terms – Digital Business and Digital Experience. 
Digital business is the use of any technology to promote, sell and enable innovative products, services and experiences. Digital Business isn't about digitizing everything in sight; it’s about leveraging digital technologies. A Digital Business is an enterprise that embraces technological advances in a way that:
  • Empowers their customers, citizens, employees, suppliers, and partners
  • Optimizes their business operations
  • Fuels innovation and increases business
In order to ensure you are meeting and anticipating customer expectations, your Digital Business needs to deliver a consistent and engaging Digital Experience to customers, citizens, employees and partners. 
To be successful in this area, you must deliver engaging digital experiences that are:
  1. Consistent across all channels to drive user loyalty
  2. Delivering relevant content (including links, documents, and promotions) to the right users at the right time

Q: It seems that today, everyone is trying to deliver these engaging digital experiences. Can you touch on the importance and why organizations should take action now if they aren’t already doing this?
Today’s consumers are “plugged in” 24/7. They demand instant access to information and transactional capabilities whenever they want them. They are savvy when it comes to making decisions and unafraid to make a change if a company no longer meets their expectations.  

It’s not enough for your organization to just have an attractive look-and-feel; it’s more important to deliver a positive customer experience and provide responsive customer service. Organizations have to differentiate themselves across all channels and interactions to not only engage customers but also retain them in loyal, long-term relationships.  

Q:What types of technologies are out there if an organization was looking to deliver digital experiences to their end users?
Oracle WebCenter can be used in a variety of ways to deliver engaging Digital Experiences:

  • Oracle WebCenter Sites is a Web Experience Management (WEM) tool that enables business users to easily create and manage contextually relevant social and interactive online experiences across multiple channels on a global scale to drive sales and loyalty.
  • Oracle WebCenter Portal is an enterprise Portal platform that can deliver role-based user experiences for use cases such as:
    • Employee intranets (e.g. HR portal, Finance portal)
    • Citizen self-service
    • Partner collaboration
    • Marketing website

Oracle Business Process Management (BPM) is a rules and workflow engine that delivers transactional engagement to a web interface. Oracle WebCenter can provide contextual user experiences while BPM drives efficient and convenient user interaction. For example, Oracle WebCenter would provide a citizen who has a bad driving record with links to pay fines to the Department of Transportation. Oracle BPM would provide that citizen with the actual tools to initiate the payment process, route a payment to the correct DOT employees, and keep all parties aware of the payment's status in the workflow.

Q: Before we close, do you have any examples of organizations that are already doing this and successfully delivering engaging digital experiences?

Panduit is a world-class developer and provider of leading-edge data center, connected building, and industrial automation solutions. Using Oracle WebCenter and Business Process Management, Panduit is now able to: 
  • Support a growing global partner ecosystem with secure, multilingual online experience
  • Provide integrated role-based experiences for all customers and partners within a single portal (www.panduit.com).
  • Improve number and quality of sales leads through increased web and mobile customer interactions and registrations
  • Experience the benefits of portal activity up 57% from the previous year, with 42,632 self-serve transactions per month on the portal site
Thank you, Mitchell for sharing your insight into how to Deliver Engaging Digital Experiences.  If you’d like to listen to a podcast on this topic, you can do so here! We are also doing an Oracle Day with Primitive Logic on Digital Disruption & Experience in Orange County on December 4 -- we hope you will join us!

UKOUG 2014

Jonathan Lewis - Tue, 2014-12-02 05:44

So it’s that time of year when I have to decide on my timetable for the UKOUG annual conference. Of course, I never manage to stick to it, but in principle here are the sessions I’ve highlighted:

Sunday
  • 12:30 – How to Avoid a Salted Banana – Lothar Flatz
  • 13:30 – Calculating Selectivity  – Me
  • 15:00 – Advanced Diagnostics Revisited – Julian Dyke
  • 16:00 – Testing Jumbo Frames for RAC – Neil Johnson
Monday
  • 9:00 – Oracle Indexes Q & A session – Richard Foote
  • 10:00 – How Oracle works in 50 minutes – Martin Widlake
  • 11:30 – Predictive Queries in 12c – Brendan Tierney
  • 14:30 – Oracle Database In-Memory DB and the Query Optimizer – Christian Antognini
  • 16:00 – Instrumenting, Analysing, & Tuning the Performance of Oracle ADF Applications – Frank Houweling
  • 17:00 – Techniques for Strategic Testing – Clive King
Tuesday
  • 9:30 – Top Five Things You Need To Know About Oracle Database In-Memory Option – Maria Colgan
  • 10:30 – How to Write Better PL/SQL – Andrew Clarke
  • 12:00 – Optimizer Round Table – Tony Hasler
  • 14:00 – What we had to Unlearn & Learn when Moving from M9000 to Super Cluster – Philippe Fierens
  • 15:00 – Maximum Availability Architecture: A Recipe for Disaster? – Julian Dyke
  • 16:30 – Chasing the Holy Grail of HA – Implementing Transaction Guard & Application Continuity in Oracle Database 12c – Mark Bobak
  • 17:30 – Five Hints for Efficient SQL – Me
Wednesday
  • 9:00 – Fundamentals of Troubleshooting (without graphics) pt.1 – Me
  • 10:00 – Fundamentals of Troubleshooting (without graphics) pt.2 – Me
  • 11:30 – Indexing in Exadata – Richard Foote

 

 


The Perfect Gift For The Oracle DBA: Top 5 DBA T-Shirts

It's that time of year again and I can already hear it, "Dad, what do you want for Christmas?" This year I'm taking action. Like forecasting Oracle performance, I'm taking proactive action.

Like most of you reading this, you have a, let's say, unique sense of humor. I stumbled across the ultimate geek website that has an astonishing variety of t-shirts aimed at those rare individuals like us that get a rush in understanding the meaning of an otherwise cryptic message on a t-shirt.

I picked my Top 5 DBA Geek T-Shirts based on the challenges, conflicts and joys of being an Oracle DBA. With each t-shirt I saw, a story came to mind almost immediately. I suspect you will have a similar experience that rings strangely true.

So here they are—the Top 5 T-Shirts For The Oracle DBA:
Number 5: Change Your Password
According to SplashData, the top password is now "Password".  I guess the upper-case "P" makes people feel secure, especially since last year's top password was "123456" and EVERYBODY knows that's a stupid password. Thanks to new and improved password requirements, the next most popular password is "12345678". Scary but not surprising.

As Oracle Database Administrators - and anyone who listened to Troy Ligon's presentation at last year's IOUG conference - know, passwords are clearly not safe. ANY passwords. Hopefully in the coming years, passwords will be a thing of the past.


Number 4: Show Your Work
Part of my job as a teacher and consultant is to stop behavior like this: I ask a DBA, "I want to understand why you want to make this change to improve performance." And the reply is something like one of these:

  1. Because it has worked on our other systems.
  2. I did a Google search and an expert recommended this.
  3. Because the box is out of CPU power and there are latching issues, so increasing spin_count will help.
  4. Because we have got to do something and quick!

I teach Oracle DBAs to think from the user experience down to the CPU cycles, developing a chain of cause and effect. If we can understand the cause-and-effect relationships, perhaps we can disrupt poor performance and turn it to our favor. "Showing your work" and actually writing it down can be really helpful.

Number 3: You Read My T-Shirt
Why do managers and users think their presence in close proximity to mine will improve performance or perhaps increase my productivity? Is that what they learn in Hawaii during "end user training"?

What's worse is when a user or manager wants to talk about it...while I'm obviously concentrating on a serious problem.

Perhaps if I wear this t-shirt, stand up, turn around and remain silent they will stop talking and get the point. We can only hope.

Number 2: I'm Here Because You Broke Something
Obnoxious but true. Why do users wonder why performance is "slow" when they do a blind query returning ten-million rows and then scroll down looking for the one row they are interested in.... Wow. The problem isn't always the technology... but you know that already.

Hint to Developers: Don't let users do a drop down or a lookup that returns millions or even thousands or even hundreds of rows... Please for the love of performance optimization!


Number 1 (drum roll): Stand Back! I'm Going To Try SCIENCE
One of my goals in optimizing Oracle Database performance is to be quantitative. And whenever possible, repeatable. Add some basic statistics and you've got science. But stand back because, as my family tells me, it does get a little strange sometimes.

But seriously, being a "Quantitative Oracle Performance Analyst" is always my goal because my work is quantifiable, reference-able and sets me up for advanced analysis.


So there you go! Five t-shirts for the serious and sometimes strange Oracle DBA. Not only will these t-shirts prove and reinforce your geeky reputation, but you'll get a small yet satisfying feeling your job is special...though a little strange at times.

All the best in your Oracle performance endeavors!

Craig.
Categories: DBA Blogs

Auto Sales Data Visualization by Manufacturer

Nilesh Jethwa - Mon, 2014-12-01 14:42

Data: Edmunds


Top Manufacturer


Quarterly breakup of units sold by manufacturer


View the interactive visualizations

About Proofs and Stories, in 4 Parts

Oracle AppsLab - Mon, 2014-12-01 14:14

Editor’s note: Here’s another new post from a new team member. Shortly after the ‘Lab expanded to include research and design, I attended a workshop on visualizations hosted by a couple of our new team members, Joyce, John and Julia. 

The event was excellent. John and Julia have done an enormous amount of critical thinking about visualizations, and I immediately started bugging them for blog posts. All the work and research they’ve done needs to be freed into the World so anyone can benefit from it. This post includes the first three installments, and I hope to get more. Enjoy.

Part 1

I still haven’t talked anyone into reading Proofs and Stories, and god knows I tried. If you read it, let me know. It is written by the author of Logicomix, Apostolos Doxiadis, if that makes the idea of reading Proofs and Stories more enticing. If not, I can offer you my summary:

H.C. Berann. Yellowstone National Park panorama

1. Problem solving is like a quest. As in a quest, you might set off thinking you are bound for Ithaka only to find yourself on Ogygia years later. Or, in Apostolos’ example, you might set off to prove Fermat’s Last Theorem only to find yourself studying elliptic curves for years. The seeker walks through many paths, wanders in circles, reverses steps, and encounters dead ends.

2. The quest has a starting point = what you know, the destination = the hypothesis you want to prove, and the points in between = statements of facts. A graph, in the mathematical sense, is a great way to represent this. A is the starting point, B is the destination, F is a transitive point, C is a choice.


A story is a path through the graph, defined by the choices a storyteller makes on behalf of his characters.


Frame P5 below shows Snowy’s dilemma. Snowy’s choice determines what happens to Tintin in Tibet. If only Snowy had not gone for the bone, the story would be different.


Image from Tintin in Tibet by Hergé

Even though its own nature dictates that a story be linear, there is always a notion of alternative paths. How to linearize the forks and branches of the path so that the story is most interesting is the art of storytelling.

3. Certain weight, or importance, can be suggested based on the number of choices leading to a point, or resulting from it.


When a story is summarized, each storyteller is likely to come up with a different outline. However, the most important points usually survive the majority of summarizations.

Stories can be similar. The practitioners of both narrative and problem solving rely on patterns to reduce choice and complexity.

So what does this have to do with anything?

Part 2

Another book I cannot make anyone read but myself is called “Interaction Design for Complex Problem Solving: Developing Useful and Usable Software” by Barbara Mirel. The book is as voluminous as its title suggests: 397 pages, of which I made it to page 232 in four years. This probably doesn’t entice you into reading the book. Luckily there is a one-pager paper, “Visualizing complexity: Getting from here to there in ill-defined problem landscapes,” from the same author on the same very subject. If this is still too much to read, may I offer you my summary?

Mainly, cut and paste from Mirel’s text:

1. Complex problem solving is an exploration across rugged and at times uncharted problem terrains. In that terrain, analysts have no way of knowing in advance all moves, conditions, constraints or consequences. Problem solvers take circuitous routes through “tracts” of tasks toward their goals, sometimes crisscrossing the landscape and jump across foothills to explore distant knowledge, to recover from dead ends, or to reinvigorate inquiry.

2. Mountainscapes are effective ways to model and visualize complex inquiry. These models stress relationships among parts and do not reduce problem solving to linear and rule-based procedures or work flows. Mountainscapes represent spaces as being as important to coherence as the paths. Selecting the right model affects the design of the software and whether complex problem solvers experience useful support. Models matter.

B. Mirel, L. Allmendinger. Analyzing sleep products visualized as a mountain climb

Complex problems can neither be solved nor supported with linear or pre-defined methods. Complex problems have many possible heuristics, indefinite parameters, and ranges of outcomes rather than one single right answer or stopping point.

3. Certain types of complex problems recur in various domains and, for each type, analysts across organizations perform similar patterns of inquiry. Patterns of inquiry are the regularly repeated sets of actions and knowledge that have a successful track record in resolving a class of problems in a specific domain.

And so what does this have to do with anything?

Part 3

A colleague of mine, Dan Workman, once commented on a sales demo of a popular visual analytics tool. “Somehow,” he said, “the presenter drills down here, pivots there, zooms out there, and, miraculously, arrives at that view of the report where the answer to his question lies. But how did he know to go there? How would anyone know where the insight hides?”

His words stuck with me.

Imagine a simple visualization that shows a business’s revenue trend by region, by product, and by time. Let’s pretend the business operates in 4 regions, sells 4 products, and has been in business for 4 years. The combination of these parameters results in 64 views of the sales data. Now imagine that each region is made up of hundreds of countries. If the visualization allows the user to view sales by country, there will be thousands and thousands of additional views. In the real world, a business might also have many more products. The number of possible views could easily exceed what a human being can manually look at, and only some views (alone or in combination) possibly contain insight. But which ones?

I have yet to see an application that supports users in finding insightful views of a visualization. Often users won’t even know where to start.
So here is the connection between Part 1, Part 2, and Part 3: it’s the model. The visualization exploration can be represented as a graph (in the mathematical sense), where the points are the views and the connections are navigation between views. Users then trace a path through the graph as they explore new results.

J. Blyumen. Navigating Interactive Visualizations

From here, certain design research agenda comes to mind:

1. The world needs interfaces to navigate the problem mountainscapes: keeping track of places visited, representing branches and loops in the path, enabling the user to reverse steps, etc.

2. The world needs an interface for linearizing a completed quest into a story (research into presentation), and for outlining stories.

3. The world needs software smarts that can collect the patterns of inquiry and use them to guide problem solvers through the mountainscapes.

So I hope that from this agenda, Part 4 will eventually follow . . . .

Kilobytes, Kibibytes and DBMS_XPLAN undocumented functions

The Anti-Kyte - Mon, 2014-12-01 12:46

How many bytes in a Kilobyte ? The answer to this question is pretty obvious…and, apparently, wrong.
Yep, apparently we’ve had it wrong all these years for there are, officially, 1000 bytes in a Kilobyte, not 1024.
Never mind that 1000 is not a power of 2 and that, unless some earth-shattering breakthrough has happened whilst I wasn’t paying attention, binary is still the fundamental basis of computing.
According to the IEEE, there are 1000 bytes in a kilobyte and we should all get used to talking about a collection of 1024 bytes as a Kibibyte.

Can you imagine dropping that into a conversation ? People might look at you in a strange way the first time “Kibibyte” passes your lips. If you then move on and start talking about Yobibytes, they may well conclude that you’re just being silly.

Let’s face it, if you’re going to be like that about things then C++ is actually an object orientated language and the proof is not in the pudding – the proof of the pudding is in the eating.

All of which petulant pedantry brings me on to the point of this particular post – some rather helpful formatting functions that are hidden in, of all places, the DBMS_XPLAN package…

Function Signatures

If we happened to be strolling through the Data Dictionary and issued the following query…

select text
from dba_source
where owner = 'SYS'
and type = 'PACKAGE'
and name = 'DBMS_XPLAN'
order by line
/

we might be surprised at what we find….

***snip***
  ----------------------------------------------------------------------------
  -- ---------------------------------------------------------------------- --
  --                                                                        --
  -- The folloing section of this package contains functions and procedures --
  -- which are for INTERNAL use ONLY. PLEASE DO NO DOCUMENT THEM.           --
  --                                                                        --
  -- ---------------------------------------------------------------------- --
  ----------------------------------------------------------------------------
  -- private procedure, used internally

*** snip ***

  FUNCTION format_size(num number)
  RETURN varchar2;

  FUNCTION format_number(num number)
  RETURN varchar2;

  FUNCTION format_size2(num number)
  RETURN varchar2;

  FUNCTION format_number2(num number)
  RETURN varchar2;

  --
  -- formats a number representing time in seconds using the format HH:MM:SS.
  -- This function is internal to this package
  --
  function format_time_s(num number)
  return varchar2;

***snip***
Formatting a time in seconds

Let’s start with DBMS_XPLAN.FORMAT_TIME_S because we pretty much know what it does from the header comments.
To save myself a bit of typing, I’m just going to use the following SQL to see how the function copes with various values:

with actual_time as
(
    select &1 as my_secs
    from dual
)
select my_secs,
    dbms_xplan.format_time_s(my_secs) as formatted_time
from actual_time
/

Plug in a variety of numbers (representing a time in seconds) and…

SQL> @format_time.sql 60
old   3:     select &1 as my_secs
new   3:     select 60 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
               60.00 00:01:00

SQL> @format_time.sql 3600
old   3:     select &1 as my_secs
new   3:     select 3600 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
             3600.00 01:00:00

SQL> @format_time.sql 86400
old   3:     select &1 as my_secs
new   3:     select 86400 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
            86400.00 24:00:00

SQL> @format_time.sql 129784
old   3:     select &1 as my_secs
new   3:     select 129784 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
           129784.00 36:03:04

SQL> 

I wonder how it treats fractions of a second ….

SQL> @format_time.sql  5.4
old   3:     select &1 as my_secs
new   3:     select 5.4 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
                5.40 00:00:05

SQL> @format_time.sql  5.5
old   3:     select &1 as my_secs
new   3:     select 5.5 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
                5.50 00:00:06

SQL> 

So, the function appears to round to the nearest second. Not great if you’re trying to list the times of the Olympic 100 metres finalists, but fine for longer durations where rounding to the nearest second is appropriate.
One minor quirk to be aware of:

SQL> @format_time.sql 119.5
old   3:     select &1 as my_secs
new   3:     select 119.5 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
              119.50 00:01:60

SQL> 

SQL> @format_time.sql 3599.5
old   3:     select &1 as my_secs
new   3:     select 3599.5 as my_secs

             MY_SECS FORMATTED_TIME
-------------------- --------------------------------------------------
             3599.50 00:59:60

SQL> 


If the seconds component rounds up to 60, the function displays 60 seconds rather than carrying the extra minute over.
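
If the :60 display bothers you, one workaround is to round the total number of seconds before splitting it into hours, minutes and seconds. Here’s a minimal home-rolled sketch (my own, not part of DBMS_XPLAN):

with actual_time as
(
    select round(&1) as my_secs
    from dual
)
select to_char(trunc(my_secs/3600), 'FM00') || ':' ||
    to_char(trunc(mod(my_secs, 3600)/60), 'FM00') || ':' ||
    to_char(mod(my_secs, 60), 'FM00') as formatted_time
from actual_time
/

Plug in 119.5 and this returns 00:02:00 rather than 00:01:60.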

Formatting Numbers

Next on our list of functions to explore are FORMAT_NUMBER and FORMAT_NUMBER2. At first glance, it may appear that these functions are designed to represent sizes using the IEEE standard definitions…

with myval as
(
    select &1 as the_value
    from dual
)
select the_value, 
    dbms_xplan.format_number(the_value) as format_number, 
    dbms_xplan.format_number2(the_value) as format_number2
from myval
/

Run this with a variety of inputs and we get:

SQL> @format_number.sql 999
old   3:     select &1 as the_value
new   3:     select 999 as the_value

 THE_VALUE FORMAT_NUMBER                  FORMAT_NUMBER2
---------- ------------------------------ ------------------------------
       999 999                             999

SQL> @format_number.sql 1000
old   3:     select &1 as the_value
new   3:     select 1000 as the_value

 THE_VALUE FORMAT_NUMBER                  FORMAT_NUMBER2
---------- ------------------------------ ------------------------------
      1000 1000                              1K

SQL> @format_number.sql 1024
old   3:     select &1 as the_value
new   3:     select 1024 as the_value

 THE_VALUE FORMAT_NUMBER                  FORMAT_NUMBER2
---------- ------------------------------ ------------------------------
      1024 1024                              1K

SQL> @format_number.sql 1000000
old   3:     select &1 as the_value
new   3:     select 1000000 as the_value

 THE_VALUE FORMAT_NUMBER                  FORMAT_NUMBER2
---------- ------------------------------ ------------------------------
   1000000 1000K                             1M

SQL> 

SQL> @format_number.sql 1500
old   3:     select &1 as the_value
new   3:     select 1500 as the_value

 THE_VALUE FORMAT_NUMBER                  FORMAT_NUMBER2
---------- ------------------------------ ------------------------------
      1500 1500                              2K

SQL> 

The FORMAT_NUMBER2 function reports 1000 as 1K.
Furthermore, for numbers above 1000, it appears to round to the nearest 1000.
FORMAT_NUMBER on the other hand, doesn’t start rounding until you hit 1000000.

From this it seems reasonable to infer that these functions are designed to present large decimal numbers in an easily readable format rather than being an attempt to conform to the new-fangled definition of a Kilobyte ( or Megabyte…etc).
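
In fact, FORMAT_NUMBER2’s observed behaviour can be approximated with a home-grown CASE expression (my guess at the logic, not the package’s actual implementation):

with myval as
(
    select &1 as n
    from dual
)
select case
        when n < 1000 then to_char(n)
        when n < 1000000 then to_char(round(n/1000)) || 'K'
        else to_char(round(n/1000000)) || 'M'
    end as approx_number2
from myval
/

For the inputs above, this reproduces the outputs shown (999, 1K, 2K and 1M).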

Using the following script, I’ve created the BIG_EMPLOYEES table and populated it with 100,000 or so rows…

create table big_employees as
    select * from hr.employees
/

begin
    for i in 1..1000 loop
        insert into big_employees
        select * from hr.employees;
    end loop;
    commit;
end;
/

If we now apply these functions to count the rows in the table, we get the following:

select count(*),
    dbms_xplan.format_number(count(*)) as format_number,
    dbms_xplan.format_number2(count(*)) as format_number2
from big_employees
/

  COUNT(*) FORMAT_NUMBER        FORMAT_NUMBER2
---------- -------------------- --------------------
    107107 107K                  107K

You can see from this how these functions might be useful when you’re looking at the number of rows in a very large table (perhaps several million).

Counting the Kilobytes properly

We now come to the other two functions we’ve identified – FORMAT_SIZE and FORMAT_SIZE2.

with myval as
(
    select &1 as the_value
    from dual
)
select the_value, 
    dbms_xplan.format_size(the_value) as format_size, 
    dbms_xplan.format_size2(the_value) as format_size2
from myval
/

Running this, the results are:

SQL> @format_size.sql 999
old   3:     select &1 as the_value
new   3:     select 999 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
       999 999                   999

SQL> @format_size.sql 1000
old   3:     select &1 as the_value
new   3:     select 1000 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
      1000 1000                 1000

SQL> @format_size.sql 1024
old   3:     select &1 as the_value
new   3:     select 1024 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
      1024 1024                    1k

SQL> @format_size.sql 1000000
old   3:     select &1 as the_value
new   3:     select 1000000 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
   1000000 976K                  977k

SQL> @format_size.sql 1048576
old   3:     select &1 as the_value
new   3:     select 1048576 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
   1048576 1024K                   1m

SQL> @format_size.sql 2047.4
old   3:     select &1 as the_value
new   3:     select 2047.4 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
    2047.4 2047                    2k

SQL> @format_size.sql 2047.5
old   3:     select &1 as the_value
new   3:     select 2047.5 as the_value

 THE_VALUE FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
    2047.5 2047                    2k

SQL> 

Things to notice here include the fact that FORMAT_SIZE appears to FLOOR the value (1000000 bytes = 976.56K, reported as 976K), whereas FORMAT_SIZE2 rounds to the nearest unit (977k).
Additionally, once you pass in a value of 1024 or more, FORMAT_SIZE2 reports in the next unit up: k from 1024 bytes, m from 1048576, and so on.
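
Again, a home-grown CASE expression seems to approximate FORMAT_SIZE2 (my sketch, assuming the k/m pattern continues for larger units):

with myval as
(
    select &1 as bytes
    from dual
)
select case
        when bytes < 1024 then to_char(bytes)
        when bytes < power(1024, 2) then to_char(round(bytes/1024)) || 'k'
        else to_char(round(bytes/power(1024, 2))) || 'm'
    end as approx_size2
from myval
/

This reproduces the outputs above, including 977k for 1000000 bytes and 1m for 1048576.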

So, if we want to know the size of the BIG_EMPLOYEES table we’ve just created:

select bytes, 
    dbms_xplan.format_size(bytes) as format_size,
    dbms_xplan.format_size2(bytes) as format_size2
from user_segments
where segment_name = 'BIG_EMPLOYEES'
/

     BYTES FORMAT_SIZE          FORMAT_SIZE2
---------- -------------------- --------------------
   9437184 9216K                   9m

If all you need is an approximate value, then FORMAT_SIZE2 could be considered a reasonable alternative to:

select bytes/1024/1024 as MB
from user_segments
where segment_name = 'BIG_EMPLOYEES'
/

As well as serving its primary purpose of formatting execution plans, DBMS_XPLAN offers some fairly useful functions if you need a quick approximation of timings, counts or even sizes.
Fortunately, it adheres to the traditional definition of a Kilobyte as 1024 bytes rather than “Litebytes”.


Filed under: Oracle, PL/SQL, SQL Tagged: dbms_xplan, format_number, format_number2, format_size, format_size2, format_time_s

Watch: Hadoop vs. Cassandra

Pythian Group - Mon, 2014-12-01 10:53

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky. Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms and presenting use cases to help you identify which ones will best suit your needs.

“Hadoop is generally deployed in a single data center, multi-rack deployment, but they’re all reasonably geographically co-located with each other,” Alex explains. Cassandra, on the other hand, “…is frequently deployed in a very distributed fashion… somewhere in Asia, Europe, North America… So you end up with a very fault-tolerant environment.” Learn how the two platforms compare by watching Alex’s video, Hadoop vs. Cassandra.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Find the rest of the series here.

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Big Data expertise.

Categories: DBA Blogs