
Feed aggregator

Keystore and Testing Suite Whitepapers available

Anthony Shorten - Tue, 2015-05-26 18:54

Two new whitepapers are available for Oracle Utilities Application Framework based products from My Oracle Support.

The new whitepapers are:

  • Oracle Application Testing Suite for Oracle Utilities - Overview (Doc Id: 2014163.1) - This is an overview of the Functional and Load Testing accelerator released for Oracle Utilities applications, which uses Oracle Application Testing Suite to accelerate functional, regression, load and performance testing. The whitepaper outlines the features of the accelerator, the role Oracle Application Testing Suite plays in the testing process, and includes a frequently asked questions section to clarify the accelerator.
  • Oracle Utilities Application Framework - Keystore Configuration (Doc Id: 2014161.1) - This is an overview of keystore management for Oracle Utilities Application Framework based products. The keystore is used for advanced security of transport and for inbuilt application encryption.


node-oracledb 0.6.0 is on NPM (Node.js driver for Oracle Database)

Christopher Jones - Tue, 2015-05-26 16:31

Node-oracledb 0.6.0 is now out on NPM. The Oracle Database Node.js driver powers high performance Node.js applications.

There is one feature change in this release: node-oracledb now builds with Node.js 0.10, 0.12 and with io.js. Huge thanks to Richard Natal for his GitHub pull request that added support.

For more information about node-oracledb see the node-oracledb GitHub page.

12c Dynamic Sampling and Standard Edition

Yann Neuhaus - Tue, 2015-05-26 14:47

12c comes with more dynamic sampling, now called dynamic statistics, using the new Adaptive Dynamic Sampling algorithm. The goal is to have better estimations, and better estimations give better execution plans. However, this new approach will increase parse time because dynamic sampling kicks in more often, reads more blocks, and runs more queries.

It's probably not a problem for applications that are well designed: using bind variables to avoid too many parses, and having good statistics (histograms where they make sense, extended statistics for correlated columns). The SQL Plan Directives are there to trigger dynamic sampling only where misestimates have been observed. An OLTP application should not parse often and should not have huge misestimates. A reporting use case can spend more time on parsing, and the few seconds spent on dynamic sampling will probably benefit the execution time.

In addition, in order to lower the dynamic sampling overhead, Oracle 12c Adaptive Dynamic Sampling runs its queries with the /*+ result_cache(snapshot=3600) */ hint. The result is cached in the result cache and is not invalidated by dependencies, so even when the underlying table is updated, the dynamic sampling result stays valid in the cache for 3600 seconds. This is why doing more dynamic sampling is not a big overhead, provided that:

  • your result cache is sized accordingly. The default (0.25% of MEMORY_TARGET, or 0.5% of SGA_TARGET, or 1% of SHARED_POOL_SIZE) is probably too low to fit all the dynamic sampling results for frequently parsed statements.
  • your result cache is enabled, meaning that you are in Enterprise Edition.
So the question of the day is: is the RESULT_CACHE hint simply ignored in Standard Edition, or is there a mechanism that allows it for Adaptive Dynamic Sampling?

If you have a bad application (not using bind variables, parsing at each execution) and you are in Standard Edition, then there is a risk that the parse contention you currently suffer from (CPU and latches) will become even more problematic (more CPU and I/O). Let's try the following:

declare
 c sys_refcursor;
begin
 for i in 1..100
 loop
  open c for 'select count(*) COUNT'||i||' from DEMO_TABLE where a+b=c+d';
  dbms_sql.return_result(c);
  null;
 end loop;
end;
/
which runs the same statement 100 times without using bind variables. So I'm parsing it each time, but it's reading the same table with the same predicate, so the result of dynamic sampling should not change much.

I'll run it in Standard and Enterprise editions, with no dynamic sampling, and with the new AUTO level.
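For reference, a sketch of how the test can be driven: the AUTO level maps to OPTIMIZER_DYNAMIC_SAMPLING level 11 in 12c, and the 12c dynamic statistics queries are tagged with a DS_SVC comment, so V$RESULT_CACHE_OBJECTS can be checked for them (treat the exact filters as assumptions to adapt to your system):

alter session set optimizer_dynamic_sampling=11;  -- the new AUTO level

-- run the PL/SQL block above, then see whether the dynamic sampling
-- queries ended up in the result cache
select type, status, scan_count, name
from v$result_cache_objects
where name like '%DS_SVC%';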

Nymi Band Impressions

Oracle AppsLab - Tue, 2015-05-26 13:35

Editor’s note: Here’s the first post from Osvaldo Villagrana (@vaini11a), one of our AppsLab Mexico team members. Enjoy.

During the last week I've been playing with the Nymi Discovery Kit I got from our AT&T hackathon participation, and here are my impressions, both from a developer's point of view on the Nymi SDK and as a user.

For those who don't know, this band is a wearable biometric identity device that lets you use your heart's unique signature (a.k.a. electrocardiogram, or ECG) to authenticate and validate your identity.

The main problem they want to solve is sparing users from having to remember all the passwords, PINs and security codes we use on a daily basis.

First off, the Discovery Kit includes the band, a Bluetooth dongle for Windows and a USB cable for charging the band. The Bluetooth dongle is included because at the beginning the Nymi band could only be paired with Windows, but now it can be paired with OS X and Android as well.


The Nymi band material at first feels cheap and easy to bend and break, but it really fits very well on my wrist. The band's connection terminals at both ends of the cord are very exposed to water and dust; they say it is water resistant but not waterproof.


The band is adjustable and can accommodate wrist sizes up to 7.5” in circumference. A full charge takes approximately two hours from a wall outlet or computer, and the battery lasts three days.


Setting up the band requires some steps: the band must be enrolled and authenticated with your own ECG using the NCA (Nymi Companion App), available on Windows, OS X and Android. I decided to use the Android app this time; I tried OS X and Windows as well, and it's the same. Once the band is clasped on your wrist, it will confirm the charge level and immediately enter broadcast mode.

I found this step a bit confusing, as there's no feedback once the band is in broadcast mode, so you are not quite sure whether your band is ready to be discovered. The funny thing is there's no way to turn it off.

After the band is clasped, the Android app asks you to put your finger over the sensor on the band. It takes about a minute for the app to analyze and save your ECG info. After that, you're ready to pair your Nymi with any NEA (Nymi Enabled App, i.e. a third-party app). The band supports only up to 7 different app profiles (they say more will be supported in coming updates).


Any time the clasp is opened, the band must be authenticated once again, but with the same NCA app as before. If you want to use a different NCA app (OS X or Windows), the band has to be reset and the setup started over. This is not ideal.

NEAs must provision a unique key-value pair (profile) that is saved in the band for future use, and this happens only once for each NEA. The NEA should store the provision returned from the band for future communication. On subsequent usage, NEAs validate against the provisioned Nymi band. Once validation is successful, the NEA can assume an authenticated user. All of those steps must be implemented by the developer using the SDKs for the different platforms.


To complete the exercise, I wrote an Android app that implements the provisioning and validation flow and finally authenticates the user if they are close enough to the device, in this case a phone or tablet. After I got authenticated, my wife wore the band and tried to get authenticated, but authentication failed every time, as expected.

The SDK is good but needs some enhancements. Even at Nymi they are having a hard time with problems in their own NEAs, like the unlock app for Mac OS X that currently is not working, and I have posted a couple of issues and bugs I found.

As a first attempt in this new authentication-automation niche, I like it, and I think it is good enough.

I see a lot of potential and possible use cases for this band in the enterprise. I would definitely use it, but what I would really love is a band that handles authentication, sports and motion tracking, notifications and time in the same device. That's probably too much for now, but I'm looking forward to seeing that device soon.

MySQL Query Profiling with Performance Schema

Pythian Group - Tue, 2015-05-26 12:34

One of my favorite tools for query optimization is profiling. But recently I noticed this warning:

mysql> set profiling=1;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> show warnings;
+---------+------+----------------------------------------------------------------------+
| Level   | Code | Message                                                              |
+---------+------+----------------------------------------------------------------------+
| Warning | 1287 | '@@profiling' is deprecated and will be removed in a future release. |
+---------+------+----------------------------------------------------------------------+

After looking through the documentation, it seems I should indeed start using the Performance Schema to get this information.

Okay, so let’s give that a try.

I confirmed that I started MySQL 5.6.23 with the default of Performance Schema = ON:

mysql> show global variables like '%perf%';
+--------------------------------------------------------+-------+
| Variable_name                                          | Value |
+--------------------------------------------------------+-------+
| performance_schema                                     | ON    |
...

I’ll be using a development server for doing query profiling, so I can turn all of these on:

mysql> update performance_schema.setup_instruments set enabled='YES', timed='YES'; #you want the stage* ones enabled
mysql> update performance_schema.setup_consumers set enabled='YES'; #you want the events_statements_history* and events_stages_history* enabled
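To verify that the stage instruments and the history consumers really are enabled, you can simply query the setup tables back (the LIKE patterns below are just examples):

mysql> select name, enabled, timed from performance_schema.setup_instruments where name like 'stage/sql/%' limit 5;
mysql> select * from performance_schema.setup_consumers where name like '%history_long';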

Start with fresh collection tables:

mysql> truncate performance_schema.events_stages_history_long;
mysql> truncate performance_schema.events_statements_history_long;

Then turn the profiler on:

mysql> set profiling=1;

Now run a sample query:

mysql> select distinct(msa) from zip.codes;

And find the resulting event IDs to use in the query below:

mysql> select event_id, end_event_id, sql_text from performance_schema.events_statements_history_long where sql_text like '%msa%';
...
|      41 |       938507 | select distinct(msa) from zip.codes                                                                  |
...

Insert those beginning and ending event IDs, and here’s the new profiling output on my test query from Performance Schema:

mysql> select substring_index(event_name,'/',-1) as Status, truncate((timer_end-timer_start)/1000000000000,6) as Duration from performance_schema.events_stages_history_long where event_id>=41 and event_id<=938507;

+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| init                 | 0.000103 |
| checking permissions | 0.000006 |
| Opening tables       | 0.000051 |
| init                 | 0.000014 |
| System lock          | 0.000007 |
| optimizing           | 0.000003 |
| statistics           | 0.000011 |
| preparing            | 0.000011 |
| Creating tmp table   | 0.000048 |
| executing            | 0.000002 |
| Sending data         | 1.251331 |
| end                  | 0.000003 |
| removing tmp table   | 0.000008 |
| query end            | 0.000006 |
| closing tables       | 0.000009 |
| freeing items        | 0.000111 |
| cleaning up          | 0.000002 |
+----------------------+----------+

Compare the legacy profiling available for the query:

mysql> show profile for query 1;
+----------------------+----------+
| Status               | Duration |
+----------------------+----------+
| starting             | 0.000125 |
| checking permissions | 0.000007 |
| Opening tables       | 0.000020 |
| init                 | 0.000014 |
| System lock          | 0.000007 |
| optimizing           | 0.000003 |
| statistics           | 0.000011 |
| preparing            | 0.000011 |
| Creating tmp table   | 0.000027 |
| executing            | 0.000001 |
| Sending data         | 1.353825 |
| end                  | 0.000005 |
| removing tmp table   | 0.000007 |
| end                  | 0.000002 |
| query end            | 0.000006 |
| closing tables       | 0.000009 |
| freeing items        | 0.000069 |
| cleaning up          | 0.000028 |
+----------------------+----------+

The obvious question is: why would I want to be limited to this information when the Performance Schema has so much more available?

But this proves we can still get profiler information in a format we're used to once MySQL fully removes the profiling tool.
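As a side note, the manual step of copying the event IDs can be avoided by joining the statement and stage history tables on NESTING_EVENT_ID. This is only a sketch based on the same sample query; the columns used are standard Performance Schema columns in 5.6:

mysql> select substring_index(stg.event_name,'/',-1) as Status,
              truncate(stg.timer_wait/1000000000000,6) as Duration
       from performance_schema.events_statements_history_long stmt
       join performance_schema.events_stages_history_long stg
         on stg.nesting_event_id = stmt.event_id
       where stmt.sql_text like '%msa%'
       order by stg.event_id;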

 

Learn more about Pythian’s expertise in MySQL.

Categories: DBA Blogs

Making Existing SQLPLUS Scripts 12c and Container DB (PDB) Compatible

Pythian Group - Tue, 2015-05-26 12:21

Oracle 12c introduces new catalog features, including CDB_ dictionary views (which include a CON_ID column) that supersede the DBA_ views most DBA sqlplus scripts are based upon.

However, existing DBA sqlplus scripts can easily be modified using just a few simple sqlplus techniques to be compatible with 11g, as well as all types of 12c databases including legacy and container databases.

The following simple SQL and sqlplus techniques can be used to make a “universal script” that is compatible with all versions.

Illustrating the Issue

Let's say, for the sake of example, that we have a simple 10g/11g monitoring script that checks the amount of free space in each tablespace by querying the DBA_TABLESPACE_USAGE_METRICS view.

On our 10g or 11g database the following query gives the necessary information:

SQL> select version from v$instance;

VERSION
-----------------
11.2.0.4.0

SQL> select tablespace_name, tablespace_size, used_percent
  2  from DBA_TABLESPACE_USAGE_METRICS
  3  order by tablespace_name;

TABLESPACE_NAME                TABLESPACE_SIZE USED_PERCENT
------------------------------ --------------- ------------
FCCDEV                                  256000      .053125
SYSAUX                                 1024000   31.0617188
SYSTEM                                 1024000   9.19453125
TEMP                                   1024000            0
UNDOTBS1                               1024000      .015625
USERS                                   256000        1.275

6 rows selected.

SQL>

 

Now will the same query work on a 12c database? Of course it will:

SQL> select version from v$instance;

VERSION
-----------------
12.1.0.2.0

SQL> select tablespace_name, tablespace_size, used_percent
  2  from DBA_TABLESPACE_USAGE_METRICS
  3  order by tablespace_name;

TABLESPACE_NAME                TABLESPACE_SIZE USED_PERCENT
------------------------------ --------------- ------------
SYSAUX                                 4194302   .773048769
SYSTEM                                 4194302   1.05991414
TEMP                                   4194302            0
UNDOTBS1                               4194302   .031280532
USERS                                  4194302   .003051759

SQL>

 

It executes successfully on the 12c database, but there's a problem: the query only returns data from the root container (or, more accurately, from the container in which the statement was executed). The PDB data is missing; I have both open and closed PDBs in this database:

SQL> select con_id, name, open_mode from V$CONTAINERS order by con_id;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         1 CDB$ROOT                       READ WRITE
         2 PDB$SEED                       READ ONLY
         3 TEST1                          READ WRITE
         4 LDB3                           MOUNTED

SQL>

 

The LDB3 PDB is closed (mounted), so I'm not interested in monitoring tablespace free space in it, but I am interested in the details from the open TEST1 PDB.

To get the required information we need to make two or three changes (the third being optional):

1) Change the view from DBA_ to CDB_
2) Add the CON_ID column to the output
3) Add the CON_ID column to the ORDER BY clause

Hence (executing from CDB$ROOT) the query becomes:

SQL> select con_id, tablespace_name, tablespace_size, used_percent
  2  from CDB_TABLESPACE_USAGE_METRICS
  3  order by con_id, tablespace_name;

    CON_ID TABLESPACE_NAME                TABLESPACE_SIZE USED_PERCENT
---------- ------------------------------ --------------- ------------
         1 SYSAUX                                 4194302   .773048769
         1 SYSTEM                                 4194302   1.05991414
         1 TEMP                                   4194302            0
         1 UNDOTBS1                               4194302   .031280532
         1 USERS                                  4194302   .003051759
         3 AUDIT_DATA                               64000        .2875
         3 SYSAUX                                 4194302   .410843091
         3 SYSTEM                                 4194302   .474167096
         3 TPCCTAB                                1024000   5.63203125

9 rows selected.

SQL>

 

So that works fine, but as it stands we have two versions of the query and therefore we need two monitoring scripts.

 

Building Blocks for the Universal Script

Applying a number of simple sqlplus techniques can help us with this and will allow us to make the single universal version of the sqlplus script.

1) Use a SQLPLUS variable:

Sqlplus substitution variables allow us to parameterize the script. We can easily define a variable (either with the DEFINE command or, as below, populated from a query via COLUMN ... NEW_VALUE) that tells us which view prefix to use depending on whether the database version is 11g or 12c.

SQL> COLUMN view_prefix NEW_VALUE view_prefix
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','CDB','DBA') view_prefix FROM v$instance;

VIE
---
CDB

SQL>

 

2) Dynamically build the view name:

The second tip is that in sqlplus, when concatenating a variable with a string, a period must be used to show where the variable name ends:

SQL> prompt &view_prefix
CDB

SQL> prompt &view_prefix._TABLESPACE_USAGE_METRICS
CDB_TABLESPACE_USAGE_METRICS

SQL>

 

Plugging that into the original query gives:

SQL> select tablespace_name, tablespace_size, used_percent
  2  from &view_prefix._TABLESPACE_USAGE_METRICS
  3  order by tablespace_name;
old   2: from &view_prefix._TABLESPACE_USAGE_METRICS
new   2: from CDB_TABLESPACE_USAGE_METRICS

TABLESPACE_NAME                TABLESPACE_SIZE USED_PERCENT
------------------------------ --------------- ------------
AUDIT_DATA                               64000        .2875
SYSAUX                                 4194302   .410843091
SYSAUX                                 4194302   .773048769
SYSTEM                                 4194302   1.05991414
SYSTEM                                 4194302   .474167096
TEMP                                   4194302            0
TPCCTAB                                1024000   5.63203125
UNDOTBS1                               4194302   .031280532
USERS                                  4194302   .003051759

9 rows selected.

SQL>

But we’re missing the container ID column.

 

3) Add columns dynamically using additional sqlplus variables:

We can “optionally” include columns such as the CON_ID column using the same technique:

SQL> COLUMN view_prefix NEW_VALUE view_prefix NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','CDB','DBA') view_prefix FROM v$instance;

SQL> COLUMN con_id_col NEW_VALUE con_id_col NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','con_id,','') con_id_col FROM v$instance;

SQL> select &con_id_col tablespace_name, tablespace_size, used_percent
  2  from &view_prefix._TABLESPACE_USAGE_METRICS
  3  order by &con_id_col tablespace_name;
old   1: select &con_id_col tablespace_name, tablespace_size, used_percent
new   1: select con_id, tablespace_name, tablespace_size, used_percent
old   2: from &view_prefix._TABLESPACE_USAGE_METRICS
new   2: from CDB_TABLESPACE_USAGE_METRICS
old   3: order by &con_id_col tablespace_name
new   3: order by con_id, tablespace_name

    CON_ID TABLESPACE_NAME                TABLESPACE_SIZE USED_PERCENT
---------- ------------------------------ --------------- ------------
         1 SYSAUX                                 4194302   .773239504
         1 SYSTEM                                 4194302   1.05991414
         1 TEMP                                   4194302            0
         1 UNDOTBS1                               4194302   .003814699
         1 USERS                                  4194302   .003051759
         3 AUDIT_DATA                               64000        .2875
         3 SYSAUX                                 4194302   .410843091
         3 SYSTEM                                 4194302   .474167096
         3 TPCCTAB                                1024000   5.63203125

9 rows selected.

SQL>

 

Note that the comma is in the variable and not in the column list in the SQL SELECT or ORDER BY clauses.

The script is now dynamically determining whether to use the CDB_ or DBA_ view and similarly dynamically adding the CON_ID column to the SELECT and ORDER BY clauses. (And of course should be executed from the root container.)

And the exact same script still works on the 11g database using the 11g version of sqlplus!

Similarly, the optional column (including the comma) defined in the sqlplus variable could be used in an aggregation GROUP BY clause. However, if the query has no other aggregation columns, then we might need to add a constant to the GROUP BY clause (and ORDER BY); otherwise the GROUP BY would have no columns listed when the universal sqlplus script is executed against an 11g database.

For example:

SQL> COLUMN view_prefix NEW_VALUE view_prefix NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','CDB','DBA') view_prefix FROM v$instance;

SQL> COLUMN con_id_col NEW_VALUE con_id_col NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','con_id,','') con_id_col FROM v$instance;

SQL> select &con_id_col min(extended_timestamp), max(extended_timestamp)
  2  from &view_prefix._AUDIT_TRAIL
  3  group by &con_id_col 1 order by &con_id_col 1;
old   1: select &con_id_col min(extended_timestamp), max(extended_timestamp)
new   1: select con_id, min(extended_timestamp), max(extended_timestamp)
old   2: from &view_prefix._AUDIT_TRAIL
new   2: from CDB_AUDIT_TRAIL
old   3: group by &con_id_col 1 order by &con_id_col 1
new   3: group by con_id, 1 order by con_id, 1

    CON_ID MIN(EXTENDED_TIMESTAMP)                  MAX(EXTENDED_TIMESTAMP)
---------- ---------------------------------------- ----------------------------------------
         3 13-MAY-15 11.54.52.106301 AM -06:00      13-MAY-15 12.16.18.941308 PM -06:00

SQL>

 

Finally, once we’re done testing and debugging, we can get rid of the ugly “old” and “new” statements using:

SET VERIFY OFF
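Putting the pieces together, the universal version of the original tablespace monitoring script is then simply a consolidation of the snippets shown above:

SET VERIFY OFF

COLUMN view_prefix NEW_VALUE view_prefix NOPRINT
SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','CDB','DBA') view_prefix FROM v$instance;

COLUMN con_id_col NEW_VALUE con_id_col NOPRINT
SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','con_id,','') con_id_col FROM v$instance;

select &con_id_col tablespace_name, tablespace_size, used_percent
from &view_prefix._TABLESPACE_USAGE_METRICS
order by &con_id_col tablespace_name;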

Implementing these techniques allows most existing DBA sqlplus scripts to be turned into universal versions, which will be compatible with 11g (and likely earlier) databases as well as with 12c legacy and container databases.

 

Deeper Dive

What if our monitoring query is based on an underlying catalog table and not a dictionary view?

For example, let’s say that our objective is to report on users and the last time the database password was changed. The password change date isn’t presented in the DBA_USERS or CDB_USERS view, but it is in the underlying SYS.USER$ table. Hence the monitoring query might be something like:

SQL> select name, ptime from SYS.USER$
  2  where type#=1 order by name;

NAME                     PTIME
------------------------ ---------
ANONYMOUS                23-APR-15
...
SYSTEM                   23-APR-15
XDB                      23-APR-15
XS$NULL                  23-APR-15

 

If we look at the view definition of any of the CDB_ views it is apparent that the view traverses the open PDBs by using the new 12c “CONTAINERS” function which accepts a table name as the only argument.

When run from the root container the CONTAINERS() function will traverse all open PDBs (assuming the common user used has local PDB permission to access the referenced table).

NOTE: Prior to 12.1.0.2 the CONTAINERS function was called CDB$VIEW.

Thus, we can use the new function as follows:

SQL> select con_id, name, ptime from CONTAINERS(SYS.USER$)
  2  where type#=1 order by con_id, name;

    CON_ID NAME                     PTIME
---------- ------------------------ ---------
         1 ANONYMOUS                23-APR-15
...
         1 SYSTEM                   23-APR-15
         1 XDB                      23-APR-15
         1 XS$NULL                  23-APR-15
         3 ANONYMOUS                23-APR-15
...
         3 SYSTEM                   23-APR-15
         3 XDB                      23-APR-15
         3 XS$NULL                  23-APR-15

 

Or to make the script universal so the single script can be run on both 11g and 12c:

SQL> COLUMN view_prefix NEW_VALUE view_prefix NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','CONTAINERS(SYS.USER$)','SYS.USER$') view_prefix FROM v$instance;

SQL> COLUMN con_id_col NEW_VALUE con_id_col NOPRINT
SQL> SELECT DECODE(SUBSTR(version,1,INSTR(version,'.')-1),'12','con_id,','') con_id_col FROM v$instance;

SQL> select &con_id_col name, ptime from &view_prefix.
  2  where type#=1 order by &con_id_col name;
old   1: select &con_id_col name, ptime from &view_prefix.
new   1: select con_id, name, ptime from CONTAINERS(SYS.USER$)
old   2: where type#=1 order by &con_id_col name
new   2: where type#=1 order by con_id, name

    CON_ID NAME                     PTIME
---------- ------------------------ ---------
         1 ANONYMOUS                23-APR-15
...
         1 XDB                      23-APR-15
         1 XS$NULL                  23-APR-15
         3 ANONYMOUS                23-APR-15
...
         3 XDB                      23-APR-15
         3 XS$NULL                  23-APR-15

SQL>

 

A final question might be: why isn’t the PDB$SEED database shown in the results?

The answer is that a new 12c initialization parameter, EXCLUDE_SEED_CDB_VIEW, controls whether the seed database is displayed in CDB_ views (or in CONTAINERS() function calls). EXCLUDE_SEED_CDB_VIEW is dynamic and session-modifiable:

SQL> show parameter EXCLUDE_SEED_CDB_VIEW

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
exclude_seed_cdb_view                boolean     TRUE

SQL> select con_id, count(1) from cdb_users group by con_id;

    CON_ID   COUNT(1)
---------- ----------
         1         18
         3         20

SQL> alter session set EXCLUDE_SEED_CDB_VIEW=FALSE;

Session altered.

SQL> select con_id, count(1) from cdb_users group by con_id;

    CON_ID   COUNT(1)
---------- ----------
         1         18
         2         17
         3         20

SQL>

 

Other tools

A final question is whether this technique will still work if the SQL script is run through other tools. The answer is: "it depends".

It depends on whether the other tool supports the "define" command and the use of script variables. Specifically, Oracle SQL Developer and the newer sqlcl tool do. The above examples work fine in SQL Developer and in sqlcl using the standard sqlcl "default" sqlformat; other sqlformat options in sqlcl show some issues (tested with sqlcl version 4.2.0.15.121.1046).

 

Learn more about Pythian’s expertise in Oracle and MySQL.

Categories: DBA Blogs

Gilbane Webcast: Delivering Next-Gen Digital Experiences

WebCenter Team - Tue, 2015-05-26 12:02
Delivering Next-Gen Digital Experiences
Thursday, May 28, 2015 - 11:00am PT / 2:00pm ET

Becoming a digital business is an imperative for all organizations to deliver the next wave of revenue growth, service excellence and brand loyalty. Today's enterprises need to connect "experiences" to outcomes, encompassing the entire customer engagement lifecycle. And the stakes are high — 94% of customers discontinue communications because of irrelevant messages and experiences.

The key to success is leveraging enterprise-level marketing technology to unlock breakthrough innovations for audience engagement and connecting experiences to business outcomes:
  • Deliver omni-channel experiences that are seamless, tailored and innovative across Paid, Owned and Earned media
  • Convert unknown audiences to known and involved customers
  • Extend reach and orchestrate engagement across all channels and devices
  • Attribute Marketing performance to revenue across channels and target audiences
REGISTER NOW to reserve your seat for this special webinar event.

Log Buffer #424: A Carnival of the Vanities for DBAs

Pythian Group - Tue, 2015-05-26 11:45

This Log Buffer Edition covers various valuable blog posts from the fields of Oracle, SQL Server and MySQL.

Oracle:

  • Oracle Big Data Appliance X5-2 with Big Data SQL for the DBA.
  • Loading, Updating and Deleting From HBase Tables using HiveQL and Python.
  • In keeping with the ODA quarterly patching strategy, Appliance Manager 12.1.2.3 is now available.
  • From time to time someone publishes a query on the OTN database forum and asks how to make it go faster, and you look at it and think, “it’s a nice example to explain a couple of principles because it’s short, easy to understand, obvious what sort of things might be wrong, and easy to fix.”
  • Optimizing the PL/SQL Challenge IV: More OR Condition Woes.

SQL Server:

  • Will RDBMs be obsolete? Should Data Professionals care about Big Data technologies? What is NoSQL? What is Hadoop?
  • In a development team, there are times when the relationships between developers and testers can become strained. How can you turn this potential conflict into something more positive?
  • Michael Fal is a huge advocate of automation and many ways it can improve the lives of developers and DBAs alike, but you can’t just automate all your problems away.
  • One way to handle a very complex database project with several databases and cross references.
  • Building the Ideal VMware-based SQL Server Virtual Machine.

MySQL:

  • Optimizing Out-of-order Parallel Replication with MariaDB 10.0.
  • General-purpose MySQL applications should read MySQL option files like /etc/my.cnf, ~/.my.cnf, … and ~/.mylogin.cnf. But ~/.mylogin.cnf is encrypted.
  • Creating and Restoring Database Backups With mysqldump and MySQL Enterprise Backup.
  • If you don’t know much about bash shell, you should start with the prior post to learn about bash arrays.
  • Installing Kubernetes Cluster with 3 minions on CentOS 7 to manage pods and services.

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL.

Categories: DBA Blogs

Worth Considering: Faculty perspective on student-centered pacing

Michael Feldstein - Tue, 2015-05-26 11:43

By Phil HillMore Posts (318)

Over the weekend I wrote a post based on the comment thread at Friday’s Chronicle article on e-Literate TV.

One key theme coming through from comments at the Chronicle is what I perceive as an unhealthy cynicism that prevents many people from listening to students and faculty on the front lines (the ones taking redesigned courses) on their own merits.

Sunday’s post highlighted two segments of students describing their experiences with re-designed courses, but we also need to hear directly from faculty. Too often the public discussion of technology-enabled initiatives focuses on the technology itself, often assuming that the faculty involved are bystanders or technophiles. But what about the perspectives of faculty members – you know, those who are in the classrooms working with real students – on what challenges they face and what changes are needed from an educational perspective? There is no single perspective from faculty, but we could learn a great deal through their unique, hands-on experiences.

Consider the specific case of why students might need to work at their own pace.

The first example is from a faculty member at Middlebury College describing the need for a different, more personalized approach for his geographic information system (GIS) course.

Jeff Howarth: And what I would notice is that there would be some students who would like me to go a little bit faster but had to wait and kind of daydream because they were just waiting. And then there were some students that desperately wanted me slow down. Then you get into that kind of slowest-car-on-the-freeway, how-fast-can-you-really-go type of thing. So, I would slow down, which would lose part of the group.

Then there would be some folks that probably would want me to slow down but would never ask because they don’t want to call attention to themselves as being the kind of—the slow car on the freeway.

Michael Feldstein: At this point, Jeff realized that even his small class might not be as personalized as it could be with the support of a little more technology.

Jeff Howarth: What I realized is that, if I just started packaging that instruction, the worked example, I could deliver the same content but allow students to first—if I made videos and posted it on something like YouTube, I was putting out the same content, but students could now watch it at their own pace and in the privacy of being able to go as slow as they need to without the social hang-ups of being considered different.

So, that was really the first step of—I did all of this, and then I told another colleague in languages what I was doing. And he said, “Well, that’s called ‘flipping the classroom.’” And I thought, “OK.” I mean, but that’s not really why. I did it without knowing that I was flipping the classroom, but then that’s how it happened.

Compare this description with an example from an instructor at Essex County College teaching developmental math.

Pamela Rivera: When I was teaching the traditional method, I’ll have students coming in and they didn’t know how to multiply. They didn’t know how to add and subtract. Rarely would those students be able to stay throughout the semester, because after the third—no, even after the second week, everyone else was already in division and they’re still stuck.

And the teacher can’t stop the class and say, “OK, let’s continue with multiplication,” because you have a syllabus to stick to. You have to continue teaching, and so those students will be frustrated, and so they drop the class.

At the same time, you had students who—the first couple of weeks they’ll be extremely bored because they already know all of that. And so, unfortunately, what would happen is eventually you would get to a point in the content that—they don’t know that, but because they have been zoning out for weeks, they don’t get that “OK, now, I actually have to start paying attention.” And so, yes, they should have been able to do that, but they still are not very successful because they were used to not paying attention.

Remarkably Similar Descriptions

Although the two examples come from very different settings, the descriptions the faculty members give of the need for course designs that allow students to control their own pacing are remarkably similar. These isolated examples are not meant to end debate on personalized learning or on the role technology should play (rather, they should encourage debate), but it is very useful to listen to faculty members describe the challenges they face on an educational level.

The post Worth Considering: Faculty perspective on student-centered pacing appeared first on e-Literate.

Leveraging Oracle Developer Cloud Service in SQL and PL/SQL Projects - lifecycle and team collaboration

Shay Shmeltzer - Tue, 2015-05-26 11:37

Usually my demos are targeted at Java developers, but I realize that a lot of developers out there are not using Java - in the Oracle install base, for example, there is a huge community of PL/SQL developers. This, however, doesn't change their requirements for a development platform. They can still benefit from version management and code review functionality. They still need to track bugs/issues and requirements from their users, and they still need to collaborate in a team environment. 

So I decided to try it out and see what the development lifecycle experience would be for PL/SQL developers if they leverage the services provided by the Oracle Developer Cloud Service - here is a demo that shows a potential experience. 

What you'll see in the demo:

  • Using JDeveloper to create DB Diagrams, Tables and PL/SQL code
  • Version manage PL/SQL and SQL with Git
  • Defining a cloud project and adding users
  • Check code in, and branch PL/SQL functions
  • Tracking tasks for developers
  • Code review by team members
  • Build automation (with Ant) - and almost a deploy to the DB

As you can see it is quite a nice complete solution that is very quick to setup and use.

It seems that the concepts of continuous integration are not yet a common thing in the world of PL/SQL development. In the demo I use the Ant SQL task to show how you could run a SQL script you created to create the objects directly in the database - which is probably the equivalent of doing a deployment in the world of Java. If you prefer, you can also use Ant to copy files, zip them, or do many other tasks such as running automated testing frameworks.

The Ant task I used is this:

  <!-- classpath for the Oracle JDBC driver used by the <sql> task -->
  <path id="antclasspath">
    <fileset dir=".">
      <include name="ojdbc7.jar"/>
    </fileset>
  </path>

  <!-- run the generated script against the target database -->
  <target name="deploy">
    <sql driver="oracle.jdbc.OracleDriver" userid="hr2" password="hr"
         url="jdbc:oracle:thin:@//server:1521/sid" src="./script1.sql"
         classpathref="antclasspath"/>
  </target>

I had both the ojdbc7.jar file and the script file at the root of the project for convenience. 

While my demo uses JDeveloper, you should be able to achieve similar functionality with any tool that supports Git. In fact, if you'd rather not use a tool at all, you can simply use the command line to check your files directly into the cloud.

Categories: Development

OBIEE 11.1.1.9: New Features for Front-End Developers and Business Users

Rittman Mead Consulting - Tue, 2015-05-26 11:13

OBIEE 11.1.1.9 was released this May, and Robin and Mark wrote great posts about the New Features for System Administrators and the new support for HiveServer2 and Cloudera Impala respectively. In this post, we will see some useful new features for front-end development. Some of these features, like the treemap, new styles, etc., were included in the OBIEE 11.1.1.7.10 version, but I’m including them here anyway because if you are moving from an earlier release of 11.1.1.7 you probably haven’t seen them yet.

Search & Sort options inside the Subject Area

There is a new option to search inside the subject area when we create an analysis. If we press the magnifying glass icon in the Subject Areas pane, a search box appears and OBIEE returns all the objects that contain the text we entered. We can also use the % wildcard.


In addition, there is also an option to sort the objects inside the subject area, ordering the folders and the presentation columns inside them. By default the objects are shown in the order in which they are saved in the repository.


Save Calculated Items in the Web Catalog

A very useful feature for business users is the ability to create calculated items in an analysis and save them in the catalog in order to reuse them in other analyses for the same subject area.

In the Results tab there is a new icon for creating a new measure. The calculation is created in the same way the column formulas are created.


After the new measure is created, you can go to the Criteria tab and select the Save Column As option in the column pop-up menu. In this way you save the new measure as a new object in the catalog to be reused in other analyses. The feature of creating a new calculated measure for tables and pivot tables was included in the 11.1.1.7.10 version, but the possibility to save the column in the catalog is new in OBIEE 11.1.1.9.


You can also change the properties of the new column in the Criteria tab, but be aware that conditional formatting and conditional action links for the column are not saved to the catalog. Regular action links are saved for the column, as are the format properties.

The saved column can be edited from the catalog pane and the catalog page. All the changes that you make in the catalog are applied to every analysis that uses this column. If you modify it inside a particular analysis, those changes are valid only for the analysis you are working on.


To be able to save new measures as columns you should have the Save Column privilege.


Global Variables

In this version we find a new type of variable: the Global Variable. Global Variables can be defined in the context of one analysis and then used in other analyses, which is useful for intermediate reusable calculations.

To create a new global variable, select the Edit Formula option for the required column. When you press the Variable button in the Formula Editor you will see the new Global option. The Insert Global Variable dialog appears, and you can either select an existing global variable to use in the formula or create a new one. To create a new one you need to enter a name, type and value. If you want to use an expression as the value, like Min(Revenue) in the example, the data type should be Text.


To reference a global variable you need to use the fully qualified name including the context: @{global.variables.variablename}.
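For instance, assuming the global variable created above was named gv_min_revenue (both the variable name and the column reference here are just placeholders), a column formula could use it like this:

"Base Facts"."Revenue" / @{global.variables.gv_min_revenue}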


Also to manage global variables you should have the Manage Global Variables privilege.

New Visualisation: Treemap

There has been a new visualisation called treemap since OBIEE 11.1.1.7.10. A treemap groups the dimensional data that you selected in the analysis into tiles. By default, the tile size is based on the first measure of the analysis and the tile colour on the second measure. If you have only one measure in your analysis, it is used for both size and colour.


You can edit it like any other view and change the group-by information, the measures that drive the size and colour of the tiles, and the range of colours that you want to use. You can also choose between showing the information in coloured bins and using continuous colour fill. If you select the former, you can also choose how many different coloured bins to use.


New Preview Styles

When you want to preview an analysis from the Results tab, instead of immediately showing how the analysis will look, OBIEE offers you different styles in which to see it. This feature has also been available since the 11.1.1.7.10 version. If you want to create a custom style, Oracle recommends using the new Skyros style as a starting point.


Horizontal Layout for Radio Button and Check Box Dashboard Prompts

Radio button and check box dashboard prompts can now be shown horizontally. When you create a new dashboard prompt using a check box or radio button, a choice between Horizontal and Vertical Layout appears under Options.


Enhancements in Export & Print Options

In this version you will find more options in the Print and Export Options dialog. You can select whether to include charts, images and formatting, and specify column properties such as the column width and whether to wrap text in columns.


In summary, these features are simple but quite useful for business users and front-end developers, and give more flexibility to create better data visualisations.

 

Categories: BI & Warehousing

We Couldn't Have Said It Any Better

Linda Fishman Hoyle - Tue, 2015-05-26 09:54

We can tell Oracle’s cloud story, but it’s even more credible when customers do it for us.

Six of them did just that at the Oracle Cloud Analyst Summit in Half Moon Bay last month. In fact, the international SaaS customer panel was a bellwether on the day’s agenda.

Moderator Shawn Price (pictured left), senior vice president of Oracle’s Global Cloud Business Unit, said, “I firmly believe the most important voice in this market as we transition to the cloud is our customers’ voice and the role it plays in advocacy.”

The six individuals on the panel represented a broad section of companies who use Oracle Cloud applications. They talked about their journeys to the cloud, the benefits, Oracle’s innovation, cross-cloud integration, and the future.

You can view a webcast replay of the SaaS panel and hear from these six customers:

  • Panasonic, Rhiannon Ainge, Marketing Manager
  • SePro, Stephen Barkley, Manager – Information and Data Services
  • Ovation Brands, Patrick Benson, CIO
  • Solairus Aviation, Mark Dennen, CFO
  • CorePower Yoga, Kathryn Palin, HR and Payroll Administrator
  • Hardware.com, Paul Wignall, Group Head of Business Development

Oracle MAF - Managing Local Notifications

Oracle has been working really hard to keep adding new features to Oracle Mobile Application Framework. One of the new features in the Oracle MAF 2.1.1 release is Managing Local Notifications. In this...

Categories: DBA Blogs

Webcast - Process Cloud Service (PCS) Technical Overview

Rapidly design, automate and manage business processes in the cloud. This presentation provides technical detail, with a demo, of the latest Oracle Process Cloud...

Categories: DBA Blogs

Lab Report: Oracle Database on EMC XtremIO. A Compression Technology Case Study.

Kevin Closson - Tue, 2015-05-26 01:26

If you are interested in array-level data reduction services and how such technology mixes with Oracle Database application-level compression (such as Advanced Compression Option), I offer the link below to an EMC Lab Report on this very topic.

To read the entire Lab Report please click the following link:   Click Here.

The following is an excerpt from the Lab Report:

Executive Summary
The EMC XtremIO storage array offers powerful data reduction features. In addition to thin provisioning, XtremIO applies both deduplication and compression algorithms to blocks of data as they are ingested into the array. These features are always on and intrinsic to the array. There is no added licensing, and no tuning or configuration involved, when it comes to XtremIO data reduction.

Oracle Database also supports compression. The most common form of Oracle Database compression is the Advanced Compression Option—commonly referred to as ACO. With Oracle Database most “options” are separately licensed features, and ACO is one such option. As of the publication date of this Lab Report, ACO is licensed at $11,000 per processor core on the database host. Compressing Oracle Database blocks with ACO can offer benefits beyond simple storage savings. Blocks compressed with ACO remain compressed as they pass through the database host. In short, blocks compressed with ACO will hold more rows of data per block. This can be either a blessing or a curse. Allowing Oracle to store more rows per block has the positive benefit of caching more application data in main memory (i.e., the Oracle SGA buffer pool). On the other hand, compacting more data into each block often results in increased block contention.

Oracle offers tuning advice to address this contention in My Oracle Support note 1223705.1. However, the tuning recommendations for reducing block contention with ACO also lower the compression ratios. Oracle also warns users to expect higher CPU overhead with ACO, as per the following statement in the Oracle Database product documentation:

Compression technology uses CPU. Ensure that you have enough available CPU to handle the additional load.

Application vendors, such as SAP, also produce literature to further assist database administrators in making sensible choices about how and when to employ the Advanced Compression Option. The importance of understanding the possible performance impact of ACO is made quite clear in publications such as SAP Note 1436352, which states the following about SAP performance with ACO:

Overall system throughput is not negatively impacted and may improve. Should you experience very long runtimes (i.e. 5-10 times slower) for certain operations (like mass inserts in BW PSA or ODS tables/partitions) then you should set the event 10447 level 50 in the spfile/init.ora. This will reduce the overhead for insertion into compressed tables/partitions.

The SAP note offers further words of caution regarding transaction logging (a.k.a., redo) in the following quote:

Amount of redo data generated can be up to 30% higher

Oracle Database Administrators, with prior ACO experience, are largely aware of the trade-offs where ACO is concerned. Database Administrators who have customarily used ACO in their Oracle Database deployments may wish to continue to use ACO after adopting EMC XtremIO. For this reason Database Administrators are interested in learning how XtremIO compression and Advanced Compression Option interact.
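For readers who have never enabled ACO, it is a table-level (or tablespace-default) attribute rather than anything configured on the storage side. A minimal sketch, with hypothetical table names (the 11gR2 syntax is COMPRESS FOR OLTP; 12c renames it to ROW STORE COMPRESS ADVANCED):

-- hypothetical example: create an OLTP-compressed copy of a table
CREATE TABLE orders_aco COMPRESS FOR OLTP
AS SELECT * FROM orders;

-- or rebuild an existing table with OLTP compression
ALTER TABLE orders MOVE COMPRESS FOR OLTP;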

This Lab Report offers an analysis of space savings with and without ACO on XtremIO. In addition, a performance characterization of an OLTP workload manipulating the same application data in ACO and non-ACO tablespaces will be covered…please click the link above to continue reading…

 


Filed under: oracle

IT-centric notes on the future of health care

DBMS2 - Mon, 2015-05-25 23:02

It’s difficult to project the rate of IT change in health care, because:

  • Health care is suffused with technology — IT, medical device and biotech alike — and hence has the potential for rapid change. However, it is also the case that …
  • … health care is heavily bureaucratic, political and regulated.

Timing aside, it is clear that health care change will be drastic. The IT part of that starts with vastly comprehensive electronic health records, which will be accessible (in part or whole as the case may be) by patients, care givers, care payers and researchers alike. I expect elements of such records to include:

  • The human-generated part of what’s in ordinary paper health records today, but across a patient’s entire lifetime. This of course includes notes created by doctors and other care-givers.
  • Large amounts of machine-generated data, including:
    • The results of clinical tests. Continued innovation can be expected in testing, for reasons that include:
      • Most tests exploit electronic technology. Progress in electronics is intense.
      • Biomedical research is itself intense.
      • In particular, most research technologies (for example gene sequencing) can be made cheap enough over time to be affordable clinically.
    • The output of consumer health-monitoring devices — e.g. Fitbit and its successors. The buzzword here is “quantified self”, but what it boils down to is that every moment of our lives will be measured and recorded.

These vastly greater amounts of data cited above will allow for greatly changed analytics.

  • Right now, medical decisions are made based on research that looks at a few data points each for a specially-recruited sample of patients, then draws conclusions based on simplistic and questionable statistical methods.
  • More sophisticated analytic methods are commonly used, but almost always still to aid in the discovery and formation of hypotheses that will then be validated, if at all, using the bad old analytic techniques.
  • State of the art predictive modeling, applied to vastly more data, will surely yield greatly better results.

And so I believe that health care itself will be revolutionized.

  • Diagnosis will be much more accurate, pretty much across the board, except in those limited areas where it’s already excellent today.
  • Medication regimens will be much more personalized. (Pharma manufacturing may have to change greatly as a result.) So will other treatments. So will diet/fitness regimens.
  • The vulnerable (elderly, hospital patients) will be more accurately and comprehensively monitored. Also, their care will likely be aided by robotics.
  • Some of the same things will be true of infants and toddlers. (In other cases, they get such close attention today that I can’t imagine how it could be greatly increased. :) )

I believe that this will all happen because I believe that it will make health care vastly more successful. And if I’m right about that, no obstacles will be able to prevent it from coming into play — not cost (which will keep going down in a quasi-Moore’s-Law way), not bureaucratic inertia (although that will continue to slow things greatly), and not privacy fears (despite the challenges cited below).

So what are the IT implications of all this?

  • I already mentioned the need for new (or newly-used) kinds of predictive modeling.
  • Probably in association with those, event detection — which in many but not all cases will amount to anomaly detection — will be huge. If one goal is to let the elderly and ailing live independently, but receive help when it’s needed — well, recognizing when that help is needed will be crucial. Similar dynamics will occur in hospitals.
  • And in support of that, there will be great amount of monitoring, and hence strong demands upon sensors and recognition. Potentially, all five human senses will be mimicked, among others. These technologies will become even more important in health care if I’m right that robotics will play a big role.
  • Data quality will be a major challenge, especially in the doctors’-notes parts of health records. Reasons start:
    • Different medical professionals might evaluate the same situation differently; diagnosis is a craft rather than a dumb, repeatable skill.
    • If entries are selected from a predefined set of options, none may be a perfect match to the doctor’s actual opinion.
    • Doctors often say what’s needful to have their decisions (care, tests, etc.) approved, whether or not it precisely matches what they really think. Thus, there are significant incentives to enter bad data.
    • Free-text data is more central to health care than to many other application areas, and text data is inherently dirty.
    • Health records are decades later than many other applications in moving from paper to IT.
  • Data integration problems will also be and indeed already are huge, because different health care providers have addressed the tough challenges of record-keeping in different ways.

As for data management — well, almost everything discussed in this blog could come into play.

  • A person’s entire medical record resembles the kind of mess increasingly often dumped these days into NoSQL — typically MongoDB, Cassandra, or HBase.
  • There are plenty of business-transaction records in the mix, of the kind that have long been managed by RDBMS.
  • There are a whole lot of diverse machines in the mix, and managing the data to keep such a menagerie running is commonly the job of Splunk or streaming-enhanced Hadoop.
  • There’s a lot of free text in medical records. Also images, video and so on.
  • Since graph analytics is used in research today, it might at some point make its way into clinical use.

Finally, let me say:

  • Data-driven medicine cannot live up to its potential unless researchers can investigate data sets comprising private information of large numbers of people.
  • Researchers will not have the appropriate permissions unless privacy law moves toward a basis in data use, rather than exclusively regulating data possession.


Categories: Other

Temp Table Transformation Cardinality Estimates - 1

Randolf Geist - Mon, 2015-05-25 13:26
Having recently published two notes about the Temp Table Transformation, highlighting the heuristics-based decision and other weaknesses, for example regarding the projection of columns, it's time to publish some more notes about it.

The transformation can also have significant impact on cardinality estimates, both join and single table cardinality.

Looking at the difference in the join cardinality estimates of the following simple example:

create table t1
as
select
rownum as id
, mod(rownum, 10) + 1 as id2
, rpad('x', 100) as filler
from
dual
connect by
level <= 1000
;

exec dbms_stats.gather_table_stats(null, 't1')

alter session set tracefile_identifier = 'temp_trans_join_card';

alter session set events '10053 trace name context forever, level 1';

explain plan for
with
cte as (
select /* inline */ id + 1 as id from t1 t
where 1 = 1
)
select /*+
--opt_estimate(@"SEL$2" join("A"@"SEL$2" "B"@"SEL$2") rows=1000)
no_merge(a) no_merge(b)
*/ * from cte a, cte b
where a.id = b.id
;

alter session set events '10053 trace name context off';
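
As a side note, if you want to reproduce this and inspect the resulting 10053 trace yourself, the location of the trace file (whose name will include the tracefile_identifier set above) can be queried from the same session, for example:

-- Locate the trace file generated for the current session; the file name
-- will include the 'temp_trans_join_card' identifier set above.
select value as trace_file
from   v$diag_info
where  name = 'Default Trace File';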

-- 11.2.0.x Plan with TEMP transformation
--------------------------------------------------------------------------------
| Id  | Operation                  | Name                      | Rows  | Bytes |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                           |     1 |    26 |
|   1 |  TEMP TABLE TRANSFORMATION |                           |       |       |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D660C_27269C |       |       |
|   3 |    TABLE ACCESS FULL       | T1                        |  1000 |  4000 |
|*  4 |   HASH JOIN                |                           |     1 |    26 |
|   5 |    VIEW                    |                           |  1000 | 13000 |
|   6 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D660C_27269C |  1000 |  4000 |
|   7 |    VIEW                    |                           |  1000 | 13000 |
|   8 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D660C_27269C |  1000 |  4000 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

4 - access("A"."ID"="B"."ID")

-- 11.2.0.x Plan with INLINE hint
----------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes |
----------------------------------------------------
|   0 | SELECT STATEMENT    |      | 10000 |   253K|
|*  1 |  HASH JOIN          |      | 10000 |   253K|
|   2 |   VIEW              |      |  1000 | 13000 |
|   3 |    TABLE ACCESS FULL| T1   |  1000 |  4000 |
|   4 |   VIEW              |      |  1000 | 13000 |
|   5 |    TABLE ACCESS FULL| T1   |  1000 |  4000 |
----------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - access("A"."ID"="B"."ID")
the following becomes obvious:

- There are vastly different cardinality estimates possible depending on whether the transformation gets used or not. In principle, due to the NO_MERGE hints used, the transformation shouldn't have any impact on the estimates, but it does.

- Looking at the optimizer trace file, some information seems to get lost when the transformation gets used, in particular column-related statistics.

- This information loss, like in the example here, can lead to join cardinality estimates of 0 (rounded to 1 in the plan output).

- And even worse, at least in pre-12c versions, such a 0 cardinality estimate can't be corrected via OPT_ESTIMATE hints, since somehow the correction gets ignored/lost although it is mentioned in the optimizer trace:

11.2.0.1:
Column (#1): ID(
AvgLen: 22 NDV: 1 Nulls: 0 Density: 0.000000
Column (#1): ID(
AvgLen: 22 NDV: 1 Nulls: 0 Density: 0.000000
Join Card: 0.000000 = = outer (1000.000000) * inner (1000.000000) * sel (0.000000)
>> Join Card adjusted from 0.000000 to: 1000.000000, prelen=2
Adjusted Join Cards: adjRatio=1.00 cardHjSmj=0.000000 cardHjSmjNPF=0.000000 cardNlj=0.000000 cardNSQ=1000.000000 cardNSQ_na=0.000000
Join Card - Rounded: 1 Computed: 0.00
The behaviour regarding the OPT_ESTIMATE hint changes in 12c, but then 12c introduces other oddities that are not there in pre-12c versions - have a look at the "Query Block" section when using the INLINE variant of the query: there are two identical fully qualified object names, clearly a bug, which makes hinting via global hint syntax impossible for that query block.

Although my simple example here can be corrected via extended statistics on the join column expression used in the CTE query, my point is that, depending on whether the transformation gets used or not, vastly different and extreme cardinality estimates are possible - and those extreme cases can't even be corrected in pre-12c versions.
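
For completeness, a sketch of that correction for the simple example above - generic DBMS_STATS functionality, nothing specific to the Temp Table Transformation - would be to create an extension on the (ID + 1) expression and re-gather statistics, so that the optimizer has column statistics for the expression itself:

-- Sketch: create extended statistics on the (ID + 1) expression used in the
-- CTE, then re-gather statistics so the extension gets NDV / density figures.
select dbms_stats.create_extended_stats(null, 't1', '(id + 1)') from dual;

exec dbms_stats.gather_table_stats(null, 't1')

That addresses this particular, constructed case, but it obviously doesn't change the general point about the transformation.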

For example, I recently had a real-life case where two columns were joined that had a significant number of NULL values, one of them coming from a Temp Table Transformation row source. Without the transformation the join cardinality estimates were reasonable, but the transformation again led to such a 0 cardinality estimate (that couldn't be corrected via a correctly specified OPT_ESTIMATE hint), ruining the whole plan.

Oracle BI Publisher 11.1.1.9.0 is available !!!

Tim Dexter - Mon, 2015-05-25 13:12

Hi Everyone,

I am happy to announce that Oracle BI Publisher 11.1.1.9.0 has been released, although I admit this is almost week-old news now, and I am sure some of you may have already learned it from the BI Publisher homepage on OTN or from other sources. My sincere apologies for the delay here.

Thank you, Tim, for helping me get back into this blog membership and for being patient enough to let me put this word out. My activity on the blog has been so infrequent in the past that I am as good as a new member here. When I tried to log in last week, I was greeted with this message -

"Sorry, you do not have the privileges necessary to access the page you requested. This system is available to Oracle Employees only. Oracle Employees who would like to request a blog account should click here."

So I had to start all over and have fresh access created. Thanks to Tim and Phil from IT support for helping me with this. I will now make sure to use this space more often and share more features, tips and tricks.

Oracle BI Publisher 11.1.1.9.0 went GA on May 19th, and you can get the download, documentation, certification matrix and release notes from the BI Publisher home page on OTN, which also has a quick snapshot of the new features in this release. The download is also available from the Oracle Software Delivery Cloud site. The documentation page has been given a fresh new structure: in the left navigation you will notice "Tasks" and "Books" as two menu items. "Tasks" provides a quick reference to role-based activities under sub-menu items such as "View & Publish", "Design Reports", etc., while "Books" takes you to the complete set of books, where you can select the Administrator's Guide, Developer's Guide, Data Modeling Guide, Report Designer's Guide or User's Guide for BI Publisher. If you are looking for a feature and do not find information under "Tasks", check under "Books" or use the search option.

Stay tuned for more updates on new features. Have a good time exploring them!

Categories: BI & Warehousing

IoT Hackathon Field Report: Mexico Edition

Oracle AppsLab - Mon, 2015-05-25 12:02

I recently ventured down to Mexico with some of my fellow AppsLab members, Luis Galeana, Tony Orciuoli, and Osvaldo Villagrana, to participate in an Internet of Things (IoT) hackathon organized by Laurie Pattison’s (@lsptahoe) Apps UX Innovation Events team.

Being the lone non-developer, I wasn’t sure how much I would be able to contribute—but I had done some research pertaining to our use case, so I felt I had at least that much to offer.

Our rather illustrious use case pertained to a perennial workplace problem—lines to use the bathroom. In MDC, there is a preponderance of men, and so apparently waiting can be an issue. Some of my research has found that elsewhere, where there are more women than men, lines to use the women’s bathroom in the office can be a serious annoyance.

Thus was born what was originally playfully titled Bathroom Management (BM), though we ended up with Presence, which would work more generally as a presence management system that could also handle conference room reservations, among other things.

I had never been part of a hackathon before, but I definitely discovered the appeal. As a lover of deadlines, and given my own experience that night is definitely the best time for coding, it seemed just right for this sort of thing. With free snacks and beverages, food carts for lunch and dinner, and a beautiful view from the 9th floor of the MDC office, it was an excellent setting.

I was able to help intermittently with thinking through some of the logic of our scheduling system, and with our pitch at the end, so I did feel I added something, even if the lion’s share of the work was done by the other three. Being a two-day hackathon, we had one late night, which I stuck around for, and ended up reading about and playing with Python, in the hopes it might come in handy. It didn’t, but there’s always next time.

Our presentation of Presence garnered some good laughs, which we didn’t quite expect, but at least everyone was engaged. We had a great demo showing our scheduling system for bathroom stalls, which included proximity sensors, sounds, and displays in the stall, and a web interface for scheduling, as well as IM, phone, and watch notifications when the stall you reserved becomes free.

We came in third, after two other solid entries, and took home the People’s Choice award, perhaps because our solution filled a real need in the office! I did learn a lot from the other winners, particularly on how we could have pitched it better to highlight the enterprise applicability. So again, there’s always next time.

All in all, I found it a highly rewarding experience, and I hope I have another chance to do it again in the future.

A SharePoint - Business Analyst? What for?

Yann Neuhaus - Mon, 2015-05-25 08:04
Business analysis is becoming a widely accepted and valued function in organizations today, particularly those that are looking for new ways to enhance their information systems.
Good business analysis allows us to deal effectively with complexity, which in turn enables creativity and innovation.

Quick Reminder of What SharePoint Is

Microsoft SharePoint is a robust collaboration platform that empowers people to work together. It is a CMS used to share, organize, discover, build and manage projects and data.
In short, it is a collaboration tool.

 

Challenges with SharePoint Implementations

A SharePoint Business Analyst has several challenges to address:

  • Because of SharePoint's huge range of capabilities, users tend to think it can do anything, so the BA has to prioritize their needs.
  • To achieve this, the BA has to involve the business and include the right people (departments, communication, SharePoint support team, sponsors, ...) in order to implement SharePoint successfully.
  • Understanding the users' requirements is one of the biggest parts of the job: the BA has to translate the information given into "IT/SharePoint terms". Without this understanding, you will most often hear the statement: "SharePoint? Yes, we have it, but we don't use it."
  • Defining the business and stakeholder needs and perspectives is the key! Perspectives can differ greatly depending on the user's role: an IT person thinks everyone wants "wikis", records managers believe everything should be classified as a record, a marketing person thinks that if it sparkles people will use it, and others will always stick to folders. It really depends on the user.
  • User adoption: each organization is different and, as they say, "change is hard", which creates complexity. The Business Analyst has to plan and communicate change in order to manage each step in the best possible way.
What is the SharePoint Business Analyst role?

The Business Analyst should be one of the most critical functions in your organization. The role of the business analyst is to:

  • Understand what the business does, how it operates
  • Examine existing business processes
  • Identify gaps in processes, opportunities for improvements and automation
  • Capture user requirements (URS), create mockups
  • Generate technical requirements, proof of concept solutions
  • Help implement the new processes, features and tools
  • Document improvements, measure, repeat the process

A Business Analyst must also understand the capabilities and constraints of SharePoint and be aware of the latest versions, products and features; this person must have solid business analysis competencies.

CONCLUSION

By defining the problem, you will build the right solution, plain and simple. That's why the Business Analyst function is so important for SharePoint project management.