Feed aggregator

Subquery Factoring (10)

Jonathan Lewis - Mon, 2015-07-27 06:26

What prompted me to write my previous note about subquerying was an upgrade to 12c, and a check that a few critical queries would not do something nasty on the upgrade. As ever, it’s always interesting how many little oddities you can discover while looking closely at some little detail of how the optimizer works. Here’s an oddity that came up in the course of my investigations in 12.1.0.2 – first some sample data:


create table t1
nologging
as
select * from all_objects;

create index t1_i1 on t1(owner) compress nologging;

begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1 for columns owner size 254'
        );
end;
/

The all_objects view is convenient as a tool for modelling what I wanted to do since it has a column with a small number of distinct values and an extreme skew across those values. Here’s a slightly weird query that shows an odd costing effect:


with v1 as (
        select /*+ inline */ owner from t1 where owner > 'A'
)
select count(*) from v1 where owner = 'SYS'
union all
select count(*) from v1 where owner = 'SYSTEM'
;

Since the query uses the factored subquery twice and there’s a predicate on the subquery definition, I expect to see materialization – and that’s what happens (even though I’ve engineered the query so that materialization is more expensive than executing inline). Here are the two plans from 12.1.0.2 (the same pattern appears in 11.2.0.4, though the costs are a little less across the board):


=======================
Unhinted (materializes)
=======================

---------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                            |     2 |   132 |    25  (20)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION |                            |       |       |            |          |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D661B_876C2CB |       |       |            |          |
|*  3 |    INDEX FAST FULL SCAN    | T1_I1                      | 85084 |   498K|    21  (15)| 00:00:01 |
|   4 |   UNION-ALL                |                            |       |       |            |          |
|   5 |    SORT AGGREGATE          |                            |     1 |    66 |            |          |
|*  6 |     VIEW                   |                            | 85084 |  5483K|    13  (24)| 00:00:01 |
|   7 |      TABLE ACCESS FULL     | SYS_TEMP_0FD9D661B_876C2CB | 85084 |   498K|    13  (24)| 00:00:01 |
|   8 |    SORT AGGREGATE          |                            |     1 |    66 |            |          |
|*  9 |     VIEW                   |                            | 85084 |  5483K|    13  (24)| 00:00:01 |
|  10 |      TABLE ACCESS FULL     | SYS_TEMP_0FD9D661B_876C2CB | 85084 |   498K|    13  (24)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

=============
Forced inline
=============

--------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |     2 |    12 |    22  (14)| 00:00:01 |
|   1 |  UNION-ALL             |       |       |       |            |          |
|   2 |   SORT AGGREGATE       |       |     1 |     6 |            |          |
|*  3 |    INDEX FAST FULL SCAN| T1_I1 | 38784 |   227K|    21  (15)| 00:00:01 |
|   4 |   SORT AGGREGATE       |       |     1 |     6 |            |          |
|*  5 |    INDEX RANGE SCAN    | T1_I1 |   551 |  3306 |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------

I’m not surprised that the optimizer materialized the subquery – as I pointed out in my previous article, the choice seems to be rule-based (heuristic) rather than cost-based. What surprises me is that the cost for the default plan is not self-consistent – the optimizer seems to have lost the cost of generating the temporary table. The cost of the materialized query plan looks as if it ought to be 21 + 13 + 13 = 47. Even if the optimizer were smart enough to assume that the temporary table would be in the cache for the second scan (and therefore virtually free to access) we ought to see a cost of 21 + 13 = 34. As it is we have a cost of 25, which is 13 + 13 (or, if you check the 10053 trace file, 12.65 + 12.65, rounded).
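If you want to see where figures like the 12.65 come from on your own system, the simplest route is the 10053 optimizer trace mentioned above – a minimal sketch (the trace file name and location depend on your diagnostic settings):

alter session set events '10053 trace name context forever, level 1';

explain plan for
with v1 as (
        select owner from t1 where owner > 'A'
)
select count(*) from v1 where owner = 'SYS'
union all
select count(*) from v1 where owner = 'SYSTEM';

alter session set events '10053 trace name context off';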

Since the choice to materialize doesn’t seem to be cost-based (at present) this doesn’t really matter – but it’s always nice to see, and be able to understand, self-consistent figures in an execution plan.

Footnote

It is worth pointing out as a side note that materialization can actually be more expensive than running in-line, even for very simple examples. Subquery factoring seems to have become more robust over recent releases in terms of the consistency of execution plans when the subqueries are put back inline, but you still need to think a little bit before rewriting a query for cosmetic (i.e. totally valid “readability”) reasons, just to check whether the resulting query is going to produce an unexpected, and unexpectedly expensive, materialization.
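If you want to check both behaviours on your own data before committing to a rewrite, the counterpart to the inline hint used above is the (undocumented, but widely used) materialize hint – a sketch only, since the relative costs will depend on your data:

with v1 as (
        select /*+ materialize */ owner from t1 where owner > 'A'
)
select count(*) from v1 where owner = 'SYS'
union all
select count(*) from v1 where owner = 'SYSTEM';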


Password Manager Woes

Tim Hall - Mon, 2015-07-27 05:57

I read a post this morning and it hit a raw nerve or two.

As followers of the blog will know, I use KeePass for all my work and personal passwords. I’ve come across a number of sites that prevent pasting passwords for “security reasons” and it drives me nuts. Fortunately, most of them can’t prevent the auto-type feature, so at least that’s something…

This attitude goes beyond websites, though. The policy at my current employer is that all passwords should be strong and unique, but you are not allowed to use a password manager. Why? Because if someone installs a key-logger on your PC and gets the credentials for the password manager, they will have access to all your passwords. WTF? I think this attitude is moronic. I am not capable of remembering hundreds of unique, strong passwords. Using patterns is predictable, so that is also a fail.

I have seen the way some of my colleagues (past and present) deal with passwords and it is farcical.

  • One password to rule them all.
  • Kept in a text/word document on the desktop.
  • Kept in a text/word document on a network drive.
  • Kept on a piece of paper in their desk drawer, which is never locked.
  • Freely shared amongst colleagues, so they can “test something using my account”.

For someone to step in and say we can’t use a tool that generates random, strong, completely unpredictable passwords and stores them in an encrypted format makes my blood boil.

Flippin’ morons!

Cheers

Tim…


Deadline Approaching: Nominations for Innovation Awards - 2015

WebCenter Team - Mon, 2015-07-27 05:00

Don't delay! The deadline to submit nominations for the most innovative use of technologies such as Oracle WebCenter and Oracle Business Process Management (BPM), Platform as a Service (PaaS) solutions like Oracle Documents Cloud and Oracle Process Cloud, and other Middleware solutions is fast approaching!

Is your organization using Oracle Fusion Middleware to deliver unique business value? These awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The 2015 awards will be presented during Oracle OpenWorld 2015 (October 26-October 29) in San Francisco at a grand red carpet ceremony.

To share your use of Oracle Fusion Middleware solutions and how they help your organization drive business innovation, please read on to find out more information on the nomination process.

Customers may submit separate nomination forms for multiple categories; the 2015 Fusion Middleware categories are as follows:

Winners are selected by a panel of internal and external judges that score each entry across multiple different scoring categories. The entry with the highest aggregate score wins an award.

  • Nomination deadline: 5:00 p.m. (PT), July 31, 2015.
  • This solution should be in production or in active pilot phase.

Please send any questions regarding the 2015 award process to: Innovation-Middleware_us@oracle.com.

  1. Nomination Forms must be completed in their entirety to be considered a valid entry.
  2. Oracle may contact you to collect additional details, if necessary.
  3. If you are a partner or an Oracle representative submitting on behalf of the customer, please ensure the customer has approved this prior to final submission.

Learn it or don’t. The choice is yours.

Tim Hall - Mon, 2015-07-27 03:03

Technology is scary for a lot of people, but the biggest problem I see out there is denial (It's not just a river in Africa! :) ).

Newbies

For people who are new to technology, the biggest problem I see is they refuse to actually read what is on the screen. I’m not talking about those stupid End User License Agreement (EULA) screens that nobody reads. I’m talking about basic instructions. If a screen says,

“Enter your username and password, then click the Login button.”

I don’t think that should be a taxing problem for anyone, but for the less computer literate, if something doesn’t go *exactly* as they expect, they go into total meltdown. People just have to take a deep breath and read what is in front of them.

Techies

The situation is not always much different for many techies when they are faced with learning new skills. All those lessons you learned in your core skill-set seem to go out of the window. Things like:

  • Read the manuals.
  • Check the log files.
  • Check the vendor support website.
  • Google it.
  • Raise a support call.

Instead, people throw their toys out of the pram and decide the product/feature is rubbish and give up.

This is exactly what happened to me when I started playing with the Multitenant option. I was in total denial for ages. When I finally made the decision to sit down and figure it out it wasn’t so bad. It was just different to what I was used to.

Learning is not a spectator sport!

(Shameless use of the title of Connor McDonald’s blog, which is in itself credited to D. Blocher.)

Learning stuff is all about time. The optimizer fairy didn’t visit Jonathan Lewis one day and tell him “the secret”. If you don’t spend the time, or you give up at the first hurdle, you are never going to get anywhere. You will probably start to make excuses. I’m too old. It’s too complicated. I’ve always been rubbish at learning new stuff. I don’t have time. My company doesn’t support me. We won’t use it for another 3 years, so I’ll leave it until later. The list is endless.

Next time you are sitting in front of the TV watching some trash, ask yourself what those “smart kids” are doing at that moment.

I don’t care what you do with your life. Your choices are no more or less valid than mine. Just don’t fool yourself. Be honest. If you wanted to learn it you would. The fact you haven’t means you really can’t be bothered. :)

Cheers

Tim…


Microsoft Office 2016 Crack Download

Jithin Sarath - Sun, 2015-07-26 23:37
Office for Windows 10 will be released in two flavours: one for smaller (8-inch or less) tablets and phones, and the other for larger tablets, touchscreen hybrids and so on. The two versions are built by the same team and offer similar functionality; the differences are essentially a matter of tuning the UI to suit the dimensions of each device. Broadly speaking, the features will also be the same across different platforms, whether you're running on Android, iOS or Windows.

Microsoft has said that Office for Windows will be limited to four applications (Word, Excel, PowerPoint, OneNote), on the grounds that those are the most essential applications, which the company wanted to prioritise and maximise the quality of. Redmond's Richard Ellis, Office chief in the UK, recently told us: "When the need for other applications is made known to us by customers, we will listen and make plans to develop further applications for Office for Windows."

PowerPoint has a useful range of editing tools, but the preview version we played with had shortcomings in terms of supported file formats, and the Presenter View was not as useful as we'd hoped (it isn't really full-screen, for one thing, with the title bar at the top always present and the Windows taskbar eating up display space as well).

In general, at this point the Office for Windows 10 applications look to offer a reasonable selection of features with a pleasant touch-friendly interface, though that UI takes up a lot of the screen in landscape mode.

Excel has benefited from a substantial makeover, with additions including snap capabilities and smart scrolling that make tapping around your spreadsheet data a simpler process. It has the same touch-friendly interface, but it's not as stripped down as Word, and it boasts a status bar that lets you switch between sheets in your workbook and view the results of basic formulas for selected cells.
Categories: DBA Blogs

Installing node-oracledb on OS X with Oracle Instant Client 11.2.0.4

Christopher Jones - Sun, 2015-07-26 22:17

I've been hacking an Apple OS X shell script to install node-oracledb. You tell it where your Instant Client libraries and headers ZIP packages are. It then installs node-oracledb, resulting in an instantclient directory and a node_modules directory. This automates the instructions Node-oracledb Installation on OS X with Instant Client.

My osxinstall.sh script can be seen here.

I was investigating how to avoid needing to set DYLD_LIBRARY_PATH. I wanted to find out how to replicate the use of rpath, which is available for node-oracledb on Linux. A standard install on OS X needs DYLD_LIBRARY_PATH set, otherwise Node.js will fail with the error:

   cjones@cjones-mac:~/n$ node select1.js

   /Users/cjones/n/node_modules/oracledb/lib/oracledb.js:28
       throw err;
	     ^
   Error: dlopen(/Users/cjones/n/node_modules/oracledb/build/Release/oracledb.node, 1):
           Library not loaded: /ade/b/3071542110/oracle/rdbms/lib/libclntsh.dylib.11.1
     Referenced from: /Users/cjones/n/node_modules/oracledb/build/Release/oracledb.node
     Reason: image not found
       at Module.load (module.js:356:32)
       at Function.Module._load (module.js:312:12)
       at Module.require (module.js:364:17)
       at require (module.js:380:17)
       at Object.<anonymous> (/Users/cjones/n/node_modules/oracledb/lib/oracledb.js:23:15)
       at Module._compile (module.js:456:26)
       at Object.Module._extensions..js (module.js:474:10)
       at Module.load (module.js:356:32)
       at Function.Module._load (module.js:312:12)
       at Module.require (module.js:364:17)
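A conventional workaround, and the one this script is trying to make unnecessary, is simply to point DYLD_LIBRARY_PATH at the Instant Client libraries before starting Node.js. A minimal sketch, assuming the instantclient directory created by the script sits under ~/n alongside node_modules:

   export DYLD_LIBRARY_PATH=$HOME/n/instantclient
   node select1.js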

So, I was playing with osxinstall.sh to see how to circumvent this. Before running osxinstall.sh, edit it and set the paths to where the Instant Client 11.2.0.4 'basic' and 'sdk' ZIP files are located on your filesystem, see IC_BASIC_ZIP and IC_SDK_ZIP. (You can download Instant Client from OTN. Use the 64-bit packages). You also specify the target application directory you are using, see TARGET_DIR. This is where the components are installed into. Update https_proxy if you are behind a firewall, otherwise comment it out.

If you have various node_modules directories around, then npm might end up installing oracledb in an unexpected place and the script will error.

The key bit of osxinstall.sh that I was interested in is:

    # For Oracle Instant Client 11.2.0.4: these are the default paths we will change
    IC_DEF1=/ade/b/3071542110/oracle/rdbms/lib
    IC_DEF2=/ade/dosulliv_ldapmac/oracle/ldap/lib

    . . .

    # Warning: work in progress - may not be optimal
    chmod 755 $OCI_LIB_DIR/*dylib $OCI_LIB_DIR/*dylib.11.1
    install_name_tool -id libclntsh.dylib.11.1 $OCI_LIB_DIR/libclntsh.dylib.11.1
    install_name_tool -change $IC_DEF2/libnnz11.dylib $OCI_LIB_DIR/libnnz11.dylib \
                 $OCI_LIB_DIR/libclntsh.dylib.11.1
    install_name_tool -id libnnz11.dylib $OCI_LIB_DIR/libnnz11.dylib
    install_name_tool -change $IC_DEF1/libclntsh.dylib.11.1 \
                 $OCI_LIB_DIR/libclntsh.dylib.11.1 $OCI_LIB_DIR/libociei.dylib
    install_name_tool -change $IC_DEF1/libclntsh.dylib.11.1 \
                 $OCI_LIB_DIR/libclntsh.dylib.11.1 $NODE_ORACLEDB_LIB
    chmod 555 $OCI_LIB_DIR/*dylib $OCI_LIB_DIR/*dylib.11.1

This changes the library install and identification names using install_name_tool. Note this tool cannot allocate more space for path names than currently exists. My code is a work in progress; I may work out a better way, perhaps using libtool. Comments & suggestions welcome.
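To verify the result of those install_name_tool edits, otool can list the install names each binary references; if the changes worked, the original /ade/... build paths should have been replaced by the local instantclient paths (the paths below assume the directory layout the script creates under the target directory):

    otool -L node_modules/oracledb/build/Release/oracledb.node
    otool -L instantclient/libclntsh.dylib.11.1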

The script does more than most people probably need. In future even I might only run parts extracted from it.

If you are new to node-oracledb, check out its install and API documentation on GitHub. You may also be interested in reading The Easiest Way to Install Oracle Database on Mac OS X.

12c Parallel Execution New Features: Parallel FILTER Subquery Evaluation - Part 1: Introduction

Randolf Geist - Sun, 2015-07-26 11:11
12c introduces another interesting new Parallel Execution feature - the parallel evaluation of FILTER subqueries. In pre-12c FILTER subqueries always had to be evaluated in the Query Coordinator. This had several consequences, in particular the data driving the FILTER subquery always had to flow through the Query Coordinator, and hence represented a forced serial execution part of a parallel execution plan. This limitation also meant that depending on the overall plan shape the parallel plan was possibly decomposed into multiple DFO trees, leading to other side effects I've outlined in some of my other publications already.

In 12c now the FILTER subquery can be evaluated in the Parallel Slaves, and the driving data no longer needs to be processed in the Query Coordinator. However, the resulting plan shape can be a little bit confusing. Let's have a look at a simple example:

create table t_1
compress
as
select  /*+ use_nl(a b) */
        rownum as id
      , rpad('x', 100) as filler
from
        (select /*+ cardinality(1e5) */ * from dual connect by level <= 1e5) a,
        (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_1', method_opt=>'for all columns size 1')

alter table t_1 parallel 4;

create index t_1_idx on t_1 (id) invisible;

explain plan for
select /*+
--optimizer_features_enable('11.2.0.4')
*/ count(*) from
t_1 t
where exists (select /*+ no_unnest */ null from t_1 where t.id = t_1.id);

-- 11.2.0.4 plan shape with index invisible
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 440M (2)| 04:47:04 | | | |
| 1 | SORT AGGREGATE | | 1 | 6 | | | | | |
|* 2 | FILTER | | | | | | | | |
| 3 | PX COORDINATOR | | | | | | | | |
| 4 | PX SEND QC (RANDOM)| :TQ20000 | 2000K| 11M| 221 (1)| 00:00:01 | Q2,00 | P->S | QC (RAND) |
| 5 | PX BLOCK ITERATOR | | 2000K| 11M| 221 (1)| 00:00:01 | Q2,00 | PCWC | |
| 6 | TABLE ACCESS FULL| T_1 | 2000K| 11M| 221 (1)| 00:00:01 | Q2,00 | PCWP | |
| 7 | PX COORDINATOR | | | | | | | | |
| 8 | PX SEND QC (RANDOM)| :TQ10000 | 1 | 6 | 222 (2)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 9 | PX BLOCK ITERATOR | | 1 | 6 | 222 (2)| 00:00:01 | Q1,00 | PCWC | |
|* 10 | TABLE ACCESS FULL| T_1 | 1 | 6 | 222 (2)| 00:00:01 | Q1,00 | PCWP | |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T_1" "T_1" WHERE "T_1"."ID"=:B1))
10 - filter("T_1"."ID"=:B1)

-- 12.1.0.2 plan shape with index invisible
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 1588M (2)| 17:14:09 | | | |
| 1 | SORT AGGREGATE | | 1 | 6 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 6 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 6 | | | Q1,00 | PCWP | |
|* 5 | FILTER | | | | | | Q1,00 | PCWC | |
| 6 | PX BLOCK ITERATOR | | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T_1 | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL | T_1 | 1 | 6 | 798 (2)| 00:00:01 | | | |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T_1" "T_1" WHERE "T_1"."ID"=:B1))
8 - filter("T_1"."ID"=:B1)

-- 11.2.0.4 plan shape with index visible
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 5973K (1)| 00:03:54 | | | |
| 1 | SORT AGGREGATE | | 1 | 6 | | | | | |
|* 2 | FILTER | | | | | | | | |
| 3 | PX COORDINATOR | | | | | | | | |
| 4 | PX SEND QC (RANDOM)| :TQ10000 | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | P->S | QC (RAND) |
| 5 | PX BLOCK ITERATOR | | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 6 | TABLE ACCESS FULL| T_1 | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
|* 7 | INDEX RANGE SCAN | T_1_IDX | 1 | 6 | 3 (0)| 00:00:01 | | | |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T_1" "T_1" WHERE "T_1"."ID"=:B1))
7 - access("T_1"."ID"=:B1)

-- 12.1.0.2 plan shape with index visible
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 5973K (1)| 00:03:54 | | | |
| 1 | SORT AGGREGATE | | 1 | 6 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 6 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 6 | | | Q1,00 | PCWP | |
|* 5 | FILTER | | | | | | Q1,00 | PCWC | |
| 6 | PX BLOCK ITERATOR | | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T_1 | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
|* 8 | INDEX RANGE SCAN | T_1_IDX | 1 | 6 | 3 (0)| 00:00:01 | | | |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "T_1" "T_1" WHERE "T_1"."ID"=:B1))
8 - access("T_1"."ID"=:B1)

I've included two variations of the setup, one without available index for evaluating the FILTER subquery and one with index.

The pre-12c plan shape without index makes the former limitation particularly obvious: the FILTER operator is above the PX COORDINATOR and marked serial, and the table scan in the FILTER subquery gets parallelized as a separate DFO tree (indicated, among other things, by the two PX COORDINATOR operators). This means that each time this separate DFO tree starts, a separate set of Parallel Slaves will be allocated/deallocated, possibly adding a lot of overhead to a probably already inefficient execution plan - assuming the FILTER subquery needs to be evaluated many times.

In 12c the FILTER operator is marked parallel and the need for a separate DFO tree is gone. What might be confusing with this plan shape is that the operations of the FILTER subquery are not marked parallel. In my opinion this is misleading and they should actually be marked parallel, because at runtime the operations will be performed by the Parallel Slaves, and in the case of a Full Table Scan each slave will run the entire full table scan (so there's no PX ITERATOR dividing the scan into chunks / granules). This is comparable to what happens when a parallel Nested Loop join runs or the new PQ_REPLICATE feature gets used - and in those cases the operations are marked parallel:

-- 11.2.0.4 / 12.1.0.2 plan shape with index invisible
-- and subquery unnested using NL SEMI join
-----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 12 | 442M (2)| 04:48:03 | | | |
| 1 | SORT AGGREGATE | | 1 | 12 | | | | | |
| 2 | PX COORDINATOR | | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | 12 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | 12 | | | Q1,00 | PCWP | |
| 5 | NESTED LOOPS SEMI | | 2000K| 22M| 442M (2)| 04:48:03 | Q1,00 | PCWP | |
| 6 | PX BLOCK ITERATOR | | | | | | Q1,00 | PCWC | |
| 7 | TABLE ACCESS FULL| T_1 | 2000K| 11M| 221 (1)| 00:00:01 | Q1,00 | PCWP | |
|* 8 | TABLE ACCESS FULL | T_1 | 2000K| 11M| 796 (2)| 00:00:01 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

8 - filter("T"."ID"="T_1"."ID")

Summary
So the operators of the FILTER subquery can now run in the slaves, which is the main point of this feature, even though it is represented in a confusing way in the execution plan. Avoiding the potential decomposition into multiple DFO trees is another possible side effect. Decreased query duration should be possible if the evaluation of the FILTER subquery requires significant time and can now run in the Parallel Slaves instead of serially through the Query Coordinator.

Note that depending on the plan shape and SQL features used, it's still possible that 12c reverts to the old serial FILTER subquery evaluation plan shape, so the new feature isn't always used.

There is more to say about this feature. In the next part of this series I'll focus on the different distribution methods possible with the new parallel FILTER operator - there is a new PQ_FILTER hint that allows you to control the distribution, but there are also some interesting points to make about how the optimizer seems to choose the distribution method automatically. In the examples shown here there's no separate distribution for the FILTER, by the way, but this can look different, as I'll show in the next part.
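The PQ_FILTER hint itself will be covered properly in that next part; purely as a sketch of the syntax (the hint accepts SERIAL, NONE, HASH or RANDOM as its argument in 12c, and the value shown here is illustrative only):

select  /*+ pq_filter(none) */
        count(*)
from
        t_1 t
where   exists (select /*+ no_unnest */ null from t_1 where t.id = t_1.id);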

Taking a Closer Look at Knowledge Modules in ODI12c – Component-Style and Multi-Connect KMs

Rittman Mead Consulting - Sun, 2015-07-26 10:19

Another question that came up from the ODI12c Bootcamp Course I’m delivering for a client in London at the moment is how to choose between the different knowledge modules that come with ODI12c. What with the choice now between template-style KMs and the new component-style KMs, the new option of multi-connect KMs, and the general question around which KM you pick within a KM type when building a table mapping, I thought it’d be interesting to take a closer look at how knowledge modules work with ODI12c and how you go about making the right choice of KM when creating a mapping.

As a quick primer, Oracle Data Integrator up until the recent 12c release had six types of knowledge module you could use in a data mapping:

[screenshot]

They were:

  • Load Knowledge Modules, for loading source data out of the source database server and into a staging table typically on the target database platform
  • Integrate Knowledge Modules, for taking that staging data and integrating (inserting, updating, merging etc) it into the target table
  • Reverse Knowledge Modules, for reverse-engineering the table metadata from a source system
  • Check Knowledge Modules, for performing data quality checks on source and target tables
  • Journalise Knowledge Modules, for setting up change data capture on a source table or table set
  • Service Knowledge Modules, for exposing tables or other datastores as CRUD-type web services

Using ODI11g as an example, when you created a new mapping you selected an LKM for extracting data out of your source database, an IKM for integrating the results into the target table, and optionally a CKM or JKM if you needed to run data quality checks or use table journalization (CDC). In all cases you had to first import the knowledge module definitions into the ODI11g Work Repository and your project before you could use them. To take an example, an ODI11g mapping where the source was a file and the target, an Oracle database, might look like this as a Mapping diagram:

[screenshot]

Looking at the Flow diagram, at the start there are no knowledge modules to select from as none have yet been imported:

[screenshot]

I therefore import a selection of IKM, LKM and other KMs for the technologies I’m using, and then I’m able to assign an LKM and IKM to my flow diagram.

[screenshot]

For both Load Knowledge Modules (LKMs) and Integration Knowledge Modules (IKMs), you have a number of options ranging from generic JBDC/SQL-type modules that connect to a source and then transfer the data using JDBC batch routines, through to highly-specialized ones that leverage particular platform technologies. I typically start the prototyping phase of my ODI projects by selecting the simplest, most generic LKM and IKM I can find, and once I’ve got the mapping logic correct then shift to one that uses more of the underlying database’s features – the docs also have a nice guide for making your KM selection. For example, I might assign the LKM SQL to Oracle knowledge module to the source table and the IKM SQL Control Append one to the target, like this:

[screenshot]

In this case, ODI will first create a Java routine that extracts the rows of data from the file via a JDBC connection and then loads that data into a staging table on the target database server. Then, that staging data will be integrated into the target table using a regular SQL INSERT statement.
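For reference, the statement generated at that final step is roughly of the following shape; the C$_ prefix is ODI's naming convention for the loading work table populated by the LKM, and the schema, table and column names here are purely illustrative:

insert into TARGET_SCHEMA.EMP_DEPT_DNORM (empno, ename, deptno, dname)
select empno, ename, deptno, dname
from   ODI_STAGING.C$_0EMP_DEPT_DNORM;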

Of course I could do this more efficiently using an Oracle External Table. Let’s select the LKM File to Oracle (EXTERNAL TABLE) load knowledge module and also change the IKM to IKM Oracle Incremental Update; note that this would only work if the target database server could see the file we’re loading in via this mechanism – if the file had to stay on a remote server then I’d have to stick with the IKM SQL Control Append and ODI would effectively ignore the request to use an Oracle External Table.

[screenshot]

With ODI12c, things are a bit different due to the introduction of another type of knowledge module: “Component-Style” Knowledge Modules. Component-style KMs were introduced for a few reasons with ODI12c; they made migration of mappings and projects from OWB easier, as OWB mappings are made up of lots of arbitrarily-arranged mapping operator components that can be combined into all types of data flow, and they also made it possible for Oracle to create lots more of these granular mapping components and use them across all technology types. For example, as Oracle’s David Allen talks about in this comment on one of our previous blog posts and in this follow-up blog of his own, Oracle could create a generic Table Function component-style KM and have it apply a SQL table function for Oracle sources, or run a Pig relation through an arbitrary Pig Latin script as I did in a more recent blog post.

These new component-style KMs come built into ODI12c, which means that you don’t actually have to import any template-style KMs to get started with a mapping; in the example below from my blog post earlier today on ODI12c and Oracle Streams, I can run the mapping I’ve just created by simply selecting from the built-in component-style KMs that ship with ODI12c out-of-the-box.

[screenshot]

So should we now use component-style KMs when creating mappings and avoid the old template-style ones? The docs don’t say this explicitly, but my impression is that component-style KMs are the way Oracle wants to take things forward, and in most cases ODI will automatically select suitable component-style LKMs and IKMs when you create the physical mapping, which is usually the best option; the only time I switch to a template-style IKM or LKM is if I’m using a platform technology that component-style KMs don’t yet cover, or I’m working on some edge case – by default, though, I go with component-style LKMs and IKMs. Of course you still need to import JKMs and other KM types, and presumably Oracle will extend component KMs beyond IKMs and LKMs over time, but that’s my recommendation for now.

So now we’ve covered what component-style KMs are, there’s another new KM concept that came along with ODI12c – “Multi-Connect” KMs. Multi-Connect KMs are a special type of template-style KM that allows the staging area to be on a separate data server from the target data warehouse, whereas most template-style KMs assume the staging and target data schemas are on the same data server. I used a multi-connect IKM File-Hive to Oracle knowledge module towards the end of another article where I used Oracle Loader for Hadoop to export data out of Hadoop and into an Oracle Database; normally when you use an IKM the staging and target areas are on the same database, but in this case the staging table for the mapping was on the Hadoop (Hive) side, so I had to select LKM SQL Multi-Connection as the load knowledge module, whereupon I could then select IKM File-Hive to Oracle (OLH/OSCH) as the integration knowledge module.

[screenshot]

Similarly, with our ODI12c mapping if I moved the staging area to the source database, or more commonly a separate “ETL-hub”-style database where the customer wants a more ETL (compared to ELT)-style integration setup, I can request that the staging location for the mapping moves to this hub database using a new feature in the logical mapping editor:

[screenshot]

Then in the mapping I can add some transformations that take place on the ETL hub database, something you’d most probably do to avoid licensing ODI on the full data warehouse database server.

[screenshot]

Then when I switch to the Physical mapping view I can see this additional execution unit with the transformations in it. I first select the LKM SQL Multi-Connect to bring the file data in, then I can select an LKM and IKM combination that supports multi-connect but uses an Oracle-specific technology (in this case, dblinks) to move the data from the ETL hub to the target database.

[screenshot]

Or, if I didn’t like using dblinks or the ETL hub was on a non-Oracle platform, I could use the component-style LKM SQL Multi-Connect at the access point and then a mono-source component-style IKM to do the data integration – note, however, that whenever we use generic JDBC connections and batch extraction to bring data across to the target platform, the data flows through the agent, which won’t be as efficient as using the multi-connect template-style KM that transfers data using SQL*Net and dblinks.

[screenshot]

So – a few thoughts on the new knowledge module setup in ODI12c, and what these new component KMs and multi-connect KMs are actually for. You can find more articles on ODI12c and data integration over on our blog, if you’re interested in reading more about ODI12c development and internals.

Categories: BI & Warehousing

RMAN -- 5c : (Some More) Useful KEYWORDs and SubClauses

Hemant K Chitale - Sun, 2015-07-26 08:29
Here are a few more useful KEYWORDs and SubClauses


AS COPY and COPY OF
Unlike the BACKUPSET format that is the default for an RMAN Backup, Image Copy backups (those that would be akin to backups created as User Managed Backups without RMAN) can be created in RMAN using the AS COPY specifier. COPY OF allows backups of such backup copies.

Thus, I take an Image Copy backup of a datafile while the database is OPEN :

RMAN> backup as copy datafile 7 ;       

Starting backup at 26-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
output file name=/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9s6b4o_.dbf tag=TAG20150726T214649 RECID=2 STAMP=886110422
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
Finished backup at 26-JUL-15

Starting Control File and SPFILE Autobackup at 26-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_26/o1_mf_s_886110425_bv9s6trt_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-JUL-15

RMAN>

Did you note how datafile 7 was copied to '/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9s6b4o_.dbf' by the BACKUP AS COPY command ?
Without me specifying a FORMAT, it created the copy in the datafile location under the FRA, not in the backupset location.
Next, I take a backup of this Image Copy backup.

RMAN> backup copy of datafile 7;

Starting backup at 26-JUL-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: including datafile copy of datafile 00007 in backup set
input file name=/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9s6b4o_.dbf
channel ORA_DISK_1: starting piece 1 at 26-JUL-15
channel ORA_DISK_1: finished piece 1 at 26-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_26/o1_mf_nnndf_TAG20150726T214939_bv9scm4h_.bkp tag=TAG20150726T214939 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 26-JUL-15

Starting Control File and SPFILE Autobackup at 26-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_26/o1_mf_s_886110582_bv9scp9w_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-JUL-15

RMAN>

This time the copy in the datafile location was backed up to a backupset location. This new backup is not in Image Copy format; it is a backup of the Image Copy of datafile 7. Note the differences in the filenames. The Image Copy done with BACKUP AS COPY has an OMF filename similar to that of the source datafile, while the BackupSet format includes the TAG as part of the BackupPiece filename.

Let's run some checks, from RMAN and SQLPlus :

SQL> select file_name from dba_data_files where file_id=7;

FILE_NAME
--------------------------------------------------------------------------------
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

SQL>

RMAN> list backup of datafile 7 completed after "trunc(sysdate)";

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
91 Full 7.40M DISK 00:00:01 26-JUL-15
BP Key: 103 Status: AVAILABLE Compressed: YES Tag: TAG20150726T214939
Piece Name: /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_26/o1_mf_nnndf_TAG20150726T214939_bv9scm4h_.bkp
List of Datafiles in backup set 91
File LV Type Ckp SCN Ckp Time Name
---- -- ---- ---------- --------- ----
7 Full 14141418 26-JUL-15 /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

RMAN> list copy of datafile 7;

List of Datafile Copies
=======================

Key File S Completion Time Ckp SCN Ckp Time
------- ---- - --------------- ---------- ---------------
2 7 A 26-JUL-15 14141418 26-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9s6b4o_.dbf
Tag: TAG20150726T214649


RMAN>

The LIST BACKUP command shows me the BackupSet backup of the Image Copy. If I want to see the Image Copy that I created first, I must run the LIST COPY command. LIST BACKUP shows BackupSets, not Image Copies themselves; Image Copies are displayed by LIST COPY.

What is the advantage of Image Copy Backups? There are a few.
1)  You can integrate this with your User Managed Backup methods.
2)  You can "clone" a database without having to run a RESTORE (yes, with BACKUP AS COPY DATABASE).
3)  You can selectively relocate one or more datafiles with additional usage of the SWITCH DATAFILE TO COPY command (see my previous post "BACKUP AS COPY").

Let me demonstrate advantage 3 with a tablespace.

SQL> select file_name from dba_data_files where tablespace_name = 'HEMANT';

FILE_NAME
--------------------------------------------------------------------------------
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf

SQL>

RMAN> backup as copy tablespace HEMANT;

Starting backup at 26-JUL-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf
output file name=/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tkkj1_.dbf tag=TAG20150726T220953 RECID=4 STAMP=886111799
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00007 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
output file name=/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tkrl9_.dbf tag=TAG20150726T220953 RECID=5 STAMP=886111807
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00008 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf
output file name=/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tkzo0_.dbf tag=TAG20150726T220953 RECID=6 STAMP=886111825
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting datafile copy
input datafile file number=00009 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
output file name=/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tlrr3_.dbf tag=TAG20150726T220953 RECID=7 STAMP=886111843
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00011 name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf
output file name=/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tm7z2_.dbf tag=TAG20150726T220953 RECID=8 STAMP=886111860
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
Finished backup at 26-JUL-15

Starting Control File and SPFILE Autobackup at 26-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_26/o1_mf_s_886111863_bv9tmq38_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-JUL-15

RMAN>
RMAN> sql 'alter tablespace HEMANT offline';

sql statement: alter tablespace HEMANT offline

RMAN> switch tablespace HEMANT to copy;

datafile 6 switched to datafile copy "/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tkkj1_.dbf"
datafile 7 switched to datafile copy "/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tkrl9_.dbf"
datafile 8 switched to datafile copy "/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tkzo0_.dbf"
datafile 9 switched to datafile copy "/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tlrr3_.dbf"
datafile 11 switched to datafile copy "/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tm7z2_.dbf"

RMAN> recover tablespace HEMANT;

Starting recover at 26-JUL-15
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 26-JUL-15

RMAN> sql 'alter tablespace HEMANT online';

sql statement: alter tablespace HEMANT online

RMAN>

SQL> select file_name from dba_data_files where tablespace_name = 'HEMANT';

FILE_NAME
--------------------------------------------------------------------------------
/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tkkj1_.dbf
/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tkrl9_.dbf
/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tkzo0_.dbf
/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tlrr3_.dbf
/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9tm7z2_.dbf

SQL>

Note how all the datafiles of the tablespace were copied and then the active copy of the datafiles has been switched to the new location (/NEW_FS/oracle/FRA/HEMANTDB/datafile/). Have the old datafiles (/home/oracle/app/oracle/oradata/HEMANTDB/datafile/) been deleted ?
Let's see :

RMAN> list copy of tablespace HEMANT;

List of Datafile Copies
=======================

Key File S Completion Time Ckp SCN Ckp Time
------- ---- - --------------- ---------- ---------------
9 6 A 26-JUL-15 14142997 26-JUL-15
Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf

10 7 A 26-JUL-15 14142997 26-JUL-15
Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

2 7 A 26-JUL-15 14141418 26-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9s6b4o_.dbf
Tag: TAG20150726T214649

11 8 A 26-JUL-15 14142997 26-JUL-15
Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf

12 9 A 26-JUL-15 14142997 26-JUL-15
Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

13 11 A 26-JUL-15 14142997 26-JUL-15
Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf


RMAN>
RMAN> delete copy of tablespace HEMANT;

released channel: ORA_DISK_1
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=44 device type=DISK
List of Datafile Copies
=======================

Key File S Completion Time Ckp SCN Ckp Time
------- ---- - --------------- ---------- ---------------
9 6 A 26-JUL-15 14142997 26-JUL-15
Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf

10 7 A 26-JUL-15 14142997 26-JUL-15
Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

2 7 A 26-JUL-15 14141418 26-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9s6b4o_.dbf
Tag: TAG20150726T214649

11 8 A 26-JUL-15 14142997 26-JUL-15
Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf

12 9 A 26-JUL-15 14142997 26-JUL-15
Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

13 11 A 26-JUL-15 14142997 26-JUL-15
Name: /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf


Do you really want to delete the above objects (enter YES or NO)? YES
deleted datafile copy
datafile copy file name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf RECID=9 STAMP=886111907
deleted datafile copy
datafile copy file name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf RECID=10 STAMP=886111907
deleted datafile copy
datafile copy file name=/NEW_FS/oracle/FRA/HEMANTDB/datafile/o1_mf_hemant_bv9s6b4o_.dbf RECID=2 STAMP=886110422
deleted datafile copy
datafile copy file name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf RECID=11 STAMP=886111907
deleted datafile copy
datafile copy file name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf RECID=12 STAMP=886111907
deleted datafile copy
datafile copy file name=/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf RECID=13 STAMP=886111907
Deleted 6 objects


RMAN>

(Note how datafile 7 had two datafile copies). I could delete the old copies of the datafiles.

Note: The example with tablespace HEMANT uses OMF files. If I had non-OMF files, I could use the "%b" FORMAT modifier -- as demonstrated here.
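For example, a minimal sketch of using it with BACKUP AS COPY (the target directory is illustrative); %b substitutes just the base filename of the input datafile, so non-OMF image copies keep recognisable names:

RMAN> backup as copy datafile 7 format '/NEW_FS/oracle/copies/%b';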



ARCHIVELOG LIKE
The LIKE keyword allows you to identify individual ArchiveLogs or groups of them.

RMAN> list archivelog all;

List of Archived Log Copies for database with db_unique_name HEMANTDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
14 1 628 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc

18 1 629 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc

16 1 630 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc

22 1 631 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgv6f02_.arc

17 1 631 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc

23 1 632 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgv6f1y_.arc

15 1 632 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc

24 1 633 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_633_bsgv6f36_.arc

25 1 1 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_08/o1_mf_1_1_bst8r4yr_.arc

26 1 2 A 08-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_08/o1_mf_1_2_bstbf4nw_.arc

27 1 3 A 08-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_26/o1_mf_1_3_bv9vbq7c_.arc

28 1 4 A 26-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_26/o1_mf_1_4_bv9vbr1p_.arc

29 1 5 A 26-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_26/o1_mf_1_5_bv9vbtwz_.arc


RMAN> list archivelog like '%2015_07_26%';

List of Archived Log Copies for database with db_unique_name HEMANTDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
27 1 3 A 08-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_26/o1_mf_1_3_bv9vbq7c_.arc

28 1 4 A 26-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_26/o1_mf_1_4_bv9vbr1p_.arc

29 1 5 A 26-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_26/o1_mf_1_5_bv9vbtwz_.arc


RMAN> list archivelog like '%_6%';

List of Archived Log Copies for database with db_unique_name HEMANTDB
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
14 1 628 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_628_bsgrjztp_.arc

18 1 629 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_629_bsgrk0tw_.arc

16 1 630 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_630_bsgrk48j_.arc

22 1 631 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgv6f02_.arc

17 1 631 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_631_bsgrk49w_.arc

23 1 632 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgv6f1y_.arc

15 1 632 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_632_bsgrk8od_.arc

24 1 633 A 04-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_04/o1_mf_1_633_bsgv6f36_.arc

27 1 3 A 08-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_26/o1_mf_1_3_bv9vbq7c_.arc

28 1 4 A 26-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_26/o1_mf_1_4_bv9vbr1p_.arc

29 1 5 A 26-JUL-15
Name: /NEW_FS/oracle/FRA/HEMANTDB/archivelog/2015_07_26/o1_mf_1_5_bv9vbtwz_.arc


RMAN>

I can take advantage of this to take backups of selective archivelogs.

RMAN> backup as compressed backupset archivelog like '%2015_07_26%';

Starting backup at 26-JUL-15
using channel ORA_DISK_1
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=27 STAMP=886112599
input archived log thread=1 sequence=4 RECID=28 STAMP=886112600
input archived log thread=1 sequence=5 RECID=29 STAMP=886112602
channel ORA_DISK_1: starting piece 1 at 26-JUL-15
channel ORA_DISK_1: finished piece 1 at 26-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_26/o1_mf_annnn_TAG20150726T222529_bv9vgs8r_.bkp tag=TAG20150726T222529 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 26-JUL-15

Starting Control File and SPFILE Autobackup at 26-JUL-15
piece handle=/NEW_FS/oracle/FRA/HEMANTDB/autobackup/2015_07_26/o1_mf_s_886112730_bv9vgtc5_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 26-JUL-15

RMAN>

This is useful if you have been switching the archivelog destination to different locations during the course of the day.
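Where the archivelog destination hasn't moved, an alternative way of being selective is to pick the logs by sequence (or time) rather than by filename pattern – for example, the three sequences backed up above could also have been addressed like this:

RMAN> backup as compressed backupset archivelog from sequence 3 until sequence 5;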

Categories: DBA Blogs

OTN Tour of Latin America 2015 (Southern Leg)

Tim Hall - Sun, 2015-07-26 06:41

I put out a brief video a few days ago (re-uploaded today to fix typos) about my participation in the OTN Tour of Latin America (2015). I’ll be on the southern leg this year. Sorry to those countries that make up the northern leg. I will be back soon, I hope.

Anyway, the southern leg of the tour shapes up like this.

  • 3/4 August Uruguay UYOUG
  • 5/6 August Argentina AROUG
  • 8 August Brazil GUOB
  • 10 August Chile CLOUG
  • 12 August Peru PEOUG

I’m looking forward to seeing everyone. See you soon!

After the Peru leg, the wife and I will be going off to see Machu Picchu.

Cheers

Tim…


Using Streams with ODI12c for Oracle-to-Oracle Change Data Capture

Rittman Mead Consulting - Sun, 2015-07-26 05:10

Although Oracle GoldenGate replaced Oracle Streams a couple of years ago as the recommended data replication and change data capture technology for Oracle databases, many customers on Oracle Database 11gR2 or earlier still use Streams for Oracle-to-Oracle change data capture, as it works and, compared to GoldenGate, doesn’t require any additional licensing. Oracle’s GoldenGate Statement of Direction paper from 2014 states that Streams in Oracle 11gR2 will continue to be supported but no future versions of the Oracle Database will come with Streams included; however, if you’re on 11gR2 and you just want to trickle-feed capture between two Oracle databases, it’s an interesting option.

I covered Oracle-to-Oracle data replication using Streams a few times in the past, including this OTN article on ODI and Change Data Capture from before 2007 or so, this article on OWB11gR2 and Change Data Capture from 2010, and one from back in 2006 that went into the details of setting up asynchronous hotlog change data capture with the new “Paris” OWB10gR2 release. We’re now on the 12c release of Oracle Data Integrator and I’m teaching our ODI12c Bootcamp course to a client next week who’s particularly interested in using Streams with ODI12c, so I thought it’d be worth taking a look at this feature in more detail to see if much has changed since those earlier articles.

Let’s start then with an ODI12c 12.1.3 install with a regular mapping set-up to copy and join the DEPT and EMP tables from one Oracle database into a denormalized table in another Oracle Database. Both are Oracle Database 11gR2 (11.2.0.3) and the initial mapping looks like this:


One thing that many ODI developers don’t know about the 12c release is that it comes with a set of “component-style” knowledge modules built into the tool, which you can use straight away to get a mapping running without having to select and import IKMs, LKMs and other KMs from the ODI Studio filesystem. In my case the Physical mapping looks like the screenshot below, with two execution units (one for each Oracle Database server) and a number of built-in component-style KMs available for selection. I choose the LKM SQL to SQL (Built-in) load knowledge module, which uses a generic JDBC connection to load source records into the staging table on the target server, and then the IKM Oracle Insert integration knowledge module to take that staging data and integrate it into the target table.


I then run this mapping and see that ODI extracts data from the source database using a Java routine and batch-transfers it into the Oracle staging table, then integrates the contents of that staging table into the target Oracle table. I could of course improve this by using the LKM Oracle to Oracle (DBLink) knowledge module and thereby avoid loading in two steps, but what I’d instead like to do is use Oracle Streams to trickle-feed new and changed data from my source tables over to the target database server, as shown in the diagram below.

 


In the OWB and Asynchronous Change Data Capture article I linked to earlier in the post, setting up change data capture involved quite a few steps: the database had to be put into archivelog mode, the GLOBAL_NAMES parameter had to be set, and a whole bunch of PL/SQL procedures had to be called to set up the source-to-target connection. Once it’s running, Streams takes transactions off the redo log files on the source database and sends them across the network to the target database server, in a similar way to how GoldenGate sends transactions in the trail file across to target database servers – except it’s Oracle-to-Oracle only and, in my experience, a lot more fragile than GoldenGate, which is why we and most other customers switched to GoldenGate when it came out.

ODI12c comes with a number of change data capture or “journalizing” knowledge modules that use either database triggers, UPDATE_DATE fields, Oracle Streams or GoldenGate to replicate data from the source system to the target data warehouse. The journalizing knowledge module we’ll use, JKM Oracle 11g Consistent (Streams), is a template-style KM that needs to be imported first from the filesystem where ODI Studio was installed, as shown in the screenshot below – note also, when you do this yourself, that there’s a big “deprecated” notice next to it saying that it could be removed at any time (presumably in favour of the GoldenGate-based ones).


ODI and the JKM Oracle 11g Consistent (Streams) KM take a more “f*ck it, let’s just do it” approach to database configuration than my OWB10gR2 version did, automatically configuring the source database for Streams by running all of the setup PL/SQL routines, leaving you just to put the database in archivelog mode if it’s not already and to grant the connecting user (in my case, SYSTEM) streams administrator privileges. Moving over to SQL*Plus on the source database, I therefore run the setup commands listed in the KM notes like this:

SQL*Plus: Release 11.2.0.3.0 Production on Sun Jul 26 06:34:44 2015

 

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

 

SQL> grant dba to system;

 

Grant succeeded.

 

SQL> begin

  2  dbms_streams_auth.grant_admin_privilege(

  3       grantee   => 'system',

  4       grant_privileges => true);

  5  end;

  6  /

 

PL/SQL procedure successfully completed.

 

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> startup mount;

ORACLE instance started.

 

Total System Global Area 2471931904 bytes

Fixed Size    2230872 bytes

Variable Size  570426792 bytes

Database Buffers    1895825408 bytes

Redo Buffers    3448832 bytes

Database mounted.

SQL> alter database archivelog;

 

Database altered.

 

SQL> alter database open;

 

Database altered.

Next I’ll go back to ODI Studio and enable the source Model for journalising by double-clicking on the model in ODI Studio and then selecting the JKM from the Journalizing tab, like this:


I then right-click on the EMP and DEPT tables within the source model and select Changed Data Capture > Add to CDC, where I can fine-tune the replication order so that the new departments (DEPTNO) that employees link to will always have been created before the employee record arrives.


It’s at this next stage, when I enable journalizing, that Streams is set-up on the source and target database servers and all the supporting tables and views are created. To enable journalizing I click on the model, not the individual tables, and select Changed Data Capture > Start Journal, like this:


If you’ve read any of our previous posts on ODI and changed data capture you’ll realise that this setup process is the same regardless of the underlying replication technology, which makes it easy to start with database-centric CDC technologies such as this and then move to GoldenGate later on without lots of rework or re-training. For now though, let’s run this setup process using the local agent and then check the Operator navigator to see what it did (and whether it worked…)


And it did work. Enabling journalising with the Oracle Streams JKM involves quite a few setup steps, including checking that all the database settings and parameters are enabled correctly, then running the various DBMS_STREAMS and other packages to set up the capture and transmission process. Then, as with all of the ODI JKMs, a set of J$ tables are created to hold the primary keys of new and changed records coming from the source system, along with JV$ views that join those primary keys to the full incoming replicated rows – this blog post by the Oracle ODI team explains the background to JKMs very well if you want to understand them in more detail. Looking at the source SCOTT schema in SQL*Developer I can see the CDC and J$/JV$ tables and views created in the schema; if I didn’t want these tables created in the actual data schema I could have specified a different schema as the WORK schema when I created the database connection in ODI Studio prior to this exercise.
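For reference, the journal objects follow a fairly standard shape; the sketch below is illustrative only – the column list and the J$DEPT/JV$DEPT names are assumptions based on the general J$/JV$ convention described above, not the exact DDL that ODI generates:

-- Illustrative only: the general shape of the journal table and view for DEPT
create table j$dept (
    jrn_subscriber  varchar2(400),  -- subscriber that still has to consume this change
    jrn_consumed    varchar2(1),    -- flagged while a subscriber is processing the row
    jrn_flag        varchar2(1),    -- I = insert/update, D = delete
    jrn_date        date,           -- when the change was captured
    deptno          number(2)       -- primary key of the changed source row
);

create or replace view jv$dept as
select  j.jrn_subscriber, j.jrn_flag, j.jrn_date,
        d.deptno, d.dname, d.loc    -- full row joined back from the source table
from    j$dept j
join    dept   d
on      d.deptno = j.deptno;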


Next I have to define one or more “subscribers” to the journals; for these more advanced “consistent set” JKMs, of which the Oracle Streams one is an example, you can define multiple consumers or “subscribers” to the changed data so that one can be further down the queue than the other (Simple JKMs only allow a single subscriber). I call my subscriber “SUNOPSIS” as this is the default subscriber name ODI adds to the mappings downstream.


Pressing OK after adding the subscriber again brings up the prompt to select an agent, and going over to the Operator navigator I can see that another set of steps have run, again doing some streams setup but also adding details of the journal subscribers to the tables created on the source database.


I can check that journalising is now working by using the View Data… feature in the Designer navigator to insert new rows into the EMP and DEPT tables, and then checking the J$ tables in the source database schema to see if the rows’ primary keys have been added – which they have been.
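If you prefer SQL*Plus to the View Data dialog, a test along the following lines shows the same thing; the J$DEPT name follows the naming convention sketched above and is an assumption rather than the exact generated name:

insert into dept (deptno, dname, loc)
values (50, 'RESEARCH LABS', 'READING');

commit;

-- With Streams the capture is asynchronous, so allow a moment for the
-- change to propagate before checking the journal table
select jrn_subscriber, jrn_flag, jrn_date, deptno
from   j$dept
order  by jrn_date;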


To now just read journalised new and changed data into my mapping, all I do then is go to the Physical mapping diagram, select the first source table and check the Use Journalized Data Only checkbox, then do the same for the other table (note it is the table source you select, not the access point for that table into the target execution block).


So now I’ll run the mapping and check the results in the Operator navigator … but instead, I get an error:


This is because, given the way that ODI handles CDC and journalising, it can’t allow two journalised tables to be directly joined in a mapping – we think this is because ODI can’t guarantee both tables are in the same update “state” and therefore makes you copy their data into a staging or intermediary table before you can do the join. I therefore amend the mapping to load the journalised tables into staging tables on the target database server, amend the joins and filter to reference the staging tables, and then join their contents to filter and load into the target reporting table, like this:


Within the Physical mapping details, the two incoming Oracle staging tables are loaded by the same LKM SQL to SQL component-style knowledge module we used before to extract data from the two journalised tables, and the Journalized Data Only flag is still set for the source tables, as you can see below.


What this also highlights is a key difference between the way I trickle-fed transactions across from my source database back in the OWB and Changed Data Capture article linked at the start of this post, and the way ODI’s JKMs do it; in the OWB example I set up an actual trickle-feed process outside of OWB which transferred changed data across the network to my target data warehouse, whereupon I then read those change tables and used them to update the target DW tables in my data warehouse.

And this is actually how the GoldenGate KMs work with ODI – a GoldenGate replication process copies new and changed data from the source database to the target and ODI then reads from these change tables, whereas the Streams (and other non-GoldenGate) JKMs create the change capture tables back on the source database (the various J$ and JV$ tables I reviewed using SQL*Developer earlier on), with ODI then reading from those remote change tables and bringing the new and changed data across to the target database server. This, I guess, makes things easier to set up – you don’t have to worry about configuring the target database for Streams replication – but it does mean that you still incur the network traffic every time you micro-batch the changes across the network, rather than spreading that traffic transaction-by-transaction.

Anyway, back to ODI and this time, when I run the mapping it works, though looking at the Operator navigator again I can see that no new data came across, and the journal data is still waiting to be consumed afterwards. Why is this?


If you’ve only used the Simple CDC JKMs from Oracle before, this is normally all you need to do, but with Consistent Set ones such as this one, or the GoldenGate JKMs, you need to lock the subscriber view of the journalised data and extend the CDC window before you can access the journal records; for the Simple JKMs the IKM (Integration Knowledge Module) takes care of the unlock, extend, purge and lock operations for you automatically in the background, whereas with Consistent Set ones you typically do this as part of a wider ODI package, as shown in the screenshot below, with the first and last tasks created by dragging and dropping the journalized model onto the package canvas and selecting Journalizing Model as the type (the subscriber name, “SUNOPSIS” in this case, is typed in below those settings and is off-screen in the screenshot).


Now when I run this package, as opposed to the mapping on its own, I get the row of new data I was expecting and the journal table is now empty, as I was the only subscriber.


Finally, if I was looking for real-time continuous loading into this target table, I could wrap the package in an event-detection loop that waits (in this case) ten seconds for three journal rows to be written, then either processes the three as soon as the third arrives or loops around again every ten seconds (obviously in reality you’d want to put in a mechanism to halt the loop if needed, but in my case I’ll just kill the job from the Operator navigator when I want it to stop).


So that’s the basics of using ODI with Oracle Streams for Oracle-to-Oracle changed data capture; if you’re just copying data between two Oracle databases and you’re on 11gR2 or earlier this might be an option, but long-term you’ll need to think about GoldenGate as Oracle aren’t developing Streams beyond the 11gR2 release. Note also that all the Streams activity happens over on the source database server, so you still need the additional step of copying the journaled data across to the target data warehouse, but it’s still a fairly non-invasive way to capture changes on the source Oracle database and it does have the considerable advantage (compared to GoldenGate) of being free to use.

Categories: BI & Warehousing

Check out 6.00.1x computer science class on edX!

Bobby Durrett's DBA Blog - Sat, 2015-07-25 11:15

I just finished the last program for a computer science class on edX and I urge you to try it.

I took this class:

MITx: 6.00.1x Introduction to Computer Science and Programming Using Python

I was more interested in how MIT taught the class than in the material itself because I already know the subjects covered.

The class taught the basics of programming – expressions, variables, loops, if statements, and functions.

It also had a large focus on bisection or binary search and the performance benefits of this type of search over sequentially reading through a list.

It also covered lists, hash tables, trees, stacks, and queues.

It discussed object-oriented programming.

The class concluded with the professor stating that the programming and computer science skills taught in this class are key to advancing your career, even if you do not work in a computer related job.

I interacted with a number of students in the class and found some who were in other fields and were having success taking the class. Others were in business computing or IT and yet did not have a computer science background, so they were good programmers but were learning new concepts. Many struggled with the class, but it is free and is given often. The class starts up again August 26th. Nothing stops you from taking it multiple times.

I tried to think about whether I should recommend this class to the people I work with as a method of helping develop coworkers who do not have experience in these areas. At first I thought that the subject was too academic and had no connection to their jobs. But, after thinking about it for a while, I now believe that just the opposite is true.

Searching for practical applications of the class, I first remembered the programs that we wrote that compared searching sequentially through a list to using binary search.  In one test case the sequential method took 15 seconds but the binary search took less than one second.  This reminded me so much of tuning Oracle SQL queries.  The sequential scan of the list was like a full table scan in Oracle.  The binary search was like looking up a single row using an index scan.  As I tune Oracle queries my computer science knowledge of binary search and binary trees makes it easy to understand index and full table scans.
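To make the analogy concrete, here is a minimal sketch (table and column names are made up purely for illustration) that contrasts the two access paths; with autotrace enabled, the first query shows a full table scan and the second an index range scan:

create table big_list as
select rownum as id, rpad('x',100) as payload
from   dual
connect by level <= 100000;

create index big_list_i1 on big_list(id);

set autotrace traceonly explain

-- like reading the list sequentially: full table scan
select payload from big_list where payload like 'x%';

-- like a binary search down a B-tree: index range scan on id
select payload from big_list where id = 42;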

In another example, we recently had slowness on a Weblogic portal server.  CPU was getting maxed out and the CPU spent most of its time in a Java ConcurrentHashMap object.  I don’t know the internals of Weblogic and I have never used a ConcurrentHashMap but I know how hashing works.  I know that hashing is very fast until your hash table fills up or if the hash function distributes the items in an unequal way. My knowledge of hashing helped me grasp why our portal server was using a lot of CPU despite my lack of access to its internals.
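The same idea is easy to see in PL/SQL, where an associative array gives a hash-style constant-time lookup; this is just an illustrative sketch and has nothing to do with WebLogic’s internals:

set serveroutput on
declare
    type t_lookup is table of varchar2(30) index by varchar2(30);
    l_dept_name t_lookup;
begin
    -- populate the hash-style structure once...
    l_dept_name('10') := 'ACCOUNTING';
    l_dept_name('20') := 'RESEARCH';
    l_dept_name('30') := 'SALES';
    -- ...then each lookup is a single hashed probe, not a scan of every entry
    dbms_output.put_line(l_dept_name('20'));
end;
/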

So, contrary to my original fear that the edX class was too academic and not practical, I believe that the concepts covered are very practical. If you do not know how binary search works or what a binary tree is, you will benefit from 6.00.1x on edX. If you cannot explain how a hash table works and what causes hashing to slow down, you can learn from 6.00.1x. And, if you have never written a computer program, although you may find the class difficult and have to take it more than once, you will benefit from 6.00.1x on edX.

– Bobby

 

Categories: DBA Blogs

Blackboard Ultra and Other Product and Company Updates

Michael Feldstein - Sat, 2015-07-25 08:58

By Michael Feldstein

Phil and I spent much of this past week at BbWorld trying to understand what is going on there. The fact that their next-generation Ultra user experience is a year behind is deservedly getting a lot of attention, so one of our goals going into the conference was to understand why this happened, where the development is now, and how confident we could be in the company’s development promises going forward. Blackboard, to their credit, gave us tons of access to their top executives and technical folks. Despite the impression that a casual observer might have, there is actually a ton going on at the company. I’m going to try to break down much of the major news at a high level in this post.

The News

Ultra is a year late: Let’s start with the obvious. The company showed off some cool demos at last year’s BbWorld, promising that the new experience would be Coming Soon to a Campus Near You. Since then, we haven’t really heard anything. So it wasn’t surprising to get confirmation that it is indeed behind schedule. What was more surprising was to see CEO Jay Bhatt state bluntly in the keynote that yes, Ultra is behind schedule because it was harder than they thought it would be. We don’t see that kind of no-spin honesty from ed tech vendors all that often.

Ultra isn’t finished yet: The product has been in use by a couple of dozen early adopter schools. (Phil and I haven’t spoken with any of the early adopters yet, but we intend to.) It will be available to all customers this summer. But Blackboard is calling it a “technical preview,” largely because there are large swathes of important functionality that have not yet been added to the Ultra experience–things like tests and groups. It’s probably fine to use it for simple (and fairly common) on-campus use cases, but there are still some open manholes here.


Ultra is only available in SaaS at the moment and will not be available for on-premise installations any time soon: This was a surprise both to us and to a number of Blackboard customers we spoke to. It’s available now for SaaS customers and will be available for managed hosting customers, but the company is making no promises about self-hosted. The main reason is that they have added some pretty bleeding edge new components to the architecture that are hard to wrap up into an easily installable and maintainable bundle. The technical team believes this situation may change over time as the technologies that they are using mature—to be clear, we’re talking about third-party technologies like server containers rather than homegrown Blackboard technologies—they think it may become practical for schools to self-host Ultra if they still want to by that time. But don’t expect to see this happen in the next two years.

Ultra is much more than a usability makeover and much more ambitious than is commonly understood: There is a sense in the market that Ultra is Blackboard’s attempt to catch up with Instructure’s ease of use. While there is some truth to that, it would be a mistake to think of Ultra as just that. In fact, it is a very ambitious re-architecture that, for example, has the ability to capture a rich array of real-time learning analytics data. These substantial and ambitious under-the-hood changes, which Phil and I were briefed on extensively and which were also shared publicly at Blackboard’s Devcon, are the reason why Ultra is late and the reason why it can’t be locally installed at the moment. I’m not going to have room to go into the details here, but I may write more about it in a future post.

Blackboard “Classic” 9.x is continuing under active development: If you’re self-hosted, you will not be left behind. Blackboard claims that the 9.x code line will continue to be under active development for some time to come, and Phil and I found their claims to be fairly convincing. To begin with, Jay got burned at Autodesk when he tried to push customers onto a next-generation platform and they didn’t want to go. So he has a personal conviction that it’s a bad idea to try that again. But also, Blackboard gets close to a quarter of its revenue and most of its growth from international markets now, and for a variety of reasons, Ultra is not yet a good fit for those markets and probably won’t be any time soon. So self-hosted customers on Learn 9.x will likely get some love. This doesn’t mean development will be as fast as they would like; the company is pushing hard in a number of directions, and we get the definite sense that there is a strain on developer resources. But 9.x will not be abandoned or put into maintenance mode in the near future.


If you want to get a sense of what Ultra feels like, try out the Blackboard Student mobile app: The way Blackboard uses the term “Ultra” is confusing, because sometimes it means the user experience and sometimes it means the next-generation architecture for Learn. If you want to try Ultra the user experience, then play with the Student mobile app, which is in production today and which will work with Learn 9.x as well as Learn Ultra. Personally, I think it represents some really solid thinking about designing for students.


Moodle may make a comeback: One of the reasons that Moodle adoption has suffered in the United States the past few years is that it has lacked an advocate with a loud voice. Moodlerooms used to be the biggest promoter of the platform, and when Blackboard acquired them, they went quiet in the US. But, as I already mentioned, the international market is hugely important for Blackboard now, and Moodle is the cornerstone of the company’s international strategy. They have been quietly investing in the platform, making significant code contributions and acquisitions. There are signs that Blackboard may unleash Moodlerooms to compete robustly in the US market again. This would entail taking the risk that Moodle, a cheaper and lower-margin product, would cannibalize their Learn business, so file this under “we’ll believe it when we see it,” but Apple has killed the taboo of self-cannibalization when the circumstances are right, and they seem like they may be right in this situation.

Collaborate Ultra is more mature than Learn Ultra but still not mature: This is another case where thinking about Ultra as a usability facelift would be hugely underestimating the ambition of what Blackboard is trying to do. The new version of Collaborate is built on a new standard called WebRTC, which enables webconferencing over naked HTML rather than through Flash or Java. This is extremely hard stuff that big companies like Google, Microsoft, and Apple are still in the process of working out right now. It is just this side of crazy for a company the size of Blackboard to try to release a collaboration product based heavily on this technology. (And the only reason it’s not on the other side of crazy is because Blackboard acquired a company that has one of the world’s foremost experts on WebRTC.) Phil and I have used Collaborate Ultra a little bit. It’s very cool but a little buggy. And, like Learn Ultra, it’s still missing some features. At the moment, the sweet spot for the app appears to be online office hours.


 

My Quick Take

I’m trying to restrain myself from writing a 10,000-word epic; there is just a ton to say here. I’ll give a high-level framework here and come back to some aspects in later posts. Bottom line: If you think that Ultra is all about playing catch-up with Instructure on usability, then the company’s late delivery, functionality gaps, and weird restrictions on where the product can and cannot be run look pretty terrible. But that’s probably not the right way to think about Ultra. The best analogy I can come up with is Apple’s Mac OS X. In both cases, we have a company that is trying to bring a large installed base of customers onto a substantially new architecture and new user experience without sending them running for the hills (or the competitors). This is a really hard challenge.

Hardcore OS X early adopters will remember that 10.0 was essentially an unusable technology preview, 10.1 was usable but painful, 10.2 was starting to feel pretty good, and 10.3 was when we really began to see why the new world was going to be so much better than the old one. If I am right, Ultra will go through the same sort of evolution. I don’t know that these stages will each be a year long; I suspect that they may be shorter than that. But right now we are probably partway through the 10.0 era for Ultra. As I mentioned earlier in the post, Phil and I still need to talk to some Ultra customers to get a sense of real usage and, of course, since it will be generally available to SaaS customers for use in the fall semester, we’ll have more folks to talk to soon.

We will be watching closely to see how big the gaps are and how quickly they are filled. For example, how long will it take Blackboard to get to the items labeled as “In Development” on their slides? Does that mean in a few months? More? And what about the “Research” column? Based on these slides and our conversations, I think the best case scenario is that we reach the 10.2 era—where the platform is reasonably feature-complete, usable, and feeling pretty good overall—by BbWorld 2016, with some 10.3-type new and strongly differentiating features starting to creep into the picture. Or they could fall flat and utterly fail to deliver. Or something in between. I’m pretty excited by the scope of the company’s ambition and am willing to cut them some slack, partly because they persuaded me that what they are trying to do is pretty big and partly because they persuaded me that they probably know what they are doing. But they have had their Mulligan. As the saying goes (when properly remembered), the proof of the pudding is in the eating. We’ll see what they deliver to customers in the next 6-12 months.

Watch this space.

The post Blackboard Ultra and Other Product and Company Updates appeared first on e-Literate.

PL/SQL Error Logging and Quantum Theory

The Anti-Kyte - Fri, 2015-07-24 14:17

When I started writing this post, it was going to be about something else.
This happens occasionally, I have an idea in my head and set to work.
Then I do some research – don’t look so surprised, I do look at the docs occasionally – and, as in this case, I find out that there’s rather more to the topic at hand than I first thought.
What follows is a re-visiting of some of the tools available in Oracle to help with error logging.
It includes stuff that either I’d forgotten or had never considered about some fairly common functions.
Before I dive in, I’d just like to say thanks to William Robertson, who first pointed out to me the similarity between PL/SQL error logging and Quantum Theory. If you’re still unclear of the connection between the two then consider, if you will, the Schrodinger’s Cat Thought Experiment.
It involves locking a cat in a box and possibly poisoning it.
Schrodinger postulates that the cat is both alive and dead…until you open the box to check.
The conclusions we can draw from this experiment are :

  • According to Quantum Theory, the act of observation changes the nature of the thing being observed
  • Schrodinger wasn’t a Cat person

Before going any further, I should point out that most of the stuff I know about Physics comes from watching Star Trek.

Moving on, I now invite you to consider…

Mysteriously moving errors

As with cats – according to Schrodinger at least – the act of “observing” – well, handling – a PL/SQL exception changes the error (or the location from which it originated, at any rate).

For example…

declare
    l_cat number;
begin
    l_cat := 'GREEBO';
end;
/

declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 4

No problem with this. It shows us that the error happened at line 4 in the code, which is correct.
However….

declare
    l_cat number;
begin
    l_cat := 'GREEBO';
exception
    when others then
        -- do some logging stuff....
        raise;
end;
/

declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 8

Here, the exception originated on line 4. However, the error is reported at line 8 – inside the exception handler.

This then, is the problem with which we need to wrestle.
Time to take a closer look at the tools that Oracle provides us for the purpose of error handling, starting with the most venerable…

SQLCODE and SQLERRM

SQLCODE and SQLERRM have been around for as long as I can remember.
If you’ve worked on PL/SQL applications for any length of time, you will almost certainly have seen a variation of one of the following in an exception handler :

    sqlerrm(sqlcode);
    
    sqlcode||' : '||sqlerrm;
    
    substr(sqlerrm(sqlcode),1,500);

If any of the above rings any bells then it illustrates the point that these functions (for that is what they are) are not especially well understood.

SQLCODE

SQLCODE returns the number of the last error encountered. Not the message, just the error number :

set serveroutput on
declare
    l_cat_lives number;
begin
    l_cat_lives := 'If the cat belongs to Shrodinger the answer is uncertain. Otherwise 9';
exception 
    when others then
        dbms_output.put_line(sqlcode);
end;
/

-6502

PL/SQL procedure successfully completed.

SQLCODE returns 0 on successful completion…

declare
    l_cat_lives number;
begin
    l_cat_lives := 9;
    dbms_output.put_line(sqlcode);
end;
/
0

PL/SQL procedure successfully completed.

For user-defined errors, it returns 1 by default…

declare
    e_no_cat exception;
begin
    raise e_no_cat;
exception when e_no_cat then
    dbms_output.put_line(sqlcode);
end;
/

1

PL/SQL procedure successfully completed.

…unless you associate the exception with an error number using the EXCEPTION_INIT pragma…

declare
    e_no_cat exception;
    pragma exception_init( e_no_cat, -20000);
begin
    raise e_no_cat;
exception when e_no_cat then
    dbms_output.put_line(sqlcode);
end;
/
-20000

PL/SQL procedure successfully completed.

SQL> 

It will also return the relevant error code if you use RAISE_APPLICATION_ERROR…

begin
    raise_application_error(-20001, 'The cat has run off');
exception when others then
    dbms_output.put_line(sqlcode);
end;
/
-20001

PL/SQL procedure successfully completed.

SQL> 

On its own then, SQLCODE is not much help in terms of working out what went wrong unless you happen to have memorized all of the Oracle error messages.

Fortunately we also have…

SQLERRM

This function takes in an error number and returns the relevant message :

begin
    dbms_output.put_line(sqlerrm(-6502));
end;
/
ORA-06502: PL/SQL: numeric or value error

PL/SQL procedure successfully completed.

Because of this SQLERRM can be used to create the equivalent of the oerr utility in PL/SQL.
Better still, it takes SQLCODE as its default parameter…

declare
    l_cat_lives number;
begin
    l_cat_lives := 'If the cat belongs to Shrodinger the answer is uncertain. Otherwise 9';
exception
    when others then
        dbms_output.put_line(sqlerrm);
end;
/

ORA-06502: PL/SQL: numeric or value error: character to number conversion error

PL/SQL procedure successfully completed.

The maximum length of a varchar returned by SQLERRM is, according to the documentation, “the maximum length of an Oracle Database error message” – 512.

Whilst we’re on the subject, the 11gR2 documentation includes a note recommending that, generally, DBMS_UTILITY.FORMAT_ERROR_STACK be used instead…

DBMS_UTILITY.FORMAT_ERROR_STACK

So, let’s see what this function gives us when used as a drop-in replacement for SQLERRM….

declare
    l_cat_lives number;
begin
    l_cat_lives := 'If the cat belongs to Shrodinger the answer is uncertain. Otherwise 9';
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/
ORA-06502: PL/SQL: numeric or value error: character to number conversion error


PL/SQL procedure successfully completed.

SQL> 

Not much then based on this example. However, there are a couple of differences.
The first is that this function returns up to 2000 characters of the error stack.
The second is that it does not take any arguments. Fair enough I suppose. From the name you’d infer that this function returns the entire error stack rather than the single message that SQLERRM does.
Let’s put that to the test…

create or replace package transporter as
    function find_target return varchar2;
    procedure beam_me_up_scotty;
end transporter;
/

create or replace package body transporter as
    function find_target 
        return varchar2
    is
    begin
        raise_application_error(-20003, 'Location or velocity unknown');
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    end beam_me_up_scotty;
end transporter;
/

This package is an analog of what Star Fleet Engineers would have been working with before they came up with the Heisenburg Compensator.

If we call this without any error handling, we’ll get a “stack” of errors…

begin
    transporter.beam_me_up_scotty;
end;
/

*
ERROR at line 1:
ORA-20003: Location or velocity unknown
ORA-06512: at "MIKE.TRANSPORTER", line 6 
ORA-06512: at "MIKE.TRANSPORTER", line 13
ORA-06512: at line 1

You’d expect something fairly similar if you used the FORMAT_ERROR_STACK function…

set serveroutput on size unlimited
begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/

ORA-20003: Location or velocity unknown



PL/SQL procedure successfully completed.

SQL> 

So, not similar at all then.
NOTE – if you’ve already spotted the deliberate mistake here, bear with me for a bit.
If we change the package body so that errors are raised at multiple levels…

create or replace package body transporter as
    function find_target 
        return varchar2
    is
    begin
        raise_application_error(-20003, 'Location or velocity unknown');
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    exception when others then
        raise_application_error(-20004, 'I canna change the laws o physics!');
    end beam_me_up_scotty;
end transporter;
/

… we simply get the last error passed…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/

ORA-20004: I canna change the laws o physics!



PL/SQL procedure successfully completed.

SQL> 

From all of this, it would appear that DBMS_UTILITY.FORMAT_ERROR_STACK doesn’t really give us much (if anything) over SQLERRM. This is especially true if the documentation is correct and no single Oracle Error Message will exceed 512 bytes.
All of which is rather odd, until you consider…

RAISE_APPLICATION_ERROR

RAISE_APPLICATION_ERROR is usually invoked as in the above example. However, it actually accepts three arguments :

  • NUM – an error code in the range -20000 to -20999
  • MSG – an error message up to 1024 characters ( including the error code)
  • KEEPERRORSTACK – if TRUE then the error code is placed at the top of the error stack. Otherwise it replaces the error stack.
    Default is FALSE

The first point to note here is that, unlike SQLERRM, FORMAT_ERROR_STACK can accommodate the full length of a message from RAISE_APPLICATION_ERROR.
More relevant to the issue at hand, however, is the KEEPERRORSTACK parameter. If we tweak the package once more to set this parameter to true…

create or replace package body transporter as
    function find_target 
        return varchar2
    is
    begin
        raise_application_error(-20003, 'Location or velocity unknown', true);
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    exception when others then
        raise_application_error(-20004, 'I canna change the laws o physics!', true);
    end beam_me_up_scotty;
end transporter;
/

…and re-run our test…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/
ORA-20004: I canna change the laws o physics!
ORA-06512: at "MIKE.TRANSPORTER", line 16
ORA-20003: Location or velocity unknown

PL/SQL procedure successfully completed.

…we now get a stack. However, we’re still stuck without the line number from where the error originated.
Fortunately, they’ve been burning the candle at both ends over at Star Fleet, or possibly at Redwood Shores…

DBMS_UTILITY.FORMAT_ERROR_BACKTRACE

Let’s try re-executing our package, this time using FORMAT_ERROR_BACKTRACE…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/

ORA-06512: at "MIKE.TRANSPORTER", line 7
ORA-06512: at "MIKE.TRANSPORTER", line 14
ORA-06512: at line 2



PL/SQL procedure successfully completed.

SQL> 

Well, that’s different. We get a stack, together with the line number at which the error originated. Unfortunately it doesn’t include the originating error message itself. Let’s try that again, but this time in combination with SQLERRM…

 
begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(sqlerrm);
        dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 7
ORA-06512: at "MIKE.TRANSPORTER", line 14
ORA-06512: at line 2



PL/SQL procedure successfully completed.

SQL> 

Now we have the original error. We also have the line at which it happened. At last, we have our Heisenburg compensator.
Well, in most circumstances. Just before we test it on Admiral Archer’s prize beagle …

create or replace package body transporter as
    function find_target 
        return varchar2
    is
        l_silly number;
    begin
        l_silly :=  'Location or velocity unknown';
        exception when others then
            -- do some logging and...
            raise;
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    end beam_me_up_scotty;
end transporter;
/

Now we’ve added an error handler to the innermost package member…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(sqlerrm);
        dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/

ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 10
ORA-06512: at "MIKE.TRANSPORTER", line 17
ORA-06512: at line 2



PL/SQL procedure successfully completed.

…once again the handled error has “moved” to the exception block of the function.

In terms of retrieving the error stack, it would appear that a combination of SQLERRM and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE offers the most comprehensive and reliable information.
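If you wanted to package that combination up into a reusable logger, a minimal sketch might look something like this (the ERROR_LOG table and LOG_ERROR procedure are made-up names for illustration, not a standard API) :

create table error_log (
    logged_at   timestamp default systimestamp,
    error_msg   varchar2(4000),
    error_stack varchar2(4000)
);

create or replace procedure log_error(
    i_msg   in varchar2,
    i_stack in varchar2)
is
    pragma autonomous_transaction; -- so the log row survives a rollback of the failing transaction
begin
    insert into error_log( error_msg, error_stack)
    values( i_msg, i_stack);
    commit;
end log_error;
/

-- typical usage in an exception handler :
--     exception
--         when others then
--             log_error( sqlerrm, dbms_utility.format_error_backtrace);
--             raise;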
However, in order to further pin down where those pesky errors are originating we may need to turn to a time-honoured technique – albeit with a comparatively modern twist…

Location Markers with $$PLSQL_LINE

The $$PLSQL_LINE variable simply returns the line number of the stored program unit or anonymous block you’re currently in…

begin
    dbms_output.put_line('At line : '||$$plsql_line);
end;
/

At line : 2

PL/SQL procedure successfully completed.

SQL> 

By sprinkling a few dollars through our code, we should get a better (although still not necessarily exact) idea of where our error is originating.

I’m going to persevere with this transporter code. After all, they managed to get it working in the original Star Trek and that was way back in the 60’s…

create or replace package body transporter as
    function find_target 
        return varchar2
    is
        l_loc pls_integer;
        l_silly number;
    begin
        l_loc := $$plsql_line;
        l_silly :=  'Location or velocity unknown';
        exception when others then
            dbms_output.put_line('Error originating after line '||l_loc);
            raise;
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    end beam_me_up_scotty;
end transporter;
/

Now, we should get a bit more information…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(sqlerrm);
        dbms_output.put_line(dbms_utility.format_error_backtrace); 
end;
/
Error originating after line 8
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 12
ORA-06512: at "MIKE.TRANSPORTER", line 19
ORA-06512: at line 2



PL/SQL procedure successfully completed.

SQL> 

The error actually originates from line 9 so that’s a pretty good approximation.
The downside is the aforementioned need to sprinkle assignments of $$PLSQL_LINE to a variable immediately before each action you perform.

Well, I’ve probably managed to annoy any Physics experts and Star Trek fans that happen to be reading. That’s before you even start thinking about PL/SQL Developers.
On the plus side I can say, hand-on-heart, that no cats were harmed in the writing of this post.


Filed under: Oracle, PL/SQL Tagged: $$plsql_line, dbms_utility.format_error_backtrace, dbms_utility.format_error_stack, pl/sql exceptions, pragma exception_init, raise_application_error, sqlcode, sqlerrm

Log Buffer #433: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-24 10:05

This Log Buffer Edition covers Oracle, SQL Server and MySQL blogs of the running week.

Oracle:

  • While checking the sources of the Cassandra/NetBeans integration into GitHub yesterday, something went very badly wrong and ALL the source files in my Maven project disappeared!
  • AWR Reports, Performance Hub, historisches SQL Monitoring in 12c
  • Oracle Database Mobile Server 12c: Advanced data synchronization engine
  • ORA-39001, ORA-39000 and ORA-39142
  • ORA-15410: Disks in disk group do not have equal size

SQL Server:

  • SAN and NAS protocols and how they impact SQL Server
  • SQL Style Habits: Attack of the Skeuomorphs
  • Is It Worth Writing Unit Tests?
  • Large SQL Server Database Backup on an Azure VM and Archiving
  • Reporting Services: Drawing a Buffer on a Map

MySQL:

  • MySQL Tcpdump system : use percona-toolkit to analyze network packages
  • Replication in real-time from Oracle and MySQL into data warehouses and analytics
  • Altering tablespace of table – new in MySQL 5.7
  • MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh
  • MySQL upgrade 5.6 with innodb_fast_checksum=1

 

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

 

The post Log Buffer #433: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

APEX 5.0.1 : We’re all patched up!

Tim Hall - Fri, 2015-07-24 09:32

APEX 5.0.1 was released about a week ago. I started to patch some stuff straight away. We were already on APEX 5.0 across the board, so we didn’t need to do any full installations, just patches.

During the patching I noticed we were getting some issues with supposed misconfiguration of static files. After clearing my browser cache, the message went away, so I tweeted this.

“Regarding APEX 5.0.1 patch. Clear your browser cache, or you may get that static files message, even though config is correct. :) #orclapex”

Practically before I hit return, Patrick Wolf came back with this.

“@oraclebase Thanks for letting us know. I have filed bug# 21463521 and fixed it for 5.0.2″

We’re a week on now and all our APEX installations are happily running 5.0.1. We had no issues with the upgrades and no problems with the apps.

We are small fry where APEX is concerned, so don’t take this as the green light to upgrade everything yourself. I’m just saying we had no problems with it, which is pretty cool. If APEX is a strategic environment for you, you will probably need to do a lot more testing than us. :)

Cheers

Tim…

APEX 5.0.1 : We’re all patched up! was first posted on July 24, 2015 at 4:32 pm.

MobaXterm 8.0

Tim Hall - Fri, 2015-07-24 08:19

I just noticed MobaXterm 8.0 was released a few days ago.

Downloads and changelog available in the usual places.

Happy unzipping!

Cheers

Tim…

MobaXterm 8.0 was first posted on July 24, 2015 at 3:19 pm.

AUDIT_SYS_OPERATIONS defaults to TRUE in #Oracle 12c

The Oracle Instructor - Fri, 2015-07-24 08:09

A small but remarkable change in Oracle Database 12c is that the default value of AUDIT_SYS_OPERATIONS is now TRUE. In other words, all actions done by the superuser SYS are now audited by default!

[oracle@uhesse ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jul 24 15:23:10 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select name,value from v$spparameter where isspecified='TRUE';

NAME                                     VALUE
---------------------------------------- --------------------------------------------------
memory_target                            1073741824
control_files                            /u01/app/oracle/oradata/prima/control01.ctl
db_block_size                            8192
compatible                               12.1.0.2
db_recovery_file_dest                    /u02/fra
db_recovery_file_dest_size               2147483648
undo_management                          auto
undo_tablespace                          undotbs1
remote_login_passwordfile                exclusive
db_name                                  prima
diagnostic_dest                          /u01/app/oracle

11 rows selected.


SQL> show parameter sys_oper

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_sys_operations                 boolean     TRUE
SQL> select count(*) from scott.dept;

  COUNT(*)
----------
         4

SQL> show parameter audit_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/app/oracle/product/12.1.0
                                                 /dbhome_1/rdbms/audit
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@uhesse ~]$ cd /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/audit

[oracle@uhesse audit]$ cat prima_ora_6204_20150724152310753136143795.aud
Audit file /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/audit/prima_ora_6204_20150724152310753136143795.aud
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1
System name:    Linux
Node name:      uhesse
Release:        3.8.13-68.2.2.el7uek.x86_64
Version:        #2 SMP Tue May 12 14:38:58 PDT 2015
Machine:        x86_64
Instance name: prima
Redo thread mounted by this instance: 1
Oracle process number: 41
Unix process pid: 6204, image: oracle@uhesse (TNS V1-V3)

Fri Jul 24 15:23:10 2015 +02:00
LENGTH : '160'
ACTION :[7] 'CONNECT'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[6] 'oracle'
CLIENT TERMINAL:[5] 'pts/1'
STATUS:[1] '0'
DBID:[10] '2113606181'
[Output shortened...]
Fri Jul 24 15:23:56 2015 +02:00
LENGTH : '185'
ACTION :[31] 'select count(*) from scott.dept'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[6] 'oracle'
CLIENT TERMINAL:[5] 'pts/1'
STATUS:[1] '0'
DBID:[10] '2113606181'

Something you might need to know as a DBA, don’t you think? :-)


Tagged: 12c New Features, security
Categories: DBA Blogs

Subquery Factoring (9)

Jonathan Lewis - Fri, 2015-07-24 05:34

Several years go (eight to be precise) I wrote a note suggesting that Oracle will not materialize a factored subquery unless it is used at least twice in the main query. I based this conclusion on a logical argument about the cost of creating and using a factored subquery and, at the time, I left it at that. A couple of years ago I came across an example where even with two uses of a factored subquery Oracle still didn’t materialize even though the cost of doing so would reduce the cost of the query – but I never got around to writing up the example, so here it is:


create table t1
as
select
        object_id, data_object_id, created, object_name, rpad('x',1000) padding
from
        all_objects
where
        rownum <= 10000
;

exec dbms_stats.gather_table_stats(user,'T1')

explain plan for
with gen as (
        select /*+ materialize */ object_id, object_name from t1
)
select
        g1.object_name,
        g2.object_name
from
        gen g1,
        gen g2
where
        g2.object_id = g1.object_id
;

select * from table(dbms_xplan.display);

You’ll notice that my original table has very wide rows, but my factored subquery selects a “narrow” subset of those rows. My target is to have an example where doing a tablescan is very expensive but the temporary table holding the extracted data is much smaller and cheaper to scan.

I’ve included a materialize hint in the SQL above, but you need to run the code twice, once with, and once without the hint. Here are the two plans – unhinted first:


============================
Unhinted - won't materialize
============================

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   468K|   428   (2)| 00:00:03 |
|*  1 |  HASH JOIN         |      | 10000 |   468K|   428   (2)| 00:00:03 |
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   234K|   214   (2)| 00:00:02 |
|   3 |   TABLE ACCESS FULL| T1   | 10000 |   234K|   214   (2)| 00:00:02 |
---------------------------------------------------------------------------

==================================
Hinted to materialize - lower cost
==================================

--------------------------------------------------------------------------------------------------------- 
| Id  | Operation                  | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------- 
|   0 | SELECT STATEMENT           |                            | 10000 |   585K|   227   (2)| 00:00:02 |
|   1 |  TEMP TABLE TRANSFORMATION |                            |       |       |            |          |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D6664_9DAAEB7 |       |       |            |          | 
|   3 |    TABLE ACCESS FULL       | T1                         | 10000 |   234K|   214   (2)| 00:00:02 | 
|*  4 |   HASH JOIN                |                            | 10000 |   585K|    13   (8)| 00:00:01 | 
|   5 |    VIEW                    |                            | 10000 |   292K|     6   (0)| 00:00:01 | 
|   6 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D6664_9DAAEB7 | 10000 |   234K|     6   (0)| 00:00:01 | 
|   7 |    VIEW                    |                            | 10000 |   292K|     6   (0)| 00:00:01 | 
|   8 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D6664_9DAAEB7 | 10000 |   234K|     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Clearly the optimizer isn’t considering the costs involved. If I add the predicate “where object_id > 0” (which identifies ALL the rows in the table), materialization occurs unhinted (with the same costs reported as for the hinted plan above. My tentative conclusion is that the transformation is a heuristic one that follows the rule “two or more appearances of the subquery and some indication of row selection in the subquery rowsource”. (In fact if the rowsource is “select * from pipeline_function” the requirement for subsetting doesn’t seem to apply.)
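For reference, this is the shape of the unhinted variant that did materialize – the only change from the original query is the (logically redundant) predicate in the subquery:

explain plan for
with gen as (
        select object_id, object_name from t1 where object_id > 0
)
select
        g1.object_name,
        g2.object_name
from
        gen g1,
        gen g2
where
        g2.object_id = g1.object_id
;

select * from table(dbms_xplan.display);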

The plans above came from 11.2.0.4 but I got the same result, with a slight difference in costs, in 12.1.0.2. It’s worth pointing out that despite Oracle apparently ignoring the costs when deciding whether or not to materialize, it still seems to report self-consistent values after materialization: the 227 for the plan above is the 214 for creating the temporary table plus the 13 for deriving the hash join of the two copies of the temporary table.


Giving D2L Credit Where Credit Is Due

Michael Feldstein - Thu, 2015-07-23 21:20

By Phil Hill

Michael and I have made several specific criticisms of D2L’s marketing claims lately culminating in this blog post about examples based on work at the University of Wisconsin-Milwaukee (UWM) and California State University at Long Beach (CSULB).

I understand that other ed tech vendors make marketing claims that cannot always be tied to reality, but these examples cross a line. They misuse and misrepresent academic outcomes data – whether public research-based or internal research – and essentially take credit for their technology “delivering results”.

This week brought welcome updates from D2L that go a long way towards addressing the issues we raised. As of Monday, I noticed that the ‘Why Brightspace? Results’ page now has links to supporting material for each claim, and the UWM claim has been reworded. Today, D2L released a blog post explaining these changes and admitting the mistakes. D2L even changed the web page to allow text selection for copy / paste. From the blog post:

Everyone wants more from education and training programs—so it’s critical that our customers are part of the process of measurement and constant improvement.

At Fusion, our customers came together to share new ideas and practices to push education forward. They like to hear about the amazing results, like U-Pace, which we post on our website. In our excitement to share the great results our customers are seeing through their programs, we didn’t always provide the details around the results. When we make mistakes, it’s our job to fix it—as we are doing now.

U-Pace is the specific program at UWM (course redesign from large lecture to self-paced / mastery approach), and D2L now links to a documented case study and quotes this case study in the blog post.

We have a Customer Success Program in place where approvals from our clients are acquired before we post anything about them. Stories are revisited every six months to make sure that they’re still valid and accurate. However, a recent customer success story was mistakenly posted on our website without their permission or knowledge. We will be doubling down on our efforts to help ensure that this doesn’t happen again, and we will work harder to provide citations for all the facts.

This “without their permission or knowledge” paragraph refers to a claim about CSULB.

Make no mistake, we’re extremely proud of what our clients are accomplishing. Our customers’ innovation, dedication, and just plain awesomeness is making a huge difference—and we’re proud to be a part of it. We will continue to measure and improve our offerings, listen to our community for suggestions, and when warranted, share their results. Here’s to them!

Kudos to D2L for these admissions and changes. Well done.

Notes and Caveats

While the overall change is very positive, I do have a few additional notes and caveats to consider.

  • The blog post today should have come from Renny Monaghan (Chief Marketing Officer) or John Baker (CEO). The blog post was written by Barry Dahl[1], and unless I misunderstand he is their lead for community engagement – building a user community that is mostly behind-login and not public-facing. The “mistakes” were made in official marketing and company communications. The leader of the department in charge of official messaging (Renny) or the company leader (John) should have taken ownership of what happened in the past and the corrections they are making.
  • In the blog post section describing the U-Pace program at the UWM, I would have included the description of moving from large lecture to self-paced / mastery approach. That change should not be embedded as one of “many factors that came together for UWM to achieve the results that they did, and that the increases in student success are not all attributed to their use of Brightspace.” That change to self-paced / mastery was the intervention, and all other factors are secondary. The case study describes the program quite well, but such an omission in the blog post is misleading.
  • The blog post only references UWM and CSULB examples, yet the ‘Why Brightspace? Results’ page added links to all claims. Changing them all was the right move.
  • Apparently, specific criticisms do not carry a CC-BY license.

These are welcome changes.

  1. For what it’s worth, Barry does great work for the company

The post Giving D2L Credit Where Credit Is Due appeared first on e-Literate.