Skip navigation.

Feed aggregator

Oracle: an unexpected lock behavior with rollback

Yann Neuhaus - Sun, 2014-10-26 15:20

Here is an odd Oracle behavior I observed when a transaction that acquired a lock is rolled back. Note that this is related to a specific combination of locks that should not be encountered in production. So it's not a bug. Just something unexpected.

 

First session

In my first session I lock the DEPT table in row share mode (RS)

20:56:56 SQL1> lock table dept in row share mode;
Table(s) Locked.

My first session (SID=53) has acquired a TM lock in mode 2 (row share):

20:56:56 SQL1> select object_name,session_id,LOCKED_MODE from v$locked_object join dba_objects using(object_id);

OBJECT_NAME SESSION_ID LOCKED_MODE
----------- ---------- -----------
DEPT                53           2

 

Second session

In my second session I lock the DEPT table in share + row exclusive mode (SRX). This is compatible with the RS.

20:56:59 SQL2> lock table dept in share row exclusive mode;
Table(s) Locked.

My second session (SID=59) has acquired a TM lock in mode 5 (share + row exclusive):

20:56:59 SQL2> select object_name,session_id,LOCKED_MODE from v$locked_object join dba_objects using(object_id);

OBJECT_NAME SESSION_ID LOCKED_MODE
----------- ---------- -----------
DEPT                59           5
DEPT                53           2

Then I roll back my transaction

20:56:59 SQL2> rollback;

My lock is released and only the one from Session 1 (SID=53) remains:

OBJECT_NAME SESSION_ID LOCKED_MODE
----------- ---------- -----------
DEPT                53           2

Now comes the funny part. I run exactly the same SRX lock request, which was acquired immediately before:

21:14:30 SQL2> lock table dept in share row exclusive mode wait 5 ;

But now it hangs. Let's see the blocking tree with utllockt.sql:

WAITING_SESSION   LOCK_TYPE         MODE_REQUESTED MODE_HELD      LOCK_ID1          LOCK_ID2
----------------- ----------------- -------------- -------------- ----------------- -----------------
53                None
   59             DML               Exclusive      Row-S (SS)     94228             0

What? My session wants to acquire an Exclusive lock? I never requested that.
And finally it fails because my Session 1 holds an RS lock, which prevents exclusive locks.

lock table dept in share row exclusive mode wait 5
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

 

Third session

But it gets worse. I disconnect my second session and try to do the same from a third one. And I even try only an RS lock:

21:15:20 SQL3> lock table dept in row share mode wait 5 ;

And I'm blocked again. It seems that because a session acquired an SRX lock and rolled it back while another session held an RS one, any new transaction that wants to acquire any lock on the table must temporarily acquire an exclusive one first.

 

Event 10704 "Print out information about what enqueues are being obtained"

In order to get further I traced the locks with event 10704 (see how in a previous blog).
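
For reference, here is a minimal sketch of how such a session-level trace is typically switched on and off (the level shown is an assumption - see the previous blog post for the exact settings used):

alter session set events '10704 trace name context forever, level 3';
-- run the statements to be traced, then:
alter session set events '10704 trace name context off';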

Here is the trace when it failed, filtering on '00017014' which is the object_id of DEPT in hexadecimal:

ksqgtl *** TM-00017014-00000000-00000000-00000000 mode=2 flags=0x400 timeout=5 ***
ksqrcl: TM-00017014-00000000-00000000-00000000
ksqgtl *** TM-00017014-00000000-00000000-00000000 mode=6 flags=0x400 timeout=5 ***
ksqcmi: TM-00017014-00000000-00000000-00000000 mode=6 timeout=5
ksqcmi: returns 51
ksqgtl: RETURNS 51
ksqrcl: returns 0

My 'lock table dept in row share mode' acquired a mode 2 lock (which is 'row share'), but then released it and tried to acquire a mode 6 lock (which is exclusive).

And then here is a trace when it was successful, after the transaction in Session 1 has been committed:

ksqgtl *** TM-00017014-00000000-00000000-00000000 mode=2 flags=0x400 timeout=5 ***
ksqrcl: TM-00017014-00000000-00000000-00000000
ksqgtl *** TM-00017014-00000000-00000000-00000000 mode=6 flags=0x400 timeout=5 ***
ksqcmi: TM-00017014-00000000-00000000-00000000 mode=6 timeout=5
ksqcmi: returns 0
ksqgtl: RETURNS 0
ksqgtl *** TM-00017014-00000001-00000000-00000000 mode=3 flags=0x400 timeout=5 ***
ksqgtl: RETURNS 0
ksqcnv: TM-00017014-00000000-00000000-00000000 mode=2 timeout=5
ksqcmi: TM-00017014-00000000-00000000-00000000 mode=2 timeout=5
ksqcmi: returns 0
ksqrcl: TM-00017014-00000001-00000000-00000000
ksqrcl: returns 0

So it did the same, but this time the mode 6 lock could be acquired. Then we see a conversion to mode 2, which is the RS we wanted.

Finally, I tried all other combinations of locks, but it seems that only this one (RS, then a rolled-back SRX) shows that behavior. I also tried with DML instead of the 'lock table' statement, but the Share lock acquired by DML (the non-indexed foreign key case) is released immediately, so I cannot roll it back. And a failure in the statement does not trigger the same behaviour.

Final note: the only reference I've found for that behavior is this post on oracle-l

Heuristic TEMP Table Transformation

Randolf Geist - Sun, 2014-10-26 09:48
There are at least three different ways in which the Oracle optimizer can come up with a so-called TEMP table transformation, that is, materializing an intermediate result set:

- As part of a star transformation the repeated access to dimensions can be materialized

- As part of evaluating GROUPING SETs intermediate result sets can be materialized

- Common Subquery/Table Expressions (CTE, WITH clause)

Probably the most common usage of the materialization is in conjunction with the WITH clause.

This is nothing new, but since I came across this issue several times recently, here's a short demonstration and a reminder that this so-called "TEMP Table Transformation" - at least in the context of the WITH clause - isn't really cost-based, in contrast to most other optimizer transformations nowadays (although the unnest transformation of subqueries also has a "no-brainer" variant where costing isn't considered).

The logic simply seems to be: If the CTE expression is referenced more than once AND the CTE expression contains at least some (filter or join) predicate then it will be materialized.

While in most cases this makes sense to avoid the otherwise repeated evaluation of the CTE expression, there are cases where additional predicates that could be pushed inside the CTE would lead to different, significantly more efficient access paths than materializing the full CTE expression without applying the filters and filtering on the TEMP table afterwards.

Here are just two very simple examples that demonstrate the point, both based on this sample table setup:


create table t1
as
select
rownum as id
, rpad('x', 100) as filler
from
dual
connect by
level <=1e5;

exec dbms_stats.gather_table_stats(null, 't1')

create index t1_idx on t1 (id);

The index on T1.ID potentially opens up a very precise access path to the rows.

Here is example number one:


with
a as
(
select /* inline */
id
, filler
from
t1
where
filler != 'x'
)
select
t1.*
, a1.filler
, a2.filler
from
t1
, a a1
, a a2
where
a1.id = t1.id
and a2.id = t1.id
and t1.id = 1
and a1.id = 1
and a2.id = 1
;

-- 11.2.0.3 Plan without INLINE hint
------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 236 | 1207 (1)| 00:00:01 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D6619_229329 | | | | |
|* 3 | TABLE ACCESS FULL | T1 | 99999 | 10M| 420 (1)| 00:00:01 |
|* 4 | HASH JOIN | | 1 | 236 | 787 (1)| 00:00:01 |
|* 5 | HASH JOIN | | 1 | 171 | 394 (1)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 106 | 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
|* 8 | VIEW | | 99999 | 6347K| 392 (1)| 00:00:01 |
| 9 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6619_229329 | 99999 | 10M| 392 (1)| 00:00:01 |
|* 10 | VIEW | | 99999 | 6347K| 392 (1)| 00:00:01 |
| 11 | TABLE ACCESS FULL | SYS_TEMP_0FD9D6619_229329 | 99999 | 10M| 392 (1)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

-- 11.2.0.4 Plan without INLINE hint
--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 9999M| 2197G| | 28468 (92)| 00:00:02 |
| 1 | TEMP TABLE TRANSFORMATION | | | | | | |
| 2 | LOAD AS SELECT | SYS_TEMP_0FD9D661A_229329 | | | | | |
|* 3 | TABLE ACCESS FULL | T1 | 99999 | 10M| | 420 (1)| 00:00:01 |
|* 4 | HASH JOIN | | 9999M| 2197G| 7520K| 28048 (93)| 00:00:02 |
|* 5 | VIEW | | 99999 | 6347K| | 392 (1)| 00:00:01 |
| 6 | TABLE ACCESS FULL | SYS_TEMP_0FD9D661A_229329 | 99999 | 10M| | 392 (1)| 00:00:01 |
|* 7 | HASH JOIN | | 99999 | 16M| | 394 (1)| 00:00:01 |
| 8 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 106 | | 2 (0)| 00:00:01 |
|* 9 | INDEX RANGE SCAN | T1_IDX | 1 | | | 1 (0)| 00:00:01 |
|* 10 | VIEW | | 99999 | 6347K| | 392 (1)| 00:00:01 |
| 11 | TABLE ACCESS FULL | SYS_TEMP_0FD9D661A_229329 | 99999 | 10M| | 392 (1)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------------

-- 11.2.0.3/11.2.0.4 Plan with INLINE hint
-----------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-----------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 318 | 6 (0)| 00:00:01 |
| 1 | NESTED LOOPS | | | | | |
| 2 | NESTED LOOPS | | 1 | 318 | 6 (0)| 00:00:01 |
| 3 | NESTED LOOPS | | 1 | 212 | 4 (0)| 00:00:01 |
|* 4 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 106 | 2 (0)| 00:00:01 |
|* 5 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
| 6 | TABLE ACCESS BY INDEX ROWID| T1 | 1 | 106 | 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
|* 8 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
|* 9 | TABLE ACCESS BY INDEX ROWID | T1 | 1 | 106 | 2 (0)| 00:00:01 |
-----------------------------------------------------------------------------------------


The filter in the CTE expression is just there to fulfill the rules I've stated above; without it the TEMP table transformation wouldn't be considered at all. It could also be a (non-filtering) join condition, for example.

Notice the big difference in cost estimates between the plans with and without materialization. Clearly a cost-based evaluation should have rejected the TEMP table transformation, simply because it is a bad idea to materialize 100K rows and afterwards access this TEMP table twice to filter out exactly a single row, instead of accessing the original, untransformed row source twice via precise index access.

This is by the way an example of another anomaly that was only recently introduced (apparently in the 11.2.0.4 patch set / 12.1 release): Notice the bad cardinality estimate in the 11.2.0.4 plan with the TEMP table transformation - the filter on the TEMP table isn't evaluated properly (was already there in previous releases) and in addition the join cardinality is way off - 10G rows instead of a single row is not really a good estimate - and as a side effect the HASH JOIN uses a bad choice for the build row sources.

Another possible, perhaps less common variant is this example:


with
a as
(
select /* inline */
id
, filler
from
t1
where
filler != 'x'
)
select
id
, (select filler from a where id = x.id) as scal_val1
, (select filler from a where id = x.id) as scal_val2
from
t1 x
;

-- 12.1.0.2 Plan without INLINE hint
--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100K| 488K| 77M (1)| 00:50:26 |
|* 1 | VIEW | | 99999 | 6347K| 392 (1)| 00:00:01 |
| 2 | TABLE ACCESS FULL | SYS_TEMP_0FD9D660F_229329 | 99999 | 10M| 392 (1)| 00:00:01 |
|* 3 | VIEW | | 99999 | 6347K| 392 (1)| 00:00:01 |
| 4 | TABLE ACCESS FULL | SYS_TEMP_0FD9D660F_229329 | 99999 | 10M| 392 (1)| 00:00:01 |
| 5 | TEMP TABLE TRANSFORMATION | | | | | |
| 6 | LOAD AS SELECT | SYS_TEMP_0FD9D660F_229329 | | | | |
|* 7 | TABLE ACCESS FULL | T1 | 99999 | 10M| 420 (1)| 00:00:01 |
| 8 | TABLE ACCESS FULL | T1 | 100K| 488K| 420 (1)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

-- 12.1.0.2 Plan with INLINE hint
----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 100K| 488K| 398K (1)| 00:00:16 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| T1 | 1 | 106 | 2 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
|* 3 | TABLE ACCESS BY INDEX ROWID BATCHED| T1 | 1 | 106 | 2 (0)| 00:00:01 |
|* 4 | INDEX RANGE SCAN | T1_IDX | 1 | | 1 (0)| 00:00:01 |
| 5 | TABLE ACCESS FULL | T1 | 100K| 488K| 420 (1)| 00:00:01 |
----------------------------------------------------------------------------------------------

This time I've shown plans from 12.1.0.2 - the latest available release as I write this - to demonstrate that this hasn't changed yet. What has changed in 12c is that the scalar subqueries are now actually represented in the final cost - in pre-12c these costs wouldn't be part of the total cost. So although the cost difference between the two plans is much more significant in 12c than in pre-12c, the optimizer still opts for materializing the CTE expression and running full table scans in the scalar subqueries against that temp table, instead of taking advantage of the precise access path available - again very likely a pretty bad idea at runtime.

So whenever you make use of the WITH clause make sure you've considered the access paths that might be available when not materializing the result set.
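
For what it's worth, here is a minimal sketch of how the behaviour can be controlled per query. Note that the examples above deliberately use /* inline */ without the "+", so the hint is inactive there, and that both hints are undocumented (see the footnote below):

with
a as
(
select /*+ inline */ -- ask the optimizer to merge the CTE instead of materializing it
id
, filler
from
t1
where
filler != 'x'
)
select
t1.*
, a1.filler
, a2.filler
from
t1
, a a1
, a a2
where
a1.id = t1.id
and a2.id = t1.id
and t1.id = 1
and a1.id = 1
and a2.id = 1
;

Conversely, a /*+ materialize */ hint in the same position requests the TEMP table transformation even when the CTE is referenced only once.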

Footnote: As of Oracle 12.1 the MATERIALIZE and INLINE hints are still not officially documented.

Datameer at the time of Datameer 5.0

DBMS2 - Sun, 2014-10-26 02:42

Datameer checked in, having recently announced general availability of Datameer 5.0. So far as I understood, Datameer is still clearly in the investigative analytics business, in that:

  • Datameer does business intelligence, but not at human real-time speeds. Datameer query durations are sometimes sub-minute, but surely not sub-second.
  • Datameer also does lightweight predictive analytics/machine learning — k-means clustering, decision trees, and so on.

Key aspects include:

  • Datameer runs straight against Hadoop.
  • Like many other analytic offerings, Datameer is meant to be “self-service”, for line-of-business business analysts, and includes some “data preparation”. Datameer also has had some data profiling since Datameer 4.0.
  • The main way of interacting with Datameer seems to be visual analytic programming. However, Datameer has evolved somewhat away from its original spreadsheet metaphor.
  • Datameer’s primitives resemble those you’d find in SQL (e.g. JOINs, GROUPBYs). More precisely, that would be SQL with a sessionization extension; e.g., there’s a function called GROUPBYGAP.
  • Datameer lets you write derived data back into Hadoop.

Datameer use cases sound like the usual mix, consisting mainly of a lot of customer analytics, a bit of anti-fraud, and some operational analytics/internet-of-things. Datameer claims 200 customers and 240 installations, the majority of which are low-end/single-node users, but at least one of which is a multi-million dollar relationship. I don’t think those figures include OEM sell-through. I forgot to ask for any company size metrics, such as headcount.

In a chargeable add-on, Datameer 5.0 has an interesting approach to execution. (The lower-cost version just uses MapReduce.)

  • An overall task can of course be regarded as a DAG (Directed Acyclic Graph).
  • Datameer automagically picks an execution strategy for each node. Administrator hints are allowed.
  • There are currently three choices for execution: MapReduce, clustered in-memory, or single-node. This all works over Tez and YARN.
  • Spark is a likely future option.

Datameer calls this “Smart Execution”. Notes on Smart Execution include:

  • Datameer sees a lot of tasks that look at 10-100 megabytes of data, especially in malware/anomaly detection. Datameer believes there can be a huge speed-up from running those on a single node rather than in a clustered mode that requires data (re)distribution, with at least one customer reporting a >20X speedup of at least one job.
  • Yes, each step of the overall DAG might look to the underlying execution engine as a DAG of its own.
  • Tez can fire up processes ahead of when they’re needed, so you don’t have to wait for all the process start-up delays in series.
  • Datameer has had a sampling/preview engine from the get-go that ran outside of Hadoop MapReduce. That’s the basis for the non-MapReduce options now.

Strictly from a BI standpoint, Datameer seems clunky.

  • Datameer doesn’t have drilldown.
  • Datameer certainly doesn’t let you navigate from one visualization to the next ala QlikView/Tableau/et al. (Note to self: I really need to settle on a name for that feature.)
  • While Datameer does have a bit in the way of event series visualization, it seems limited.
  • Of course, Datameer doesn’t have streaming-oriented visualizations.
  • I’m not aware of any kind of text search navigation.

Datameer does let you publish BI artifacts, but doesn’t seem to have any collaboration features beyond that.

Last and also least: In an earlier positioning, Datameer made a big fuss about an online app store. Since analytic apps stores never amount to much, I scoffed.* That said, they do have it, so I asked which apps got the most uptake. Most of them seem to be apps which boil down to connectors, access to outside data sets, and/or tutorials. Also mentioned were two more substantive apps, one for path-oriented clickstream analysis, and one for funnel analysis combining several event series.

*I once had a conversation with a client that ended:

  • “This app store you’re proposing will not be a significant success.”
  • “Are you sure?”
  • “Almost certain. It really just sounds like StreamBase’s.”
  • “I’m not familiar with StreamBase’s app store.”
  • “My point exactly.”
Categories: Other

Website Outage Warning : 26 Oct 2014 20:00-00:00 GMT

Tim Hall - Sun, 2014-10-26 02:38

Just a quick note to say the website will be out of action this evening for 3-4 hours.

There have been a couple of random failures recently. With nothing in the logs to work with, I figured I’d try testing the hardware. Yesterday I tested the disks and they came back OK. Tonight it’s the turn of the memory. The plan is for the site to go down about 20:00 UK Time (GMT) and be up by midnight.

Sorry if this annoys anyone, but I’ve been looking through the site statistics trying to find the best time to do this and Sunday night seems to be the quietest time.

I’ll let you know how it goes. :)

Cheers

Tim…

PS. You can read the stuff from Google’s cache in the short term. Search for the item on Google. When you get it, click the down-arrow next to the URL and select “Cached”. No need to miss anything… :)


Update: It didn’t happen as the data centre people got back to me too late to start it (this morning). I’ll pencil this in for next week…


Getting Started with Impala Interactive SQL for Apache Hadoop by John Russell; O'Reilly Media

Surachart Opun - Sat, 2014-10-25 11:52
Impala is an open source query engine that runs on Apache Hadoop. With Impala, you can query data stored in HDFS or Apache HBase – including SELECT, JOIN, and aggregate functions – in real time. If you are looking for a book to get started with it, try Getting Started with Impala: Interactive SQL for Apache Hadoop by John Russell (@max_webster). It assists readers in writing, tuning, and porting SQL queries and other statements for a Big Data environment using Impala. The SQL examples in this book start from a simple base for easy comprehension, then build toward best practices that demonstrate high performance and scalability. Readers can download the QuickStart VMs, install them, and then work through the examples in the book.
The book doesn't cover installing Impala or troubleshooting installation and configuration issues. It has 5 chapters and not many pages, but enough to guide readers through using Impala (interactive SQL), with good examples. Chapter 5 - Tutorials and Deep Dives - is the highlight of the book, and its examples are very useful.
Free Sampler.

This book helps readers:
  • Learn how Impala integrates with a wide range of Hadoop components
  • Attain high performance and scalability for huge data sets on production clusters
  • Explore common developer tasks, such as porting code to Impala and optimizing performance
  • Use tutorials for working with billion-row tables, date- and time-based values, and other techniques
  • Learn how to transition from rigid schemas to a flexible model that evolves as needs change
  • Take a deep dive into joins and the roles of statistics
[test01:21000] > select "Surachart Opun" Name,  NOW() ;
Query: select "Surachart Opun" Name,  NOW()
+----------------+-------------------------------+
| name           | now()                         |
+----------------+-------------------------------+
| Surachart Opun | 2014-10-25 23:34:03.217635000 |
+----------------+-------------------------------+
Returned 1 row(s) in 0.14s

Book: Getting Started with Impala: Interactive SQL for Apache Hadoop
Author: John Russell (@max_webster)
Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Open World Recap

PeopleSoft Technology Blog - Fri, 2014-10-24 16:29

Oracle’s Open World conference concluded in early October.  It was one of the best ever for PeopleSoft.  We introduced a lot of new and important features like the new Fluid User Experience and the PeopleSoft Update Manager (and much more).  The response from customers was overwhelmingly positive.  Our sessions were well attended, and the demo grounds were consistently busy with inquisitive customers.

PeopleSoft offered informative general sessions like Paco Aubrejuan’s “PeopleSoft Strategy and Roadmap—Modern Solutions Delivered with Ease”.  In that session, Oracle’s Senior Vice President and General Manager of PeopleSoft Development addressed a standing-room-only crowd, and described how Oracle’s continued investment in PeopleSoft is delivering key innovations that address our customers’ greatest challenges:

  • Delivering a robust solution that is intuitive and easy to use – regardless of device (mobile device or desktop workstation)
  • Ensuring that new PeopleSoft capabilities are delivered on a continuous basis and are easy to adopt at a low cost, without the need to upgrade.

Paco shared how the investments in the new PeopleSoft Fluid User Interface and the new continuous PeopleSoft Delivery Model address these challenges.

PeopleSoft Application sessions were also well attended.  Our applications teams provided greater depth and specificity, describing how PeopleSoft applications are taking advantage of the latest PeopleTools technology, particularly in the areas of the Fluid UI and PeopleSoft Update Manager and the new delivery model.  

Our PeopleTools Tech sessions were packed with curious, enthusiastic customers.  Here are some highlights:

The PeopleTools team delivered several sessions focused on the Fluid User Interface, covering different aspects and for different audiences.

  • Matthew Haavisto presented a survey of the new features from a user’s point of view, as well as a glimpse at what is coming for Fluid in future releases of PeopleTools.
  • Jeff Robbins covered how to take advantage of many of the new Fluid features.
  • Dave Bain covered Fluid from a mobile perspective and described how developers can build, customize, and deploy Fluid features.

Life Cycle is another important topic that was addressed in these sessions.

  • Mike Krajicek presented with a customer (Boeing) on Optimizing change delivery.
  • Dave Bain covered not only the new PeopleSoft Update Manager, but the new delivery model that will enable customers to reduce cost and receive new functionality and fixes faster.
  • Tim Durant and Vijay Gopalan presented a deep dive on all the life-cycle management tools.

There were several valuable sessions that were either new this year or perennial favorites that are updated for the latest release.

  • Pramod Agrawal demonstrated how to set up a cluster of PeopleSoft Applications including the Interaction Hub.
  • Jim Marion presented his always popular PeopleTools Tips and Techniques session.
  • Jeff Robbins covered the PeopleSoft Analytics and Reporting Suite.
  • Greg Kelly described how to secure your system against the ever changing threats that are continuously emerging.
  • Dave Bain described the continuing evolution of PeopleSoft integration technology.
  • Mark Hoernemann covered the Platforms Roadmap.
  • Jeff Robbins presented the overall tools and tech roadmap.
  • Matthew Haavisto showed how to guide users through complex or infrequently performed processes using Activity Guides.

Finally, we also offered the ever-popular panel discussions and ‘Meet the Experts’ sessions.  

If you would like to review any of our sessions, go to Oracle’s Open World site and download the presentations from the content catalog.


Database administrators may have to worry about POODLE attacks

Chris Foot - Fri, 2014-10-24 11:18

Encryption is a regular component of database security, point-of-sale implementations, network protection and a plethora of other IT considerations. 

One protocol, SSL 3.0, was recently deemed sub-par. Dark Reading noted that Google experts discovered a vulnerability in the nearly 15-year-old encryption protocol that could potentially allow cybercriminals to initiate man-in-the-middle attacks against users.

What is "man-in-the-middle"?
MITM intrusions are some of the most malicious attacks organizations can sustain. According to Computer Hope, MITM occurs when a user disrupts the path between an entity sending information and the object or person receiving the data. 

For example, if Person A delivered an email to Person C, then Person B could initiate a MITM attack, manipulate the message however he or she sees fit, and then transfer the email to Person C. As one can see, this capability is quite dangerous. 

A fault for the skilled 
Google researchers dubbed the vulnerability CVE-2014-3566, naming the type of attack a person would launch to take advantage of this fault Padding Oracle On Downgraded Legacy Encryption, or POODLE. Apparently, a POODLE infiltration would be incredibly difficult to pull off, meaning only the most experienced hackers are capable of using the method to their advantage.

Next steps 
Although SSL was replaced by updated encryption protocols, it's still employed to support antiquated software and older client servers. Nevertheless, these applications and machines likely hold valuable information for many companies, and enterprises should strongly consider consulting database administration services to apply revisions and new data encryption processes.

As far as vendor-related services go, Google will remove SSL 3.0 from its client programs, while Mozilla intends to do the same on November 25. 

Despite this cause for concern, WhiteOps Chief Scientist Dan Kaminsky assured Dark Reading that it's "not as bad as Heartbleed," but still a consideration companies should take seriously. 

The post Database administrators may have to worry about POODLE attacks appeared first on Remote DBA Experts.

Recap of yesterday’s Arizona Oracle User Group (AZORA) meeting

Bobby Durrett's DBA Blog - Fri, 2014-10-24 11:15

Yesterday was the first meeting of the newly restarted Arizona Oracle User Group, AZORA.  It was a good first meeting to kick off what I hope will turn into an active club.

We met in the Tempe Public Library in a large downstairs room with a projector screen and plenty of seating with tables so it was easy to take notes.  We also had snacks.  I got a Paradise Bakery chocolate chip cookie and bottled water so that put me in a good mood for the afternoon meeting. They also had some giveaways and so I picked up an Oracle pen and notebook to use to take notes during the talk.  They also gave away three or four Rich Niemiec books and some gift cards as prizes, but, as usual, I didn’t win anything.

The first talk was about “Big Data”.  I’m skeptical about any buzzword, including big data, but I enjoyed hearing from a technical person who was actually doing this type of work.  I would rather hear an honest perspective from someone in the midst of the battle than get a rosy picture from someone who is trying to sell me something.  Interestingly, the talk was about open source big data software and not about any Oracle product, so it gave me as an Oracle database specialist some insight into something completely outside my experience.

The second talk was about another buzzword – “The Cloud”.  The second talk was as helpful as the first because the speaker had exposure to people from a variety of companies that were actually doing cloud work.  This talk was more directly related to my Oracle database work because you can have Oracle databases and applications based on Oracle databases deployed in the cloud.

It was a good first meeting and I hope to attend and help support future meetings.  Hopefully we can spread the news about the club and get even more people involved and attending so it will be even more useful to all of us.  I appreciate those who put in the effort to kick off this first meeting.

– Bobby





Categories: DBA Blogs

Minimising Parse Time in Application Engine with ReUseStatement

David Kurtz - Fri, 2014-10-24 09:15
This article explains how to determine the overhead of using literal values rather than bind variables in SQL submitted by PeopleSoft Application Engine programs. Using a combination of PeopleSoft Application Engine Batch Timings and Oracle Active Session History (ASH), it is possible to determine where it is most effective to alter this behaviour by setting the ReUse attribute.
ReUse Statement Flag

I originally wrote about the Performance Benefits of ReUseStatement in Application Engine over 5 years ago, and the problem is no less significant today than it was then.  There are still many places in the delivered PeopleSoft application that would benefit from it.  However, it is not possible to simply set the attribute across all Application Engine programs because it will cause errors in steps where SQL is dynamically generated, so each case must be considered.  Where the ReUse attribute is changed, it must be tested and migrated like any other customisation.

Recently, I have been encountering problems in a number of places on a Financials system which were resolved by enabling ReUseStatement.  So I started by calculating how much time was lost by not setting it.
Application Engine Batch Timings

If an Application Engine step is not reused, then it is compiled prior to every execution, during which the Application Engine bind variables are resolved to literals. The number and duration of compilations and executions are reported in the Batch Timings, the production of which is controlled with the TraceAE flag in the Process Scheduler configuration file (psprcs.cfg).

;-------------------------------------------------------------------------
; AE Tracing Bitfield
;
; Bit Type of tracing
; --- ---------------
...
; 128 - Timings Report to AET file
...
; 1024 - Timings Report to tables
...
TraceAE=1152
;------------------------------------------------------------------------

Whatever other TraceAE flags you set, I would always recommend that you set 128 and 1024 so that batch timings are always emitted to both file and database.  The overhead of enabling them is negligible because they are managed in memory at run time and physically written once when the Application Engine program ends.

NB: The settings in the configuration file can be overridden by command line options.  If you specify -TRACE parameter in the Process Scheduler definitions remember to also set these flags.

Batch timings are written to the AE Trace file at end of an Application Engine program, and to PS_BAT_TIMINGS PeopleTools tables at the successful end of an Application Engine program. 
It can be useful to have batch timings in the trace file of an Application Engine program that failed, because they are not written to the database.  As your system runs, you will build up batch timings for all of your successful Application Engine processes (which will be most of them).  This is a useful source of performance metrics.
Compilations, Execution and ReUse

In this example, the number of compilations is equal to the number of executions because ReUseStatement is not set.

                          PeopleSoft Application Engine Timings
(All timings in seconds)

                               C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                 8453      2.8    8453    685.6       0      0.0    688.4
...

With ReUse Statement enabled, there is now only a single compilation, and most of the time is saved in execution not compilation.

                               C o m p i l e    E x e c u t e    F e t c h        Total
SQL Statement                    Count     Time   Count     Time   Count     Time     Time
------------------------------ ------- -------- ------- -------- ------- -------- --------
99XxxXxx.Step02.S                    1      0.0    8453    342.3       0      0.0    342.3
...

So we can use the batch timings to identify steps where ReUseStatement is not set because they have as many compilations as executions, and then we can profile the top statements.
Profile Compilations

This query produces a simple profile of batch timings for statements.
  • In sub-query x it extracts batch timings for statements with more than one compilation in processes that ended in the last 7 days.
  • There is a long-standing bug in batch timings where negative timings can be returned when the clock that returns milliseconds recycles back to zero every 2^16 milliseconds.  Sub-query y calculates an adjustment that is applied in sub-query z if the timing is negative.
  • Arbitrarily, I am only looking at statements with more than a total of 10000 compilations.

REM ReUseCand.sql
REM (c)Go-Faster Consultancy Ltd. 2014
COLUMN detail_id FORMAT a32
COLUMN step_time FORMAT 999990 HEADING 'AE|Step|Secs'
COLUMN compile_count HEADING 'AE|Compile|Count'
COLUMN execute_count HEADING 'AE|Execute|Count'
COLUMN processes HEADING 'Num|Process|Instances'
COLUMN process_name HEADING 'Process|Name'
SPOOL ReUseCand
WITH x AS (
SELECT l.process_instance, l.process_name
, l.time_elapsed/1000 time_elapsed
, l.enddttm-l.begindttm diffdttm
, d.bat_program_name||'.'||d.detail_id detail_id
, d.compile_count, d.compile_time/1000 compile_time
, d.execute_time/1000 execute_time
FROM ps_bat_Timings_dtl d
, ps_bat_timings_log l
WHERE d.process_instance = l.process_instance
AND d.compile_count = d.execute_count
AND d.compile_count > 1
AND l.enddttm > SYSDATE-7
), y as (
SELECT x.*
, GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
+EXTRACT(hour FROM diffdttm))
+EXTRACT(minute FROM diffdttm))
+EXTRACT(second FROM diffdttm)-x.time_elapsed) delta
FROM x)
, z as (
SELECT process_instance, process_name, detail_id
, CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
ELSE time_elapsed END time_elapsed
, compile_count
, CASE WHEN compile_time < 0 THEN compile_time+delta
ELSE compile_time END AS compile_time
, CASE WHEN execute_time < 0 THEN execute_time+delta
ELSE execute_time END AS execute_time
FROM y
), a as (
SELECT process_name, detail_id
, SUM(compile_time+execute_time) step_time
, SUM(compile_count) compile_count
, COUNT(DISTINCT process_instance) processes
FROM z
GROUP BY process_name, detail_id)
SELECT * FROM a
WHERE compile_count >= 10000
ORDER BY step_time DESC
/
SPOOL OFF

So now I have a list of steps with lots of compilations.  I know how long they took, but I don't know how much time I might save by enabling ReUseStatement. That will save some time in Application Engine, but it will also cut down database parse time.  So now I need to determine the parse time from ASH data.

Process                                          Step    Compile    Process
Name         DETAIL_ID                           SEcs      Count  Instances
------------ -------------------------------- ------ ---------- ----------
AP_PSTVCHR   AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step03.S       4313      49536         28
FS_VATUPDFS  FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3
AP_VCHRBLD   APVEDTMOVE.UPDQTY03.Step02.S       2799      49536         28
PC_BI_TO_PC  PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10
FS_VATUPDFS  FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3
GL_JEDIT_0   FS_CEDT_ECFS.iTSELog.iTSELog.S     1628      13104        519
AP_APY2015   AP_APY2015.V_CREATE.Step14.H       1192      11318         19

This query is based on the previous one, but includes scalar queries on the ASH data for each step.
  • WARNING: This query can run for a long period because it has to scan ASH data for each entry in BAT_TIMINGS_DTL.
  • This time in sub-query x I am looking for a rolling 7-day period up to the last hour, because ASH data will have been flushed to the ASH repository.
  • In sub-query y there are two scalar queries that retrieve the DB time spent on hard parse, and all DB time for each batch timing entry.  These queries count 10 seconds for each distinct sample ID to estimate elapsed time rather than total DB time.
  • The query does not group by process name because one step can be called from many Application Engine programs and I want to aggregate the total time across all of them.

REM ReUseCandASH.sql
REM (c)Go-Faster Consultancy Ltd. 2014
COLUMN processes HEADING 'Num|Process|Instances'
COLUMN process_name HEADING 'Process|Name'
COLUMN detail_id FORMAT a32
COLUMN step_time HEADING 'AE|Step|SEcs' FORMAT 999990
COLUMN compile_count HEADING 'AE|Compile|Count'
COLUMN execute_count HEADING 'AE|Execute|Count'
COLUMN hard_parse_secs HEADING 'Hard|Parse|Secs' FORMAT 99990
COLUMN ash_secs HEADING 'DB|Time' FORMAT 99990
SPOOL ReUseCandASH
WITH x AS (
SELECT l.process_instance, l.process_name
, l.time_elapsed/1000 time_elapsed
, l.begindttm, l.enddttm
, l.enddttm-l.begindttm diffdttm
, d.bat_program_name||'.'||d.detail_id detail_id
, d.compile_count, d.compile_time/1000 compile_time
, d.execute_time/1000 execute_time
FROM ps_bat_timings_dtl d
, ps_bat_timings_log l
WHERE d.process_instance = l.process_instance
AND d.compile_count = d.execute_count
AND d.compile_count > 1
AND l.enddttm >= TRUNC(SYSDATE-7,'HH24')
AND l.enddttm < TRUNC(SYSDATE,'HH24')
), y as (
SELECT x.*
, GREATEST(0,60*(60*(24*EXTRACT(day FROM diffdttm)
+EXTRACT(hour FROM diffdttm))
+EXTRACT(minute FROM diffdttm))
+EXTRACT(second FROM diffdttm)-x.time_Elapsed) delta
FROM x)
, z as (
SELECT process_instance, process_name, detail_id
, CASE WHEN time_elapsed < 0 THEN time_elapsed+delta
ELSE time_elapsed END AS time_elapsed
, compile_count
, CASE WHEN compile_time < 0 THEN compile_time+delta
ELSE compile_time END AS compile_time
, CASE WHEN execute_time < 0 THEN execute_time+delta
ELSE execute_time END AS execute_time
, (
SELECT 10*COUNT(DISTINCT h.sample_id)
FROM psprcsque q
, dba_hist_snapshot x
, dba_hist_active_sess_history h
WHERE q.prcsinstance = y.process_instance
AND x.begin_interval_time <= y.enddttm
AND X.END_INTERVAL_TIME >= y.begindttm
AND h.sample_time between y.begindttm and y.enddttm
AND h.SNAP_id = x.SNAP_id
AND h.dbid = x.dbid
AND h.instance_number = x.instance_number
AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
AND h.action = y.detail_id
AND h.in_hard_parse = 'Y'
) hard_parse_secs
, (
SELECT 10*COUNT(DISTINCT h.sample_id)
FROM psprcsque q
, dba_hist_snapshot x
, dba_hist_active_sess_history h
WHERE q.prcsinstance = y.process_instance
AND x.begin_interval_time <= y.enddttm
AND X.END_INTERVAL_TIME >= y.begindttm
AND h.sample_time between y.begindttm and y.enddttm
AND h.SNAP_id = X.SNAP_id
AND h.dbid = x.dbid
AND h.instance_number = x.instance_number
AND h.module = 'PSAE.'|| y.process_name||'.'||q.sessionidnum
AND h.action = y.detail_id
) ash_secs
FROM y
), a AS (
SELECT /*process_name ,*/ detail_id
, SUM(compile_time+execute_time) step_time
, SUM(compile_count) compile_count
, COUNT(DISTINCT process_instance) processes
, SUM(hard_parse_secs) hard_parse_secs
, SUM(ash_secs) ash_secs
FROM z
GROUP BY /*process_name,*/ detail_id)
SELECT a.*
FROM a
WHERE compile_count >= 10000
ORDER BY step_time DESC
/
spool off

Now we can also see how much time the database is spending on hard parse on each step, and it is this time that is likely to be saved by enabling ReUseStatement.
However, before we can enable the ReUseStatement attribute we must first manually inspect each step in Application Designer and determine whether doing so would break anything.  The Comment column in this profile was added manually as I did that.  Some statements I can change, some I have to accept the overhead.

                                   Step    Compile    Process      Parse         DB
DETAIL_ID                          Secs      Count  Instances       Secs       Time Comment
-------------------------------- ------ ---------- ---------- ---------- ---------- -------------------------------------
AP_PSTCOMMON.ACCT_UPD.Step02.S    12001      40704         10      11820      11920 Set ReUseStatement
FS_CEDT_ECMB.4EditCDT.uValCDT.S    5531      10289        679        620       5870 Dynamic SQL, can't set ReUseStatement
APVEDTMOVE.UPDQTY03.Step03.S       4306      49471         27       4020       4100 Set ReUseStatement
FS_VATUPDFS.Seq0-b.DJ300-2.S       4057     203704          3       3150       3860 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECFS.iTSELog.iTSELog.S     3332      19073        716       2130       3520 Dynamic SQL, can't set ReUseStatement
APVEDTMOVE.UPDQTY03.Step02.S       2796      49471         27       2730       2820 Set ReUseStatement
PC_BI_TO_PC.UPD_PRBI.UPDATE.S      1974      23132         10        230       1920 Set ReUseStatement
FS_VATUPDFS.Seq0-a.X0001.D         1960      37368          3          0          0 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.4uAnchCT.uAnchCDT.S   1319      10289        679        510       1290 Dynamic SQL, can't set ReUseStatement
AP_APY2015.V_CREATE.Step14.H       1169      11094         19          0          0 Set ReUseStatement
GL_JETSE.GA100.CHKEDT.S            1121      15776        569        860        930 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.iTSELog.iTSELog.S      988      10289        679        450        990 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.uMarkVal.uMarkVal.S    711      10289        679         50        670 Dynamic SQL, can't set ReUseStatement
FS_CEDT_ECMB.uMarkInv.uMarkInv.S    668      10289        679         40        790 Dynamic SQL, can't set ReUseStatement
  • Due to a bug in the instrumentation of Application Engine, the session's action attribute is not set for Do Select (type D) and Do When (type H) steps.  ASH data cannot therefore be matched for them.
  • More DB Time is reported for FS_CEDT_ECMB.uMarkInv.uMarkInv.S than is reported by batch timings.  This is a consequence of ASH sampling, where we count 10 seconds for each sample.
Conclusion

Setting ReUseStatement is very simple because it doesn't involve changing SQL, but there are lots of places where it could be set.  This technique picks out the relatively few places where doing so could potentially have a significant effect.
    ©David Kurtz, Go-Faster Consultancy Ltd.

    Glorious Data Visualizations for Your Friday

    Oracle AppsLab - Fri, 2014-10-24 09:00

    If you’ve read here for more than a hot minute, you’ll know that I love me some data visualization.

    This love affair dates back to when Paul (@ppedrazzi) pointed me to Hans Rosling’s (@hansrosling) first TED talk. I’m sure Hans has inspired an enormous city of people by now, judging by the 8 million plus views his TED talk has garnered. Sure, those aren’t unique views, but even so.

    There’s an interesting meta-project: visualize the people influenced by various visualization experts, like a coaching tree or something.


    Classic comic from xkcd, used under CC 2.5

    Back on track, if you haven’t yet, watch the BBC documentary on him, “The Joy of Stats,” fantastic stuff, or if you have seen it, watch it again.

    As luck would have it, one area of specialization of our newest team members is, wait for it, data visualization.

    Last week, I got to see them in action in a full-day workshop on data visualization, which was eye-opening and very informative.

    I’m hoping to get a few blog posts out of them on the subject, and while we wait, I wanted to share some interesting examples we’ve been throwing around in email.

    I started the conversation with xkcd because, of course I did. Randall Munroe’s epic comic isn’t usually mentioned as a source for data visualizations, but if you read it, you’ll know that he has a knack for exactly that. Checking out the Google Image search for “xkcd data visualization” reminded me of just how many graphs, charts, maps, etc. Randall has produced over the years.

    I also discovered that someone has created a D3 chart library as an homage to the xkcd style.

    Anyway, two of my favorite xkcd visualizations are recent, possibly a function of my failing memory and not coincidence, Pixels and Click and Drag.

    I probably spent 10 minutes zooming into Pixels, trying to find the bottom; being small-minded, I gave up pretty early on Click and Drag, assuming it was small. It’s not.

    How much time did you spend, cough, waste, on these?

    During our conversation, a couple interesting examples have come back to me, both worth sharing.

    First is Art of the Title, dedicated to the opening credits of various films. In a very specific way, opening credits are data visualizations; they set the mood for the film and name the people responsible for it.

    Second is Scale of the Universe, which is self-explanatory and addictive.

    So, there you go. Enjoy investigating those two and watch this space for more visualization content.

    And find the comments.

    APEX Tabular Forms Deep Dive at #ORCLAPEX NYC Meetup

    Marc Sewtz - Fri, 2014-10-24 08:35
    Ever heard of APEX$ROW_NUM, APEX$ROW_SELECTOR and APEX$ROW_STATUS? Did you know you can reference tabular form columns using bind variable syntax in your page processes? Are you familiar with the execution scope of tabular form processes for modified vs submitted rows? Ever struggled using apex_application.g_fxx arrays with checkboxes? And do you really need to use the apex_item APIs? In APEX 4.0, 4.1 and 4.2 we've introduced lots of new tabular form features, including declarative tabular form validations and page processes, yet there are still PL/SQL processes being written that loop through the apex_application.g_fxx arrays or tabular form regions that mix apex_item API calls with using built-in form elements.
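
    To make the contrast concrete, here is a minimal sketch of that "old" manual style - an after-submit page process looping over the apex_application.g_fxx arrays (the EMP table and the g_f02/g_f03 column positions are hypothetical) - which is exactly the kind of code the declarative tabular form processes can replace:

    begin
      -- g_f02 holds the key column and g_f03 the editable column of this hypothetical form;
      -- the array index i identifies the submitted row
      for i in 1 .. apex_application.g_f02.count loop
        update emp
           set ename = apex_application.g_f03(i)
         where empno = to_number(apex_application.g_f02(i));
      end loop;
    end;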

    So when planning our next #ORCLAPEX NYC Meetup, we figured this would be a great topic to cover. And since we don't want to talk you to death during a single meetup, we decided to do a three-part series: starting slow and covering the basics during our next meetup on 11/06, then digging a little deeper in part two, and concluding with a grand finale during the third meetup in this series.

    Join us on Thursday, November 6th at the Oracle office in Manhattan, at 120 Park Ave. We're starting at 6pm and we'll serve pizza this time - thanks to everyone's generous donations last time.

    RSVP today:  #ORCLAPEX NYC Meetup

    OCP 12C – Resource Manager and Performance Enhancements

    DBA Scripts and Articles - Fri, 2014-10-24 07:16

    Use Resource Manager for a CDB and a PDB

    Managing Resources between PDBs

    The Resource Manager uses shares and utilization limits to manage resources allocated to PDBs. The more “Shares” you allocate to a PDB, the more resources it will have. Shares are allocated through DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE. One directive can only concern one PDB and you can’t [...]
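
    As a pointer, here is a minimal sketch of allocating shares and a utilization limit to a PDB with such a directive (the plan name 'my_cdb_plan' and PDB name 'PDB1' are made up, and the usual pending-area calls are assumed):

    begin
      dbms_resource_manager.create_pending_area;
      dbms_resource_manager.create_cdb_plan(
        plan    => 'my_cdb_plan',
        comment => 'example CDB plan');
      -- one directive per PDB: 3 shares and a 70% utilization limit for PDB1
      dbms_resource_manager.create_cdb_plan_directive(
        plan               => 'my_cdb_plan',
        pluggable_database => 'PDB1',
        shares             => 3,
        utilization_limit  => 70);
      dbms_resource_manager.validate_pending_area;
      dbms_resource_manager.submit_pending_area;
    end;
    /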

    The post OCP 12C – Resource Manager and Performance Enhancements appeared first on Oracle DBA Scripts and Articles (Montreal).

    Categories: DBA Blogs

    Download Oracle VirtualBox VM with Database version 12.1.0.2

    Marco Gralike - Fri, 2014-10-24 06:35
    Somehow missed the announcement that the Oracle Development team refreshed (9/26/2014) the VirtualBox Oracle Database…

    Oracle ACE Program: Follow Up

    Tim Hall - Fri, 2014-10-24 06:13

    I just wanted to write something as a follow up to my recent and provocatively titled Oracle ACE = Oracle’s Bitch? post. Obviously, that subject has been preying on my mind a little… I said before, it is hard to be objective about yourself, so maybe some aspects of the “being an Oracle ACE changes you” debate may be true. It would be wrong of me to deny that outright, but I think there are some indirect consequences of being an ACE that might alter my opinions about things, so I thought I would share my thoughts…

    Access to Information

    I don’t want this to sound patronising, but there are a lot of people out there spouting rubbish about things that happen in the tech industry with no knowledge of the history or actual decision processes that lead up to the final result. If you don’t know what is actually going on, it is easy to come to the wrong conclusion. Let’s use Oracle Linux as an example.

    When Oracle Linux was announced it seemed like the whole internet was full of people saying, “Poor Red Hat. Oracle is trying to kill them!”. If you had spoken to people in the Oracle Linux team you would know that Oracle was incredibly frustrated with how long it was taking Red Hat to get performance improvements through to the kernel because of the way they manage their long term support of versions. Apart from critical updates, many major performance features will not be added into the kernel until the next major release. This was exactly what was happening with RHEL5. There were a whole host of performance improvements in the mainline Linux kernel, that had not been added to the RHEL5 kernel, but would be in the RHEL6 kernel. The problem was RHEL6 was *massively* delayed. That delay meant the performance of Oracle on Linux was essentially being crippled by Red Hat’s release cycles. Add to that other features like InfiniBand support that were being held back and you start to see the problem.

    One option was for Oracle to make a binary clone of RHEL (like CentOS, Scientific Linux etc.) and give the option of their own kernel (UEK) that was closer to the mainline Linux kernel and therefore included all the latest performance stuff, without having to wait for Red Hat to get their ass in gear. That is what Oracle did with Oracle Linux. We had the performance of RHEL6, but we were still effectively using RHEL5. What’s more, by breaking this dependency between the distro version and the kernel version, the incentive to upgrade from RHEL5 to RHEL6 to RHEL7 was broken. For the most part, Oracle servers use a minimal amount of the features in the full distro. Who cares what version of Firefox is on the database server? For some people, running OL5 + UEK is pretty much like running OL7, so why bother upgrading as long as you still have errata produced?

    Most people out there had not spoken to the Oracle Linux team, so they were quite happily spouting the “Oracle are killing Red Hat” crap, without knowing the real motivation. When someone like me comes along and sings the praises of Oracle Linux and actively defends Oracle’s position, I sound like a kiss ass, but really I’m just standing up for what I believe to be right.

    Caveats: The arguments I was told could have been fiction used to influence me, but I was there through much of the process and have a lot of respect for the people involved, so I choose to believe them!

    Why that long explanation? If I had not been in the ACE program, I personally would never have had that contact with the people in the Oracle Linux team and I would have been one of those people saying Oracle were a bunch of bastards! Because of my involvement in the ACE program, I got to see “behind the curtain” and chose a different path. So yes, being an Oracle ACE did change me!

    I’ve used Oracle Linux as an example, but I could have used a whole bunch more!

    Access to Support

    We have all lived through Oracle Support nightmares. I’ve written several things in the past ranting about support. Since being part of the ACE program I’ve got to know a number of product managers (and in some cases developers) at Oracle, so when I have problems I can contact them directly and ask questions. In many cases, that significantly reduces the amount I actually have to interact with Oracle Support. If I know I will be working with product X, I actively seek out people in that field (ACEs and Oracle employees) and use that networking to my advantage. A case in point is the 12c JSON support. At Oracle OpenWorld I made a point of going to the demo grounds and speaking to Mark Drake about it. When I met up with David Peake I asked him some questions about what I was planning to do with APEX in 12c to get a second opinion. As long as you don’t bug these folks with stupid questions, they are usually willing to help.

    If I had not been part of the ACE program, I would probably never have met these people and this sort of thing would not be possible for me *. That must have changed me, but I don’t think of it as insidious. I guess in this case I could say being an Oak Table member has changed me because of my access to people and information also…

    Maybe you see change where there has been no change?

    You hear those stories about people winning the lottery then losing all their friends, because their friends don’t want to be seen as “hangers on” so they avoid them. In some cases it is possible that the people who become ACEs haven’t changed, but your perception of them has. Before I became an ACE, if I said something supportive of Oracle you probably wouldn’t notice. If I say the same thing now I am a sell-out. :) I can think of a couple of cases.

    Grid Control/Cloud Control : I’ve used OEM in its various forms since 9i, where it was the Java-based console on top of the Management Server. Back then you couldn’t admit to using it in public or you would be ridiculed. You had to quickly close it down and open SQL*Plus if someone came in the room. Over the course of 10g and 11g Grid Control, then Cloud Control, became cool and everyone talks about it now. When I am presenting and I say things like, “I believe every DBA should use Cloud Control”, I mean it because I think it is true. The problem is I sound like a suck-up now. I’m just telling people what Oracle want me to say! Back in the 9i days when I was afraid to admit I used the 9i OEM Management Server I had credibility. :)

    Certification : I’ve been doing certifications since Oracle 7. I started doing them to confirm to myself I really did know something about being a DBA. :) Now it is all about my personal development. From time to time I have contact with Oracle Education about my views on certification. For a few years they interviewed me at OOW and so far have used about 5 seconds on the footage. Why? Because my views don’t line up with theirs. Just before OOW14 I was asked if I would write a post for the Oracle Certification Blog. I was willing to do one with a focus on personal development, but said I could not fall in line with the Oracle Education message. I don’t think that post will happen now, which is a pity. I think the people involved are a great group of people and I’ve met many of them for years at OOW, but we do have a difference of opinion where the value of certification is concerned. So now when I say I like certification (for my reasons) and I agree with Oracle’s new re-certification policy I am a drone that constantly spouts the Oracle message!

    Conclusion

    If you are looking for conspiracy you will find it, but it doesn’t mean it’s real!

    I’m sorry this post has been so long, but I do care about what I do and I care about the ACE program. It’s been a big part of my life for the last 8 years. As you can tell, I’m a little butt-hurt about this subject, but I know that trying to defend yourself makes you look all the more guilty… :)

    Sod it. It’s nearly the weekend, which means I get more time to play with Oracle…

    Cheers

    Tim…

    * For clarification, I wasn’t suggesting I can only speak to these people because I’m an ACE. I meant that I (me personally) only came into contact with them in the first place because I’m an ACE.

    Oracle ACE Program: Follow Up was first posted on October 24, 2014 at 1:13 pm.
    ©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

    Documentum Multiple ADTS: Ratio of rendition creations between instances (Part 1)

    Yann Neuhaus - Fri, 2014-10-24 04:34

    I recently realized there is very little information available about installing and managing multiple ADTS instances. We have a customer who runs two ADTS instances for one docbase, so I decided to write a couple of blog posts on "Multiple ADTS", since they are likely to be helpful for someone in the same situation.

     

    Rendition processing & ADTS instance

    This first blog post will show you how to find out which rendition has been processed by which ADTS instance. From there, we will be able to calculate the ratio of processing between two (or more) servers. The post is split into two parts, as I know two ways of getting the same result.

    The best architecture is to have two dedicated servers for the two ADTS instances. Let's assume we have SERVER1 and SERVER2, each hosting an ADTS instance with the same version and patch level. We will see in another blog post how to configure both servers to allow a kind of load balancing.

    You have to find some audit trail or log in the docbase to figure out when a rendition was requested. When a rendition is requested, it is posted to the dmi_queue_items queue, but you may already be aware of this. If not, take a quick look at my other blog post (http://www.dbi-services.com/index.php/blog/entry/documentum-useful-queries-dqliapi) which presents some DQL queries to find what is pending in the rendition queue.

    Unfortunately, when the rendition is done, the queued item is deleted and all hope of getting information from it vanishes with it. We do get a small hint if the rendition failed, as the item is kept in the queue with its event attribute set to DTS (see my other post for more info). But it still doesn't show which server failed it.
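    As a rough sketch, the pending requests can be listed with the query below. The queue name dm_autorender_win31 is the usual default for ADTS rendition requests, so treat it as an assumption and check what your installation actually uses; the failed items are the ones whose event attribute shows DTS, as described above.

    select item_id, date_sent, event, task_state from dmi_queue_items where name = 'dm_autorender_win31' order by date_sent desc;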

     

    CTS Report

    In this part I will talk about the CTS report method. In the next part of this blog post, I will introduce the dmr_content method.

    If you have heard of Digital Asset Manager, you may already be on the right track. It is an EMC tool that can be used to generate reports on ADTS activity. If you have it, just use it, as it does exactly what you need. In this post, however, I will show you how to do without it. Just like Digital Asset Manager, you will have to enable a specific option in DA which is not enabled by default. It is a kind of audit of ADTS activity: it populates a table with rendition information and, as you may guess, with the name of the server that processed each rendition.

    Log in to your DA as the installation owner and navigate to Content Transformation Services, then to CTS Reporting Configuration. You should see the following form:

    (Screenshot: MA-RR-01_config_report.PNG – the CTS Reporting Configuration form)

    Now set Reporting Configuration to "On", select the Archiving Configuration mode and enter 7 as the Archiving Interval. You have just configured your repository to write an entry each time a rendition is done by whichever server, with a seven-day period before archiving. It means that each time a rendition is done, an entry is stored in the dm_cts_response table, and after seven days (starting now) all entries are moved to dmc_cts_request_archiving. With DQL, you can look at these tables, which should still be empty because no rendition has been done yet.

     

    select * from dm_cts_response;
    select * from dmc_cts_request_archiving;

     

    After a period of time, let's say 5 days, some entries will have been added to the first table - provided you have rendition activity, of course. You can check which server is involved in the process with the following query:

     

    select cts_server_name, trans_completed_time from dm_cts_response order by trans_completed_time desc;

    (Screenshot: MA-RR-01_config_report2.PNG – query output showing cts_server_name and trans_completed_time)

    Conclusion

    These tables are used by the EMC tool Digital Asset Manager, and if you know a little DQL you can build some statistics without buying the tool. There is a drawback to this method, however: you enable a kind of audit trace that populates two tables in the repository. If you have a lot of rendering activity, these tables can grow quickly and will need space. In addition, you will not be able to trace older renditions, as everything is only recorded from the moment you activate the report.

    But it's quite useful as you can get the rendition ratio in the following way:

     

    select cts_server_name, count(*) as val from dm_cts_response group by cts_server_name;
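    If you also want to restrict the count to a given period, a minimal sketch could look like the query below; the date literal format is an assumption, so adapt it to your locale and Content Server version.

    select cts_server_name, count(*) as val from dm_cts_response where trans_completed_time >= DATE('10/01/2014 00:00:00') group by cts_server_name;

    From the two counts, the ratio is simple arithmetic: if SERVER1 returns 120 and SERVER2 returns 80, then SERVER1 processed 120/200 = 60% of the renditions and SERVER2 the remaining 40%.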

    RDX services: Platform-specific Expertise [VIDEO]

    Chris Foot - Fri, 2014-10-24 04:28

    Transcript

    Hi, welcome to RDX. RDX has a wide range of platform-specific expertise to help keep your database environment highly available and high-performing. Our DBAs can help fill any gaps in skill sets, leaving your internal team free to focus on the tasks they do best.

    Whether you prefer to store information in SQL Server, Oracle, MySQL, PostgreSQL or Hyperion/Essbase, our specialists provide you with a wealth of expertise and support. Our staff is well-versed in optimizing and protecting all of these environments 24×7, providing your business with greater peace of mind.

    In addition to our varied expertise, we provide clients with the choice of customizing their environments. We’re here to accommodate any of your unique business needs, and our DBA experts are equipped to solve your toughest challenges.

    Thanks for watching. Be sure to watch next time.
     

    The post RDX services: Platform-specific Expertise [VIDEO] appeared first on Remote DBA Experts.

    From 8.0 extended rowid to 12c pluggable db: Why Oracle Database is still a great software

    Yann Neuhaus - Fri, 2014-10-24 03:31

    Do you know why Oracle Database has been a leader in the database market for 30 years? Do you know of any other software that is still the leading solution after decades? I think it is because Oracle has been good software from the get-go. Many early decisions in the software architecture later turned out to be the right ones: the choice of the C language, which made the software portable to all platforms that were relevant during those 30 years, or the parallel server, which brought RAC when standalone servers were no longer able to scale. Here, I will illustrate a decision made 15 years ago that has made the whole 12c pluggable database feature possible.

     

    Oracle 7 rowid

    The ROWID is the physical address of a row in the database. Physically, a database is just a set of files where data is stored as rows in blocks. So, at the time of Oracle 7, the ROWID was just:

    • the file number
    • the block number in that file
    • the row number in that block
    That was printed in hexadecimal when we selected the ROWID pseudo-column:  

    (Screenshot: CaptureOracle7Rowid.PNG – the ROWID pseudo-column selected from a table in Oracle 7)

     

    Here my rows are numbered from 0 to 3 and are all in file 2 at block offset 58 (0x3a).
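    Since the screenshot may not render in a feed reader, here is an illustration of what the Oracle 7 restricted rowid looked like, displayed as block.row.file in hexadecimal. This is reconstructed from the values above for illustration only, not a copy of the original output:

    ROWID
    ------------------
    0000003A.0000.0002
    0000003A.0001.0002
    0000003A.0002.0002
    0000003A.0003.0002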

    At that time, the maximum number of datafiles in a database was 1022. That was enough until the data warehouse days brought the need for Very Large DataBases.

     

    Relative file number

    In Oracle 8 they wanted to increase the possible number of datafiles without changing the rowid (which would have meant updating all blocks during an upgrade). Here is how they did it:

    1. They introduced the 'relative file number'. The file identification part is no longer unique for the database but only for the tablespace. That means you can have 1022 datafiles per tablespace instead of 1022 datafiles per database.

    2. They kept the relative_fno equal to the absolute file_id for the first 1022 datafiles, so that an Oracle 7 datafile is still compatible: the number that used to be the absolute file_id is now the relative_fno.

    3. When going over 1022 datafiles per database, the file_id can go beyond that limit (it is not stored in the rowid) but the relative_fno simply recycles back to 1 (see the query sketch below).
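    You can see the absolute and relative numbers side by side by querying the standard DBA_DATA_FILES view. This is just a simple sketch, nothing specific to my demo:

    SQL> select tablespace_name, file_id, relative_fno, file_name from dba_data_files order by file_id;

    As long as you stay under 1022 datafiles, FILE_ID and RELATIVE_FNO show the same value; beyond that, RELATIVE_FNO wraps around while FILE_ID keeps growing.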

    The consequence is that a rowid is no longer unique. When we have to look up a row by its rowid, we must know which tablespace it is in, because the rowid identification is relative to the tablespace.

    No problem. Where are rowids used? In chained rows, to get the other piece. A table cannot store parts of its rows in different tablespaces, so we know that if we are in tablespace USERS and have a pointer to a chained row, it has to be in the USERS tablespace as well. They are used in indexes as well, and it's the same story: if we do a unique scan of the index EMP_PK to get the rowid of an EMP row, we know it is in the USERS tablespace because we know the EMP table is in the USERS tablespace.

    However, a new feature appeared at that time: a table can be partitioned. We can no longer say that EMP rows are in the USERS tablespace, because EMP may be partitioned over several tablespaces. That does not concern chained rows. That does not concern non-partitioned tables. And with partitioned tables, it does not concern local indexes, because local index partitions always address rows in a single table partition.

    This is where Oracle introduced the extended rowid, for global indexes on partitioned tables. It is larger and able to give the tablespace information in addition to the relative file number. It is called 'extended' as opposed to the 'restricted' rowid, which is restricted to cases where we know the tablespace.

    They could have chosen to store only the tablespace number, but they chose to store the object id instead, which - from the dictionary - can be used to find the tablespace. People were puzzled by that choice at the time, but it later proved to be a great software architecture decision, because this is what allows us to have pluggable databases 15 years later.

    But before that, I have to introduce another 8.0 change, which is the data object id.

     

    Data object id

    In Oracle 7, there is only one object_id, because there is a one-to-one relation between the logical object (the table) and the physical object (the segment). That changed with the introduction of partitioning, where one table may have several partitions.

    Partitions are also logical objects, and each of them has a unique object_id. But once again, the software architects made a very good decision by separating the identification of the physical object: they introduced the data_object_id. When you create a table, the data_object_id of the segment is equal to the object_id of the table. But that can change.

    For example, what happens when you truncate a table? You just reset the high water mark without visiting all the blocks. And you can insert new rows that may go into the same place. But how can you be sure that concurrent users will not mix old blocks with new ones? Remember that readers do not lock anything, so they did not notice your truncate. The truncate changes the data_object_id as if it were a new segment, while it is still related to the same logical table.

    And with partitioning you can exchange partitions: the logical object_id changes but the physical data_object_id stays the same because it is still the same segment.

    It's not always easy to know whether we are dealing with an object_id or a data_object_id. Basically, things that are logical (for example, locking a table or a partition) show the object_id, and depending on the place they are called OBJECT_ID, OBJ# or OBJN. When it is related to the physical segment (blocks in the buffer cache, block corruption) we see DATA_OBJECT_ID, DATAOBJ#, OBJD, or even OBJ or OBJECT_NUMBER.

    When I'm not sure which one to use in my scripts, I test them on a table that has been truncated at least once, so that both values are different. I advise you to always test your scripts on a table that has been truncated and on a tablespace that has been transported.
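    Here is a minimal sketch of that test (the table name T and the single column are just placeholders):

    SQL> create table t ( x number );
    SQL> insert into t values (1);
    SQL> commit;
    SQL> select object_id, data_object_id from user_objects where object_name = 'T';
    SQL> truncate table t;
    SQL> select object_id, data_object_id from user_objects where object_name = 'T';

    After the truncate, the DATA_OBJECT_ID has moved on while the OBJECT_ID is unchanged, so you can immediately see which of the two your script really needs.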

     

    Oracle 8 extended rowid

    So I said that the extended rowid does not store the tablespace id. It stores the data_object_id, which is associated with one and only one tablespace. The format of the extended rowid is:

    • the data object id (which gives the tablespace from the data dictionary)
    • the relative file number (relative to the tablespace)
    • the block number in that file
    • the row number in that block

    and it is needed only when we cannot tell, from the object we navigate from, exactly which tablespace contains the segment.
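    The DBMS_ROWID package makes the difference between the two formats visible. Here is a small sketch against the classic EMP demo table (any table will do):

    SQL> select rowid, dbms_rowid.rowid_type(rowid) rowid_type, dbms_rowid.rowid_to_restricted(rowid,0) restricted_rowid from emp;

    ROWID_TYPE is 1 for an extended rowid and 0 for a restricted one, and ROWID_TO_RESTRICTED shows what the same address looks like in the old Oracle 7 format.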

    Now let's understand why the Oracle software architects decided to store the data_object_id instead of the tablespace number. By doing that, they made tablespaces physically independent of the database.

     

    Pluggable tablespaces

    Yes, I know it's called transportable tablespaces. But it was the premise of pluggable databases. In fact, pluggable databases are just transportable tablespaces that include the system tablespace (which contains the metadata for the other tablespaces).

    You can transport a tablespace from another database and plug it into your database. Of course, the absolute file number will change, as it is assigned by your database. The tablespace number will change. But the relative file numbers - relative to the tablespace - do not have to change.

    And of course the object_id will change: a new one is assigned when importing the metadata. But the data_object_id does not have to change. The reason is that the data_object_id is not expected to be unique in the database. It only has to be unique within the object (two partitions of the same table cannot have the same data_object_id).

     

    Oracle 8 extended rowid was designed for pluggable databases

    And this is where those early decisions show their full meaning. You can plug in a tablespace and the rowids of the rows in that tablespace do not have to be updated. This is what makes it a quick operation: only the dictionary and the file headers have to be updated. The time depends on the number of objects, not on the volume of data. The agility brought by pluggable databases in 2013 was actually designed in 1997.

     

    rowid is not unique

    So I said that the rowid is not unique? Let's prove it. I create a table DEMO2 in tablespace DEMO1, export that tablespace, rename the table DEMO2 to DEMO1, then import the tablespace as DEMO2, so that I now have two tables, DEMO1 and DEMO2, respectively in tablespaces DEMO1 and DEMO2.

    Here is my table:

     

    SQL> select dname,rowid
      2   ,dbms_rowid.rowid_object(rowid) data_object_id
      3   ,dbms_rowid.rowid_relative_fno(rowid) relative_fno
      4   ,dbms_rowid.rowid_block_number(rowid) block_id
      5   ,dbms_rowid.rowid_row_number(rowid) row_number
      6   from DEMO2
      7  /
    
    DNAME          ROWID              DATA_OBJECT_ID RELATIVE_FNO   BLOCK_ID ROW_NUMBER
    -------------- ------------------ -------------- ------------ ---------- ----------
    ACCOUNTING     AAAX7uAACAAAACEAAA          98030            2        132          0
    RESEARCH       AAAX7uAACAAAACEAAB          98030            2        132          1
    SALES          AAAX7uAACAAAACEAAC          98030            2        132          2
    OPERATIONS     AAAX7uAACAAAACEAAD          98030            2        132          3
    

     

    I export the tablespace with transportable tablespaces:

     

    SQL> host expdp '"/ as sysdba"' transport_tablespaces=DEMO1
    Export: Release 12.1.0.2.0 - Production on Fri Oct 24 15:33:35 2014
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
    Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01":  "/******** AS SYSDBA" transport_tablespaces=DEMO1
    Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
    Processing object type TRANSPORTABLE_EXPORT/TABLE
    Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
    Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
    Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
    Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
      /u02/app/oracle/admin/DEMO/dpdump/expdat.dmp
    ******************************************************************************
    Datafiles required for transportable tablespace DEMO1:
      /tmp/demo1.dbf
    Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at Fri Oct 24 15:34:35 2014 elapsed 0 00:00:59
    SQL> host cp '/tmp/demo1.dbf' '/tmp/demo2.dbf'
    SQL> alter tablespace DEMO1 read write;
    Tablespace altered.

    Then import it to DEMO2 tablespace (after renaming my previous table)

     

    SQL> rename DEMO2 to DEMO1;
    SQL> host impdp '"/ as sysdba"' transport_datafiles='/tmp/demo2.dbf ' remap_tablespace='DEMO1:DEMO2';
    Import: Release 12.1.0.2.0 - Production on Fri Oct 24 15:34:35 2014
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
    Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
    Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01":  "/******** AS SYSDBA" transport_datafiles=/tmp/demo2.dbf  remap_tablespace=DEMO1:DEMO2
    Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
    Processing object type TRANSPORTABLE_EXPORT/TABLE
    Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
    Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
    Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
    Job "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Fri Oct 24 15:34:49 2014 elapsed 0 00:00:13
    SQL> alter tablespace DEMO2 read write;
    Tablespace altered.
    

     

    Now I have 2 tables with different object_id but same data_object_id:

     

    SQL> select object_name,object_id,data_object_id from user_objects where object_name like 'DEMO_';
    
    OBJECT_NAM  OBJECT_ID DATA_OBJECT_ID
    ---------- ---------- --------------
    DEMO2           98076          98030
    DEMO1           98029          98030
    

     

    And 2 segments in different files (file_id) but same relative_fno:

     

    SQL> select segment_name,tablespace_name,file_id,relative_fno,block_id from dba_extents where segment_name like 'DEMO_';
    
    SEGMENT_NAME    TABLESPACE    FILE_ID RELATIVE_FNO   BLOCK_ID
    --------------- ---------- ---------- ------------ ----------
    DEMO1           DEMO1               2            2        128
    DEMO2           DEMO2               4            2        128
    

     

    I update the rows so that I can be sure I am selecting from two different tables:

     

    SQL> update DEMO1 set dname=upper(dname);
    4 rows updated.
    SQL> update DEMO2 set dname=lower(dname);
    4 rows updated.
    

     

    And now showing the ROWID from both tables:

     

    SQL> select dname,rowid
      2   ,dbms_rowid.rowid_object(rowid) data_object_id
      3   ,dbms_rowid.rowid_relative_fno(rowid) relative_fno
      4   ,dbms_rowid.rowid_block_number(rowid) block_id
      5   ,dbms_rowid.rowid_row_number(rowid) row_number
      6   from DEMO1
      7  /
    
    DNAME          ROWID              DATA_OBJECT_ID RELATIVE_FNO   BLOCK_ID ROW_NUMBER
    -------------- ------------------ -------------- ------------ ---------- ----------
    ACCOUNTING     AAAX7uAACAAAACEAAA          98030            2        132          0
    RESEARCH       AAAX7uAACAAAACEAAB          98030            2        132          1
    SALES          AAAX7uAACAAAACEAAC          98030            2        132          2
    OPERATIONS     AAAX7uAACAAAACEAAD          98030            2        132          3
    
    SQL> select dname,rowid
      2   ,dbms_rowid.rowid_object(rowid) data_object_id
      3   ,dbms_rowid.rowid_relative_fno(rowid) relative_fno
      4   ,dbms_rowid.rowid_block_number(rowid) block_id
      5   ,dbms_rowid.rowid_row_number(rowid) row_number
      6   from DEMO2
      7  /
    
    DNAME          ROWID              DATA_OBJECT_ID RELATIVE_FNO   BLOCK_ID ROW_NUMBER
    -------------- ------------------ -------------- ------------ ---------- ----------
    accounting     AAAX7uAACAAAACEAAA          98030            2        132          0
    research       AAAX7uAACAAAACEAAB          98030            2        132          1
    sales          AAAX7uAACAAAACEAAC          98030            2        132          2
    operations     AAAX7uAACAAAACEAAD          98030            2        132          3
    

     

    Conclusion: I have two different tables in my database with the same rowids, because one is a physical copy of the other. Only the data dictionary makes the difference.

    PeopleTools 8.54 Feature: ExcelToCI Errors and Warnings Worksheet

    Javier Delgado - Fri, 2014-10-24 02:58
    Some years ago, I wrote this post on ExcelToCI limitations. One of the limitations I found annoying was the need to move the mouse over each Warning or Error result cell. It was not just annoying; it actually prevented users from easily working through the different error types and analyzing useful information such as the most common error messages, how many rows would go through if they solved a particular issue, etc.



    PeopleTools 8.54 has introduced a new worksheet showing all the warning and error messages. The following screenshot provides a clear view of how the information is presented:



    From that point on, users may analyze the messages using Excel pivot tables, filters, etc. Yet there is still some room for improvement. The most obvious one would be to put each error in a separate Excel row, which would make error analysis much richer.

    Let's see how this evolves with the next releases of PeopleTools.

    Upgrading PeopleTools with Zero Downtime (3/3)

    Javier Delgado - Fri, 2014-10-24 02:57
    In the previous post, we covered the approach we followed to have both PeopleTools versions running at once. In this post we will cover the issues we faced and how we got around them.

    Missing Tables

    Our approach to identifying tables was not perfect. By examining the traces and logs, we missed some tables that were directly impacted by Application Designer (we did not enable tracing while copying the PeopleTools projects, as it would have taken too long to complete, so we did not have those tables in our lists). Fortunately, we only had to adjust the list of tables a couple of times and we were up and running.

    Sequences

    For performance reasons, the customer had created Oracle sequences instead of using the PeopleTools auto-numbering approach. In order to keep the sequences in sync between both databases, we followed the same approach we had applied to the tables: deleting one of the sequences and creating a synonym pointing to the sequence in the other instance.
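    A minimal sketch of what that looks like in SQL (the sequence name, schema and database link are made-up examples, not the customer's actual objects):

    SQL> drop sequence sysadm.invoice_id_seq;
    SQL> create synonym sysadm.invoice_id_seq for sysadm.invoice_id_seq@old_tools_db;

    From then on, both databases draw their numbers from the single remaining sequence, so no value can be handed out twice.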

    Performance

    Most of the performance of a SQL statement is determined by how the database builds its execution plan, and execution plans are generated based on database statistics. When using DB links, the database on which the SQL statement is executed does not have statistics for the linked tables, so the execution plan may not be optimal. This is particularly true for mixed queries involving both local and linked tables.

    During our testing, we identified a few performance issues which required making the table physically present in both environments. This was not always possible, but for tables that are quite unlikely to change (for instance, Business Unit options), it was definitely an option.
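    For illustration only (the table and link names are hypothetical, and the post does not describe the exact mechanism that was used), keeping a local copy of such a static table could be as simple as:

    SQL> create table sysadm.ps_bus_unit_opt_fs as select * from sysadm.ps_bus_unit_opt_fs@old_tools_db;

    The price, of course, is that the copy has to be refreshed whenever the source table changes, which is why this only makes sense for near-static data.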

    Data Types

    Some of the data types used by PeopleTools changed from one release to the other. In our case, both timestamps and memo fields (LONG to CLOB) had changed types. If a table is linked, only one of the data types can be used. In this sense we were fortunate, as the data types used by the target PeopleTools release worked correctly in the previous release, so we could use the new ones.

    Commit

    In earlier releases of the Oracle Database, each insert/update/delete operation against a DB link table would commit immediately (and implicitly), which would pose a serious integrity issue. Luckily, both the source and the target database releases supported global transactions, which means that data is committed only when the application explicitly issues the commit command.
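    A hedged sketch of what that means in practice (the table, column and link names are invented for the example):

    SQL> insert into sysadm.ps_app_table@old_tools_db (emplid, status) values ('KU0001', 'A');
    SQL> update sysadm.ps_other_table set status = 'A' where emplid = 'KU0001';
    SQL> commit;

    With global transactions, both statements are committed together by the final COMMIT (or rolled back together), instead of the remote insert being committed implicitly on its own.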

    Conclusions

    In the end, our attempt was successful. However, as you can see from the issues we faced, it cannot be concluded that this will work for all types of PeopleTools upgrades or for every customer requirement. Still, I thought it was a creative approach, and that is why I decided to share it on this blog.