Feed aggregator

Good Blog Bad Blog

Denes Kubicek - Sat, 2014-12-06 03:29
Just checked whether http://www.odtug.com/apex is available again, and it is. It seems the people there are filtering blogs: my blog post from yesterday isn't appearing there, and I don't understand why. Is that just because I said that the old blog listing was much better? Or is this just another technical problem they have? Am I going to be removed from that blog listing forever if I continue saying things they may not like?
Categories: Development

UKOUG 2014: Are you there?

Angelo Santagata - Fri, 2014-12-05 09:55

I'm going to be at UKOUG next week, helping out with the AppsTech 2014 Apps "Just Do It Workshop"...

Are you going to be there? If so, come and find me on Monday in the Executive Rooms. On Tuesday and Wednesday I'll be a "participant", attending the various presentations on Cloud, integration technologies, Mobile and ADF. Come and find me :-)

 https://blogs.oracle.com/fadevrel/entry/don_t_miss_us_at


Getting JDeveloper HttpAnalyzer to easily work against SalesCloud

Angelo Santagata - Fri, 2014-12-05 09:48

Hey all

A little tip here. If you're trying to debug some Java code working against SalesCloud, one of the tools you might use is the HTTP Analyzer. Alas, I couldn't get it to recognize the Oracle Sales Cloud security certificate, and the current version of JDeveloper (11.1.1.7.1) doesn't give you an option to ignore the certificate.

However, there is a workaround: simply start JDeveloper using a special flag which tells JDeveloper's HTTP Analyzer to trust everybody!

jdev -J-Djavax.net.ssl.trusteverybody=true

Very useful… and obviously it's OK for testing and development, but not for anything else.

For more information, please see this Doc reference.

Log Buffer #400, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-12-05 09:40

The Log Buffer reaches another century mark with this 400th edition, and it is still as fresh and unique as it was with edition 1. Enjoy these gems from Oracle, SQL Server and MySQL.

Oracle:

What Cloud Infrastructure Will Best Deliver?

Adaptive Case Management 12c and ADF Human Tasks.

What Does “Backup Restore Throttle Speed” Wait Mean?

All You Need, and Ever Wanted to Know About the Dynamic Rolling Year.

Using grant connect through to manage database links.

The Future of Oracle Forms Straight From the Source’s Mouth.

SQL Server:

Create a repository of all your database devices and stay informed about changes in their size and usage.

When a hospital’s mission-critical database fails at Christmas, disaster for the hospital – and its hapless DBA – seems certain. With less than an hour to spare before catastrophe, can the DBA Team save the day?

How do you use SQL Server, and how do you expect this to change next year?

How can you get a list of columns that have changed within a trigger in T-SQL? How can you see what bits are set within a varbinary or integer? How would you pass a bitmap parameter to a system stored procedure?

Have you ever wanted to run a query across every database on a server with the convenience of a stored procedure? If so, Microsoft provided a stored procedure to do so. It’s unreliable, outdated, and somewhat obfuscated, though. Let’s improve on it!

MySQL:

Thanks, Oracle, for fixing the stupid and dangerous SET GLOBAL sql_log_bin!

Auto-bootstrapping an all-down cluster with Percona XtraDB Cluster.

Proposal to deprecate collation_database and character_set_database settings.

Puppet is a powerful automation tool that helps administrators manage complex server setups centrally. You can use Puppet to manage MariaDB.

Tips from the trenches for over-extended MySQL DBAs.

Categories: DBA Blogs

Join Us For a Networking Event at UKOUG

Pythian Group - Fri, 2014-12-05 09:25

Ask not what you can do for your data. Ask what your data can do for you!

Join us for an informal networking event alongside Rittman Mead on Monday December 8th during UKOUG. We will be discussing how to leverage data to drive your organization’s success. Come meet with peers and industry experts, Mark Rittman and Jon Mead of Rittman Mead, and Marc Fielding and Christo Kutrovsky of Pythian. The networking event will take place at PanAm Bar and Restaurant in Liverpool from 6-8 PM, and will include drinks and light refreshments.

Please be sure to RSVP to the event here—we hope to see you there! Find more information about Pythian’s speaking sessions here.

Questions? Please contact Elliot Zissman, Director of Sales at zissman@pythian.com.

Categories: DBA Blogs

Are All Your Project Managers Certified?

WebCenter Team - Fri, 2014-12-05 09:14

Originally posted on the Redstone Content Solutions blog
____________________________________________________________________________________________________________________________________

"We place a high value on the manner and effectiveness in which we manage our client’s projects."

Many companies over the years have made this or similar statements to their customers, which raises the question: "What, if anything, have they done to assure their customers that they mean what they say and that it is truly a top priority to them?"

Okay, we admit it.  We have made statements similar to the one above here at Redstone Content Solutions, but to us it is not merely a statement used to close deals or woo customers into doing business with us. We truly believe that process and knowledge are key to delivering our customers the most effective and efficient Oracle WebCenter project experience possible. One of the ways we accomplish this is by investing in our project managers on a continuous basis - in fact, all of Redstone's project managers are trained and certified Project Management Professionals (PMPs).

Read the entire article here

RELY DISABLE

Dominic Brooks - Fri, 2014-12-05 07:57

Learning, relearning or unforgetting…

What value is there in a DISABLEd constraint?

This was a question on the OTN forums this week and a) my first reply was wrong and b) I couldn’t find a clear demonstration elsewhere.

The key is RELY.

The documentation is clear.

RELY Constraints

The ETL process commonly verifies that certain constraints are true. For example, it can validate all of the foreign keys in the data coming into the fact table. This means that you can trust it to provide clean data, instead of implementing constraints in the data warehouse. You create a RELY constraint as follows:

ALTER TABLE sales ADD CONSTRAINT sales_time_fk FOREIGN KEY (time_id) REFERENCES times (time_id) RELY DISABLE NOVALIDATE;

This statement assumes that the primary key is in the RELY state. RELY constraints, even though they are not used for data validation, can:

– Enable more sophisticated query rewrites for materialized views. See Chapter 18, “Basic Query Rewrite” for further details.
– Enable other data warehousing tools to retrieve information regarding constraints directly from the Oracle data dictionary.

Creating a RELY constraint is inexpensive and does not impose any overhead during DML or load. Because the constraint is not being validated, no data processing is necessary to create it.

We can prove the value of a RELY DISABLEd CONSTRAINT by playing with Tom Kyte’s illustrations on the value of ENABLEd constraints.

EMP/DEPT Table:

drop table emp;
drop table dept;

create table dept 
(deptno number(2)     not null,
 dname  varchar2(15),
 loc    varchar2(15));

insert into dept values (10, 'ACCOUNTING', 'NEW YORK');
insert into dept values (20, 'RESEARCH', 'DALLAS');
insert into dept values (30, 'SALES', 'CHICAGO');
insert into dept values (40, 'OPERATIONS', 'BOSTON');

create table emp
(empno    number(4) not null
,ename    varchar2(10)
,job      varchar2(9)
,mgr      number(4)
,hiredate date
,sal      number(7, 2)
,comm     number(7, 2)
,deptno   number(2) not null);

insert into emp values (7369, 'SMITH', 'CLERK',    7902, to_date('17-DEC-1980', 'DD-MON-YYYY'), 800, null, 20);
insert into emp values (7499, 'ALLEN', 'SALESMAN', 7698, to_date('20-FEB-1981', 'DD-MON-YYYY'), 1600, 300, 30);
insert into emp values (7521, 'WARD',  'SALESMAN', 7698, to_date('22-FEB-1981', 'DD-MON-YYYY'), 1250, 500, 30);
insert into emp values (7566, 'JONES', 'MANAGER',  7839, to_date('2-APR-1981',  'DD-MON-YYYY'), 2975, null, 20);
insert into emp values (7654, 'MARTIN', 'SALESMAN', 7698, to_date('28-SEP-1981', 'DD-MON-YYYY'), 1250, 1400, 30);
insert into emp values (7698, 'BLAKE', 'MANAGER', 7839, to_date('1-MAY-1981', 'DD-MON-YYYY'), 2850, null, 30);
insert into emp values (7782, 'CLARK', 'MANAGER', 7839, to_date('9-JUN-1981', 'DD-MON-YYYY'), 2450, null, 10);
insert into emp values (7788, 'SCOTT', 'ANALYST', 7566, to_date('09-DEC-1982', 'DD-MON-YYYY'), 3000, null, 20);
insert into emp values (7839, 'KING', 'PRESIDENT', null, to_date('17-NOV-1981', 'DD-MON-YYYY'), 5000, null, 10);
insert into emp values (7844, 'TURNER', 'SALESMAN', 7698, to_date('8-SEP-1981', 'DD-MON-YYYY'), 1500, 0, 30);
insert into emp values (7876, 'ADAMS', 'CLERK', 7788, to_date('12-JAN-1983', 'DD-MON-YYYY'), 1100, null, 20);
insert into emp values (7900, 'JAMES', 'CLERK', 7698, to_date('3-DEC-1981', 'DD-MON-YYYY'), 950, null, 30);
insert into emp values (7902, 'FORD', 'ANALYST', 7566, to_date('3-DEC-1981', 'DD-MON-YYYY'), 3000, null, 20);
insert into emp values (7934, 'MILLER', 'CLERK', 7782, to_date('23-JAN-1982', 'DD-MON-YYYY'), 1300, null, 10);

begin
  dbms_stats.set_table_stats
  ( user, 'EMP', numrows=>1000000, numblks=>100000 );
  dbms_stats.set_table_stats
  ( user, 'DEPT', numrows=>100000, numblks=>10000 );
end; 
/

First, there’s nearly always an oddity to observe or tangent to follow:

alter table dept add constraint dept_pk primary key(deptno);
alter table emp add constraint emp_fk_dept foreign key(deptno) references dept(deptno) rely disable novalidate;

Results in:

SQL Error: ORA-25158: Cannot specify RELY for foreign key if the associated primary key is NORELY
25158. 00000 -  "Cannot specify RELY for foreign key if the associated primary key is NORELY"
*Cause:    RELY is specified for the foreign key contraint, when the
           associated primary key constraint is NORELY.
*Action:   Change the option of the primary key also to RELY.

But this is ok?

alter table emp add constraint emp_fk_dept foreign key(deptno) references dept(deptno) disable novalidate;
alter table emp modify constraint emp_fk_dept rely;

Odd!

Anyway, first, we can show a clear demonstration of JOIN ELIMINATION.

No FK constraint:

alter table emp drop constraint emp_fk_dept;

create or replace view emp_dept
as
select emp.ename, dept.dname
from   emp, dept
where  emp.deptno = dept.deptno; 

select ename from emp_dept;
select * from table(dbms_xplan.display_cursor);

Gives plan:

Plan hash value: 4269077325
------------------------------------------------------------------------------
| Id  | Operation          | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |         |       |       | 21974 (100)|          |
|   1 |  NESTED LOOPS      |         |  1000K|    31M| 21974   (1)| 00:04:24 |
|   2 |   TABLE ACCESS FULL| EMP     |  1000K|    19M| 21924   (1)| 00:04:24 |
|*  3 |   INDEX UNIQUE SCAN| DEPT_PK |     1 |    13 |     0   (0)|          |
------------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")

Now with added constraint, RELY DISABLE:

alter table emp add constraint emp_fk_dept foreign key(deptno) references dept(deptno) disable novalidate;
alter table emp modify constraint emp_fk_dept rely;

select ename from emp_dept;
select * from table(dbms_xplan.display_cursor);

And we get:

Plan hash value: 3956160932
--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |       |       | 21925 (100)|          |
|*  1 |  TABLE ACCESS FULL| EMP  | 50000 |   976K| 21925   (1)| 00:04:24 |
--------------------------------------------------------------------------
 
Predicate Information (identified by operation id):
---------------------------------------------------
    1 - filter("EMP"."DEPTNO" IS NOT NULL)

And just to confirm our constraint state:

select constraint_name, status, validated, rely from user_constraints where constraint_name = 'EMP_FK_DEPT';
CONSTRAINT_NAME                STATUS   VALIDATED     RELY
------------------------------ -------- ------------- ----
EMP_FK_DEPT                    DISABLED NOT VALIDATED RELY 

Now we can also see the benefit in MV query rewrite:

create materialized view mv_emp_dept
enable query rewrite
as
select dept.deptno, dept.dname, count (*) 
from   emp, dept
where  emp.deptno = dept.deptno
group by dept.deptno, dept.dname;

begin
   dbms_stats.set_table_stats
   ( user, 'MV_EMP_DEPT', numrows=>100000, numblks=>10000 );
end; 
/

alter session set query_rewrite_enabled = false;
select count(*) from emp;
select * from table(dbms_xplan.display_cursor);
Plan hash value: 2083865914
-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       | 21917 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| EMP  |  1000K| 21917   (1)| 00:04:24 |
-------------------------------------------------------------------

Enable query_rewrite and we can use MV instead:

alter session set query_rewrite_enabled = true;
alter session set query_rewrite_integrity = trusted;
select count(*) from emp;
select * from table(dbms_xplan.display_cursor);
Plan hash value: 632580757
---------------------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |       |       |     3 (100)|          |
|   1 |  SORT AGGREGATE               |             |     1 |    13 |            |          |
|   2 |   MAT_VIEW REWRITE ACCESS FULL| MV_EMP_DEPT |     3 |    39 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

What is the Oracle Audit Vault?

Oracle Audit Vault is aptly named: it is a vault in which data about audit logs is placed, and it is based on two key concepts.  First, Oracle Audit Vault is designed to secure audit data at its source.  Second, Oracle Audit Vault is designed to be a data warehouse for audit data. 

The Oracle Audit Vault by itself does not generate audit data.  Before the Oracle Audit Vault can be used, standard auditing first needs to be enabled in the source databases.  Once auditing is enabled in the source databases, the Oracle Audit Vault collects the log and audit data, but it does not replicate or copy the actual application data.  This design premise of securing audit data at the source, rather than replicating it, differentiates the Oracle Audit Vault from other centralized logging solutions. 

Once log and audit data is generated in source databases, Oracle Audit Vault agents installed on the source database(s) collect the log and audit data and send it to the Audit Vault server.  By removing the log and audit data from the source system and storing it in the secure Audit Vault server, the integrity of the log and audit data can be ensured, and it can be proven that the data has not been tampered with.  The Oracle Audit Vault is designed to be a secure data warehouse of log and audit data.
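As a rough sketch of what enabling standard auditing at a source database can look like (a hypothetical illustration, not from the original post; it assumes an 11g-style database managed by an spfile, and the actions worth auditing will vary by site):

ALTER SYSTEM SET audit_trail = 'DB' SCOPE = SPFILE;
-- restart the instance for AUDIT_TRAIL to take effect, then, for example:
AUDIT CREATE SESSION BY ACCESS;    -- capture logons
AUDIT ALTER ANY TABLE BY ACCESS;   -- capture suspicious DDL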

Application Log and Audit Data

For applications, a key advantage of the Audit Vault’s secure-at-the-source approach is that the Oracle Audit Vault is transparent.  To use the Oracle Audit Vault with applications such as the Oracle E-Business Suite or SAP, standard Oracle database auditing only needs to be enabled on the application log and audit tables.  While auditing the application audit tables might seem duplicative, the advantage is that the integrity of the application audit data can be ensured (proven that it has not been tampered with) without having to replicate or copy the application log and audit data. 

For example, the Oracle E-Business Suite has the ability to log user login attempts, both successful and unsuccessful.  To protect the E-Business Suite login audit tables, standard Oracle database auditing first needs to be enabled.  An Oracle Audit Vault agent will then collect information about the E-Business Suite login audit tables.  If any deletes or updates occur against these tables, the Audit Vault will alert on and report the incident.  The Audit Vault is transparent to the Oracle E-Business Suite; no patches are required for the Oracle E-Business Suite to be used with the Oracle Audit Vault.
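For illustration only (a hypothetical sketch assuming the seeded APPLSYS login audit tables; table names can vary by E-Business Suite release):

AUDIT INSERT, UPDATE, DELETE ON applsys.fnd_logins BY ACCESS;
AUDIT INSERT, UPDATE, DELETE ON applsys.fnd_unsuccessful_logins BY ACCESS;

Any later UPDATE or DELETE against these tables then generates an audit record for the Audit Vault agent to collect.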

Figure 1 Secure At-Source for Application Log and Audit data

Figure 2 Vault of Log and Audit Data

If you have questions, please contact us at info@integrigy.com

Reference Tags: Auditing, Oracle Audit Vault
Categories: APPS Blogs, Security Blogs

Ten Year Site Anniversary

Marco Gralike - Fri, 2014-12-05 05:35
I realized yesterday that this site has passed its ten-year anniversary. In all funny…

You shouldn't think this happens only to you

Denes Kubicek - Fri, 2014-12-05 03:17
For several hours now I have been getting this while trying to access all blogs at http://www.odtug.com/apex. It seems that this list has a lot of problems listing all the relevant APEX blogs. The previous version from Dimitri was so much better and more user-friendly.
Categories: Development

Closure

Jonathan Lewis - Fri, 2014-12-05 02:11

It’s been a long time since I said anything interesting about transitive closure in Oracle, the mechanism by which Oracle can infer that if a = b and b = c then a = c but only (in Oracle’s case) if one of a, b, or c is a literal constant rather than a column. So with that quick reminder in place, here’s an example of optimizer mechanics to worry you. It’s not actually a demonstration of transitive closure coming into play, but I wanted to remind you of the logic to set the scene.
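As a minimal sketch of the constant case (hypothetical tables t1 and t2, each with a numeric column x, not the tables used in the demo below):

select t1.*, t2.*
from   t1, t2
where  t1.x = t2.x
and    t1.x = 10     -- closure: the optimizer can generate t2.x = 10 for itself
;

With only column-to-column predicates, no such extra predicate is generated.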

I have three identical tables, one million rows, no indexes. The SQL to create the first table is one I supplied a couple of days ago to demonstrate changes in join cardinality dependent on Oracle version:


create table t1
nologging
as
with generator as (
	select	--+ materialize
		rownum id
	from dual
	connect by
		level <= 1e4
)
select
	trunc(dbms_random.value(0,1000))	n_1000,
	trunc(dbms_random.value(0,750))		n_750,
	trunc(dbms_random.value(0,600))		n_600,
	trunc(dbms_random.value(0,400))		n_400,
	trunc(dbms_random.value(0,90))		n_90,
	trunc(dbms_random.value(0,72))		n_72,
	trunc(dbms_random.value(0,40))		n_40,
	trunc(dbms_random.value(0,3))		n_3
from
	generator	v1,
	generator	v2
where
	rownum <= 1e6
;

Here’s a simple SQL statement that joins the three tables:


select
	t1.*, t2.*, t3.*
from
	t1, t2, t3
where
	t2.n_90  = t1.n_90
and	t3.n_90  = t2.n_90
and	t3.n_600 = t2.n_600
and	t1.n_400 = 1
and	t2.n_400 = 2
and	t3.n_400 = 3
;

Given the various n_400 = {constant} predicates we should expect to see close to 2,500 rows from each table participating in the join – and that is exactly what Oracle predicts in the execution plan. The question is: what is the cardinality of the final join? Before showing you the execution plan and its prediction I’m going to bring transitivity into the picture.  Note the lines numbered 6 and 7.  If t2.n_90 = t1.n_90 and t3.n_90 = t2.n_90 then t3.n_90 = t1.n_90; so I might have written my query slightly differently – note the small change at line 7 below:


select
	t1.*, t2.*, t3.*
from
	t1, t2, t3
where
	t2.n_90  = t1.n_90
and	t3.n_90  = t1.n_90		-- changed
and	t3.n_600 = t2.n_600
and	t1.n_400 = 1
and	t2.n_400 = 2
and	t3.n_400 = 3
;

So here’s the exciting bit. My two queries are logically equivalent, and MUST return exactly the same row set. Check the final cardinality predictions in these two execution plans (from 12.1.0.2, but you get the same results in 11.2.0.4, older versions have other differences):


First Version - note the predicate for operation 3
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      | 70949 |  5820K|  1869  (10)| 00:00:01 |
|*  1 |  HASH JOIN          |      | 70949 |  5820K|  1869  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL | T1   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  3 |   HASH JOIN         |      |  2554 |   139K|  1245  (10)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL| T3   |  2500 | 70000 |   622  (10)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_90"="T1"."N_90")
   2 - filter("T1"."N_400"=1)
   3 - access("T3"."N_90"="T2"."N_90" AND "T3"."N_600"="T2"."N_600")
   4 - filter("T2"."N_400"=2)
   5 - filter("T3"."N_400"=3)

Second Version - note the predicate for operation 1
----------------------------------------------------------------------------
| Id  | Operation           | Name | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |  3264 |   267K|  1868  (10)| 00:00:01 |
|*  1 |  HASH JOIN          |      |  3264 |   267K|  1868  (10)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL | T1   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  3 |   HASH JOIN         |      | 10575 |   578K|  1245  (10)| 00:00:01 |
|*  4 |    TABLE ACCESS FULL| T2   |  2500 | 70000 |   622  (10)| 00:00:01 |
|*  5 |    TABLE ACCESS FULL| T3   |  2500 | 70000 |   622  (10)| 00:00:01 |
----------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T2"."N_90"="T1"."N_90" AND "T3"."N_90"="T1"."N_90")
   2 - filter("T1"."N_400"=1)
   3 - access("T3"."N_600"="T2"."N_600")
   4 - filter("T2"."N_400"=2)
   5 - filter("T3"."N_400"=3)

A small change in how the predicates are presented gives me a factor of 22 difference in the cardinality estimate – oops!

The actual result with my data was close to 3,000 rows – so one of the estimates in the second version was pretty good; but the point of the blog isn’t that you can “tune” the optimizer by carefully picking your way through transitive closure, the point is that a small “cosmetic” change you might make to a query could result in a significant change in the cardinality calculations which could then make a dramatic difference to the final execution plan. This example, by the way, depends on the same “multi-column sanity check” that showed up in the previous posting.

I will be expanding on this posting some time in the next couple of weeks but, again, the example should come up in my session on calculating selectivity at “Super Sunday” at UKOUG Tech 14.

Announcing SLOB 2.2 : Think Time and Limited-Scope User-Data Modification

Kevin Closson - Fri, 2014-12-05 00:19

This is a hasty blog post to get SLOB 2.2 out to those who are interested.

In addition to doing away with the cumbersome “seed” table and procedure.sql, this kit introduces 5 new slob.conf parameters. By default these parameters are disabled.

This SLOB distribution does not require re-executing setup.sh. One can simply adopt the kit and use it to test existing SLOB databases. The following explains the new slob.conf parameters:

DO_UPDATE_HOTSPOT=FALSE

When set to TRUE, modify SQL will no longer affect random rows spanning each session’s schema. Instead, each session will only modify HOTSPOT_PCT percent of its data.

HOTSPOT_PCT=10

This parameter controls how much of each session’s schema gets modified when UPDATE_PCT is non-zero. The default limits the scope of each session’s data modifications to a maximum of 10% of its data.

THINK_TM_MODULUS=0

When set to non-zero, this is a frequency control on how often sessions will incur think time. For example, if set to 7, every seventh SQL statement will be followed by a sleep (think time) for a random amount of time between THINK_TM_MIN and THINK_TM_MAX. It’s best to assign a prime number to THINK_TM_MODULUS.

THINK_TM_MIN=.1

The lower bound for selection of a random period to sleep when THINK_TM_MODULUS triggers a think time event.

THINK_TM_MAX=.5

The upper bound for selection of a random period to sleep when THINK_TM_MODULUS triggers a think time event.
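Taken together, a hypothetical slob.conf fragment using these parameters (values are illustrative only; UPDATE_PCT is a pre-existing parameter) might look like this:

UPDATE_PCT=20
DO_UPDATE_HOTSPOT=TRUE
HOTSPOT_PCT=10
THINK_TM_MODULUS=7
THINK_TM_MIN=.1
THINK_TM_MAX=.5

With these settings, each session confines its modifications to 10% of its schema, and every seventh SQL statement is followed by a random sleep of between 0.1 and 0.5 seconds.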

Notes About Think Time

The resolution supported for think time is hundredths of a second.

The following is a link to the SLOB 2.2 release tarball (md5 is be3612c50d134636a56ef9654b5865c5):

https://my.syncplicity.com/share/5vmflakvyqbawsy/2014.12.05.slob_2.2.1.tar

The additional tarball (at the following link) has a slob.conf, simple.ora and awr.txt that show a way to have 256 sessions produce the following load profile (on 2s16c32t E5 Xeon):
https://my.syncplicity.com/share/geydubw3q42okrt/think-time-help-files.tar

(Figure: load profile with think time enabled)


Filed under: oracle, SLOB Tagged: Oracle, SLOB

SQL Server tips: how to list orphaned logins

Yann Neuhaus - Thu, 2014-12-04 21:56

I have read a lot about orphaned database users in SQL Server, but I have almost never read about orphaned logins. Many of my customers migrate or remove databases in SQL Server. They forget - not every time, but often - to remove the logins and jobs associated with these databases. I have created a script - without any cursors, YES, it is possible - to search for all logins that are not "attached" to any database of an instance.
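In outline, the approach can look something like this (a hypothetical sketch, not the actual script; it assumes sysadmin rights and only considers ONLINE databases):

DECLARE @sql nvarchar(max);

-- Build one UNION of every database's user SIDs: no cursor needed.
SELECT @sql = STUFF((
    SELECT N' UNION SELECT sid FROM ' + QUOTENAME(name)
         + N'.sys.database_principals WHERE type IN (''S'',''U'',''G'') AND sid IS NOT NULL'
    FROM sys.databases
    WHERE state_desc = N'ONLINE'
    FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 7, N'');

CREATE TABLE #db_sids (sid varbinary(85));
INSERT INTO #db_sids EXEC (@sql);

-- SQL logins whose SID maps to no database user anywhere are "orphaned".
SELECT sp.name AS orphaned_login
FROM   sys.server_principals AS sp
WHERE  sp.type = 'S'   -- extend to 'U','G' for Windows logins if desired
AND    NOT EXISTS (SELECT 1 FROM #db_sids AS d WHERE d.sid = sp.sid);

DROP TABLE #db_sids;

Building the SID list with one string-assembled UNION is what makes the cursor unnecessary.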

GUI Be Gone

Michael Dinh - Thu, 2014-12-04 18:08

I am losing the luxury of using a GUI, since clients typically do not have X Windows or a VNC server installed.

Adapt or die: time to re-learn the command line.

Better to verify before installation than to clean up a problematic install.

Use runcluvfy to verify DB install:
runcluvfy.sh stage -pre dbinst -n rac01,rac02 -r 11gR2 -d /u01/app/oracle/product/11.2.0.4/db_1 -osdba dba -fixup -fixupdir /tmp -verbose

Use runInstaller -executePrereqs to verify responseFile for silent install and detect issues:
runInstaller -silent -executePrereqs -waitforcompletion -force -responseFile /media/sf_Linux/11.2.0.4/database/rac_db_swonly.rsp

Use grep to find results from installActions log:
grep -e '[[:upper:]]:' installActions2014-12-04_02-49-27PM.log | cut -d ":" -f1 | sort -u

DEMO:

[oracle@rac01:/media/sf_Linux/11.2.0.4/grid]
$ ./runcluvfy.sh stage -pre dbinst -n rac01,rac02 -r 11gR2 -d /u01/app/oracle/product/11.2.0.4/db_1 -osdba dba -fixup -fixupdir /tmp -verbose

Performing pre-checks for database installation

Checking node reachability...

Check: Node reachability from node "rac01"
  Destination Node                      Reachable?
  ------------------------------------  ------------------------
  rac02                                 yes
  rac01                                 yes
Result: Node reachability check passed from node "rac01"


Checking user equivalence...

Check: User equivalence for user "oracle"
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed
Result: User equivalence check passed for user "oracle"

Checking node connectivity...

Checking hosts config file...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed

Verification of the hosts config file successful


Interface information for node "rac02"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   10.0.2.15       10.0.2.0        0.0.0.0         10.0.2.2        08:00:27:87:91:11 1500
 eth1   192.168.56.12   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:4A:7B:27 1500
 eth1   192.168.56.33   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:4A:7B:27 1500
 eth1   192.168.56.32   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:4A:7B:27 1500
 eth1   192.168.56.22   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:4A:7B:27 1500
 eth2   10.10.10.12     10.0.0.0        0.0.0.0         10.0.2.2        08:00:27:E8:D6:21 1500
 eth2   169.254.82.236  169.254.0.0     0.0.0.0         10.0.2.2        08:00:27:E8:D6:21 1500


Interface information for node "rac01"
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address        MTU
 ------ --------------- --------------- --------------- --------------- ----------------- ------
 eth0   10.0.2.15       10.0.2.0        0.0.0.0         10.0.2.2        08:00:27:02:B1:57 1500
 eth1   192.168.56.11   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:BD:66:A4 1500
 eth1   192.168.56.21   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:BD:66:A4 1500
 eth1   192.168.56.31   192.168.56.0    0.0.0.0         10.0.2.2        08:00:27:BD:66:A4 1500
 eth2   10.10.10.11     10.0.0.0        0.0.0.0         10.0.2.2        08:00:27:60:79:0F 1500
 eth2   169.254.34.109  169.254.0.0     0.0.0.0         10.0.2.2        08:00:27:60:79:0F 1500


Check: Node connectivity for interface "eth1"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac02[192.168.56.12]            rac02[192.168.56.33]            yes
  rac02[192.168.56.12]            rac02[192.168.56.32]            yes
  rac02[192.168.56.12]            rac02[192.168.56.22]            yes
  rac02[192.168.56.12]            rac01[192.168.56.11]            yes
  rac02[192.168.56.12]            rac01[192.168.56.21]            yes
  rac02[192.168.56.12]            rac01[192.168.56.31]            yes
  rac02[192.168.56.33]            rac02[192.168.56.32]            yes
  rac02[192.168.56.33]            rac02[192.168.56.22]            yes
  rac02[192.168.56.33]            rac01[192.168.56.11]            yes
  rac02[192.168.56.33]            rac01[192.168.56.21]            yes
  rac02[192.168.56.33]            rac01[192.168.56.31]            yes
  rac02[192.168.56.32]            rac02[192.168.56.22]            yes
  rac02[192.168.56.32]            rac01[192.168.56.11]            yes
  rac02[192.168.56.32]            rac01[192.168.56.21]            yes
  rac02[192.168.56.32]            rac01[192.168.56.31]            yes
  rac02[192.168.56.22]            rac01[192.168.56.11]            yes
  rac02[192.168.56.22]            rac01[192.168.56.21]            yes
  rac02[192.168.56.22]            rac01[192.168.56.31]            yes
  rac01[192.168.56.11]            rac01[192.168.56.21]            yes
  rac01[192.168.56.11]            rac01[192.168.56.31]            yes
  rac01[192.168.56.21]            rac01[192.168.56.31]            yes
Result: Node connectivity passed for interface "eth1"


Check: TCP connectivity of subnet "192.168.56.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac01:192.168.56.11             rac02:192.168.56.12             passed
  rac01:192.168.56.11             rac02:192.168.56.33             passed
  rac01:192.168.56.11             rac02:192.168.56.32             passed
  rac01:192.168.56.11             rac02:192.168.56.22             passed
  rac01:192.168.56.11             rac01:192.168.56.21             passed
  rac01:192.168.56.11             rac01:192.168.56.31             passed
Result: TCP connectivity check passed for subnet "192.168.56.0"


Check: Node connectivity for interface "eth2"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac02[10.10.10.12]              rac01[10.10.10.11]              yes
Result: Node connectivity passed for interface "eth2"


Check: TCP connectivity of subnet "10.0.0.0"
  Source                          Destination                     Connected?
  ------------------------------  ------------------------------  ----------------
  rac01:10.10.10.11               rac02:10.10.10.12               passed
Result: TCP connectivity check passed for subnet "10.0.0.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "192.168.56.0".
Subnet mask consistency check passed for subnet "10.0.0.0".
Subnet mask consistency check passed.

Result: Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "192.168.56.0" for multicast communication with multicast group "230.0.1.0" passed.

Checking subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0"...
Check of subnet "10.0.0.0" for multicast communication with multicast group "230.0.1.0" passed.

Check of multicast communication passed.

Check: Total memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         3.8674GB (4055296.0KB)    1GB (1048576.0KB)         passed
  rac01         3.8674GB (4055296.0KB)    1GB (1048576.0KB)         passed
Result: Total memory check passed

Check: Available memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         2.6666GB (2796104.0KB)    50MB (51200.0KB)          passed
  rac01         2.7855GB (2920820.0KB)    50MB (51200.0KB)          passed
Result: Available memory check passed

Check: Swap space
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         8GB (8388604.0KB)         3.8674GB (4055296.0KB)    passed
  rac01         8GB (8388604.0KB)         3.8674GB (4055296.0KB)    passed
Result: Swap space check passed

Check: Free disk space for "rac02:/u01/app/oracle/product/11.2.0.4/db_1,rac02:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/oracle/product/11.2.0.4/db_1  rac02         /             45.459GB      6.7GB         passed
  /tmp              rac02         /             45.459GB      6.7GB         passed
Result: Free disk space check passed for "rac02:/u01/app/oracle/product/11.2.0.4/db_1,rac02:/tmp"

Check: Free disk space for "rac01:/u01/app/oracle/product/11.2.0.4/db_1,rac01:/tmp"
  Path              Node Name     Mount point   Available     Required      Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  /u01/app/oracle/product/11.2.0.4/db_1  rac01         /             45.4215GB     6.7GB         passed
  /tmp              rac01         /             45.4215GB     6.7GB         passed
Result: Free disk space check passed for "rac01:/u01/app/oracle/product/11.2.0.4/db_1,rac01:/tmp"

Check: User existence for "oracle"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac02         passed                    exists(54321)
  rac01         passed                    exists(54321)

Checking for multiple users with UID value 54321
Result: Check for multiple users with UID value 54321 passed
Result: User existence check passed for "oracle"

Check: Group existence for "oinstall"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac02         passed                    exists
  rac01         passed                    exists
Result: Group existence check passed for "oinstall"

Check: Group existence for "dba"
  Node Name     Status                    Comment
  ------------  ------------------------  ------------------------
  rac02         passed                    exists
  rac01         passed                    exists
Result: Group existence check passed for "dba"

Check: Membership of user "oracle" in group "oinstall" [as Primary]
  Node Name         User Exists   Group Exists  User in Group  Primary       Status
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             yes           yes           yes           yes           passed
  rac01             yes           yes           yes           yes           passed
Result: Membership check for user "oracle" in group "oinstall" [as Primary] passed

Check: Membership of user "oracle" in group "dba"
  Node Name         User Exists   Group Exists  User in Group  Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             yes           yes           yes           passed
  rac01             yes           yes           yes           passed
Result: Membership check for user "oracle" in group "dba" passed

Check: Run level
  Node Name     run level                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         5                         3,5                       passed
  rac01         5                         3,5                       passed
Result: Run level check passed

Check: Hard limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             hard          65536         65536         passed
  rac01             hard          65536         65536         passed
Result: Hard limits check passed for "maximum open file descriptors"

Check: Soft limits for "maximum open file descriptors"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             soft          1024          1024          passed
  rac01             soft          1024          1024          passed
Result: Soft limits check passed for "maximum open file descriptors"

Check: Hard limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             hard          16384         16384         passed
  rac01             hard          16384         16384         passed
Result: Hard limits check passed for "maximum user processes"

Check: Soft limits for "maximum user processes"
  Node Name         Type          Available     Required      Status
  ----------------  ------------  ------------  ------------  ----------------
  rac02             soft          16384         2047          passed
  rac01             soft          16384         2047          passed
Result: Soft limits check passed for "maximum user processes"

Check: System architecture
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         x86_64                    x86_64                    passed
  rac01         x86_64                    x86_64                    passed
Result: System architecture check passed

Check: Kernel version
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         2.6.39-400.17.1.el6uek.x86_64  2.6.32                    passed
  rac01         2.6.39-400.17.1.el6uek.x86_64  2.6.32                    passed
Result: Kernel version check passed

Check: Kernel parameter for "semmsl"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             250           250           250           passed
  rac01             250           250           250           passed
Result: Kernel parameter check passed for "semmsl"

Check: Kernel parameter for "semmns"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             32000         32000         32000         passed
  rac01             32000         32000         32000         passed
Result: Kernel parameter check passed for "semmns"

Check: Kernel parameter for "semopm"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             100           100           100           passed
  rac01             100           100           100           passed
Result: Kernel parameter check passed for "semopm"

Check: Kernel parameter for "semmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             128           128           128           passed
  rac01             128           128           128           passed
Result: Kernel parameter check passed for "semmni"

Check: Kernel parameter for "shmmax"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4398046511104  4398046511104  2076311552    passed
  rac01             4398046511104  4398046511104  2076311552    passed
Result: Kernel parameter check passed for "shmmax"

Check: Kernel parameter for "shmmni"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4096          4096          4096          passed
  rac01             4096          4096          4096          passed
Result: Kernel parameter check passed for "shmmni"

Check: Kernel parameter for "shmall"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4294967296    4294967296    2097152       passed
  rac01             4294967296    4294967296    2097152       passed
Result: Kernel parameter check passed for "shmall"

Check: Kernel parameter for "file-max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             6815744       6815744       6815744       passed
  rac01             6815744       6815744       6815744       passed
Result: Kernel parameter check passed for "file-max"

Check: Kernel parameter for "ip_local_port_range"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
  rac01             between 9000.0 & 65500.0  between 9000.0 & 65500.0  between 9000.0 & 65500.0  passed
Result: Kernel parameter check passed for "ip_local_port_range"

Check: Kernel parameter for "rmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             262144        262144        262144        passed
  rac01             262144        262144        262144        passed
Result: Kernel parameter check passed for "rmem_default"

Check: Kernel parameter for "rmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             4194304       4194304       4194304       passed
  rac01             4194304       4194304       4194304       passed
Result: Kernel parameter check passed for "rmem_max"

Check: Kernel parameter for "wmem_default"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             262144        262144        262144        passed
  rac01             262144        262144        262144        passed
Result: Kernel parameter check passed for "wmem_default"

Check: Kernel parameter for "wmem_max"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             1048576       1048576       1048576       passed
  rac01             1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "wmem_max"

Check: Kernel parameter for "aio-max-nr"
  Node Name         Current       Configured    Required      Status        Comment
  ----------------  ------------  ------------  ------------  ------------  ------------
  rac02             1048576       1048576       1048576       passed
  rac01             1048576       1048576       1048576       passed
Result: Kernel parameter check passed for "aio-max-nr"

Check: Package existence for "binutils"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed
  rac01         binutils-2.20.51.0.2-5.36.el6  binutils-2.20.51.0.2      passed
Result: Package existence check passed for "binutils"

Check: Package existence for "compat-libcap1"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         compat-libcap1-1.10-1     compat-libcap1-1.10       passed
  rac01         compat-libcap1-1.10-1     compat-libcap1-1.10       passed
Result: Package existence check passed for "compat-libcap1"

Check: Package existence for "compat-libstdc++-33(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
  rac01         compat-libstdc++-33(x86_64)-3.2.3-69.el6  compat-libstdc++-33(x86_64)-3.2.3  passed
Result: Package existence check passed for "compat-libstdc++-33(x86_64)"

Check: Package existence for "libgcc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libgcc(x86_64)-4.4.7-11.el6  libgcc(x86_64)-4.4.4      passed
  rac01         libgcc(x86_64)-4.4.7-11.el6  libgcc(x86_64)-4.4.4      passed
Result: Package existence check passed for "libgcc(x86_64)"

Check: Package existence for "libstdc++(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libstdc++(x86_64)-4.4.7-11.el6  libstdc++(x86_64)-4.4.4   passed
  rac01         libstdc++(x86_64)-4.4.7-11.el6  libstdc++(x86_64)-4.4.4   passed
Result: Package existence check passed for "libstdc++(x86_64)"

Check: Package existence for "libstdc++-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libstdc++-devel(x86_64)-4.4.7-11.el6  libstdc++-devel(x86_64)-4.4.4  passed
  rac01         libstdc++-devel(x86_64)-4.4.7-11.el6  libstdc++-devel(x86_64)-4.4.4  passed
Result: Package existence check passed for "libstdc++-devel(x86_64)"

Check: Package existence for "sysstat"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         sysstat-9.0.4-20.el6      sysstat-9.0.4             passed
  rac01         sysstat-9.0.4-20.el6      sysstat-9.0.4             passed
Result: Package existence check passed for "sysstat"

Check: Package existence for "gcc"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         gcc-4.4.7-11.el6          gcc-4.4.4                 passed
  rac01         gcc-4.4.7-11.el6          gcc-4.4.4                 passed
Result: Package existence check passed for "gcc"

Check: Package existence for "gcc-c++"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         gcc-c++-4.4.7-11.el6      gcc-c++-4.4.4             passed
  rac01         gcc-c++-4.4.7-11.el6      gcc-c++-4.4.4             passed
Result: Package existence check passed for "gcc-c++"

Check: Package existence for "ksh"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         ksh-20120801-21.el6.1     ksh-20100621              passed
  rac01         ksh-20120801-21.el6.1     ksh-20100621              passed
Result: Package existence check passed for "ksh"

Check: Package existence for "make"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         make-3.81-20.el6          make-3.81                 passed
  rac01         make-3.81-20.el6          make-3.81                 passed
Result: Package existence check passed for "make"

Check: Package existence for "glibc(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         glibc(x86_64)-2.12-1.149.el6  glibc(x86_64)-2.12        passed
  rac01         glibc(x86_64)-2.12-1.149.el6  glibc(x86_64)-2.12        passed
Result: Package existence check passed for "glibc(x86_64)"

Check: Package existence for "glibc-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         glibc-devel(x86_64)-2.12-1.149.el6  glibc-devel(x86_64)-2.12  passed
  rac01         glibc-devel(x86_64)-2.12-1.149.el6  glibc-devel(x86_64)-2.12  passed
Result: Package existence check passed for "glibc-devel(x86_64)"

Check: Package existence for "libaio(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed
  rac01         libaio(x86_64)-0.3.107-10.el6  libaio(x86_64)-0.3.107    passed
Result: Package existence check passed for "libaio(x86_64)"

Check: Package existence for "libaio-devel(x86_64)"
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac02         libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
  rac01         libaio-devel(x86_64)-0.3.107-10.el6  libaio-devel(x86_64)-0.3.107  passed
Result: Package existence check passed for "libaio-devel(x86_64)"

Checking for multiple users with UID value 0
Result: Check for multiple users with UID value 0 passed

Check: Current group ID
Result: Current group ID check passed

Starting check for consistency of primary group of root user
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed

Check for consistency of root user's primary group passed

Check default user file creation mask
  Node Name     Available                 Required                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac02         0022                      0022                      passed
  rac01         0022                      0022                      passed
Result: Default user file creation mask check passed

Checking CRS integrity...

Clusterware version consistency passed
The Oracle Clusterware is healthy on node "rac02"
The Oracle Clusterware is healthy on node "rac01"

CRS integrity check passed

Checking Cluster manager integrity...


Checking CSS daemon...

  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 running
  rac01                                 running

Oracle Cluster Synchronization Services appear to be online.

Cluster manager integrity check passed


Checking node application existence...

Checking existence of VIP node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac02         yes                       yes                       passed
  rac01         yes                       yes                       passed
VIP node application check passed

Checking existence of NETWORK node application (required)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac02         yes                       yes                       passed
  rac01         yes                       yes                       passed
NETWORK node application check passed

Checking existence of GSD node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac02         no                        no                        exists
  rac01         no                        no                        exists
GSD node application is offline on nodes "rac02,rac01"

Checking existence of ONS node application (optional)
  Node Name     Required                  Running?                  Comment
  ------------  ------------------------  ------------------------  ----------
  rac02         no                        yes                       passed
  rac01         no                        yes                       passed
ONS node application check passed


Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
Check: CTSS Resource running on all nodes
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed
Result: CTSS resource check passed


Querying CTSS for time offset on all nodes...
Result: Query of CTSS for time offset passed

Check CTSS state started...
Check: CTSS state
  Node Name                             State
  ------------------------------------  ------------------------
  rac02                                 Active
  rac01                                 Active
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Reference Time Offset Limit: 1000.0 msecs
Check: Reference Time Offset
  Node Name     Time Offset               Status
  ------------  ------------------------  ------------------------
  rac02         0.0                       passed
  rac01         0.0                       passed

Time offset is within the specified limits on the following set of nodes:
"[rac02, rac01]"
Result: Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed
Checking consistency of file "/etc/resolv.conf" across nodes

Checking the file "/etc/resolv.conf" to make sure only one of domain and search entries is defined
File "/etc/resolv.conf" does not have both domain and search entries defined
Checking if domain entry in file "/etc/resolv.conf" is consistent across the nodes...
domain entry in file "/etc/resolv.conf" is consistent across nodes
Checking if search entry in file "/etc/resolv.conf" is consistent across the nodes...
search entry in file "/etc/resolv.conf" is consistent across nodes
Checking file "/etc/resolv.conf" to make sure that only one search entry is defined
All nodes have one search entry defined in file "/etc/resolv.conf"
Checking all nodes to make sure that search entry is "localdomain" as found on node "rac02"
All nodes of the cluster have same value for 'search'
Checking DNS response time for an unreachable node
  Node Name                             Status
  ------------------------------------  ------------------------
  rac02                                 passed
  rac01                                 passed
The DNS response time for an unreachable node is within acceptable limit on all nodes

File "/etc/resolv.conf" is consistent across nodes

Check: Time zone consistency
Result: Time zone consistency check passed

Checking Single Client Access Name (SCAN)...
  SCAN Name         Node          Running?      ListenerName  Port          Running?
  ----------------  ------------  ------------  ------------  ------------  ------------
  dinh-scan         rac01         true          LISTENER_SCAN1  1521          true
  dinh-scan         rac02         true          LISTENER_SCAN2  1521          true
  dinh-scan         rac02         true          LISTENER_SCAN3  1521          true

Checking TCP connectivity to SCAN Listeners...
  Node          ListenerName              TCP connectivity?
  ------------  ------------------------  ------------------------
  rac01         LISTENER_SCAN1            yes
  rac01         LISTENER_SCAN2            yes
  rac01         LISTENER_SCAN3            yes
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "dinh-scan"...

Checking integrity of name service switch configuration file "/etc/nsswitch.conf" ...
Checking if "hosts" entry in file "/etc/nsswitch.conf" is consistent across nodes...
Checking file "/etc/nsswitch.conf" to make sure that only one "hosts" entry is defined
More than one "hosts" entry does not exist in any "/etc/nsswitch.conf" file
All nodes have same "hosts" entry defined in file "/etc/nsswitch.conf"
Check for integrity of name service switch configuration file "/etc/nsswitch.conf" passed

  SCAN Name     IP Address                Status                    Comment
  ------------  ------------------------  ------------------------  ----------
  dinh-scan     192.168.56.33             passed
  dinh-scan     192.168.56.31             passed
  dinh-scan     192.168.56.32             passed

Verification of SCAN VIP and Listener setup passed
Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Checking Database and Clusterware version compatibility


Checking ASM and CRS version compatibility
ASM and CRS versions are compatible
Database version "11.2" is compatible with the Clusterware version "11.2.0.4.0".
Database Clusterware version compatibility passed

Pre-check for database installation was successful.
[oracle@rac01:/media/sf_Linux/11.2.0.4/grid]
$

[oracle@rac01:/media/sf_Linux/11.2.0.4/database]
$ ./runInstaller -silent -executePrereqs -showProgress -waitforcompletion -force -responseFile /media/sf_Linux/11.2.0.4/database/rac_db_swonly.rsp

Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 44381 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 8191 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-12-04_02-49-27PM. Please wait ...
[oracle@rac01:/media/sf_Linux/11.2.0.4/database]
$ cd /u01/app/oraInventory/logs/
[oracle@rac01:/u01/app/oraInventory/logs]
$ ls -lrt
total 1068
-rw-r-----. 1 grid   oinstall     47 Nov 30 19:40 time2014-11-30_07-40-41PM.log
-rw-r-----. 1 grid   oinstall      0 Nov 30 19:42 oraInstall2014-11-30_07-40-41PM.err
-rw-r-----. 1 grid   oinstall      0 Nov 30 19:45 oraInstall2014-11-30_07-40-41PM.err.rac02
-rw-r-----. 1 grid   oinstall    121 Nov 30 19:46 oraInstall2014-11-30_07-40-41PM.out.rac02
-rw-r-----. 1 grid   oinstall   7650 Nov 30 19:46 AttachHome2014-11-30_07-40-41PM.log.rac02
-rw-r-----. 1 grid   oinstall    348 Nov 30 19:46 silentInstall2014-11-30_07-40-41PM.log
-rw-r-----. 1 grid   oinstall   1968 Nov 30 19:46 oraInstall2014-11-30_07-40-41PM.out
-rw-r-----. 1 grid   oinstall 975962 Nov 30 19:46 installActions2014-11-30_07-40-41PM.log
-rw-r-----. 1 grid   oinstall      0 Nov 30 20:12 oraInstall2014-11-30_08-12-19PM.err
-rw-r-----. 1 grid   oinstall      0 Nov 30 20:12 oraInstall2014-11-30_08-12-19PM.err.rac02
-rw-r-----. 1 grid   oinstall   7357 Nov 30 20:13 UpdateNodeList2014-11-30_08-12-19PM.log.rac02
-rw-r-----. 1 grid   oinstall     33 Nov 30 20:13 oraInstall2014-11-30_08-12-19PM.out.rac02
-rw-r-----. 1 grid   oinstall  11305 Nov 30 20:13 UpdateNodeList2014-11-30_08-12-19PM.log
-rw-r-----. 1 grid   oinstall     33 Nov 30 20:13 oraInstall2014-11-30_08-12-19PM.out
-rw-r--r--. 1 oracle oinstall     47 Dec  4 14:49 time2014-12-04_02-49-27PM.log
-rw-rw----. 1 oracle oinstall  56317 Dec  4 14:49 installActions2014-12-04_02-49-27PM.log
-rw-r--r--. 1 oracle oinstall      0 Dec  4 14:49 oraInstall2014-12-04_02-49-27PM.out
-rw-r--r--. 1 oracle oinstall      0 Dec  4 14:49 oraInstall2014-12-04_02-49-27PM.err

[oracle@rac01:/u01/app/oraInventory/logs]
$ tail -20 installActions2014-12-04_02-49-27PM.log

INFO: Actual Value:libaio-devel(x86_64)-0.3.107-10.el6
INFO: -----------------------------------------------
INFO: *********************************************
INFO: Users With Same UID: This test checks that multiple users do not exist with user id as "0".
INFO: Severity:CRITICAL
INFO: OverallStatus:SUCCESSFUL
INFO: -----------------------------------------------
INFO: Verification Result for Node:rac01
WARNING: Result values are not available for this verification task
INFO: *********************************************
INFO: Root user consistency: This test checks the consistency of the primary group of the root user across the cluster nodes
INFO: Severity:IGNORABLE
INFO: OverallStatus:SUCCESSFUL
INFO: -----------------------------------------------
INFO: Verification Result for Node:rac01
WARNING: Result values are not available for this verification task
INFO: All forked task are completed at state prepInstall
INFO: Exit Status is 0
INFO: Shutdown Oracle Database 11g Release 2 Installer
INFO: Unloading Setup Driver

[oracle@rac01:/u01/app/oraInventory/logs]
$ grep -e '[[:upper:]]:' installActions2014-12-04_02-49-27PM.log|cut -d ":" -f1|sort -u

INFO
/tmp/OraInstall2014-12-04_02-49-27PM
WARNING
[oracle@rac01:/u01/app/oraInventory/logs]
$

EBS VMs explained

Wim Coekaerts - Thu, 2014-12-04 16:59
A great blog entry from the EBS team explaining the various Oracle VM appliances for EBS:

https://blogs.oracle.com/stevenChan/entry/e_business_suite_virtual_machines

Oracle Priority Support Infogram for 04-DEC-2014

Oracle Infogram - Thu, 2014-12-04 14:46

RAC
Oracle Database In-Memory on RAC - Part 2, from Oracle Database In-Memory.
Performance
From flashdba: awr-parser.sh – Script for parsing Oracle AWR Reports.
OVM
Oracle VM 3.2.9 Released, from Oracle's Virtualization Blog.
ZFS
New ZFS Videos, from Oracle EMEA Value-Added Distributor News.
Getting Down to the Metal
Announcing Oracle Server X5-2 and X5-2L, from Systems Technology Enablement for Partners (STEP).
From the same source: Now Available: Oracle FS1 Flash Storage System Implementation Exam.
VCA
From Wim Coekaerts Blog, SAP certification for Oracle's Virtual Compute Appliance X4-2 (VCA X4-2).
SOA
Best Practices for SOA Suite 11g to 12c Upgrade, from the SOA & BPM Partner Community Blog.
From the same source: SOA 12c demo system (12.1.3) hosted at Oracle Cloud – free for Oracle Partners.
WLS
Additional new material WebLogic Community, from WebLogic Partner Community EMEA.
Java
From the JCP Program Office: JSR Updates - Java EE 8 & Java SE 9.
ADF
From Archbeat: 2 Minute Tech Tip: Using Oracle ADF Libraries.
Analytics
Advisor Webcast: Getting Started with Essbase Aggregate Storage Option - ASO 101, from Business Analytics - Proactive Support.
EBS
From Oracle E-Business Suite Technology:
JRE Support Ends Earlier than JDK Support
EBS VMs: Appliances, Templates, and Assemblies Explained
November 2014 Updates to AD and TXK for EBS 12.2
From Oracle E-Business Suite Support Blog:
Webcast: Rapid Planning: Enabling Mass Updates to Demand Priorities and Background Processing
Webcast: Discrete Costing Functional Changes And Bug Fixes For 12.2.3 And 12.2.4
Considering Customizations with POR_CUSTOM_PKG for iProcurement Requisitions? Check this out!
Webcast: Get Proactive with Doc ID 432.1

Adaptive Case Management 12c and ADF Human Tasks

Andrejus Baranovski - Thu, 2014-12-04 14:08
I'm diving into a new topic - Adaptive Case Management (ACM) 12c and ADF integration. This is the first post in the category, with more planned for the future. I strongly believe that ACM makes a great extension to standard BPM, mainly because it allows you to define a loose process without a strict order of steps: process steps can be executed in different orders, depending on what the situation requires at a given time. I will explain how to implement an ADF Human Task for an ACM activity and share several tips on how to make it run in the BPM Workspace application.

This is how the sample application (HotelBookingProcessing_v1.zip) is constructed: there are two Human Tasks (AddHotelBooking and ValidateHotelBooking) and the HotelBookingProcessing case control:


The HotelBookingProcessing case is defined with the Hotel Booking Details data type (this type is based on an XSD schema and is defined as a Business Component variable - not to be confused with ADF Business Components). You can think of it as the main data structure type for the case; it can be transferred into every case activity:


There are two stakeholders defined; this helps control who has access to each human task and case activity. The Customer Service Representative is supposed to add a new hotel booking, while the Financial Accountant can approve or reject it:


I have created the Human Task activity directly through the composite; it is not necessary to have a BPM process to define human tasks. It is important to set the Application Context property of the Human Task to OracleBPMProcessRolesApp, as this will help later with security role configuration in BPM Workspace:


In order to register the human task with case management, we are given the option to promote the human task to a case activity. This allows the human task to be initiated from case management:


We can define input and output for the case activity, based on the same data type defined in the case. This allows data to be transferred from the case to the activity and, in our situation, on to the underlying human task:


You can generate an ADF form for the case data; this form will be rendered in the BPM Workspace case UI. I'm going to look into the customisation options for this kind of form in future posts (the checkbox is set to generate an editable form):


This is how the case data form is rendered; out of the box there is an option to save and reset the data for the case - Hotel Booking Details:


The human task form is generated in the same way as in 11g - no change here for 12c. You could auto-generate this form, but it produces a lot of code, and I would prefer to build a custom lightweight form instead:


Important hint - the auto-generated human task form will not render in the BPM Workspace window. You need to change the FRAME_BUSTING parameter generated in web.xml from differentDomain to never. With the differentDomain option the human task form doesn't render; in 11g the form was generated with the option set to never, and for some reason this was changed in 12c - not for the better. The relevant entry is sketched below:
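
For reference, this is roughly what the relevant entry in the generated web.xml looks like after the change - a minimal sketch assuming the standard ADF parameter name; the rest of the file is omitted:

<context-param>
  <!-- "never" disables framebusting so the task form can render inside the BPM Workspace iFrame -->
  <param-name>oracle.adf.view.rich.security.FRAME_BUSTING</param-name>
  <param-value>never</param-value>
</context-param>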


With FRAME_BUSTING set to never, the human task form renders well:


The human task is started directly from the case activity - Add Hotel Booking from the available list of activities:


In the case activity log we can track when an activity was started, completed, or modified. This is quite helpful for tracing activity history:


One of the main advantages: the user can decide the order of activities, in contrast to a strict BPM process. Start the Validate Hotel Booking activity; this will create a new task for the Financial Accountant:


The activity was started, as we can see from the audit log:


This is a read-only human task, rendered in BPM Workspace - the Financial Accountant can approve or reject it:


The case can be closed and the hotel booking approved:

Presenting at #UKOUG_APPS14 (8th Dec Monday 4:30 PM) : EBS integration with Identity Management

Online Apps DBA - Thu, 2014-12-04 14:05
  I am presenting the paper Integrating Oracle E-Business Suite with Identity & Access Management & Lessons Learned with Neha Mittal. The presentation is on 8th December (Monday) at 4:30 PM in Liverpool (UK), covering: an overview of Oracle Identity & Access Management; integration options including OAM (SSO), OIM (Provisioning & Reconciliation) & GRC (SoD); and high-level lessons learned from our various [...]

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

Getting Started with Oracle Fusion Cloud Integrations

Angelo Santagata - Thu, 2014-12-04 12:32

Hey all,

If you're getting started with integrating your application with Oracle Fusion Cloud, then I wholeheartedly recommend you read the following resources before starting. Most of the material below is specific to Oracle Sales Cloud because it has App Composer; however, much of it is also applicable to HCM, ERP and other Fusion products.

Some of these are a MUST read before you start integrating/coding/customizing :-) I've put them here in the order I think works for most people... kind of a getting-started checklist.

I consider this a living blog entry, in that I'll be updating it on a regular basis, so make sure you check this location periodically.



Top 5 Fusion Integration Must-Reads 

1. Familiarise yourself with the Sales Cloud Documentation. Specifically:
    • Go through the "User" section - documents like the "Using Sales Cloud" book. If you're a techie like me you'll sit there and think, "Hey, this is functional, why do I need to read this?" - well, you do. Even as a technical person, reading through the various user documents like "Using Sales Cloud" as an end user helps you understand what the different concepts/topics are. You'll also understand things like the difference between a Prospect and a Sales Account, territories, assessments and much more. It's worth a quick read, but do make sure you have a functional consultant to hand so you're not building something which can be done by configuration...
    • Read through all the books in the "Extensibility" section. The only anomaly here is the "Business Card Scanner mobile App" document; it's a walkthrough of how to integrate Sales Cloud with a 3rd party service to do business card scanning with MAF... I'd leave that till last...
    • Peruse the Development section; it contains a number of example use cases, e.g. how to create a customer in R8 and how to call an outbound service. It's a good read...
2. Get an overview of the tasks you might do
    • Once you've done this, look at the "Tasks" section of the docs... Here the curriculum development folk have categorised some of the most common tasks and put shortcuts to the documentation detailing how to do them, e.g. adding a field to Sales Cloud, calling a SOAP web service, etc.
3. Are you going to be customizing the SalesCloud User Interface?
    • Most integrations customize the Sales Cloud user interface. The customization could be as simple as adding a few fields to a standard object (like Opportunity), creating new objects (like MyOrder), adding validation, or adding external content to one or more pages.
    • If you're adding fields, make sure you read the "Introduction to SalesCloud Customizations" section.
    • If you will be adding validation or triggers, or calling web services from Sales Cloud, then make sure you read up on groovy scripting - specifically the chapter on calling outbound SOAP web services from groovy.
    • Make sure you understand the difference between calling a SOAP service from groovy and creating an outbound web service call using an object workflow (see the groovy sketch after this list)
      • In a nutshell, calling a SOAP service from groovy is a synchronous call, while calling a SOAP service from an object workflow is a fire-and-forget asynchronous call
    • On the subject of groovy, be aware that in Sales Cloud you do not have access to the entire groovy language; only a number of groovy functions are supported (whitelisting), and these are documented at the end of the book in Appendix A, Supported Groovy Classes and Methods
4. Are you going to be accessing Sales Cloud data from an external app?
    • If you think you will be calling SOAP web services in Sales Cloud, then "Getting started with WebServices" is a MUST read... This doc goes into detail on how to look up the SOAP web service in Fusion OER, how to create static proxies, how to query data, and how to perform CRUD operations...
    • Get to know Oracle Fusion OER; it's a gold mine of information...
5. Do you need your app to know who is calling it? 
    • Many integrations involve embedding a 3rd party web app into Oracle Sales Cloud as an iFrame, or pressing a button in Sales Cloud and calling the 3rd party app (either a UI or web service call). If you're doing this then you'll almost certainly need to pass a "token" to the 3rd party application, so that it can call back to Sales Cloud with a key rather than a plain-text username/password combo. We call this key a JWT token, and it's based on industry standards (http://jwt.io/). For starters, read my JWT getting started blog entry and then use the links to read the core documentation. A hedged groovy sketch follows this list.
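
To make the synchronous/asynchronous distinction in point 3 concrete, here is a minimal groovy sketch of the synchronous style - assuming a SOAP connection registered in App Composer under the name OrderService exposing a createOrder operation; the connection name, operation, payload fields and the Comment_c custom field are all illustrative, not taken from the docs above:

// Runs from, e.g., an object trigger; the script blocks until the service replies.
def payload = [OrderNumber: 'ORD-1001', Amount: 150]
def response = adf.webServices.OrderService.createOrder(payload)
// Because the call is synchronous, the result is available immediately and can be
// stored back on the current object. An object workflow call, by contrast, fires
// the request and continues without waiting for any response.
setAttribute('Comment_c', response?.Status)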
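
And for point 5, a hedged groovy sketch of fetching the signed-in user's JWT so it can be passed to the 3rd party app. The SecuredTokenBean class below is the one referenced in Fusion developer material, but treat it as an assumption and verify it against your release; the target URL is illustrative:

// Obtain a JWT for the current session user; the 3rd party app can present this
// token back to Sales Cloud instead of a plain-text username/password combo.
def jwt = (new oracle.apps.fnd.applcore.common.SecuredTokenBean()).getTrustToken()
def url = 'https://thirdparty.example.com/app?jwt=' + jwt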

That covers the top 5 areas of integration. Now for a list of places where you can get even MORE useful information:

More Information sources

  1. Oracle Learning Centres Quick Webinars on SalesCloud Integration
    • I worked with Development to get this mini tutorial series done; it's excellent, but I'm obviously not biased, eh ;-) 
  2. R9 Simplified WebServices doc
    • This is a new document we recently completed on how to use the new R9 Simplified SOAP TCA Services. Although the document is targeted at R9 developers, it covers many of the standard topics, like how to create a proxy, how to build a create operation, etc. It even has some sample CRUD payloads which are really, really useful.
  3. Oracle Fusion Developer Relations
    • Good friends of mine; they host a fantastic blog, YouTube channel and whitepapers for Fusion developers - another gold mine of information covering customization, extensions and integration code.
  4. Oracle Fusion Developer Relations Youtube channel
    • Not content with an awesome blog, the Developer Relations folk even have a YouTube channel where they host a collection of short "tutorials" showing all sorts of things, such as "How to add a field to a page", "How to call a webservice", etc.
  5. Oracle Fusion Developer Relations Whitepapers
    • Whitepapers on topics including custom application development, ESS development, and Groovy and Expression Language.
  6. And finally there is my humble blog, where I try to blog on things which aren't documented anywhere else... if they are documented and are interesting, I often link to them - mainly because I want to be able to find them myself :-)

That's it, folks!

If there are blog entries you'd like to see, or specific how-tos, then feel free to contact me.

Angelo 


Debugging PeopleSoft Absence Management Forecast

Javier Delgado - Thu, 2014-12-04 12:02
Forecasting is one of the most useful PeopleSoft Absence Management functionalities. It allows users to know what the resulting balance will be when entering an absence. The alternative is to wait until the Global Payroll calendar group is calculated, which is naturally far from being an online calculation.

Although this is a handy functionality, the calculation process does not always return the expected results. For some specific needs, the system elements FCST ASOF DT, FCST BGN DT and FCST END DT may be needed. These elements are null in normal Global Payroll runs, so the formulas may behave differently in those runs than in the actual forecast execution. If you ever hit a calculation issue in the forecast process that cannot be solved by looking at the element definitions, you may be stuck.

When this type of issue is found in a normal Global Payroll execution, one handy option is to enable the Debug information and then review the Element Resolution Chain page. This page shows the step-by-step calculation of each element and is particularly helpful in identifying how an element is calculated.

Unfortunately, this information is not available in the standard forecast functionality. Luckily, it can be enabled using a tiny customisation.

In PeopleSoft HCM 9.1, the forecast functionality is executed from two different places:

DERIVED_GP.FCST_PB.FieldFormula - Abs_ForecastSetup function
FUNCLIB_GP_ABS.FCST_PB.FieldFormula - Abs_ForecastExec function

In both PeopleCode events, you will find a statement like this one:

SQLExec("INSERT INTO PS_GP_RUNCTL(OPRID, RUN_CNTL_ID, CAL_RUN_ID, TXN_ID, STRM_NUM, GROUP_LIST_ID, RUN_IDNT_IND, RUN_UNFREEZE_IND, RUN_CALC_IND, RUN_RECALC_ALL_IND, RUN_FREEZE_IND, SUSP_ACTIVE_IND, STOP_BULK_IND, RUN_FINAL_IND, RUN_CANCEL_IND, RUN_SUSPEND_IND, RUN_TRACE_OPTN, RUN_PHASE_OPTN, RUN_PHASE_STEP, IDNT_PGM_OPTN, NEXT_PGM, NEXT_STEP, NEXT_NUM, CANCEL_PGM_OPTN, NEXT_EMPLID, UPDATE_STATS_IND, LANGUAGE_CD, EXIT_POINT, SEQ_NUM5, UE_CHKPT_CH1, UE_CHKPT_CH2, UE_CHKPT_CH3, UE_CHKPT_DT1, UE_CHKPT_DT2, UE_CHKPT_DT3, UE_CHKPT_NUM1, UE_CHKPT_NUM2, UE_CHKPT_NUM3,PRC_NUM,OFF_CYCLE) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,%datein(:33),%datein(:34),%datein(:35),:36,:37,:38,:39,:40)", &OprID, &RunCntl_ID, &CalcRunId, &TxnID, 0, &SpaceFiller, "Y", "N", "Y", "N", "N", "N", &ApprByInd, "N", "N", "N", "N", &RunPhaseOptN, &RunPhaseStep, &SpaceFiller, &SpaceFiller, 0, 0, &SpaceFiller, &SpaceFiller, "N", "ENG", &SpaceFiller, 0, &SpaceFiller, &SpaceFiller, &SpaceFiller, "", "", "", 0, 0, 0, 0, "N");

You will notice that the RUN_TRACE_OPTN field is set to "N". If you use "A" instead as the trace option value, you will obtain the Element Resolution Chain:

SQLExec("INSERT INTO PS_GP_RUNCTL(OPRID, RUN_CNTL_ID, CAL_RUN_ID, TXN_ID, STRM_NUM, GROUP_LIST_ID, RUN_IDNT_IND, RUN_UNFREEZE_IND, RUN_CALC_IND, RUN_RECALC_ALL_IND, RUN_FREEZE_IND, SUSP_ACTIVE_IND, STOP_BULK_IND, RUN_FINAL_IND, RUN_CANCEL_IND, RUN_SUSPEND_IND, RUN_TRACE_OPTN, RUN_PHASE_OPTN, RUN_PHASE_STEP, IDNT_PGM_OPTN, NEXT_PGM, NEXT_STEP, NEXT_NUM, CANCEL_PGM_OPTN, NEXT_EMPLID, UPDATE_STATS_IND, LANGUAGE_CD, EXIT_POINT, SEQ_NUM5, UE_CHKPT_CH1, UE_CHKPT_CH2, UE_CHKPT_CH3, UE_CHKPT_DT1, UE_CHKPT_DT2, UE_CHKPT_DT3, UE_CHKPT_NUM1, UE_CHKPT_NUM2, UE_CHKPT_NUM3,PRC_NUM,OFF_CYCLE) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,%datein(:33),%datein(:34),%datein(:35),:36,:37,:38,:39,:40)", &OprID, &RunCntl_ID, &CalcRunId, &TxnID, 0, &SpaceFiller, "Y", "N", "Y", "N", "N", "N", &ApprByInd, "N", "N", "N", "A", &RunPhaseOptN, &RunPhaseStep, &SpaceFiller, &SpaceFiller, 0, 0, &SpaceFiller, &SpaceFiller, "N", "ENG", &SpaceFiller, 0, &SpaceFiller, &SpaceFiller, &SpaceFiller, "", "", "", 0, 0, 0, 0, "N");

By performing this change, you will notice that the GP_AUDIT_TBL table starts to be populated with the Element Resolution Chain information. However, it may still not be visible from the page itself, because some tables are only populated temporarily during the forecast execution. In order to enable access for forecast runs, you will need to customise the GP_AUDIT_SEG_VW search record by extending its SQL definition; the full resulting view is shown below, where the second branch of the UNION ALL (covering audit rows with no matching segment status) is the customisation:

SELECT DISTINCT A.CAL_RUN_ID 
 , A.EMPLID 
 , A.EMPL_RCD 
 , A.GP_PAYGROUP 
 , A.CAL_ID 
 , A.ORIG_CAL_RUN_ID 
 , B.RSLT_SEG_NUM 
 , A.FICT_CAL_ID 
 , A.FICT_CAL_RUN_ID 
 , A.FICT_RSLT_SEG_NUM 
 , B.RSLT_VER_NUM 
 , B.RSLT_REV_NUM 
 , B.SEG_BGN_DT 
 , B.SEG_END_DT 
  FROM PS_GP_AUDIT_TBL A 
  , PS_GP_PYE_SEG_STAT B 
 WHERE A.CAL_RUN_ID = B.CAL_RUN_ID 
   AND A.EMPLID = B.EMPLID 
   AND A.EMPL_RCD = B.EMPL_RCD 
   AND A.GP_PAYGROUP = B.GP_PAYGROUP 
   AND A.CAL_ID = B.CAL_ID 
  UNION ALL 
 SELECT DISTINCT A.CAL_RUN_ID 
 , A.EMPLID 
 , A.EMPL_RCD 
 , A.GP_PAYGROUP 
 , A.CAL_ID 
 , A.ORIG_CAL_RUN_ID 
 , A.RSLT_SEG_NUM 
 , A.FICT_CAL_ID 
 , A.FICT_CAL_RUN_ID 
 , A.FICT_RSLT_SEG_NUM 
 , 1 
 , 1 
 , NULL 
 , NULL 
  FROM PS_GP_AUDIT_TBL A 
 WHERE NOT EXISTS ( 
 SELECT 'X' 
  FROM PS_GP_PYE_SEG_STAT B 
 WHERE A.CAL_RUN_ID = B.CAL_RUN_ID 
   AND A.EMPLID = B.EMPLID 
   AND A.EMPL_RCD = B.EMPL_RCD 
   AND A.GP_PAYGROUP = B.GP_PAYGROUP 
   AND A.CAL_ID = B.CAL_ID)

I hope you find this useful. Should you have any questions or doubts, I will be happy to assist.

Note: Keep in mind that it is not a good idea to leave the Debug information enabled in Production environments, at least not permanently. The time needed to run a forecast calculation with this type of information is significantly higher than without it. So, if you do not want to hit performance issues, my recommendation is to store a flag in a table indicating whether the Element Resolution Chain for forecasting should be enabled, as sketched below.
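
A minimal PeopleCode sketch of that recommendation, placed just before the INSERT INTO PS_GP_RUNCTL statement; the PS_Z_GP_FCST_DEBUG record and its TRACE_OPTN field are hypothetical, so substitute your own control table:

Local string &TraceOptn;
/* Read the desired trace option from a custom control table */
SQLExec("SELECT TRACE_OPTN FROM PS_Z_GP_FCST_DEBUG", &TraceOptn);
If None(&TraceOptn) Then
   &TraceOptn = "N"; /* default: Element Resolution Chain disabled */
End-If;
/* Then bind &TraceOptn in place of the hardcoded "N"/"A" literal in the SQLExec above */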