
Feed aggregator

Contributions by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Mon, 2014-04-07 13:28

The Oracle Learning Library was designed to allow you to search for free online training content (OBEs, Demos and Tutorials) on OTN.
  • Oracle by Example (OBE) tutorials provide hands-on, step-by-step instructions on how to implement various technology solutions to business problems. In addition to the OBE tutorials, you can also access more product training at the Oracle University Knowledge Center.
  • Demos provide an automated demonstration of a particular task with explanations on how the task is performed.
  • Tutorials provide concept explanations, demos and step-by-step instructions for a particular product or topic.

To access the Learning Library, click here.

Virtual Developer Day - Java 2014 - Register!

OTN TechBlog - Mon, 2014-04-07 11:37

Our next Virtual Developer Day is all about Java! Watch tutorials from the experts to improve your Java expertise, and ask questions during live chats. This FREE virtual event will cover:


  • Java SE 8 New Features: Lambdas and more
  • The latest on Java EE 7
  • How Java makes it easy for you to control a wide range of embedded devices

We will have three chances for you to hear from experts in Java SE 8, Java EE 7 and Java Embedded: May 6th (Americas), May 14th (EMEA) and May 21st (APAC).

Register today!

Are You an Oracle Cloud Support Customer?

Joshua Solomin - Mon, 2014-04-07 10:56
The Get Proactive Essentials series includes a webcast for customers who need to learn more about the Oracle Cloud Support portal. In this introduction, you will learn about the resources available to you, terminology, and best practices.

Learn how to engage with Oracle Support—sign up now!

Top 10 sessions from v$active_session_history

DBA Scripts and Articles - Mon, 2014-04-07 10:21

Description: This query returns the top 10 sessions from v$active_session_history, ordered by the total resources consumed by each session (I/O, waits and CPU). Be careful: this view is part of the Diagnostics Pack, so you should not query it if you don't have a license for it. [...]
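The query itself is stripped out of this feed summary; below is a minimal sketch of the idea only (not the post's actual query), counting ASH samples per session as a proxy for resources consumed, split into CPU, user I/O and other waits.

-- Sketch only: each row in v$active_session_history represents roughly one
-- second of active session time, so sample counts approximate resource
-- consumption (Diagnostics Pack license required).
select *
from (
      select session_id
           , session_serial#
           , sum(case when session_state = 'ON CPU' then 1 else 0 end) as cpu
           , sum(case when session_state = 'WAITING'
                       and wait_class = 'User I/O' then 1 else 0 end) as io
           , sum(case when session_state = 'WAITING'
                       and wait_class <> 'User I/O' then 1 else 0 end) as waits
           , count(*) as total_samples
      from v$active_session_history
      group by session_id, session_serial#
      order by total_samples desc
     )
where rownum <= 10;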

The post Top 10 sessions from v$active_session_history appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

The PeopleSoft Roadshow 2014 – What’s coming next for PeopleSoft?

Duncan Davies - Mon, 2014-04-07 09:00

Last week I attended the UKOUG PeopleSoft Roadshow, and it was very interesting for a number of reasons. Here’s what I took away from the event:

Presenters

We’re quite used to hearing Marc Weintraub and Jeff Robbins speak here in the UK. They come over every year for the roadshow, and theirs are the sessions that everyone attends.

Marc’s style was a touch different this time in that he gave us a little more insight into his personality. Finding out which sports teams he follows, what he does for fun (running Tough Mudders), and what he drives (surprisingly, a Mini Cooper) rounded him out more as a person in our eyes – an important change, as previously we really only got the professional side of Marc.

Jeff’s style was the same as ever … dry, humorous, and very comfortable and relaxed speaking to a room full of people. At one point he even paused his session so that he could photobomb a pic I was taking of his demo.


Strategy

So what did we learn about PeopleSoft from a strategic point of view?

The switch to patching via images every 10 weeks means that customers don’t have to wait for a major release to gain new functionality. This sounds more like the continuous delivery model used with Campus Solutions (where there is no major release), whereby new functionality comes via regular bundles. As a result, the 9.3 applications might just be a roll-up of everything that has been released in the images since v9.2. There are some interesting implications of this, and we’re not sure how a client can truly be sure they are on 9.3 if they’ve only applied some of the patches that comprise it. The 9.3 releases still look like appearing in 2017, but they don’t have focus within Oracle, as continuous delivery is the preferred method of providing new functionality. There is an internal edict not to target functionality for 9.3, because that would mean you’re not thinking about delivering something now – which sounds like a positive message.

Much was made of the fact that PeopleSoft is “the only enterprise application suite that gives you the ability to deploy PERFECT FIT applications, through the use of PeopleTools.” After years of “customisation = bad”, it’s interesting that there is now an admission that often a small amount of judiciously applied change is needed to fully meet client expectations.

Also, there was another statement whose significance I didn’t realise until letting things percolate on the train journey home: Marc’s statement that PeopleSoft has a “95% retention rate, and the focus is on our existing customers.” is quite important. It’s great that Oracle are focusing on keeping existing customers happy – that’s what the ongoing licence fee is for, after all – 95% is a good success rate, and the ongoing investment is designed to add value to existing customers.

EDITED above paragraph on 11th April to correct paraphrased quote.

User Interface

There was a lot that was exciting to see here. The new UI (christened FLuiD) is very contemporary and pleasing to look at. It works across multiple devices (i.e. mobile, tablet and desktop) and is responsive based on the device resolution. Jeff gave a live demo where the items on the screen realigned themselves and changed as he dragged the width of the screen to be smaller.


Role-based landing pages (e.g. for execs, team members and employees) were also demonstrated.

It appears to have been very well thought through. The previous changes to the UI were ‘all or nothing’: if you wanted the Swan UI (Tools 8.50) or the Tangerine UI (Tools 8.53), it was switched on or off as a system-wide setting. Whether or not the FLuiD UI is shown is based on your preferences and on whether your device is capable of displaying it, so one user may get the full FLuiD UI while another user with an older device will seamlessly fall back to the ‘Classic UI’ – which I assume means the Tools 8.53 Tangerine UI.

Unified UX seems to be a trend at the moment, as Fusion R8 has also introduced a new UI. This UI convergence is sensible from a co-existence point of view, as users are going to be surprised if you switch them to Taleo and the look-and-feel is different. It’s notable that as Oracle’s application UIs converge, PeopleSoft is often getting there first – possibly because of the toolset, and possibly because it had a better UI starting point. Often Fusion, EBS or JDE adopt UI elements that PeopleSoft has already adopted.

The FLuiD UI components are new components that run alongside the existing ones. Security is inherited, however: if you have security for the existing ‘PIA’ component then you’ll be allowed to use the new component. In terms of browser requirements, we’ll need IE9 for the Classic UI and IE11 for the FLuiD UI. You can still use FLuiD on your smartphone and tablet without issues, and you can use Chrome, Firefox or Safari quite happily.

Configuring a Landing Page and the Side Nav Menu


Components within FLuiD

Technical

Aside from the UI, what else is new in Tools 8.54?

Also arriving is the Mobile Application Platform (MAP). This is a standalone app – i.e. it’ll be native to your device, and can retain credentials etc. Applications for MAP will come in a PUM image after Tools 8.54, but there was no comment on how soon after Tools 8.54 this will be – maybe not until Tools 8.55 (Oracle aim to release a new version of PeopleTools approximately every 15 months).


There are also lots of changes for analytics and reporting. Pivot Grids are getting a lot of new functionality – and they’re already better and more dynamic than much of what the competition offers. It was stressed that Pivot Grids aren’t static images pasted onto pages, nor data exported into other systems, but live and dynamic analytics over your data. From 8.54 you’ll also be able to add multiple pivot grid views on top of a single pivot grid model.

Jeff also demoed functionality where a component search page was replaced by a pivot grid, allowing you to select segments of the grid to refine the search results. This was a very slick upgrade to the search facets that we’ve previously seen.


Filter search results by facet

The PeopleSoft Test Framework in 8.54 has improved management of test cases, and delivery of pre-supplied test cases will apparently come in later Tools versions.

Graham Smith

As well as the Oracle guys, Graham Smith from Oxfam also gave a session on their Financials upgrade from 8.9/8.50 to 9.2/8.53.

Graham was particularly enthusiastic about the new PeopleSoft Images and the PUM update process. He did concede that the PUM process is difficult to get the hang of initially – they ended up doing some things more than once – but it works very well for them now. Upskilling the team in advance was very important. The Oxfam approach is to divide up the improvements, allocate them to team members, and give everyone time to research their topics and then report back to the team – which seems a very good way of improving the team quickly.

It has added a new requirement, however, as it doesn’t replace the DMO environment (we need DMO to contain the vanilla versions of just the patches that we’ve applied), whereas the PUM image contains all modules and patches that have been released. We previously used Oracle Support as the repository of all patches and bundles.

Oxfam have also used Performance Monitor to gather intelligence on how their users are using the system. It’s really interesting to gather stats on which areas of the system are used the most (both ‘most popular components by number of hits’ and ‘most popular components by number of users’). This allows the team to invest time in the components that are used the most, or are used by the most users – thereby targeting the effort at the areas which will give the most impact.
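For anyone curious what such a query looks like, here is a hypothetical sketch against the Performance Monitor transaction history table; the table name PSPMTRANSHIST is standard, but the PMU transaction id for a PIA request and the context column holding the component name should be checked against the PMU reference for your Tools release.

-- Hypothetical sketch: most popular components by number of hits.
-- PM_TRANS_DEFN_ID = 116 (top-level PIA request) and PM_CONTEXT_VALUE1
-- holding the component name are assumptions to verify per release.
select pm_context_value1 as component
     , count(*) as hits
from pspmtranshist
where pm_trans_defn_id = 116
group by pm_context_value1
order by hits desc;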

Other tips from Graham included:
- The Merge page functionality in App Designer is really useful during upgrades when comparing updated pages with customised ones.
- Reapply customisations in module order (as this helps the testers) instead of object type order.
- When applying custom code, aim for empty events if you can, as the cost of upgrading is less (as there’ll be no code to compare).

Graham also spoke about SES, which they’ve found to be very fast and not in need of particularly beefy hardware. They have needed to spend some time looking at indexing, but in the main it has been a positive experience.

Summary

In conclusion, this was a very strong event with lots of great content. The next versions of PeopleSoft are going to bring a lot of exciting changes, and we can’t wait to read the Release Value Proposition when it appears.


Monitor Oracle Golden Gate from SQL

DBASolved - Mon, 2014-04-07 08:41

One of my presentations at Collaborate 14 this year revolves around the many different ways there are to monitor Oracle GoldenGate. As I was putting the presentation together, I listed out the different monitoring approaches; I have covered a few of them already in earlier posts. What I want to show you here is how to execute a simple “info all” command and see the results from SQL*Plus or SQL Developer using SQL.

First, a script (shell, Perl, etc.) needs to be written to capture the output of the “info all” command in a text file. In this case, I’m going to write the text file to /tmp, since I’m on Linux.


#!/usr/bin/perl -w
#
# Author: Bobby Curtis, Oracle ACE
# Copyright: 2014
# Title: gg_monitor_sqldev.pl
#
use strict;
use warnings;

# Static variables
my $gghome  = "/oracle/app/product/12.1.2/oggcore_1";
my $outfile = "/tmp/gg_process_sqldev.txt";

# Run "info all" through ggsci and capture the output
my @buf = `$gghome/ggsci << EOF
info all
EOF`;

# Keep only the process status lines and write them out pipe-delimited
open (GGPROC, ">$outfile") or die "Unable to open file";
foreach (@buf)
{
    if (/MANAGER/ || /JAGENT/ || /EXTRACT/ || /REPLICAT/)
    {
        no warnings 'uninitialized';
        chomp;
        my ($program, $status, $group, $lagatchkpt, $timesincechkpt) = split(" ");

        # MANAGER and JAGENT lines carry no group column
        if ($group eq "")
        {
            $group = $program;
        }
        # ... and no lag/checkpoint columns either
        if ($lagatchkpt eq "" || $timesincechkpt eq "")
        {
            $lagatchkpt = "00:00:00";
            $timesincechkpt = "00:00:00";
        }
        print GGPROC "$program|$status|$group|$lagatchkpt|$timesincechkpt\n";
    }
}
close (GGPROC);

Next, the text file needs to be exposed as a table that can be read from SQL*Plus or SQL Developer. External tables are great for this.


create directory TEMP as '/tmp';
grant read on directory TEMP to PUBLIC;

drop table ggate.os_process_mon;

create table ggate.os_process_mon
(
  process     char(15),
  status      char(15),
  ggroup      char(15),   -- "group" is a reserved word, hence ggroup
  lagatchk    char(15),
  timelastchk char(15)
)
organization external
(
  type oracle_loader
  default directory TEMP
  access parameters
  (
    records delimited by newline
    fields terminated by '|'
    missing field values are null
    (
      process     char(15),
      status      char(15),
      ggroup      char(15),
      lagatchk    char(15),
      timelastchk char(15)
    )
  )
  location ('gg_process_sqldev.txt')
);

select * from ggate.os_process_mon;

Lastly, with these two pieces in place, I can now select the status from SQL*Plus or SQL Developer using SQL. Image 1 shows a sample from the testing environment I’m building.

Image 1: sample output from the external table query (screenshot).
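As a usage example, the external table makes it easy to build a simple alert on top of the status: a minimal sketch (assuming the status strings ggsci prints for “info all” – RUNNING, STOPPED, ABENDED) that returns only the processes needing attention.

-- Show only processes that are not running. The columns are CHAR(15),
-- so trim trailing blanks before comparing.
select ggroup, status, lagatchk, timelastchk
from ggate.os_process_mon
where trim(status) <> 'RUNNING';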

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate
Categories: DBA Blogs

COLLABORATE minus 3

Pythian Group - Mon, 2014-04-07 07:50

I always like to get to the location for a conference a day in advance so I can

  • Get accustomed to the time change
  • Get a feel for my way around the venue
  • Figure out where my room is
  • Establish a few landmarks so I do not wander aimlessly around the facility and hotel as though every voyage is a new life experience

COLLABORATE officially starts on Tuesday, though there are education sessions all day Monday facilitated by the three main groups responsible for the show – the IOUG, OAUG, and Quest International Users Group. So where did this animal called COLLABORATE come from, one may wonder?

Rewind to about 2004. The three above-mentioned user groups each had their own show. Each reached out to Oracle for logistic and education support, something that the vendor was (and still is) happy to give. It was starting to become obvious that the marketplace upheaval was having a dramatic effect on user group conference attendance. At the same time, Oracle expressed a desire to support fewer shows. You do the math – it only made sense. Why not have a 4-5 day mega conference and work with Oracle on many facets of support? Not only were the attendees of each show being asked to pick one or the other; Oracle was investing a massive number of personnel to support all three shows separately. It was a collective decision to amalgamate the shows, and we wondered where it all would start.

With the blessing of the IOUG board, I made one of those very first phone calls to one or more people on the OAUG board, and the rest is history. I do not remember who I spoke to first, and there were probably a handful of feelers going out from other places in the IOUG infrastructure to OAUG bigwigs. I spoke to board member Donna Rosentrater (@DRosentrater) and we jammed on what could/should become of a co-operative effort. We chatted a few times, and the interest amongst board members of the IOUG and OAUG reflected cautious optimism that we could pull it off. Each user group had its own revenue stream from separate shows, and we needed to embark down a path that would not put these at risk. That is what the brunt of the negotiations centered on, and the work we did together led to the very first COLLABORATE at the Gaylord in Nashville in 2006.

Once the initial framework was established, it was time to turn the discussions over to the professionals. Both groups’ professional resources collaborated (hence the name maybe) and this mega/co-operative show became a reality. COLLABORATE 14 is the 9th show put on by Quest, OAUG, and IOUG. I am not going to say “this year’s show is going to be the best yet” as I believe that implicitly belittles previous successful events. Suffice to say, for what the user community needs from an information-sharing perspective – COLLABORATE is just what the doctor ordered.

Tomorrow’s a day off: wandering aimlessly through Las Vegas, tempted by curios, shops, food emporiums, and just about every other possible temptation one could think of. Sunday starts with a helicopter trip to the Grand Canyon; I went all out and forked over the extra $50 to sit in the convex bubble beside the pilot. There are a bazillion vendors poised to whisk one away to the canyon, with a flyover of the Hoover Dam there or on the way back. I chose Papillon and am looking forward to the excitement of the day, which starts at 5:10am with a shuttle to the site where the whirlybird takes off. Talk about taking one’s breath away.

Categories: DBA Blogs

HTTP-404 on /oamconsole

Frank van Bortel - Mon, 2014-04-07 07:35
WeblogicHost versus WeblogicCluster. Despite the fact that the oamconsole cannot be clustered, it has to be "clustered". If you ever find yourself in a scenario where you configure a webgate in front of your OAM Console, make sure you configure it like: ############################################## ## Entries Required by Oracle Access Manager ############################################## # OAM

ADF and alternate unique keys revisited

Today I'd like to share a quick (and dirty) trick for handling one nuisance of the well-known pattern for achieving uniqueness of "non-primary key" ADFBC entity attributes. If you ever needed to...

Categories: DBA Blogs

#Oracle University Expert Summit in London

The Oracle Instructor - Mon, 2014-04-07 06:09

Three days full of seminars are offered by Oracle University in London (19th to 21st May) at the Expert Summit:

Oracle University Expert Summit

It is my pleasure to present there together with Arup Nanda, Dan Hotka, Jonathan Lewis and my dear colleagues Iloon Ellen-Wolff and Joel Goodman.

One funny detail here: there has been another event (an Exadata workshop) in Vienna on my schedule during that week – yes, I’m very busy these days. In order to make it possible for me to present in London, the class in Vienna will be interrupted on Tuesday and continued on Wednesday :-)

A big “Thank You!” goes out to the attendees in Vienna who agreed to the one-day interruption to make that happen! Specifically, I’m going to talk about and demonstrate the 12c New Features of Data Guard in London.


Categories: DBA Blogs

Oracle ASM 12c: New Features

Jason Arneil - Mon, 2014-04-07 04:44

Last week I was lucky enough to be presenting at the UKOUG AIM SIG. There was a decent enough crowd in attendance, and there were some really interesting talks from some really good speakers. In particular I found Chris Lawless, speaking on replication, a particularly engaging speaker, and Dave Webster really held the audience’s attention late in the day.

I was giving a presentation on the new features available to you with 12c ASM. The presentation is below. What you don’t get from the ppt, though, are the various demos I did – in particular seeing Flex ASM in action on my 4-node 12c RAC demo cluster.

I should confess the above isn’t quite what I presented, as I used pictures instead of text for the new features.

For clearest understanding, you probably want to download the ppt and actually read the notes attached to each slide.


Password Change Sample

Anthony Shorten - Sun, 2014-04-06 20:27

In the Technical Best Practices whitepaper (Doc Id: 560367.1), available from My Oracle Support, there is a section (Password Management Solution for Oracle WebLogic) that mentions a sample password change JSP that used to be provided by BEA for WebLogic. That site is no longer available, but the sample code is now available on this blog.

Now, this is an example only and is very generic. It is not a drop-in feature that you can place in your installation, but the example is sufficient to give an idea of the Oracle WebLogic API available for changing your password. It is meant to allow you to develop a CM JSP if you require this feature.

There is NO support for this, as it is sample code only. It is merely an example of the API available. The link to the code is here. Examine it to get ideas for your own solutions.

The API used will most probably work for any security system that is configured as an authentication security provider.

Private Cloud Planning Guide available for Oracle Utilities

Anthony Shorten - Sun, 2014-04-06 17:56

Oracle Utilities Application Framework based applications can be housed in private cloud infrastructure, either onsite or as a partner offering. Oracle provides a private cloud foundation set of software that can be used to house Oracle Utilities software. To aid in planning for installing Oracle Utilities Application Framework based products on a private cloud, a whitepaper has been developed and published.

The Private Cloud Planning Guide (Doc Id: 1308165.1), which is available from My Oracle Support, provides an architecture and software manifest for implementing a fully functional private cloud offering onsite or via a partner. It refers to other documentation for installing and configuring specific components of a private cloud solution.

Creating Users in Oracle Internet Directory (OID)

Online Apps DBA - Sun, 2014-04-06 15:03
This post covers creating users in OID using ODSM; this OID user will be used as the admin user for OAM-OID integration in our Oracle Access Manager (OAM) 11gR2 Admin Training (training starts on 3rd May and the fee is 699 USD). For part I of the OID/OVD installation click here, and for part II click here. In this exercise, we use Oracle Directory [...]

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

Analysing Parallel Execution Skew - Without Diagnostics / Tuning Pack License

Randolf Geist - Sun, 2014-04-06 14:27
This is the third part of the video tutorial "Analysing Parallel Execution Skew". In this part I show how to analyse a parallel SQL execution with regard to Parallel Execution Skew.

If you don't have a Diagnostics / Tuning Pack license the options you have for doing that are quite limited, and the approach, as demonstrated in the tutorial, has several limitations and shortcomings.

Here is the video:



If you want to reproduce or play around with the examples shown in the tutorial, here is the script for creating the tables and running the queries / DML commands used in the tutorial. A shout out goes to Christo Kutrovsky at Pythian, who I think was the one who inspired the beautified version of the V$PQ_TQSTAT query.

---------------------
-- Links for S-ASH --
---------------------
--
-- http://www.perfvision.com/ash.php
-- http://www.pythian.com/blog/trying-out-s-ash/
-- http://sourceforge.net/projects/orasash/files/v2.3/
-- http://sourceforge.net/projects/ashv/
---------------------

-- Table creation
set echo on timing on time on

drop table t_1;

purge table t_1;

drop table t_2;

purge table t_2;

drop table t_1_part;

purge table t_1_part;

drop table t_2_part;

purge table t_2_part;

drop table t1;

purge table t1;

drop table t2;

purge table t2;

drop table t3;

purge table t3;

drop table t4;

purge table t4;

drop table t5;

purge table t5;

drop table x;

purge table x;

create table t1
as
select /*+ use_nl(a b) */
(rownum * 2) as id
, rownum as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000) */ * from dual
connect by
level <= 1000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't1')

alter table t1 cache;

create table t2
compress
as
select
(rownum * 2) + 1 as id
, mod(rownum, 2000) + 1 as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000000) */ * from dual
connect by
level <= 1000000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't2')

alter table t2 cache;

create table t3
as
select /*+ use_nl(a b) */
(rownum * 2) as id
, rownum as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000) */ * from dual
connect by
level <= 1000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't3')

alter table t3 cache;

create table t4
compress
as
select
(rownum * 2) + 1 as id
, mod(rownum, 2000) + 1 as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000000) */ * from dual
connect by
level <= 1000000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't4')

alter table t4 cache;

create table t5
as
select /*+ use_nl(a b) */
(rownum * 2) as id
, rownum as id2
, rpad('x', 100) as filler
from
(select /*+ cardinality(1000) */ * from dual
connect by
level <= 1000) a, (select /*+ cardinality(2) */ * from dual connect by level <= 2) b
;

exec dbms_stats.gather_table_stats(null, 't5')

alter table t5 cache;

create table x
compress
as
select * from t2
where 1 = 2;

create unique index x_idx1 on x (id);

alter table t1 parallel 2;

alter table t2 parallel 2;

alter table t3 parallel 15;

alter table t4 parallel 15;

alter table t5 parallel 15;

create table t_1
compress
as
select /*+ use_nl(a b) */
rownum as id
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_1')

create table t_2
compress
as
select
rownum as id
, case when rownum <= 5e5 then mod(rownum, 2e6) + 1 else 1 end as fk_id_skew
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_2', method_opt=>'for all columns size 1', no_invalidate=>false)

alter table t_1 parallel 8 cache;

alter table t_2 parallel 8 cache;

create table t_1_part
partition by hash(id) partitions 8
compress
as
select /*+ use_nl(a b) */
rownum as id
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_1_part')

create table t_2_part
partition by hash(fk_id_skew) partitions 8
compress
as
select
rownum as id
, case when rownum <= 5e5 then mod(rownum, 2e6) + 1 else 1 end as fk_id_skew
, rpad('x', 100) as filler
from
(select /*+ cardinality(1e5) */ * from dual
connect by
level <= 1e5) a, (select /*+ cardinality(20) */ * from dual connect by level <= 20) b
;

exec dbms_stats.gather_table_stats(null, 't_2_part', method_opt=>'for all columns size 1', no_invalidate=>false)

alter table t_1_part parallel 8 cache;

alter table t_2_part parallel 8 cache;

---------------------------------------------------------------
-- Single DFO tree (with Parallel Execution Skew), many DFOs --
---------------------------------------------------------------

set echo on timing on time on verify on

define num_cpu = "14"

alter session set workarea_size_policy = manual;

alter session set sort_area_size = 200000000;

alter session set sort_area_size = 200000000;

alter session set hash_area_size = 200000000;

alter session set hash_area_size = 200000000;

select
max(t1_id)
, max(t1_filler)
, max(t2_id)
, max(t2_filler)
, max(t3_id)
, max(t3_filler)
from (
select /*+ monitor
no_merge
no_merge(v_1)
no_merge(v_5)
parallel(t1 &num_cpu)
PQ_DISTRIBUTE(T1 HASH HASH)
PQ_DISTRIBUTE(V_5 HASH HASH)
leading (v_1 v_5 t1)
use_hash(v_1 v_5 t1)
swap_join_inputs(t1)
*/
t1.id as t1_id
, regexp_replace(v_5.t3_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as t1_filler
, v_5.*
from (
select /*+ parallel(t2 &num_cpu)
parallel(t3 &num_cpu)
leading(t3 t2)
use_hash(t3 t2)
swap_join_inputs(t2)
PQ_DISTRIBUTE(T2 HASH HASH)
*/
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id2 = t2.id2 (+)
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 3) = 0
) v_1
, (
select /*+ parallel(t2 &num_cpu)
parallel(t3 &num_cpu)
leading(t3 t2)
use_hash(t3 t2)
swap_join_inputs(t2)
PQ_DISTRIBUTE(T2 HASH HASH)
*/
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id = t2.id (+)
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 3) = 0
) v_5
, t1
where
v_1.t3_id = v_5.t3_id
and v_5.t2_id2 = t1.id2 (+) + 2001
and regexp_replace(v_5.t3_filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t1.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
)
;

break on dfo_number nodup on tq_id nodup on server_type skip 1 nodup on instance nodup

-- compute sum label Total of num_rows on server_type

select
/*dfo_number
, */tq_id
, cast(server_type as varchar2(10)) as server_type
, instance
, cast(process as varchar2(8)) as process
, num_rows
, round(ratio_to_report(num_rows) over (partition by dfo_number, tq_id, server_type) * 100) as "%"
, cast(rpad('#', round(num_rows * 10 / nullif(max(num_rows) over (partition by dfo_number, tq_id, server_type), 0)), '#') as varchar2(10)) as graph
, round(bytes / 1024 / 1024) as mb
, round(bytes / nullif(num_rows, 0)) as "bytes/row"
from
v$pq_tqstat
order by
dfo_number
, tq_id
, server_type desc
, instance
, process
;

---------------------------------------------------------------------------------------------------
-- Same statement with Parallel TEMP TABLE TRANSFORMATION, V$PQ_TQSTAT shows useless information --
---------------------------------------------------------------------------------------------------

set echo on timing on time on verify on

define num_cpu = "14"

alter session set workarea_size_policy = manual;

alter session set sort_area_size = 200000000;

alter session set sort_area_size = 200000000;

alter session set hash_area_size = 200000000;

alter session set hash_area_size = 200000000;

with result as
(
select /*+ materialize
monitor
no_merge
no_merge(v_1)
no_merge(v_5)
parallel(t1 &num_cpu)
PQ_DISTRIBUTE(T1 HASH HASH)
PQ_DISTRIBUTE(V_1 HASH HASH)
PQ_DISTRIBUTE(V_5 HASH HASH)
leading (v_1 v_5 t1)
use_hash(v_1 v_5 t1)
swap_join_inputs(t1)
*/
t1.id as t1_id
, regexp_replace(v_5.t3_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') as t1_filler
, v_5.*
from (
select /*+ parallel(t2 &num_cpu) parallel(t3 &num_cpu) leading(t3 t2) use_hash(t3 t2) swap_join_inputs(t2) PQ_DISTRIBUTE(T2 HASH HASH) */
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id2 = t2.id2 (+)
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 3) = 0
)
v_1
, (
select /*+ parallel(t2 &num_cpu) parallel(t3 &num_cpu) leading(t3 t2) use_hash(t3 t2) swap_join_inputs(t2) PQ_DISTRIBUTE(T2 HASH HASH) */
t2.id as t2_id
, t2.filler as t2_filler
, t2.id2 as t2_id2
, t3.id as t3_id
, t3.filler as t3_filler
from
t1 t2
, t2 t3
where
t3.id = t2.id (+)
and regexp_replace(t3.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and mod(t3.id2, 3) = 0
) v_5
, t1
where
v_1.t3_id = v_5.t3_id
and v_5.t2_id2 = t1.id2 (+) + 2001
and regexp_replace(v_5.t3_filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t1.filler (+), '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
)
select max(t1_id), max(t1_filler), max(t2_id), max(t2_filler), max(t3_id), max(t3_filler) from
result;

break on dfo_number nodup on tq_id nodup on server_type skip 1 nodup on instance nodup

-- compute sum label Total of num_rows on server_type

select
/*dfo_number
, */tq_id
, cast(server_type as varchar2(10)) as server_type
, instance
, cast(process as varchar2(8)) as process
, num_rows
, round(ratio_to_report(num_rows) over (partition by dfo_number, tq_id, server_type) * 100) as "%"
, cast(rpad('#', round(num_rows * 10 / nullif(max(num_rows) over (partition by dfo_number, tq_id, server_type), 0)), '#') as varchar2(10)) as graph
, round(bytes / 1024 / 1024) as mb
, round(bytes / nullif(num_rows, 0)) as "bytes/row"
from
v$pq_tqstat
order by
dfo_number
, tq_id
, server_type desc
, instance
, process
;

--------------------------------------------------------------------------------------------------
-- This construct results in misleading information from V$PQ_TQSTAT (actually a complete mess) --
--------------------------------------------------------------------------------------------------

set echo on timing on time on

alter session enable parallel dml;

truncate table x;

insert /*+ append parallel(x 4) */ into x
select /*+ leading(v1 v2) optimizer_features_enable('11.2.0.1') */
v_1.id
, v_1.id2
, v_1.filler
from (
select
id
, id2
, filler
from (
select /*+ parallel(t2 4) no_merge */
rownum as id
, t2.id2
, t2.filler
from
t2
where
mod(t2.id2, 3) = 0
and regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) v1
) v_1
, (
select
id
, id2
, filler
from (
select /*+ parallel(t2 8) no_merge */
rownum as id
, t2.id2
, t2.filler
from
t2
where
mod(t2.id2, 3) = 0
and regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') = regexp_replace(t2.filler, '^([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
) v2
) v_2
where
v_1.id = v_2.id
and v_1.filler = v_2.filler
;

-- Parallel DML requires a COMMIT before querying V$PQ_TQSTAT
commit;

break on dfo_number nodup on tq_id nodup on server_type skip 1 nodup on instance nodup

compute sum label Total of num_rows on server_type

select
dfo_number
, tq_id
, cast(server_type as varchar2(10)) as server_type
, instance
, cast(process as varchar2(8)) as process
, num_rows
, round(ratio_to_report(num_rows) over (partition by dfo_number, tq_id, server_type) * 100) as "%"
, cast(rpad('#', round(num_rows * 10 / nullif(max(num_rows) over (partition by dfo_number, tq_id, server_type), 0)), '#') as varchar2(10)) as graph
, round(bytes / 1024 / 1024) as mb
, round(bytes / nullif(num_rows, 0)) as "bytes/row"
from
v$pq_tqstat
order by
dfo_number
, tq_id
, server_type desc
, instance
, process
;

----------------------------------------------------------------------
-- Single DFO tree (with Parallel Execution Skew, almost no impact) --
----------------------------------------------------------------------

set echo on timing on time on

alter session set workarea_size_policy = manual;

alter session set sort_area_size = 500000000;

alter session set sort_area_size = 500000000;

alter session set hash_area_size = 500000000;

alter session set hash_area_size = 500000000;

select /*+ leading(v1)
use_hash(t_1)
no_swap_join_inputs(t_1)
pq_distribute(t_1 hash hash)
*/
max(t_1.filler)
, max(v1.t_1_filler)
, max(v1.t_2_filler)
from
t_1
, (
select /*+ no_merge
leading(t_1 t_2)
use_hash(t_2)
no_swap_join_inputs(t_2)
pq_distribute(t_2 hash hash) */
t_1.id as t_1_id
, t_1.filler as t_1_filler
, t_2.id as t_2_id
, t_2.filler as t_2_filler
from t_1
, t_2
where
t_2.fk_id_skew = t_1.id
) v1
where
v1.t_2_id = t_1.id
and regexp_replace(v1.t_2_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t_1.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
and regexp_replace(v1.t_2_filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i') >= regexp_replace(t_1.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'i')
;

break on dfo_number nodup on tq_id nodup on server_type skip 1 nodup on instance nodup

-- compute sum label Total of num_rows on server_type

select
/*dfo_number
, */tq_id
, cast(server_type as varchar2(10)) as server_type
, instance
, cast(process as varchar2(8)) as process
, num_rows
, round(ratio_to_report(num_rows) over (partition by dfo_number, tq_id, server_type) * 100) as "%"
, cast(rpad('#', round(num_rows * 10 / nullif(max(num_rows) over (partition by dfo_number, tq_id, server_type), 0)), '#') as varchar2(10)) as graph
, round(bytes / 1024 / 1024) as mb
, round(bytes / nullif(num_rows, 0)) as "bytes/row"
from
v$pq_tqstat
order by
dfo_number
, tq_id
, server_type desc
, instance
, process
;

--------------------------------------------------------------------------------------------------------------------------------
-- Full Partition Wise Join with partition skew - V$PQ_TQSTAT is of no help, since no redistribution takes place (single DFO) --
--------------------------------------------------------------------------------------------------------------------------------

set echo on timing on time on

alter session set workarea_size_policy = manual;

alter session set sort_area_size = 500000000;

alter session set sort_area_size = 500000000;

alter session set hash_area_size = 500000000;

alter session set hash_area_size = 500000000;

select count(t_2_filler) from (
select /*+ monitor
leading(t_1 t_2)
use_hash(t_2)
no_swap_join_inputs(t_2)
pq_distribute(t_2 none none)
*/
t_1.id as t_1_id
, t_1.filler as t_1_filler
, t_2.id as t_2_id
, t_2.filler as t_2_filler
from t_1_part t_1
, t_2_part t_2
where
t_2.fk_id_skew = t_1.id
and regexp_replace(t_2.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c') >= regexp_replace(t_1.filler, '^\s+([[:alnum:]]+)\s+$', lpad('\1', 10), 1, 1, 'c')
);

break on dfo_number nodup on tq_id nodup on server_type skip 1 nodup on instance nodup

-- compute sum label Total of num_rows on server_type

select
/*dfo_number
, */tq_id
, cast(server_type as varchar2(10)) as server_type
, instance
, cast(process as varchar2(8)) as process
, num_rows
, round(ratio_to_report(num_rows) over (partition by dfo_number, tq_id, server_type) * 100) as "%"
, cast(rpad('#', round(num_rows * 10 / nullif(max(num_rows) over (partition by dfo_number, tq_id, server_type), 0)), '#') as varchar2(10)) as graph
, round(bytes / 1024 / 1024) as mb
, round(bytes / nullif(num_rows, 0)) as "bytes/row"
from
v$pq_tqstat
order by
dfo_number
, tq_id
, server_type desc
, instance
, process
;

OAMSSA-06252 after patching

Frank van Bortel - Sun, 2014-04-06 04:01
Once upon a time... you had a working environment with WebLogic, Access and Identity Management (or Discoverer, or ...) and all of a sudden things start failing. Symptoms: you notice the dreaded OAMSSA-06252 (Policy Store not Available) while starting up, and start fearing the worst. Also, it seems as if you cannot log in to the OAM management console anymore; your credentials are accepted, but you get

OUGN : Summary

Tim Hall - Sun, 2014-04-06 02:33

With the exception of a 5+ hour layover in Amsterdam, the trip home was pretty straightforward. I flew to Amsterdam with Lonneke Dikmans, Ronald Luttikhuizen and Roel Hartman. During my rather excessive layover, I played catchup with all the internet stuff I missed during the trip… :)

I must say OUGN 2014 was a pretty cool event all round! The speaker lineup was incredible. The location (on a boat) was fun. I’ve not done that for a while. In addition to the presentations, I got a lot of time to talk to people about technology, which is what I love doing, so that made me happy…

Big thanks to the organisers of the event for inviting me and paying the bill for the boat and hotel room! Thanks to all the speakers and attendees that managed to put up with me for a couple of days. On a boat, there is nowhere to run! Thanks also to OTN and the Oracle ACE Program. They didn’t fund this trip for me, but I’m still happy to be flying the flag on their behalf. :)

Cheers

Tim…

 


Bangalore Coonoor on Royal Enfield

Vattekkat Babu - Sat, 2014-04-05 11:28

Route is indicated by green icons on the map. Return was on next day, indicated by red icons. Each marker was done when I had stopped for at least a 5 minute break. Click/hover on the marker to get info and odometer reading.

Open Google Route Map

Onward

Outside Ramanagaram.
Outside Ramanagaram at 6:30am. From here till Mysore, 2 hours non-stop ride!

  • Early morning traffic was peaceful. Nobody on the roads to Mysore really. Very different when you are driving on weekends though.
  • Route was Sony World - Hosur Road - NICE Road - Ramanagaram - Mysore City - Nanjangud - Gundlupet - Bandipur - Theppakkadu - Masinagudi - Ooty
  • Overall about 330km. Took about 8 hours with about 1.25 hours break.
  • I was apprehensive of climbing the Kallatti Ghat on a relatively new bike. Just pushed it - it climbed with no drama.
  • A steady speed of 60kmph was maintained mostly. Once in a while, I went up to 70kmph for less than 1km, just to try the bike.
  • The waterhole near Bandipur visitor's center has all the trees burned down. Quite bad. This is the place where I had previously seen elephants, bison and even a tiger across the road.

MDS Seeded Customization Approach with Empty External Project

Andrejus Baranovski - Sat, 2014-04-05 11:27
Great feature in ADF – MDS Seeded customisation support. This is particularly useful for independent software vendors who are developing their own products on top of the ADF framework. With MDS Seeded customisation, maintenance for multiple customers becomes easier: we can create a customisation instead of changing the original source code, which makes the product easier to maintain. I would like to share one important hint related to the technical architecture for MDS Seeded customisations – the way MDS Seeded customisation files are organised and maintained. By default, you would create MDS Seeded customisation files in the original application; however, this is not a very clean approach. There is a way to create and keep all MDS Seeded customisation files in an empty external application. I will describe in this post how this can be achieved in a few easy steps.

If you are going to test the sample application – MDSCustomizationsApp.zip – with integrated WLS instances in JDeveloper, make sure to read this post (it describes how to set up a local MDS repository in the file system): How To Setup MDS Repository for Embedded WLS Instance.

Let's start – you can download the initial version of the sample ADF application from the blog post mentioned above. It contains an Employees form and an empty Submit button:


I don't want to create any MDS Seeded customisation files inside it; rather, I build and deploy an ADF library out of the main application:


The sample comes with a special application – CustomizationApp (you can find it in the archive). This application was created to keep MDS Seeded customisation files, and for no other purpose. Initially an empty project was created, into which the ADF library we just deployed was imported:


The empty project is enabled with MDS Seeded customisation support:


Restart JDeveloper in customisation mode, so we can create some customisations for the content from the imported ADF library:


If MDS Seeded customisation mode was successfully applied, you should see a special icon next to the application name. Choose 'Show Libraries' to see the list of libraries, so we can see the contents of the imported ADF library:


All attached libraries will be displayed; locate our ADF library, expand it, and you should see the application packaging:


We can apply several customisations now. Let's open the Employees VO and define a View Criteria (filter employees by salary >= 1000). This customisation will be stored inside our empty project:


There will be a change in the AM – the View Criteria will be applied to the VO instance:


We can go and review the XML files for the applied MDS Seeded customisations. There is one for the VO and one for the AM. The XML for the AM contains delta information about the View Criteria applied to the VO instance:


One more customisation, at the UI level now – drag and drop the Commit operation onto the Submit button:


This change creates two additional XML files with MDS Seeded customisations – one for the JSF fragment and one for the Page Definition:


You must define an MDS Seeded customisations deployment profile (MAR) for the application with the empty project (containing the XMLs):


Development is complete; now for the last bit – deployment. Make sure WLS is started (you should start it separately if you want to test MAR profile deployment):


Go ahead and deploy the main application first – you should get a list of MDS repositories (see the hint on defining a local test repository in the blog post mentioned above):


Once the main application is deployed, you can apply MDS Seeded customisations and export them through a MAR file from the external application:


You should see the main application name in the wizard; MDS Seeded customisations will be applied to this application:


All the changes applied through MDS Seeded customisations will be visible in the log:


A good point – there is no need to restart the main application after MDS Seeded customisation changes are applied. Here you can see the original application with the changes described above:


If an applied change needs to be removed, it can simply be removed from the MDS Seeded customisation XML file and re-applied. The main advantage of this approach: there is no need to store XML files with MDS Seeded customisations inside the original project – we can keep them outside.

OUGN : Day 2

Tim Hall - Sat, 2014-04-05 09:22

Day 2 started really early. Having got to bed about 02:00, I was up at 05:30 thinking about my 08:30 session. The previous evening’s conversation with Brynn was playing on my mind a little (in a good way), and I was thinking about how it should/would affect my session. The session itself seemed to go well. I enjoyed it anyway. :)

From there it was more conversations with people, including a chat with Martin Bach, Martin Nash and one of the attendees (sorry, I forgot your name) about some Exadata issues he was having. I freely admit to knowing nothing about Exadata, but I do know about most of the technology stack that makes up Exadata (like ASM, RAC etc.). A number of the issues people have are not really “Exadata” issues so much as “RAC” issues or “ASM” issues, so it’s surprising how much you can get involved in these discussions, provided you don’t try to pretend to be something you are not!

After that I attempted to catch up on some sleep, which didn’t really work out, so then it was off to lunch.

After lunch it was time for an open database panel session. This is the first time they’ve done this sort of thing at this event, so I’m not sure what people expected, including us panellists. :) I think this sort of thing needs to run for a few conferences to let people get a feel for it before you decide whether it is going to work going forward. You have to give it an opportunity to mature… :)

After that it was off to Martin Bach‘s session on Oracle 12c, features that didn’t make the top 10. There were some things I already knew about and some things that passed me by. Food for thought!

Next up was Martin Nash speaking about what an Oracle DBA should know about Linux administration. I was pretty confident I would be way ahead of the game here, but he mentioned a number of things I’ve not played with. I’ll probably end up downloading his slides and working through some stuff.

Then it was my WebLogic session. Marcus Eisele did a session called WebLogic 101 in the morning and after a quick discussion I decided that we were effectively giving the same talk. I’m always a little nervous about doing WebLogic talks, but this made me even more nervous. As it turned out, with the exception of one person, I got a different crowd to him, so it was fine.

After my last session of the conference, I inevitably did some more chatting and then went off to see “Oracle Cloud Odyssey“. From there it was dinner, chatting to Mike Dietrich in the bar, then off to bed to sleep through the overnight ride back to Oslo.

Cheers

Tim…

PS. I think I got married to Debra Lilley by the ship’s captain…
