
Feed aggregator

The new %SelectDummyTable MetaSQL

Javier Delgado - Tue, 2014-10-14 07:26
Does anyone know a PeopleSoft developer who has never used a SQL statement like the following one?

select %CurrentDateOut
from PS_INSTALLATION;

Where PS_INSTALLATION could be any single-row table in the PeopleSoft data model.

If you look at the previous statement, the SELECT clause is not retrieving any field from the PS_INSTALLATION table, but just using it to comply with ANSI SQL. The same statement could be written in Microsoft SQL Server like this:

select %CurrentDateOut;

In Oracle Database, as:

select %CurrentDateOut
from dual;

In both cases, the statements are a better-performing option: neither of them requires access to a physical table.

The problem with these solutions is that they are platform specific, and we want to avoid platform-specific syntax. Believe me, when you perform a platform migration, the ancestors of the programmers who used this type of syntax are suddenly very present in your mind. So, up to now, we had to stick with the SELECT ... FROM PS_INSTALLATION solution.

Until now. PeopleTools 8.54 introduces a new MetaSQL, %SelectDummyTable, which automatically translates into the appropriate platform-specific statement. Our previous sample would be written as:

select %CurrentDateOut
from %SelectDummyTable

We now have a platform-independent and well-performing solution. What else can we ask for? ;-)

Note: I've checked the online PeopleBooks from Oracle and at this point there is no documentation on this Meta SQL. Still, I've conducted some tests and it seems to be working correctly.

OOW14 : One week in a nutshell

Luc Bors - Tue, 2014-10-14 04:26

Mind Control?

Oracle AppsLab - Mon, 2014-10-13 16:37

Editor’s note: Hey look, a new author. Here’s the first post from Raymond Xie, who joined us nearly a year ago. You may remember him from such concept demos as geo-fencing or Pebble watchface. Raymond has been busy at work and wants to share the work he did with telekinesis. Or something, you decide. Enjoy.

You put on a headband, stare at a ball, tilt your head back-forth and left-right . . . the ball navigates through a simple maze, rushing, wavering, changing colors, and finally hitting the target.

That is the latest creation out of AppsLab: Muse Sphero Driver. When it was first shown at the OAUX Exchange during OOW, it amused many people, who called it a “mind control” game.

The setup consists of  Muse – a brain-sensing headband, Sphero – a robotic ball, and a tablet to bridge the two.

Technically, it is your brainwave data (electroencephalography – EEG) driving the Sphero (adjusting speed and changing color, with a spectrum from RED to BLUE, where RED means fast and active, and BLUE means slow and calm), and your head gesture (3D accelerometer – ACC) controlling the direction of the Sphero's movement.  Whether or not you call that “mind control” is up to your own interpretation.

You kind of drive the ball with your mind, but mostly with brainwave noise rather than conscious thought. It is still too early to derive accurate “mind control” from the EEG data of any regular person, for two reasons:

1. At the scalp level, the EEG noise-to-signal ratio is very poor;
2. The correlation between EEG and mind activity still needs to be established.

But it does open up a dialog in HCI, such as voice control vs. mind control (silence); or in robotics, where instead of asking the machine to “see”/”understand”, we can “see”/”understand” and impersonate it with our mind and soul.

While it is difficult to read out “mind” (any mind activity) transparently, we think it is quite doable to map your mind into certain states, and use the “state” as command indirectly.

We may do something around this area. So stay tuned.

Meanwhile, you can start to practice Yoga or Zen, to get a better noise-to-signal ratio and to set your mind into a certain state with ease.

Using Global Temporary Tables in Application Engine

Javier Delgado - Mon, 2014-10-13 15:52
One of the new PeopleTools 8.54 features that probably went a bit unnoticed amidst the excitement about the new Fluid interface is the ability of Application Engine programs to take advantage of Global Temporary Tables (GTTs) when using an Oracle Database.

What are GTTs?
Global Temporary Tables were introduced by Oracle back in the 8i version of its database product. These tables are session specific, meaning that the data inserted in them only lasts until the session is closed (Oracle Database also offers the possibility of keeping the data only until the next commit, but that option is not used by PeopleSoft). The data inserted in the table by each session is not seen by other sessions. In other words, the behavior is very similar to Application Engine temporary tables. The benefit of using a database-supported solution rather than the traditional temporary tables is better performance, since GTTs are optimized for temporary data.
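
For illustration only, here is a minimal sketch of that session-specific behavior in plain Oracle SQL, outside of PeopleTools (the table and column names are made up):

create global temporary table MY_GTT_DEMO
( emplid varchar2(11) not null )
on commit preserve rows;            -- rows survive commits, but only for the current session

insert into MY_GTT_DEMO values ('KU0001');
commit;

select count(*) from MY_GTT_DEMO;   -- returns 1 in this session
                                    -- the same query returns 0 from any other session

When the session ends its rows disappear, while the table definition remains shared by everybody.
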
How is it implemented in PeopleTools?

The implementation in PeopleTools is quite simple. When selecting Temporary Table as the record type, a new option is enabled: "Global Temporary Table (GTT)".

The build SQL generated by PeopleTools is slightly different from the one for traditional tables:

CREATE GLOBAL TEMPORARY TABLE PS_BN_JOB_WRK (PROCESS_INSTANCE DECIMAL(10) NOT NULL,
   EMPLID VARCHAR2(11) NOT NULL,
   EMPL_RCD SMALLINT NOT NULL,
   EFFDT DATE,
   EFFSEQ SMALLINT NOT NULL,
   EMPL_STATUS VARCHAR2(1) NOT NULL) ON COMMIT PRESERVE ROWS TABLESPACE BNAPP
/
Note: The SQL Build process still creates as many instances of the table as it did with traditional temporary tables. This sounds like a bug to me, as my guess is that the whole idea of using GTTs is to be able to share a table without actually sharing the data, but I may be wrong. In any case, it does not do any harm. Any insight on this? 

Constraints
Due to specific characteristics of GTTs, there are some limitations regarding when they can be used:
  • If the Application Engine is run in online mode, then the GTTs cannot be shared between different programs on the same run.
  • You cannot use Restart Enabled with GTTs as the data is deleted when the session ends. In its current version, PeopleBooks state otherwise, but I think it is a typo.
  • %UpdateStats is not supported. Before Oracle Database 12c, the statistics would be shared among all the sessions. Oracle Database 12c also supports session-specific statistics, which would be the desired behavior in PeopleSoft (from a higher-level point of view, programmers expect the temporary table to be dedicated to the instance). I guess %UpdateStats is not supported because Oracle Database 11g is still supported by PeopleTools 8.54, and in that case gathering statistics would generate unexpected results. Still, the DBA can run statistics outside of the Application Engine program, as sketched after the note below.
Note: As learnt at Oracle OpenWorld 2014, Oracle is evaluating support for Oracle Database 12c session-specific statistics for GTTs in a future release of PeopleTools.
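
For instance, the DBA could gather statistics from a SQL*Plus session with something like the following (a sketch only; the table name is taken from the build SQL above and the owning schema is assumed to be SYSADM):

exec dbms_stats.gather_table_stats(ownname => 'SYSADM', tabname => 'PS_BN_JOB_WRK');
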
Conclusion
If you are moving to PeopleTools 8.54 and you want to improve the performance of a given Application Engine program, the GTTs may bring good value to your implementation. Please remember that you need to be using an Oracle Database.

    PeopleTools 8.54 will be the last release to certify Crystal Reports

    Javier Delgado - Mon, 2014-10-13 15:49
    It was just a question of time. In July 2011, Oracle announced that newly acquired PeopleSoft applications would not include a Crystal Reports license. Some years before, in October 2007, Business Objects had been acquired by SAP. You don't need to read Machiavelli's Il Principe to understand why the license was no longer included.

    In order to keep customers' investment in custom reports safe, Oracle kept updating Crystal Reports certifications for those customers who purchased PeopleSoft applications before that date. In parallel, BI Publisher was improved release after release, providing a viable replacement for Crystal Reports, and in many areas surpassing its features.

    Now, as announced in My Oracle Support's document 1927865.1, PeopleTools 8.54 will be the last release for which Crystal Reports will be certified, and support for report issues will end together with the expiration of PeopleSoft 9.1 applications support.

    PeopleTools 8.54 was just released a couple of months ago, so there is no need to panic, but PeopleSoft application managers would do well to start coming up with a strategy to convert their existing Crystal Reports into BI Publisher reports.

    Extending SaaS with PaaS free eLearning lectures

    Angelo Santagata - Mon, 2014-10-13 15:17

    Hey all,

    Over the last 4 months I've been working with some of my US friends to create an eLearning version of the PTS extending SaaS with PaaS workshop I co-wrote. Well, the time has come and we've published the first 4 eLearning seminars, and I'm sure there will be more coming.

    Check em out and let me know what you think and what other topics need to be covered.

    https://apex.oracle.com/pls/apex/f?p=44785:24:0::::P24_CONTENT_ID,P24_PREV_PAGE:10390,24

    How to measure Oracle index fragmentation

    Yann Neuhaus - Mon, 2014-10-13 14:39

    At Oracle Open World 2014, or rather at Oaktable World, Chris Antognini presented 'Indexes: Structure, Splits and Free Space Management Internals'. It's not something new, but it's still something that is not always well understood: how index space is managed, block splits, fragmentation, coalesce and rebuilds. Kyle Hailey has made a video of it available here.
    For me, it is the occasion to share the script I use to see if an index is fragmented or not.

    First, forget about those 'analyze index validate structure' commands which lock the table, and about DEL_LEAF_ROWS, which counts only the deletion flags that are transient. The problem is not the amount of free space. The problem is where that free space is. Because if you insert again in the same range of values, then that space will be reused. Wasted space occurs only when a lot of rows were deleted in a range where you will not insert again. For example, when you purge old ORDERS, the index on ORDER_DATE - or on an ORDER_ID coming from a sequence - will be affected. Note that the problem occurs only for sparse purges, because full blocks are reclaimed when needed and can get rows from another range of values.

    I have a script that shows the number of rows per block, as well as used and free space per block, and aggregates that by range of values.

    First, let's create a table with a date and an index on it:

    SQL> create table DEMOTABLE as select sysdate-900+rownum/1000 order_date,decode(mod(rownum,100),0,'N','Y') delivered , dbms_random.string('U',16) cust_id
      2  from (select * from dual connect by level

    My script shows 10 buckets with begin and end values, and for each of them the average number of rows per block and the free space:

    SQL> @index_fragmentation 
    
    ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
    --------- -- --------- ---------- ----------- ----------- ---------- -----                     
    24-APR-12 -> 02-AUG-12        377        7163          11        266                        
    03-AUG-12 -> 11-NOV-12        377        7163          11        266                        
    11-NOV-12 -> 19-FEB-13        377        7163          11        266                        
    19-FEB-13 -> 30-MAY-13        377        7163          11        265                        
    30-MAY-13 -> 07-SEP-13        377        7163          11        265                        
    07-SEP-13 -> 16-DEC-13        377        7163          11        265                        
    16-DEC-13 -> 26-MAR-14        377        7163          11        265                        
    26-MAR-14 -> 03-JUL-14        377        7163          11        265                        
    04-JUL-14 -> 11-OCT-14        377        7163          11        265                        
    12-OCT-14 -> 19-JAN-15        376        7150          11        265                        
    

    Note that the script reads the whole table (it can do a sample, but here it is 100%). Well, not exactly the table but only the index. It counts the index leaf blocks with the undocumented function sys_op_lbid(), which is used by Oracle to estimate the clustering factor.
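
    For reference, the core of such a query typically looks like the sketch below. This is only indicative: sys_op_lbid() is undocumented, the literal 12345 stands for the OBJECT_ID of the index, and the hint and NOT NULL predicate are there so that the rowids are obtained by reading the index itself:

    select /*+ index(t DEMOINDEX) */
           sys_op_lbid(12345, 'L', t.rowid) as leaf_block,
           count(*)                         as rows_per_block
    from DEMOTABLE t
    where order_date is not null
    group by sys_op_lbid(12345, 'L', t.rowid);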

    So here I have no fragmentation. All blocks have about 377 rows and no free space. This is because I inserted them in increasing order, and so the so-called '90-10' block split occurred.

    Let's see what I get if I delete most of the rows before the 01-JAN-2014:

    SQL> delete from DEMOTABLE where order_date ...
    SQL> @index_fragmentation 
    
    ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
    --------- -- --------- ---------- ----------- ----------- ---------- -----                     
    25-APR-12 -> 02-AUG-12          4          72          99        266 oooo                   
    03-AUG-12 -> 11-NOV-12          4          72          99        266 oooo                   
    11-NOV-12 -> 19-FEB-13          4          72          99        266 oooo                   
    19-FEB-13 -> 30-MAY-13          4          72          99        265 oooo                   
    30-MAY-13 -> 07-SEP-13          4          72          99        265 oooo                   
    07-SEP-13 -> 16-DEC-13          4          72          99        265 oooo                   
    16-DEC-13 -> 26-MAR-14          4          72          99        265 oooo                   
    26-MAR-14 -> 03-JUL-14          4          72          99        265 oooo                   
    04-JUL-14 -> 11-OCT-14         46         870          89        265 oooo                   
    12-OCT-14 -> 19-JAN-15        376        7150          11        265                        
    

    I have the same buckets, and same number of blocks. But blocks which are in the range below 01-JAN-2014 have only 4 rows and a lot of free space. This is exactly what I want to detect: I can check if that free space will be reused.

    Here I know I will not enter any orders with a date in the past, so those blocks will never have an insert into them. I can reclaim that free space with a COALESCE:

    SQL> alter index DEMOINDEX coalesce;
    
    Index altered.
    
    SQL> @index_fragmentation 
    
    ORDER_DAT to ORDER_DAT rows/block bytes/block %free space     blocks free                      
    --------- -- --------- ---------- ----------- ----------- ---------- -----                     
    25-APR-12 -> 03-OCT-14        358        6809          15         32                        
    03-OCT-14 -> 15-OCT-14        377        7163          11         32                        
    15-OCT-14 -> 27-OCT-14        377        7163          11         32                        
    27-OCT-14 -> 08-NOV-14        377        7163          11         32                        
    08-NOV-14 -> 20-NOV-14        377        7163          11         32                        
    20-NOV-14 -> 02-DEC-14        377        7163          11         32                        
    02-DEC-14 -> 14-DEC-14        377        7163          11         32                        
    14-DEC-14 -> 26-DEC-14        377        7163          11         32                        
    27-DEC-14 -> 07-JAN-15        377        7163          11         32                        
    08-JAN-15 -> 19-JAN-15        371        7056          12         32                        
    

    I still have 10 buckets because this is defined in my script, but each bucket now has fewer rows. I've defragmented the blocks and reclaimed the free blocks.

    Time to share the script now. Here it is: index_fragmentation.zip

    The script is quite ugly. It's SQL generated by PL/SQL. It's generated because it has to select the index columns. And as I don't want it to be too large, it is neither indented nor commented. However, if you run it with set serveroutput on you will see the generated query.

    How to use it? Just change the owner, table_name, and index name. It reads the whole index so if you have a very large index you may want to change the sample size.

    Color Analysis of Flags – Patterns and symbols – Visualizations and Dashboards

    Nilesh Jethwa - Mon, 2014-10-13 13:54

    Recently, here at InfoCaptor, we started a small research project on the subject of flags. We wanted to answer certain questions, such as which colors are most frequently used across all country flags, what the different patterns are, etc.

    Read more Color and Pattern analysis on Flags of Countries – Simple visualization but interesting data


    Oracle Critical Patch Update October 2014 - Massive Patch

    Just when you thought the Oracle Database world was getting safer, Oracle will be releasing fixes for 32 database security bugs on Tuesday, October 14th.  This is in stark contrast to the previous twenty-five quarters, where the high was 16 database bugs and the average per quarter was 8.2 database bugs.  For the previous two years, the most database bugs fixed in a single quarter was six.

    In addition to the 32 database security bugs, there are a total of 155 security bugs fixed in 44 different products.

    Here is a brief analysis of the pre-release announcement for the upcoming October 2014 Oracle Critical Patch Update (CPU).

    Oracle Database

    • There are 32 database vulnerabilities; only one is remotely exploitable without authentication and 4 are applicable to client-side only installations.
    • Since at least one database vulnerability has a CVSS 2.0 metric of 9.0 (critical for a database vulnerability), this is a fairly important CPU due to severity and volume of fixes.
    • The remotely exploitable without authentication bug is likely in Application Express (APEX).  Any organizations running APEX externally on the Internet should look to apply the relevant patches immediately.  To patch APEX, the newest version must be installed, which requires appropriate testing and upgrading of applications.
    • There are four vulnerabilities applicable to client-side-only installations, and most of these are likely in JDBC.
    • Core RDBMS and PL/SQL are listed as patched components, so most likely there are critical security vulnerabilities in all database implementations.

    Oracle Fusion Middleware

    • There are 17 new Oracle Fusion Middleware vulnerabilities, 13 of which are remotely exploitable without authentication and the highest CVSS score being 7.5.
    • Various Fusion Middleware products are listed as vulnerable, so you should carefully review this CPU to determine the exact impact to your environment.
    • The core WebLogic Server is listed as a patched component, therefore, most likely all Fusion Middleware customers will have to apply the patch.

    Oracle E-Business Suite 11i and R12

    • There are nine new Oracle E-Business Suite 11i and R12 vulnerabilities, seven of which are remotely exploitable without authentication.  Many of these are in core Oracle EBS components such as Oracle Applications Framework (OAF) and Application Object Library (AOL/FND).  Even though the maximum CVSS score is 5.0, most of these vulnerabilities should be considered high risk.
    • All DMZ implementations of Oracle EBS should carefully review the CPU to determine if their environment is vulnerable.  As all Oracle EBS CPU patches are now cumulative, the CPU patch should be prioritized, or mitigating controls, such as AppDefend, should be implemented.

    Planning Impact

    • We anticipate this quarter's CPU to be higher risk than most, and it should be prioritized.  Based on the patched components, this may be a higher than average risk CPU for all Oracle database environments.
    • As with all previous CPUs, this quarter's security patches should be deemed critical and you should adhere to the established procedures and timing used for previous CPUs.
    • For Oracle E-Business Suite customers, DMZ implementations may have to apply this quarter's patch faster than previous quarters due to the number and severity of bugs.
    Tags: Oracle Critical Patch Updates
    Categories: APPS Blogs, Security Blogs

    Here We Grow Again

    Oracle AppsLab - Mon, 2014-10-13 12:18

    Cheesy title aside, the AppsLab (@theappslab) is growing again, and this time, we’re branching out into new territory.

    As part of the Oracle Applications User Experience (@usableapps) team, we regularly work with interaction designers, information architects and researchers, all of whom are pivotal to ensuring that what we build is what users want.

    Makes sense, right?

    So, we’re joining forces with the Emerging Interactions team within OAUX to formalize a collaboration that has been ongoing for a while now. In fact, if you read here, you’ll already recognize some of the voices, specifically John Cartan and Joyce Ohgi, who have authored posts for us.

    For privacy reasons (read, because Jake is lazy), I won’t name the entire team, but I’m encouraging them to add their thoughts to this space, which could use a little variety. Semi-related, Noel (@noelportugal) was on a mission earlier this week to add content here and even rebrand this old blog. That seems to have run its course quickly.

    One final note, another author has also joined the fold, Mark Vilrokx (@mvilrokx); Mark brings a long and decorated history of development experience with him.

    So, welcome everyone to the AppsLab team.

    Memory

    Jonathan Lewis - Mon, 2014-10-13 10:24

    On a client site recently, experimenting with a T5-2 – fortunately a test system – we decided to restart an instance with a larger SGA. It had been 100GB, but with 1TB of memory and 256 threads (2 sockets, 16 cores per socket, 8 threads per core) it seemed reasonable to crank this up to 400GB for the work we wanted to do.

    It took about 15 minutes for the instance to start; worse, it took 10 to 15 seconds for a command-line call to SQL*Plus on the server to get a response; worse still, if I ran a simple “ps -ef” to check what processes were running the output started to appear almost instantly but stopped after about 3 lines and hung for about 10 to 15 seconds before continuing. The fact that the first process name to show after the “hang” was one of the Oracle background processes was a bit of hint, though.

    Using truss on both the SQL*Plus call and on the ps call, I found that almost all the elapsed time was spent in a call to shmat (shared memory attach); a quick check with "ipcs -ma" told me (as you might guess) that the chunk of shared memory identified by truss was one of the chunks allocated to Oracle’s SGA. Using pmap on the pmon process to take a closer look at the memory I found that it consisted of a few hundred pages sized at 256MB and a lot of pages sized at 8KB; this was a little strange since the alert log had told me that the instance was allocating about 1,600 memory pages of 256MB (i.e. 400GB) and 3 pages of 8KB – clearly a case of good intentions failing.

    It wasn’t obvious what my next steps should be – so I bounced the case off the Oak Table … and got the advice to reboot the machine. (What! – it’s not my Windows PC, it’s a T5-2.) The suggestion worked: the instance came up in a few seconds, with shared memory consisting of a few 2GB pages, a fair number of 256MB pages, and a few pages of other sizes (including 8KB, 64KB and 2MB).

    There was a little more to the advice than just rebooting, of course; and there was an explanation that fitted the circumstances. The machine was using ZFS and, in the absence of a set limit, the file system cache had at one point managed to acquire 896 GB of memory. In fact, when we first tried to start the instance with a 400GB SGA, Oracle couldn’t start up at all until the system administrator had reduced the filesystem cache and freed up most of the memory; even then so much of the memory had been allocated originally in 8KB pages that Oracle had made a complete mess of building a 400GB memory map.

    I hadn’t passed all these details to the Oak Table but the justification for the suggested course of action (which came from Tanel Poder) sounded like a perfect description of what had been happening up to that point. In total his advice was:

    • limit the ZFS ARC cache (with two possible strategies suggested)
    • use sga_target instead of memory_target (to avoid a similar problem on memory resize operations) – see the sketch after this list
    • start the instance immediately after the reboot
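
    For the second point, a sketch of the corresponding parameter changes (the values are purely illustrative, matching the 400GB SGA mentioned above; the PGA figure is invented):

    alter system set memory_target = 0 scope = spfile;
    alter system set sga_target = 400G scope = spfile;
    alter system set pga_aggregate_target = 50G scope = spfile;
    -- then restart the instance (immediately after the reboot, as suggested above)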

    Maxim: Sometimes the solution you produce after careful analysis of the facts looks exactly like the solution you produce when you can’t think of anything else to do.


    Digital Learning – LVC: It’s the attitude, stupid!

    The Oracle Instructor - Mon, 2014-10-13 06:33

    The single most important factor for successful digital learning is the attitude of both the instructor and the attendees towards the course format. Delivery of countless Live Virtual Classes (LVCs) for Oracle University made me realize that. There are technical prerequisites, of course: a reliable and fast network connection and the usage of a good headset are mandatory, else the participant is doomed from the start. Other prerequisites are the same as for traditional courses: good course material, a working lab environment for hands-on practices and, last but not least, knowledgeable instructors. For that part, notice that we have the very same courseware, lab environments and instructors for LVCs as for our classroom courses at Oracle University education centers. The major difference is in your head :-)

    Delivering my first couple of LVCs, I felt quite uncomfortable with that new format. Accordingly, my performance was not as good as usual. Meanwhile, I consider the LVC format totally adequate for my courses, and that attitude enables me to deliver them with the same performance as my classroom courses. Actually, they are even better to some degree: I always struggle to produce clean sketches with readable handwriting on the whiteboard. Now look at this MS Paint sketch from one of my Data Guard LVCs:

    Data Guard Real-Time Apply

    Attendees get all my sketches by email afterwards if they like.

    In short: Because I’m happy delivering through LVC today, I’m now able to do it with high quality. The attitude defines the outcome.

    Did you ever have a teacher in school that you just disliked for some reason? It was hard to learn anything from that teacher, right? Even if that person was competent.

    So this is also true on the side of the attendee: The attitude defines the outcome. If you take an LVC thinking “This cannot work!”, chances are that you are right just because of your mindset. When you attend an LVC with an open mind – even after some initial trouble because you need to familiarize yourself with the learning platform and the way things are presented there – it is much more likely that you will benefit from it. You may even like it better than classroom courses because you can attend from home without the time and expenses it takes to travel :-)

    Some common objections against LVC I have heard from customers and my usual responses:

    An LVC doesn’t deliver the same amount of interaction like a classroom course!

    That is not necessarily so: You are in a small group (mostly less than 10) that is constantly in an audio conference. Unmute yourself and say anything you like, just like in a classroom. Additionally, you have a chatbox available. This is sometimes extremely helpful, especially with non-native speakers in the class :-) You can easily exchange email addresses using the chatbox as well and stay in touch even after the LVC.

    I have no appropriate working place to attend an LVC!

    You have no appropriate working place at all, then, for something that requires a certain amount of concentration. Talk to your manager about it – maybe there is something like a quiet room available during the LVC.

    I cannot keep up my attention when staring at the computer screen the whole day!

    Of course not, that is why we have breaks and practices in between the lessons.

    Finally, I would love to hear about your thoughts and experiences with online courses! What is your attitude towards Digital Learning?


    Tagged: Digital Learning, LVC
    Categories: DBA Blogs

    PLS-00201 in stored procedures

    Laurent Schneider - Mon, 2014-10-13 03:36

    When you grant table access through a role, you cannot use that role in a stored procedure or view.

    
    create role r;
    
    create user u1 identified by ***;
    grant create procedure, create session to u1;
    
    create user u2 identified by ***;
    grant create procedure, create session, r to u2;
    
    conn u1/***
    create procedure u1.p1 is begin null; end; 
    /
    
    grant execute on u1.p1 to r;
    
    conn u2/***
    
    create procedure u2.p2 is begin u1.p1; end; 
    /
    
    sho err procedure u2.p2
    
    Errors for PROCEDURE U2.P2:
    
    L/COL ERROR
    ----- -------------------------------------------
    1/26  PL/SQL: Statement ignored
    1/26  PLS-201: identifier U1.P1 must be declared
    

    However, if I run it in an anonymous block, it works:

    
    declare
      procedure p2 is 
      begin 
        u1.p1; 
      end; 
    begin
      p2; 
    end; 
    /
    
    PL/SQL procedure successfully completed.
    

    But this only works when my role is active. If my role is no longer active, then it obviously fails.

    
    set role none;
    
    declare 
      procedure p2 is 
      begin 
        u1.p1; 
      end; 
    begin 
      p2; 
    end; 
    /
    ERROR at line 1:
    ORA-06550: line 4, column 5:
    PLS-00201: identifier 'U1.P1' must be declared
    ORA-06550: line 4, column 5:
    PL/SQL: Statement ignored
    

    It is all written in the doc,

    All roles are disabled in any named PL/SQL block (stored procedure, function, or trigger) that executes with definer’s rights

    I knew the behavior but not the reason behind it. Thanks to Bryn for bringing me so much knowledge on plsql.
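
    By the way, the usual workaround is a direct grant to the schema that owns the stored code, since directly granted privileges remain active with definer's rights (a quick sketch based on the objects above):

    conn u1/***
    grant execute on u1.p1 to u2;
    
    conn u2/***
    alter procedure u2.p2 compile;
    sho err procedure u2.p2
    
    No errors.
    
    The direct grant, unlike the role, is visible inside the stored procedure, so U2.P2 now compiles.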

    #OOW14, #DB12c, #BikeB4OOW(!) at 14:45 today, #CON8269: "Doing PL/SQL from SQL: Correctness and Performance" http://t.co/A9BKqrKygv

    — Bryn Llewellyn (@BrynLite) September 29, 2014

    Context for Cloudera

    DBMS2 - Mon, 2014-10-13 02:02

    Hadoop World/Strata is this week, so of course my clients at Cloudera will have a bunch of announcements. Without front-running those, I think it might be interesting to review the current state of the Cloudera product line. Details may be found on the Cloudera product comparison page. Examining those details helps, I think, with understanding where Cloudera does and doesn’t place sales and marketing focus, which given Cloudera’s Hadoop market stature is in my opinion an interesting thing to analyze.

    So far as I can tell (and there may be some errors in this, as Cloudera is not always accurate in explaining the fine details):

    • CDH (Cloudera Distribution … Hadoop) contains a lot of Apache open source code.
    • Cloudera has a much longer list of Apache projects that it thinks comprise “Core Hadoop” than, say, Hortonworks does.
      • Specifically, that list currently is: Hadoop, Flume, HCatalog, Hive, Hue, Mahout, Oozie, Pig, Sentry, Sqoop, Whirr, ZooKeeper.
      • In addition to those projects, CDH also includes HBase, Impala, Spark and Cloudera Search.
    • Cloudera Manager is closed-source code, much of which is free to use. (I.e., “free like beer” but not “free like speech”.)
    • Cloudera Navigator is closed-source code that you have to pay for (free trials and the like excepted).
    • Cloudera Express is Cloudera’s favorite free subscription offering. It combines CDH with the free part of Cloudera Manager. Note: Cloudera Express was previously called Cloudera Standard, and that terminology is still reflected in parts of Cloudera’s website.
    • Cloudera Enterprise is the umbrella name for Cloudera’s three favorite paid offerings.
    • Cloudera Enterprise Basic Edition contains:
      • All the code in CDH and Cloudera Manager, and I guess Accumulo code as well.
      • Commercial licenses for all that code.
      • A license key to use the entirety of Cloudera Manager, not just the free part.
      • Support for the “Core Hadoop” part of CDH.
      • Support for Cloudera Manager. Note: Cloudera is lazy about saying this explicitly, but it seems obvious.
      • The code for Cloudera Navigator, but that’s moot, as the corresponding license key for Cloudera Navigator is not part of the package.
    • Cloudera Enterprise Data Hub Edition contains:
      • Everything in Cloudera Basic Edition.
      • A license key for Cloudera Navigator.
      • Support for all of HBase, Accumulo, Impala, Spark, Cloudera Search and Cloudera Navigator.
    • Cloudera Enterprise Flex Edition contains everything in Cloudera Basic Edition, plus support for one of the extras in Data Hub Edition.

    In analyzing all this, I’m focused on two particular aspects:

    • The “zero, one, many” system for defining the editions of Cloudera Enterprise.
    • The use of “Data Hub” as a general marketing term.

    Given its role as a highly influential yet still small “platform” vendor in a competitive open source market, Cloudera even more than most vendors faces the dilemma:

    • Cloudera wants customers to adopt its views as to which Hadoop-related technologies they should use.
    • However, Cloudera doesn’t want to be in the position of trying to ram some particular unwanted package down a customer’s throat.

    The Flex/Data Hub packaging fits great with that juggling act, because Cloudera — and hence also Cloudera salespeople — get paid exactly as much when customers pick 2 Flex options as when they use all 5-6. If you prefer Cassandra or MongoDB to HBase, Cloudera is fine with that. Ditto if you prefer CitusDB or Vertica or Teradata Hadapt to Impala. Thus Cloudera can avoid a lot of religious wars, even if it can’t entirely escape Hortonworks’ “More open source than thou” positioning.

    Meanwhile, so far as I can tell, Cloudera currently bets on the “Enterprise Data Hub” as its core proposition, as evidenced by that term being baked into the name of Cloudera’s most comprehensive and expensive offering. Notes on the EDH start:

    • Cloudera also portrays “enterprise data hub” as an architectural/reference architecture concept.
    • “Enterprise data hub” doesn’t really mean anything very different from “data lake” + “data refinery”; Cloudera just thinks it sounds more important. Indeed, Cloudera claims that the other terms are dismissive or disparaging, at least in some usages.

    Cloudera’s long-term dream is clearly to make Hadoop the central data platform for an enterprise, while RDBMS fill more niche (or of course also legacy) roles. I don’t think that will ever happen, because I don’t think there really will be one central data platform in the future, any more than there has been in the past. As I wrote last year on appliances, clusters and clouds,

    Ceteris paribus, fewer clusters are better than more of them. But all things are not equal, and it’s not reasonable to try to reduce your clusters to one — not even if that one is administered with splendid efficiency by low-cost workers, in a low-cost building, drawing low-cost electric power, in a low-cost part of the world.

    and earlier in the same post

    … these are not persuasive reasons to put everything on a SINGLE cluster or cloud. They could as easily lead you to have your VMware cluster and your Exadata rack and your Hadoop cluster and your NoSQL cluster and your object storage OpenStack cluster — among others — all while participating in several different public clouds as well.

    One system is not going to be optimal for all computing purposes.

    Categories: Other

    Workaround for ADF 12c Choice List Blank Selection Issue

    Andrejus Baranovski - Sun, 2014-10-12 09:28
    I would like to share a workaround for the Choice List component in ADF 12c. There is a specific issue related to blank selection: as soon as the user selects the blank selection in the choice list, the value change listener for that list starts to be invoked every time any other row is selected in the table. This is quite annoying and could lead to unexpected results, especially if you depend on logic implemented in the value change listener.

    The workaround is implemented in the sample application - ADFChoiceList12cApp.zip. First, let's take a look at how it works without the workaround applied. Imagine you have a table and one of the columns implements a choice list. Go and select the blank selection for a couple of rows:


    Navigate between rows and do a couple of clicks - you should notice messages in the log about the value change listener being invoked. This is quite strange; it should not call the value change listener each time, only the first time after the selection was changed in the list:


    The choice list UI component is set with AutoSubmit=true and assigned a value change listener method:


    The choice list is configured with a blank item selection in ADF BC:


    To apply the workaround, go and uncheck the blank item selection in ADF BC - make sure the 'Include No Selection Item' checkbox is unchecked:


    On the UI side, provide any value for the UnselectedLabel property of the choice list component; this will generate the property in the source (alternatively, you could just type it manually directly in the source):


    Make sure the UnselectedLabel property is set to blank; this will generate a blank item at runtime (it works much better than the one enabled from ADF BC):


    The user can navigate through the table and the value change listener will not be triggered anymore (besides the one time when the value was actually changed):


    Exciting as usual - new features and new bugs :)

    Managing your Database in a Zombie Apocalypse

    Steve Karam - Sun, 2014-10-12 07:00
    Zombie Server Room

    Only two things are really certain: network latency over long distances, and the fact that humanity will soon rapidly degenerate into undead brain-eaters.

    When that day comes, when the dead are crowding at your door and the windows are busted out and ripped up rotted arms are clawing at the inside of your home, I know what you’ll be thinking: is my database protected?

    Don’t worry, my friends. The Oracle Alchemist has you covered. We just need to zombie-proof your DR plan. Let’s get started.

    Getting the Power Back

    Hopefully you did the smart thing and figured out how much battery and generator power you’d need to survive multiple years of failing power systems due to zombies. I know I did.

    However, if you didn’t get this critical task done you may still have some options. Statistics show that the demand for U.S. gasoline was 8.73 million barrels in 2012. That comes out to 23,917.80821917808219 barrels per day of fuel that’s out there just waiting for you to snatch it up. The problem is going to be getting it. You’ll need to load yourself down with lots of weaponry and strike out in a fuel truck a few times a week, which will definitely take away from your database administration time. It’s a smart idea to enable a lot of automation and monitoring to take care of things while you’re out.

    Zombie Gas Station

    You’re going to need to fight other groups of surviving IT for fuel. This means you’re going to need friends. The way I see it, you have two choices: SysAdmins and Developers. They’re the two groups you work closest with as a DBA, so they’re the most likely to have your back when the dead walk. Start your planning now. If you want to get the developers on your side, tune some queries for them. Seriously, nothing will convince a developer to slice through the brain base of a walker like adding some key indexes when a query goes south during testing. However, if you think the SysAdmins are more likely to fight off rival gangs of resource hogs on the prowl for food and fuel, you can make them feel good by keeping all your filesystems cleaned up and RAM usage at a minimum.

    The Problem with Zombies

    Remember, the walking dead are tenacious. You remember before the apocalypse when a bunch of reporting users would all log into the database and run huge ad hoc queries against PROD without thinking about what they were doing? That was nothing. Zombies are the real deal. They will tear through a database faster than a multi-terabyte Cartesian product. You can deploy outside the box now to increase your chances of having a clone out there somewhere, just in case. If you want that database to survive, you’re going to need standbys. Lots of them.

    I’d recommend a hub and spoke configuration. One central production database, at least 5 standby databases. As everybody knows, the chances of a zombie bringing down a database are roughly 89.375%. With 5 standby environments, you can drastically reduce the odds of being left without a standby system. On the plus side, zombies are completely brainless. What this means is that you don’t have to worry about masking or obfuscating your backup data in any way. Even on the off chance one of them kicks off a query (even zombies can figure out SQL Developer), they won’t be able to comprehend your users’ personal data, and with the complete downfall of the dollar it won’t matter if they see any credit information. So rest easy.

    When All Else Fails

    At some point, the zombies are going to come for you. Sorry, but it’s a statistical fact and there’s not much we can do about that. At that moment, when all hope is lost, you’re really going to need to protect your database because once you become a zombie too there really won’t be anyone left and you won’t be focused on maintaining it anymore; you’ll be focused on acquiring copious amounts of human flesh.

    Zombie Server Room

    So make your last stand count. You’re a soon-to-be-undead DBA, act like it! Remember how we tune. Eliminate the wait, punch through the bottlenecks, make efficient use of processing power. Don’t get trapped between a rack and a hard place. If you have to play a game of circle-the-Exadata in order to get away, go for it, but don’t let them corner you. And whatever you do, make sure you keep your badge with you. The last thing you need is to hit a door you can’t get through without the proper credentials. Above all else: remember to kick off a backup before they finally take you. I’d recommend having the script ready and running on the console just in case you have to hit a quick key.

    Good luck. You’re going to need it.

    The post Managing your Database in a Zombie Apocalypse appeared first on Oracle Alchemist.

    Latest Oracle allows SELECT without SELECT...FOR UPDATE

    Gary Myers - Sat, 2014-10-11 21:11
    Digging through a backlog of Oracle blogs, I came across a gem in a presentation from AMIS (on Slideshare). I got to bullet point 5 on slide 63 and boom!

    You all know that when you grant SELECT on a table to a user, they can do a SELECT FOR UPDATE, locking records in the table and preventing other updates or deletes. [Some client tools may do that in the background. ]

    Well finally Oracle have cottoned on to that too, and there's a lighter-weight "READ" privilege in 12.1.0.2 which won't allow SELECT FOR UPDATE.
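
    A quick sketch of the difference (my own example, assuming 12.1.0.2 and a SCOTT.EMP-style table; the app_reader user is made up):

    grant read on scott.emp to app_reader;
    -- app_reader can run:    select * from scott.emp;
    -- but this now fails:    select * from scott.emp for update;
    
    grant select on scott.emp to app_reader;
    -- with the traditional privilege, both statements are allowed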

    This will make DBAs very happy. Actually it won't. The natural state of a DBA is grumpy, at least when in the vicinity of a developer or salesman.


    PS. Why would SELECT FOR UPDATE ever be a good idea for a user with no UPDATE privilege ?
    If I had to guess, I'd say it went back to a 'pre-read consistency' model when you might use a SELECT FOR UPDATE to try to select data that wasn't being updated.

    Partner Webcast Special Edition – Oracle Mobile Application Framework: Developer Challenge

    Win up to $6.000 by developing a mobile application with Oracle Mobile Application Framework! Mobile technology has changed the way that we live and work. As mobile interfaces take the lion’s...

    We share our skills to maximize your revenue!
    Categories: DBA Blogs