
Feed aggregator

HOWTO: Create a Structured XMLIndex, using the In-Memory Column Store

Marco Gralike - Thu, 2014-11-27 03:41
In Oracle database version 12.1.0.2, Oracle introduced the Oracle In-Memory Database option. It is possible…

White Papers

Oracle in Action - Wed, 2014-11-26 23:44


Oracle’s Approach to Performance Tuning Part-I by Darrick Addison

Oracle’s Approach to Performance Tuning Part-II by Darrick Addison

SQL Plan Management in 11g (Oracle White Paper)

SQL Plan Management in 12c (Oracle White Paper)

Adaptive Cursors And SQL Plan Management (Arup Nanda)

Partitioning in 11g (Oracle White paper)

Oracle Database Parallel Execution Fundamentals (Oracle White Paper)

Understanding Parallel Execution Part-I (Randolf Geist)

Understanding Parallel Execution Part-II (Randolf Geist)

Oracle Active Dataguard 11g (Oracle White Paper)

Oracle 11g RAC (Oracle White paper)

Oracle 11gR2 RAC (Oracle White Paper)

Oracle Single Client Access Name (Oracle White Paper)

Oracle RAC One Node (Oracle White Paper)

11g R2 RAC : Architecture, Best Practices And Troubleshooting (Kai Yu)

Automatic Workload Management With Oracle RAC (Oracle White Paper)

RAC Administering Parallel Execution (Riyaz Shamsudeen)

Using RAC Parallel Instance Groups (Chris Lawson)

Oracle 12c RAC (Oracle White paper)

Maximize Availability with Oracle 12c (Oracle White Paper)




Categories: DBA Blogs

Oracle 12c privilege analysis rocks

Yann Neuhaus - Wed, 2014-11-26 14:18

12c came with a very nice feature: privilege analysis. You don't know which privileges are required? Then just grant DBA, run your application, and check which minimal privileges are needed. And today I've discovered how powerful it is: you can even see privileges used internally - even when the access is not done through SQL, and even when it is not documented.

It started like that, with a question from Vladimir Sitnikov (who publishes very interesting material on his Twitter account) in the tone of a challenge:

@FranckPachot Ok, ace. Do you think dbms_utility.get_parameter_value requires special grants (e.g. in current 11gR2)?

— Vladimir Sitnikov (@VladimirSitnikv) November 26, 2014

So I went to the documentation, which describes a special security model for some functions, but says nothing about get_parameter_value.

Then I created a simple user with only CREATE SESSION privilege and got:

SQL> drop user TEST;
User dropped.
SQL> grant create session to TEST identified by TEST;
Grant succeeded.
SQL> connect TEST/TEST
Connected.
SQL> variable i number
SQL> variable s varchar2(1000)
SQL> variable t number
SQL> exec :t:=DBMS_UTILITY.GET_PARAMETER_VALUE('NLS_LENGTH_SEMANTICS',:i,:s);
BEGIN :t:=DBMS_UTILITY.GET_PARAMETER_VALUE('NLS_LENGTH_SEMANTICS',:i,:s); END;
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_UTILITY", line 140
ORA-06512: at line 1

So, which privileges do you need? Let's try the 12c privilege analysis:

SQL> grant dba to TEST;
Grant succeeded.

SQL> connect / as sysdba
Connected.

SQL> exec dbms_privilege_capture.create_capture (name=>'demo',type =>dbms_privilege_capture.g_role,roles=>role_name_list('DBA'));
PL/SQL procedure successfully completed.

SQL> exec dbms_privilege_capture.enable_capture (name=>'demo');
PL/SQL procedure successfully completed.

SQL> connect TEST/TEST
Connected.

SQL> exec :t:=dbms_utility.get_parameter_value('NLS_LENGTH_SEMANTICS',:i,:s);
PL/SQL procedure successfully completed.

SQL> print s

S
--------------------------------------------------------------
BYTE

SQL> connect / as sysdba
Connected.

SQL> exec dbms_privilege_capture.disable_capture(name=>'demo');
PL/SQL procedure successfully completed.

SQL> exec dbms_privilege_capture.generate_result(name=>'demo');
PL/SQL procedure successfully completed.

SQL> select object_owner,object_name,obj_priv from dba_used_objprivs ;

OBJECT_OWN OBJECT_NAME     OBJ_PRIV
---------- --------------- ----------
SYS        V_$PARAMETER    SELECT

SQL> select path from dba_used_objprivs_path ;

PATH
--------------------------------------------------------------
GRANT_PATH('TEST', 'DBA', 'EXP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'EM_EXPRESS_ALL', 'EM_EXPRESS_BASIC', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'DATAPUMP_IMP_FULL_DATABASE', 'EXP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'DATAPUMP_EXP_FULL_DATABASE', 'EXP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'IMP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'DATAPUMP_IMP_FULL_DATABASE', 'IMP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'SELECT_CATALOG_ROLE')

SQL> exec dbms_privilege_capture.drop_capture (name=>'demo');
PL/SQL procedure successfully completed.

I've granted the DBA role and run the privilege analysis capture on that role while calling the function. And bingo: you need to be granted SELECT on V_$PARAMETER (which comes into the DBA role through SELECT_CATALOG_ROLE) ... which sounds legitimate, as the goal is to get a parameter value.

But do you know what? DBMS_UTILITY.GET_PARAMETER_VALUE does not execute any SELECT statement. That behavior is documented in the package for other functions, but not for GET_PARAMETER_VALUE:

Rem The dbms_utility package is run-as-caller (psdicd.c) only for
Rem its name_resolve, compile_schema, analyze_schema, wait_on_pending_dml,
Rem and expand_sql_text procedures. This package is not run-as-caller
Rem w.r.t. SQL (psdpgi.c) so that the SQL works correctly (runs as
Rem SYS). The privileges are checked via dbms_ddl.


That function calls a C function (KSPGPNICD), so we don't know what happens behind the scenes. If you sql_trace it, you see nothing about V_$PARAMETER.

But privilege analysis shows the required privileges anyway, and that rocks.
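Based on that capture result, here is a minimal sketch of the least-privilege alternative to granting DBA (the revoke/grant pair is my assumption of the obvious next step, not part of the original demo):

SQL> connect / as sysdba
SQL> revoke dba from TEST;
SQL> grant select on sys.v_$parameter to TEST;

After this, the DBMS_UTILITY.GET_PARAMETER_VALUE call from the TEST session should no longer raise ORA-01031, with TEST holding nothing but CREATE SESSION and that single object privilege.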

Happy Thanksgiving

Oracle AppsLab - Wed, 2014-11-26 13:55

Editor’s note: Here’s a first post from one of our new team members, Thao Nguyen (@thaobnguyen), who runs our Emerging Interactions team, the Research and Design part of the R, D & D.

That last D is Development if that’s unclear. Anyway, like Thao says, Happy Thanksgiving for those who celebrate it, and for those who don’t enjoy the silence in our absence. To Thao’s question, I’m going with Internet. Yes, it’s a gadget because it’s a series of tubes, not a big truck.

Find the comments to add the gadget for which you are most thankful.

Tomorrow is Thanksgiving and this seems like a good time to put my voice out on The AppsLab (@theappslab). I’m Thao, and my Twitter (@thaobnguyen) tagline is “geek mom.” I’m a person of few words and those two words pretty much summarize my work and home life. I manage The AppsLab researchers and designers. Jake welcomed us to the AppsLab months ago here, so I’m finally saying “Thank you for welcoming us!”

Photo by floodllama on Flickr used under Creative Commons

As we reflect on all the wonderful things in our lives, personal and professional, I sincerely want to say I am very thankful for having the best work family ever. I was deeply reminded of that early this week, when I had a little health scare at work and was surrounded by so much care and support from my co-workers. Enough of the emotional stuff, and onto the fun gadget stuff . . . .

My little health scare led me to a category of devices that hadn't hit my radar before - potentially life-saving personal medical apps. I've been looking at wearables, fitness devices, healthcare apps, and the like for a long time now, but there is a class of medical-grade devices (at least, recommended by my cardiologist) that is potentially so valuable in my life, as well as to those dear to me . . . AliveCor. It essentially turns your smartphone into an ECG device so you can monitor your heart health anytime and share it with your physician. Sounds so cool!

Back to giving thanks, I'm so thankful for all the technology and gadgets of today - from the iPhone and iPad that let me have a peaceful dinner out with the kids to these medical devices that I'll be exploring now. I want to leave you with a question: "What gadget are you most thankful for?"

Lunchtime quiz

Jonathan Lewis - Wed, 2014-11-26 06:41

There was a question on OTN a few days ago that asked the following:

Here’s a query that ran okay on 11g, but crashed with Oracle error “ORA-01843: not a valid month” after upgrade to 12c; why ?

The generically correct answer, of course, is that the OP had been lucky (or unlucky, depending on your point of view) on 11g – and I’ll explain that answer in another blog posting.

That isn't the point of this posting, though. This posting is a test of observation and deduction. One of the respondents in the thread had conveniently supplied a little bit of SQL that I copied and fiddled about with to demonstrate a point regarding CPU costing, but as I did so I thought I'd show you the following and ask a simple question.


drop table T;

Create Table T
As
with
periods as (
                  Select 'January' period, 1 cal  From Dual
        union all Select 'February' period , 2 cal From Dual
        union all Select 'March' period , 3 cal From Dual
        union all Select 'April' period , 4 cal From Dual
        union all Select 'May'  period, 5 cal From Dual
        union all Select 'June' period, 6 cal From Dual
        union all Select 'July' period, 7 cal From Dual
        union all Select 'August' period, 8 cal From Dual
        union all Select 'September' period, 9 cal  From Dual
        union all Select 'October' period, 10 cal From Dual
        union all Select 'November' period, 11 cal From Dual
        Union All Select 'December' Period, 12 Cal From Dual
        Union All Select '13 Series' Period, Null Cal  From Dual
)
Select  Period,Cal
from periods;

prompt  ==================================
prompt  When we invoke below SQL it works.
prompt  ==================================

set autotrace on explain

select *
from    (
        select
                Period,
                Cal,
                to_date(Period || ', ' || 2014,'Month, YYYY') col1 ,
                to_date('November, 2014','Month, YYYY') col2
        From  T
        Where  Cal > 0
        )
;

prompt  ================================================
prompt  But when we add comparison operations , it fails
prompt  ================================================

select *
from    (
        select
                Period,
                Cal,
                to_date(Period || ', ' || 2014,'Month, YYYY')   col1,
                to_date('November, 2014','Month, YYYY')         col2
        From  T
        Where  Cal > 0
        )
where
        col1 >= col2
;

set autotrace off



All I’ve done is create a table then run and generate the execution plans for two queries – with a comment that if you try to run one query it will succeed but if you try to run the other it will fail (and raise ORA-01843). As far as the original supplier was concerned, both queries succeeded in 11g and the failure of the second one appeared only in 12c. In fact, for reasons that I won’t discuss here, it is POSSIBLE for the failure to appear in 11g as well, though not necessarily with this exact data set.

Here’s the COMPLETE output I got from running the code above on an 11.2.0.4 instance:



Table dropped.


Table created.

==================================
When we invoke below SQL it works.
==================================

PERIOD           CAL COL1      COL2
--------- ---------- --------- ---------
January            1 01-JAN-14 01-NOV-14
February           2 01-FEB-14 01-NOV-14
March              3 01-MAR-14 01-NOV-14
April              4 01-APR-14 01-NOV-14
May                5 01-MAY-14 01-NOV-14
June               6 01-JUN-14 01-NOV-14
July               7 01-JUL-14 01-NOV-14
August             8 01-AUG-14 01-NOV-14
September          9 01-SEP-14 01-NOV-14
October           10 01-OCT-14 01-NOV-14
November          11 01-NOV-14 01-NOV-14
December          12 01-DEC-14 01-NOV-14

12 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |    12 |   228 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |    12 |   228 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("CAL">0)

Note
-----
   - dynamic sampling used for this statement (level=2)

================================================
But when we add comparison operations , it fails
================================================

PERIOD           CAL COL1      COL2
--------- ---------- --------- ---------
November          11 01-NOV-14 01-NOV-14
December          12 01-DEC-14 01-NOV-14

2 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |     1 |    19 |     2   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| T    |     1 |    19 |     2   (0)| 00:00:01 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("CAL">0 AND TO_DATE("PERIOD"||', '||'2014','Month,
              YYYY')>=TO_DATE(' 2014-11-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))


So this is the question. What’s the anomaly in this output ?

Bonus question: What’s the explanation for the anomaly ?

Answers:

If I had asked why the query might, or might not, crash - the answer would be about the order of predicate evaluation, and simply collecting stats (or not) might have made a difference. Ever since "system stats" and "CPU costing" appeared, the optimizer has been able to change the order in which it applies filter predicates to a table (there's a pdf of an article of mine from Oracle magazine in the 9i / 10g timeline linked at this URL). In this case, applying the "cal > 0" predicate first luckily eliminates the rows that would fail the second predicate. Since the effect is driven by the optimizer's stats, this type of failure could occur ANY TIME you have a predicate that requires coercion between types to take place - which is one reason why you see the injunctions to use the correct data types; and why, if you need coercion to work around incorrect data types, you have to consider writing your own functions to trap and resolve the necessary errors raised by Oracle's implicit conversion mechanisms.
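As a minimal sketch of such a wrapper (the name safe_to_date and the choice to return NULL on failure are my assumptions, not something from the original article):

create or replace function safe_to_date (
        p_string in varchar2,
        p_format in varchar2
) return date
is
begin
        return to_date(p_string, p_format);
exception
        when others then
                -- trap conversion errors such as ORA-01843 and return NULL
                -- so that a comparison predicate simply rejects the row
                return null;
end safe_to_date;
/

A predicate written as safe_to_date(period || ', ' || 2014, 'Month, YYYY') >= col2 then gives the same result whichever order the optimizer applies the filters in, because rows that fail the conversion quietly evaluate to NULL.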

For a quick sketch of the optimizer strategy, the arithmetic is roughly:  predicate A costs c1 and predicate B costs c2; if I apply predicate A to every row I have to apply predicate B to only N surviving rows; if I apply predicate B to every row I have to apply predicate A to M surviving rows; which is smaller: (input_rows * c1 + N * c2) or (input_rows * c2 + M * c1).
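To put illustrative (invented) numbers on that: with 1,000 input rows, c1 = 1, c2 = 10, N = 10 and M = 900, applying A first costs 1000*1 + 10*10 = 1,100 while applying B first costs 1000*10 + 900*1 = 10,900 - so the optimizer would choose to apply predicate A first.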

The answer to the question I actually asked is this, though: I stressed the fact that this was the COMPLETE output because, as Narenda highlighted in comment 7 below, the first query shows a note about dynamic sampling and the second query does not. This is a little surprising; we don't have stats on the table, and the two queries are different, so the optimizer has to optimize both of them. In 12c, of course, it's possible that the optimizer may have done something clever with statistics feedback (formerly cardinality feedback) and created an SQL directive - but even then we should have seen a note about that.

For the bonus question: given that the second output doesn't report dynamic sampling, we should be curious why not - did the optimizer simply decide not to try, did it try and then decide not to use the results for some reason, or is there some other explanation? The obvious next step is to look at the 10053 (optimizer) trace - where you find that the optimizer DID do dynamic sampling; or rather, it tried to do dynamic sampling, but the query generated to take the sample failed with Oracle error ORA-01843, as suggested by Chinar Aliyev in comment 9 and expanded by Mohamed Houri in comment 11.

The irony of the sampling problem (hinted at by Chinar Aliyev in comment 10) is that you could be in a position where you have a large table and Oracle picks a small sample which happens to miss all of the problem rows, returning a sample that persuades the optimizer to pick an execution plan that is bound to find a problem row; alternatively, the SQL used to generate the sample might apply the predicates in an order that manages to eliminate the problem rows, while the final plan derived after sampling applies the predicates in the order B, A.



Mandatory Auditing - Oracle 12c Always-On-Auditing

Certainly from an auditing and logging perspective, one of the best new features delivered by Oracle 12c is mandatory auditing of the administrative users such as SYSDBA.  This can be described as ‘always on auditing’.  By default, the following audit related activities are now mandatorily audited -

  • CREATE AUDIT POLICY
  • ALTER AUDIT POLICY
  • DROP AUDIT POLICY
  • AUDIT
  • NOAUDIT
  • EXECUTE of the DBMS_FGA PL/SQL package
  • EXECUTE of the DBMS_AUDIT_MGMT PL/SQL package
  • All configuration changes that are made to Oracle Database Vault
  • ALTER TABLE attempts on the AUDSYS audit trail table (this table cannot be altered)
  • Top level statements by administrative users SYS, SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, and SYSKM, until the database opens.  When the database opens, Oracle Database audits these users using the audit configurations in the system.

The audit activity resulting from mandatory auditing can be found in SYS.UNIFIED_AUDIT_TRAIL. 

Note that when the database is not writable (such as during database mounting), closed, or read-only, Oracle writes the audit records to external files in the $ORACLE_BASE/audit/$ORACLE_SID directory.
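As a quick sketch of how to review these records (column names assumed from the 12c UNIFIED_AUDIT_TRAIL view; adjust for your version):

select event_timestamp, dbusername, action_name, sql_text
from   unified_audit_trail
where  action_name in ('CREATE AUDIT POLICY', 'ALTER AUDIT POLICY',
                       'DROP AUDIT POLICY', 'AUDIT', 'NOAUDIT')
order  by event_timestamp;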

Mandatory Auditing

Integrigy Framework Event

  • CREATE AUDIT POLICY
  • ALTER AUDIT POLICY
  • DROP AUDIT POLICY
  • EXECUTE of the DBMS_FGA PL/SQL package
  • EXECUTE of the DBMS_AUDIT_MGMT PL/SQL package
  • All configuration changes that are made to Oracle Database Vault
  • ALTER TABLE attempts on the AUDSYS audit trail table (remember that this table cannot be altered)

E12 - Modify audit and logging

  • Top level statements by the administrative users SYS, SYSDBA, SYSOPER, SYSASM, SYSBACKUP, SYSDG, and SYSKM until the database opens
  • AUDIT
  • NOAUDIT

E11 - Privileged commands

Note: Activity can be found in SYS.UNIFIED_AUDIT_TRAIL when in pure unified auditing mode, and in the traditional audit trails when in mixed mode.

If you have questions, please contact us at info@integrigy.com.

Reference Tags: Auditing, Oracle Database
Categories: APPS Blogs, Security Blogs

Suppressing ADF LOV Like Operator Filtering V2

Andrejus Baranovski - Wed, 2014-11-26 01:53
I had a post about the solution to suppress the ADF LOV Like operator and prevent LOV popup loading when a user is typing an existing value - Suppressing ADF LOV Like Operator Filtering. Thanks to a blog reader, one use case was found where the Like operator was not suppressed properly. This is fixed now, and I would like to post the updated sample application here.

An additional method is overridden in the LOV VO implementation class - applyViewCriteria(ViewCriteria, boolean). Besides the previously overridden method buildViewCriteriaClauses(ViewCriteria), this method provides additional handling for LOV Like operator suppression. Download the sample application - ADFTableApp_v2.zip.

The blog reader reported an issue: when the LOV value was changed to 10, then to 100, and then back to 10, the LOV popup was opened on the last change. This should not happen, as the value 10 exists in the LOV. It seems that when the LOV value is changed to the same value it was set to before, ADF BC executes an extra call to the LOV filter through the applyViewCriteria method. This is why this method is overridden as well as buildViewCriteriaClauses.

Both methods are overridden, and a check for the STARTSWITH (Like) operator is implemented:


It works well now. Try to enter the value 10 - the LOV will accept it without opening the LOV popup:


Enter the value 100 - it will be accepted, as it exists in the LOV list:


Again enter the value 10 - it will be accepted as it was previously (it would not be without overriding the applyViewCriteria method):


If I type any value not available in the LOV list:


As expected, the LOV popup opens and the user can select a valid value:

The insanity that is Uber - a $100B company?

FeuerThoughts - Tue, 2014-11-25 20:55
So we've had taxis for years and we know that generally taxi drivers work hard, long hours and make small amounts of money. The cab companies make more, of course, but I don't think there are a whole lot of billionaires in the taxi business.

And now there is Uber. An earlier round of VC money put its value at $17B. According to Fortune, Uber is now "raising new funding at a valuation of between $35 billion and $40 billion, according to a new report from Bloomberg. This would be one of the richest “venture capital” rounds in history (Facebook still holds the crown), and likely mean that investors expect Uber to eventually go public at a valuation of at least $100 billion."
How are we to make any sense of this? Where would all the money come from to make all these investors (and shareholders) rich?
By cutting out the "middleman" (regulation to ensure safe rides, primarily)? Maybe, but I can't imagine it will generate that much revenue?
By reducing the cost of a ride, compared to a taxi? That's true, apparently, some of the time with Uber, but often it is way MORE expensive - because prices are "market-driven."
By shifting more and more of the costs and risks to the drivers? That's pretty darn likely. Just look at the poor "contractors" who have to pay for their trucks and lease their gear from FedEx. 
By shifting riders from mass transit to Uber (in other words, greatly expanding the "pie" of pay-per-ride)? Again, that seems unlikely.
What am I missing? How could Uber replace an existing business that brings in nowhere near that much money and suddenly be printing the stuff?
Oh, and that's if they don't self-destruct due to the cavalier, arrogant attitudes and actions of their management.
Categories: Development

Cooking The Bird

Floyd Teter - Tue, 2014-11-25 19:08
So this has very little to do with Oracle, but it’s the big thing everyone has been asking me about over the past two or three weeks.  If you’re a vegan or a vegetarian, you may want to stop reading right now.

Thanksgiving is coming up here in the States.  It’s a big deal in terms of remembering what to be thankful about.  It’s also a big deal in terms of cooking, especially cooking turkey.  Y’all asked for it, so here it is:  this is my best recipe for cooking a Thanksgiving turkey.  This will make about 18 servings.

Orange Brine
---------------

  • 6 cups water
  • 2 cups kosher or sea salt, or one cup table salt
  • 1 cup white sugar or 2 cups brown sugar
  • 2 oranges, quartered
  • 1 tablespoon of whole cloves
  • 3 bay leaves
  • 2 teaspoons whole peppercorns
  • 1 Whole Turkey, 12 to 14 pounds, thawed
  • 1 Tablespoon Extra Virgin Olive Oil

Maple Glaze
---------------
  • 1/2 cup pure maple syrup
  • 1 small orange, juiced
  • 1 teaspoon of pure vanilla extract


  • In a large saucepan over high heat, bring the water, salt and sugar to a boil.  Be sure you’re stirring to dissolve the salt and sugar.  Once boiling, turn off heat and let cool to room temperature.
  • In a 3-gallon food-safe container (I use a food storage bucket or a camping cooler), combine one gallon of water with the oranges, cloves, bay leaves and peppercorns.  Add the sugar-salt solution and stir.
  • Congrats!  You’ve made the brine!
  • Remove the giblets and neck from the turkey (I keep ‘em around for making gravy).  Remove excess fat and pat the turkey dry with paper towels.
  • Submerge the turkey in the brine.  Top it off with a weight if needed to keep it submerged.  If the turkey is a bit large, add more water.
  • Keep the turkey and brine in the refrigerator for 12 hours.
  • While the turkey is soaking in the brine, make your glaze.  Stir all three ingredients in a small bowl.  Cover and refrigerate until one hour before the turkey is done cooking.  Don’t worry, we’ll use this in a bit.
  • If you’re cooking in a smoker, load the smoker with apple or cherry wood and start your fire.  If you’re grilling, set up your grill for indirect medium heat (google this if you need instructions).  If you’re cooking in the oven, pre-heat to 325 degrees F.
  • Remove the turkey from the brine and pat it dry with paper towels.  Brush all over with a thin coating of the olive oil.  Do not season, as the brine soak took care of that.
  • Put your turkey in a large foil roasting pan.
  • Place the turkey, still in the roasting pan, in your smoker/grill/oven breast side up!!!  Close the lid or door and find something else to do - don’t peek.  If you’re grilling or oven roasting, plan to cook around 13 minutes per pound.  In a smoker, figure it closer to 30 minutes per pound - yeah, that’s a long cook, so plan accordingly.  NOTE:  if you’re grilling or oven roasting, you’ll likely miss out on most of the wood smoke flavor.  Grab some Liquid Smoke in Applewood flavor from the BBQ Sauce section of your local grocery - add in a teaspoon when you’re making the glaze; it’s not the same, but it’ll fool most people.  Just keep in mind that too much will make your turkey taste extremely bitter, so err on the light side.
  • Discard your brine.  You’re all done with it.
  • After two hours, begin basting with a combination of orange juice plus either water or apple juice (not both!); I prefer apple juice - more moistening and leaves no flavor behind, but to each his/her own. Baste every two hours until the glaze is applied.
  • One hour before the turkey is done cooking, remove the glaze from the refrigerator and let sit at room temp.
  • 30 minutes before your turkey is done cooking, remove the turkey from the foil pan.  See all the drippings in your pan?  That’s for the gravy.  Grab the pan and put the turkey, breast up, directly on the cooking grate.  Close the lid or door and go make your gravy.
  • When you grab the pan and drippings for the gravy, brush the glaze all over your turkey.
  • Your turkey is done when an instant-read thermometer inserted in the thickest part of the thigh reads 180 degrees F.  Don’t have an instant-read thermometer? Stick a toothpick in the thickest part of the thigh without touching the bone; remove the toothpick and inspect the juices running out of the hole; your turkey is done when the juices run clear.
  • When your turkey is done, remove from the heat to a platter and (very important) let it stand for 20 minutes at room temp before carving!
  • But wait, you say, what about stuffing???  Truth is, using this recipe, the inside of the bird will never get hot enough to entirely cook the stuffing.  I cook my stuffing in an aluminum pan on top of the stove…usually add a teaspoon or two of the drippings while it’s cooking.  If I want the bird stuffed, I’ll stuff it while it’s standing after the cooking is done…yeah, I normally don’t do this…never have heard any complaints.
  • One note about something everyone fusses over:  carving.  I use an odd technique taught to me by a professional butcher - it keeps the meat juicier, avoids shredding the meat, and makes the overall presentation much better.  First, remove the entire breast from the bone in one large cut.  Cut across the breast to make crescent-shaped pieces.  Move the pieces to the serving platter as a whole breast, then cut off and add the drumsticks, thighs and wings.

So there ya go!  We'll get back to the Oracle stuff next week.  In the meantime, enjoy and have a happy Thanksgiving!

Update at Windsor's request:


Pythian at UKOUG 14

Pythian Group - Tue, 2014-11-25 13:39

Will you be joining us at the UKOUG Conference and Exhibition in Liverpool, UK? Over 200 world-class speakers and industry experts will be in attendance, including some of our very own.

Michael Abbey, Oracle ACE and Team Lead at Pythian, notes that Pythian’s attendance is important, not only for the company and its employees, but for the Oracle community as a whole. “Pythian’s presence at UKOUG this year is the next chapter in an ongoing participation in database and EIS technical events around the world,” Michael explains. “We pride ourselves on presence on all seven continents and our appearances in Liverpool are strategic to the worldwide user community, as are all the locations we frequent every year. The user group community is one of the fundamental building blocks for technical resources as they hone their skills to better serve their clients and the masses in general. Since its founding, Pythian has been a strategic and financial support organization feeding talent to many of the world’s largest technical shows.”

“You will see a number of presenters from the Pythian suite of experts including Oracle ACEs, Oracle ACE Directors, and members of the OakTable Network. We look forward to catching up with new acquaintances and rekindling existing relationships.” In the meantime, you can find their speaking sessions down below.

UPDATE: Join us for an informal networking event alongside Rittman Mead on Monday December 8th during UKOUG. We will be discussing how to leverage data to drive your organization’s success. Come meet with peers and industry experts, Mark Rittman and Jon Mead of Rittman Mead, and Marc Fielding and Christo Kutrovsky of Pythian. The networking event will take place at PanAm Bar and Restaurant in Liverpool from 6-8 PM, and will include drinks and light refreshments.

Please be sure to RSVP to the event—we hope to see you there!


RMAN: The Necessary Basics

Presented by Michael Abbey | Monday December 8, 2014 — 9:00-9:50 AM

This session, best suited for attendees just getting started with RMAN, covers the basic skills needed to write consistent backups and perform recovery activities from day one. Highlighted in this masterclass will be the following: RMAN architecture, using a catalog, backups to disk, backups to tape, recovery (complete and incomplete), database duplication, and tips and tricks.

About Michael: Michael Abbey is a seasoned and experienced presenter on the Oracle Database CORE technology. He first cut his teeth on V3 in 1986 and it has been a whirlwind of Oracle since then. Michael co-authored the first work in the Oracle Press series in 1994.


Why Use OVM for Oracle Database?
Presented by Francisco Munoz Alvarez | Monday December 8, 2014 — 5:00-5:50 PM

A vibrant session about OVM that will explain how, when, and why to use this product for virtualization. It will also give an overview of how Revera is currently using this product in NZ (the biggest OVM farm in the ANZ region) and show benchmark results comparing bare metal, OVM, and ESX, concluding with some tips and showing the scalability and breaking point under load of the virtualization solutions.

Come and discover the answers for the following questions:

  • Does an Oracle Database perform well on a virtualized environment?
  • What virtualization technology is more stable and allows an Oracle database to perform faster?
  • What is the performance difference between using a bare metal and a virtualized guest?
  • Is it safe to run a production database in a virtualized environment?

About Francisco:  Working out of Pythian’s Australian office in Macquarie Park, Francisco Munoz Alvarez is Vice President and Managing Director of Service Delivery in Asia Pacific, overseeing its regional expansion effort. Francisco previously served as Chief Technology Officer at Database Integrated Solutions Ltd and has more than two decades’ experience in Oracle databases. As President of the Chilean Oracle Users Group and founder of the Oracle Technology Network (OTN) tours in Latin America and Asia Pacific, he is best known for his evangelist work with the Oracle community.


Big Data with Exadata
Presented by Christo Kutrovsky | Tuesday December 9, 2014 — 5:30-6:20 PM

In this presentation, Oracle ACE Christo Kutrovsky will discuss common big data use cases and how they can be implemented efficiently with Exadata. Attendees will learn how Exadata actually delivers most of the benefits touted by newer big data technologies, and can often be the right platform for data scalability.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.


Measuring Performance in Oracle Solaris and Oracle Linux
Presented by Christo Kutrovsky | Wednesday December 10, 2014 —9:00-9:50 AM

You can’t improve what you can’t measure. If you want to get the most value from your database, you need to start with the basics: are you using your hardware and operating systems efficiently? Attend this session to learn how to measure system utilization in the Linux and Oracle Solaris operating systems and how to use this information for tuning and capacity planning.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.


Lessons Learned in Implementing Oracle Access Manager 11g with Forms, Reports, and Discoverer
Presented by Sudeep Raj and Maris Elsins | Wednesday December 10, 2014 — 12:30-1:20 PM

Support for SSO ended in December 2011. To take advantage of the latest security enhancements, customers are always recommended to upgrade to the latest and greatest version of the product, i.e. OAM/OID 11g; it is very important to stay on supported configurations. This session will give you an opportunity to understand how OAM 11g can be configured with Forms/Reports/Discoverer 11g and integrated with an external directory service like Microsoft AD, and will discuss upgrade considerations for customers planning to upgrade from 10g SSO/OID to OAM 11g and OID 11g.

About Sudeep: Sudeep Raj is a Team Lead/Oracle Applications Database Consultant at Pythian, managing a group of expert DBAs spread across the globe. With nearly a decade of experience as an Apps DBA, he has been involved in and led multiple Oracle E-Business Suite 11i/R12 implementation, maintenance, migration, and upgrade projects. Sudeep is an OCP certified professional and holds a Bachelor of Engineering degree in Computer Science.

About Maris: Recently awarded the Oracle ACE designation, Maris Elsins is a Lead Database Consultant at Pythian. He is a blogger and frequent speaker at many Oracle related conferences like Collaborate, UKOUG, and LVOUG where he is a board member. Maris is an exceptional trouble shooter and enjoys learning why things behave the way they do.


Optimizing and Simplifying Complex SQL with Advanced Grouping
Presented by Jared Still | Wednesday December 10, 2014 — 3:30-4:20 PM

This presentation will show how Oracle’s advanced grouping features can be used to simplify SQL that was previously quite complex by reducing the amount of code needed, improving readability and, perhaps most importantly, greatly optimizing the performance of SQL statements.

About Jared: Jared Still is a Senior Database Consultant at Pythian. His experience includes working with Oracle databases beginning with version 7.0. While Oracle has expanded to encompass many aspects of the application environment, Jared’s focus has been on the database itself and related infrastructure.


Oracle RAC — Designing Applications for Scalability
Presented by Christo Kutrovsky | Wednesday December 10, 2014 — 3:30-4:20 PM

Oracle Real Application Clusters (RAC) promises 100% transparent active-active clustering technology – true horizontal scaling, but does it work in all cases? This presentation explores the challenges with Oracle’s active-active solution and how to solve them from both database side and application side. Both conceptual design and highly practical solutions are explored.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.


Database as a Service on the Oracle Database Appliance Platform
Presented by Marc Fielding and Maris Elsins | Wednesday December 10, 2014 — 3:30-4:20 PM

Oracle Database Appliance provides a robust, highly-available, cost-effective, and surprisingly scalable platform for database as a service environment. By leveraging Oracle Enterprise Manager’s self-service features, databases can be provisioned on a self-service basis to a cluster of Oracle Database Appliance machines. Discover how multiple ODA devices can be managed together to provide both high availability and incremental, cost-effective scalability. Hear real-world lessons learned from successful database consolidation implementations.

About Marc: Marc Fielding is a passionate and creative problem solver, drawing on deep understanding of the full enterprise application stack to identify the root cause of problems, and to implement effective and sustainable solutions. He has extensive experience implementing Oracle’s engineered system portfolio, including leading one of the first enterprise Oracle Exadata implementations. Marc has a strong background in performance tuning and high availability.

About Maris: Recently awarded the Oracle ACE designation, Maris Elsins is a Lead Database Consultant at Pythian. He is a blogger and frequent speaker at many Oracle related conferences like Collaborate, UKOUG, and LVOUG where he is a board member. Maris is an exceptional trouble shooter and enjoys learning why things behave the way they do.


Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Oracle expertise or read some of our Oracle-related blog posts.

Categories: DBA Blogs

Holiday Sales by category

Nilesh Jethwa - Tue, 2014-11-25 13:26


Big Data... Is Hadoop the good way to start?

Tugdual Grall - Tue, 2014-11-25 09:27
In the past 2 years, I have met many developers and architects who are working on “big data” projects. This sounds amazing, but quite often the truth is not that amazing. TL;DR: You believe that you have a big data project? Do not start with the installation of a Hadoop cluster -- the "how". Start by talking to business people to understand their problem -- the "why". Understand the data you must…

Why we won’t need a PeopleSoft v9.3

Duncan Davies - Tue, 2014-11-25 09:00

I caught up with Paco Aubrejuan’s “PeopleSoft Townhall” webinar from Quest the other day. Paco is Senior VP of Development for the PeopleSoft product line and it was a really interesting listen. The session can be found here, although you need to sign up with Quest to view it. It’s an hour long, and he discusses the future direction of the PeopleSoft product family, the new simplified and mobile user experience for PeopleSoft, the new Fluid User Interface (UI), and the delivery model of more frequent, customer-driven product enhancements enabled by PeopleSoft Update Manager.

Most interestingly for me, though, was the Q&A section at the end. Paco tackled the v9.3 question head on. I’ve transcribed his words, and I think it’s a strong and positive message for those with an interest in the PeopleSoft product line. Here are the ‘best bits’:

On PUM:

We’re calling our model PeopleSoft Selective Adoption … and let me be specific about what it means, we’re going to deliver new capabilities about 2 to 3 times a year (and may deliver some functionality more frequent than that). Once you’re on 9.2 you can get this functionality without upgrading ever.

On PeopleSoft v9.3:

Should I upgrade to PeopleSoft 9.2 or should I wait for 9.3? There is no 9.3. We don’t have a 9.3 codeline, there’s no 9.3 plan, our plan is to never do a 9.3 and we’re going to continuously deliver on 9.2 using the PeopleSoft Selective Adoption and so you should not be waiting for a 9.3. … We’re just going to continue extending the timelines for PeopleSoft 9.2 so the idea is that there is no more upgrade and premier support will just continue.

On why a 9.3 isn’t needed:

The risk we take with saying that there’s no 9.3 is that people read into that and say that PeopleSoft is dead. … That’s not true. The investment level that we’re making in the product does not change with this delivery model at all. … We’re delivering all the Fluid functionality without a new release. We’ve never done that before. The only thing that this is comparable to is the 8.0 version when we moved from client-server to the internet, and that was a major release. We’re now doing something equivalent to that without even a minor release. It’s now just selective features that you can take as long as you’re on 8.54. So PeopleSoft is not dead, and having no PeopleSoft 9.3 does not mean that PeopleSoft is dead.

So, we now have a definitive answer to the v9.3 question. I think it’s a strong and positive message which is backed up with evidence of the investment that Oracle are putting in to the product family, and a nod to the fact that PeopleSoft is adapting its model to the changing needs of the customer.


Using DB Adapter to connect to DB2 on AS400

Darwin IT - Tue, 2014-11-25 04:37
In my current project I need to connect to a DB2 database on an AS400. Doing so is no rocket science, but it's not exactly an NNF (Next-Next-Finish) configuration.

First you need to download the IBM JDBC driver for DB2, which is open source: download the jt400.jar from http://jt400.sourceforge.net/. Place it in a folder on your server. Since it's not an Oracle driver, I don't like to have it placed in the Oracle Home, so I put it in a separate lib folder where it is recognisable - create a logical location where you place other shared libs as well.

There are several methods to add the lib to your weblogic class path. What worked for me was to add it to the 'setDomainEnv.cmd'/'setDomainEnv.sh' file in the domain home.

(The Default Domain of the integrated weblogic of JDeveloper 12.1.3 under Windows can be found in: “c:\Users\%USER%\AppData\Roaming\JDeveloper\system12.1.3.0.41.140521.1008\DefaultDomain”)
Search for the keyword ‘POST_CLASSPATH’ and add the following at the end of the list of POST_CLASSPATH-additions:
set POST_CLASSPATH=c:\Oracle\lib\jtopen_8_3\lib\jt400.jar;%POST_CLASSPATH%
Where 'c:\Oracle\lib\jtopen_8_3' was the folder where I put it under windows. Then restart your server(s), and create a DataSource. For 'Database Type' as well as for 'Driver' choose 'Other' in the wizard. Then for the following fields enter the corresponding values in the given format (see also the doc.):
  • URL: jdbc:as400://hostname/Schema-Name;translate binary=true
  • Driver Class: com.ibm.as400.access.AS400JDBCDriver (or com.ibm.as400.access.AS400JDBCXADataSource for an XA data source)
  • Driver Jar: jt400.jar (or jt400Native.jar)

Since in our case the database apparently has a time-out (I don't know if this is default behaviour with DB2 on AS400), I put a one-row query in the Test Table Name field. And I checked the Test Connections On Reserve checkbox, because I don't know the time-out frequency.
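For DB2, a common one-row query to use there (assuming the standard catalog dummy table, which should also be available on DB2 for i) is:

select 1 from sysibm.sysdummy1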

A description of configuring the library and connection in JDeveloper and the DBAdapter can be found in section 9.6.2 of this doc.

Having set up the DataSource in WebLogic, you can register it in the Database Adapter. Besides providing the DataSourceName or XADataSourceName, you should adapt the PlatformClassName:

The default is 'org.eclipse.persistence.platform.database.oracle.Oracle10Platform' (it only now strikes me that it contains 'org.eclipse.persistence' in the package name). Leaving it like this could have you running into the exception:
ConnectionFactory property platformClassName was set to org.eclipse.persistence.platform.database.oracle.Oracle10Platform but the database you are connecting to is DB2 UDB for AS/400
For DB2 on AS/400, the value should be: 'oracle.tip.adapter.db.toplinkext.DB2AS400Platform', see the docs here.

Fabric8 Gateway for the Unified Push Server

Matthias Wessendorf - Tue, 2014-11-25 03:28

If you want to run the Unified Push Server behind a firewall, you still need to expose the RESTful endpoints that are accessed from the mobile apps running on the different devices.

With the help of the Fabric8 Gateway Servlet this is a fairly simple task!

I have created such a gateway that only exposes the above URLs, nothing else. Check out the repository on GitHub!

Have fun!


SQL Server tips: Executing a query with the EXECUTE command

Yann Neuhaus - Mon, 2014-11-24 22:52

This short SQL Server blog post is meant to help people who have experienced the error messages 2812 and 203 with the EXECUTE command.

Parsing blocks stats blocks parsing

Bobby Durrett's DBA Blog - Mon, 2014-11-24 13:33

I had a five-minute conversation with Oracle development Friday that rocked my world.  I found out that parsing blocks stats which blocks parsing.

We have a system with queries that are taking minutes to parse.  These queries include the main tables on our database, one of which is interval partitioned and has 20,000 sub-partitions.  We have seen a situation where Oracle’s delivered statistics gathering job hangs on a library cache lock waiting for a query to finish parsing.  But, much worse than that, we find most queries on our system hanging on library cache lock waits blocked by the statistics job.

We have an SR open on this situation because it seems to be a bug, but on Friday someone from Oracle development explained on the phone that this parse blocks stats blocks parse situation is normal.  Later, after I got off the phone, I built a simple test case proving that what he said was true.  I took a query that took a long time to parse in production and ran it on our development database, where it took 16 seconds to parse.  I chose the smallest table that the query included and gathered stats on it.  The stats ran in a fraction of a second when run by themselves, but if I started the long parsing query in one window and ran the stats in another window, the stats hung on a library cache lock wait for 15 seconds.  Then I created a trivial query against the same small table I had gathered stats on.  The query ran instantly by itself.  But if I ran the long parsing query first, kicked off the stats which hung on the lock, and then kicked off the short query against the table whose stats were being gathered, the short query hung on a library cache lock also.  This example convinced me that the parse blocks stats blocks parse chain was real.

This morning I built a standalone test case that others can run to prove this out on their databases: zip of testcase.  To run the test case you need three windows where you can run three sqlplus scripts in rapid succession.  In one window, first just run tables.sql to create the test tables.  Then run these three scripts one after the other, each in its own window, to create the three-link chain: chain1.sql, chain2.sql, chain3.sql.  Chain1.sql has the explain plan of a query that takes a long time to parse.  Chain2.sql gathers stats on one table.  Chain3.sql runs a simple query against the table whose stats are being gathered.  Chain1 spends all of its time on the CPU doing the parse.  Chain2 and 3 spend all of their time on library cache lock waits.

First I created two tables:

create table t1 as select * from dba_tables;
create table t2 as select * from dba_tables;

Next I kicked off the explain plan that takes a long time to run.  It joined 100 tables together:

explain plan into plan_table for 
select 
count(*)
from
     t1,
     t2,
     t2 t3,
...
     t2 t100
where
  t1.owner=t2.owner and
...
  t1.owner=t100.owner
/

This explain plan ran for 26 seconds, almost all of which was CPU:

Elapsed: 00:00:26.90

...

CPU in seconds
--------------
         26.81

Right after I kicked off the explain plan I kicked off this statement which gathered stats on the first table in the from clause:

execute dbms_stats.gather_table_stats(NULL,'T1');

This ran for 25 seconds and almost all of the time was spent on a library cache lock wait:

Elapsed: 00:00:25.77

...

Library cache lock in seconds
-----------------------------
                        25.55

Right after I kicked off the gather table stats command I ran this simple query making sure that it was unique and required a hard parse:

select /* comment to force hard parse */ count(*) from T1;

This ran for 24 seconds and almost all of the time was spent on a library cache lock wait:

Elapsed: 00:00:24.48

...

Library cache lock in seconds
-----------------------------
                        24.48

Evidently when a session parses a query it needs to obtain a shared lock on every table that the query includes.  When you gather statistics on a table you need to obtain an exclusive lock on the table, even if you are gathering statistics on one partition or sub-partition of the table.  While the statistics gathering session waits to acquire an exclusive lock any new parses that include the same table will hang.
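As a quick sketch for spotting such a chain while it is happening (a plain V$SESSION query; these columns are standard, but check your version):

select sid, serial#, event, blocking_session, seconds_in_wait
from   v$session
where  event = 'library cache lock';

The blocking_session column points at the session each waiter is queued behind - in the scenario above, the stats job, which is itself waiting on the long parse.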

Prior to Friday I did not think that there was any non-bug situation where gathering optimizer statistics would lock up sessions.  I thought that the only negative to gathering statistics at the same time as other application processing was that statistics gathering would compete for system resources such as CPU and I/O and possibly slow down application code.  But, now I know that gathering statistics can hang all queries that use the given table if stats gathering gets hung up waiting for a query that takes a long time to parse.

– Bobby

P.S. After reviewing the SR I wanted to understand what this parse blocks stats blocks parse looked like in a state dump.  The Oracle support analyst explained how the locks looked in a state dump that we uploaded but I didn’t get a chance to look at it closely until today.  I found the most important information in lines with the string “LibraryObjectLock” at the front of the line after some spaces or tabs.  There were three types of lines – the holding share lock, the waiting exclusive lock, and the many waiting share locks:

LibraryObjectLock:  Address=... Handle=0x5196c8908 Mode=S ...
LibraryObjectLock:  Address=... Handle=0x5196c8908 RequestMode=X ...
LibraryObjectLock:  Address=... Handle=0x5196c8908 RequestMode=S ...

The “…” indicates places I edited out other details.  The handle 0x5196c8908 identifies the table being locked.  The “Mode=S” string indicates a successful share lock of that table by the session with the long parse time.  The “RequestMode=X” was from the stats job trying to get exclusive access to the table.  The “RequestMode=S” was all the other sessions trying to get shared access to the table waiting for stats to first get exclusive access.  Anyway, I just wanted to translate what Oracle support told me into something that might be useful to others.  Plus I want to remember it myself!




Categories: DBA Blogs

Feeling trepidatious? Time to lay very low?

FeuerThoughts - Mon, 2014-11-24 12:34
Sure, "trepidatious" might not be a word, per se.

But I am confident it is something that more than one very famous male actor is feeling right now, as they watch Bill Cosby go down in flames.

As in: seriously and deeply apprehensive about what the future might bring.

There are a few things we can be sure of right now, even if Cosby never faces a judge or jury:

1. Bill Cosby is a nasty piece of work, and very likely (was) a pedophile.

The pattern of behavior, finally brought to light after years of self-censorship by victims and callous disregard by the media and judicial system, is overwhelming and seemingly never-ending. Mr. Cosby is a serial rapist, and he did it by drugging young women, some of them less than 18 years old at the time.

2. Bill Cosby is an actor. 

The roles he played were just that: roles. We are easily fooled into thinking of the people behind the roles as sharing characteristics of their characters, but that's just, well, foolish.

The whole point of being a great actor is that you can act really well. You can pretend to be someone else really convincingly. But they are still someone else and not the "real you."

3. Bill Cosby cannot be the only one.

That's where the trepidation comes in. Seriously, what's the chance that Cosby is the only famous, powerful, rich actor who has a long history of taking advantage of and raping women (and/or men, for that matter)?

There have got to be others, and they've got to be terrified that soon their victims will say "Enough!" and then the next deluge will begin.

So my advice to all those A-listers who are also serial rapists:

Lay low, lay really low. Do not provoke your victims. Do not laugh in their faces.

And then maybe you will be able to retire and fade into the sunset, so that your obituary will not be some variation of:

Funny Guy, Sure, But Also a Rapist
Categories: Development