
Feed aggregator

Partner Webcast – Oracle R Enterprise: Bringing the power of R to the enterprise

For those of you just getting interested in "Big Data" and/or "Advanced Analytics", the addition of R through Oracle R Enterprise could leave you wondering "What is R?" R is an...

We share our skills to maximize your revenue!
Categories: DBA Blogs

SQL Translation Framework in Oracle Database 12c

One of the new Oracle Database 12c features is the SQL Translation Framework. A key part of migrating non-Oracle databases to Oracle Database involves the conversion of non-Oracle SQL statements...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle SQL Profiles: Check what they do before accepting them blindly

Yann Neuhaus - Fri, 2014-11-07 16:28

People often ask me if I use the SQL Tuning Advisor. The answer is no, at least not all the way to the end of it. But don't get me wrong: SQL Tuning Advisor is a really nice feature. If you like the CBO then you must love the Tuning Advisor, because it is just the CBO with more time to optimize, and more ways to optimize. The fact is that when a customer calls me, they expect a bit more than running an advisor and implementing the result. I'm there to fix the root cause, not just to work around some missing stats. And this is why I use it very rarely.

However, when I have a big query with a plan that covers several pages, it takes a lot of time to find what is wrong. The method is often based on comparing the estimated cardinalities with the actual ones. But if I have access to the Tuning Pack, then the SQL Tuning Advisor can help me find very quickly where the estimations go wrong.

The SQL Tuning Advisor proposes a SQL Profile to adjust the estimations. Then I just have to check the biggest adjustments and I can focus on where the estimations are wrong. However, that information is not exposed: the SQL Tuning Advisor report shows the new plan, but not the way it gets to it.

The goal of this post is to give you the query I use to show exactly what the profile will implement when you accept it.

Even if you're going to accept the profile, it's a good idea to check it first. It will help you decide whether you need to enable 'force matching' or not, and whether it is a good fix or there are more sustainable ways to achieve the same result.

You probably know that a profile implements the estimation adjustment with OPT_ESTIMATE hints, which apply a 'scale_rows' factor to table cardinalities, index selectivity or joins. They are very well explained on the Pythian blog.
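
To give an idea of what they look like (the hint is undocumented, so treat this as an illustrative sketch: the query block names, aliases, index name and factors below are all made up), the main flavours are roughly:

-- Illustrative only - not generated by any real tuning task
OPT_ESTIMATE(@"SEL$1", TABLE, "E"@"SEL$1", SCALE_ROWS=4)                   -- scale a table's cardinality
OPT_ESTIMATE(@"SEL$1", INDEX_SCAN, "E"@"SEL$1", "EMP_IX", SCALE_ROWS=0.5)  -- scale an index selectivity
OPT_ESTIMATE(@"SEL$1", JOIN, ("E"@"SEL$1", "D"@"SEL$1"), SCALE_ROWS=10)    -- scale a join cardinality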

So, let's take an example. My favorite query to show bad estimations on the HR schema is:

alter session set current_schema=HR;
select distinct DEPARTMENT_NAME
 from DEPARTMENTS join EMPLOYEES using(DEPARTMENT_ID)
 where DEPARTMENT_NAME like '%ing' and SALARY>20000;
For the simple reason that I know that a lot of the department names end with 'ing' (Marketing, Purchasing, ...) but the CBO doesn't know that. And I can't give that information through column statistics or histograms:
SQL> select * from table( dbms_xplan.display_cursor(format=>'rowstats last ') );

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------
SQL_ID  4fz1vtn0w8aak, child number 0
-------------------------------------
select distinct DEPARTMENT_NAME  from DEPARTMENTS join EMPLOYEES
using(DEPARTMENT_ID)  where DEPARTMENT_NAME like '%ing' and SALARY>20000

Plan hash value: 3041748347

----------------------------------------------------------------------------------------------
| Id  | Operation                             | Name              | Starts | E-Rows | A-Rows |
----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                   |      1 |        |      0 |
|   1 |  HASH UNIQUE                          |                   |      1 |      1 |      0 |
|   2 |   NESTED LOOPS SEMI                   |                   |      1 |      1 |      0 |
|*  3 |    TABLE ACCESS FULL                  | DEPARTMENTS       |      1 |      1 |      7 |
|*  4 |    TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES         |      7 |      1 |      0 |
|*  5 |     INDEX RANGE SCAN                  | EMP_DEPARTMENT_IX |      7 |     10 |     55 |
----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("DEPARTMENTS"."DEPARTMENT_NAME" LIKE '%ing')
   4 - filter("EMPLOYEES"."SALARY">20000)
   5 - access("DEPARTMENTS"."DEPARTMENT_ID"="EMPLOYEES"."DEPARTMENT_ID")

Note
-----
   - this is an adaptive plan

Look at E-Rows and A-Rows: I have 7 departments ending with 'ing' but the optimizer thinks that there is only one. In 11g you need a profile to help the optimizer, or you're going into an expensive nested loop. This example has few rows, but imagine the consequences when a nested loop is chosen but must execute on millions of rows. In 12c - and if you are in Enterprise Edition - the adaptive plan will help to avoid that situation: as soon as a threshold is reached, another plan will be executed.
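
As a side note, if you want to see which operations an adaptive plan keeps in reserve, DBMS_XPLAN can display them as well - a minimal sketch for 12c (the ADAPTIVE format option shows the inactive plan rows too):

select * from table( dbms_xplan.display_cursor(format=>'adaptive') );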

But even with an adaptive plan, there may be a better plan that is possible only with accurate estimations. Let's see what the SQL Tuning Advisor will find.

Running SQL Tuning Advisor

I create and execute the tuning task:

begin
 dbms_output.put_line('task id: '||
 dbms_sqltune.create_tuning_task(
  task_name=>'dbiInSite',
  description=>'dbi InSite workshop Oracle Tuning',
  scope=>dbms_sqltune.scope_comprehensive,
  time_limit=>30,
  sql_id=>'4fz1vtn0w8aak'
  )
 );
 dbms_sqltune.execute_tuning_task('dbiInSite');
end;
/

And show the report:

SQL> set long 1000000 longc 1000000
SQL> select dbms_sqltune.report_tuning_task('dbiInSite') FROM dual;

DBMS_SQLTUNE.REPORT_TUNING_TASK('DBIINSITE')
-------------------------------------------------------------------------------
GENERAL INFORMATION SECTION
-------------------------------------------------------------------------------
Tuning Task Name   : dbiInSite
Tuning Task Owner  : SYS
Workload Type      : Single SQL Statement
Scope              : COMPREHENSIVE
Time Limit(seconds): 30
Completion Status  : COMPLETED
Started at         : 11/08/2014 00:03:22
Completed at       : 11/08/2014 00:03:23

-------------------------------------------------------------------------------
Schema Name: HR
SQL ID     : 4fz1vtn0w8aak
SQL Text   : select distinct DEPARTMENT_NAME
              from DEPARTMENTS join EMPLOYEES using(DEPARTMENT_ID)
              where DEPARTMENT_NAME like '%ing' and SALARY>20000

-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------

1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
  A potentially better execution plan was found for this statement.

  Recommendation (estimated benefit: 33.67%)
  ------------------------------------------
  - Consider accepting the recommended SQL profile.
    execute dbms_sqltune.accept_sql_profile(task_name => 'dbiInSite',
            task_owner => 'SYS', replace => TRUE);

  Validation results
  ------------------
  The SQL profile was tested by executing both its plan and the original plan
  and measuring their respective execution statistics. A plan may have been
  only partially executed if the other could be run to completion in less time.

                           Original Plan  With SQL Profile  % Improved
                           -------------  ----------------  ----------
  Completion Status:            COMPLETE          COMPLETE
  Elapsed Time (s):             .005964           .000177      97.03 %
  CPU Time (s):                 .005999             .0002      96.66 %
  User I/O Time (s):                  0                 0
  Buffer Gets:                       13                 9      30.76 %
  Physical Read Requests:             0                 0
  Physical Write Requests:            0                 0
  Physical Read Bytes:                0                 0
  Physical Write Bytes:               0                 0
  Rows Processed:                     0                 0
  Fetches:                            0                 0
  Executions:                         1                 1

  Notes
  -----
  1. Statistics for the original plan were averaged over 10 executions.
  2. Statistics for the SQL profile plan were averaged over 10 executions.

-------------------------------------------------------------------------------
EXPLAIN PLANS SECTION
-------------------------------------------------------------------------------

1- Original With Adjusted Cost
------------------------------
Plan hash value: 3041748347

-------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name              | Rows  | Bytes | Cost (%CPU)| 
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                   |     1 |    23 |     7  (15)| 
|   1 |  HASH UNIQUE                          |                   |     1 |    23 |     7  (15)| 
|   2 |   NESTED LOOPS SEMI                   |                   |     1 |    23 |     6   (0)| 
|*  3 |    TABLE ACCESS FULL                  | DEPARTMENTS       |     7 |   112 |     3   (0)| 
|*  4 |    TABLE ACCESS BY INDEX ROWID BATCHED| EMPLOYEES         |     1 |     7 |     1   (0)| 
|*  5 |     INDEX RANGE SCAN                  | EMP_DEPARTMENT_IX |    10 |       |     0   (0)| 
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("DEPARTMENTS"."DEPARTMENT_NAME" LIKE '%ing')
   4 - filter("EMPLOYEES"."SALARY">20000)
   5 - access("DEPARTMENTS"."DEPARTMENT_ID"="EMPLOYEES"."DEPARTMENT_ID")

2- Using SQL Profile
--------------------
Plan hash value: 2473492969

---------------------------------------------------------------------------------------------
| Id  | Operation                     | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |             |     1 |    23 |     5  (20)| 00:00:01 |
|   1 |  HASH UNIQUE                  |             |     1 |    23 |     5  (20)| 00:00:01 |
|   2 |   NESTED LOOPS                |             |     1 |    23 |     4   (0)| 00:00:01 |
|   3 |    NESTED LOOPS               |             |     1 |    23 |     4   (0)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL         | EMPLOYEES   |     1 |     7 |     3   (0)| 00:00:01 |
|*  5 |     INDEX UNIQUE SCAN         | DEPT_ID_PK  |     1 |       |     0   (0)| 00:00:01 |
|*  6 |    TABLE ACCESS BY INDEX ROWID| DEPARTMENTS |     1 |    16 |     1   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("EMPLOYEES"."SALARY">20000)
   5 - access("DEPARTMENTS"."DEPARTMENT_ID"="EMPLOYEES"."DEPARTMENT_ID")
   6 - filter("DEPARTMENTS"."DEPARTMENT_NAME" LIKE '%ing')

-------------------------------------------------------------------------------

So even if I have an adaptive plan, a better plan is possible. When the optimizer knows that there are more rows from DEPARTMENTS than from EMPLOYEES, it's better to start with EMPLOYEES, for a nested loop as well as for a hash join.

Showing the OPT_ESTIMATE hints

I don't want to accept the profile yet, but I do want to see all those OPT_ESTIMATE hints that have been determined by the SQL Tuning Advisor and that will be added to the query when the profile is accepted. Here is my script. Note that this script is for 11g and 12c; in 10g the information was stored elsewhere. You can go to Jonathan Lewis's post for the 10g query.

set serveroutput on echo off
declare
  -- input variables
  input_task_owner dba_advisor_tasks.owner%type:='SYS';
  input_task_name dba_advisor_tasks.task_name%type:='dbiInSite';
  input_show_outline boolean:=false;
  -- local variables
  task_id  dba_advisor_tasks.task_id%type;
  outline_data xmltype;
  benefit number;
begin
  for o in ( select * from dba_advisor_objects where owner=input_task_owner and task_name=input_task_name and type='SQL')
  loop
          -- get the profile hints (opt_estimate)
          dbms_output.put_line('--- PROFILE HINTS from '||o.task_name||' ('||o.object_id||') statement '||o.attr1||':');
          dbms_output.put_line('/*+');
          for r in (
            select hint,benefit from (
             select case when attr5 like 'OPT_ESTIMATE%' then cast(attr5 as varchar2(4000)) when attr1 like 'OPT_ESTIMATE%' then attr1 end hint,benefit
             from dba_advisor_recommendations t join dba_advisor_rationale r using (task_id,rec_id)
             where t.owner=o.owner and t.task_name = o.task_name and r.object_id=o.object_id and t.type='SQL PROFILE'
             and r.message='This attribute adjusts optimizer estimates.'
            ) order by to_number(regexp_replace(hint,'^.*=([0-9.]+)[^0-9].*$','\1'))
          ) loop
           dbms_output.put_line('   '||r.hint); benefit:=to_number(r.benefit)/100;
          end loop;
          dbms_output.put_line('*/');
          -- get the outline hints
          begin
          select outline_data into outline_data from (
              select case when other_xml is not null then extract(xmltype(other_xml),'/*/outline_data/hint') end outline_data
              from dba_advisor_tasks t join dba_sqltune_plans p using (task_id)
              where t.owner=o.owner and t.task_name = o.task_name and p.object_id=o.object_id  and t.advisor_name='SQL Tuning Advisor' --11gonly-- and execution_type='TUNE SQL'
              and p.attribute='Using SQL profile'
          ) where outline_data is not null;
          exception when no_data_found then null;
          end;
          exit when not input_show_outline;
          dbms_output.put_line('--- OUTLINE HINTS from '||o.task_name||' ('||o.object_id||') statement '||o.attr1||':');
          dbms_output.put_line('/*+');
          for r in (
              select (extractvalue(value(d), '/hint')) hint from table(xmlsequence(extract( outline_data , '/'))) d
          ) loop
           dbms_output.put_line('   '||r.hint);
          end loop;
          dbms_output.put_line('*/');
          dbms_output.put_line('--- Benefit: '||to_char(to_number(benefit),'FM99.99')||'%');
  end loop;
  dbms_output.put_line('');
end;
/


And here is the output:

--- PROFILE HINTS from dbiInSite (1) statement 4fz1vtn0w8aak:
/*+
OPT_ESTIMATE(@"SEL$2CBA5DDD", TABLE, "EMPLOYEES"@"SEL$1", SCALE_ROWS=2)
OPT_ESTIMATE(@"SEL$58A6D7F6", TABLE, "EMPLOYEES"@"SEL$1", SCALE_ROWS=2)
OPT_ESTIMATE(@"SEL$6AE97DF7", TABLE, "DEPARTMENTS"@"SEL$1", SCALE_ROWS=5.185185185)
OPT_ESTIMATE(@"SEL$58A6D7F6", TABLE, "DEPARTMENTS"@"SEL$1", SCALE_ROWS=5.185185185)
*/

PL/SQL procedure successfully completed.

This is very interesting information. It says that the actual number of employees in this query (with that specific where clause) is 2 times what is estimated from statistics, and that the actual number of departments is about 5 times what is estimated.

So what is it used for?

That gives me several ways to improve, even without implementing the profile.

First, the estimation of the number of employees for the 'SALARY>20000' predicate can be improved with histograms. That is better than a profile because it will improve all queries that filter on employee salary.
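
For example, a histogram on SALARY can be gathered like this (a minimal sketch - adjust the owner, table and bucket count to your case):

begin
  dbms_stats.gather_table_stats(
    ownname    => 'HR',
    tabname    => 'EMPLOYEES',
    method_opt => 'for columns SALARY size 254'
  );
end;
/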

Then, for DEPARTMENTS, histograms will not help because the bad estimation comes from the LIKE '%ing' predicate and I have no way to give that information with statistics. Ok, let's go for the profile. If you want to implement the profile, will you choose 'force matching' or not? Of course not: the estimation adjustment makes sense only with our '%ing' values. This is where looking at the OPT_ESTIMATE hints is very important, or you can't make the right choice.
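
If I do accept it, the call would look something like this - a sketch only, where the profile name is arbitrary and force_match is deliberately left at FALSE so that the adjustment applies to this literal text only:

begin
  dbms_sqltune.accept_sql_profile(
    task_name   => 'dbiInSite',
    task_owner  => 'SYS',
    name        => 'dbiInSite_profile',  -- arbitrary name for the profile
    force_match => FALSE,                -- do not generalize over literal values
    replace     => TRUE
  );
end;
/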

And you may choose something other than a profile. Dynamic sampling makes sense in this case. Another solution may be to add OPT_ESTIMATE or CARDINALITY hints to the query.
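
For example, a statement-level dynamic sampling hint lets the optimizer sample the tables at parse time and measure the '%ing' selectivity itself - a quick sketch (the level is a matter of taste):

select /*+ dynamic_sampling(4) */ distinct DEPARTMENT_NAME
  from DEPARTMENTS join EMPLOYEES using(DEPARTMENT_ID)
 where DEPARTMENT_NAME like '%ing' and SALARY>20000;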

Conclusion

SQL Tuning Advisor is powerful when used intelligently. It gives ideas about what is wrong and proposes a way to fix it. But you can get more by retrieving the internal hints that the profile would generate: a better-informed choice about implementing the profile, or alternative solutions. As usual, if you see something wrong or something to improve in my query, please comment.

Quizz

The question is: in the output above, I have two OPT_ESTIMATE hints for each table. What is the reason for that?
The answer is left for another post, but if you have an idea, please don't hesitate to comment.

Emphasis on Practical Usability Research at HFES Annual Meeting in Chicago

Usable Apps - Fri, 2014-11-07 15:16

By Anna Wichansky

Senior Director, Oracle Applications User Experience

HFES 2014 Meeting in Chicago

The 2014 International Annual Meeting of the Human Factors and Ergonomics Society (HFES) was recently held in Chicago, on October 27-31, 2014. This conference deals with all the latest research and issues in the field of human factors, the study of human-machine systems. Some 1450 professionals in human factors, user experience (UX), and related fields attended the event.

Anna Wichansky and Ultan O’Broin (@usableapps) of the Oracle Applications User Experience (OAUX) team presented a half-day workshop on How to Create User Requirements for Software to HFES members and students, including industry consultants and end-user customers. This is their third presentation of the workshop, which features a highly interactive format with small groups engaged in hands-on role-playing exercises.

In this unique workshop, students prepared a business case about a fictitious auto parts company requiring a financial software upgrade to a more efficient, effective, and satisfying application. They worked in small groups and played the roles of UX consultant, user, and stakeholders in the implementation. Ultan and Anna facilitated the groups, played stakeholder roles as needed, and presented relevant concepts and methods for setting UX requirements based on the NIST IR 7432 Common Industry Format for Requirements. Students left with a completed template of user requirements for the workshop business case.

The fundamental layer of user requirements.

Understanding the context of use (the who, what, where, how, and with whom) dimension of software user requirements gathering is fundamental to a successful implementation. The HFES workshop explored context of use thoroughly as an underlying layer of the Common Industry Format for Usability Requirements.

In other highlights of the conference, Deborah Hersman, President and CEO of the U.S. National Safety Council and former head of the National Transportation Safety Board, gave an invited speech on the importance of human factors in promoting safety. One particular theme was computer-distracted operators of transportation vehicles. She related examples of the Northwest Airlines pilots who overflew their destination while reading rosters on a laptop, a texting engineer responsible for a train collision in Chatsworth, California, and the Delaware River tug boat mate in charge of towing a barge that collided with another vessel because he was distracted by his cell phone. Her clear message is that we need to use technology thoughtfully to ensure the benefits outweigh any detrimental effects. Automated cars, for example, could have many benefits in providing a very safe ride, possibly decreasing the effects of driver distraction, fatigue, and aging on highway accidents.

The fastest growing technical group in HFES is Healthcare, with many papers and sessions presented on testing medical devices, the design and human factors of electronic medical records, and online consumer information systems for patient compliance and support.

A symposium on research being conducted to support the NASA manned extra-planetary missions was also presented, with many relevant findings for life here on Earth, including the effects of sleep deprivation and sleep inertia (when you are suddenly awakened in the middle of sleep) on human performance.

BMW presented research on the optimal design for augmented displays in automated driving scenarios. The research found that drivers’ reactions to the displayed alerts and warnings as they attempted to avoid hazards in simulated driving tasks were often unpredictable, depending on features of the visual design.

About the Human Factors and Ergonomics Society 


The Human Factors and Ergonomics Society is a 4500-member professional organization dedicated to the study of human-machine systems. Anna Wichansky is a Fellow of the HFES.


Dropbox Database Infiltrated by Hackers [VIDEO]

Chris Foot - Fri, 2014-11-07 12:57

Transcript

While providing database security services to cloud storage providers is possible, many such companies aren't taking the necessary precautions to ensure customer data remains protected. 

According to 9 to 5 Mac, Dropbox recently announced that a database holding 7 million logins for its users was infiltrated. The environment was operated by a third party, which was hired by Dropbox to store its customer data. To the company's relief, many of the stolen passwords were outdated. 

While Dropbox is taking steps to mitigate the situation, the enterprise advised its customers to change their login information as an extra precaution. 

The best way to prevent breaches from occurring is to install automated intrusion detection software. In addition, regularly auditing existing systems for vulnerabilities is considered a best practice. 

Thanks for watching! 

The post Dropbox Database Infiltrated by Hackers [VIDEO] appeared first on Remote DBA Experts.

Quiz night

Jonathan Lewis - Fri, 2014-11-07 12:37

An email from Yves Colin (who'll be presenting on the Tuesday of UKOUG Tech14) prompted me to dig out a little script I wrote some years ago and re-run an old test, leading to this simple question: what's the largest array insert that Oracle will handle?

If you’re tempted to answer, watch out – it’s not exactly a trick question, but there is a bit of a catch.

Answer:

There is an internal limit of 255 on the size of array that Oracle can insert as a unit. I leave it as an exercise to the reader to decide whether or not this makes any significant difference to performance, since the effects of row size, number of indexes maintained, and disk I/O requirements may make the effects of the limit virtually undetectable. To demonstrate the limit all we need do is insert a few hundred rows into a table and do a block dump, optionally followed by a dump of the redo log.


create table t1 (n1 number not null) segment creation immediate;

alter system switch logfile;

insert into t1 select rownum from all_objects where rownum <= 1000;
commit;

execute dump_log

start rowid_count t1

/*
   FILE_NO   BLOCK_NO ROWS_IN_BLOCK
---------- ---------- -------------
         5        180           660
         5        184           340

2 rows selected.


ROWS_IN_BLOCK     BLOCKS
------------- ----------
          340          1
          660          1

2 rows selected.
*/

alter system flush buffer_cache;

-- SQL> alter system dump datafile 5 block 180;
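
The dump_log and rowid_count calls above are the author's own helper scripts (described in the next paragraph) and are not included in the post; purely as an assumption, a rough sketch of what they might contain is:

create or replace procedure dump_log as
  v_logfile v$logfile.member%type;
begin
  -- pick one member of the current redo log group and dump it
  select f.member into v_logfile
    from v$log l join v$logfile f on f.group# = l.group#
   where l.status = 'CURRENT' and rownum = 1;
  execute immediate 'alter system dump logfile ''' || v_logfile || '''';
end;
/

-- rowid_count.sql: rows per block, then blocks per row-count (&1 is the table name)
select dbms_rowid.rowid_relative_fno(rowid) file_no,
       dbms_rowid.rowid_block_number(rowid) block_no,
       count(*) rows_in_block
  from &1
 group by dbms_rowid.rowid_relative_fno(rowid), dbms_rowid.rowid_block_number(rowid)
 order by 1, 2;

select rows_in_block, count(*) blocks
  from ( select dbms_rowid.rowid_block_number(rowid) block_no, count(*) rows_in_block
           from &1
          group by dbms_rowid.rowid_block_number(rowid) )
 group by rows_in_block
 order by 1;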

The dump_log procedure is simply a PL/SQL wrapper for a call to 'alter system dump logfile {current log}'; the script rowid_count.sql extracts the file and block numbers from the rowids in the given table and aggregates them in different ways. The reason for running the script is to find a table block with a lot of rows in it; the block I dumped actually held the first 660 rows of the insert. Here's a tiny extract from the block dump (with one little comment added):


tab 0, row 0, @0x1904			-- 6,400 dec
tl: 6 fb: --H-FL-- lb: 0x1  cc: 1
col  0: [ 2]  c1 02
tab 0, row 1, @0x190a
tl: 6 fb: --H-FL-- lb: 0x1  cc: 1
col  0: [ 2]  c1 03
tab 0, row 2, @0x1910
tl: 6 fb: --H-FL-- lb: 0x1  cc: 1
col  0: [ 2]  c1 04
tab 0, row 3, @0x1916
tl: 6 fb: --H-FL-- lb: 0x1  cc: 1

This shows the first four rows inserted, and you can see that the offset to the first row is 6,400 bytes, and then each subsequent row is 6 bytes further down the block (when the numbers get larger the row lengths will increase to 7 bytes). The positioning of these rows is, at first sight, a little odd – you might have guessed that they would either start at offset “zero” and work down to offset “8K”, or start at “8K” and work backwards up to zero – so why is the first row 3/4 of the way down the block?

Rather than show you more row dumps, I’ll give you a carefully selected extract from the row directory:


0x12:pri[0]     offs=0x1904
0x14:pri[1]     offs=0x190a

...

0x20c:pri[253]  offs=0x1f8a
0x20e:pri[254]  offs=0x1f91		End of first 255 rows
0x210:pri[255]  offs=0x120e		Start of second 255 rows
0x212:pri[256]  offs=0x1215

...

0x40a:pri[508]  offs=0x18f6
0x40c:pri[509]  offs=0x18fd		End of second 255 rows
0x40e:pri[510]  offs=0xdf5		Start of last 150 rows
0x410:pri[511]  offs=0xdfc

...

0x536:pri[658]  offs=0x1200		End of last 150 rows
0x538:pri[659]  offs=0x1207

The first 255 rows inserted are stacked at the bottom of the block at offsets 0x1904 to 0x1f91.
The second 255 rows inserted are stacked above them at offsets 0x120e to 0x18fd (note 0x18fd + 6 = 0x1903)
The last 150 rows inserted are stacked above them at offsets 0xdf5 to 0x1207 (note 0x1207 + 6 = 0x120d)

No matter how large your attempted array insert, the maximum number of rows (or index entries) Oracle can insert into a block in a single internal array operation is 255.

Further corroboration comes from the redo log dump – here’s a tiny bit of a single change vector (i.e. a single atomic change to a single Oracle block) from the redo generated while this insert was going on:


CHANGE #18 CON_ID:0 TYP:0 CLS:1 AFN:5 DBA:0x014000c7 OBJ:95876 SCN:0x0000.007528dd SEQ:1 OP:11.11 ENC:0 RBL:0 FLG:0x0000
KTB Redo
op: 0x01  ver: 0x01
compat bit: 4 (post-11) padding: 1
op: F  xid:  0x0007.007.000009f2    uba: 0x01000bda.02a9.14
KDO Op code: QMI row dependencies Disabled
  xtype: XA flags: 0x00000000  bdba: 0x014000c7  hdba: 0x01400083
itli: 1  ispac: 0  maxfr: 4858
tabn: 0 lock: 1 nrow: 255
slot[0]: 0
tl: 6 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 2]  c1 02
slot[1]: 1
tl: 6 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 2]  c1 03
slot[2]: 2
tl: 6 fb: --H-FL-- lb: 0x0  cc: 1
col  0: [ 2]  c1 04

Although the CON_ID in line 1 tells you this dump is from 12c, the same limit holds across all (modern) versions of Oracle. The operation (OP 11.11 – in line 1) is “Insert Multiple Rows”, and at line 9 you can see “nrow: 255”. I haven't included the rest of the change vector, but all it does is show the remaining 252 rows.


setting pctfree to 0 for regular indexes ... ( not familiar with this theory ) ... help!?

Grumpy old DBA - Fri, 2014-11-07 12:31
One of my coworkers has this theory ( habit ) of creating new regular indexes on regular table columns.  Not compressed indexes or anything special.

He currently believes (picked up from somewhere; not sure of the origin) that "if a table only gets inserts and deletes", any indexes created on that table should have pctfree set to 0.

I do not agree. In my opinion, one should stick with the tried-and-true default values here. I am also skeptical of this belief.

It seems to me like a very small potential gain in space usage, while creating potential problems down the road.

Even if this theory is somehow true (I don't think so), what happens in the future when something changes in the application and what one was initially told (only inserts and deletes) no longer holds?

Has anyone seen / heard / debunked this theory? Any idea where it comes from?

Thanks John   
Categories: DBA Blogs

Multi-Channel Service Delivery

WebCenter Team - Fri, 2014-11-07 10:18
By Mitchell Palski, Oracle WebCenter Sales Consultant
This week we’re happy to have Oracle WebCenter expert Mitchell Palski join us for a Q&A around Multi-Channel Service Delivery. 
Q: So Mitchell, what exactly do you mean by Multi-Channel Service Delivery?
A multi-channel service delivery approach allows your organization to provide the best all-around user experience across multiple communication channels. Increasingly powerful and user-friendly technologies are creating opportunities for organizations to offer new ways to interact with end users in order to respond to their needs more effectively. 
By taking advantage of the introduction of devices such as smartphones, interactive voice response systems, digital television, and self-service terminals, organizations can make use of multiple channels to increase end user participation.
Organizations should envision new forms of interaction that allow service providers to be as accessible and responsive as possible. Multichannel service delivery is the provision of public services by various means in an integrated and coordinated way. End users can make selections according to their specific needs and circumstances to receive consistent outcomes across channels. The result is in an increase in user satisfaction and a growing trust in the organizations they are doing business with.
Q: Why is it important in today’s business environment to provide a trustworthy multi-channel experience to customers, employees, partners and citizens?
The ability to deliver superior customer service and support across multiple channels isn't necessarily a requirement to be Best-in-Class. It is, however, a trait of leading organizations that are looking to lead their customers to newer media of information discovery and service interaction. What really differentiates leading organizations is investing in the consistency and effectiveness of a strong service experience regardless of the channel being leveraged. By architecting a service platform that is designed for multi-channel service delivery, you are setting your organization up for lower service costs, greater agent effectiveness and improved customer satisfaction.
We talked about trust being an important factor. In today’s world, ease of use and information security is more important than ever before. In parallel, the range of device types accessing our information systems is growing exponentially. User reluctance is certainly warranted, so in order to drive user participation with new systems an organization needs to address both ease of use AND information security. Building a platform that is consistent across all channels is critical as you engage new users. Perception is everything, and inconsistent user experiences inherently leave the perception that a system is unstable (even if that’s not the case). 
Q: You touched on some of the benefits of delivering a strong multi-channel service experience including lowering service costs and improving customer satisfaction. Are there other benefits you could highlight for our readers?
By providing a multi-channel delivery platform, you are positioning your organization to leverage that platform for its own benefit. Improved customer experience gives your organization an opportunity to:
  • Increase your sales conversion rate
  • Generate revenue through “convenience” fees
  • Improve effectiveness of organizational promotions

Q: So if someone was looking to dive in and begin delivering services across multiple channels, how would they get started and what technologies are available to them?
The Oracle solution for Multi-Channel service delivery is a combination of two Fusion Middleware products – Oracle WebCenter and Oracle Business Process Management. Oracle WebCenter is the next generation user engagement platform that allows for rapid development of responsive user interfaces. WebCenter provides organizations with a toolset to deliver role-based user experiences that are intuitive, social, and allow for easy content management. Oracle Business Process Management – also known as BPM – is the engine that drives self-service capabilities. BPM streamlines and automates existing processes into well-oiled workflows that rely on any combination of human and system-driven events to reach business goals. The integration between these products is seamless, leading implementers to stress-free implementations and faster time-to-production.

Thank you, Mitchell, for sharing your best practices on Multi-Channel Service Delivery. If you’d like to listen to a podcast on this topic, you can do so here.

Watch: The Most Underrated Features of SQL Server 2014 — Part 3

Pythian Group - Fri, 2014-11-07 10:02

Since its release back in April, SQL Server experts across the globe are becoming familiar with the top features in Microsoft SQL Server 2014—the In-Memory OLTP engine, the AlwaysOn enhancements, and more. But we couldn’t help but notice that there are a few features that aren’t getting the same attention. Warner Chaves, a Microsoft Certified Master and SQL Server Principal Consultant at Pythian has filmed a video series sharing the most underrated features of SQL Server 2014.

In his final video in the series, Warner discusses SQL Server 2014’s parallel implementation of the SELECT INTO statement. In previous versions of SQL Server, it was always a single-threaded operation, using only one CPU; the new feature, however, uses multiple threads. “This makes the operation a lot faster and a lot more efficient,” Warner says. Learn common use cases and how to enable the new feature by watching his video The Most Underrated Features of SQL Server 2014 — Part 3 down below.
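
For context, a minimal sketch of the kind of statement that benefits (the database and table names here are hypothetical; the feature requires database compatibility level 110 or higher):

-- Assumption: a SalesDB database with a large dbo.SalesOrders table
ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 120;  -- 110+ enables parallel SELECT INTO
GO
SELECT *
INTO   dbo.SalesOrders_Staging   -- the target heap can now be created and loaded by multiple threads
FROM   dbo.SalesOrders
WHERE  OrderDate >= '20140101';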

Watch the rest of the series here:

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Microsoft SQL Server expertise.

 

Categories: DBA Blogs

Log Buffer #396, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-11-07 08:52

This Log Buffer Edition lists some valuable blog posts from across the arenas of Oracle, SQL Server, and MySQL.

Oracle:

OOW Recap: Industrial Innovation with IoT, enabled by Oracle Fusion Middleware and JD Edwards.

PeopleSoft HCM 9.2 Update Image 9 is Now Available! – Simple and Intuitive User Interface on Multiple Devices.

API Catalog (OAC) allows organizations to easily build a catalog of their APIs to provide visibility to those APIs for application development.

Oracle Announces Oracle API Catalog to Simplify API Management.

Tools For Generating Consistent Loads.

SQL Server:

Is Database Mirroring Causing The Transaction Log To Fill Up?

Free SQL in the City Seminar Washington DC – Dec 5, 2014.

Stop Relying on Cloud File Stores as a Backup Strategy.

Stairway to SQL Server Indexes: Level 1, Introduction to Indexes.

SSRS – Report for Stored Procedure with Parameters.

MySQL:

Can you use the binary logs to undo a DELETE? Maybe, if you’re using ROW format logging. The difference between a delete and an insert event could be just one byte – the one that maps it as a DELETE_ROWS_EVENT or a WRITE_ROWS_EVENT.

Ubuntu 14.10 LAMP server tutorial with Apache 2, PHP 5 and MySQL (MariaDB).

Resizing the InnoDB Buffer Pool Online.

Inside Distributed Architectures: Seeing In-Flight Requests And Blockers in Realtime.

Stopping a Runaway Rollback due to Failing Mass Import or ALTER TABLE.

Categories: DBA Blogs

Pass Summit 2014: the beginning

Yann Neuhaus - Fri, 2014-11-07 08:05

Stéphane Haby and I are attending this amazing SQL Server community event for the first time. At 7:30 am, we were already at the registration desk in the conference center. They gave us a funny badge :-)

 

So, after breakfast time, we went to the keynote session.

 

During this session we listened to a speech by PASS President Thomas Larock. The PASS community keeps growing over time. The following picture gives you a good idea of the importance of this community this year:

 

Then, it was the turn of Microsoft to lay out their vision of the future data market. I won't hide from you that Microsoft will focus on Microsoft Azure, which will be a main component of the Microsoft data platform.

 

We had a presentation of different services like Azure DocumentDB (NoSQL), Azure HDInsight (with Hadoop), Polybase (bringing the relational world and Hadoop together), Azure Search (search as a service in Azure) and the PowerBI stack. So, among all this, what is the news about the SQL Server engine? Microsoft has announced some interesting features like stretch databases and the possibility to add nonclustered columnstore indexes to in-memory tables. The former consists of extending on-premise databases to Azure by storing hot business data on-premise and cold or closed business data in Azure DBs. The latter will provide the ability to run analytics queries concurrently with the OLTP workload in the same database.

This keynote was a good reminder of the importance of cloud capabilities and the hybrid perspective in future database architectures. Personally, I don’t think we can keep rejecting it, but maybe I should get a good prediction from Azure Machine Learning :-)

See you soon for the next sessions!

rowid of the last insert

Laurent Schneider - Fri, 2014-11-07 06:42

If you are looking for the result of the last insert, check the RETURNING INTO clause.

Be it an identity column, the rowid, or any expression, you can get it back:


SQL> var r varchar2(24)
SQL> var x number
SQL> var d varchar2(30)
SQL> insert into t values (default) 
  returning rowid,x,sysdate into :r,:x,:d;

1 row created.

SQL> print

R
--------------------
AAAaFTAAIAAAAILAAD

         X
----------
         6

D
--------------------------------
2014-11-07_13:33:03

It is documented in the SQL Reference. No need for PL/SQL here. It is very useful if you use sequences too.


SQL> insert into t(x) values (s.nextval) returning x into :x;

1 row created.

SQL> print x

         X
----------
         2

Logging Oracle Database Link Activity

A database link is a one-way connection between two databases. Starting with Oracle version 11.2.0.3, session information now reports additional details for sessions involving database links. As database links are often created between databases with different security profiles, it is important to log session activity that includes the details of the database link.

DBLINK_INFO returns the source of a database link.  Specifically, it returns a string of the form –

SOURCE_GLOBAL_NAME=dblink_src_global_name

DBLINK_NAME=dblink_name

SOURCE_AUDIT_SESSIONID=dblink_src_audit_sessionid

where:

  • dblink_src_global_name is the unique global name of the source database
  • dblink_name is the name of the database link on the source database
  • dblink_src_audit_sessionid is the audit session ID of the session on the source database that initiated the connection to the remote database using dblink_name

You can verify DBLINK_INFO in either of the following ways (a short query sketch follows the list) –

  • Oracle 12c provides a DBLINK_INFO column in SYS.UNIFIED_AUDIT_TRAIL.
  • SELECT SYS_CONTEXT('USERENV','DBLINK_INFO') FROM DUAL
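
For example (a minimal sketch - the second query assumes unified auditing is enabled and that at least one audited action arrived over a database link):

-- Run on the destination database, in a session that arrived over a database link
-- (returns NULL for ordinary local sessions)
SELECT SYS_CONTEXT('USERENV','DBLINK_INFO') AS dblink_info FROM dual;

-- 12c unified audit trail: audited actions that came in over a database link
SELECT event_timestamp, dbusername, action_name, dblink_info
  FROM sys.unified_audit_trail
 WHERE dblink_info IS NOT NULL
 ORDER BY event_timestamp;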

If you have questions, please contact us at info@integrigy.com.

Reference Tags: Auditing, Oracle Database
Categories: APPS Blogs, Security Blogs

Dynamic ADF Form Solution for Oracle BPM Process

Andrejus Baranovski - Fri, 2014-11-07 03:07
I was working recently on integrating ADF 11.1.2.4 and BPM 11.1.1.7 environments. This worked pretty well; all configuration was done based on my previous blog post on a similar topic, just for earlier ADF and BPM versions - BPM 11g R1 Worklist Integration (Hacking Experiment) Into Non-SOA ADF 11g R2 Domain. The goal of this post is to describe a dynamic ADF form approach associated with a BPM process human task.

The BPM payload contains an additional element called voInstanceName. This element is supposed to contain an ADF VO instance name; this instance will be used later to render the dynamic ADF UI for the currently selected task:


The sample application - adfbpm11gr4.zip - is based on two separate applications. One implements a simple BPM process with a payload including the ADF VO instance name, and the other implements a standalone dynamic ADF application with the BPM API.

There are two human tasks in the BPM process. Each of these human tasks accepts a payload element with the ADF VO instance name. This allows the ADF UI to be controlled from the BPM process, by providing the ADF VO instance name to be used in ADF. The first human task is set to work with the EmployeesView1 instance:


The second human task is set to work with a different ADF VO instance - EmployeesROView1:


I'm reading the voInstanceName payload variable value in the ADF application in order to construct the dynamic ADF UI accordingly: accessing the currently selected task by ID and retrieving the payload information for voInstanceName. Later this information will be used to construct the dynamic ADF iterator:


The value from the voInstanceName payload is used as a parameter for the ADF task flow rendering the dynamic ADF UI. The task flow is configured to refresh each time the parameter value changes - a new UI will be rendered for each newly selected task:


The ADF task flow checks whether the voInstanceName parameter is not empty (it could be empty if there are no tasks from BPM or the payload value was not set). If there is no value, the user will be redirected to an empty fragment:


The dynamic ADF UI is constructed in the bindings: instead of using a static ADF iterator, we are initialising the iterator from the task flow parameter value passed through voInstanceName. This is how we can switch between different VOs from ADF BC, depending on the selected BPM task:


The ADF UI is rendered with the dynamic ADF form component. There are several ways to implement a dynamic ADF UI: it can be all custom or use out-of-the-box ADF UI components (depending on use case requirements). Here I'm using the out-of-the-box dynamic component, because the UI is fairly simple:


This is how it looks. Initially there are no tasks and no dynamic ADF UI - a blank screen:


Start a new task in BPM; the task is started through the BPM API from the standalone ADF application:


The ADF UI form is rendered for the task. The user can select a task and then select an Employee by navigating with the Next/Previous buttons. As soon as an Employee is selected (ID: 103), the task can be submitted for further processing:


Log in with a different user - redsam2 - and you are going to see a new task in the inbox. The Employee data is rendered from a different VO instance and is read-only now. The Employee ID was taken from the payload; this is how the row with ID: 103 is selected (the same as selected in the previous step):


The Employee data in the second step is rendered from a different read-only VO, including the Job Title attribute:


There are two human task activities in the process; both are rendered with different dynamic ADF UIs.

Blog : Cloud Community France

Jean-Philippe Pinte - Fri, 2014-11-07 01:46
Interested in Oracle's Cloud solutions (IaaS & PaaS / public & private)?
Follow the blog: https://blogs.oracle.com/Cloud-Community-France/

Starting a Pivotal GemFireXD Distributed System from IntelliJ IDEA

Pas Apicella - Thu, 2014-11-06 21:34
The example below shows how you can start a Pivotal GemFireXD distributed system from IntelliJ IDEA. Here we will start a Locator with Pulse enabled, as well as one member. We use the following class methods to achieve this from an IDE such as IntelliJ:

FabricServiceManager.getFabricLocatorInstance()
FabricServiceManager.getFabricServerInstance()

1. Add the following to your maven POM file to ensure the correct libraries are present.
  
<dependency>
  <groupId>com.pivotal.gemfirexd</groupId>
  <artifactId>gemfirexd</artifactId>
  <version>1.3.0</version>
</dependency>
<dependency>
  <groupId>com.pivotal.gemfirexd</groupId>
  <artifactId>gemfirexd-client</artifactId>
  <version>1.3.0</version>
</dependency>
<dependency>
  <groupId>org.apache.tomcat.embed</groupId>
  <artifactId>tomcat-embed-core</artifactId>
  <version>8.0.14</version>
</dependency>
<dependency>
  <groupId>org.apache.tomcat.embed</groupId>
  <artifactId>tomcat-embed-logging-juli</artifactId>
  <version>8.0.14</version>
</dependency>

2. Create a start locator class as follows
  
package pivotal.au.gemfirexd.demos.startup;

import com.pivotal.gemfirexd.FabricLocator;
import com.pivotal.gemfirexd.FabricServiceManager;

import java.sql.SQLException;
import java.util.Properties;

/**
* Created by papicella on 4/11/2014.
*/
public class StartLocator1
{
    public static void main(String[] args) throws SQLException, InterruptedException
    {
        Properties serverProps = new Properties();
        serverProps.setProperty("sys-disk-dir", "./gfxd/locator1");
        serverProps.setProperty("server-bind-address", "localhost");
        serverProps.setProperty("jmx-manager-start", "true");
        serverProps.setProperty("jmx-manager-http-port", "7075");
        serverProps.setProperty("jmx-manager-bind-address", "localhost");

        FabricLocator locator = FabricServiceManager.getFabricLocatorInstance();
        locator.start("localhost", 41111, serverProps);

        locator.startNetworkServer("127.0.0.1", 1527, null);

        System.out.println("Locator started ... ");

        Object lock = new Object();
        synchronized (lock) {
            while (true) {
                lock.wait();
            }
        }
    }
}

3. Edit the run configuration to ensure you specify the GEMFIREXD ENV variable as shown below.

Note: This is needed to ensure Pulse can start when the locator starts


4. Create a start server class as follows.
  
package pivotal.au.gemfirexd.demos.startup;

import com.pivotal.gemfirexd.FabricServer;
import com.pivotal.gemfirexd.FabricServiceManager;

import java.sql.SQLException;
import java.util.Properties;

public class StartServer1
{
    public static void main(String[] args) throws SQLException, InterruptedException
    {
        FabricServer server = FabricServiceManager.getFabricServerInstance();

        Properties serverProps = new Properties();
        serverProps.setProperty("server-groups", "mygroup");
        serverProps.setProperty("persist-dd", "false");
        serverProps.setProperty("sys-disk-dir", "./gfxd/server1");
        serverProps.setProperty("host-data", "true");
        serverProps.setProperty("locators", "localhost[41111]");

        server.start(serverProps);

        server.startNetworkServer("127.0.0.1", 1528, null);

        Object lock = new Object();
        synchronized (lock) {
            while (true) {
                lock.wait();
            }
        }
    }
}

5. Start the locator by running "StartLocator1" class.


6. Start one server by running "StartServer1" class.


7. Connect to Pulse to verify you have a 2-node distributed system with one locator and one member.

Using URL: http://localhost:7075/pulse/Login.html




Categories: Fusion Middleware

Michael’s Keynote at Sakai Virtual Conference

Michael Feldstein - Thu, 2014-11-06 18:51

Michael is giving the keynote address at tomorrow’s (Friday, Nov 7) Sakai Virtual Conference #SakaiVC14. The virtual conference is only $50 registration, with more information and registration link here. The schedule at a glance is available as PDF here.

Michael’s keynote is at 10:00am EDT, titled “Re-Imagining Educational Content for a Digital World”. At 4:30pm, there will be a Q&A session based on the keynote.

The post Michael’s Keynote at Sakai Virtual Conference appeared first on e-Literate.

Partner Webcast – Oracle SOA 12c: BPM 12c integration with OEP 12c

Increased integration complexity may seem inevitable as organizations are suddenly faced with the requirement to support new integration challenges, such as Cloud, Mobile and Internet of Things....

We share our skills to maximize your revenue!
Categories: DBA Blogs

Oracle Priority Support Infogram for 6-NOV-2014

Oracle Infogram - Thu, 2014-11-06 15:39

RDBMS
Premier Support for Oracle 11.2 will end soon ...Upgrade to Oracle Database 12c now!, from Upgrade your Database - NOW!
Exalytics
How to identify the version on Exalytics T5-8, from Exalytics and OBIEE.
Exalogic
Oracle Traffic Director on Exalogic, from Oracle Exalogic.
SOA
From SOA & BPM Partner Community Blog:
What’s New in Oracle SOA Suite 12c?
SOA Suite 12c article series
Interoperability between Microsoft and SOA Suite 12c.
BI
Business Analytics Monthly Index - October 2014, from Business Analytics - Proactive Support.
WebLogic
WebLogic Partner Community Newsletter October 2014, from the WebLogic Partner Community EMEA blog.
Data Masking
Oracle OpenWorld 2014 Updates on Oracle Data Masking and Subsetting, from Oracle Data Masking and Subsetting.
ADF
Adding an ADF Human Task User Interface to our SOA Maven build, from SOA & BPM Partner Community Blog.
ADF 12.1.3 – Highlights in Data Visualization (DVT), from WebLogic Partner Community EMEA.
Mobile Computing
Options For Tailoring Your Mobile Apps, from Fusion Applications Developer Relations.
Java
Building a 3-tiered Application with Java EE, from The Java Source.
EBS
From Oracle E-Business Suite Technology:
Webcast: Empowering Users with Oracle EBS Endeca Extensions
Inventory Consignment Gets More Flexible In Release 12.2
Don't Miss This! Leverage Endeca extension with iProcurement to Increase Operating Profit
R12 : Important Enhancement ON-THE-FLY Upgrade of Historical SLA Distributions
Calling all E-Business Suite Customers!
Webcast: Oracle Receivables Posting & Reconciliation Process In R12
New Learning Solution for Oracle E-Business Suite
…and Finally
Some really useful Google search tips: 8 Tips For Google Search That Will Streamline Nearly Everything You Do.

Using tmux for semi-interactive demos

Yann Neuhaus - Thu, 2014-11-06 14:36

You are a speaker and like to show some demos. Which kind of demo do you do?

  • you script everything so that you just have to press 'enter' during the demo
  • you prepare everything in a text file and copy/paste during the demo
  • you type everything in live?