
DBA Blogs

Partner Webcast – Oracle NoSQL key-value database

The Oracle NoSQL Database is a horizontally scaled, Key-Value database for Web Services and Cloud, designed specifically to provide highly reliable, scalable and available data storage across a...

We share our skills to maximize your revenue!
Categories: DBA Blogs

The Hitchhiker’s Guide to the EXPLAIN PLAN: The story so far (Part 11–22)

Iggy Fernandez - Sat, 2014-09-27 17:20
On the Toad World site, I’m writing a series of blog posts and Wiki articles on the subject of EXPLAIN PLAN. I’m using EXPLAIN PLAN as a motif to teach not just SQL tuning but also relational theory, logical database design, and physical database design. In a year’s time, I hope to have enough material for […]
Categories: DBA Blogs

I Heart Logs - Event Data, Stream Processing, and Data Integration by Jay Kreps; O'Reilly Media

Surachart Opun - Fri, 2014-09-26 23:01
Having worked on the server side for a long time as a System Administrator, I spend a lot of time with logs, using them for checks and issue investigation. Some companies have policies requiring logs to be kept for a year or even ten years, so it is not unusual to look for ideas on how to store and integrate logs and do something useful with them.
The book titled "I Heart Logs - Event Data, Stream Processing, and Data Integration" by Jay Kreps is very interesting. I wanted to know what I could learn from it, how logs work in distributed systems, and to learn from an author who works at LinkedIn. It is not a big book in terms of page count, but it gives a lot in terms of ideas about data flow and how logs work, and the author shows readers why logs are worthy of their attention. In only four chapters, readers get the concepts and ideas behind data integration (making all of an organization's data easily available in all its storage and processing systems), real-time data processing (computing derived data streams) and distributed system design (how practical systems can be simplified with a log-centric design). I also liked that the author wrote from his own experience at LinkedIn.

After reviewing: the book refers to a lot of useful information (in the ebook it's easy to click the links), which readers can follow to find out more on the Internet. For data integration, it focuses on Kafka, a distributed, partitioned, replicated commit log service that provides the functionality of a messaging system. It also explains why the Big Data Lambda Architecture is good for combining a batch system with a stream processing system, and makes the point about the many things a log can do.

So, readers will be able to:
  • Learn how logs are used for programmatic access in databases and distributed systems
  • Discover solutions to the huge data integration problem when more data of more varieties meet more systems
  • Understand why logs are at the heart of real-time stream processing
  • Learn the role of a log in the internals of online data systems
  • Explore how Jay Kreps applies these ideas to his own work on data infrastructure systems at LinkedIn
Book: I Heart Logs - Event Data, Stream Processing, and Data Integration
Author: Jay Kreps
Written by: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Patch 19183482 resolves ORA-01403 getting plan with baseline

Bobby Durrett's DBA Blog - Fri, 2014-09-26 14:51

I was testing SQL plan baselines on a base 11.2.0.3 release of Oracle on a 64 bit Linux virtual machine.  I ran DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE to create a SQL plan baseline for a test query after running that query once to get its plan in the cursor cache.  When I ran the test query after creating the SQL plan baseline and called dbms_xplan.display_cursor to see its new plan I got an ORA-01403 error:

ORCL:SYSTEM>select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
SQL_ID  4mu5a860ardpz, child number 1

An uncaught error happened in prepare_sql_statement : ORA-01403: no data found

NOTE: cannot fetch plan for SQL_ID: 4mu5a860ardpz, CHILD_NUMBER: 1
      Please verify value of SQL_ID and CHILD_NUMBER;
      It could also be that the plan is no longer in cursor cache (check v$sql_plan)
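
For context, the baseline itself had been created along these lines, a minimal sketch of the DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE call using the sql_id shown above:

DECLARE
  plans_loaded PLS_INTEGER;
BEGIN
  -- load the cached plan for the test query as a SQL plan baseline
  plans_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                    sql_id => '4mu5a860ardpz');
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || plans_loaded);
END;
/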

I applied patch 19183482 to my test system and the ORA-01403 error went away:

ORCL:SYSTEM>select * from table(dbms_xplan.display_cursor(null,null,'ALL'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------
SQL_ID  4mu5a860ardpz, child number 1
-------------------------------------
select sum(blocks) from test

Plan hash value: 1950795681

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |    29 (100)|          |
|   1 |  SORT AGGREGATE    |      |     1 |     3 |            |          |
|   2 |   TABLE ACCESS FULL| TEST |  2844 |  8532 |    29   (0)| 00:00:01 |
---------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------

   1 - SEL$1
   2 - SEL$1 / TEST@SEL$1

Column Projection Information (identified by operation id):
-----------------------------------------------------------

   1 - (#keys=0) SUM("BLOCKS")[22]
   2 - "BLOCKS"[NUMBER,22]

Note
-----
   - SQL plan baseline SQL_PLAN_dscatqjvyk4qy6b581ab9 used for this statement

Here is a zip of the script that I ran to create the ORA-01403 error and the output that it generated with and without the patch applied on my test database: zip

Here is a list of the bugs that patch 19183482’s readme says it will fix:

14512308: SPM BASELINE PLAN CANNOT BE REPRODUCED
15858022: ‘LIBRARY CACHE: MUTEX X’ AND LIBRARY CACHE LOCKS  PURGED_CURSOR
16400122: SPIKES IN LIBRARY CACHE
16625010: SPM BASELINE NOT WORKING FOR SQL CALLED FROM PL/SQL

I haven’t gotten very far into my investigation of SQL plan baselines but it looks like it would be a good idea to apply 19183482 before using SQL plan baselines on 11.2.0.3.  I barely got started using SQL plan baselines and I immediately hit this bug.

– Bobby

Categories: DBA Blogs

Log Buffer #390, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-09-26 10:50

All eyes are on the Oracle Open World. Thousands of sessions, demos, and labs topped up with the countless networking opportunities, Moscone Center is the place to be.

Oracle:

Oracle OpenWorld 2014 – Bloggers Meetup.

Query to show sql_ids related to SQL Profiles.

Oracle OpenWorld is nearly upon us and the agenda is packed with interesting sessions. Prominent among these are several sessions by customers who are able to share their stories of success with Oracle Multitenant.

The biggest challenge for people coming to such huge events as Oracle OpenWorld is to navigate through all of the events that simultaneously happen at various locations.

Pythian at Oracle OpenWorld 2014.

SQL Server:

Monitoring Longest Running Transaction using SQL Server Agent Alerts.

Free PDF Booklet: 119 SQL Code Smells.

Level 1: What is “SQL Server AlwaysOn”?

10 things I learned about rapidly scaling websites with Azure.

Importing Excel Data into SQL Server Via SSIS: Questions You Were Too Shy to Ask

MySQL:

How to do reverse engineering with MySQL Workbench.

MySQL 5.5.40 Overview and Highlights.

MySQL 5.7.5: GROUP BY respects functional dependencies!

JSON UDF functions version 0.3.2 have been released.

Importing related MySQL tables into an Excel Data Model using MySQL for Excel.

Categories: DBA Blogs

On the Road with Luan

Pythian Group - Fri, 2014-09-26 08:59

For the months of September, October, and November, Microsoft SQL Server MVP Luan Moreno will be touring Brazil, Europe, and the USA for various speaking engagements. He’ll be sharing his SQL Server knowledge and insights at SQL Saturdays, conferences, and virtual conferences.

“Pythian is a company that values and truly supports employees in sharing their knowledge at community events,” Luan says. “It’s a pleasure to be a part of this amazing company.”

 

Date        Location                 Event                Topic                                              Notes
----------  -----------------------  -------------------  -------------------------------------------------  -----------------
27-Sep-14   São Paulo, Brazil        SQL Saturday #325    In-Memory OLTP a.k.a. Hekaton                      Speaking schedule
28-Sep-14   São Paulo, Brazil        SQL Saturday #325    MythBusters – Caçadores de Mitos                   Speaking schedule
6-Oct-14    Online                   MVP ShowCast 2014    In-Memory OLTP – Cenários de Aplicação             Register here
7-Oct-14    Online                   24 Hours of PASS     Troubleshooting SQL Server Extended Events         Register here
7-Oct-14    Online                   24 Hours of PASS     In-Memory OLAP a.k.a. ColumnStore Index Internals  Register here
17-Oct-14   Porto Alegre, Brazil     TDC 2014             ORM e Consultas x Performance                      Speaking schedule
25-Oct-14   Rio de Janeiro, Brazil   SQL Saturday #329    In-Memory OLAP a.k.a. ColumnStore Index            Speaking schedule
Oct 28-31   Barcelona, Spain         TechEd 2014 Europe   Subject Matter Expert (SME) – SQL Server           Event schedule
Nov 3-6     Redmond, WA              MVP Summit 2014      TBD                                                Event schedule
Nov 4-7     Seattle, WA              PASS Summit 2014     TBD                                                Event schedule

 

Will you be attending any of these events? If so, Luan extends the invite to chat SQL Server over a coffee! Reach out to him at moreno@pythian.com or follow him on Twitter at @luansql.

Categories: DBA Blogs

12c: Does PDB Have An SPfile?

Oracle in Action - Fri, 2014-09-26 01:39


In a multitenant container database, since there are many PDBs per CDB, it is possible to set some parameters individually for each PDB. The SPFILE for the CDB stores parameter values associated with the root, which apply to the root and serve as default values for all other containers. Different values can be set in PDBs for those parameters where the column ISPDB_MODIFIABLE in V$PARAMETER is TRUE. Parameters set for a PDB are stored in the table PDB_SPFILE$, so they are remembered across PDB close/open and across restarts of the CDB.
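
As a quick check (a sketch, not part of the demo below), the parameters that can take PDB-specific values can be listed from v$parameter:

-- list parameters that may be set per PDB
select name, value, ispdb_modifiable
from v$parameter
where ispdb_modifiable = 'TRUE'
order by name;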

– Currently I have a CDB called CDB1 with one PDB – PDB1.

SQL> sho con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> sho pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO
3 PDB1                           READ WRITE NO

– There is a PDB_SPFILE$ table in each of the containers (cdb$root and PDB1)

SQL>  select con_id,  table_name from cdb_tables  where table_name = 'PDB_SPFILE$';

CON_ID TABLE_NAME
---------- --------------------
3 PDB_SPFILE$
1 PDB_SPFILE$

– PDB_SPFILE$ contains only those parameters which have been explicitly set for a container, hence currently there are no rows

SQL>   col container_name for a10
col parameter for a20
col value$ for a30

select container.name container_name, par.name PARAMETER,
par.value$
from pdb_spfile$ par, v$containers container
where par.pdb_uid = container.con_uid
and par.name = 'cursor_sharing';

no rows selected

– Let’s explicitly set cursor_sharing = ‘similar’ in the root and check if it is reflected in pdb_spfile$

SQL> alter system set cursor_sharing='similar';

col container_name for a10
col parameter for a20
col value$ for a30
select container.name container_name, par.name PARAMETER,
par.value$
from pdb_spfile$ par, v$containers container
where par.pdb_uid = container.con_uid
and par.name = 'cursor_sharing';

no rows selected

– It does not show any rows, but v$spparameter has been updated, probably implying that the spfile for the root is maintained in the operating system only and PDB_SPFILE$ does not contain info about parameters in cdb$root.

SQL> select name, value from v$spparameter where name='cursor_sharing';

NAME                           VALUE
------------------------------ -------
cursor_sharing                 similar

-- v$parameter shows the value of parameter for root

SQL> col name for a30
col value for a30

select con_id, name, value from v$parameter
where name = 'cursor_sharing';


CON_ID NAME                           VALUE
---------- ------------------------------ --------
1 cursor_sharing                 similar

– To see parameters for the CDB and all the PDBs (except PDB$SEED), v$system_parameter can be queried. Currently it shows only the value for the CDB, which will be inherited by all the PDBs.

SQL>select con_id, name, value from v$system_parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ---------
0 cursor_sharing                 similar

– Change container to PDB1 and verify that the PDB has inherited the value from the CDB

SQL> alter session set container=pdb1;

sho parameter cursor_sharing

NAME                                 TYPE        VALUE
------------------------------------ -------     ---------------
cursor_sharing                       string      similar

– Since the parameter has not been explicitly set in the PDB, v$spparameter shows a record for con_id = 0 with NULL in the value column

SQL> select con_id, name, value from v$spparameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ----------
0 cursor_sharing

– Let’s check if the parameter can be modified for the PDB

SQL> col ispdb_modifiable for a17
select con_id, name, value, ispdb_modifiable

from v$parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE      ISPDB_MODIFIABLE
---------- ------------------------------ ---------- -----------------
3 cursor_sharing                 similar    TRUE

– Since the parameter can be modified in PDB, let us modify its value in PDB to ‘FORCE’

SQL> alter system set cursor_sharing = 'FORCE';

sho parameter cursor_sharing

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
cursor_sharing                       string      FORCE

SQL> select con_id, name, value from v$parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ----------
3 cursor_sharing                 FORCE

– v$spparameter shows the updated value, but con_id is still 0 (bug??)

SQL> select con_id, name, value from v$spparameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ----------
 0 cursor_sharing                 FORCE

– The current value of the parameter for the PDB can be viewed from the root using v$system_parameter

SQL> alter session set container=cdb$root;
select con_id, name, value from v$system_parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ -------
0 cursor_sharing                 similar
3 cursor_sharing                 FORCE

– The current value in the spfile of the PDB can be seen from pdb_spfile$

SQL> col value$ for a30
select pdb.name PDB_NAME, par.name PARAMETER, par.value$
from pdb_spfile$ par, v$pdbs pdb
where par.pdb_uid = pdb.con_uid
and par.name = 'cursor_sharing';

PDB_NAME   PARAMETER            VALUE$
---------- -------------------- ------------------------------
PDB1       cursor_sharing       'FORCE'

– The parameter still has the earlier value of ‘similar’ for cdb$root

SQL> sho parameter cursor_sharing

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
cursor_sharing                       string      similar

SQL> col name for a30
col value for a30

select con_id, name, value from v$parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ --------
1 cursor_sharing                 similar

– Let’s check if the modified value persists across closing / opening of the PDB

SQL> alter pluggable database pdb1 close;

– After the PDB is closed, the entry in its spfile is still visible, but the current value cannot be seen as the PDB is closed

SQL> col value$ for a30
select pdb.name PDB_NAME, par.name PARAMETER, par.value$
from pdb_spfile$ par, v$pdbs pdb
where par.pdb_uid = pdb.con_uid
and par.name = 'cursor_sharing';

PDB_NAME   PARAMETER            VALUE$
---------- -------------------- -------------
PDB1       cursor_sharing       'FORCE'

SQL> select con_id, name, value from v$system_parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ------------
0 cursor_sharing                 similar

– It can be seen that after the PDB is re-opened, the updated value still persists

SQL>  alter pluggable database pdb1 open;

select con_id, name, value from v$system_parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ -------------
0 cursor_sharing                 similar
3 cursor_sharing                 FORCE

– Let’s verify that the parameter change persists across a CDB shutdown

SQL> shu immediate;
     startup
     alter pluggable Database  pdb1 open;

     select con_id, name, value 
     from   v$system_parameter
     where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ -----------
0 cursor_sharing                 similar
3 cursor_sharing                 FORCE

SQL> col value$ for a30
     select pdb.name PDB_NAME, par.name 
            PARAMETER, par.value$
     from pdb_spfile$ par, v$pdbs pdb
     where par.pdb_uid = pdb.con_uid
      and par.name = 'cursor_sharing';

PDB_NAME   PARAMETER            VALUE$
---------- -------------------- ------------------------------
PDB1       cursor_sharing       'FORCE'

– Now we will change the parameter in the PDB spfile only

SQL> alter session set container=pdb1;

     alter system set cursor_sharing = 'EXACT' scope=spfile;

– The current value still remains FORCE

sho parameter cursor_sharing

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------
cursor_sharing                       string      FORCE

– The value has been changed to EXACT in the spfile only

SQL> select con_id, name, value
     from     v$spparameter
     where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ -----------
0 cursor_sharing                 EXACT

– The above changes can be seen from root as well

SQL> alter session set container=cdb$root;

-- The current value is shown as FORCE

SQL> select con_id, name, value 
      from  v$system_parameter
     where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ----------
0 cursor_sharing                 similar
3 cursor_sharing                 FORCE

– The value in SPFILE is ‘EXACT’ as set

SQL> col value$ for a30
     select pdb.name PDB_NAME, par.name 
            PARAMETER, par.value$
     from pdb_spfile$ par, v$pdbs pdb
      where par.pdb_uid = pdb.con_uid
     and par.name = 'cursor_sharing';

PDB_NAME   PARAMETER            VALUE$
---------- -------------------- ------------------------------
PDB1       cursor_sharing       'EXACT'

– Let’s close and re-open the PDB to verify that the value in the spfile is applied

SQL> alter pluggable database pdb1 close;

    alter pluggable database pdb1 open;
  
     select con_id, name, value 
     from   v$system_parameter
     where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ---------------
0 cursor_sharing                 similar
3 cursor_sharing                 EXACT

– Since the value in the spfile is the same as the default, we can remove this entry by resetting the parameter.

SQL> alter session set container=pdb1;
     alter system reset cursor_sharing;
     sho parameter cursor_sharing

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------
cursor_sharing                       string      EXACT

– The entry has been deleted from the spfile

SQL> select con_id, name, value 
     from v$spparameter
     where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ -------------
0 cursor_sharing

SQL> alter session set container=cdb$root;

     col value$ for a30
     select pdb.name PDB_NAME, par.name 
            PARAMETER, par.value$
     from pdb_spfile$ par, v$pdbs pdb
     where par.pdb_uid = pdb.con_uid
     and par.name = 'cursor_sharing';

no rows selected

I hope this post was useful. Your comments and suggestions are always welcome!!

References:

Oracle documentation

——————————————————————————————-

Related Links:

Home

Oracle 12c Index


The post 12c: Does PDB Have An SPfile? appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Query to show sql_ids related to SQL Profiles

Bobby Durrett's DBA Blog - Thu, 2014-09-25 17:58

I have a number of SQL Profiles on a database I’m working on and I wanted to know the sql_id of the SQL statements that they relate to.  Here is what I came up with:

select distinct 
p.name sql_profile_name,
s.sql_id
from 
dba_sql_profiles p,
DBA_HIST_SQLSTAT s
where
p.name=s.sql_profile;

Here is the output on my system:

SQL_PROFILE_NAME               SQL_ID
------------------------------ -------------
coe_5up1944u1zb7r_1979920644   5up1944u1zb7r
coe_b9ad7h2rvtxwf_3569562598   b9ad7h2rvtxwf
coe_9f21udzrfcygh_2815756405   9f21udzrfcygh
coe_366brswtxxuct_10194574     366brswtxxuct
coe_2261y37rp45py_2815756405   2261y37rp45py

These are all profiles I created manually with the SQLT scripts, so the profile name includes the sql_id, but not all SQL profiles are like this.  I have more entries in dba_sql_profiles than these five, but these are the only rows that matched a row in dba_hist_sqlstat, so I guess this won’t work for queries that are not used very often or that are so fast that they don’t get pulled into the tables behind dba_hist_sqlstat.
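
For statements that never make it into AWR, a similar join against v$sql should catch anything still in the shared pool (a sketch along the same lines, not something I ran in the test above):

select distinct
p.name sql_profile_name,
s.sql_id
from
dba_sql_profiles p,
v$sql s
where
p.name=s.sql_profile;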

– Bobby



Categories: DBA Blogs

In-Memory OLTP – ORDER BY Behavior Clause Using Range Index

Pythian Group - Thu, 2014-09-25 14:43

Some clients are beginning to think about the possibility of migrating some disk-based tables to In-Memory tables — this process is not so simple. Migrating tables requires a new mindset, as some things have changed in the SQLOS and SQL Server architecture.

It is extremely important to know about the differences, the scenarios where it can be applied, and the non-supported aspects. First, take a look at the official In-Memory Tables documentation – http://msdn.microsoft.com/en-us/library/dn133186%28v=sql.120%29.aspx

One of the things we need to know about In-Memory tables is that this feature comes with two new types of indexes, known as the HASH INDEX and the RANGE INDEX.

1. Hash Index – The Hash Index is perfect for equality predicates — for example, searching for a specific number or character value.

Figure 1 – Hash Index Structure
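
For illustration (a sketch, not from the original post — the table name and bucket count are invented, and a memory-optimized filegroup is assumed to exist), a hash index on a memory-optimized table is declared like this:

CREATE TABLE [dbo].[Customers_InMem]
(
    -- BUCKET_COUNT should be sized to the expected number of
    -- distinct key values; hash indexes only help equality seeks
    [CustomerId] INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);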

 

2. Range Index – The Range Index is perfect for range predicates — for example, searching for a range of values — and is most often used with date ranges (DATE, DATETIME and DATETIME2).

Figure 2 – Range Index Structure

When we create a range index, a very important aspect is the sort direction of the data (ASC or DESC). For in-disk tables (conventional tables) the ORDER BY direction is not normally a big point of concern because the data pages are doubly linked — the difference only becomes significant when you sort more than one column in two different directions.

Another interesting aspect of the ORDER BY clause on in-disk tables is the interaction between index ordering and parallelism; you can check this interesting behavior here: sqlmag.com/t-sql/descending-indexes

Now, let’s analyze the ORDER BY clause on In-Memory tables using the RANGE INDEX, and check the behavior and the differences when using ASC and DESC order in queries.

 

USE HktDB
go

sp_help 'inmem_DadosAtendimentoClientes'

Figure 3 – Range Index = idxNCL_DataCadastro

 

Creating an In-Memory table with a NONCLUSTERED range index in ascending order. (The column list was elided in the original post; a minimal key column and the MEMORY_OPTIMIZED clause are assumed here so the statement is complete.)

CREATE TABLE [dbo].[inmem_DadosAtendimentoClientes]
(
    -- key column assumed for completeness; the original post
    -- did not show the full column list
    [Id] INT NOT NULL PRIMARY KEY NONCLUSTERED,
    [DataCadastro] DATETIME NOT NULL,
    INDEX [idxNCL_DataCadastro] NONCLUSTERED
    (
        [DataCadastro] ASC
    )
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

 

Retrieving information about the range index:

 

SELECT SIS.name AS IndexName,
       SIS.type_desc AS IndexType,
       XIS.scans_started,
       XIS.rows_returned,
       XIS.rows_touched
FROM sys.dm_db_xtp_index_stats AS XIS
INNER JOIN sys.indexes AS SIS
        ON XIS.object_id = SIS.object_id
       AND XIS.index_id = SIS.index_id
WHERE XIS.object_id = 50099219
  AND SIS.type_desc = 'NONCLUSTERED'

Figure 4 – Range Index Information

 

Execution 1 – Ordering using ORDER BY ASC

SELECT *
FROM inmem_DadosAtendimentoClientes
WHERE DataCadastro BETWEEN '2005-02-15 18:58:48.000' AND '2005-02-25 18:58:48.000'
ORDER BY DataCadastro ASC

[execution plan screenshot]

 

Execution 2 – Ordering using ORDER BY DESC

SELECT *
FROM inmem_DadosAtendimentoClientes
WHERE DataCadastro BETWEEN '2005-02-15 18:58:48.000' AND '2005-02-25 18:58:48.000'
ORDER BY DataCadastro DESC

[execution plan screenshot]

[plan detail showing the Sort operator]

The Sort operator accounts for 80% of the cost in this query. Why?

 

Analyzing the XML of the execution plan…

<OrderBy>
  <OrderByColumn Ascending="false">
    <ColumnReference Database="[HktDB]" Schema="[dbo]" Table="[inmem_DadosAtendimentoClientes]" Column="DataCadastro" />
  </OrderByColumn>
</OrderBy>

 

What happened?

When we execute a query with an ORDER BY clause on a range index column, we need to check the direction the index was created with – ASC or DESC. In this case I created the index on the ‘DataCadastro‘ column in ASC order, so the data is ordered in the ascending direction and not the descending one. For In-Memory tables the declared order MATTERS a lot: you only benefit from the index ordering if the direction you search in is the same as the direction the index was created with. This is because the data is stored differently in memory; it is a BY DESIGN consideration.

MSDN – “Nonclustered indexes (not hash indexes) support everything that hash indexes supports plus seek operations on inequality predicates such as greater than or less than, as well as sort order. Rows can be retrieved according to the order specified with index creation. If the sort order of the index matches the sort order required for a particular query, for example if the index key matches the ORDER BY clause, there is no need to sort the rows as part of query execution. Memory-optimized nonclustered indexes are unidirectional; they do not support retrieving rows in a sort order that is the reverse of the sort order of the index. For example, for an index specified as (c1 ASC), it is not possible to scan the index in reverse order, as (c1 DESC).”

 

Recommendation

Always create the RANGE index in the direction you intend to query. Most of the time the most common pattern is DESC, because you normally want to see the most recent data in your application, or search for the last transactions before a specific date. Be careful about this up front: if you need to change the index direction, you have to DROP and re-CREATE the table, because In-Memory tables do not support ALTER.
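
If most-recent-first is your common access pattern, that means declaring the index descending from the start; a sketch reusing the same table and column names, with the key column assumed earlier:

CREATE TABLE [dbo].[inmem_DadosAtendimentoClientes]
(
    [Id] INT NOT NULL PRIMARY KEY NONCLUSTERED,
    [DataCadastro] DATETIME NOT NULL,
    -- declared DESC up front, since memory-optimized indexes
    -- cannot be altered after creation
    INDEX [idxNCL_DataCadastro] NONCLUSTERED ([DataCadastro] DESC)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);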

You don’t want to see this in your environment, ever!

 

[plan comparison screenshot]

 

 

Categories: DBA Blogs

The ADMINISTER SQL MANAGEMENT OBJECT Privilege

Hemant K Chitale - Thu, 2014-09-25 08:26
In 11.2.0.2

Having seen in the previous post, "EXECUTE Privilege on DBMS_SPM not sufficient", let's see if there is a risk to the ADMINISTER SQL MANAGEMENT OBJECT privilege.

First, recreating the SQL Plan

SQL> connect spm_test/spm_test
Connected.
SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE;

Session altered.

SQL> variable qrn number ;
SQL> exec :qrn := 5;

PL/SQL procedure successfully completed.

SQL> select * from spm_test_table where id_column=:qrn;
ID_COLUMN DATA_COL
---------- ---------------
5 5

SQL>select * from spm_test_table where id_column=:qrn;

ID_COLUMN DATA_COL
---------- ---------------
5 5

SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=FALSE;

Session altered.

SQL>
SQL> connect hemant/hemant
Connected.
SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
2 from dba_sql_plan_baselines
3 where creator='SPM_TEST'
4 /

SQL_HANDLE PLAN_NAME
------------------------------ ------------------------------
SQL_TEXT
--------------------------------------------------------------------------------
ENA ACC FIX
--- --- ---
SQL_6ceee9b24e9fd50a SQL_PLAN_6tvr9q979zp8a1e198e55
select * from spm_test_table where id_column=:qrn
YES YES NO


SQL>

Next, setup the BREAK !

SQL> create user spm_break identified by spm_break;

User created.

SQL> grant create session, administer sql management object to spm_break;

Grant succeeded.

SQL> connect spm_break;
Enter password:
Connected.
SQL>
SQL> set serveroutput on
SQL> declare
2 ret_value pls_integer;
3 begin
4 ret_value := dbms_spm.drop_sql_plan_baseline(
5 sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
6 dbms_output.put_line('Return : ' || ret_value);
7 end;
8 /
Return : 1

PL/SQL procedure successfully completed.

SQL>
SQL> connect hemant/hemant
Connected.
SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
2 from dba_sql_plan_baselines
3 where creator = 'SPM_TEST'
4 /

no rows selected

SQL>

I was able to use the SPM_BREAK account to *DROP* an SQL Plan Baseline that was created by the SPM_TEST account, without identifying which account it belonged to -- i.e. which account was the creator!  See Oracle Support Doc 1469099.1 and the reference to Bug 12932784.   Isn't that a bug, or a security loophole?
Apparently, this privilege is meant to be used only by Administrators.  But a non-Administrator cannot manage and evolve his own SQL Plan Baselines without this privilege.  So does that mean that only an Administrator should capture, evolve and manage SQL Plan Baselines?

If you have a shared environment with different development teams developing different applications in different schemas, how do you provide them the facility to manage their own SQL Plan Baselines?  The EXECUTE privilege on DBMS_SPM is not sufficient.  Yet, ADMINISTER SQL MANAGEMENT OBJECT is excessive, as one development team could drop the SQL Plan Baselines of another development team (i.e. another application).


Can anyone test whether the ADMINISTER SQL MANAGEMENT OBJECT privilege is required in addition to EXECUTE on DBMS_SPM in order to simply manage / evolve one's own SQL Plans in 11.2.0.4 / 12.1.0.1 / 12.1.0.2?
.
.
.

Categories: DBA Blogs

12c: Optimizer_Dynamic_Sampling = 11

Oracle in Action - Thu, 2014-09-25 04:31


With the default sampling level of 2 (from 10g onwards), dynamic sampling is performed only for objects for which statistics do not exist. If the statistics are stale or insufficient, dynamic sampling is not done.

12c introduces a new value of 11 for OPTIMIZER_DYNAMIC_SAMPLING. This value allows the optimizer to automatically perform dynamic sampling using an appropriate level for a SQL statement, even if all basic table statistics exist but are found to be stale or insufficient. The results of the dynamically sampled queries are persisted in the cache, as dynamic statistics, allowing other SQL statements to share these statistics. This level will no doubt generate a better plan during the first execution of the statement itself, but it will also lead to dynamic sampling being triggered more frequently, and sometimes unnecessarily.

This example demonstrates that if OPTIMIZER_DYNAMIC_SAMPLING is set to 11, dynamic sampling will be performed even in the case of stale or insufficient statistics. Moreover, dynamic sampling may be unnecessarily triggered in some scenarios.

Insufficient Statistics 

I have created a table HR.BIRTHDAYS with 10000 rows, whose indexed column MM contains the numeric month of birth with NDV = 12. The data distribution in the column is skewed. Statistics have been gathered for the table without a histogram.

DB12c>select mm, count(*) from hr.birthdays group by mm order by mm;

MM   COUNT(*)
---------- ----------
1       9989
2         1
3         1
4         1
5         1
6         1
7         1
8         1
9         1
10        1
11        1
12        1

12 rows selected.

If OPTIMIZER_DYNAMIC_SAMPLING were set to 2 (the default), dynamic sampling would not be done, as statistics are present for the table. However, if the parameter is set to 11 (new in 12c), in view of the skewed data distribution the existing statistics are found to be insufficient (missing histogram) and dynamic sampling is performed, leading to accurate cardinality estimates both for MM = 12, which occurs 0.01% of the time, and for MM = 1, which occurs 99.89% of the time.

DB12c>alter session set optimizer_dynamic_sampling=11;
set autot trace explain

select * from hr.birthdays where mm = 12;

Execution Plan
----------------------------------------------------------
Plan hash value: 3569291752
-----------------------------------------------------------------------------
|Id |Operation                          |Name    |Rows|Bytes|Cost(%CPU)|Time|
-----------------------------------------------------------------------------
|  0|SELECT STATEMENT                    |         | 1 |  37| 2(0)| 00:00:01|
|  1| TABLE ACCESS BY INDEX ROWID BATCHED|BIRTHDAYS| 1 |  37| 2(0)| 00:00:01|
|* 2|  INDEX RANGE SCAN                  |BDAY_IDX | 1 |    | 1(0)| 00:00:01|
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("MM"=12)

Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)

DB12c>select * from hr.birthdays where mm = 1;
set autot off

Execution Plan
----------------------------------------------------------
Plan hash value: 3605468880

-----------------------------------------------------------------------------
| Id  | Operation         | Name      | Rows  | Bytes | Cost (%CPU)| Time   |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |  9989 |   360K|    17   (0)|00:00:01|
|*  1 |  TABLE ACCESS FULL| BIRTHDAYS |  9989 |   360K|    17   (0)|00:00:01|
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("MM"=1)
Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)

Stale Statistics

Now, to make the statistics stale, I have modified the data in the table and have not refreshed the statistics. Note that there are now no rows for MM = 1 or 2, so the actual NDV = 10.
Although there are 8388 rows in the table, the dictionary statistics still show NUM_ROWS = 10000 and NDV = 12.

DB12c> select count(*), count(distinct mm) from hr.birthdays;

COUNT(*) COUNT(DISTINCTMM)
---------- -----------------
 8388                10

DB12c>select owner, table_name, num_rows
from dba_tables
where owner = 'HR'
and table_name = 'BIRTHDAYS';

OWNER   TABLE_NAME        NUM_ROWS
------- --------------- ----------
HR      BIRTHDAYS            10000

DB12c>select owner, table_name, column_name, num_distinct
from dba_tab_cols
where table_name= 'BIRTHDAYS' and column_name = 'MM';

OWNER   TABLE_NAME      COLUMN_NAME     NUM_DISTINCT
------- --------------- --------------- ------------
HR      BIRTHDAYS       MM                        12

If OPTIMIZER_DYNAMIC_SAMPLING = 2 (default),  dynamic sampling will not be done, as statistics (although stale) are present for the table.

With OPTIMIZER_DYNAMIC_SAMPLING = 11, since statistics are stale, dynamic sampling is performed and an almost correct number of rows is estimated for both MM = 1 and MM = 4.

DB12c>alter session set optimizer_dynamic_sampling=11;
select /*+ gather_plan_statistics */ count(*)
from hr.birthdays where mm = 1;

select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
SQL_ID  ghg0pr81m1ha3, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from hr.birthdays where
mm = 1

Plan hash value: 4218648105
-----------------------------------------------------------------------------
|Id  |Operation             |Name    |Starts|E-Rows|A-Rows|  A-Time| Buffers|
-----------------------------------------------------------------------------
|  0 |SELECT STATEMENT      |        |    1 |      |    1 |00:00:00.01|  40 |
|  1 | SORT AGGREGATE       |        |    1 |    1 |    1 |00:00:00.01|  40 |
|* 2 |  INDEX FAST FULL SCAN|BDAY_IDX|    1 |    1 |    0 |00:00:00.01|  40 |
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("MM"=1)
Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)

DB12c>select /*+ gather_plan_statistics */ count(*)
from hr.birthdays where mm = 4;

select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
SQL_ID  bhrdb027v2pnt, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from hr.birthdays where
mm = 4

Plan hash value: 3164848757
-----------------------------------------------------------------------------
|Id  |Operation         |Name    |Starts|E-Rows|A-Rows|  A-Time   | Buffers |
-----------------------------------------------------------------------------
|  0 |SELECT STATEMENT  |        |    1 |      |    1 |00:00:00.0 |       5 |
|  1 | SORT AGGREGATE   |        |    1 |    1 |    1 |00:00:00.0 |       5 |
|* 2 |  INDEX RANGE SCAN|BDAY_IDX|    1 |  810 |  810 |00:00:00.0 |       5 |
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("MM"=4)

Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)

Unnecessary Sampling

To demonstrate unnecessary sampling due to sampling level 11, I have created a unique index on the NAME column and deleted the statistics for the table.
A search on NAME would not have triggered dynamic sampling with a sampling level of 2, since there is a unique index on the NAME column.
On the contrary, with OPTIMIZER_DYNAMIC_SAMPLING = 11, dynamic sampling is unnecessarily performed.

DB12c> alter session set optimizer_dynamic_sampling=11;

select /*+ gather_plan_statistics */ count(*)
from hr.birthdays where NAME = 'NAME OCTOBER  8802';


select * from table(dbms_xplan.display_cursor(format => 'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
SQL_ID  d79yg9wq02swy, child number 1
-------------------------------------
select /*+ gather_plan_statistics */ count(*) from hr.birthdays where
NAME = 'NAME OCTOBER  8802'

Plan hash value: 480407801
-----------------------------------------------------------------------------
|Id  |Operation          |Name         |Starts|E-Rows|A-Rows|A-Time |Buffers|
-----------------------------------------------------------------------------
|  0 |SELECT STATEMENT   |             |    1 |      |    1 |00:00:00.01| 2 |
|  1 | SORT AGGREGATE    |             |    1 |    1 |    1 |00:00:00.01| 2 |
|* 2 |  INDEX UNIQUE SCAN|BDAY_NAME_IDX|    1 |    1 |    1 |00:00:00.01| 2 |
-----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("NAME"='NAME OCTOBER  8802')
Note
-----
- dynamic statistics used: dynamic sampling (level=AUTO)

Hence, if OPTIMIZER_DYNAMIC_SAMPLING = 11, sampling is performed in case of:
•    missing statistics (as earlier with sampling level 2),
•    stale statistics, and
•    insufficient statistics,
possibly leading to an optimal plan during the first execution of the statement itself. But this has the disadvantage that dynamic sampling will be unnecessarily triggered in some cases, leading to performance degradation. To have the best of both worlds, the sampling level can be left at its default value of 2 and SQL Plan Directives (SPDs) can be employed to guide the optimizer to perform dynamic sampling in case of stale / insufficient statistics too.

References:

http://www.hroug.hr/hr/content/download/14418/236628/file/401_Senegacnik_What_is_new_in_CBO.pdf
http://docs.oracle.com/cd/E11882_01/server.112/e16638/stats.htm#PFGRF30101

————————————————————————————

Related Links:

Home

Database 12c Index


The post 12c: Optimizer_Dynamic_Sampling = 11 appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Is X a Big Data product?

Pythian Group - Wed, 2014-09-24 18:40

Virtually everyone in the data space today claims that they are a Big Data vendor and that their products are Big Data products. Of course — if you are not in Big Data then you are legacy. So how do you know whether a product really is a Big Data product?

While there might not be fully objective criteria (mainly because the definition of Big Data is still up in the air and people interpret it as they see fit for their purpose), I think I can provide one good suggestion on how to determine when a certain product is NOT a Big Data product. Of course, it will depend on the definition of Big Data that you believe in.

I believe that Big Data is mostly about being “affordable at scale”, quoting Jeff Needham, my good friend and fellow member of OakTable Network. In practice, that means commodity software, commodity hardware and commodity operations of the solution. I won’t define the thresholds of scale in terabytes or levels of complexity, etc., but I can provide some guidelines.

Talking about commodity hardware, it’s generally based on x86 architecture (though some say ARM is emerging, but it’s been emerging way too long for my liking) with some reasonably priced components. That would typically be dual-socket systems with up to a few hundred GB of RAM, maybe a dozen disks or some SSDs, and cost-effective networking. If we narrow down to Hadoop-like architectures, then a cluster node would typically cost between $4,000 and $10,000. Anything significantly above that is probably overpriced or overspec’ed.

OK. Now that we are good with hardware, let’s look at software. Obviously, open-source software without any commercial support qualifies for commodity and being affordable. If you are Facebook-scale (or getting relatively close), your commercial support can be your own large-scale, capable engineering team. Otherwise, you will most likely have commercial support. Back in the Hadoop world, you should expect to pay for a commercially supported Hadoop distribution (whichever of the three leading distributions it is — Cloudera, Hortonworks or MapR) the same order of magnitude as for the hardware itself. Annually, it would be a fraction of the hardware cost, or over three years it would be about the cost of the hardware purchase or slightly above, depending on the level of support and platform features. You get the idea. Non-open-source products licensed at similar pricing levels are Big Data products too — you don’t have to be open-source to call your technology Big Data.

Let’s take an example of a supposedly Big Data product. If a product has “Big Data” in the name, it surely must be a Big Data product. Eh?

I love quite a few Oracle products so why don’t I look at their line up… Big Data Appliance is a prebuilt Hadoop system or Hadoop appliance with 18 powerful data nodes per rack and list price tag of $525K per rack. That gets you to almost $30K per data node which is quite high and you would likely not build your own clusters like that. Add to that about $100K per year of support and maintenance for systems and OS (you can check pricing in the public engineered system price list). Big Data Appliance does include commercially supported Cloudera distribution so it might not be that terrible pricing-wise. If you have experience buying Oracle products you also know that customers don’t pay list prices. Thus, I can accept that Big Data Appliance can actually be called a Big Data product… just.

Now let’s look at another product — Big Data SQL. It has been announced but hasn’t quite been released just yet (or did I miss it?). Awesome product, by the way. It is a great way to push some data-intensive SQL processing from Oracle Database down to Hadoop. Now, it’s probably not widely known (since it hasn’t really been publicly released and sold yet) that Big Data SQL is licensed per disk spindle, at $4,000 per spindle list-price. Add to that the typical 22% of annual software support and maintenance from Oracle. If I were to license Big Data SQL for a 100-node Hadoop cluster with 12 disks per node, it would cost me almost $5M based on list-price. Don’t forget to add 22% annually. That is an order of magnitude more than I would spend on the hardware building such a cluster. But wait, it looks like Big Data SQL only works with Big Data Appliance. Even in that case, the cost of Big Data SQL per single-rack appliance is $864K + 22% annually, and that’s just one additional tool for your Big Data platform.
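
(The arithmetic: 100 nodes × 12 spindles × $4,000 per spindle = $4,800,000 list price, plus 22% of that, roughly $1.06M, every year in support.)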

Based on what I know about Big Data SQL (and assuming it works as advertised when released), I love it — push code to data, scalable massive parallel processing, leveraging great features from Exadata Storage software. Great job to the folks who developed this product. Unfortunately, I cannot call it a Big Data product — it’s not affordable at scale.

So when you look at other vendors calling their products Big Data — do this costing assessment, and if it doesn’t come out as affordable at scale then it’s not a Big Data product. And feel free to share your assessments here for the rest of us. I’m sure not everyone will share my line of thinking either. Fire away.

Categories: DBA Blogs

Top 7 Reasons Why Oracle Conferences Rock!

Top 7 Reasons Why Oracle Conferences Rock!
Why take the time and make the effort to attend an Oracle database conference or Oracle user group meeting? We're all busy, so there had better be some super good reasons to make the effort! For me, the benefits definitely exceed the cost.

There are many different conferences to choose from. There are professional conferences, leadership conferences, scientific conferences and business focused conferences. So why an Oracle Database conference? What's the big deal?

This is difficult for me to explain, so I've summarized why I love Oracle conferences into a "Top 7" list.

Number 7. Get free stuff from vendors

I suspect 50% of Oracle DBAs attend conferences because they love getting free stuff. If you want to maximize the good stuff, timing is everything. Get to the booth when the exhibition hall first opens, because vendors have tons of stuff to give away and they are full of energy. Also, show up again just before the exhibition hall closes on the final day. Vendors are tired and want to get rid of as much stuff as possible... and you just happen to be there!

Number 6. Become known in your industry

If you want to focus and excel in your career in a particular area, then pass on what you know. If you hold on to what you learn, no one will know you're an expert but you... and that's lonely. One of the best ways to become known and enjoy conferences is to speak at them. If this is something you want to do, please email me. I'll mentor you. I'm serious.

Number 5. Talk to others, if you want to

Want to talk with people who care about Oracle technology as much as you do? Conferences are a great way to break out of your rut and think... and if you want, to talk. One of the reasons I like conferences is it gives me a chance to meet with past students.

Number 4. Learn, if you want to

There are always opportunities to learn at conferences. How many times have I heard someone say they were in a lame session. But when I ask if they read the abstract first, they say no. If you want to increase your chances of attending great sessions, read the abstract. Second, look for speakers you like. Third, sit next to the door just in case you need to make a quick exit! The least you can do is look at the agenda and pick the sessions you want to attend. Do a little research and you'll be surprised how many good sessions there are.


Number 3. Be inspired!

How many places can you go to receive inspiration? Not many, is my answer! Because I can chill for a bit at conferences, observe what's going on in our industry and interact with a wide variety of people associated with Oracle technology, I tend to leave with a fresher and more refined view... or better said, "A Wider View." That in itself is worth the cost of any conference.

Number 2. Network with vendors, speakers, colleagues

I take it one step at a time. I always cruise the exhibition halls looking for new products and touching base with the vendors I know. Personally, I like to encourage the new vendors because it is a massive investment for them to exhibit. When I'm cruising, that's where I typically reconnect with students and other DBAs. It's a more comfortable and relaxing environment for me. If you want to speak with an expert face-to-face, a conference is a natural place to do this. If someone walks up to me with an AWR report, I'll make time for them. If they email me the week before, I'll do whatever I can to schedule some time with them.

Number 1. Be with friends

This one is personal. When I'm teaching or consulting, it's very intense and compressed. But at conferences, I get a chance to unwind, sit back, have a beer, and talk. I'm really involved with the IOUG conference and each year I look forward to reconnecting with this small group of friends. It's such a good feeling to be with "Oracle friends" without having to talk... about Oracle. I can just sit and listen without anyone wondering... What's wrong with Craig? You can do the same thing. Find an Oracle user group and volunteer.

Why Attend? Because Oracle conferences rock!

There you have it. Seven reasons why Oracle conferences and Oracle User Groups rock! If you're not planning to attend any, you're missing a great way to connect, advance your career, and have a lot of fun. See you there!!

All the best in your Oracle performance work!

Craig.




Categories: DBA Blogs

New 12c Default: Controlfile Autobackup On – But only for Multitenant

The Oracle Instructor - Wed, 2014-09-24 10:39

This is a little discovery from my present Oracle Database 12c New Features course in Copenhagen: The default setting for Controlfile Autobackup has changed to ON – but only for Multitenant, apparently:

$ rman target sys/oracle_4U@cdb1

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 24 13:28:39 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CDB1 (DBID=832467154)

RMAN> select cdb from v$database;

using target database control file instead of recovery catalog
CDB
---
YES

RMAN> show controlfile autobackup;

RMAN configuration parameters for database with db_unique_name CDB1 are:
CONFIGURE CONTROLFILE AUTOBACKUP ON; # default

Above you see the setting for a container database (CDB). Now an ordinary (Non-CDB) 12c Database:

$ rman target sys/oracle_4U@orcl

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Sep 24 13:33:27 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL (DBID=1386527354)

RMAN> select cdb from v$database;

using target database control file instead of recovery catalog
CDB
---
NO

RMAN> show controlfile autobackup;

RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
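
By the way, if you want the Non-CDB to behave like the Multitenant default, that is of course just one command away (standard RMAN syntax):

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;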

I really wonder why we have this difference! Is that still so with 12.1.0.2? Don’t believe it, test it! :-)


Tagged: 12c New Features, Backup & Recovery, Multitenant, RMAN
Categories: DBA Blogs

My Data Model Checklist book is now available in Spanish – Just in time for #OOW14!

Galo Balda's Blog - Wed, 2014-09-24 09:34

Originally posted on Oracle Data Warrior:

Exciting news!

I just got this email from Amazon:

Congratulations, your book “UNA LISTA DE VERIFICACIÓN PARA REALIZAR REVISIONES A LOS DISEÑOS DE MODELOS DE DATOS” is live in the Kindle Store and is currently enrolled in KDP Select. It is available for readers to purchase here.

If you are in Mexico, you can get the book here.

If you are in Spain, you can get it here.

Now, truth is I do NOT speak, read or write Spanish. But my good friend, and Oracle expert, Galo Balda does!

I am very grateful to Galo for putting in the effort to translate my little book so other data professionals around the world could read it in their native language.

You can (and should) follow Galo on Twitter, and on his personal blog in either English or Spanish.

BTW – Galo is speaking at OOW14 too…



Filed under: Uncategorized
Categories: DBA Blogs

EXECUTE Privilege on DBMS_SPM not sufficient

Hemant K Chitale - Wed, 2014-09-24 07:57
In 11.2.0.2

Here is a quick demo to show that the "ADMINISTER SQL MANAGEMENT OBJECT"  privilege is required for a non-DBA user to use DBMS_SPM even if EXECUTE has been granted on DBMS_SPM.

SQL> create user spm_test identified by spm_test quota unlimited on users;

User created.

SQL> alter user spm_test default tablespace users;

User altered.

SQL> grant create session, create table to spm_test;

Grant succeeded.

SQL> connect spm_test/spm_test
Connected.
SQL> create table spm_test_table (id_column number primary key, data_col varchar2(15));

Table created.

SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=TRUE;

Session altered.

SQL> insert into spm_test_table select rownum, to_char(rownum) from dual connect by level < 10001;

10000 rows created.

SQL> commit;

Commit complete.

SQL> variable qrn number;
SQL> exec :qrn := 5;

PL/SQL procedure successfully completed.

SQL> select * from spm_test_table where id_column=:qrn;

ID_COLUMN DATA_COL
---------- ---------------
5 5

SQL> select * from spm_test_table where id_column=:qrn;

ID_COLUMN DATA_COL
---------- ---------------
5 5

SQL> alter session set OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES=FALSE;

Session altered.

SQL>
SQL> connect hemant/hemant
Connected.
SQL>
SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
2 from dba_sql_plan_baselines
3 where creator='SPM_TEST'
4 /

SQL_HANDLE PLAN_NAME
------------------------------ ------------------------------
SQL_TEXT
--------------------------------------------------------------------------------
ENA ACC FIX
--- --- ---
SQL_6ceee9b24e9fd50a SQL_PLAN_6tvr9q979zp8a1e198e55
select * from spm_test_table where id_column=:qrn
YES YES NO


SQL>
SQL> connect spm_test/spm_test
Connected.
SQL> declare
2 ret_value pls_integer;
3 begin
4 ret_value := dbms_spm.drop_sql_plan_baseline(
5 sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
6 end;
7 /
declare
*
ERROR at line 1:
ORA-38171: Insufficient privileges for SQL management object operation
ORA-06512: at "SYS.DBMS_SPM", line 2532
ORA-06512: at line 4


SQL> select table_name, grantee, privilege
2 from all_tab_privs
3 where table_name='DBMS_SPM'
4 order by 2,3;

TABLE_NAME GRANTEE
------------------------------ ------------------------------
PRIVILEGE
----------------------------------------
DBMS_SPM PUBLIC
EXECUTE


SQL>
SQL> connect / as sysdba
Connected.
SQL> grant execute on dbms_spm to spm_test;

Grant succeeded.

SQL> connect spm_test/spm_test
Connected.
SQL> declare
2 ret_value pls_integer;
3 begin
4 ret_value := dbms_spm.drop_sql_plan_baseline(
5 sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
6 dbms_output.put_line(ret_value);
7 end;
8 /
declare
*
ERROR at line 1:
ORA-38171: Insufficient privileges for SQL management object operation
ORA-06512: at "SYS.DBMS_SPM", line 2532
ORA-06512: at line 4


SQL>
SQL> connect / as sysdba
Connected.
SQL>
SQL> grant administer sql management object to spm_test;

Grant succeeded.

SQL>
SQL> connect spm_test/spm_test
Connected.
SQL> declare
2 ret_value pls_integer;
3 begin
4 ret_value := dbms_spm.drop_sql_plan_baseline(
5 sql_handle=>'SQL_6ceee9b24e9fd50a',plan_name=>'SQL_PLAN_6tvr9q979zp8a1e198e55');
6 dbms_output.put_line(ret_value);
7 end;
8 /

PL/SQL procedure successfully completed.

SQL>
SQL> connect hemant/hemant
Connected.
SQL> select sql_handle, plan_name, sql_text, enabled, accepted, fixed
2 from dba_sql_plan_baselines
3 where creator = 'SPM_TEST'
4 /

no rows selected

SQL>

Thus, although EXECUTE on DBMS_SPM had been granted to PUBLIC and even explicitly to this ordinary user, it couldn't execute DROP_SQL_PLAN_BASELINE.  The ADMINISTER SQL MANAGEMENT OBJECT privilege was required.
.
.
.


Categories: DBA Blogs

Partner Webcast – Beyond the Dashboard with Oracle BI Publisher

Reporting tools are widely used to support decision making and measure performance. Business Intelligence tools take the dashboard to the next level. It’s more than simply graphically...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Startup upgrade suppresses ORA-00955 on create table WRH$_SQL_PLAN

Bobby Durrett's DBA Blog - Tue, 2014-09-23 15:13

Today I was trying to see if upgrading from 11.2.0.2 to 11.2.0.4 would change the SYS.WRH$_SQL_PLAN table.  This table is large on our production system so I wanted to find out if some time-consuming update to this table would occur that would slow down our production upgrade but not be detected on our test systems.  We recently performed this upgrade on our development database and I was looking at the logs to see whether SYS.WRH$_SQL_PLAN was modified.  I found this curious entry (edited for brevity):

create table WRH$_SQL_PLAN
2  (snap_id           number        /* last snap id, used for purging */
3  ,dbid           number       not null
4  ,sql_id           varchar2(13)    not null
...
42   using index tablespace SYSAUX
43  ) tablespace SYSAUX
44  /

Table created.

The “Table created.” message sounds like the database created a new table without any errors.  But, looking at DBA_OBJECTS, the table was not new.  So, I guessed that the catproc.sql script, which includes the create table statement for SYS.WRH$_SQL_PLAN, must contain something that suppresses the error that you should get when you try to create a table that already exists:

ORA-00955: name is already used by an existing object
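
(The DBA_OBJECTS check mentioned above can be done along these lines — a sketch:)

select object_name, created, last_ddl_time
from dba_objects
where owner = 'SYS'
and object_name = 'WRH$_SQL_PLAN'
and object_type = 'TABLE';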

So, I opened my 11.2.0.3 test database using STARTUP RESTRICT  and ran @catproc.sql as SYSDBA and to my surprise I got the error just as you normally would:

 42   using index tablespace SYSAUX
 43  ) tablespace SYSAUX
 44  /
create table WRH$_SQL_PLAN
             *
ERROR at line 1:
ORA-00955: name is already used by an existing object

So, I decided to restart this database with STARTUP UPGRADE and rerun catproc.sql and as a result the error disappeared:

 40  ,constraint WRH$_SQL_PLAN_PK primary key
 41      (dbid, sql_id, plan_hash_value, id)
 42   using index tablespace SYSAUX
 43  ) tablespace SYSAUX
 44  /

Table created.

Cue the mysterious Twilight Zone music…

I guess this is a “feature” of the startup upgrade command, but the “Table created.” message is kind of confusing.  The table isn’t really created if it already exists.  But I guess the good thing is that it doesn’t report an error.

– Bobby

 



Categories: DBA Blogs

#Oracle Certification: Always go for the most recent one!

The Oracle Instructor - Tue, 2014-09-23 11:14

It is quite often that I encounter attendees in my Oracle University courses who strive to become OCP or sometimes even OCM, asking me whether they should go for an older version's certificate before they take on the most recent. The reasoning behind those questions is mostly that it may be easier to do it with the older version. My advice is then always: Go for the most recent version! No Oracle Certification exam is easy, but the older version's certificate is already outdated. The now most recent one will also become outdated sooner than you may think :-)

OCP 12c upgrade

For that reason I really appreciate the option to upgrade from 9i/10g/11g OCA directly to 12c OCP, as discussed in this posting. There is just no point in becoming a new 11g OCP now that 12c is there, in my opinion. What do you think?


Tagged: Oracle Certification
Categories: DBA Blogs