Mihajlo Tekic
_direct_read_decision_statistics_driven, _small_table_threshold and direct path reads on partitioned tables in 11.2.0.3 (Part 2)
To recap, the behavior I observed is that direct path reads will be performed if the number of blocks for all partitions that will be accessed exceeds the _small_table_threshold value. That is, if a table consists of 10 partitions, each having 100 blocks, and a query goes after two of the partitions, direct path reads will be performed if _small_table_threshold is lower than 200.
Also, regardless of how much data has been cached (in the buffer cache) for each of the partitions, if direct path reads are to be performed, all partition segments will be directly scanned. So, it is an all-or-nothing situation.
I also indicated that _direct_read_decision_statistics_driven parameter was set to TRUE (default) for the tests done in my earlier post.
What is _direct_read_decision_statistics_driven anyway? According to the parameter description, it enables the direct path read decision to be based on optimizer statistics. If the parameter is set to FALSE, Oracle will use segment headers to determine how many blocks the segment has (read Tanel Poder’s blogpost for more information).
Let’s see how queries that access table partitions (full scan) behave if _direct_read_decision_statistics_driven is set to FALSE in 11.2.0.3. My expectation was that the behavior should be the same as with TRUE. I thought that once Oracle got the number of blocks in each of the partitions, it would use the same calculation as if the parameter were set to TRUE. Let’s see.
But before moving forward, a small disclaimer: do not perform these tests in production or any other important environment. Changing undocumented parameters should be done under the guidance of Oracle Support. The information presented here is for demonstration purposes only.
I will use the same table, TEST_PART, that I used in my earlier post.
I started with flushing the buffer cache (to make sure none of the partitions has blocks in the cache).
I set the _direct_read_decision_statistics_driven parameter to FALSE and ran a query that selects data from the PART_1 partition only. Each of the partitions contains 4000 rows stored in 65 blocks, plus one segment header block.
_small_table_threshold in my sandbox environment was set to 117.
SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;
Session altered.
SQL> SELECT count(1) FROM test_part WHERE col1 in (1);
COUNT(1)
----------
4000
SQL> @sese direct
no rows selected
As expected, no direct path reads, since the partition's 65 blocks are below the _small_table_threshold (117) parameter. How about a query that accesses the first two partitions, whose combined size (130 blocks) exceeds the threshold:
SQL> select count(1) from test_part where col1 in (1,2);
COUNT(1)
----------
8000
SQL> @sese direct
no rows selected
How about a query that accesses three partitions:
SQL> select count(1) from test_part where col1 in (1,2,3);
COUNT(1)
----------
12000
SQL> @sese direct
no rows selected
How about if we access all 7 partitions:
SQL> select count(1) from test_part where col1 in (1,2,3,4,5,6,7);
COUNT(1)
----------
28000
SQL> @sese direct
no rows selected
So what is going on? It seems that when _direct_read_decision_statistics_driven is set to FALSE, Oracle makes the decision on a partition-by-partition basis: if the number of blocks in the partition is less than or equal to _small_table_threshold, the buffer cache will be used; otherwise, direct path reads.
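This per-partition rule can be sketched as a small model (hypothetical Python, not Oracle code; the function and names are mine):

```python
# Hypothetical model of the per-partition direct path read decision
# observed with _direct_read_decision_statistics_driven=FALSE: each
# partition segment is compared against _small_table_threshold on its own.

def read_mode_per_partition(partition_blocks, small_table_threshold):
    """Map each partition to the read mode Oracle appeared to choose."""
    return {
        part: 'buffer cache' if blocks <= small_table_threshold
              else 'direct path'
        for part, blocks in partition_blocks.items()
    }

# Seven 65-block partitions with _small_table_threshold=117: every
# partition stays below the threshold, so no direct path reads at all.
parts = {'PART_%d' % n: 65 for n in range(1, 8)}
print(read_mode_per_partition(parts, 117)['PART_1'])  # buffer cache
```

With this model, even the query that touches all seven partitions (455 blocks in total) stays in the buffer cache, matching the test output above.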
What if some of the partitions were already cached in the buffer cache?
In the next test I’ll:
- Flush the buffer cache again
- Set _direct_read_decision_statistics_driven to FALSE
- Run a query that accesses the first two partitions
- Decrease the value for _small_table_threshold to 60
- Run a query that accesses the first three partitions.
- Check if direct path reads were performed and how many
SQL> alter system flush buffer_cache;
System altered.
SQL> alter session set "_direct_read_decision_statistics_driven"=FALSE;
Session altered.
SQL> select count(1) from test_part where col1 in (1,2);
COUNT(1)
----------
8000
SQL> @sese direct
no rows selected
SQL> conn /as sysdba
Connected.
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;
SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1 146024 66
PART_2 146025 66
Now let’s decrease _small_table_threshold to 60 and run a query that scans the first three partitions:
SQL> alter session set "_small_table_threshold"=60;
Session altered.
SQL> alter session set events '10046 trace name context forever, level 8';
Session altered.
SQL> select count(1) from test_part where col1 in (1,2,3);
COUNT(1)
----------
12000
alter session set events '10046 trace name context off';
SQL> @sese direct
SID ID NAME VALUE
---------- ---------- -------------------------------------------------- ----------
9 76 STAT.consistent gets direct 65
9 81 STAT.physical reads direct 65
9 380 STAT.table scans (direct read) 1
If you query X$KCBOQH again you can see that only one block of PART_3 is in the cache. That is the segment header block.
SQL> conn /as sysdba
Connected.
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;
SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1 146024 66
PART_2 146025 66
PART_3 146026 1 <===
Moreover, it seems Oracle examines each of the partitions separately (which I initially expected to be the case even when _direct_read_decision_statistics_driven is set to TRUE) and applies the rules described in Alex Fatkulin’s blogpost. That is, if any of the following is true, Oracle will scan the data in the buffer cache; otherwise direct path reads will be performed:
- the number of blocks in the segment is lower than or equal to _small_table_threshold
- at least 50% of the segment's data blocks are in the buffer cache
- at least 25% of the data blocks are dirty
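These three rules can be summarized in a short sketch (my interpretation in Python; the thresholds come from Alex Fatkulin's post, not from Oracle source):

```python
# Per-segment check (my interpretation): the buffer cache is used if any
# of the three conditions holds; otherwise the segment is read directly.

def use_buffer_cache(seg_blocks, cached_blocks, dirty_blocks,
                     small_table_threshold):
    return (seg_blocks <= small_table_threshold       # small segment
            or cached_blocks >= 0.5 * seg_blocks      # >=50% already cached
            or dirty_blocks >= 0.25 * seg_blocks)     # >=25% dirty blocks

# PART_3 in the test above: 65 blocks, only the header block cached,
# _small_table_threshold lowered to 60 -> direct path reads.
print(use_buffer_cache(65, 1, 0, 60))   # False
```

The same call with the original threshold of 117 returns True, which is consistent with the first test where PART_3 was scanned through the buffer cache.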
I will stop here. I ran the same tests against 11.2.0.4 and 12.1.0.1 and noticed some differences in behavior compared to what I just described for 11.2.0.3. I will post those results in the next few days.
Stay tuned...
_small_table_threshold and direct path reads on partitioned tables in 11.2.0.3
I was troubleshooting a performance problem a few days ago. The database the problem was experienced on had recently been migrated from Oracle 10.2.0.4 to Oracle 11.2.0.3.
Long story short, the problem was described as: the performance of a query that scans two or more partitions in a table is much worse than the combined performance of queries accessing each of the partitions separately.
After a short investigation I narrowed the problem down to direct path reads being the culprit.
As you know, due to the adaptive direct read feature introduced in 11g, full table scans may utilize the PGA instead of the buffer cache, which was always used in earlier versions.
There are a few good articles on this change in behavior, among which I personally favor Tanel's blogpost and hacking session, and the post by Alex Fatkulin. You could also check MOS Note 793845.1.
What I observed in 11.2.0.3.0 was quite surprising and a bit different from what I had read so far. I know that there are different parameters/variables that influence the decision whether or not direct path reads should be used. I tried to be careful and not fall into any of these traps.
Please note all the tests were done in a sandbox environment. I advise against trying these tests in any production environment.
The database version was 11.2.0.3.0.
_serial_direct_read = auto
_direct_read_decision_statistics_driven = TRUE
_small_table_threshold = 117
As you may know, the _small_table_threshold parameter is set to about 2% of the size of the buffer cache. On my test machine I have a pretty small buffer cache, 5892 buffers (117 is 1.98% of 5892):
SQL> SELECT name,block_size,buffers FROM v$buffer_pool;
NAME BLOCK_SIZE BUFFERS
-------------------------------------------------- ---------- ----------
DEFAULT 8192 5892
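The ~2% relationship is easy to verify (a back-of-the-envelope check; the /50 divisor is just my shorthand for the ~2% heuristic, not an official formula):

```python
# _small_table_threshold defaults to roughly 2% of the buffer cache.
buffers = 5892                        # default-pool buffers in my sandbox
stt = buffers // 50                   # ~2% of the cache, truncated
print(stt)                            # 117
print(round(stt / buffers * 100, 2))  # percentage the threshold represents
```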
CREATE TABLE test_part
(
col1 NUMBER NOT NULL
, col2 VARCHAR2(100)
)
PARTITION BY LIST (col1)
(
PARTITION PART_1 VALUES (1)
, PARTITION PART_2 VALUES (2)
, PARTITION PART_3 VALUES (3)
, PARTITION PART_4 VALUES (4)
, PARTITION PART_5 VALUES (5)
, PARTITION PART_6 VALUES (6)
, PARTITION PART_7 VALUES (7)
) ;
INSERT INTO test_part (SELECT mod(rownum,7)+1, rpad('A',100,'A') FROM dual CONNECT BY rownum<=28000);
exec dbms_stats.gather_table_stats(user,'TEST_PART');
SQL> select table_name, partition_name, num_rows, blocks, avg_row_len
2 from user_tab_partitions
3 where table_name='TEST_PART';
TABLE_NAME PARTITION_NAME NUM_ROWS BLOCKS AVG_ROW_LEN
------------------------------ ------------------------------ ---------- ---------- -----------
TEST_PART PART_1 4000 65 104
TEST_PART PART_2 4000 65 104
TEST_PART PART_3 4000 65 104
TEST_PART PART_4 4000 65 104
TEST_PART PART_5 4000 65 104
TEST_PART PART_6 4000 65 104
TEST_PART PART_7 4000 65 104
7 rows selected.
Observation #1 - _small_table_threshold is applied to the total number of blocks expected to be scanned by the query (considering all partition segments that will be accessed)
As you can see, the number of blocks in each of the partitions (65) is lower than the _small_table_threshold value (117). Therefore a query that accesses only one of the partitions uses the buffer cache to store the segment blocks.
SQL> select count(1) from test_part where col1 in (1);
COUNT(1)
----------
4000
SQL> @sese direct
no rows selected
However, a query that accesses two of the partitions, 130 blocks in total, performed direct path reads:
SQL> select count(1) from test_part where col1 in (1,2);
COUNT(1)
----------
8000
SQL> @sese direct
SID ID NAME VALUE
---------- ---------- -------------------------------------------------- ----------
7 76 STAT.consistent gets direct 130
7 81 STAT.physical reads direct 130
7 380 STAT.table scans (direct read) 2
I then raised _small_table_threshold to 130 and re-ran the query:
SQL> alter session set "_small_table_threshold"=130;
Session altered.
SQL> select count(1) from test_part where col1 in (1,2);
COUNT(1)
----------
8000
SQL> @sese direct
SID ID NAME VALUE
---------- ---------- -------------------------------------------------- ----------
7 76 STAT.consistent gets direct 130
7 81 STAT.physical reads direct 130
7 380 STAT.table scans (direct read) 2
How about if we add one more partition to the equation now (_small_table_threshold=130):
SQL> select count(1) from test_part where col1 in (1,2,3);
COUNT(1)
----------
12000
SQL> @sese direct
SID ID NAME VALUE
---------- ---------- -------------------------------------------------- ----------
7 76 STAT.consistent gets direct 325
7 81 STAT.physical reads direct 325
7 380 STAT.table scans (direct read) 5
Therefore, it seems the logic behind the decision whether or not to perform direct path reads is:
IF SUM(blocks of all partitions that are accessed)>_small_table_threshold THEN
perform direct path reads for all partitions that are accessed
ELSE
utilize buffer cache
Again, just to remind you this behavior is specific to 11.2.0.3.
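The pseudocode above can be sketched as follows (a hypothetical Python model of the observed 11.2.0.3 behavior, not Oracle code):

```python
# Statistics-driven decision as observed in 11.2.0.3: sum the
# statistics-reported blocks of every partition the query touches and
# compare the total against _small_table_threshold once, for all of them.

def read_mode_aggregate(blocks_per_partition, small_table_threshold):
    total = sum(blocks_per_partition)
    return 'direct path' if total > small_table_threshold else 'buffer cache'

print(read_mode_aggregate([65], 117))          # buffer cache (65 <= 117)
print(read_mode_aggregate([65, 65], 117))      # direct path  (130 > 117)
print(read_mode_aggregate([65, 65, 65], 130))  # direct path  (195 > 130)
```

Note that the decision is made once for the whole set of partitions, which is exactly why it is all-or-nothing: either every accessed partition is read directly, or every one goes through the buffer cache.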
Observation #2 -
The percentage of cached blocks per partition is not relevant
This brings me to the second observation. If you query X$KCBOQH.NUM_BUF for the partition segments (read Tanel’s blogpost or watch his hacking session) you can see that even though partitions PART_1 and PART_2 were in the cache, Oracle still performed direct path reads for all three partitions:
SQL> conn /as sysdba
Connected.
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;
SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1 146024 66
PART_2 146025 66
PART_3 146026 1
But even when all 3 partitions were fully loaded into the buffer cache, Oracle still performed direct path reads:
SQL> conn *****/*****
Connected.
SQL> select count(1) from test_part where col1 in (3);
COUNT(1)
----------
4000
SQL> conn /as sysdba
Connected.
SQL> select o.subobject_name, b.obj#, sum(b.num_buf)
2 from X$KCBOQH b, dba_objects o
3 where b.obj#=o.data_object_id
4 and o.object_name='TEST_PART'
5 group by o.subobject_name, b.obj#
6 order by 1;
SUBOBJECT_NAME OBJ# SUM(B.NUM_BUF)
------------------------------ ---------- --------------
PART_1 146024 66
PART_2 146025 66
PART_3 146026 66
SQL> conn *****/*****
Connected.
SQL> @sese direct
no rows selected
SQL> select count(1) from test_part where col1 in (1,2,3);
COUNT(1)
----------
12000
SQL> @sese direct
SID ID NAME VALUE
---------- ---------- -------------------------------------------------- ----------
7 76 STAT.consistent gets direct 195
7 81 STAT.physical reads direct 195
7 380 STAT.table scans (direct read) 3
SQL>
(Update: Part2 - what difference does _direct_read_decision_statistics_driven=FALSE make)
I will repeat the same tests in 11.2.0.4 and 12.1.0.1 and see if the behavior is any different.
Stay tuned.
Find the enabled events using ORADEBUG EVENTDUMP
Just a quick note on how to use ORADEBUG to find events that are enabled at system or session level.
Starting with Oracle 10.2 you can use ORADEBUG EVENTDUMP to list all events enabled at either session or system level (I am not sure if this is available in 10.1).
The synopsis is:
oradebug eventdump
SQL> oradebug doc event action eventdump
eventdump
- list events that are set in the group
Usage
-------
eventdump( group < system | process | session >)
SQL> alter session set events '10046 trace name context forever, level 8';
Session altered.
SQL> alter session set events '942 trace name errorstack';
Session altered.
SQL> alter system set events 'trace[px_control][sql:f54y11t3njdpj]';
System altered.
SQL> oradebug setmypid
SQL> oradebug eventdump session;
trace [RDBMS.PX_CONTROL] [sql:f54y11t3njdpj]
sql_trace level=8
942 trace name errorstack
SQL> oradebug eventdump system
trace [RDBMS.PX_CONTROL] [sql:f54y11t3njdpj]
This note is mostly for my own reference, but I hope someone else finds it useful too.
Fun with global temporary tables in Oracle 12c
CREATE GLOBAL TEMPORARY TABLE t1 (id NUMBER);
CREATE GLOBAL TEMPORARY TABLE t2 (id NUMBER);
CREATE TABLE t3 (id NUMBER);
SQL> INSERT INTO t1 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.
SQL> INSERT INTO t2 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.
SQL> INSERT INTO t3 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.
SQL> exec DBMS_STATS.GATHER_TABLE_STATS(user,'T1');
PL/SQL procedure successfully completed.
SQL> SELECT count(1) FROM t1;
COUNT(1)
----------
5
SQL> INSERT INTO t1 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.
SQL> INSERT INTO t2 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.
SQL> INSERT INTO t3 (SELECT rownum FROM dual CONNECT BY rownum<=5);
5 rows created.
SQL> exec DBMS_STATS.GATHER_TABLE_STATS(user,'T3');
PL/SQL procedure successfully completed.
SQL> SELECT count(1) FROM t1;
COUNT(1)
----------
0
The rows inserted into T1 are gone. The likely explanation: in 12c, gathering statistics on a global temporary table (session-private statistics are the default) no longer issues an implicit commit, which is why the first count still returned 5; gathering statistics on the regular table T3, however, still commits the transaction, and since T1 and T2 were created with the default ON COMMIT DELETE ROWS, their rows disappeared.
Playing with Oracle Spatial - An important consideration when using SDO_NN operator
This post covers what I believe is an important consideration one should make when using the SDO_NN spatial operator with the sdo_num_res parameter.
Please note, the database I use in the examples below is Oracle 11.2.0.3.0 Enterprise Edition with no additional patches applied and default Oracle Spatial settings.
CREATE TABLE restaurants
(
id NUMBER NOT NULL,
point MDSYS.SDO_GEOMETRY
);
ALTER TABLE restaurants ADD CONSTRAINT restaurants_pk PRIMARY KEY (id);
INSERT INTO user_sdo_geom_metadata
( TABLE_NAME
, COLUMN_NAME
, DIMINFO
, SRID)
VALUES
( 'restaurants'
, 'POINT'
, SDO_DIM_ARRAY(
SDO_DIM_ELEMENT('X', 0, 10000000, 0.005),
SDO_DIM_ELEMENT('Y', 0, 10, 0.005)
)
, null -- SRID
);
CREATE INDEX restaurants_sidx ON restaurants(point) INDEXTYPE IS MDSYS.SPATIAL_INDEX;
INSERT INTO restaurants
(SELECT rownum --id
, SDO_GEOMETRY(
2001 -- 2 dimensional point
, null -- SDO SRID
, SDO_POINT_TYPE(rownum,0,null) --x=rownum, y=0
, null
, null
)
FROM dual CONNECT BY rownum<=500000
);
SQL> SELECT p.id, sdo_nn_distance(1) distance
2 FROM restaurants p
3 WHERE SDO_NN(p.point,
4 SDO_GEOMETRY(
5 2001 -- 2 dimensional point
6 , null
7 , SDO_POINT_TYPE(10,0,null)
8 , null
9 , null
10 ),
11 'sdo_num_res=5',
12 1
13 )='TRUE'
14* ORDER BY 2;
ID DISTANCE
---------- ----------
10 0
9 1
11 1
12 2
8 2
SQL>
CREATE TABLE menu_items
(id NUMBER NOT NULL,
restaurants_id NUMBER NOT NULL,
item_number NUMBER NOT NULL,
description varchar2(100)
);
ALTER TABLE menu_items ADD CONSTRAINT menu_items_pk PRIMARY KEY (id);
ALTER TABLE menu_items ADD CONSTRAINT menu_items_rest_FK FOREIGN KEY (restaurants_id) REFERENCES restaurants(id);
CREATE INDEX menu_items_restaurants_idx ON menu_items(restaurants_id);
INSERT INTO menu_items
(SELECT rownum
, b.rn
, a.rn
, rpad(rownum,100,'A')
FROM
(SELECT rownum rn FROM dual CONNECT BY rownum<=5) a, --5 menu items per rest.
(SELECT rownum rn FROM dual CONNECT BY rownum<=500000) b
);
commit;
BEGIN
DBMS_STATS.GATHER_TABLE_STATS(ownname=>user,tabname=>'restaurants',cascade=>TRUE);
DBMS_STATS.GATHER_TABLE_STATS(ownname=>user,tabname=>'menu_items',cascade=>TRUE, method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO');
END;
/
SELECT t1.id
, t2.id
, t2.restaurants_id
, t2.item_number
FROM restaurants t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
AND SDO_NN(t1.point,
SDO_GEOMETRY(
2001 -- 2 dimensional point
, null
, SDO_POINT_TYPE(10,0,null)
, null
, null
),
'sdo_num_res=5',
1
)='TRUE'
ORDER BY t1.id, t2.item_number;
SQL>SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
2 FROM restaurants t1
3 , menu_items t2
4 WHERE t1.id=t2.restaurants_id
5 AND SDO_NN(t1.point,
6 SDO_GEOMETRY(
7 2001 -- 2 dimensional point
8 , null
9 , SDO_POINT_TYPE(10,0,null)
10 , null
11 , null
12 ),
13 'sdo_num_res=5',
14 1
15 )='TRUE'
16 ORDER BY t1.id, t2.item_number;
ID ID RESTAURANTS_ID ITEM_NUMBER
---------- ---------- -------------- -----------
8 8 8 1
8 500008 8 2
8 1000008 8 3
8 1500008 8 4
8 2000008 8 5
9 9 9 1
9 500009 9 2
9 1000009 9 3
9 1500009 9 4
9 2000009 9 5
10 10 10 1
10 500010 10 2
10 1000010 10 3
10 1500010 10 4
10 2000010 10 5
11 11 11 1
11 500011 11 2
11 1000011 11 3
11 1500011 11 4
11 2000011 11 5
12 12 12 1
12 500012 12 2
12 1000012 12 3
12 1500012 12 4
12 2000012 12 5
25 rows selected.
SQL> SELECT * FROM TABLE(dbms_xplan.display_cursor(null,null,'ALLSTATS LAST'));
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID gwpqub3k0awqm, child number 0
-------------------------------------
SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id,
t2.item_number FROM restaurants t1 , menu_items t2 WHERE
t1.id=t2.restaurants_id AND SDO_NN(t1.point, SDO_GEOMETRY(
2001 -- 2 dimensional point , null ,
SDO_POINT_TYPE(10,0,null) , null , null ),
'sdo_num_res=5', 1 )='TRUE' ORDER BY t1.id, t2.item_number
Plan hash value: 2076547507
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:01.32 | 41540 | 41465 | | | |
| 1 | SORT ORDER BY | | 1 | 24678 | 25 |00:00:01.32 | 41540 | 41465 | 2048 | 2048 | 2048 (0)|
|* 2 | HASH JOIN | | 1 | 24678 | 25 |00:00:01.32 | 41540 | 41465 | 1517K| 1517K| 886K (0)|
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5000 | 5 |00:00:00.01 | 64 | 0 | | | |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 63 | 0 | | | |
| 5 | TABLE ACCESS FULL | MENU_ITEMS | 1 | 2500K| 2500K|00:00:00.62 | 41476 | 41465 | | | |
------------------------------------------------------------------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("T1"."ID"="T2"."RESTAURANTS_ID")
4 - access("MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='
TRUE')
29 rows selected.
SQL>
Why selectivity of 1%?
SDO_NN is an operator bound to the implementation of the NN function that is part of the PRVT_IDX package owned by MDSYS.
By default, statistics for the PRVT_IDX package are defined by the SDO_STATISTICS type. That is, the Extensible Optimizer feature is used to define how cardinality and cost are calculated for functions/procedures defined in the package. (If statistics are not associated, the CBO uses a default selectivity of 1%.)
It seems the logic implemented in the ODCIStatsSelectivity() function is not good enough to detect that we are only after the first N rows, as defined with the SDO_NUM_RES parameter.
Here is what the 10053 trace file shows for the query:
MDSYS.SDO_STATISTICS.ODCIStatsFunctionCost returned:
CPUCost : 100000000000000
IOCost : 100000000000000
NetworkCost : -1
Calling user-defined selectivity function...
predicate: "MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE'
declare
sel number;
obj0 "MDSYS"."SDO_GEOMETRY" := "MDSYS"."SDO_GEOMETRY"(NULL, NULL, NULL, NULL, NULL);
begin
:1 := "MDSYS"."SDO_STATISTICS".ODCIStatsSelectivity(
sys.ODCIPREDINFO('MDSYS',
'PRVT_IDX',
'NN',
45),
sel,
sys.ODCIARGDESCLIST(sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(2, 'RESTAURANTS', 'TEST', '"POINT"', NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL)),
:3,
:4
, obj0, "MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL), :5,
sys.ODCIENV(:6,:7,:8,:9));
if sel IS NULL then
:2 := -1.0;
else
:2 := sel;
end if;
exception
when others then
raise;
end;
Bind :3 Value 'TRUE'
Bind :4 Value 'TRUE'
Bind :5 Value 'sdo_num_res=5'
ODCIEnv Bind :6 Value 0
ODCIEnv Bind :7 Value 0
ODCIEnv Bind :8 Value 0
ODCIEnv Bind :9 Value 3
MDSYS.SDO_STATISTICS.ODCIStatsSelectivity returned selectivity: 1.00000000%
Table: RESTAURANTS Alias: T1
Card: Original: 500000.000000 Rounded: 5000 Computed: 5000.00 Non Adjusted: 5000.00
To get a better estimate, I created a custom statistics type (MDSYS.SDO_STATISTICS_CUSTOM) whose ODCIStatsSelectivity takes sdo_num_res into account. The relevant part of the implementation:
STATIC FUNCTION ODCIStatsSelectivity (
pred_info IN SYS.ODCIPredInfo,
sel OUT NUMBER,
args IN SYS.ODCIArgDescList,
p_start IN VARCHAR2,
p_stop IN VARCHAR2,
geometry1 IN MDSYS.SDO_GEOMETRY,
geometry2 IN MDSYS.SDO_GEOMETRY,
param IN VARCHAR2,
env IN SYS.ODCIEnv
) RETURN NUMBER IS
BEGIN
*******************
if param IS NULL then
sel:=1;
else if lower(substr(param,1,instr(param,'=')-1)) = 'sdo_num_res' then
SELECT decode(num_rows,null,1,
decode(to_number(trim(substr(param,instr(param,'=')+1))),0,1,to_number(trim(substr(param,instr(param,'=')+1)))/num_rows)*100
)
INTO sel
FROM dba_tables where owner=args(3).TABLESCHEMA and table_name=args(3).TABLENAME;
if sel>100 then
sel:=100 ;
end if;
end if;
end if;
****************
RETURN ODCIConst.success;
END ODCIStatsSelectivity;
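The SQL fragment above boils down to simple arithmetic. Here it is as a Python sketch (the function name and structure are mine), together with the cardinality estimates it implies for the 500,000-row RESTAURANTS table:

```python
# Selectivity (in percent) mimicking the custom ODCIStatsSelectivity:
# with sdo_num_res=N the operator returns at most N rows, so selectivity
# is N/num_rows*100; without usable input the CBO default of 1% applies.

def sdo_nn_selectivity(param, num_rows):
    if param is None or not num_rows:
        return 1.0                          # default selectivity (1%)
    name, _, value = param.partition('=')
    if name.strip().lower() != 'sdo_num_res':
        return 1.0
    n = int(value)
    return min(n / num_rows * 100, 100) if n else 1.0

num_rows = 500000
sel = sdo_nn_selectivity('sdo_num_res=5', num_rows)
print(sel)                            # ~0.001 (%)
print(round(num_rows * 1.0 / 100))    # default 1%  -> 5000-row estimate
print(round(num_rows * sel / 100))    # 0.001%      -> 5-row estimate
```

The 5000-row estimate matches the E-Rows column of the original plan, and the 5-row estimate matches the plan produced once the custom statistics are in place.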
Let's see how it works:
SQL> EXPLAIN PLAN FOR
2 SELECT t1.id, t2.id, t2.restaurants_id, t2.item_number
3 FROM restaurants t1
4 , menu_items t2
5 WHERE t1.id=t2.restaurants_id
6 AND SDO_NN(t1.point,
7 SDO_GEOMETRY(
8 2001 -- 2 dimensional point
9 , null
10 , SDO_POINT_TYPE(10,0,null)
11 , null
12 , null
13 ),
14 'sdo_num_res=5',
15 1
16 )='TRUE'
17* ORDER BY t1.id, t2.item_number
SQL> /
Explained.
SQL> set line 200 pagesize 9999
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2341889131
------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 25 | 1000 | 36 (3)| 00:00:01 |
| 1 | SORT ORDER BY | | 25 | 1000 | 36 (3)| 00:00:01 |
| 2 | NESTED LOOPS | | 25 | 1000 | 35 (0)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 5 | 80 | 0 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | | | 0 (0)| 00:00:01 |
| 5 | TABLE ACCESS BY INDEX ROWID| MENU_ITEMS | 5 | 120 | 7 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | | 2 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,N
ULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')
6 - access("T1"."ID"="T2"."RESTAURANTS_ID")
20 rows selected.
As you can see, the cardinality is correctly calculated, which allowed the CBO to produce a better execution plan. With this plan in place, the query completes in less than 1 cs.
If you look into the 10053 trace file you can also see that the calculated selectivity is now 0.001%:
MDSYS.SDO_STATISTICS_CUSTOM.ODCIStatsFunctionCost returned:
CPUCost : 10000
IOCost : 10000
NetworkCost : 10000
Calling user-defined selectivity function...
predicate: "MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE'
declare
sel number;
obj0 "MDSYS"."SDO_GEOMETRY" := "MDSYS"."SDO_GEOMETRY"(NULL, NULL, NULL, NULL, NULL);
begin
:1 := "MDSYS"."SDO_STATISTICS_CUSTOM".ODCIStatsSelectivity(
sys.ODCIPREDINFO('MDSYS',
'PRVT_IDX',
'NN',
45),
sel,
sys.ODCIARGDESCLIST(sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(2, 'RESTAURANTS', 'TEST', '"POINT"', NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL), sys.ODCIARGDESC(3, NULL, NULL, NULL, NULL, NULL, NULL)),
:3,
:4
, obj0, "MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL), :5,
sys.ODCIENV(:6,:7,:8,:9));
if sel IS NULL then
:2 := -1.0;
else
:2 := sel;
end if;
exception
when others then
raise;
end;
Bind :3 Value 'TRUE'
Bind :4 Value 'TRUE'
Bind :5 Value 'sdo_num_res=5'
ODCIEnv Bind :6 Value 0
ODCIEnv Bind :7 Value 0
ODCIEnv Bind :8 Value 0
ODCIEnv Bind :9 Value 3
MDSYS.SDO_STATISTICS_CUSTOM.ODCIStatsSelectivity returned selectivity: 0.00100000%
For comparison, forcing the correct cardinality with a hint yields the same improved plan:
SELECT /*+ cardinality(t1 5) gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
FROM restaurants t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
AND SDO_NN(t1.point,
SDO_GEOMETRY(
2001 -- 2 dimensional point
, null
, SDO_POINT_TYPE(10,0,null)
, null
, null
),
'sdo_num_res=5',
1
)='TRUE'
ORDER BY t1.id, t2.item_number;
Plan hash value: 2341889131
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:00.01 | 92 | 5 | | | |
| 1 | SORT ORDER BY | | 1 | 25 | 25 |00:00:00.01 | 92 | 5 | 2048 | 2048 | 2048 (0)|
| 2 | NESTED LOOPS | | 1 | 25 | 25 |00:00:00.01 | 92 | 5 | | | |
| 3 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5 | 5 |00:00:00.01 | 55 | 0 | | | |
|* 4 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 54 | 0 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| MENU_ITEMS | 5 | 5 | 25 |00:00:00.01 | 37 | 5 | | | |
|* 6 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | 5 | 25 |00:00:00.01 | 12 | 0 | | | |
----------------------------------------------------------------------------------------------------------------------------------------------------------
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
4 - access("MDSYS"."SDO_NN"("T1"."POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')
6 - access("T1"."ID"="T2"."RESTAURANTS_ID")
29 rows selected.
Another option, without associating custom statistics, is to push the SDO_NN predicate into an inline view restricted with rownum:
SELECT /*+ gather_plan_statistics */ t1.id, t2.id, t2.restaurants_id, t2.item_number
FROM (SELECT *
FROM restaurants
WHERE
SDO_NN(point,
SDO_GEOMETRY(
2001 -- 2 dimensional point
, null
, SDO_POINT_TYPE(10,0,null)
, null
, null
),
'sdo_num_res=5',
1
)='TRUE'
AND rownum<=5
) t1
, menu_items t2
WHERE t1.id=t2.restaurants_id
ORDER BY t1.id, t2.item_number;
Plan hash value: 3570932640
----------------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 25 |00:00:00.01 | 101 | | | |
| 1 | SORT ORDER BY | | 1 | 25 | 25 |00:00:00.01 | 101 | 2048 | 2048 | 2048 (0)|
| 2 | NESTED LOOPS | | 1 | | 25 |00:00:00.01 | 101 | | | |
| 3 | NESTED LOOPS | | 1 | 25 | 25 |00:00:00.01 | 76 | | | |
| 4 | VIEW | | 1 | 5 | 5 |00:00:00.01 | 64 | | | |
|* 5 | COUNT STOPKEY | | 1 | | 5 |00:00:00.01 | 64 | | | |
| 6 | TABLE ACCESS BY INDEX ROWID| RESTAURANTS | 1 | 5 | 5 |00:00:00.01 | 64 | | | |
|* 7 | DOMAIN INDEX | RESTAURANTS_SIDX | 1 | | 5 |00:00:00.01 | 63 | | | |
|* 8 | INDEX RANGE SCAN | MENU_ITEMS_RESTAURANTS_IDX | 5 | 5 | 25 |00:00:00.01 | 12 | | | |
| 9 | TABLE ACCESS BY INDEX ROWID | MENU_ITEMS | 25 | 5 | 25 |00:00:00.01 | 25 | | | |
----------------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - filter(ROWNUM<=5)
7 - access("MDSYS"."SDO_NN"("POINT","MDSYS"."SDO_GEOMETRY"(2001,NULL,"SDO_POINT_TYPE"(10,0,NULL),NULL,NULL),'sdo_num_res=5',1)='TRUE')
8 - access("T1"."ID"="T2"."RESTAURANTS_ID")
34 rows selected.
I hope you got some valuable information out of this post.
CURSOR_SHARING=SIMILAR available in Oracle 12c, or not ?
test[12,133]> ALTER SESSION SET CURSOR_SHARING=SIMILAR;
Session altered.
test[12,133]> ALTER SESSION SET CURSOR_SHARING=foo;
ERROR:
ORA-00096: invalid value FOO for parameter cursor_sharing, must be
from among SIMILAR, EXACT, FORCE
CREATE TABLE tab1 AS (SELECT * FROM user_objects);
ALTER SESSION SET CURSOR_SHARING=SIMILAR;
SELECT COUNT(1) FROM tab1 WHERE object_id>1;
SELECT COUNT(1) FROM tab1 WHERE object_id>2;
SELECT COUNT(1) FROM tab1 WHERE object_id>3;
SELECT COUNT(1) FROM tab1 WHERE object_id>4;
SELECT COUNT(1) FROM tab1 WHERE object_id>5;
SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1"
set line 200
col sql_text format a80
SELECT sql_id, sql_text, executions, child_number FROM v$sql WHERE sql_id='c73v21bgp4956';
SQL_ID SQL_TEXT EXECUTIONS CHILD_NUMBER
------------- -------------------------------------------------------------------------------- ---------- ------------
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1" 5 0
set line 200
col sql_text format a80
SELECT sql_id, sql_text, executions, child_number FROM v$sql WHERE sql_id='c73v21bgp4956';
SQL_ID SQL_TEXT EXECUTIONS CHILD_NUMBER
------------- -------------------------------------------------------------------------------- ---------- ------------
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1" 1 0
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1" 1 1
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1" 1 2
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1" 1 3
c73v21bgp4956 SELECT COUNT(:"SYS_B_0") FROM tab1 WHERE object_id>:"SYS_B_1" 1 4
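To investigate why the child cursors in the second listing were not shared, one could inspect V$SQL_SHARED_CURSOR. This is only a sketch; the REASON column is an XML CLOB in recent versions and the exact reason text varies by release:

```sql
-- Each row is one child cursor; the Y/N reason columns (and the REASON
-- CLOB) record why that child could not reuse an existing one.
SELECT child_number, reason
  FROM v$sql_shared_cursor
 WHERE sql_id = 'c73v21bgp4956';
```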
ORA-12514 during switchover using Data Guard Broker (Update)
DB112A=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hosta)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DB112A)
)
)
DB112B=
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostb)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = DB112B)
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = db112a)
(ORACLE_HOME = /oracle/product/11.2/dbms)
(SID_NAME = db112a)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hosta)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = db112b)
(ORACLE_HOME = /oracle/product/11.2/dbms)
(SID_NAME = db112b)
)
)
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hostb)(PORT = 1521))
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
)
)
DGMGRL> connect sys
Password:
Connected.
DGMGRL> show configuration
Configuration - db112
Protection Mode: MaxPerformance
Databases:
dg112a - Primary database
dg112b - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
DGMGRL>
DGMGRL> edit database db112a set property StaticConnectIdentifier='db112a';
DGMGRL> edit database db112b set property StaticConnectIdentifier='db112b';
DGMGRL> switchover to dg112b
Performing switchover NOW, please wait...
New primary database "dg112b" is opening...
Operation requires shutdown of instance "dg112a" on database "dg112a"
Shutting down instance "dg112a"...
ORACLE instance shut down.
Operation requires startup of instance "dg112a" on database "dg112a"
Starting instance "dg112a"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "dg112b"
DGMGRL> show configuration
Configuration - db112
Protection Mode: MaxPerformance
Databases:
dg112b - Primary database
dg112a - Physical standby database
Fast-Start Failover: DISABLED
Configuration Status:
SUCCESS
Oracle 12c - Session-specific statistics for global temporary tables
However, when temporary tables are involved, it is often a challenge for the optimizer to generate an execution plan that is efficient for all use cases.
When a temporary table is always populated with data of similar characteristics, a common practice is to collect and fix representative statistics (dynamic sampling may also fit this case) so that the optimizer produces efficient and stable execution plans: one size fits all.
Often, however, temporary tables have no statistics at all, simply because it is difficult to find a set that covers every use case.
Consider the following example:
There is table MY_OBJECTS created using the following CTAS statement:
CREATE TABLE my_objects AS
SELECT * FROM all_objects
, (SELECT * FROM dual CONNECT BY rownum<=10);
A table MY_TYPES that contains all distinct data types derived from ALL_OBJECTS
CREATE TABLE my_types AS
SELECT DISTINCT object_type FROM all_objects;
CREATE TABLE t AS
SELECT * FROM my_objects WHERE 1=2;
CREATE GLOBAL TEMPORARY TABLE temp_objects
(object_id NUMBER NOT NULL)
ON COMMIT PRESERVE ROWS;
EXEC dbms_stats.gather_table_stats(ownname=>'TEST',tabname=>'MY_OBJECTS',cascade=>true);
EXEC dbms_stats.gather_table_stats(ownname=>'TEST',tabname=>'MY_TYPES',cascade=>true);
INSERT INTO t
SELECT /*+ gather_plan_statistics */ o.*
FROM my_objects o
, my_types t
, temp_objects tt
WHERE o.object_type=t.object_type
AND o.object_id=tt.object_id;
test[22,39]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1);
1 row created.
test[22,39]> commit;
Commit complete.
test[22,39]> @insert_1
10 rows created.
test[22,39]> @explain_last
PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id
Plan hash value: 3579371359
---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.56 | 14067 | 14028 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.56 | 14067 | 14028 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10 |00:00:00.04 | 14036 | 14028 | 1519K| 1519K| 1521K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38 |00:00:00.02 | 6 | 2 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1 |00:00:00.01 | 3 | 0 | | | |
| 5 | BUFFER SORT | | 1 | 38 | 38 |00:00:00.02 | 3 | 2 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.02 | 3 | 2 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:02.09 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
30 rows selected.
test[251,65]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);
1000 rows created.
test[251,65]> @insert_1
10000 rows created.
test[251,65]> @explain_last
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id
Plan hash value: 3579371359
---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:03.87 | 15495 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:03.87 | 15495 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10000 |00:00:00.22 | 14037 | 14026 | 3162K| 2024K| 3101K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38000 |00:00:00.73 | 7 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1000 |00:00:00.02 | 4 | 0 | | | |
| 5 | BUFFER SORT | | 1000 | 38 | 38000 |00:00:00.21 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.97 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
30 rows selected.
sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';
PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 2 0 Y N
sys[263,185]>
There are several strategies I am aware of for attacking this problem: stored outlines; fixed statistics; SQL plan baselines; hints (the cardinality hint, for example); periodically invalidating cursors; using separate schemas for different types of workloads; and a couple of others, more or less effective. Each of these comes with its own limitations.
With Oracle 12c, users have the ability to gather session-specific statistics for global temporary tables. This feature seems to address the challenges above nicely.
When session-specific statistics are used, each session collects its own “private” statistics, and the CBO generates an execution plan (cursor) based on them. These cursors are not shared with other sessions. Also, a cursor that was generated using session-specific statistics is invalidated when the statistics are re-gathered within the same session.
Session specific statistics are enabled when GLOBAL_TEMP_TABLE_STATS preference is set to SESSION.
exec dbms_stats.set_table_prefs(ownname=>'TEST', tabname=>'TEMP_OBJECTS',pname=>'GLOBAL_TEMP_TABLE_STATS',pvalue=>'SESSION');
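To confirm the preference took effect, one can query it back with DBMS_STATS.GET_PREFS (a sketch, using the TEST schema and TEMP_OBJECTS table from this example):

```sql
-- Should return SESSION once the preference has been set for the table
SELECT dbms_stats.get_prefs(pname   => 'GLOBAL_TEMP_TABLE_STATS',
                            ownname => 'TEST',
                            tabname => 'TEMP_OBJECTS') AS gtt_stats_pref
  FROM dual;
```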
Once the preference is set, all one needs to do is gather statistics; they will then be collected as session-specific.
That is all. Now let's see which plans are generated for each of the sessions if we repeat the example from above with session statistics in use.
Session with SID 251 loads 1000 rows into TEMP_OBJECTS table, collects stats and executes the same insert statement.
test[251,69]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);
1000 rows created.
test[251,69]> exec dbms_stats.gather_table_stats(ownname=>'TEST', tabname=>'TEMP_OBJECTS');
PL/SQL procedure successfully completed.
test[251,69]> @insert_1
10000 rows created.
test[251,69]> @explain_last
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 1
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id
Plan hash value: 4256520316
---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.71 | 15514 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.71 | 15514 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 9858 | 10000 |00:00:00.07 | 14037 | 14026 | 1696K| 1696K| 1583K (0)|
| 3 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
|* 4 | HASH JOIN | | 1 | 9858 | 10000 |00:00:00.04 | 14034 | 14026 | 2293K| 2293K| 1607K (0)|
| 5 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1000 | 1000 |00:00:00.01 | 4 | 0 | | | |
| 6 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.64 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE")
4 - access("O"."OBJECT_ID"="TT"."OBJECT_ID")
Note
-----
- Global temporary table session private statistics used
30 rows selected.
sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';
PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 2 0 Y N
4256520316 1 00000000625A3928 1 0 Y N
sys[263,185]>
test[26,145]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1000);
1000 rows created.
test[26,145]> commit;
Commit complete.
test[26,145]> @insert_1
10000 rows created.
test[26,145]> @explain_last
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 0
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id
Plan hash value: 3579371359
---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:03.71 | 15513 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:03.71 | 15513 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10000 |00:00:00.13 | 14039 | 14026 | 3162K| 2024K| 3114K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38000 |00:00:00.64 | 9 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1000 |00:00:00.01 | 6 | 0 | | | |
| 5 | BUFFER SORT | | 1000 | 38 | 38000 |00:00:00.19 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:01.88 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
30 rows selected.
sys[263,185]> select plan_hash_value, child_number, CHILD_ADDRESS, executions, invalidations, is_shareable, is_obsolete from v$sql where sql_id='8a1c7phuna9vn';
PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
4256520316 1 00000000625A3928 1 0 Y N
test[251,69]> DELETE FROM temp_objects;
1000 rows deleted.
test[251,69]> INSERT INTO temp_objects (SELECT object_id FROM all_objects WHERE rownum<=1);
1 row created.
test[251,69]> commit;
Commit complete.
test[251,69]> exec dbms_stats.gather_table_stats(ownname=>'TEST', tabname=>'TEMP_OBJECTS');
PL/SQL procedure successfully completed.
test[251,69]>
PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
4256520316 1 00000000625A3928 1 1 Y N
test[251,69]> @insert_1
10 rows created.
test[251,69]> @explain_last
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 8a1c7phuna9vn, child number 1
-------------------------------------
insert into t select /*+ gather_plan_statistics */ o.* from my_objects
o, my_types t, temp_objects tt where o.object_type=t.object_type and
o.object_id=tt.object_id
Plan hash value: 3579371359
---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 1 | | 0 |00:00:02.43 | 14051 | 14026 | | | |
| 1 | LOAD TABLE CONVENTIONAL | | 1 | | 0 |00:00:02.43 | 14051 | 14026 | | | |
|* 2 | HASH JOIN | | 1 | 10 | 10 |00:00:00.01 | 14037 | 14026 | 1519K| 1519K| 1426K (0)|
| 3 | MERGE JOIN CARTESIAN | | 1 | 38 | 38 |00:00:00.01 | 7 | 0 | | | |
| 4 | TABLE ACCESS FULL | TEMP_OBJECTS | 1 | 1 | 1 |00:00:00.01 | 4 | 0 | | | |
| 5 | BUFFER SORT | | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | 73728 | 73728 | |
| 6 | TABLE ACCESS FULL | MY_TYPES | 1 | 38 | 38 |00:00:00.01 | 3 | 0 | | | |
| 7 | TABLE ACCESS FULL | MY_OBJECTS | 1 | 882K| 882K|00:00:02.05 | 14030 | 14026 | | | |
---------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - access("O"."OBJECT_TYPE"="T"."OBJECT_TYPE" AND "O"."OBJECT_ID"="TT"."OBJECT_ID")
Note
-----
- Global temporary table session private statistics used
30 rows selected.
PLAN_HASH_VALUE CHILD_NUMBER CHILD_ADDRESS EXECUTIONS INVALIDATIONS I I
--------------- ------------ ---------------- ---------- ------------- - -
3579371359 0 00000000618B54B8 3 0 Y N
3579371359 1 00000000625A3928 1 1 Y N
test[251,69]> SELECT owner, table_name, num_rows, last_analyzed, scope FROM dba_tab_statistics WHERE table_name='TEMP_OBJECTS';
OWNER TABLE_NAME NUM_ROWS LAST_ANAL SCOPE
--------------- --------------- ---------- --------- -------
TEST TEMP_OBJECTS SHARED
TEST TEMP_OBJECTS 1 30-JUN-13 SESSION
Review: uCertify PrepKit 1Z0-047
I was asked by uCertify to review their preparation kit product.
I am currently preparing for the Oracle 1Z0-047 Oracle Database: SQL Certified Expert exam, and I thought this could be a good chance to test my current knowledge, so I chose to review their preparation kit for exactly this exam. (Link)
To be honest, this is the first GUI-based prep kit I have ever tried. So far I have been using the official Oracle documentation as my primary source, along with some of the books (exam study guides) that were available.
My study strategy is to read the official documentation and practice, practice, practice.
From my experience, the toughest part of preparing for an exam is answering the question: “Am I ready enough?”
Usually I go through the questions available in some of the preparation books, and make up even more questions based on different scenarios. I also use the OTN forums to find discussions on challenging topics where I can test my knowledge.
With that in mind, the uCertify prep kit is just another handy resource one can use to test one's knowledge before actually taking the exam.
It is a solid product that provides more than three hundred questions combined into one diagnostic test, one final test, four practice tests and a quiz. The user also has the ability to create custom tests from the questions he or she finds interesting.
A number of the questions are pretty challenging. For every question, the user has the option to read an explanation of the answers, including references to the Oracle documentation or other resources.
For each question, there is an option to start a discussion with other users, to send feedback to uCertify, and to write personal notes and tags.
The feedback and discussion features are really nice. I found some answers I did not agree with and used the feedback feature to let uCertify know. I tested the discussion feature as well, and it works nicely.
The exam objectives are well covered. There are also questions specific to 11g version of the database.
The software has an update option so the user can download the latest updates and fixes.
The graphical user interface is very nice and the navigation is great. Each of the test results can be saved and used for later reference. There are also features like flash cards, a quiz, study notes and articles, and a good number of notes and articles are available.
Finally, there is a readiness report that can help the user answer the question: “Am I ready for the real thing?”
In general, this is a nice product that can be very helpful when preparing for an exam (they have various prep kits available (Link)). However, I don't want anybody to get the impression that I consider it the only resource one could use to prepare for and pass an exam. Whenever I was asked for advice on which resources one should (must) use to prepare for an exam, my answer was always straightforward: the official documentation, period. (The official trainings are also very valuable, but sometimes they are a must-do regardless of someone's recommendations :-))
This product, and all the others available out there (books, prep kits, study guides), are just additional resources for learning and practicing that one may consider using when preparing for an exam.
Save some disk space - One Reminder and a Cool Windows Command
It’s the time of the year when the January CPU is released. I’d like to remind you to think about cleaning up your obsolete backups from $ORACLE_HOME/.patch_storage directory (the ones that are not needed anymore for rollback purposes).
Starting with 10g Release 2, Oracle backs up the affected libraries along with the rest of the affected files. Some of these libraries can be pretty big, so after several CPUs the amount of disk space consumed by these backups can be significant.
To avoid wasting disk space unnecessarily, you can use the OPatch utility with its util cleanup option to remove backups that are no longer needed. Below are the syntax and the options available when using OPatch to clean up old backups:
SYNTAX
opatch util cleanup [-invPtrLoc <path to oraInst.loc>]
[-jre <LOC>] [-oh <ORACLE_HOME>]
[-silent] [-report]
[-ps <patch ID with timestamp>, this will
be located under ORACLE_HOME/.patch_storage/]
OPTIONS
-invPtrLoc
Used to locate the oraInst.loc file. Needed when the
installation used the -invPtrLoc flag. This should be
the path to the oraInst.loc file.
-jre
This option tells OPatch to use JRE (java) from the
specified location instead of the default location
under Oracle Home. Both -jdk and -jre options cannot
be specified together. OPatch will display error in
that case.
-oh
The oracle home to work on. This takes precedence over
the environment variable ORACLE_HOME.
-ps
This option is used to specify the Patch ID with timestamp.
This Patch ID with timestamp should be the same as in
.patch_storage directory.
A directory by this name will be present under
ORACLE_HOME/.patch_storage. If this directory is specified
and is valid, then the contents specified in the description
will be cleaned up only for this patch. Otherwise, all patch
related directories will be acted upon by this utility.
-silent
In silent mode, the cleanup always takes place.
-report
Prints the operations without actually executing them.
Make sure you specify the patch ID (the -ps parameter) of the patch whose backups you want to remove.
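As a sketch (the patch ID and timestamp below are hypothetical; take the exact directory name from $ORACLE_HOME/.patch_storage), one might first preview the cleanup with -report and only then run it for real:

```shell
# Preview which backup files would be removed for one patch (nothing is deleted)
opatch util cleanup -ps 6810189_Nov_27_2008_22_30_15 -report

# Run the actual cleanup for that patch only
opatch util cleanup -ps 6810189_Nov_27_2008_22_30_15
```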
Cool Windows Command
A few days ago I learned about the
forfiles
command, which can be used to select a set of files by some criteria (like date modified) and run a command against each of them (like delete), something like find -exec in Unix. I found it very useful for cleaning up trace and log files that are no longer needed.
It is very handy, and I am really surprised that Microsoft finally came up with something like this. Before, I used to write batch scripts to implement the logic that
forfiles
provides. It is available on Windows Vista, Windows Server 2003 and Windows Server 2008.
You should really take a look if you didn’t know about this one already.
You can find more about
forfiles
(the syntax and a few examples) in this TechNet note. Example:
Task: delete all trace files older than 30 days
c:\app\oracle\diag\rdbms\db11g\db11g\trace>forfiles /s /m *.trc /d -30 /c "cmd /c
echo @FILE @FDATE"
"db11g_ora_13200.trc" 12/18/2008
"db11g_ora_18716.trc" 12/18/2008
"db11g_ora_18768.trc" 12/18/2008
"db11g_ora_18892.trc" 12/18/2008
"db11g_ora_3004.trc" 12/18/2008
"db11g_ora_4428.trc" 12/18/2008
"db11g_ora_6256.trc" 12/18/2008
"db11g_ora_6444.trc" 12/18/2008
"db11g_ora_6480.trc" 12/18/2008
"db11g_ora_6504.trc" 12/18/2008
"db11g_ora_6844.trc" 12/18/2008
"db11g_ora_6912.trc" 12/18/2008
"db11g_ora_6928.trc" 12/18/2008
"db11g_ora_7044.trc" 12/18/2008
c:\app\oracle\diag\rdbms\db11g\db11g\trace>forfiles /s /m *.trc /d -30 /c "cmd /c
del @FILE"
c:\app\oracle\diag\rdbms\db11g\db11g\trace>forfiles /s /m *.trc /d -30 /c "cmd /c
echo @FILE @FDATE"
ERROR: No files found with the specified search criteria.
Pretty handy, isn’t it?
ORA-12514 during switchover using Data Guard Broker
This is most likely because the special service db_unique_name_DGMGRL has not been registered properly with the listener.
This is one of the requirements when configuring Data Guard broker.
From the Oracle documentation (Reference Oracle® Data Guard Broker 10g Release 2 (10.2)):
To enable DGMGRL to restart instances during the course of broker operations, a service with a specific name must be statically registered with the local listener of each instance. The value for the GLOBAL_DBNAME attribute must be set to a concatenation of db_unique_name_DGMGRL.db_domain. For example, in the LISTENER.ORA file:
LISTENER = (DESCRIPTION =
(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=host_name)
(PORT=port_num))))
SID_LIST_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=sid_name)
(GLOBAL_DBNAME=db_unique_name_DGMGRL.db_domain)
(ORACLE_HOME=oracle_home)))
I think the main reason this prerequisite gets overlooked is that nothing about it is mentioned in the standby database configuration chapters of the Oracle 10g Data Guard Concepts and Administration guide.
So once the initial configuration is set up and works fine, the listener prerequisites for setting up the Data Guard broker are easily overlooked.
Of course, this applies only if the Data Guard broker is configured manually. If one uses OEM Grid Control to set up and configure the Data Guard configuration, OEM will make the necessary changes.
Another point I want to make here is the importance of practicing various switchover and failover scenarios. Not only will you gain more experience and feel more comfortable doing the same thing in a real situation, you will also uncover any hidden configuration holes you may have overlooked or missed in your configuration steps.
Here is an example of this problem and its solution:
For this example I used two Oracle 10.2.0.1 databases, dg1db running as a primary database and dg2db running as a physical standby database.
[oracle@dg1 ~]$ dgmgrl
DGMGRL for Linux: Version 10.2.0.1.0 - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/***
Connected.
DGMGRL> show configuration
Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Primary database
dg2db - Physical standby database
Current status for "dg-test":
SUCCESS
Now let's try the switchover. As you can see, the role transition was done successfully, but the database startup that should have happened at the end failed with ORA-12514: TNS:listener does not currently know of service requested in connect descriptor.
DGMGRL> switchover to dg2db;
Performing switchover NOW, please wait...
Operation requires shutdown of instance "dg1db" on database "dg1db"
Shutting down instance "dg1db"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "dg2db" on database "dg2db"
Shutting down instance "dg2db"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "dg1db" on database "dg1db"
Starting instance "dg1db"...
Unable to connect to database
ORA-12514: TNS:listener does not currently know of service requested in connect descriptor
Failed.
You are no longer connected to ORACLE
Please connect again.
Unable to start instance "dg1db"
You must start instance "dg1db" manually
Operation requires startup of instance "dg2db" on database "dg2db"
You must start instance "dg2db" manually
Switchover succeeded, new primary is "dg2db"
DGMGRL>
After starting the databases manually, I checked the status of the Data Guard configuration and it was SUCCESS.
DGMGRL> show configuration
Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Physical standby database
dg2db - Primary database
Current status for "dg-test":
SUCCESS
DGMGRL>
Now I will make the necessary changes to the listeners and try the switchover in the opposite direction. After the change, listener.ora should look like the following (pay attention to
GLOBAL_DBNAME = dg1db_DGMGRL.localdomain
).
[oracle@dg1 admin]$ more listener.ora
# listener.ora Network Configuration File: /u01/app/oracle/product/10.2.0/ora10g/network/admin/listener.ora
# Generated by Oracle configuration tools.
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = dg1db_DGMGRL.localdomain)
(ORACLE_HOME = /u01/app/oracle/product/ora10g)
(SID_NAME= dg1db)
)
)
LISTENER =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = dg1.localdomain)(PORT = 1521))
)
[oracle@dg1 admin]$ lsnrctl reload
[oracle@dg1 admin]$ lsnrctl status
LSNRCTL for Linux: Version 10.2.0.1.0 - Production on 13-JAN-2009 17:02:35
Copyright (c) 1991, 2005, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dg1.localdomain)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias LISTENER
Version TNSLSNR for Linux: Version 10.2.0.1.0 - Production
Start Date 13-JAN-2009 08:08:17
Uptime 0 days 8 hr. 54 min. 18 sec
Trace Level off
Security ON: Local OS Authentication
SNMP OFF
Listener Parameter File /u01/app/oracle/product/10.2.0/ora10g/network/admin/listener.ora
Listener Log File /u01/app/oracle/product/10.2.0/ora10g/network/log/listener.log
Listening Endpoints Summary...
(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=dg1.localdomain)(PORT=1521)))
Services Summary...
Service "dg1db.localdomain" has 1 instance(s).
Instance "dg1db", status READY, has 1 handler(s) for this service...
Service "dg1db_DGB.localdomain" has 1 instance(s).
Instance "dg1db", status READY, has 1 handler(s) for this service...
Service "dg1db_DGMGRL.localdomain" has 1 instance(s).
Instance "dg1db", status UNKNOWN, has 1 handler(s) for this service...
Service "dg1db_XPT.localdomain" has 1 instance(s).
Instance "dg1db", status READY, has 1 handler(s) for this service...
The command completed successfully
I did the same thing with the second listener.
Now the switchover completes without any problems.
DGMGRL> show configuration
Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Physical standby database
dg2db - Primary database
Current status for "dg-test":
SUCCESS
DGMGRL> switchover to dg1db
Performing switchover NOW, please wait...
Operation requires shutdown of instance "dg2db" on database "dg2db"
Shutting down instance "dg2db"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires shutdown of instance "dg1db" on database "dg1db"
Shutting down instance "dg1db"...
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
Operation requires startup of instance "dg2db" on database "dg2db"
Starting instance "dg2db"...
ORACLE instance started.
Database mounted.
Operation requires startup of instance "dg1db" on database "dg1db"
Starting instance "dg1db"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "dg1db"
DGMGRL>
DGMGRL> show configuration
Configuration
Name: dg-test
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
dg1db - Primary database
dg2db - Physical standby database
Current status for "dg-test":
SUCCESS
DGMGRL>
Cheers,
Mihajlo Tekic
ORA-23375 when adding new master site in multi-master environment
This is especially important for databases whose global name contains the internet country code of one of the following countries: Austria (AT), Belarus (BY), India (IN), Iceland (IS), etc. AT, BY, IN and IS are among the Oracle database reserved words.
You may experience problems even before you try to add the new master site. The creation of the database link to the new master site may fail with:
ORA-02084: database name is missing a component
However, you may not get this error in some cases, for instance when the AT keyword is used, or if you put the database link name in double quotes.
(The environment used in the examples below: DB1.MYDOMAIN.COM (master definition site, release 10.2.0.2) and DB11G.MYDOMAIN.IN (master site, release 11.1.0.6).)
SQL> create database link DB11G.MYDOMAIN.IN
2 connect to rep_admin
3 identified by "rep_password"
4 using 'DB11G';
create database link DB11G.MYDOMAIN.IN
*
ERROR at line 1:
ORA-02084: database name is missing a component
SQL> create database link "DB11G.MYDOMAIN.IN"
2 connect to rep_admin
3 identified by "rep_password"
4 using 'db11g';
Database link created.
SQL> create database link DB11G.MYDOMAIN.AT
2 connect to rep_admin
3 identified by "rep_password"
4 using 'DB11G';
Database link created.
But, even if the database link creation succeeds, the attempt to add the new master site would definitely fail with:
ORA-23375: feature is incompatible with database version at global database name
SQL> begin
2 dbms_repcat.add_master_database(
3 gname=>'TEST_GROUP',
4 master=>'"DB11G.MYDOMAIN.IN"',
5 use_existing_objects =>true,
6 copy_rows =>false,
7 propagation_mode=>'ASYNCHRONOUS');
8 end;
9 /
begin
*
ERROR at line 1:
ORA-23375: feature is incompatible with database version at DB11G.MYDOMAIN.IN
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_REPCAT_MAS", line 2159
ORA-06512: at "SYS.DBMS_REPCAT", line 146
ORA-06512: at line 2
SQL>
SQL> begin
2 dbms_repcat.add_master_database(
3 gname=>'TEST_GROUP',
4 master=>'DB11G.MYDOMAIN.AT',
5 use_existing_objects =>true,
6 copy_rows =>false,
7 propagation_mode=>'ASYNCHRONOUS');
8 end;
9 /
begin
*
ERROR at line 1:
ORA-23375: feature is incompatible with database version at DB11G.MYDOMAIN.AT
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.DBMS_REPCAT_MAS", line 2159
ORA-06512: at "SYS.DBMS_REPCAT", line 146
ORA-06512: at line 2
If you turn on the 10046 trace event, you can see that the error occurs when the DBMS_REPCAT_RPC.REPLICATION_VERSION_RC and DBMS_REPCAT_UTL2.REPLICATION_VERSION procedures are invoked against the new master database.
PARSE ERROR #39:len=73 dep=1 uid=61 oct=47 lid=61 tim=1191209094695404 err=6550
BEGIN sys.dbms_repcat_rpc.replication_version_rc@DB11G.MYDOMAIN.IN (:rv);END;
=====================
PARSE ERROR #39:len=72 dep=1 uid=61 oct=47 lid=61 tim=1191209094702737 err=6550
BEGIN sys.dbms_repcat_utl2.replication_version@DB11G.MYDOMAIN.IN (:rv);END;
To solve this problem, the global name of the new master database should be changed. Once the global name is changed to a value that does not contain any reserved word, you should be able to add the database to the replication environment.
In the example below, I will try to add a new master site whose global name was changed to DB11G.FOO.BAR
SQL>conn sys/***@DB11G as sysdba
Connected.
SQL> alter database rename global_name to DB11G.FOO.BAR;
Database altered.
SQL> conn rep_admin/rep_password@DB1
Connected.
SQL>
SQL> create database link DB11G.FOO.BAR
2 connect to rep_admin
3 identified by "rep_password"
4 using 'DB11G';
Database link created.
SQL> begin
2 dbms_repcat.add_master_database(
3 gname=>'TEST_GROUP',
4 master=>'DB11G.FOO.BAR',
5 use_existing_objects =>true,
6 copy_rows =>false,
7 propagation_mode=>'ASYNCHRONOUS');
8 end;
9 /
PL/SQL procedure successfully completed.
SQL> column dblink format a30
SQL> select DBLINK, MASTERDEF, MASTER
2 from dba_repsites;
DBLINK M M
------------------------------ - -
DB1.MYDOMAIN.COM Y Y
DB11G.FOO.BAR N Y
There are a few ways to find Oracle PL/SQL reserved words:
- V$RESERVED_WORDS
- Oracle documentation (PL/SQL Reserved Words and Keywords). Check your version specific documentation available on http://tahiti.oracle.com
- Issue HELP RESERVED WORDS (PL/SQL) from Sql*Plus (if Sql*Plus help is installed).
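A quick pre-check of a proposed global name can also be scripted. The sketch below is just an illustration: the reserved-word set contains only the handful of words mentioned in this post, not the full list (in practice you would load it from V$RESERVED_WORDS).

```python
# Oracle reserved words mentioned in this post (NOT the full list;
# in practice, load the set from V$RESERVED_WORDS)
RESERVED = {"AT", "BY", "IN", "IS"}

def risky_components(global_name):
    """Return the dot-separated components of a global database name
    that collide with a reserved word."""
    return [c for c in global_name.upper().split(".") if c in RESERVED]

print(risky_components("DB11G.MYDOMAIN.IN"))  # ['IN'] -> rename before adding
print(risky_components("DB11G.FOO.BAR"))      # []     -> safe to add
```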
Fast-Start Failover - It is reliable
The original poster (OP) had some concerns that FSFO is unreliable, very difficult to implement, and doesn't actually work properly.
The OP got some immediate response from the OTN users. I couldn't agree more with Joseph Meeks's comment on the topic.
In my opinion, FSFO is a very nice feature that plays a big role in Oracle's Maximum Availability Architecture. There might be some valid reasons not to implement it, but if automatic failover is a requirement, FSFO is the way to go. Should you have any problems implementing it, the best way forward is to get Oracle Support involved.
In this post, I'd like to show that implementation of the FSFO should not be a difficult task once you have the Data Guard environment set up.
I configured a Data Guard physical standby environment on my laptop. Due to hardware limitations, I'll have the observer running on the same machine as the databases. Keep in mind this is just an example: in practice, the primary database, the standby database and the observer should run on different hosts. This also answers one of the questions the OP asked: is it possible to set it up on one machine? It is, as shown in this example :-), but it is not the right way to go.
I use the DGMGRL utility in the example.
So, I configured a Data Guard environment where DB1.MYDOMAIN.COM is the primary database and STDB.MYDOMAIN.COM is the physical standby. Both databases are 10.2.x.
By default, protection mode of the Data Guard configuration is set to MAXIMUM PERFORMANCE
DGMGRL> connect sys/***@db1
Connected.
DGMGRL> show configuration
Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxPerformance
Fast-Start Failover: DISABLED
Databases:
db1 - Primary database
stdb - Physical standby database
Current status for "DRTest":
SUCCESS
Enabling Fast-Start Failover requires the following prerequisites to be met:
- The Flashback Database feature is enabled on both the primary and the standby database.
- The protection mode of the configuration is set to MAXIMUM AVAILABILITY.
- tnsnames.ora in the ORACLE_HOME where the observer runs is configured to see both databases, the primary and the standby.
- DGMGRL is available on the observer host.
Enable Flashback Database
I will enable the Flashback Database feature on both databases. This assumes that the flash recovery area is configured.
The Flashback Database feature makes it easy to reinstate the failed primary database as the new standby database.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL> startup mount
ORACLE instance started.
Total System Global Area 285212672 bytes
Fixed Size 1261348 bytes
Variable Size 100663516 bytes
Database Buffers 176160768 bytes
Redo Buffers 7127040 bytes
Database mounted.
SQL> alter database flashback on;
Database altered.
SQL> alter database open;
Database altered.
Set the protection mode to MAXIMUM AVAILABILITY
Fast-Start Failover can be enabled only if the protection mode is set to MAXIMUM AVAILABILITY.
LogXptMode has to be set to 'SYNC' on both databases:
DGMGRL> edit database 'db1'
> set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit database 'stdb'
> set property LogXptMode='SYNC';
Property "logxptmode" updated
DGMGRL> edit configuration set protection mode as MaxAvailability;
Succeeded.
DGMGRL> show configuration
Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: DISABLED
Databases:
db1 - Primary database
stdb - Physical standby database
Current status for "DRTest":
SUCCESS
SQL> conn sys/***@db1 as sysdba
Connected.
SQL> select protection_mode, protection_level from v$database;
PROTECTION_MODE PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM AVAILABILITY MAXIMUM AVAILABILITY
SQL> conn sys/***@stdb as sysdba
Connected.
SQL> select protection_mode, protection_level from v$database;
PROTECTION_MODE PROTECTION_LEVEL
-------------------- --------------------
MAXIMUM AVAILABILITY MAXIMUM AVAILABILITY
SQL>
I ensured that tnsnames.ora was set up correctly and that DGMGRL was installed.
Now, since all the prerequisites are met, let's move forward and enable FSFO.
Before enabling it, make sure each database in the configuration has a fast-start failover target set. This is done with the FastStartFailoverTarget property.
DGMGRL> edit database 'db1' set property faststartfailovertarget='stdb';
Property "faststartfailovertarget" updated
DGMGRL> edit database 'stdb' set property faststartfailovertarget='db1';
Property "faststartfailovertarget" updated
Another important property that has to be set is FastStartFailoverThreshold. It specifies the amount of time (in seconds) the observer attempts to reconnect to the primary database before starting the fast-start failover to the standby database. The default is 30 seconds. In this example I set it to 120 seconds.
DGMGRL> edit configuration set property FastStartFailoverThreshold=120;
Property "faststartfailoverthreshold" updated
Now let's enable fast-start failover:
DGMGRL> ENABLE FAST_START FAILOVER;
Enabled.
If you check the configuration at this moment you may find the following warning: ORA-16608: one or more databases have warnings.
If you check the status of one of the databases, you can see this warning: ORA-16819: Fast-Start Failover observer not started
DGMGRL> show configuration
Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Primary database
stdb - Physical standby database
- Fast-Start Failover target
Current status for "DRTest":
Warning: ORA-16608: one or more databases have warnings
DGMGRL> show database verbose 'db1'
Database
Name: db1
Role: PRIMARY
Enabled: YES
Intended State: ONLINE
Instance(s):
db1
Properties:
InitialConnectIdentifier = 'db1.mydomain.com'
LogXptMode = 'SYNC'
Dependency = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '180'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyParallel = 'AUTO'
StandbyFileManagement = 'auto'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '2'
LogArchiveMinSucceedDest = '1'
DbFileNameConvert = 'stdb, db1'
LogFileNameConvert = 'stdb, db1'
FastStartFailoverTarget = 'stdb'
StatusReport = '(monitor)'
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
HostName = 'miki-laptop'
SidName = 'db1'
LocalListenerAddress = '(ADDRESS=(PROTOCOL=tcp)(HOST=miki-laptop)(PORT=1521))'
StandbyArchiveLocation = 'dgsby_db1'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = '%t_%s_%r.dbf'
LatestLog = '(monitor)'
TopWaitEvents = '(monitor)'
Current status for "db1":
Warning: ORA-16819: Fast-Start Failover observer not started
So let's start the observer. To repeat: the observer should run on a different host, but for the sake of this example it runs on the same machine as the databases.
To start the observer, start the DGMGRL utility and log in to the Data Guard configuration.
Once logged in, issue the START OBSERVER command.
Optionally, you can set a log file destination when invoking the DGMGRL utility and specify a name for the observer configuration file (for more information check the Data Guard Command-Line Interface Reference).
Once the observer is started, control is not returned to the user until the observer is stopped.
DGMGRL for Linux: Version 10.2.0.3.0 - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/***@db1
Connected.
DGMGRL> start observer;
Observer started
So let's check the configuration now.
$ dgmgrl
DGMGRL for Linux: Version 10.2.0.3.0 - Production
Copyright (c) 2000, 2005, Oracle. All rights reserved.
Welcome to DGMGRL, type "help" for information.
DGMGRL> connect sys/***@db1
Connected.
DGMGRL> show configuration verbose;
Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Primary database
stdb - Physical standby database
- Fast-Start Failover target
Fast-Start Failover
Threshold: 120 seconds
Observer: miki-laptop
Current status for "DRTest":
SUCCESS
DGMGRL>
As of this moment my configuration has FSFO enabled.
Now let's test whether FSFO really works.
One should be aware of the conditions that must be satisfied for the observer to attempt a fast-start failover.
The full list of conditions can be found in Data Guard Broker user guide under 5.5.2.1 What Happens When Fast-Start Failover and the Observer Are Running?
I will issue SHUTDOWN ABORT to the primary database (SHUTDOWN NORMAL/IMMEDIATE/TRANSACTIONAL would not trigger the failover).
Once I crashed the database, the configuration status returned Error: ORA-16625: cannot reach the database. Be aware that since the primary database is down, the only way to check the configuration is to connect using the standby database credentials.
SQL> conn sys/***@db1 as sysdba
Connected.
SQL> shutdown abort
ORACLE instance shut down.
SQL>
DGMGRL> connect sys/***@stdb
Connected.
DGMGRL> show configuration
Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Primary database
stdb - Physical standby database
- Fast-Start Failover target
Current status for "DRTest":
Error: ORA-16625: cannot reach the database
After waiting for two minutes (the FSFO threshold was set to 120 seconds), I checked the observer log file and found that it had started the failover.
[W000 08/09 17:58:49.75] Observer started.
18:05:38.31 Saturday, August 09, 2008
Initiating fast-start failover to database "stdb"...
Performing failover NOW, please wait...
Failover succeeded, new primary is "stdb"
18:06:16.82 Saturday, August 09, 2008
DGMGRL> show configuration verbose
Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Physical standby database (disabled)
- Fast-Start Failover target
stdb - Primary database
Fast-Start Failover
Threshold: 120 seconds
Observer: miki-laptop
Current status for "DRTest":
Warning: ORA-16608: one or more databases have warnings
DGMGRL>
So the observer started the fast-start failover and successfully converted the former standby to the new primary database.
Once the failover is done, the observer attempts to automatically reinstate the former primary database as the new physical standby. Certain requirements have to be met for the automatic reinstatement to take place; if they are not, the database can be reinstated manually. Once the reinstatement succeeds, you can see that the former primary database became the new physical standby.
DGMGRL> show configuration
Configuration
Name: DRTest
Enabled: YES
Protection Mode: MaxAvailability
Fast-Start Failover: ENABLED
Databases:
db1 - Physical standby database
- Fast-Start Failover target
stdb - Primary database
Current status for "DRTest":
SUCCESS
DGMGRL>
Finally, you can perform a switchover to change roles and return the configuration to its initial state.
The example above shows that enabling fast-start failover is pretty straightforward. There are many good documents on how to enable FSFO. Some of them are listed below:
Oracle® Data Guard Broker 10g Release 2 (10.2) - 5.5 Fast-Start Failover
What Happens When Fast-Start Failover and the Observer Are Running?
7.6 Scenario 5: Enabling Fast-Start Failover and the Observer
Metalink Note#359555.1 IMPLEMENTING FAST-START FAILOVER IN 10GR2 DATAGUARD BROKER ENVIRONMENT
Oracle Maximum Availability Architecture – MAA
I hope I didn't miss anything.
Cheers!
Metalink, SCM, the error and the good stuff
Dear MetaLink Customer,
Oracle is committed to consistently improving your customer support experience. In the fall of 2008, MetaLink will have a new user interface. To help you prepare for the transition, you may now preview MetaLink's new user interface and provide valuable feedback about its features.
******
I clicked on the link that was supposed to redirect me to Metalink, and I got a very fancy-schmancy login page. Well, actually, it was the Software Configuration Manager (SCM).
I put in my login credentials and ... I got an error message "IO Error Error #2032"
WOW ... :-)
Well, OK, the problem was fixed later that day.
Regardless of the problem I experienced, I must say that I've been using SCM for quite a while and I am pretty impressed with its functionality. It is much easier to create and manage service requests using the configurations I have registered with SCM. Oracle Support engineers get all the information they need about the configuration of the server and the database.
Searching the knowledge base looks improved as well. Now you can have your search results visible in the left panel of the screen while you read the content of the selected note. This makes navigation much easier.
Service Requests part has new design too.
All in all, improved functionality, better navigation, good design, some new features too ... Good job !!!
Useful Links:
http://csm.oracle.com/
http://www.oracle.com/support/premier/software-configuration-manager.html
1Z0-043 ... Passed
I had been studying very hard, especially during the last month. I started with Oracle 10g Certification and the Oracle Database 10g OCP Certification All-In-One Exam Guide, but I spent most of the time reading the official documentation and, of course, practicing a lot.
The books I mentioned are good and cover the basics, but I found them not quite sufficient to prepare you for the exam. Some of the material is not up to date with the exam topics, the sample tests that come with the books contain some questions out of the scope of the exam, and some questions have answers that are wrong or arguable.
It is very important that while preparing for the exam you practice a lot. It is not very difficult to set up a test environment for each of the topics. It was slightly complicated to create a test environment for ASM on my Ubuntu Linux, but the post from Grégory Guillou How To Set Up Oracle ASM on Ubuntu Gutsy Gibbon has helped me with that.
Finally, while preparing for the exam, I have improved my skills significantly and learned some new stuff that I didn't have any experience with before (Resource Manager for instance).
Blocked Patch files cause OPatch to fail with error code = 170
The OPatch utility was failing with the following error:
The patch directory area must be a number.
ERROR: OPatch failed because of problems in patch area.
OPatch returns with error code = 170
Well, there are quite a few reasons why you may get this error when you try to apply a patch using OPatch.
Many of them are published in some of the following Metalink notes:
Note:369555.1
Note:294350.1
Note:418479.1
But I couldn't find anything about the one I came across recently.
I tried everything described in the notes above. Nothing worked, until I realized that all of the patch files had somehow been blocked.
Although the patch files had all the necessary permissions, especially read, OPatch couldn't read them because of the block set by the OS.
If you check the file properties you can see the message:
"This file came from another computer and might be blocked to help protect this computer".
The reason these files were blocked was that I had copied them from a different machine.
A blocked file can be unblocked by pressing the Unblock button.
The odd thing was that only one file could be unblocked at a time.
So I just deleted the patch directory, unblocked the patch zip file, and extracted it again.
I hadn't had a similar issue before. I believe this is a new "security" feature of Windows 2003 Server.
Query to determine the clustering factor before you create the index
select count(1) clustering_factor
from
(
select dbms_rowid.rowid_block_number(rowid) block_no,
<list of the indexed columns>,
count(1) num_rows,
LAG(dbms_rowid.rowid_block_number(rowid))
over (order by <list of the indexed columns>) prev_block
from <table_name>
group by dbms_rowid.rowid_block_number(rowid),
<list of the indexed columns>
order by <list of the indexed columns>
)
where block_no<>prev_block or prev_block is null
Let's take a look at the following example:
SQL> create table test
2 tablespace example
3 as select
4 mod(rownum,8) col1,
5 lpad('x',1000) col2
6 from all_objects
7 where rownum<=1000;
Using the following query, we can see that there are about 7 rows per block (block size 8192), which leaves records with the same COL1 value scattered across different blocks:
SQL> select num_rows_in_block, count(1) blocks
2 from
3 (
4 select block_no, count(1) num_rows_in_block
5 from
6 (
7 select dbms_rowid.rowid_block_number(rowid) block_no
8 from test
9 )
10 group by block_no
11 )
12* group by num_rows_in_block
SQL> /
NUM_ROWS_IN_BLOCK BLOCKS
----------------- ----------
6 1
7 142
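The block counts line up with simple arithmetic: with roughly 1000-byte rows in an 8 KB block, about 7 rows fit per block (the figure taken from the output above), so 1000 rows need 142 full blocks plus one block holding the remaining 6 rows. A quick sanity check:

```python
# ~7 one-KB rows fit in an 8 KB block (figure observed in the query output)
rows, rows_per_block = 1000, 7
full_blocks, remainder = divmod(rows, rows_per_block)
print(full_blocks, remainder)  # -> 142 6: 142 blocks of 7 rows, one block of 6
```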
Let's see how big the clustering factor of an index built on TEST(COL1) would be.
SQL> select count(1) clustering_factor
2 from
3 (
4 select dbms_rowid.rowid_block_number(rowid) block_no,
5 col1,
6 count(1) num_rows,
7 LAG(dbms_rowid.rowid_block_number(rowid))
8 over (order by col1) prev_block
9 from test
10 group by dbms_rowid.rowid_block_number(rowid), col1
11 order by col1
12 )
13* where block_no<>prev_block or prev_block is null
SQL> /
CLUSTERING_FACTOR
-----------------
1000
The result was pretty obvious.
Now let's add another column, COL3, to the table TEST and set its value to mod(col1, 4).
This makes each block contain at most two records with the same value of COL3.
SQL> alter table test
2 add col3 number;
Table altered.
SQL> update test set col3=mod(col1,4);
1000 rows updated.
SQL> commit;
Commit complete.
SQL>
Now let's see what the clustering factor of an index created on COL3 would be:
SQL> select count(1) clustering_factor
2 from
3 (
4 select dbms_rowid.rowid_block_number(rowid) block_no,
5 col3,
6 count(1) num_rows,
7 LAG(dbms_rowid.rowid_block_number(rowid))
8 over (order by col3) prev_block
9 from test
10 group by dbms_rowid.rowid_block_number(rowid), col3
11 order by col3
12 )
13* where block_no<>prev_block or prev_block is null
SQL> /
CLUSTERING_FACTOR
-----------------
572
Now let's create the actual indexes and see what their clustering factor would be:
SQL> create index test_col1_idx on test(col1);
Index created.
SQL> create index test_col3_idx on test(col3);
Index created.
SQL> select index_name, clustering_factor
2 from user_indexes
3* where table_name='TEST'
SQL> /
INDEX_NAME CLUSTERING_FACTOR
------------------------------ -----------------
TEST_COL1_IDX 1000
TEST_COL3_IDX 572
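Both figures can be reproduced outside the database by simulating the row layout: 1000 rows packed 7 per block in rownum order (the packing observed earlier), COL1 = mod(rownum, 8), COL3 = mod(col1, 4). The sketch below mirrors the LAG-based query, counting block changes while scanning distinct (key, block) pairs in key order:

```python
ROWS, ROWS_PER_BLOCK = 1000, 7  # packing observed in the earlier output

# Simulate the table: one (block, col1, col3) tuple per row
table = []
for rn in range(1, ROWS + 1):
    col1 = rn % 8
    col3 = col1 % 4
    block = (rn - 1) // ROWS_PER_BLOCK
    table.append((block, col1, col3))

def clustering_factor(key_block_pairs):
    """Count block transitions over distinct (key, block) pairs scanned in
    key order, mirroring the LAG(...) OVER (ORDER BY key) query above."""
    cf, prev = 0, None
    for _key, block in sorted(set(key_block_pairs)):
        if prev is None or block != prev:
            cf += 1
        prev = block
    return cf

cf_col1 = clustering_factor((c1, b) for b, c1, _ in table)
cf_col3 = clustering_factor((c3, b) for b, _, c3 in table)
print(cf_col1, cf_col3)  # -> 1000 572, matching USER_INDEXES
```

Rows sharing a COL1 value are 8 rownums apart, so each lands in a different block and every index entry crosses a block boundary (1000). COL3 has only 4 distinct values, so consecutive entries revisit the same block more often, hence the lower 572.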
I found excellent insights into the importance of the clustering factor for cost calculation in Jonathan Lewis's book Cost-Based Oracle Fundamentals.
Let's go back to the very first query in this post. Why do I think it is useful?
Because sometimes, even though some columns look like a perfect combination to index, the optimizer may not use the index because the clustering factor is too high.
The query can give you an initial clue about how effective an index would be before you create it.
Run Flashback commands only from Sql*Plus 10.1.x or newer
ORA-08186: invalid timestamp specified
Well ... take a look at the following example
First I wanted to make sure, that the format I use is the correct one.
1* select to_char(systimestamp,'DD-MON-RR HH.MI.SSXFF AM') from dual
SQL> /
TO_CHAR(SYSTIMESTAMP,'DD-MON-RR
-------------------------------
14-DEC-06 10.27.26.622829 AM
Now, when I tried to run a flashback version query (FVQ), I got "ORA-30052: invalid lower limit snapshot expression". That was expected, since my lower limit did not fall in the (SYSDATE-UNDO_RETENTION, SYSDATE] range (the UNDO_RETENTION parameter was set to 900).
But you will agree that Oracle successfully parsed the timestamp values I used in this query.
SQL> ed
Wrote file afiedt.buf
1 select comm
2 from scott.emp
3 versions between timestamp
4 to_timestamp('14-DEC-06 09.45.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM') and
5 to_timestamp('14-DEC-06 10.00.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM')
6 where
7* empno = 7369
SQL> /
from scott.emp
*
ERROR at line 2:
ORA-30052: invalid lower limit snapshot expression
So I modified the lower limit to fit in the right range, and I got ORA-08186: invalid timestamp specified. !?!?!?
SQL> ed
Wrote file afiedt.buf
1 select comm
2 from scott.emp
3 versions between timestamp
4 to_timestamp('14-DEC-06 10.20.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM') and
5 to_timestamp('14-DEC-06 11.00.00.000000 AM','DD-MON-RR HH.MI.SSXFF AM')
6 where
7* empno = 7369
SQL> /
from scott.emp
*
ERROR at line 2:
ORA-08186: invalid timestamp specified
After spending some time trying to resolve this issue (I didn't dare open an SR about it :-)), I remembered having had similar problems a while ago when testing some flashback features (flashback table to before drop) from Sql*Plus 9.2.x ... and I was using Sql*Plus 9.2 again.
I tried the same example with Sql*Plus 10.1.0.2, and everything worked well.
Some books might make you think wrong
You had better double-check any information that differs from what you already know.
Currently I am reading a book that is supposed to prepare me for an OCP exam.
There are questions and answers after each chapter, and I found a few answers that I think are not correct.
Don't get me wrong, I still think the book is a good one.
One Example: (I changed usernames and sequence name used in the original question):
Question:
Which operations on sequence scott.test_sq will the following SQL statement allow user test_usr to perform?
GRANT ALL ON scott.test_sq to test_usr;
A) Select the next value from scott.test_sq
B) Alter sequence scott.test_sq to change the next value
C) Change the number of sequence numbers that will be cached in memory
D) Both A and C
E) All of the above
The answer provided in the book is D.
But, I think it is E. Why?
SQL> conn scott/*****
Connected.
scott@ora10g> create sequence test_sq
2 start with 1
3 increment by 1;
Sequence created.
scott@ora10g> select test_sq.nextval from dual;
NEXTVAL
----------
1
scott@ora10g> conn sys as sysdba
Connected.
sys@ora10g> create user test_usr identified by test_usr
2 default tablespace users
3 temporary tablespace temp;
User created.
sys@ora10g> grant connect, resource to test_usr;
Grant succeeded.
sys@ora10g> grant all on scott.test_sq to test_usr;
Grant succeeded.
sys@ora10g> conn test_usr/test_usr
Connected.
test_usr@ora10g> select scott.test_sq.nextval from dual;
NEXTVAL
----------
2
test_usr@ora10g> alter sequence scott.test_sq
2 increment by 10;
Sequence altered.
test_usr@ora10g> select scott.test_sq.nextval from dual;
NEXTVAL
----------
12
test_usr@ora10g> alter sequence scott.test_sq
2 increment by 1;
Sequence altered.
test_usr@ora10g> conn scott/*****
Connected.
scott@ora10g> select test_sq.nextval from dual;
NEXTVAL
----------
13
scott@ora10g>