
Dominic Brooks

OraStory
Dominic Brooks on Oracle Performance, Tuning, Data Quality & Sensible Design ... (Now with added Sets Appeal)

Oracle 12c Multitenant – Inmemory admin basics

Tue, 2014-10-21 11:15

A couple of very, very basic observations on getting going with 12c Inmemory in a multitenant database.

1. If you try to set inmemory_size within a PDB while inmemory_size is 0 in the CDB, you get
ORA-02096: specified initialization parameter is not modifiable with this option

SQL> alter session set container = cdb$root;

Session altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
0

SQL> alter session set container = orcl;

Session altered.

SQL> alter session set inmemory_size=100M;

alter session set inmemory_size=100M
                  *
ERROR at line 1:
ORA-02096: specified initialization parameter is not modifiable with this option

2. You have to use scope=spfile when setting inmemory_size in the CDB, and a restart is required for it to take effect

SQL> alter session set container = cdb$root;

Session altered.

SQL> alter system set inmemory_size = 500M;

alter system set inmemory_size = 500M
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-02095: specified initialization parameter cannot be modified

SQL> alter system set inmemory_size = 500M scope=spfile;

System altered.

SQL> select * from v$sga;

NAME                 VALUE      CON_ID
-------------------- ---------- ----------
Fixed Size           2926472    0
Variable Size        1224738936 0
Database Buffers     905969664  0
Redo Buffers         13848576   0

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 2147483648 bytes
Fixed Size               2926472 bytes
Variable Size            1275070584 bytes
Database Buffers         318767104 bytes
Redo Buffers             13848576 bytes
In-Memory Area           536870912 bytes
Database mounted.
Database opened.

3. You don’t use scope=spfile when setting inmemory_size in a PDB, otherwise you get
ORA-02096: specified initialization parameter is not modifiable with this option

SQL> alter session set container = orcl;

Session altered.

SQL> alter system set inmemory_size=400M scope=spfile;

alter system set inmemory_size=400M scope=spfile
                                               *
ERROR at line 1:
ORA-02096: specified initialization parameter is not modifiable with this option

SQL> alter system set inmemory_size=400M;

System altered.

4. If you try to set the PDB inmemory_size larger than the CDB’s, you get
ORA-02097: parameter cannot be modified because specified value is invalid

SQL> alter session set container = orcl;

Session altered.

SQL> alter system set inmemory_size=600M;

alter system set inmemory_size=600M
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-02095: specified initialization parameter cannot be modified

Added:

5. As in point (2), if we change the inmemory_size of the CDB, we have to restart the instance for the change to take effect

SQL> select sys_context('USERENV','CON_NAME') from dual;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT

SQL> alter system set inmemory_size=500M scope=spfile;

System altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
218103808

SQL> shutdown immediate  
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.

Total System Global Area 2147483648 bytes
Fixed Size               2926472 bytes
Variable Size            1157630072 bytes
Database Buffers         436207616 bytes
Redo Buffers             13848576 bytes
In-Memory Area           536870912 bytes
Database mounted.
Database opened.
SQL> 

6. If we modify inmemory_size for a PDB, it takes effect immediately (presumably because it is really just a maximum quota on the actual memory structure established in the CDB?):

SQL> alter session set container = orcl;

Session altered.

SQL> alter system set inmemory_size=200M;

System altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
209715200

SQL> alter system set inmemory_size=400M;

System altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
419430400

7. While we showed in point (4) that a single PDB’s inmemory_size cannot exceed the CDB’s, we can overallocate these “maximum quotas” across PDBs – i.e. the sum of inmemory_size across all PDBs can exceed the CDB setting (which prompts future investigation):

SQL> alter session set container=cdb$root;

Session altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
536870912

SQL> alter session set container=orcl;

Session altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
419430400

SQL> alter session set container=cdb$root;

Session altered.

SQL> create pluggable database dom from orcl file_name_convert=
 2  ('/home/oracle/app/oracle/oradata/cdb1/orcl',
 3  '/home/oracle/app/oracle/oradata/cdb1/dom');

Pluggable database created.

SQL> alter pluggable database dom open;

Pluggable database altered.

SQL> alter session set container = dom;

Session altered.

SQL> alter system set inmemory_size = 400M;

System altered.

SQL> select value from v$parameter where name = 'inmemory_size';

VALUE
--------------------------------------------------------------------------------
419430400
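
Having got the sizes set, the obvious next step is to put something in memory and watch it populate. A minimal sketch only – the table name here is hypothetical, and population behaviour depends on the priority setting and on when segments are scanned:

alter session set container = orcl;

-- mark a (hypothetical) table for the In-Memory column store;
-- PRIORITY HIGH populates without waiting for a first scan
alter table my_schema.my_table inmemory priority high;

-- check population progress from within the PDB
select segment_name, populate_status, bytes_not_populated
from   v$im_segments;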

Remote DML with DBMS_PARALLEL_EXECUTE

Fri, 2014-08-22 10:20

An example of sucking data into a table over a db link using DBMS_PARALLEL_EXECUTE.

This particular example is based on something I needed to do in the real world: copying data from one database into another over a db link. Datapump is not available to me. The tables in question happen to be partitioned by a date-like number (boo!), hence some of the specific actions in the detail.

I think it’s a good example of how to use dbms_parallel_execute, but it might also be interesting to see how we can combine that functionality with parallel sessions, each operating on a single numeric partition.

For setup, let’s create a suitable source table on a remote db.
In this example, I’m recreating the entries in dba_objects for every day for a couple of years.

CREATE TABLE remote_px_test
(dt,owner,object_name,subobject_name,object_id,data_object_id,object_type,created,last_ddl_time,timestamp,status,temporary,generated,secondary,namespace,edition_name)
PARTITION BY RANGE(dt) INTERVAL(1)
(PARTITION p_default VALUES LESS THAN (20120101))
AS
WITH days AS
(SELECT TO_NUMBER(TO_CHAR(TO_DATE(20120101,'YYYYMMDD') + ROWNUM - 1,'YYYYMMDD')) dt
 FROM   dual
 CONNECT BY ROWNUM <= (TRUNC(SYSDATE) - TO_DATE(20120101,'YYYYMMDD')))
SELECT d.dt, o.*
FROM   dba_objects o
CROSS JOIN days d;

SELECT /*+ parallel(16) */ COUNT(*) FROM remote_px_test;

209957272

SELECT round(sum(bytes)/power(1024,3)) FROM user_segments WHERE segment_name = 'REMOTE_PX_TEST';
31

The first step is to see how long it takes to do a parallel INSERT SELECT over a db link.

The benefits of parallelisation in such an operation are severely limited because we have a single session over the db link.

Back to the target database.

First create an empty destination table, same as remote.

CREATE TABLE remote_px_test
(dt,owner,object_name,subobject_name,object_id,data_object_id,object_type,created,last_ddl_time,timestamp,status,temporary,generated,secondary,namespace,edition_name)
PARTITION BY RANGE(dt) INTERVAL(1)
(PARTITION p_default VALUES LESS THAN (20100101))
AS
WITH days AS
(SELECT TO_NUMBER(TO_CHAR(TO_DATE(20120101,'YYYYMMDD') + ROWNUM - 1,'YYYYMMDD')) dt
 FROM   dual
 WHERE 1=0)
SELECT d.dt, o.*
FROM   dba_objects o
CROSS JOIN days d;

Now, let’s see how long it takes to do an INSERT SELECT over a db link.
Time is often not a good measure but in this case I’m primarily interested in how long it takes to copy a whole bunch of tables from A to B over a db link.

insert /*+ append */ into remote_px_test l
select * 
from   remote_px_test@d1 r;

209,957,272 rows inserted.

commit;

This executed in 20 minutes.

As mentioned, you could parallelise bits of it on either side, but the benefit is limited, and it might even make things worse thanks to the BUFFER SORT operation.
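
For reference, a sketch of what that parallelisation might look like – note that parallel DML has to be explicitly enabled in the session, and the single db link session remains the bottleneck regardless:

alter session enable parallel dml;

insert /*+ append parallel(8) */ into remote_px_test
select /*+ parallel(8) */ *
from   remote_px_test@d1;

commit;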

Next, let’s compare this to a method using DBMS_PARALLEL_EXECUTE.

We want some parallel threads to work on independent partitions, doing direct path inserts, concurrently.

First, I’m going to create a view on the SOURCE DB to make my chunking on the daily partition interval simpler.

I could create this on the TARGET DB with references to the dictionary tables over the db link, but it could be significantly slower depending on the number of partitioned tables and whether predicates are pushed.

CREATE OR REPLACE VIEW vw_interval_partitions
AS
SELECT table_name, partition_name, partition_position, hi
FROM   (SELECT table_name, partition_name, partition_position
        ,      to_char(
                 extractvalue(
                   dbms_xmlgen.getxmltype
                  ('select high_value from user_tab_partitions x'
                 ||' where x.table_name   = '''||t.table_name||''''
                 ||' and   x.partition_name = '''|| t.partition_name|| ''''),'//text()')) hi
        FROM   user_tab_partitions t);

Secondly, I’m going to create a little helper package which will generate the dynamic SQL for our inserts into specific partitions (the PARTITION FOR clause cannot use bind variables).

CREATE OR REPLACE PACKAGE sid_data_pkg
AS
  --
  PROCEDURE sid_ipt (
    i_table_name                 IN     VARCHAR2,
    i_table_owner                IN     VARCHAR2,
    i_column_name                IN     VARCHAR2,
    i_dblink                     IN     VARCHAR2,
    i_start_id                   IN     NUMBER,
    i_end_id                     IN     NUMBER
  );
  --
END sid_data_pkg;
/

CREATE OR REPLACE PACKAGE BODY sid_data_pkg
AS
  PROCEDURE sid_ipt (
    i_table_name                 IN     VARCHAR2,
    i_table_owner                IN     VARCHAR2,
    i_column_name                IN     VARCHAR2,
    i_dblink                     IN     VARCHAR2,
    i_start_id                   IN     NUMBER,
    i_end_id                     IN     NUMBER
  )
  AS
    --
    l_cmd CLOB;
    --
  BEGIN
     --
     l_cmd :=
     q'{INSERT /*+ APPEND */}'||chr(10)||
     q'{INTO   }'||i_table_name||chr(10)||
     q'{PARTITION FOR (}'||i_start_id||')'||chr(10)||
     q'{SELECT *}'||chr(10)||
     q'{FROM   }'||CASE WHEN i_table_owner IS NOT NULL THEN i_table_owner||'.' END
                 ||i_table_name
                 ||CASE WHEN i_dblink IS NOT NULL THEN '@'||i_dblink END
                 ||chr(10)||
     q'{WHERE  }'||i_column_name||' < '||i_end_id||chr(10)||
     CASE WHEN i_start_id IS NOT NULL THEN q'{AND   }'||i_column_name||' >= '||i_start_id END;
     --
     --DBMS_OUTPUT.PUT_LINE(l_cmd);
     --
     EXECUTE IMMEDIATE l_cmd;
     --
     COMMIT;
     --
  END sid_ipt;
  --
END sid_data_pkg;
/

Next, truncate our target table again.
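
Which, for this example, is just:

truncate table remote_px_test;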

Then create our parallel execute task:

begin
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'PX_TEST_TASK');
end;
/

Create the chunks of work to be executed concurrently:

declare
 l_chunk_sql varchar2(1000);
begin
  l_chunk_sql := q'{select (hi - 1) AS partval, hi }'||chr(10)||
                 q'{from   vw_interval_partitions@d1 v }'||chr(10)||
                 q'{where  table_name = 'REMOTE_PX_TEST' }'||chr(10)||
                 q'{order  by partition_position }';
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_SQL(task_name => 'PX_TEST_TASK',sql_stmt => l_chunk_sql, by_rowid => false);
end;
/

Check our task and our chunks:

select * from dba_parallel_execute_tasks;

TASK_OWNER TASK_NAME    CHUNK_TYPE   STATUS  TABLE_OWNER TABLE_NAME NUMBER_COLUMN TASK_COMMENT JOB_PREFIX SQL_STMT LANGUAGE_FLAG EDITION APPLY_CROSSEDITION_TRIGGER FIRE_APPLY_TRIGGER PARALLEL_LEVEL JOB_CLASS
---------- ------------ ------------ ------- ----------- ---------- ------------- ------------ ---------- -------- ------------- ------- -------------------------- ------------------ -------------- ---------
ME_DBA     PX_TEST_TASK NUMBER_RANGE CHUNKED

select * from dba_parallel_execute_chunks order by chunk_id;

  CHUNK_ID TASK_OWNER TASK_NAME    STATUS     START_ROWID END_ROWID START_ID END_ID   JOB_NAME START_TS END_TS ERROR_CODE ERROR_MESSAGE
---------- ---------- ------------ ---------- ----------- --------- -------- -------- -------- -------- ------ ---------- -------------
      3053 ME_DBA     PX_TEST_TASK UNASSIGNED                       20120100 20120101 
      3054 ME_DBA     PX_TEST_TASK UNASSIGNED                       20120101 20120102 
        ...
      4017 ME_DBA     PX_TEST_TASK UNASSIGNED                       20140821 20140822 

 965 rows selected 

Then we run our parallel task thus, each concurrent job executing the helper package and working on an individual partition:

set serveroutput on
DECLARE
  l_task     VARCHAR2(24) := 'PX_TEST_TASK';
  l_sql_stmt VARCHAR2(1000);
BEGIN
  --
  l_sql_stmt := q'{begin sid_data_pkg.sid_ipt ('REMOTE_PX_TEST','ME_DBA','DT','D1',:start_id,:end_id); end;}';
  --
  DBMS_PARALLEL_EXECUTE.RUN_TASK(l_task, l_sql_stmt, DBMS_SQL.NATIVE,parallel_level => 16);
  --
  dbms_output.put_line(DBMS_PARALLEL_EXECUTE.TASK_STATUS(l_task));
  --
end;
/

This executed in 2 minutes and returned code 6 which is FINISHED (without error).

Status of individual chunks can be checked via DBA_PARALLEL_EXECUTE_CHUNKS.
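
For example, something along these lines – a sketch, using the task name from above – would show any chunks that hit a problem, and the task can then be rerun or cleaned up:

select chunk_id, status, start_id, end_id, error_code, error_message
from   dba_parallel_execute_chunks
where  task_name = 'PX_TEST_TASK'
and    status <> 'PROCESSED';

-- reprocess any failed chunks
exec dbms_parallel_execute.resume_task(task_name => 'PX_TEST_TASK');

-- and, once done with the task, clean up its metadata
exec dbms_parallel_execute.drop_task(task_name => 'PX_TEST_TASK');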


Plan Instability

Mon, 2014-07-28 09:04

There seems to me to be a relatively simple choice.

Either you accept that the Oracle Optimizer has a wealth of complicated strategies and, in this complex effort to get the best executions it can, will inevitably get it wrong sometimes (or take some extra executions to realise it’s wrong).

Or you stick your head in the sand and raise a bug for every unexpected poor execution or plan flip.

But let’s say that, above all else, you wanted Plan Stability.

This is a widespread desire.

What would be the best strategy?
And to what lengths would you take it?

SQL Plan Management features are designed to give you some degree of stability.

You could baseline statements which have given you a problem in the past.

What if that is not deemed adequate?

So, on the one hand, you could baseline everything you could find.

Taking a long history of AWR, and regular snapshots of V$SQL, you could put it all into SQL Plan Baselines.
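
For instance, a sketch of that bulk-capture approach using DBMS_SPM – the sql_id here is obviously hypothetical, and SQL mined from AWR would have to go via a SQL Tuning Set and LOAD_PLANS_FROM_SQLSET:

declare
  l_plans pls_integer;
begin
  -- load whatever plans are currently in the cursor cache for this sql_id
  l_plans := dbms_spm.load_plans_from_cursor_cache(sql_id => '9babjv8yq8ru3');
  dbms_output.put_line(l_plans||' plans loaded');
end;
/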

But, depending on your application, you might regularly get new SQL appearing unprotected by a baseline.
In one Hibernate application I know, a change in the application domain model often results in the table aliases changing, meaning that there is lots of brand new, but very familiar, SQL.
So you then become dependent on having a reliable testing infrastructure which will generate nearly all the SQL you’re likely to get, in advance of production.

In addition, you might want multiple plans in your baseline – for Adaptive Cursor Sharing (ACS), for example – and then, once that extra bindset awareness is flushed out of memory, you need a couple of executions to rediscover it (unless you use a SQL Patch to inject the BIND_AWARE hint into the specific SQL statements).
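
As an aside, a sketch of that SQL Patch approach – it relies on an internal, undocumented API as at 12.1, so treat it as illustrative only, and the statement text here is hypothetical:

begin
  sys.dbms_sqldiag_internal.i_create_patch(
    sql_text  => 'select * from orders where customer_id = :b1',
    hint_text => 'BIND_AWARE',
    name      => 'bind_aware_patch');
end;
/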

It’s certainly no guarantee of stability.

What is the alternative?

I like to argue that most problematic executions stem from two sources:

  1. Badly written SQL
  2. Using poor combinations of features – e.g. bind variable peeking + histograms

The other day I made a comment to someone that “if you’re heavily reliant on ACS, perhaps you should rethink whether you should be using literals rather than bind variables”.

So, you might then take the position that plan instability stems from increasing complexity in the optimizer.

In which case, maybe a viable strategy might be to turn off much of the complexity (a parameter sketch follows the list):

  • Bind Variable Peeking
  • Cardinality feedback
  • Adaptive Cursor Sharing
  • Dynamic Sampling
  • Adaptive direct path reads
  • 12c adaptive execution plans
  • Anything with the word “adaptive” in it?
  • Default Stats job and/or default method_opt histogram behaviour
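
A sketch of what that might involve – most of these are underscore parameters, which you should not set without Oracle Support’s blessing, and the list is illustrative rather than exhaustive:

-- illustrative only; check with Oracle Support before setting underscore parameters
alter system set "_optim_peek_user_binds" = false;              -- bind variable peeking
alter system set "_optimizer_use_feedback" = false;             -- cardinality feedback
alter system set "_optimizer_adaptive_cursor_sharing" = false;  -- ACS (fully disabling it
                                                                -- also involves "_optimizer_extended_cursor_sharing_rel")
alter system set optimizer_dynamic_sampling = 0;                -- dynamic sampling
alter system set "_serial_direct_read" = never;                 -- adaptive direct path reads
alter system set optimizer_adaptive_reporting_only = true;      -- 12c adaptive plans report but don't act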

This seems quite radical to many. Perhaps justifiably.

Perhaps it would be nice if there were new optimizer strategies – OPTIMIZER_MODE? – perhaps “AGGRESSIVE_PERFORMANCE” (maybe “ADAPTIVE_PERFORMANCE_WITH_THE_ODD_HICCUP” ?) vs “PERFORMANCE_STABILITY” which were an umbrella of many of the features above.

Thoughts?

To what extent are you concerned with performance stability, above all else?

How many of the optimizer features in each release are actually aligned with that widespread concern?