The Oracle Instructor

Explain, Exemplify, Empower

LVC Producers at #Oracle University

Tue, 2014-04-08 11:18

LVC stands for Live Virtual Class – that is what we call our courses delivered interactively over the internet. At Oracle University, we have a fine crew of people who make sure that the attendees (and sometimes the instructor as well) are not impacted by technical problems. These can be connectivity issues, browser incompatibilities, questions about how to use the learning platform WebEx, or which way to access the remote lab environment. All that and more is handled by LVC producers, so that the instructor can focus on the educational matters. I really appreciate this separation of duties, because I find it demanding enough already to deliver high quality Oracle Technology classes!

Many of the LVC producers work from Bucharest, and they kindly invited me to visit them at their workplace today. I gladly accepted and we had the nicest chat up on the 6th floor – it was so cool to finally meet in person the people who have supported me so many times already! As you can see, this is a bright bunch :-)

LVC Producers from Bucharest


Tagged: LVC
Categories: DBA Blogs

#Oracle University Expert Summit in London

Mon, 2014-04-07 06:09

Three days full of seminars are offered by Oracle University in London (19th to 21st May) at the Expert Summit.

Oracle University Expert Summit

It is my pleasure to present there together with Arup Nanda, Dan Hotka, Jonathan Lewis and my dear colleagues Iloon Ellen-Wolff and Joel Goodman.

One funny detail here: there is another event (an Exadata Workshop) in Vienna on my schedule during that week – yes, I’m very busy these days. Now in order to make it possible for me to present in London, the class in Vienna will be interrupted on Tuesday and continued on Wednesday :-)

A big “Thank You!” goes out to the attendees in Vienna who agreed to the one-day interruption to make that happen! Specifically, I’m going to talk about and demonstrate the 12c New Features of Data Guard in London.


Categories: DBA Blogs

Dealing with technical questions about #Oracle

Wed, 2014-04-02 10:51

During the OCM Preparation Workshop today in Bucharest, we got into a discussion about parallel inserts vs. direct load with the APPEND hint, where I said something like: “I think a parallel insert is always* also a direct load.” Most attendees were satisfied with that, but some still looked a bit sceptical. And I was not 100% sure about it either. While the others practiced, I did a brief search, came up with this answer from Tom Kyte and quoted him: “Parallel is always a direct path, if you go parallel, you will be appending.” Everybody in the room was immediately convinced – only I felt a bit uncomfortable with the outcome myself. After all, I don’t want to teach people to blindly follow my or anyone else’s claims without checking them if possible. And as with so many other claims about Oracle, it is possible to prove (or falsify) this one with little effort:

SQL> grant dba to adam identified by adam;

Grant succeeded.

SQL> connect adam/adam
Connected.

SQL> create table t as select 'TEST' as col1 from dual connect by level<=1e6;

Table created.

SQL> select bytes/1024/1024 from user_segments;

BYTES/1024/1024
---------------
             13

SQL> delete from t;

1000000 rows deleted.

SQL> commit;

Commit complete.

SQL> insert into t select 'TEST' as col1 from dual connect by level<=1e6;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> select bytes/1024/1024 from user_segments;

BYTES/1024/1024
---------------
             13

The above shows that a conventional insert reuses the empty space before the high water mark of the table and does not allocate additional extents. Now I’m going to do a parallel insert:

SQL> delete from t;

1000000 rows deleted.

SQL> commit;

Commit complete.

SQL> alter table t parallel;

Table altered.

SQL> alter session enable parallel dml;

Session altered.

SQL> insert into t select 'TEST' as col1 from dual connect by level<=1e6;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> select bytes/1024/1024 from user_segments;

BYTES/1024/1024
---------------
         24.375

QED! Obviously, the parallel insert appended rows above the high water mark – in other words, it was a direct load. That is in itself not so spectacular, but this little anecdote illustrates what is, in my opinion, the proper way to deal with technical questions about Oracle: always try to back up your assumptions and claims with proof instead of relying only on authority – your own or anybody else’s.

In other words: Don’t believe it, test it! :-)

*Addendum: Never say always, as I always say: if you include the NOAPPEND hint in the parallel insert, it will not be done as a direct load. Thank you guys for pointing that out!
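
For completeness, the NOAPPEND case can be verified with the same pattern as above – a quick sketch, assuming the table t with its parallel degree and the enabled parallel DML from the demo are still in place. The segment size should then stay unchanged, because the conventional path reuses the space below the high water mark:

SQL> delete from t;

1000000 rows deleted.

SQL> commit;

Commit complete.

SQL> insert /*+ NOAPPEND */ into t select 'TEST' as col1 from dual connect by level<=1e6;

1000000 rows created.

SQL> commit;

Commit complete.

SQL> select bytes/1024/1024 from user_segments;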


Tagged: 3e
Categories: DBA Blogs

Impressions from #oug_ire 2014

Thu, 2014-03-13 01:53

The Oracle User Group Ireland had its annual conference for 2014 at the Convention Centre in Dublin – which is a very nice location:

Convention Centre in Dublin

Many well known Oracle experts were speaking, as you can see from the agenda. Oracle University was represented by Joel Goodman and myself – we really enjoyed talking there and listening to the other speakers :-)

Joel Goodman and Uwe Hesse at the OUG Ireland Conference 2014

The first presentation I attended was delivered by Tom Kyte, who in the absence of Maria Colgan gave an introduction to the new Oracle Database In-Memory Option. This extra-cost option will probably be available with 12.1.0.2, which we expect to be released this calendar year. Safe Harbor etc. as usual. One key point that I took away here was that it is extremely easy to implement the In-Memory Option: just set one initialization parameter and designate the usage on the segment level, e.g. with ALTER TABLE. In particular, the storage structure on disk remains unchanged and the whole thing is completely transparent to the application layer.
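
Based on what was presented (safe harbor applies – the syntax may still change before release, and SALES is just a placeholder table of mine), enabling it should look roughly like this:

SQL> alter system set inmemory_size=1G scope=spfile;

System altered.

SQL> alter table sales inmemory;

Table altered.

INMEMORY_SIZE is a static parameter, so an instance restart is needed before the column store is actually populated.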

Next talk was delivered by fellow Oakie Tim Hall: An Oracle DBA’s Guide to WebLogic Server.

Tim Hall

He tried to pack into that presentation all the information that he wished he had known before he dealt with WebLogic for the first time. Very useful indeed for starters, I suppose. There was no single key point to highlight here; instead we got a collection of small but helpful tips that I will definitely come back to when implementing WebLogic myself.

Next I went to Simon Haslam, who talked about What’s So Special about the Oracle Database Appliance?

Simon Haslam

Some key points here: the ODA is definitely to be considered for rapid RAC deployment, especially for small to midsized projects without demand for extremely high I/O performance. Also, it is probably a clever move to install it as a virtualized platform based on Oracle VM, even if at first only databases are supposed to run there. That is easy to set up initially (but an effort to change after the fact) and has no negative performance impact.

Afterwards I did my own two presentations: Real-Time Query for Data Guard 11g in Action and Data Guard 12c New Features in Action

Uwe Hesse presenting about Data Guard

Many thanks to my dear colleague Mina Sagha Zadeh, who took the picture and introduced my talk in the most charming and flattering way I have ever experienced so far :-)

All in all it was a fantastic event in Dublin, kudos to the Oracle User Group Ireland for organizing and running it so well – great job!


Tagged: OUG Ireland 2014
Categories: DBA Blogs

Active Data Guard – what does it mean?

Mon, 2014-02-17 03:10

There are misconceptions and half-truths about that term that I see time and again in forums, postings and comments.

Some people think that Active Data Guard is a fancy marketing term for Standby Databases in Oracle. Wrong, that is just plain Data Guard :-)

Most people think that Active Data Guard means that a Physical Standby Database can be used for queries while it is still applying redo. Not the whole truth, because that is just one feature – Real-Time Query – which is included in the Active Data Guard option.

Active Data Guard is an option that comes with an extra charge. Active is supposed to indicate that you can use the standby database for production purposes – it is not just waiting for the primary database to fail.

In 11g, Active Data Guard includes three features:

  • Real-Time Query
  • Automatic Block Media Recovery
  • Block Change Tracking on the physical standby

In 12c, Active Data Guard was enhanced even further and now includes these features:

  • Real-time Query
  • Automatic Block Media Recovery
  • Block Change Tracking on the physical standby
  • Far Sync
  • Real-Time Cascade
  • Global Data Services
  • Application Continuity
  • Rolling Upgrade using DBMS_ROLLING

The bad news is that many of the 12c Data Guard New Features require Active Data Guard.
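
To illustrate the most prominent feature: Real-Time Query simply means that the physical standby is open read-only while redo apply keeps running. A minimal sketch of how to turn it on at the standby site in 11g, without the broker – stop the apply, open the database, restart the apply:

SQL> alter database recover managed standby database cancel;

Database altered.

SQL> alter database open;

Database altered.

SQL> alter database recover managed standby database using current logfile disconnect from session;

Database altered.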


Tagged: Active Data Guard, Data Guard
Categories: DBA Blogs

Join us at the OUG Ireland 2014!

Thu, 2014-02-13 07:25

Oracle User Group Ireland 2014 Conference

The Oracle User Group Ireland holds its annual conference in Dublin, with an interesting agenda.

Speakers like Tom Kyte and Timothy Hall are a guarantee of high quality.

Oracle University is also present there with Joel Goodman and me, not only as speakers on Tuesday but also with two Master Classes on Wednesday during the OUG Ireland 12c Workshop.

Hope to see you there :-)


Tagged: OUG Ireland 2014
Categories: DBA Blogs

New member of The OakTable: me!

Thu, 2014-01-30 01:20

I am an Oakie now :-)

OakTableLogo

To be a part of this highly respected circle of Oracle Scientists is something that I consider one of the greatest achievements of my professional life.


Categories: DBA Blogs

Speed up Import with TRANSFORM=DISABLE_ARCHIVE_LOGGING in #Oracle 12c

Mon, 2014-01-20 07:47

A very useful 12c New Feature is the option to suppress the generation of redo during a Data Pump import. I was talking about it during my recent 12c New Features class in Finland and would like to share that info with the Oracle Community here. My usual demo user ADAM owns a table named BIG with one index on it. Both were in LOGGING mode when I exported them. The Data Pump export did not use any 12c New Feature and is therefore not shown.

SQL> select log_mode,force_logging from v$database;

LOG_MODE     FORCE_LOGGING
------------ ---------------------------------------
ARCHIVELOG   NO

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                             0                         0               0          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

8 rows selected.

The database is not in force logging mode – otherwise the new Data Pump parameter would be ignored. Archive log mode was just turned on, therefore there is no archive log file yet. First I will show the redo-generating way to import, which is the default. Afterwards the new feature for comparison.

SQL> host impdp adam/adam directory=DPDIR tables=big

Import: Release 12.1.0.1.0 - Production on Mon Jan 20 11:50:42 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "ADAM"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "ADAM"."SYS_IMPORT_TABLE_01":  adam/******** directory=DPDIR tables=big
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "ADAM"."BIG"                                660.1 MB 5942016 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "ADAM"."SYS_IMPORT_TABLE_01" successfully completed at Mon Jan 20 11:54:32 2014 elapsed 0 00:03:48

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         13.47                         0              12          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

8 rows selected.

The conventional way with redo generation took almost 4 minutes and generated 12 archive logs – my online logs are 100 megabytes in size. Now let’s see the new feature:

SQL> drop table big purge;

Table dropped.

SQL> host impdp adam/adam directory=DPDIR tables=big transform=disable_archive_logging:y

Import: Release 12.1.0.1.0 - Production on Mon Jan 20 11:57:19 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "ADAM"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "ADAM"."SYS_IMPORT_TABLE_01":  adam/******** directory=DPDIR tables=big transform=disable_archive_logging:y
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "ADAM"."BIG"                                660.1 MB 5942016 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "ADAM"."SYS_IMPORT_TABLE_01" successfully completed at Mon Jan 20 11:58:21 2014 elapsed 0 00:01:01

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         13.47                         0              12          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

8 rows selected.

SQL> select table_name,logging from user_tables;

TABLE_NAME                                                   LOG
------------------------------------------------------------ ---
BIG                                                          YES

SQL> select index_name,logging from user_indexes;

INDEX_NAME                                                   LOG
------------------------------------------------------------ ---
BIG_IDX                                                      YES

Note that the segment attributes are not permanently changed to NOLOGGING by the Data Pump import.
The comparison shows a striking improvement in run time – and because the second run did not generate additional archive logs, we still see the same number of files as before the second call.

Another option is to suppress redo generation only for the import of indexes, in my example with the command

impdp adam/adam directory=DPDIR tables=big transform=disable_archive_logging:y:index

That is a safer choice because indexes are always reproducible. Keep in mind that any NOLOGGING operation is a risk – that is the price to pay for the speedup.
As always: Don’t believe it, test it! :-)
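
If you want to put a number on the redo savings on your own (otherwise idle) test system, a simple sketch is to snapshot the instance-wide 'redo size' statistic before and after each import run and compare the deltas – concurrent activity would distort the figures, of course:

SQL> select name, value from v$sysstat where name = 'redo size';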


Tagged: 12c New Features
Categories: DBA Blogs

Initialization Parameter Handling for Pluggable Databases in #Oracle 12c

Tue, 2014-01-14 15:25

In a Multitenant Database, the handling of initialization parameters has changed. This post shows some major differences. The playing field:

$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Tue Jan 14 21:44:10 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>  select name,open_mode,con_id from v$pdbs;

NAME                           OPEN_MODE      CON_ID
------------------------------ ---------- ----------
PDB$SEED                       READ ONLY           2
PDB1                           MOUNTED             3

Apart from the mandatory pluggable seed database, there is one pluggable database, which is presently not open. Can I modify initialization parameters for pdb1 now?

SQL> alter session set container=pdb1;

Session altered.

SQL> alter system set ddl_lock_timeout=30;
alter system set ddl_lock_timeout=30
*
ERROR at line 1:
ORA-01219: database or pluggable database not open: queries allowed on fixed
tables or views only

SQL> alter system set ddl_lock_timeout=30 scope=spfile;

System altered.

In spite of the syntax, this did not modify the spfile:

SQL> show parameter spfile

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      /u01/app/oracle/product/12.1.0
                                                 /dbhome_1/dbs/spfilecdb1.ora
SQL> host strings /u01/app/oracle/product/12.1.0/dbhome_1/dbs/spfilecdb1.ora
cdb1.__data_transfer_cache_size=0
cdb1.__db_cache_size=251658240
cdb1.__java_pool_size=4194304
cdb1.__large_pool_size=4194304
cdb1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
cdb1.__pga_aggregate_target=167772160
cdb1.__sga_target=503316480
cdb1.__shared_io_pool_size=20971520
cdb1.__shared_pool_size=209715200
cdb1.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/cdb1/adump'
*.audit_trail='db'
*.compatible='12.1.0.0.0'
*.control_files='/u01/app/oracl
e/oradata/cdb1/control01.ctl','/u01/app/oracle/fast_recovery_area/cdb1/control02.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_name='cdb1'
*.db_recovery_file_dest='/u01/app/oracle/fast_recovery_area'
*.db_recovery_file_dest_size=4800m
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=cdb1XDB)'
*.enable_pluggable_database=true
*.open_cursors=300
*.pga_aggregate_target=160m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=480m
*.undo_tablesp
ace='UNDOTBS1'

Initialization parameters for pluggable databases are not kept in the spfile – unless they are simply inherited from the container database, that is. Instead, they are stored in a dictionary table of the container database. The documented view V$SYSTEM_PARAMETER displays them – but only if the pluggable database is open:

SQL> connect / as sysdba
Connected.
SQL> select name,value,con_id from v$system_parameter where name='ddl_lock_timeout';

NAME                           VALUE                                    CON_ID
------------------------------ ---------------------------------------- ------
ddl_lock_timeout               0                                             0

SQL> select a.name,value$,con_id from pdb_spfile$ a join v$pdbs b on (a.pdb_uid=b.con_uid);

NAME                           VALUE$                                   CON_ID
------------------------------ ---------------------------------------- ------
ddl_lock_timeout               30                                            3

When the pluggable database is opened, the undocumented table PDB_SPFILE$ is queried internally and the parameters are set accordingly:

SQL> alter pluggable database pdb1 open;

Pluggable database altered.

SQL> select name,value,con_id from v$system_parameter where name='ddl_lock_timeout';

NAME                           VALUE                                    CON_ID
------------------------------ ---------------------------------------- ------
ddl_lock_timeout               0                                             0
ddl_lock_timeout               30                                            3

Attention: show parameter now displays different results, depending on the current container:

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT
SQL> show parameter ddl_lock_timeout

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
ddl_lock_timeout                     integer     0

SQL> alter session set container=pdb1;

Session altered.

SQL> show parameter ddl_lock_timeout

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
ddl_lock_timeout                     integer     30

Upon unplugging a pluggable database, an xml file is generated that contains the description of the pluggable database. I thought this file would also contain the modified initialization parameters of that pluggable database, but that doesn’t seem to be the case. I checked it for the parameters DDL_LOCK_TIMEOUT and RESUMABLE_TIMEOUT, and neither shows up in the xml file. Instead, they got stored in the system tablespace of the unplugged database! Again, that is not documented and may change in the future.
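
For reference, the unplug itself is done like this (a sketch – the xml file name and path are just examples):

SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 unplug into '/home/oracle/pdb1.xml';

Pluggable database altered.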

Isn’t that fun to play with this exciting new stuff? At least I enjoyed it, and I hope you found it useful :-)


Tagged: 12c New Features
Categories: DBA Blogs

#Oracle Database whoami for Multitenant

Wed, 2014-01-08 05:49

As an enhancement to the Oracle Database whoami script for versions before 12c, this version also shows the name of the container to which the session is connected:

[oracle@linuxbox ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Wed Jan 8 12:34:04 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> @whoami
USER: SYS
SESSION ID: 253
CURRENT_SCHEMA: SYS
INSTANCE NAME: cdb1
CDB NAME: cdb1
CONTAINER NAME: CDB$ROOT
DATABASE ROLE: PRIMARY
OS USER: oracle
CLIENT IP ADDRESS:
SERVER HOSTNAME: linuxbox
CLIENT HOSTNAME: linuxbox

PL/SQL procedure successfully completed.

SQL> connect system/oracle_4U@pdb1
Connected.
SQL> @whoami
USER: SYSTEM
SESSION ID: 253
CURRENT_SCHEMA: SYSTEM
INSTANCE NAME: cdb1
CDB NAME: cdb1
CONTAINER NAME: PDB1
DATABASE ROLE: PRIMARY
OS USER: oracle
CLIENT IP ADDRESS: 555.555.5.555
SERVER HOSTNAME: linuxbox
CLIENT HOSTNAME: linuxbox

PL/SQL procedure successfully completed.

The content of whoami.sql:

set serveroutput on
begin
dbms_output.put_line('USER: '||sys_context('userenv','session_user'));
dbms_output.put_line('SESSION ID: '||sys_context('userenv','sid'));
dbms_output.put_line('CURRENT_SCHEMA: '||sys_context('userenv','current_schema'));
dbms_output.put_line('INSTANCE NAME: '||sys_context('userenv','instance_name'));
dbms_output.put_line('CDB NAME: '||sys_context('userenv','cdb_name'));
dbms_output.put_line('CONTAINER NAME: '||sys_context('userenv','con_name'));
dbms_output.put_line('DATABASE ROLE: '||sys_context('userenv','database_role'));
dbms_output.put_line('OS USER: '||sys_context('userenv','os_user'));
dbms_output.put_line('CLIENT IP ADDRESS: '||sys_context('userenv','ip_address'));
dbms_output.put_line('SERVER HOSTNAME: '||sys_context('userenv','server_host'));
dbms_output.put_line('CLIENT HOSTNAME: '||sys_context('userenv','host'));
end;
/

A shortcut to get the name of the current container is:

SQL> show con_name

CON_NAME
------------------------------
PDB1
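
The same information is also available from plain SQL, which works in tools other than SQL*Plus as well:

SQL> select sys_context('userenv','con_name') as con_name from dual;

CON_NAME
--------------------------------------------------------------------------------
PDB1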

You may find that useful in a multitenant environment with many Pluggable Databases within one Container Database :-)


Tagged: 12c New Features, whoami
Categories: DBA Blogs

Happy New Year 2014 to all of you!

Tue, 2013-12-31 02:53

My best wishes go out to you and your families – may the new year be a great one for you!

Special thanks to all the visitors of The Oracle Instructor – WordPress has crafted this Annual Report for 2013, if you’re interested :-)


Categories: DBA Blogs