
Yann Neuhaus

All blog entries from http://www.dbi-services.com/

UKOUG 2014 - Middleware Day 2

Tue, 2014-12-09 11:06

Today the sessions were more “high level”, so don’t expect deep information or concrete implementation details.

Roadmap to Cloud Infrastructure and Service Integration with Cloud Application Foundation and SOA Suite - Frances Zhao-Perez and Simone Greib (Oracle)

Here Frances talked about CAF (Cloud Application Foundation), which regroups products such as WebLogic, Coherence and so on. She introduced Oracle’s list of strategic investments in this area:

- #1 Mobile
- Fusion Middleware and applications
- Continuous availability
- Multitenancy for Density/Utilization
- Cloud management and operations

She also talked about future features in 12cR2, such as:

- Multi Datacenters support
- Coherence federated caching
- Recoverable caching
- Full Java EE7
- Java SE8
- Available in Cloud

Frances also briefly talked about the ODA and OHS roadmaps, but only from the marketing side :)

Then Simone took the lead and gave a recap of key SOA features, such as:

- Operations made simple (startup acceleration, tuned profiles…)
- Developer productivity (debugger and tester, Java DB instead of a full database…)
- Mobile standards (REST, Json…)
- Cloud: SAP/JDE adapters

A new feature in the cloud is MFT (Managed File Transfer) for file-based integrations.

She also reminded us how simple it is to upgrade from SOA Suite 11g to 12c, and then moved on to upcoming features such as:

- Business Insight: Business Process Monitoring (Business process simplified without JDeveloper)
- Internet of Things (IoT): Events driven actions
- BAM predictive analytics & BI integration: it can build trends from your business data. For example, it could predict the market for the coming weeks.

Mobile enabling your enterprise with Oracle SOA Suite - Simone Greib and Yogesh Sontakke (Oracle)

This session was more oriented towards the mobile part of SOA. Yogesh and Simone explained that you can support SOAP and REST on mobile devices, and they demonstrated how simple it is to build a UI and the business logic behind it by exposing it as a service.

They talked about the architecture of the mobile UI and its integration with a lot of adapters for different products. They took Kiloutou, in France, as an example of a mobile application user, as the company uses an application to manage its stock, orders and services.

They also gave a real live demo of how to use JSON or XML to manage events and communication between components or services.

Maximum Availability in the Cloud: Oracle WebLogic Server and Oracle Coherence - Frances Zhao-Perez (Oracle)

This session was heavily focused on MAA (Maximum Availability Architecture), and Frances strongly underlined that Oracle is investing in maximum availability.

The goals of MAA are the following:

- Prevent business interruption
- Prevent data loss
- Deliver adequate response time
- Cost: Reduce deployments, managements and support costs
- Risk: Consistently achieve required service level

Here are the High Availability requirements for Oracle:

- Outage protection
- Recovery time
- Testing frequency
- Typical data loss
- Complexity in deployments
- ROI (Return on Investment)

Frances talked about multi-datacenter MAA solutions such as stretch clusters/domains, cache safety, Tx Logs in the database, database Global Data Services, federated caching, recoverable caching and storage replication.

She quickly introduced Oracle Site Guard, which provides recovery automation, and talked about the features of the upcoming versions.

12.1.3:

- No TLog option - Phase 1 (other phases will be implemented with each new release)
- XA Transactions recovery across site

12.2.1 (will be a huge update next year):

- JTA HA
- Cross site Txn recovery
- Density for GridLink deployments
- Application continuity (Optimize connection harvesting on down events)

She finished with Coherence cache recovery, which allows recovering data directly from the cache.

SQL*Plus COPY Command is back as BRIDGE

Mon, 2014-12-08 15:38

Did you ever use the COPY command in SQL*Plus? It's very old, and the documentation says:
The COPY command is not being enhanced to handle datatypes or features introduced with, or after Oracle8i. The COPY command is likely to be deprecated in a future release.

Deprecated? But it is back, with a new name, in the new SQL Developer-based SQL*Plus (currently called sdsql, in beta).
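As a reminder, the classic COPY syntax looks like this (a minimal sketch of mine; the connect strings and table names are only illustrative):

COPY FROM scott/tiger@db1 TO scott/tiger@db2 -
INSERT emp_copy USING select * from emp;

The trailing dash is the SQL*Plus line-continuation character, and APPEND, CREATE or REPLACE can be used instead of INSERT.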

UKOUG 2014: News on Oracle WebLogic, Fusion Middleware, SOA Suite

Mon, 2014-12-08 11:58

Today I went to UKOUG 2014 in Liverpool. In this blog post, you'll find the sessions on Oracle WebLogic, Fusion Middleware, SOA Suite and EDG that I attended, with a short summary of each.

Administration of SOA Suite - A few tips - Jon Petter Hjulstad (SYSCO Middleware)

Jon Petter's general advice was to make the monitoring stronger: you have to be pro-active in your administration of SOA Suite. You have several tools for monitoring SOA, such as FMW Control, Enterprise Manager 12c or even JMC. You may want to check CPU, memory usage and disk space as usual, but don't forget to look at database growth and the logs.

His advice is to apply patches as soon as they are released: they not only fix leaks and security issues, but also tune up the whole system and its components.

Make heavy use of RDA: it brings a lot of information when creating Service Requests and will greatly reduce the “ping pong” between your organization and the support team.

Fusion Middleware 12c Installation and Administration: First Experiences - Peter de Vall and Nicolay Moot (Transfer Solutions)

Both speakers are from the Transfer Solutions company in the Netherlands. They talked about the differences between installing and managing Fusion Middleware 11g and 12c. The benefit of 12c is that it is clearly more integrated than 11g. Moreover, there is a packaged, easy-to-use version for developer installations, so developers don't have to carry a heavy installation architecture.

Another difference is the home paths. Where in 11g you had a Middleware Home, Oracle Common Home, Oracle Product Home and RCU Home, in 12c you only have the Oracle Home.

The same goes for the configuration part. In 11g you had Domain Homes, Instance Homes and a Node Manager Home; in 12c you only have Domain Homes, as you have one Node Manager per domain.

Upgrading is really easy to handle, as 12c provides scripts for the upgrade. They manage the upgrade of domains and configuration in less than an hour.

It upgrades schemas, configurations and domains: you just have to provide the old (11g) domain path and then the new (12c) domain path, which can be the same, as demonstrated in their demo! And there are no redeployments or application adaptations to make; developers don't even have to participate.

A new feature in FMW 12c is the RESTful management interface. It provides URLs that can be polled by an external monitoring tool and returns JSON documents containing metrics such as memory usage and JVM utilization. Really interesting.

Best practices to manage and deliver Oracle WebLogic and Fusion Middleware to your organization - James Kao (Oracle)

James is a really good speaker, managing to keep you intensely focused on his talk. He spoke about how to manage a huge infrastructure with lots of domains: how many admin consoles you have to manage, and how many configurations you have to perform before your whole environment is set up when you have, for example, DEV/TEST/VAL and production environments. It can become hell out there, as it is time consuming, and when upgrading or patching it is hard to automate tasks.

James’s answer is “Cloud”.

The goal is to make it available through a single point of access and make it globally administrable in one place.

“Use Cloud Control as a single administration console,” he said, as you can then manage your configurations and actions in a single place, for specific domains or for all of them at the same time. It provides more “templatized” administration through frameworks and it standardizes resources.

Here are his best practices:

#1 - Cloud Control as a single admin console
As said before, administration and configuration are integrated into Cloud Control. A credential system lets you enter your credentials once. When updating a configuration you lock it, so nobody can edit it while you are working on it. That prevents issues when an error is reported and almost 20 administrators are trying to access the system to debug it!

Cloud Control can record every action you perform when editing or administering. That means you can review what you did for audit purposes, or simply transform it into a WLST script to make it repeatable.

Cloud Control centralizes SOA tracing and stores metrics for history.

#2 - Automate tasks across domains
You can automate tasks by creating WLST scripts from your recordings. For example, you can start recording your session, edit some values and stop the recording. You then create a script from it and execute it on multiple domains!

#3 - Automate responses to issues via corrective actions
And best of all, the scripts you wrote can be triggered by events from the system. You can create incident rules and specify the actions to be triggered, across all the domains.

Enterprise Deployments: The real world of best practices - Simon Haslam (Veriton) and Jacco Landlust (Oracle)

Simon and Jacco were here to talk about the EDGs: Enterprise Deployment Guides. These guides are a result of the historical MAA (Maximum Availability Architecture). So an EDG is a kind of guide on how to build your architecture to make it reusable and efficient, but there is a difference between “paper” and reality.

EDG benefits:

- You don’t have to know every product by heart
- It’s based on Oracle experience
- It’s familiar to other Admins using EDG (a kind of OFA)
- It’s recognized by Oracle Support!

First, the EDG covers the following topics:

- Topologies and segregations
- Naming conventions
- It only focuses on single site high availability with load balancers
- Database HA (in RAC)

The EDG is a layered, step-by-step build recipe.

- Layering each stage
- Assumes manual installation (no scripts)

The real-world problems with the EDG:

- It doesn’t include security considerations
- It doesn’t include patching considerations
- It doesn’t include dev/test considerations (you have to build that by yourself)
- It doesn’t cover Disaster Recovery

FIO (Flexible I/O) - a benchmark tool for any operating system

Mon, 2014-12-08 04:55

I have just attended an interesting session held by Martin Nash (@mpnsh) at UKOUG 14 - Liverpool: "The Least an Oracle DBA Should Know about Linux Administration". During this session I had the opportunity to discover some interesting commands and tools, such as fio (Flexible I/O). fio is a workload generator that can be used both for benchmarking and for stress/hardware verification.

FIO has support for 19 different types of I/O engines (sync, mmap, libaio, posixaio, SG v3, splice, null, network, syslet, guasi, solarisaio, and more), I/O priorities (for newer Linux kernels), rate I/O, forked or threaded jobs, and much more. It can work on block devices as well as files. fio accepts job descriptions in a simple-to-understand text format.

This tool has the huge advantage of being available for almost all kinds of operating systems (POSIX, Linux, BSD, Solaris, HP-UX, AIX, OS X, Android, Windows). If you want to use this tool in the context of an Oracle database, I invite you to have a look at the following blog post from Yann Neuhaus: Simulating database-like I/O activity with Flexible I/O

 

In order to install it on Ubuntu, simply use the following command:


steulet@ThinkPad-T540p:~$ sudo apt-get install fio

 

After having installed fio, you can run your first test. This first test will run 2 gigabytes of I/O (read/write) in the directory /u01/fio.


steulet@ThinkPad-T540p:~$ mkdir /u01/fio

 

Once the directory has been created, we can set up the job file as described below. However, it is perfectly possible to run the same test directly on the command line without a job file (fio --name=global --ioengine=posixaio --rw=readwrite --size=2g --directory=/u01/fio --threads=1 --name=myReadWriteTest-Thread1):

 

[global]
ioengine=posixaio
rw=readwrite
size=2g
directory=/u01/fio
threads=1

[myReadWriteTest-Thread1]

 

Now you can simply run your test with the command below:


steulet@ThinkPad-T540p:~$ fio testfio.fio

 

The output will look like the following:

 

myReadWriteTest-Tread1: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=posixaio, iodepth=1
fio-2.1.3
Starting 1 thread
Jobs: 1 (f=1): [M] [100.0% done] [112.9MB/113.1MB/0KB /s] [28.9K/29.2K/0 iops] [eta 00m:00s]
myReadWriteTest-Tread1: (groupid=0, jobs=1): err= 0: pid=7823: Mon Dec  8 12:45:27 2014
  read : io=1024.7MB, bw=98326KB/s, iops=24581, runt= 10671msec
    slat (usec): min=0, max=72, avg= 1.90, stdev= 0.53
    clat (usec): min=0, max=2314, avg=20.25, stdev=107.40
     lat (usec): min=5, max=2316, avg=22.16, stdev=107.41
    clat percentiles (usec):
     |  1.00th=[    4],  5.00th=[    6], 10.00th=[    7], 20.00th=[    7],
     | 30.00th=[    7], 40.00th=[    7], 50.00th=[    7], 60.00th=[    7],
     | 70.00th=[    8], 80.00th=[    8], 90.00th=[    8], 95.00th=[   10],
     | 99.00th=[  668], 99.50th=[ 1096], 99.90th=[ 1208], 99.95th=[ 1208],
     | 99.99th=[ 1256]
    bw (KB  /s): min=    2, max=124056, per=100.00%, avg=108792.37, stdev=26496.59
  write: io=1023.4MB, bw=98202KB/s, iops=24550, runt= 10671msec
    slat (usec): min=1, max=24, avg= 2.08, stdev= 0.51
    clat (usec): min=0, max=945, avg= 9.71, stdev=24.52
     lat (usec): min=5, max=947, avg=11.79, stdev=24.54
    clat percentiles (usec):
     |  1.00th=[    5],  5.00th=[    8], 10.00th=[    8], 20.00th=[    8],
     | 30.00th=[    8], 40.00th=[    8], 50.00th=[    9], 60.00th=[    9],
     | 70.00th=[    9], 80.00th=[    9], 90.00th=[   10], 95.00th=[   11],
     | 99.00th=[   15], 99.50th=[   20], 99.90th=[  612], 99.95th=[  628],
     | 99.99th=[  652]
    bw (KB  /s): min=108392, max=123536, per=100.00%, avg=114596.33, stdev=3108.03
    lat (usec) : 2=0.01%, 4=0.01%, 10=91.43%, 20=6.93%, 50=0.71%
    lat (usec) : 100=0.13%, 250=0.01%, 500=0.01%, 750=0.47%, 1000=0.01%
    lat (msec) : 2=0.31%, 4=0.01%
  cpu          : usr=10.46%, sys=21.17%, ctx=527343, majf=0, minf=12
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued    : total=r=262309/w=261979/d=0, short=r=0/w=0/d=0

Run status group 0 (all jobs):
   READ: io=1024.7MB, aggrb=98325KB/s, minb=98325KB/s, maxb=98325KB/s, mint=10671msec, maxt=10671msec
  WRITE: io=1023.4MB, aggrb=98202KB/s, minb=98202KB/s, maxb=98202KB/s, mint=10671msec, maxt=10671msec

Disk stats (read/write):
  sda: ios=6581/67944, merge=0/67, ticks=4908/196508, in_queue=201408, util=56.49%

You will find some really good examples and a detailed list of parameters on the following website: http://www.bluestop.org/fio/HOWTO.txt

This tool is really powerful and has the huge advantage of being available for more or less any operating system. This allows you to make consistent comparisons across different kinds of architectures.


SQL Server tips: how to list orphaned logins

Thu, 2014-12-04 21:56

I have read a lot about orphaned database users in SQL Server, but I have almost never read about orphaned logins. Many of my customers migrate or remove databases in SQL Server. They forget - not every time, but often - to remove the logins and jobs associated with these databases. I have created a script - without any cursors, YES, it is possible - that searches for all the logins that are not "attached" to any database of an instance.
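The original script is not reproduced here, but as a rough sketch of the idea (my own illustration, not the author's script; it assumes sysadmin rights and only looks at online databases), you could collect every database user SID with dynamic SQL and then list the server logins that no database references:

-- Illustrative sketch only: gather all database user SIDs, then list logins
-- whose SID is not mapped in any database (still no cursor involved).
IF OBJECT_ID('tempdb..#db_user_sids') IS NOT NULL DROP TABLE #db_user_sids;
CREATE TABLE #db_user_sids (sid VARBINARY(85));

DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql = @sql + N'INSERT INTO #db_user_sids
  SELECT sid FROM ' + QUOTENAME(name) + N'.sys.database_principals
  WHERE type IN (''S'',''U'',''G'') AND sid IS NOT NULL;'
FROM sys.databases
WHERE state_desc = 'ONLINE';
EXEC sys.sp_executesql @sql;

SELECT sp.name AS potentially_orphaned_login
FROM sys.server_principals sp
WHERE sp.type IN ('S','U','G')     -- SQL, Windows and Windows group logins
  AND sp.name NOT LIKE '##%'       -- skip internal logins
  AND NOT EXISTS (SELECT 1 FROM #db_user_sids d WHERE d.sid = sp.sid);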

Oracle 12c: comparing TTS with noncdb_to_pdb

Mon, 2014-12-01 08:44

How do you migrate from non-CDB to CDB? Of course, all known migration methods work. But there is also another solution: upgrade to 12c if necessary and then convert the non-CDB to a PDB. This is done with the noncdb_to_pdb.sql script, which converts a non-CDB dictionary to a PDB one, with metadata and object links. But do you get a clean PDB after that? I tested it and compared the result with the same database migrated by transportable tablespaces.

The test case

In 12c I could use Full Transportable Database, but here I have only one tablespace, as I'm doing my comparison on an empty database with the EXAMPLE schemas.

Here is my database:

RMAN> report schema;

Report of database schema for database with db_unique_name NDB1

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    790      SYSTEM               YES     /u01/app/oracle/oradata/NDB1/system01.dbf
3    610      SYSAUX               NO      /u01/app/oracle/oradata/NDB1/sysaux01.dbf
4    275      UNDOTBS1             YES     /u01/app/oracle/oradata/NDB1/undotbs01.dbf
5    1243     EXAMPLE              NO      /u01/app/oracle/oradata/NDB1/example01.dbf
6    5        USERS                NO      /u01/app/oracle/oradata/NDB1/users01.dbf

It's a new database, created with dbca, all defaults, and having only the EXAMPLE tablespace. SYSTEM is 790MB and SYSAUX is 610MB. We can have a lot of small databases like that, where system size is larger than user size and this is a reason to go to multitenant.

I will compare the following:

  • the migration with transportable tablespaces (into pluggable database PDB_TTS)
  • the plug and run noncdb_to_pdb (into the pluggable database PDB_PLG)

Transportable tablespace

Transportable tablespaces plug in only the non-system tablespaces, and all the dictionary entries are recreated while importing the metadata. Here it is:

SQL> alter tablespace EXAMPLE read only;
Tablespace altered.
SQL> host expdp '"/ as sysdba"' transport_tablespaces='EXAMPLE'

The log gives me the dump file (containing the metadata) and the datafiles to copy:

Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
  /u01/app/oracle/admin/NDB1/dpdump/expdat.dmp
******************************************************************************
Datafiles required for transportable tablespace EXAMPLE:
  /u01/app/oracle/oradata/NDB1/example01.dbf
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at ... elapsed 00:03:55

then on the destination I create an empty pluggable database:

SQL> create pluggable database PDB_TTS admin user pdbadmin identified by "oracle" file_name_convert=('/pdbseed/','/PDB_TTS/');
SQL> alter pluggable database PDB_TTS open;
SQL> alter session set container=PDB_TTS;

and import the metadata after having copied the datafile to /u03:

SQL> create or replace directory DATA_PUMP_DIR_NDB1 as '/u01/app/oracle/admin/NDB1/dpdump';
SQL> host impdp '"sys/oracle@//vm211:1666/pdb_tts as sysdba"' transport_datafiles=/u03/app/oracle/oradata/PDB_TTS/example01.dbf directory=DATA_PUMP_DIR_NDB1

which took only two minutes because I don't have a lot of objects. That's all. I have a brand new pluggable database where I've imported my tablespaces.

Here I used transportable tablespaces and had to pre-create the users. But in 12c you can do everything with Full Transportable Database, as sketched below.
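For reference, a full transportable export/import would look roughly like the following (a minimal sketch only; the directory, dump file name and datafile path are illustrative and simply reuse values from the TTS example above):

SQL> alter tablespace EXAMPLE read only;
SQL> host expdp '"/ as sysdba"' full=y transportable=always directory=DATA_PUMP_DIR dumpfile=full_tts.dmp

SQL> host impdp '"sys/oracle@//vm211:1666/pdb_tts as sysdba"' directory=DATA_PUMP_DIR_NDB1 dumpfile=full_tts.dmp transport_datafiles=/u03/app/oracle/oradata/PDB_TTS/example01.dbf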

noncdb_to_pdb.sql

The other solution is to plug in the whole database, including the SYSTEM and SYSAUX tablespaces, and then run the noncdb_to_pdb.sql script to transform the dictionary into a multitenant one. First, we generate the XML file describing the database, which is similar to the one generated when we unplug a PDB:

SQL> shutdown immediate
SQL> startup open read only;
SQL> exec dbms_pdb.describe('/tmp/NDB01.xml');

And then plug it:

SQL> CREATE PLUGGABLE DATABASE PDB_PLG USING '/tmp/NDB01.xml' COPY FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/NDB1', '/u03/app/oracle/oradata/PDB_PLG');

At that point I can open the PDB but it will act as a Non-CDB, with its own dictionary that is not linked to the root. For example, you have nothing when you query DBA_PDBS from the PDB:

SQL> show con_id
CON_ID
------------------------------
6
SQL> select * from dba_pdbs;
no rows selected

I have put on my todo list to test what we can do in that PDB - which is not yet really a PDB - before it raises lots of ORA-600 errors.

Now you have to migrate the dictionary to a PDB one. The noncdb_to_pdb.sql script does internal updates to transform the entries in OBJ$ into metadata links.

SQL> alter session set container=PDB_PLG;
SQL> @?/rdbms/admin/noncdb_to_pdb;
SQL> alter pluggable database PDB_PLG open;

The update time depends on the number of dictionary objects, so it is roughly fixed for a given version. The remaining time is spent recompiling all objects, which can be done in parallel. Here, I ran it serially to see how long it takes (screenshot from Lighty):

b2ap3_thumbnail_Capturenon_cdb_to_pdb.png

Comparison

My goal was to compare both methods. As I expected, the SYSTEM and SYSAUX tablespaces did not shrink when using noncdb_to_pdb, so if you want to go to multitenant to save space, the noncdb_to_pdb method is not the right one:

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1_SITE1

List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    781      SYSTEM               YES     /u02/app/oracle/oradata/cdb1_site1/system01.dbf
3    691      SYSAUX               NO      /u02/app/oracle/oradata/cdb1_site1/sysaux01.dbf
4    870      UNDOTBS1             YES     /u02/app/oracle/oradata/cdb1_site1/undotbs01.dbf
5    260      PDB$SEED:SYSTEM      NO      /u02/app/oracle/oradata/cdb1_site1/pdbseed/system01.dbf
6    5        USERS                NO      /u02/app/oracle/oradata/cdb1_site1/users01.dbf
7    570      PDB$SEED:SYSAUX      NO      /u02/app/oracle/oradata/cdb1_site1/pdbseed/sysaux01.dbf
8    260      PDB1:SYSTEM          NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/system01.dbf
9    580      PDB1:SYSAUX          NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/sysaux01.dbf
10   10       PDB1:USERS           NO      /u02/app/oracle/oradata/cdb1_site1/pdb1/pdb1_users01.dbf
14   270      PDB_TTS:SYSTEM       NO      /u02/app/oracle/oradata/cdb1_site1/PDB_TTS/system01.dbf
15   590      PDB_TTS:SYSAUX       NO      /u02/app/oracle/oradata/cdb1_site1/PDB_TTS/sysaux01.dbf
17   1243     PDB_TTS:EXAMPLE      NO      /u03/app/oracle/oradata/PDB_TTS/example01.dbf
22   790      PDB_PLG:SYSTEM       NO      /u03/app/oracle/oradata/PDB_PLG/system01.dbf
23   680      PDB_PLG:SYSAUX       NO      /u03/app/oracle/oradata/PDB_PLG/sysaux01.dbf
24   5        PDB_PLG:USERS        NO      /u03/app/oracle/oradata/PDB_PLG/users01.dbf
25   1243     PDB_PLG:EXAMPLE      NO      /u03/app/oracle/oradata/PDB_PLG/example01.dbf

The SYSTEM tablespace, which is supposed to contain only links (my user schemas don't have a lot of objects), is the same size as in the root. This is bad. Let's look at the details:

SQL> select *
  from (select nvl(pdb_name,'CDB$ROOT') pdb_name,owner,segment_type,bytes from cdb_segments 
  left outer join dba_pdbs using(con_id))
  pivot (sum(bytes/1024/1024) as "MB" for (pdb_name) 
  in ('CDB$ROOT' as "CDB$ROOT",'PDB_TTS' as PDB_TTS,'PDB_PLG' as PDB_PLG))
  order by greatest(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))-least(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0)) 
  desc fetch first 20 rows only;

OWNER                SEGMENT_TYPE       CDB$ROOT_MB PDB_TTS_MB PDB_PLG_MB
-------------------- ------------------ ----------- ---------- ----------
SYS                  TABLE                      539         96        540
SYS                  INDEX                      187        109        195
SYS                  LOBSEGMENT                 117        105        118
SYS                  TABLE PARTITION             17          1         12
SYSTEM               INDEX                       10          1         10
SYS                  SYSTEM STATISTICS                                  8
SYSTEM               TABLE                        8          1          8
SYS                  LOBINDEX                    12          7         13
SYS                  INDEX PARTITION              9          0          6
SYSTEM               LOBSEGMENT                   5          0          5
APEX_040200          LOBSEGMENT                  80         74         80
SYSTEM               INDEX PARTITION              4                     4
SYSTEM               TABLE PARTITION              3                     3
SYS                  TABLE SUBPARTITION           2                     2
SYS                  CLUSTER                     52         50         52
SYS                  LOB PARTITION                3          1          2
SYSTEM               LOBINDEX                     2          0          2
APEX_040200          TABLE                      100         99        100
XDB                  TABLE                        7          6          7
AUDSYS               LOB PARTITION                1          0          1

20 rows selected.

Here I've compared the dictionary sizes. While the PDB_TTS table segments are below 100 MB, PDB_PLG is the same size as the root. The noncdb_to_pdb script has updated OBJ$ but did not reclaim the space from the other tables (see update 2).

Which tables?

SQL> select *
   from (select nvl(pdb_name,'CDB$ROOT') pdb_name,owner,segment_type,segment_name,bytes 
   from cdb_segments left outer join dba_pdbs using(con_id) 
   where owner='SYS' and segment_type in ('TABLE'))
   pivot (sum(bytes/1024/1024) as "MB" for (pdb_name) 
   in ('CDB$ROOT' as "CDB$ROOT",'PDB_TTS' as PDB_TTS,'PDB_PLG' as PDB_PLG))
   order by greatest(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))-least(nvl(PDB_TTS_MB,0),nvl(PDB_PLG_MB,0))
   desc fetch first 20 rows only;

OWNER             SEGMENT_TYPE       SEGMENT_NAME                   CDB$ROOT_MB PDB_TTS_MB PDB_PLG_MB
----------------- ------------------ ------------------------------ ----------- ---------- ----------
SYS               TABLE              IDL_UB1$                               288          3        288
SYS               TABLE              SOURCE$                                 51          2         52
SYS               TABLE              IDL_UB2$                                32         13         32
SYS               TABLE              ARGUMENT$                               13          0         13
SYS               TABLE              IDL_CHAR$                               11          3         11

IDL_UB1$ is the table that contains all the pcode for PL/SQL. All those wrapped DBMS_ packages are there. And we don't need them in the PDB: we have a link to the root, which has exactly the same version, as the check below suggests.
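A quick way to check this in 12c (a hedged example of mine, not part of the original test) is the SHARING column of DBA_OBJECTS, which shows whether a dictionary object is a metadata link to the root or a full local copy:

SQL> select sharing, count(*) from dba_objects where owner='SYS' group by sharing;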

Conclusion

My conclusion is that I would not advise using noncdb_to_pdb. First, a script doing that much internal stuff scares me. I prefer to start this new implementation of the dictionary with a clean one.

But now that I have made this test, I have two additional reasons to avoid it. First, it's not faster - except if you have a lot of objects. Second, the main goal is to reduce the total space by having the Oracle packages stored only once, and this is clearly not achieved by noncdb_to_pdb.

However, that conclusion holds only for small databases. If you have a database with a huge number of objects and PL/SQL packages, then the overhead of keeping the dictionary objects will not be very significant. And the TTS solution will take longer because it has to import all the metadata. So there is still a case for noncdb_to_pdb. But test it before. And be sure to have a large shared pool for the recompile step.

Update 1: I forgot another reason to be very careful with noncdb_to_pdb, from Bertrand Drouvot, in his post about the huge negative impact of optimizer_adaptive_features on it.

Update 2: Following a comment on the OTN forum, I changed the sentence about deleted rows because it was wrong. In fact, the rows are deleted when the objects are recompiled:

SQL> select name,count(*) from containers(IDL_UB1$) left outer join v$containers using(con_id) group by name order by 1;

NAME                             COUNT(*)
------------------------------ ----------
CDB$ROOT                            53298
PDB1                                 6457
PDB_PLG                              6432
PDB_TTS                              6354

SQL> select name,count(*) from containers(SOURCE$) left outer join v$containers using(con_id) group by name order by 1;

NAME                             COUNT(*)
------------------------------ ----------
CDB$ROOT                           327589
PDB1                                73055
PDB_PLG                             20306
PDB_TTS                             17753

The issue is only that the space is still allocated. And you can't SHRINK those objects because SYSTEM is a dictionary-managed tablespace (DMT), the large tables contain LONG columns anyway, and finally:
SQL> alter table sys.IDL_UB1$ shrink space;
alter table sys.IDL_UB1$ shrink space
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database

Of course, the space can be reused, but do you expect to add 200 MB of compiled PL/SQL in future releases?

WSFC with SQL Server: Manual or automatic failover? That is the question

Mon, 2014-12-01 00:05

During the night, you receive an alert concerning your SQL Server failover cluster or your availability groups. You panic because the message displayed is "a failover has occurred... see the log for more details"...

So you try to keep calm and, after connecting to your environment, you are not able to find anything... What happened? Maybe someone triggered a failover manually and you are not aware of it. I'm sure that, reading the previous sentences, many of you will find the situation familiar, but the real question is: is it possible to distinguish a manual failover from an automatic failover with a Windows failover cluster?

The answer is yes, and one way to find out is to take a look at the cluster.log. In fact, there is a log entry that clearly identifies a manual failover of resources:

 

[RCM] rcm::RcmApi::MoveGroup: (, 1, 0, MoveType::Manual )

 

As a reminder, the resource control monitor [RCM] is responsible for performing actions according to the state of a resource. When you trigger a manual failover, the MoveGroup API is called with the identifying parameter MoveType::Manual.

Let me know if you find another way of discovering a manual failover :-)

Happy failover (or not)!

Security improvements in MySQL 5.7

Sun, 2014-11-30 05:24

If you have a look at MySQL version 5.7.4 or later, you will probably see that there are several security improvements. The list of added security features and improvements can be found on the following page: http://dev.mysql.com/doc/refman/5.7/en/mysql-nutshell.html

There are three main improvements, which are briefly described in this blog post:

1. Nonempty plugin column
2. Password lifetime policy
3. mysql_install_db secured

 

Nonempty plugin column

As of MySQL 5.7.2, the server requires account rows in the mysql.user table to have a nonempty plugin column value and disables accounts with an empty value. The following error will occur when trying to connect with a user having an empty plugin column:

 

2014-11-30T10:41:04.943384Z 2 [Note] Access denied for user 'sbtest'@'localhost' (using password: YES)

 

If the user is connected when you update the plugin column, MySQL will behave as described below:

1. The user connects to the database:

 

mysql -u sbtest -p --socket=/u00/app/mysql/admin/mysqld5/socket/mysqld5.sock
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.7.4-m14 MySQL Community Server (GPL)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>

2. Update column with root user:

mysql> update mysql.user set plugin='' where user='sbtest';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0


3. With the root user you flush the privileges:

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)


4. The following message will appear in the sbtest session:

2014-11-30T22:08:16.652477Z 8 [Warning] User entry 'sbtest'@'localhost' has an empty plugin value. The user will be ignored and no one can login with this user anymore.
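A quick way to spot the accounts concerned before they get disabled (my own illustrative query, not taken from the post or the documentation) is to look for empty plugin values directly:

mysql> SELECT user, host FROM mysql.user WHERE plugin = '';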



Password Lifetime policy

Since version 5.7.4, MySQL enables database administrators to expire account passwords manually and to establish a policy for automatic password expiration. How does it work?

Two new columns have been added to the mysql.user table:

 

| password_last_changed | timestamp            | YES | | NULL |
| password_lifetime     | smallint(5) unsigned | YES | | NULL |

 

These two columns make it possible to see when the password was last changed and to set a per-account password lifetime.

You can establish a global password policy by setting the default_password_lifetime variable in the option file. By default, this variable is set to 360, which means that all users will have to change their password about once per year. A value of 0 disables automatic password expiration.
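As an illustration (the value of 180 days is arbitrary, and this assumes the variable can be set at runtime; otherwise set it in the [mysqld] section of the option file), the global policy can be changed and checked like this:

mysql> SET GLOBAL default_password_lifetime = 180;
mysql> SHOW VARIABLES LIKE 'default_password_lifetime';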

As stated in the documentation, the global password expiration policy can be overridden as desired for individual accounts using the ALTER USER statement.


Example:

mysql> ALTER USER 'sbtest'@'localhost' PASSWORD EXPIRE INTERVAL 90 DAY;
Query OK, 0 rows affected (0.00 sec)


A client session operates in restricted mode if the account password has expired. In restricted mode, operations performed in the session result in an error until the user issues a SET PASSWORD statement to establish a new account password:

 

mysql> alter user 'sbtest'@'localhost' password expire interval 1 day;

[mysqld5] mysql -u sbtest -p

mysql> show databases;
ERROR 1820 (HY000): You must SET PASSWORD before executing this statement

mysql> set password=password('sbtest');
Query OK, 0 rows affected (0.01 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| sysbench           |
+--------------------+
2 rows in set (0.00 sec)

 

To remove the password expiration policy, simply use "expire never", as shown in the following example:

mysql>alter user 'sbtest'@'localhost' password expire never;

 

mysql_install_db secured

MySQL deployments installed using mysql_install_db are now secure by default. The following changes have been implemented as default deployment characteristics:

The installation process creates only a single root account and no longer creates anonymous-user accounts.

Example on MySQL 5.7.4:

 

mysql> select user,password, host from mysql.user;

+-------------+-------------------------------------------+-----------+
| user        | password                                  | host      |
+-------------+-------------------------------------------+-----------+
| root        | *7D2ABFF56C15D67445082FBB4ACD2DCD26C0ED57 | localhost |
+-------------+-------------------------------------------+-----------+
1 rows in set (0.00 sec)

 

Example on mysql 5.6.20:

 

mysql> select user,password, host from mysql.user;

+-------------+-------------------------------------------+----------------+
| user        | password                                  | host           |
+-------------+-------------------------------------------+----------------+
| root        | *7D2ABFF56C15D67445082FBB4ACD2DCD26C0ED57 | localhost      |
| root        |                                           | thinkpad-t540p |
| root        |                                           | 127.0.0.1      |
| root        |                                           | ::1            |
|             |                                           | localhost      |
|             |                                           | thinkpad-t540p |
+-------------+-------------------------------------------+----------------+
6 rows in set (0.01 sec)

 

After the installation you can also note that there is no "test" database anymore.

Example on MySQL 5.7.4:


mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
+--------------------+
3 rows in set (0.00 sec)


Example on MySQL 5.6.20:

mysql> show databases;

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
4 rows in set (0.00 sec)

 

Conclusion

Oracle has made some significant security improvements in MySQL 5.7. These improvements will help database administrators deploy MySQL with a better security level than in previous versions. Some improvements have also been made to the mysql_secure_installation script. The list of improvements and additional features can be found at the following URL: http://dev.mysql.com/doc/refman/5.7/en/mysql-secure-installation.html

When Oracle resets session statistics

Sat, 2014-11-29 15:59

During our Oracle 12c New Features workshop I had a very good question about whether the session statistics are reset or not when doing ALTER SESSION SET CONTAINER. My initial thought was that they were not reset because it's the same session (same SID and SERIAL#). But when I'm not 100% sure about something, I test it. And once again, it proves that even the instructor can learn something new when giving a workshop, thanks to the great interaction with the participants.

My test was very simple: querying the 'logon' statistics after an ALTER SESSION SET CONTAINER. I came up with the following tweet:

Quizz: How can I be connected and get 0 logons from V$MYSTAT ? pic.twitter.com/YZPQNU8FiH

— Franck Pachot (@FranckPachot) November 26, 2014

Of course, that needs more investigation. This is about session statistics. What about session events? And the session time model?

SQL> connect / as sysdba
Connected.

SQL> select name,value from v$mystat join v$statname using(statistic#) where name like 'logon%';

NAME                                     VALUE
----------------------------------- ----------
logons cumulative                            1
logons current                               1

I'm connected to the root container. I generate a bit of activity (using dbms_system.wait_for_event for fake i/o activity):

SQL> exec for i in 1..1e5 loop dbms_system.wait_for_event('db file sequential read', 0 , 0 ); end loop;
PL/SQL procedure successfully completed.

SQL> exec for i in 1..100 loop dbms_system.wait_for_event('db file sequential read', 0 , 1 ); end loop;
PL/SQL procedure successfully completed.

SQL> declare s date:=sysdate; begin loop exit when sysdate>s+60/24/60/60; end loop; end;
  2  /
PL/SQL procedure successfully completed.

and here are my session statistics:

The time model shows 60 seconds of CPU, and the 100 seconds of I/O is included in DB time (time model values are in microseconds):

SQL> select stat_name,value from v$sess_time_model where sid=sys_context('userenv','sid') and stat_name in ('DB time','DB CPU');

STAT_NAME                                VALUE
----------------------------------- ----------
DB time                              165084940
DB CPU                                61119000

Session events show those 100 seconds of I/O, and the longest wait took 1 second (values are in centiseconds when the column name does not include 'MICRO'):

SQL> select event,total_waits,time_waited_micro,max_wait from v$session_event where sid=sys_context('userenv','sid');

EVENT                          TOTAL_WAITS TIME_WAITED_MICRO   MAX_WAIT
------------------------------ ----------- ----------------- ----------
Disk file operations I/O                19              1407          0
db file sequential read             100100         104060605        101
SQL*Net message to client               33               123          0
SQL*Net message from client             32             13842          0

Session statistics, where time is in centiseconds, show 60 seconds of CPU and 100 + 60 seconds of DB time:

SQL> select name,value from v$mystat join v$statname using(statistic#) where name = 'DB time' or name like 'CPU%' or name like 'SQL*Net%client';

NAME                                     VALUE
----------------------------------- ----------
CPU used when call started                6118
CPU used by this session                  6118
DB time                                  16513
user I/O wait time                       10406
SQL*Net roundtrips to/from client           33

OK, so that is consistent. And it has to be consistent, as we often compare information from those three sources.

Let's now change to another container with ALTER SESSION SET CONTAINER:

SQL> show con_id

CON_ID
------------------------------
1

SQL> alter session set container=PDB1;

Session altered.

and look at the same statistics now:

SQL> select stat_name,value from v$sess_time_model where sid=sys_context('userenv','sid') and stat_name in ('DB time','DB CPU');

STAT_NAME                                VALUE
----------------------------------- ----------
DB time                              103907514
DB CPU                                    2000
SQL> select event,total_waits,time_waited_micro,max_wait from v$session_event where sid=sys_context('userenv','sid');

EVENT                          TOTAL_WAITS TIME_WAITED_MICRO   MAX_WAIT
------------------------------ ----------- ----------------- ----------
Disk file operations I/O                25              2084          0
db file sequential read             100100         104060605        101
SQL*Net message to client               42               180          0
SQL*Net message from client             41             17774          0
SQL> select name,value from v$mystat join v$statname using(statistic#) where name = 'DB time' or name like 'CPU%' or name like 'SQL*Net%client';

NAME                                     VALUE
----------------------------------- ----------
CPU used when call started                  11
CPU used by this session                    11
DB time                                      8
user I/O wait time                           0
SQL*Net roundtrips to/from client            5

All the session statistics have been reset and now have low values. And only them: session events and the time model still show cumulative values from the beginning of the session.

So this unfortunately brings inconsistencies. Here:

  • session statistic DB time cannot be compared with time model
  • SQL*Net roundtrips cannot be compared to the count of 'SQL*net message to client' event
  • CPU info from session statistics cannot be compared with the time model
  • session statistic wait time cannot be compared with session event time
And this is how I can be connected and yet see 0 logons for my session:

SQL> select name,value from v$mystat join v$statname using(statistic#) where name like 'logon%';

NAME                                     VALUE
----------------------------------- ----------
logons cumulative                            0
logons current                               0

Now, as we are talking about resetting statistics, there is something else we can do. As you see above, the session keeps track of the longest wait time for each wait event. Here, the maximum I/O wait was 1 second. We can reset that if we want:

SQL> exec dbms_system.kcfrms;
PL/SQL procedure successfully completed.

We still have all cumulative values for the session, but the high-water mark of wait event duration has been reset:

SQL> select event,total_waits,time_waited_micro,max_wait from v$session_event where sid=sys_context('userenv','sid');

EVENT                          TOTAL_WAITS TIME_WAITED_MICRO   MAX_WAIT
------------------------------ ----------- ----------------- ----------
Disk file operations I/O                29              2263          0
db file sequential read             100100         104060605          0
SQL*Net message to client               49               209          0

Finally, I made other tests that I do not reproduce here:

If you ALTER SESSION SET CONTAINER with your current container then statistics are not reset.

If you ALTER SESSION SET CURRENT_SCHEMA the statistics are not reset.

So I'll stick with reconnecting when I want to reset all session statistics together with the wait events and time model.

Windows Server 2012 R2: solving .NET Framework 3.5 installation problems

Thu, 2014-11-27 20:57

I faced a problem at a customer site last week when I tried to install the .NET Framework 3.5 - a prerequisite for installing SQL Server 2012 on a Windows Server 2012 R2. I opened the Server Manager and then navigated to the Manage, Add Roles and Features section:

 

b2ap3_thumbnail_pic01.jpg

 

I selected the .NET Framework 3.5 Features option:

 

b2ap3_thumbnail_pic02.jpg

 

I specified an alternate source path:

 

b2ap3_thumbnail_pic03.jpg

 

... and surprise! Even though an ISO of Windows Server 2012 R2 was mapped to my D: drive, the installation failed with this strange error: "The source file could not be found..."

After some investigation, I found that this problem is quite common and that Microsoft has published a fix... which unfortunately did not work for me!

I tried the same installation in different ways: command prompt, PowerShell... but with absolutely NO RESULT.

I finally decided to open a PowerShell console to check the Windows Features available on my server with the cmdlet Get-WindowsFeature:

 

b2ap3_thumbnail_Pic1.jpg

 

Strangely, the status of the .NET Framework 3.5 is not showing "Available", but "Removed"!

 

b2ap3_thumbnail_Pic2.jpg

 

So, how do I change this state from removed to available?

After some investigation and after having tried some fixes provided by people who faced the same problem as me, I finally found the Standalone Offline Installer tool that solved my problem by enabling the .NET Framework 3.5 (many thanks to Abbodi1406).

I downloaded this exe file and executed it on my server.

An installer screen appeared:

 

b2ap3_thumbnail_pic4.jpg

 

After clicking on the Next button, a command prompt screen appeared which showed the completion state of the process.

pic6.jpg

 

As soon as the process was finished, I went back to my PowerShell console to check whether the .NET Framework 3.5 was now available, by running the Get-WindowsFeature cmdlet again:

 

b2ap3_thumbnail_pic10.jpg

 

The .NET Framework 3.5 was now available, and I was able to restart the installation process from the beginning by navigating to Server Manager, selecting the feature and providing the alternate source path.

pic8.jpg

I finally succeeded in installing the .NET Framework 3.5!

I hope that my blog post will help some of you to resolve this installation problem ;-)

Oracle AVDF - Database Firewall Policies

Thu, 2014-11-27 04:13

The successful deployment of a Database Firewall depends on an effective policy. Oracle AVDF includes preconfigured firewall policies as described in the Firewall Policy page in the Policy tab of the Audit Vault Server console.

These include policies that log all SQL statements, or log only unique SQL statements. In addition, the Database Firewall policy editor enables you to design your own policies quickly and efficiently.

Policy rules can depend on any combination of the SQL statement type, name of the database user, IP address of the database client, operating system user name, client program name, or any exceptions you specify.

 

First policy and global concept

1. Log in to the Audit Vault Server console as an auditor, and click on the Policy tab:

 

001_20141122-190809_1.png

 

2. Under the Policy menu, click Firewall Policy.

3. Click Create Policy.

The Create Policy dialog appears. Select the Database Type from the drop-down list (the choices are IBM DB2, Microsoft SQL Server, MySQL, Oracle Database, Sybase ASE and Sybase SQL Anywhere), enter a Policy Name and, optionally, enter a Description:

 

002.png

 

4. Click on “Create”. The new policy is created, and the policy's Overview page appears:

 

003.png

 

When you create a new policy, or click an existing policy name in the Firewall Policies page, that policy's Overview page appears. This page shows the policy rules that are being applied to the statement types (clusters) being monitored by the Database Firewall, as well as exceptions and other rules that may apply.

The policy's Overview page is divided into these sub-sections:

  • Exception Rules - Lists exceptions you have created. The rules that you have assigned to SQL statement clusters will not apply to these exceptions. You can move the rules up or down in the list. The rules are evaluated in the order listed.
  • Analyzed SQL - Displays the number of SQL statement clusters for which you have defined policy rules, and their policy actions (such as Warn or Block).
  • Novelty Policies (Any) - Lists special policies you have created for specific statement classes and/or specific tables in your secured target databases. If you have identified specific tables in a policy in this section, the policy rule applies if it matches Any of the tables.
  • Novelty Policies (All) - Lists special policies you have created for specific statement classes and/or specific tables in your secured target databases. If you have identified specific tables in a policy in this section, the policy rule applies if it matches All of the tables.
  • Default Rule - Shows the default rule for any statements that are not matched by the rules set for Analyzed SQL clusters, Exceptions, or Novelty Policies.
  • Policy Controls - Lets you configure firewall policy settings, create policy profiles, as well as sets of filters to use in defining profiles and Exception rules.

 

Practical case

Developing a policy is an iterative process that keeps refining and improving the policy with new data. In order to be able to create a policy, statements have to be executed on the database first. The examples below present some of the possibilities provided by firewall policies.

These examples are based on a very simple context with two schemas/users:

  1. PSI
  2. GRS

PSI schema contains three tables:

  1. PSI.CREDIT_CARD containing credit card numbers
  2. PSI.EMP containing employees’ salaries
  3. PSI.TEST containing one non-sensitive row

 

SQL> select table_name from dba_tables where owner='PSI';

TABLE_NAME
------------------------------
EMP
CREDIT_CARD
TEST

 

Novelty Policy

The CREDIT_CARD table contains credit card numbers and EMP contains employee salaries. These two tables are very sensitive, and nobody should be able to look at them. The desired access matrix is:

 

 

                    GRS     PSI
PSI.EMP             NOK     NOK
PSI.CREDIT_CARD     NOK     NOK
PSI.TEST            OK      OK

 

The first step in order to create this policy is to create a novelty rule. Novelty policies specify the action, logging level, and threat severity to use for specific types of statements and/or statements that operate on selected tables. Novelty policies can be used to loosen or tighten your normal policy rules if certain statements are encountered. In our context we want to create a novelty policy that will block all access to these tables:

1. In the Audit Vault Server console, select the “Policy” tab.

2. From the Policy menu, click “Firewall Policy”.

3. Click on the newly created firewall policy named “MyPolicy”.

4. Click Add Novelty Rule in section Novelty Policy (Any):

5. In the Novelty Policy Details dialog, define the following:

a. Novelty Rule: Enter a name for this rule: MyNR

b. Statement Classes: Select one or more types of statements that SQL statements must match in order to apply this rule. In this example we have to select “Data Manipulation Read Only”

c. Policy Controls: Select the Action, Logging Level, and Threat Severity for this rule from the appropriate drop-down list. In this example we have to select “Block” for action and specify in the substitution field, the statement below:

select 'You do not have access to this table' from dual

6. Affected Tables: Select the table(s) to use for matching statements to this policy. In order to have tables in this list, the tables have to have been accessed at least once; if there is no activity on the database, the list will be empty (if they are missing, see the sketch after the screenshot below). In our case we select the tables PSI.EMP and PSI.CREDIT_CARD and click on “Add Tables”:

 

004.png
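
If PSI.EMP and PSI.CREDIT_CARD do not show up in the Affected Tables list yet, generating a little read activity on them first is usually enough for the Database Firewall to learn about them. A possible way to do that (any user able to read the tables will do):

-- generate some traffic so the Database Firewall sees these tables
select count(*) from psi.emp;
select count(*) from psi.credit_card;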

 

7. Click on “Create”.

8. Now we can test this policy. From now on, it will block any user trying to access these two tables. In order to apply the policy, we have to save it by clicking on “Save” and then “Publish”.

9. Click on “Secured Targets”

10. Click on the target where you want to apply the policy

11. Click on Firewall Policy

12. Select the Policy “MyPolicy”

13. Now you can check that the policy is applied by doing a select on this table.

 

C:\Users\administrateur>sqlplus psi/psi@souk

SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 13:36:14 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connecté à :
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.credit_card;

'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table

SQL>

 

We can execute the same query with user GRS; the result will be the same:

 

C:\Users\administrateur>sqlplus grs/grs@souk

SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 13:36:14 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connecté à :
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.credit_card;

'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table

SQL>

Exception Rule

The table CREDIT_CARD contains credit card numbers and EMP contains employee salaries. These two tables are still very sensitive, but since PSI has been promoted Chief Financial Officer he needs access to them. Therefore we will create an exception for him:

 

Table             GRS   PSI
---------------   ---   ---
PSI.EMP           NOK   OK
PSI.CREDIT_CARD   NOK   OK
PSI.TEST          OK    OK

  1. In order to modify the policy, it must not be in use anymore. Click on “Secured Targets”, select the target, click on “Firewall Policy” and change the policy to “Log All”:

 

005.png

 

2. Now you can make modification to your policy “MyPolicy”.

First of all we need to create a Profile. Within a firewall policy, a profile lets you define a different set of policy rules based on the session data associated with SQL statements. To define the profile, you use the session filters you defined in the Policy Controls section of the firewall policy. These session filters filter SQL statements based on:

  • IP addresses
  • Database user login names
  • Client program names (for example, SQL*Plus)
  • Operating system user names

In this example we will create a profile based on the database user login name. This user will be PSI.

3. Click on Policy

4. Click on Firewall Policy

5. Click on MyPolicy

6. Click on Database User Set

7. Create a new set by clicking on “Create New Set”

8. Enter values for field New Set Name and member:

 

006.png

 

9. Click on “Create Set”

10. Click on “Cancel”

11. Click on “Profiles”

12. Create a new Profile by clicking on “Create New Profile”

13. Enter the Profile Name and select “UsersHavingAccessToMyTable” in the field “DB User Set”:

 

007.png

 

14. Click on “Create Profile”

 

008.png

 

Now we have to create an exception based on this profile. An exception determines the action, logging level, and threat severity to use when certain session data is encountered. For example, an exception could specify rules for statements that originate (or do not originate) from selected client IP addresses or database user names. In this example, the exception will be based on database user name.

Exceptions override all other policy rules. For example, you may want to override the normal policy rules if SQL statements originate from an administrator, or if they originate from anywhere other than a specific IP address.

You can define many exceptions and control the order in which they are evaluated. Each Exception has its own Action, Logging, and Threat Severity settings.

15. Click on policy

16. Click on firewall policy

17. Click on your newly created policy “MyPolicy”

18. Click on “Add Exception”

19. Enter the exception rule name: “ExceptionForPSITable”

20. In DB User Set, select “Include” and select “UsersHavingAccessToMyTable”

21. In Policy Control click on “Pass”

22. Click on “Create”:

 

009.png

 

23. Click on “Save” and “Publish”

24. Apply this policy to the target

Now the user PSI can access all his tables, while user GRS still has no access to the sensitive tables.

 

C:\Users\administrateur>sqlplus grs/grs@souk

SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 14:09:07 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connecté à :
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.emp;

'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table

SQL> select * from psi.credit_card;

'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table

SQL> select * from psi.test;

NAME
----------
hello

C:\Users\administrateur>sqlplus psi/psi@souk

SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 14:18:54 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connecté à :
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.credit_card;

NAME               ID
---------- ----------
Larry      4,8960E+15
John       7,8623E+15

SQL> select * from psi.emp;

NAME              SAL
---------- ----------
Larry          150000
John            80000

SQL> select * from psi.test;

NAME
----------
hello
SQL>

Analyzed SQL

With the exception, we granted the PSI database user access to all his tables. Since PSI did not present good results to the shareholders, he has been replaced by a new CFO, who decided that PSI now only has access to the credit card numbers and is no longer allowed to run select statements on the employees’ salaries table:

 

 

Table             GRS   PSI
---------------   ---   ---
PSI.EMP           NOK   NOK
PSI.CREDIT_CARD   NOK   OK
PSI.TEST          OK    OK

  1. First of all we have to remove the exception we did before by clicking on the exception rule and clicking on “delete”

  2. In the policy overview, click on “Modify SQL”, then click on “Change”

  3. Select “Secured Target”, select the profile “ProfileForPrivsUsers” and enter in Event Time last 24 hours.

  4. Click on “Apply”

  5. Click on column header “User Name”

  6. Select “psi”

  7. A line looking like the one below should appear in the list

  8. Select this line

  9. Click on “Set Policy”

  10. In the Action list click on “Pass”:


    0010.png


  11. If you have a look at the list of SQL, the following statement should appear.


    0013.png

  12. Create a profile for user GRS named “ProfileForStdUsers” as we did for user PSI. This profile will not be allowed to execute the statement “select * from psi.credit_card”.

  13. In “Analyze SQL” select profile “ProfileForStdUsers” and filter on GRS user as we did in step 6.

  14. Select the following statement:


    0014.png


  15. Click on “Set Policy”

  16. Select action “Block”

  17. In the field Substitution enter the following: “select 'Your profile does not allow access to this statement' from dual ”

  18. Now your section “Analyzed SQL” should look like the screenshot below:

  19. Save and Publish the modifications made to this policy

  20. Apply this policy to your target, click on secured targets, Firewall Policy and select “MyPolicy” in the list.

Now we can test the access:


C:\Users\administrateur>sqlplus grs/grs@souk

SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 16:33:55 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connecté à :
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.emp;

'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table

SQL> select * from psi.credit_card;

'YOURPROFILEDOESNOTALLOWACCESSTOTHISSTATEMENT'
----------------------------------------------------
Your profile does not allow access to this statement

SQL> select * from psi.test;

NAME
----------
hello

 

C:\Users\administrateur>sqlplus psi/psi@souk

SQL*Plus: Release 12.1.0.2.0 Production on Mer. Nov. 12 16:35:35 2014

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connecté à :
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> select * from psi.emp;

'YOUDONOTHAVEACCESSTOTHISTABLE'
------------------------------------
You do not have access to this table

SQL> select * from psi.credit_card;

NAME               ID
---------- ----------
Larry      4,8960E+15
John       7,8623E+15

SQL> select * from psi.test;

NAME
----------
hello

 

This blog is not intended to be a comprehensive description of all AVDF policy functionalities, but I do hope it gave you a good overview of some of the basic ones.

Oracle 12c privilege analysis rocks

Wed, 2014-11-26 14:18

12c came with a very nice feature: privilege analysis. You don't know which privileges are required? Then just grant DBA, run your application, and check which minimal privileges are actually needed. And today I've discovered how powerful it is: you can even see privileges used internally, even when no SQL is issued, and even when they are not documented.

It starts like that, with a question from Vladimir Sitnikov (who publishes very interesting stuff from his twitter account) in the tone of a challenge:

@FranckPachot Ok, ace. Do you think dbms_utility.get_parameter_value requires special grants (e.g. in current 11gR2)?

— Vladimir Sitnikov (@VladimirSitnikv) November 26, 2014

So I went to the doc, which describes a special security model for some functions, but says nothing about get_parameter_value.

Then I created a simple user with only CREATE SESSION privilege and got:

SQL> drop user TEST;
User dropped.
SQL> grant create session to TEST identified by TEST;
Grant succeeded.
SQL> connect TEST/TEST
Connected.
SQL> variable i number
SQL> variable s varchar2(1000)
SQL> variable t number
SQL> exec :t:=DBMS_UTILITY.GET_PARAMETER_VALUE('NLS_LENGTH_SEMANTICS',:i,:s);
BEGIN :t:=DBMS_UTILITY.GET_PARAMETER_VALUE('NLS_LENGTH_SEMANTICS',:i,:s); END;
*
ERROR at line 1:
ORA-01031: insufficient privileges
ORA-06512: at "SYS.DBMS_UTILITY", line 140
ORA-06512: at line 1

So, which privileges do you need? Let's try the 12c privilege analysis:

SQL> grant dba to TEST;
Grant succeeded.

SQL> connect / as sysdba
Connected.

SQL> exec dbms_privilege_capture.create_capture (name=>'demo',type =>dbms_privilege_capture.g_role,roles=>role_name_list('DBA'));
PL/SQL procedure successfully completed.

SQL> exec dbms_privilege_capture.enable_capture (name=>'demo');
PL/SQL procedure successfully completed.

SQL> connect TEST/TEST
Connected.

SQL> exec :t:=dbms_utility.get_parameter_value('NLS_LENGTH_SEMANTICS',:i,:s);
PL/SQL procedure successfully completed.

SQL> print s

S
--------------------------------------------------------------
BYTE

SQL> connect / as sysdba
Connected.

SQL> exec dbms_privilege_capture.disable_capture(name=>'demo');
PL/SQL procedure successfully completed.

SQL> exec dbms_privilege_capture.generate_result(name=>'demo');
PL/SQL procedure successfully completed.

SQL> select object_owner,object_name,obj_priv from dba_used_objprivs ;

OBJECT_OWN OBJECT_NAME     OBJ_PRIV
---------- --------------- ----------
SYS        V_$PARAMETER    SELECT

SQL> select path from dba_used_objprivs_path ;

PATH
--------------------------------------------------------------
GRANT_PATH('TEST', 'DBA', 'EXP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'EM_EXPRESS_ALL', 'EM_EXPRESS_BASIC', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'DATAPUMP_IMP_FULL_DATABASE', 'EXP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'DATAPUMP_EXP_FULL_DATABASE', 'EXP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'IMP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'DATAPUMP_IMP_FULL_DATABASE', 'IMP_FULL_DATABASE', 'SELECT_CATALOG_ROLE')
GRANT_PATH('TEST', 'DBA', 'SELECT_CATALOG_ROLE')

SQL> exec dbms_privilege_capture.drop_capture (name=>'demo');
PL/SQL procedure successfully completed.

I've granted the DBA role and have run the privilege analysis capture on that role while calling the function. And bingo: you need to be granted SELECT on V_$PARAMETER (which comes into the DBA role through SELECT_CATALOG_ROLE)... which sounds legitimate, as the goal is to get a parameter value.
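
As a side check (my own addition, not part of the original test), granting only that object privilege to the TEST user, instead of the whole DBA role, should be enough to make the call work. Whether this single grant is sufficient on your version is an assumption worth verifying:

-- minimal sketch: keep only the object privilege identified by the capture
connect / as sysdba
revoke dba from TEST;
grant select on sys.v_$parameter to TEST;

connect TEST/TEST
variable i number
variable s varchar2(1000)
variable t number
exec :t:=dbms_utility.get_parameter_value('NLS_LENGTH_SEMANTICS',:i,:s);
print s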

But do you know what? DBMS_UTILITY.GET_PARAMETER_VALUE does not execute any select statement. That behavior is documented in the package for other functions, but not for GET_PARAMETER_VALUE:

Rem The dbms_utility package is run-as-caller (psdicd.c) only for
Rem its name_resolve, compile_schema, analyze_schema, wait_on_pending_dml,
Rem and expand_sql_text procedures. This package is not run-as-caller
Rem w.r.t. SQL (psdpgi.c) so that the SQL works correctly (runs as
Rem SYS). The privileges are checked via dbms_ddl.

 

That function calls a C function (KSPGPNICD), so we don't know what happens behind the scenes. If you sql_trace it, you don't see anything about V_$PARAMETER.

But privilege analysis shows the required privileges anyway, and that rocks.

SQL Server tips: Executing a query with the EXECUTE command

Mon, 2014-11-24 22:52

This short SQL Server blog post is meant to help people who have experienced the error messages 2812 and 203 with the EXECUTE command.
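
The original examples are not reproduced here, but as a hedged illustration of the two error numbers mentioned (the procedure name and the variable below are invented for the demo):

-- Msg 2812: the procedure name cannot be resolved (typo, wrong schema or wrong database)
EXECUTE dbo.usp_DoesNotExist;

-- Msg 203: EXECUTE @variable treats the content of the variable as a procedure NAME,
-- so passing a whole statement raises "The name ... is not a valid identifier"
DECLARE @sql nvarchar(200) = N'SELECT TOP (5) name FROM sys.objects';
EXECUTE @sql;

-- Workarounds: wrap the variable in parentheses, or use sp_executesql
EXECUTE (@sql);
EXECUTE sp_executesql @sql;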

Journées SQL Server 2014: it is not too late to register

Mon, 2014-11-24 01:35

The biggest French SQL Server event will take place soon (December 1 and 2) in Paris, and you still have one week left to register! Also think about ordering your lunch box, which can be handy if you want to eat :-)

 

blog_24_-_5_-_AlwaysOn

 

For my part, I will have the opportunity to open the ball and to meet you during a session on SQL Server and availability groups (it is almost becoming a habit) on Monday morning at 10:30. I have been implementing availability group infrastructures at customer sites for quite some time now, and this time I wanted to share with you the issues I have faced during my various engagements, both during the implementation phases and on concrete production problems. A session with a lot of real-world cases and demos! More information here.

This year is also special because dbi services, the company I work for, is one of the (Gold) sponsors of the event. It is another way to show that we attach a lot of importance to this kind of event around database technologies.

A quick shout-out as well to my colleague Stéphane Haby (SQL Server MVP), who will present a session on rule management with SQL Server... a session not to be missed if security management is an important concern for you.

There is also a substantial number of sessions that may interest you, in several areas (Microsoft Azure, BI, the SQL engine, etc.). Personally, I have already planned which sessions I want to see. Have you? If not, head over to the session agenda... and do your shopping :-)

 

blog_24_-_3_-_Agenda

 

Hoping to see many of you there, whether at the sessions or at the dbi services booth!

SQL Server 2014: buffer pool extension & corruption

Wed, 2014-11-19 01:49

I had the opportunity to attend Paul Randal’s session on advanced data recovery techniques at the PASS Summit. During this session, one attendee asked Paul whether a page that has just been corrupted can remain in the buffer pool extension (BPE). As you probably know, the BPE only deals with clean pages. Paul hesitated a lot and asked us to test it, and this is exactly what I will do in this blog post.

First, let’s start by limiting the maximum memory that can be used by the buffer pool:

 

-- Configure SQL Server max memory to 1024 MB
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 1024;
GO
RECONFIGURE;
GO

 

Then we can enable the buffer pool extension feature:

 

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
(
    -- change the path if necessary
    FILENAME = N'E:SQLSERVERMSSQLSERVERDATAssdbuffer_pool.bpe',
    SIZE = 4096 MB
);

 

I configured a buffer pool extension size of 4x the max memory value configured for the buffer cache.
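
As a side note (not in the original post), the state and size of the extension can be checked with the dedicated DMV:

-- check that the buffer pool extension is enabled and sized as expected
SELECT path, state_description, current_size_in_kb / 1024 AS size_in_mb
FROM sys.dm_os_buffer_pool_extension_configuration;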

At this point I need a database with a big size in order to have a chance to retrieve some data pages in the buffer pool extension part. My AdventureWorks2012 database will fit this purpose:

 

USE AdventureWorks2012;
GO

EXEC sp_spaceused;

 

blog_23_-_1_-_adventureworks2012_size

 

I also have 3 big tables in this database: dbo.bigTransactionHistory_rs1 (2.2 GB), dbo.bigTransactionHistory_rs2 (2.1 GB) and BigTransactionHistory (1.2 GB).

 

blog_23_-_2_-_adventureworks2012_top_tables_size

 

I have a good chance of finding some pages related to these tables in the BPE if I perform a big operation like a DBCC CHECKDB on the AdventureWorks2012 database.
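
For example, a full integrity check like the following generates plenty of clean page reads (a sketch; the options are up to you):

DBCC CHECKDB (AdventureWorks2012) WITH NO_INFOMSGS;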

After performing a complete integrity check of this database and executing some queries as well, here is the picture of my buffer pool:

 

SELECT
    CASE is_in_bpool_extension
        WHEN 1 THEN 'SSD'
        ELSE 'RAM'
    END AS location,
    COUNT(*) AS nb_pages,
    COUNT(*) * 8 / 1024 AS size_in_mb,
    COUNT(*) * 100. / (SELECT COUNT(*) FROM sys.dm_os_buffer_descriptors (NOLOCK)) AS percent_
FROM sys.dm_os_buffer_descriptors (NOLOCK)
GROUP BY is_in_bpool_extension
ORDER BY location;

 

blog_23_-_21_-_buffer_pool_overview

 

Is it possible to find some pages in the buffer pool extension part that concern the table bigTransactionHistory_rs1?

 

SELECT
    bd.page_id, da.page_type, bd.is_modified
FROM sys.dm_os_buffer_descriptors AS bd
    JOIN sys.dm_db_database_page_allocations(DB_ID('AdventureWorks2012'), OBJECT_ID('dbo.bigTransactionHistory_rs1'), NULL, NULL, DEFAULT) AS da
        ON bd.database_id = da.database_id
            AND bd.file_id = da.allocated_page_file_id
            AND bd.page_id = da.allocated_page_page_id
WHERE bd.database_id = DB_ID('AdventureWorks2012')
    AND bd.is_in_bpool_extension = 1
    AND da.page_type IS NULL

 

blog_23_-_3_-_bigTransactionHistory_rs1_pages

 

I chose the first page (ID 195426) and corrupted it:

 

DBCC WRITEPAGE(AdventureWorks2012, 1, 195426, 0, 2, 0x0000);

 

blog_23_-_4_-_corrupt_page

 

Then, let's take a look at the page with ID 195426 to see if it still remains in the BPE:

 

SELECT
    page_id,
    is_in_bpool_extension,
    is_modified
FROM sys.dm_os_buffer_descriptors AS bd
WHERE bd.page_id = 195426

 

blog_23_-_5_-_check_location_page_after_corruption

 

Ok, (fortunately) it does not :-) However, we can notice that the page has not been flagged as "modified" when looking at the sys.dm_os_buffer_descriptors DMV. My first guess was that using DBCC WRITEPAGE is not a classic way of modifying a page, but in fact the mechanism used by the BPE is simply not what we might imagine at first sight.

Indeed, moving a page out of the BPE is almost orthogonal to the dirty nature of the page: the buffer manager moves the page back into memory because it becomes hot due to the access attempt, and modifying a page requires accessing it first (a particular thanks to Evgeny Krivosheev - SQL Server Program Manager - for this clarification).

We can verify if the page with ID 195426 is really corrupted (remember this page belongs to the bigTransactionHistory_rs1 table):

 

DBCC CHECKTABLE(bigTransactionHistory_rs1) WITH NO_INFOMSGS, ALL_ERRORMSGS, TABLERESULTS;

 

blog_23_-_6_-_check_corruption_for_data_page

 

Note some other corruptions but in this context it doesn't matter because I performed some other corruption tests in this database :-)

So the next question could be the following: do you think a corrupted page can be moved from memory into the buffer pool extension? ... The following test will give us the answer:

 

CHECKPOINT
GO
DBCC DROPCLEANBUFFERS;
GO
-- Perform queries in order to fill up the buffer cache and its extension

 

We flush the dirty pages to disk and then we clean the buffer cache. Afterwards, I perform some other queries in order to populate the buffer cache (memory and BPE) with database pages. At this point we have only clean pages. A quick look at the buffer cache with the sys.dm_os_buffer_descriptors DMV gives us the following picture (I recorded into a temporary table each time I found the page ID 195426 in the buffer cache, either in memory or in the BPE; a sketch of that polling is shown further down):

 

blog_23_-_7_-_find_out_a_corrupted_page_in_the_BPE

 

We can notice that a corrupted page can be part of the buffer pool extension, and this is expected behavior because at this point the page ID 195426 is only corrupted, not dirty or modified.
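
The exact collection script is not shown in the post; here is a sketch of the kind of polling that could produce such a picture (the temporary table and its columns are my own):

-- run this repeatedly while executing the workload queries
IF OBJECT_ID('tempdb..#page_location') IS NULL
    CREATE TABLE #page_location
    (
        capture_time          DATETIME2 DEFAULT SYSDATETIME(),
        page_id               INT,
        is_in_bpool_extension BIT,
        is_modified           BIT
    );

INSERT INTO #page_location (page_id, is_in_bpool_extension, is_modified)
SELECT page_id, is_in_bpool_extension, is_modified
FROM sys.dm_os_buffer_descriptors
WHERE database_id = DB_ID('AdventureWorks2012')
    AND page_id = 195426;

SELECT * FROM #page_location ORDER BY capture_time;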

Enjoy!

Oracle locks: Identifying blocking sessions

Mon, 2014-11-17 09:17

When you have sessions blocked on locks, you probably have all the information about the waiters (they call you, and anyway their waiting session is visible in V$SESSION or ASH). But you usually need more information to identify the blocker.
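
Before drilling down, a very simple first look (my own addition) can come from V$SESSION itself; the V$WAIT_CHAINS query below then gives the complete chain:

-- quick overview of blocked sessions (the wait-chain query below is far more complete)
select sid, serial#, blocking_instance, blocking_session, event, seconds_in_wait
from v$session
where blocking_session is not null;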

Here is a query I use to get that quickly, based on V$WAIT_CHAINS

Here is the result I want to get:

session                 wait event                                      minutes USER PRO
----------------------- ----------------------------------------------- ------- ---- ---
 ABCLBP1 '831,54109@1'  SQL*Net message from client                        13.5 SYS  sql
  ABCLBP4 '395,21891@4' enq: TX - row lock contention on TABLE             13.2 SYS  SQL
                          "SYS"."TEST_FRANCK" on rowid AAC0aCAAnAAABSCAAA

I have information about the blocking session, the waiting session, the type of lock (here TX - row lock) and, because it is a row lock, I want to know the ROWID of the locked row.

Here is the query I used to get it:

column "wait event" format a50 word_wrap
column "session" format a25
column "minutes" format 9999D9
column CHAIN_ID noprint
column N noprint
column l noprint
with w as (
select
 chain_id,rownum n,level l
 ,lpad(' ',level,' ')||(select instance_name from gv$instance where inst_id=w.instance)||' '''||w.sid||','||w.sess_serial#||'@'||w.instance||'''' "session"
 ,lpad(' ',level,' ')||w.wait_event_text ||
   case
   when w.wait_event_text like 'enq: TM%' then
    ' mode '||decode(w.p1 ,1414332418,'Row-S' ,1414332419,'Row-X' ,1414332420,'Share' ,1414332421,'Share RX' ,1414332422,'eXclusive')
     ||( select ' on '||object_type||' "'||owner||'"."'||object_name||'" ' from all_objects where object_id=w.p2 )
   when w.wait_event_text like 'enq: TX%' then
    (
     select ' on '||object_type||' "'||owner||'"."'||object_name||'" on rowid '
     ||dbms_rowid.rowid_create(1,data_object_id,relative_fno,w.row_wait_block#,w.row_wait_row#)
     from all_objects ,dba_data_files where object_id=w.row_wait_obj# and w.row_wait_file#=file_id
    )
   end "wait event"
 , w.in_wait_secs/60 "minutes"
 , s.username , s.program
 from v$wait_chains w join gv$session s on (s.sid=w.sid and s.serial#=w.sess_serial# and s.inst_id=w.instance)
 connect by prior w.sid=w.blocker_sid and prior w.sess_serial#=w.blocker_sess_serial# and prior w.instance = w.blocker_instance
 start with w.blocker_sid is null
)
select * from w where chain_id in (select chain_id from w group by chain_id having max("minutes") >= 1 and max(l)>1 )
order by n
/

This query retrieves the wait chains where a session is waiting for more than one minute on a table lock (TM) or row lock (TX) .

When it is a table lock (TM), I get the locked object_id from the P2 parameter, in order to know the table name.

When it is a row lock, I get the table and the rowid from V$SESSION. Note that I have to join with DBA_DATA_FILES in order to convert the absolute file_id to a relative one, and with DBA_OBJECTS in order to convert the object_id to the data_object_id, so that I can build the ROWID.
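
Taken out of the wait-chain query, the same ROWID construction can be sketched on its own (this standalone query is my own illustration):

-- rebuild the ROWID of the row a session is waiting on
select dbms_rowid.rowid_create(
         1,                 -- extended rowid format
         o.data_object_id,  -- data object id (not object_id)
         f.relative_fno,    -- relative file number within the tablespace
         s.row_wait_block#, -- block number from V$SESSION
         s.row_wait_row#    -- row number from V$SESSION
       ) as locked_rowid
from v$session s
join dba_objects o on o.object_id = s.row_wait_obj#
join dba_data_files f on f.file_id = s.row_wait_file#
where s.row_wait_obj# > 0;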

More information about ROWID, relative file number and data object id in my previous post: From 8.0 extended rowid to 12c pluggable db: Why Oracle Database is still a great software

 

My planning for DOAG 2014

Mon, 2014-11-17 01:53

I'm quickly checking the planning for DOAG these 3 days and here is the list of sessions I would like to attend.

There are still a lot of interesting ones even if my choice is limited by the language (I would like to understand German, but I'm limited to sessions in English). And I still have some concurrency issues to solve because I cannot be at two places at the same time.

12c SQL Plan Directive: state has changed between 12.1.0.1 and 12.1.0.2

Sun, 2014-11-16 09:38

SQL Plan Directives have been introduced in 12c. If you have scripts that check their states (and I have that, as I prefer to monitor closely the features that are new), you have probably seen a difference when going from 12.1.0.1 - the first release of 12c and the only one available yet in Standard Edition - to 12.1.0.2 - the first patchset. I'll explain here what the SQL Plan Directive states are and how they changed.

When a SQL Plan Directive is created, its state is NEW. It means that a misestimate has been encountered but the reason has not yet been determined. Remember that the directive is created only at execution time, so very limited information is available, except the fact that A-Rows is different from E-Rows.

On a subsequent optimization (of the same or a different statement that uses the same table and the same column predicates), the optimizer sees the SQL Plan Directive and can update it with more information. If the reason for the misestimation is that some statistics are missing, then the state of the directive is changed from NEW to MISSING_STATS.

MISSING_STATS directives lead to short term and long term solutions:

  • each new query will solve missing stats by gathering more statistics with Dynamic Sampling
  • the next dbms_stats gathering will gather extended statistics to definitively fix the issue (a sketch of how to check this is shown right after this list)
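
A hedged sketch of how to check that second point, using the DEMO.DEMO_TABLE that appears later in this post (the column group should show up as an extension after the gathering):

-- not from the original post: let DBMS_STATS create the extension, then look at it
exec dbms_stats.gather_table_stats(user,'DEMO_TABLE',options=>'GATHER AUTO')

select extension_name, extension
from user_stat_extensions
where table_name = 'DEMO_TABLE';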

Once the latter is done, we can expect to have good estimations without Dynamic Sampling anymore, thanks to the extended statistics. But that has to be checked. So the next query optimization will check it and update the SQL plan Directive accordingly:

  • HAS_STATS to show that statistics (extended statistics here) are now sufficient to get correct estimations
  • PERMANENT to show that the misestimation is still there and extended statistics have to be ignored because they were calculated for an issue that they do not solve.
Finally, the HAS_STATS SQL Plan Directives are purged after some weeks, as they are not needed anymore - the issue being solved definitively.

So what has changed in 12.1.0.2? There are only two states now:
  • 'USABLE' that covers the 'NEW', 'MISSING_STATS' and 'PERMANENT' which means that a directive is there to be evaluated by the optimizer, but the issue is not solved yet.
  • 'SUPERSEDED' when it has been solved (the 'HAS_STATS') or it is redundant with another directive, which means that the issue is solved somewhere else.
This is a simplification, but if you want the same level of detail that you had in 12.1.0.1, you can get it from the 'internal state', which is exposed as XML in the NOTES column.

Here is an example of two USABLE states:
SQL> select directive_id,type,state,reason,notes from dba_sql_plan_directives where directive_id in(select directive_id from dba_sql_plan_dir_objects where owner='DEMO' );

        DIRECTIVE_ID TYPE              STATE         REASON
-------------------- ---------------- ---------- ------------------------------------
NOTES
--------------------------------------------------------------------------------
 1350980939868665098 DYNAMIC_SAMPLING USABLE         SINGLE TABLE CARDINALITY MISESTIMATE

  NEW
  NO
  {EC(DEMO.DEMO_TABLE)[A, B, C, D]}

which is 'NEW' and  
        DIRECTIVE_ID TYPE              STATE         REASON
-------------------- ---------------- ---------- ------------------------------------
NOTES
--------------------------------------------------------------------------------
 1350980939868665098 DYNAMIC_SAMPLING USABLE         SINGLE TABLE CARDINALITY MISESTIMATE

  MISSING_STATS
  NO
  {EC(DEMO.DEMO_TABLE)[A, B, C, D]}

which is 'MISSING_STATS'.

And a 'SUPERSEDED' one once dbms_stats has run:
        DIRECTIVE_ID TYPE              STATE         REASON
-------------------- ---------------- ---------- ------------------------------------
NOTES
--------------------------------------------------------------------------------
 1350980939868665098 DYNAMIC_SAMPLING SUPERSEDED SINGLE TABLE CARDINALITY MISESTIMATE

  HAS_STATS
  NO
  {EC(DEMO.DEMO_TABLE)[A, B, C, D]}

which is 'HAS_STATS'
Note that the XML tags were eaten by my blog editor. They are: internal_state, redundant, spd_text.
We do full demos of SQL Plan Directives in our 'Oracle 12c new features workshop' and 'Oracle performance tuning workshop'. It's a great feature that brings the CBO to another level of intelligence. And there are some misconceptions about them: some people think that they store statistics, but that's wrong. Statistics come from cardinality feedback, dynamic sampling, or object statistics; there is no need for another component to store them. The only thing that is stored by a SQL Plan Directive is its state. Which makes the state a very important piece of information - and the reason for this blog post.

 

 

Oracle AVDF post-installation configuration

Thu, 2014-11-13 04:47

In one of my previous blogs, "Oracle Audit Vault and Database Firewall (AVDF) 12.1 - installation on VirtualBox", I explained how to install AVDF on VirtualBox. Since some of you asked for a blog on how to configure AVDF, I decided to write this post on the AVDF post-installation configuration. It only covers the post-installation phase; a third blog will be dedicated to practical cases concerning the configuration of Database Firewall policies.

Specifying the Audit Vault Server Certificate and IP Address

You must associate each Database Firewall with an Audit Vault Server by specifying the server's certificate and IP address, so that the Audit Vault Server can manage the firewall. If you are using a resilient pair of Audit Vault Servers for high availability, you must associate the firewall to both servers.


1. Log in to the Audit Vault Server as an administrator, and then click the Settings tab.

2. In the Security menu, click Certificate. The server's certificate is displayed.

avdf001.png
  
3. Copy the server's certificate.

4. Log in to the Database Firewall administration console

5. In the System menu, click Audit Vault Server.

6. Enter the IP Address of the Audit Vault Server.

7. Paste the Audit Vault Server's Certificate in the next field.

avdf002.png


8.  Click Apply.

Registering Oracle Secured Target

Ensure That Auditing Is Enabled in the Oracle Secured Target Database

oracle@vmtest12c:/home/oracle/ [DUMMY] SOUK
******** dbi services Ltd. ********
STATUS         : OPEN
DB_UNIQUE_NAME : SOUK
OPEN_MODE      : READ WRITE
LOG_MODE       : NOARCHIVELOG
DATABASE_ROLE  : PRIMARY
FLASHBACK_ON   : NO
FORCE_LOGGING  : NO
VERSION        : 11.2.0.3.0
***********************************
oracle@vmtest12c:/home/oracle/ [SOUK] sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.3.0 Production on Sun Sep 15 22:35:49 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

SQL> SHOW PARAMETER AUDIT_TRAIL

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------------------------
audit_trail                          string      DB

 

If the output of the SHOW PARAMETER command is NONE, or if it is an auditing value that you want to change, then you can change the setting as follows. For example, if you want to change it to XML and you are using a server parameter file, you would enter the following:

SQL> ALTER SYSTEM SET AUDIT_TRAIL=XML SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

Registering Hosts in the Audit Vault Server

1. Log in to the Audit Vault Server console as an administrator.  

2. Click the Hosts tab. A list of the registered hosts, if present, appears in the Hosts page. To control the view of this list see "Working With Lists of Objects in the UI".

3. Click Register.  

4. Enter the Host Name and Host IP address.  

avdf003.png

 

5. Click Save.

Deploying and Activating the Audit Vault Agent on Secured Target Hosts

1. Log in to the Audit Vault Server console as an administrator.

2. Click the Hosts tab, and then from the Hosts menu, click Agent.

3. Click “Download Agent” and save the agent.jar file to a location of your choice.

avdf004.png

 

4. Using an OS user account, copy the agent.jar file to the secured target's host computer.

5. On the host machine, set JAVA_HOME to the installation directory of the jdk1.6 (or higher version), and make sure the java executable corresponds to this JAVA_HOME setting.

avdf005.png

 

6. Start a command prompt with Run as Administrator. In the directory where you placed the agent.jar file, extract it by running:

java -jar agent.jar -d Agent_Home

avdf006.png

 

Request agent Activation

To request activation of the Audit Vault Agent:

1. On the secured target host computer, go to the following directory:

Agent_Home/bin

2. Agent_Home is the directory created when you extracted the agent.jar file above. Run the following command:

./agentctl activate

avdf007.png

Activate and Start the Agent

In this step, you approve the agent activation request in the Audit Vault Server, and then start the agent on the secured target host machine. To activate and start the agent:

1. Log in to the Audit Vault Server console as an administrator.

2. Click the Hosts tab.

3. Select the host you want to activate, and then click Activate.

avdf008.png

 

This will generate a new activation key under the Agent Activation Key column. You can only activate a host if you have completed the procedures in Step 1: Deploy the Audit Vault Agent on the Host Machine. Otherwise the Agent Activation Status for that host will be No Request.

4. Change directory as follows:

cd Agent_Home/bin

Agent_Home is the directory created when you extracted the agent.jar file above.

5. On the secured target host machine, run the following command and provide the activation key from Step 3:

./agentctl start -k key

Note: the -k argument is not needed after the initial agentctl start command.

avdf009.png

avdf010.png

Stopping and Starting the Audit Vault Agent

To stop or start the Audit Vault Agent after initial activation and start, run one of the following commands from the Agent_Home/bin directory on the secured target host machine:

./agentctl stop

./agentctl start

Changing the Logging Level for the Audit Vault Agent

The logging level you set affects the amount of information written to the log files. You may need to take this into account because of disk space limitations. The following logging levels are listed in order of the amount of information written to the log files, with debug providing the most information:

  • error - Writes only error messages
  • warn - (Default) Writes warning and error messages
  • info - Writes informational, warning, and error messages
  • debug - Writes detailed messages for debugging purposes

To change the logging level for an Audit Vault Agent:  

1. Ensure that you are logged into AVCLI on the Audit Vault Server.

2. Run the ALTER HOST command. The syntax is as follows:

ALTER HOST host_name SET LOGLEVEL=av.agent:log_level     

In this specification:

  •  host_name: The name of the host where the Audit Vault Agent is deployed.
  •  log_level: Enter a value of info, warn, debug, or error.

Registering Secured Targets

1. If you will collect audit data from a secured target, do stored procedure auditing (SPA), entitlements auditing, or enable database interrogation, create a user account on the secured target, with the appropriate privileges to allow Oracle AVDF to access the required data.     

Setup scripts: Scripts are available to configure user account privileges for these secured target types:

  • "Oracle Database Setup Scripts"

  • "Sybase ASE Setup Scripts"
  • "Microsoft SQL Server Setup Scripts"
  • "IBM DB2 for LUW Setup Scripts"
  • "MySQL Setup Scripts"
  • "Sybase SQL Anywhere Setup Scripts"     

Linux secured targets: Assign the Oracle AVDF user to the log_group parameter in the Linux /etc/audit/auditd.conf configuration file. This user must have execute permission on the folder that contains the audit.log file (default folder is /var/log/audit).

Other types of secured targets: You must create a user that has the appropriate privileges to access the audit trail required. For example, for a Windows secured target, this user must have administrative permissions in order to read the security log.

Note: Oracle AVDF does not accept user names with quotation marks. For example, "JSmith" would not be a valid user name for an Audit Vault and Database Firewall user account on secured targets.

avdf011.png

 

2. Log in to the Audit Vault Server console as an administrator.

3. Click the Secured Targets tab. The Secured Targets page lists the configured secured targets to which you have access. You can sort or filter the list of targets. See "Working With Lists of Objects in the UI".

4. Click Register, and in the Register Secured Target page, enter a name and description for the new target.

5. In the Secured Target Location field, enter the connect string for the secured target.      See "Secured Target Locations (Connect Strings)" for the connect string format for a specific secured target type. For example, for Oracle Database, the string might look like the following:     

jdbc:oracle:thin:@//HOSTNAME:PORT/SERVICE_NAME

6. In the Secured Target Type field, select the secured target type, for example, Oracle Database.

7. In the User Name, Password, and Re-enter Password fields, enter the credentials for the secured target user account you created in Step 1.

8. If you will monitor this secured target with a Database Firewall, in the Add Secured Target Addresses area, for each available connection of this database enter the following information, and then click Add.

  • IP Address (or Host Name)
  • Port Number
  • Service Name (Oracle Database only)

9. If required, enter values for Attribute Name and Attribute Value at the bottom of the page, and click Add. Collection attributes may be required by the Audit Vault Agent depending on the secured target type. See "Collection Attributes" to look up requirements for a specific secured target type.

10. If you will monitor this secured target with a Database Firewall, you can increase the processing resource for this secured target by adding the following Collection Attribute:

  • Attribute Name: MAXIMUM_ENFORCEMENT_POINT_THREADS
  • Attribute Value: A number between 1 - 16 (default is 1)     

This defines the maximum number of Database Firewall processes (1 - 16) that may be used for the enforcement point associated with this secured target. You should consider defining this if the number of secured targets you are monitoring is less than the number of processing cores available on the system running the Database Firewall. Setting a value when it is not appropriate wastes resources.

avdf012.png

 

11. Click Save.


Configuring an Audit Trail in the Audit Vault Server

In order to start collecting audit data, you must configure an audit trail for each secured target in the Audit Vault Server, and then start the audit trail collection manually. Before configuring an audit trail for any secured target, you must:

  • Add the secured target in the Audit Vault Server. See "Registering or Removing Secured Targets in the Audit Vault Server" for details.
  • Register the secured target host machine and deploy and activate the agent on that machine. See "Registering Hosts".

This procedure assumes that the Audit Vault Agent is installed on the same computer as the secured target.

To configure an audit trail for a secured target:

1. Log in to the Audit Vault Server console as an administrator.

2. Click the Secured Targets tab.

3. Under Monitoring, click Audit Trails. The Audit Trails page appears, listing the configured audit trails and their status.

4. In the Audit Trails page, click Add.

5. From the Collection Host drop-down list, select the host computer of the secured target.  

6. From the Secured Target Name drop-down list, select the secured target's name.  

7. From the Audit Trail Type drop-down list, select one of the following:

  • CUSTOM
  • DIRECTORY
  • EVENT LOG
  • NETWORK
  • SYSLOG
  • TABLE
  • TRANSACTION LOG     

See Table B-13 for details on which type(s) of audit trails can be collected for a specific secured target type, and "Data Collected for Each Audit Trail Type" for descriptions of data collected.  

8. In the Trail Location field, enter the location of the audit trail on the secured target computer, for example, sys.aud$.      The trail location depends on the type of secured target. See "Audit Trail Locations" for supported trail locations.      Note: If you selected DIRECTORY for Audit Trail Type, the Trail Location must be a directory mask.

9. If you have deployed plug-ins for this type of secured target, select the plug-in in the Collection Plug-in drop-down list. For more information on plug-ins, see "About Agent Plug-ins".

avdf013.png

 

10. Click Save.

Starting and Stopping Audit Trails in the Audit Vault Server

To start or stop audit trail collection for a secured target:

1. Log in to the Audit Vault Server console as an administrator.

2. Click the Secured Targets tab.

3. Click Audit Trails.

4. Select the audit trail(s) you want to start or stop, and then click Start or Stop.

avdf014.png

 

I do hope that this blog will help you deploy AVDF. Do not hesitate to post comments if you have any questions.