
Yann Neuhaus

All blog entries from http://www.dbi-services.com/

SQL Server 2016 CTP2: Stretch database feature - Part 2

Thu, 2015-06-18 02:55

In my previous blog SQL Server 2016 CTP2: Stretch database feature - Part 1, I explained how to prepare your instance and your database to enable the Stretch feature for your table.

Now I will show you how to enable the feature for your tables!

Prerequisites

Enabling Stretch Database at the table level requires ALTER permissions on this table.

 

Limitations

For the moment, in SQL Server 2016 CTP2, there are several important limitations which drastically reduce the scope of use of the Stretch Database feature.

These limitations include Table Properties, Data Types, Column Properties, Column Types and Constraints and Indexes.

You can find the detailed list in the following Microsoft article: Requirements and limitations for Stretch Database.

 

Enable Stretch for a Table

First, I create a new table that avoids all the limitations explained above. Here is the SQL script:

Use AdventureWorks2014;
CREATE TABLE Stretch_Table
(
    Column1 nvarchar(50),
    Column2 int,
    Column3 nchar(10)
)

If you have followed all the steps in my previous blog, you should be ready to enable the Stretch feature for your table!

As everything has been pre-configured, you just need to enable the feature for the targeted table.
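The screenshot below shows this in Management Studio. If you prefer T-SQL, here is a minimal sketch of the equivalent statement; the syntax below is the CTP2 one and may well change in later builds:

USE AdventureWorks2014;
GO
-- enable the Stretch feature for the table (CTP2 syntax)
ALTER TABLE Stretch_Table
    ENABLE REMOTE_DATA_ARCHIVE WITH (MIGRATION_STATE = ON);
GO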

enable_table_for_stretch.png

 

If we take a look at the SQL Database server in Azure, we are able to see the "Stretch_Table" table:

SQL_azure_20150618-090340_1.png

 

You can notice that a new column named "batchID" has been added to the original table. It is a non-nullable bigint which is incremented each time you insert a new row in your table: it is the PRIMARY KEY of the table.

Moreover, your table remains visible on your on-premise instance and you can continue your normal work.

 

Next step

Performing a backup of a database with the Stretch feature enabled will not include the data stored in Azure. It means you also need to perform a backup in Azure.

I will detail this part in my next blog.

ODA workshop at Arrow ECS

Wed, 2015-06-17 02:30
On the 16th and 17th of June, David Hueber, Georges Grey and I had the chance to attend the ODA hands-on workshop at Arrow ECS. Lead trainer Ruggero Citton (Oracle ODA Product Development) did the first day with plenty of theory and examples. On the second day we had the opportunity to play on a brand new ODA X5-2:

SQL Server 2014: Analysis, Migrate and Report (AMR) - a powerful In-Memory migration tool

Wed, 2015-06-17 02:28

An important new functionality of Microsoft SQL Server 2014 is the In-Memory OLTP engine, which enables you to load Tables and also Stored Procedures In-Memory for very fast response times.
The goal is not to load the whole database In-Memory, but just Tables with critical performance and Stored Procedures with complex logical calculations.

To identify which Tables or Stored Procedures will give you the best performance gain after migration, Microsoft SQL Server 2014 has introduced a new tool: Analysis, Migrate and Report (AMR).

This tool will collect statistics about Tables and Stored Procedures in order to analyze the current workload. It will give you advice on the migration benefits of the different Tables or Stored Procedures. It will also give you an overview of the time/work needed to push Tables or Stored Procedures In-Memory.

In the following article I will show you how to setup and use this Tool.

Configuration of the Management Data Warehouse

The AMR Tool is built into SQL Server Management Studio.
It consists of:

  • Reports which come from a Management Data Warehouse and give recommendations about tables and Stored procedures which could be migrated to In-Memory OLTP
  • Memory Optimization Advisor which will help you during the migration process of a disk table to a Memory Optimized table
  • Native Compilation Advisor which will help you migrate a Stored Procedure to a Natively Compiled Stored Procedure

The AMR Tool leverages the Management Data Warehouse and the Data Collector, with its new Transaction Performance Collection Sets, to gather information about workloads.

AMR will analyze the collected data and provide recommendations via reports.
First, we have to configure the Management Data Warehouse.

To start the configuration, open Management Studio, go to Object Explorer, then Management folder, and right-click on Data Collection. Then select Tasks and click on Configure Management Data Warehouse as shown below:

AMR_picture1.png

On the Configure Management Data Warehouse Storage screen, enter the server name and the database name where your Management Data Warehouse will be hosted. The AMR tool will collect, via its collection sets, data from three Dynamic Management Views every fifteen minutes and will save this data in the MDW database. Uploading the data will have minimal performance impact.

If you already have a database, enter its name. If not, click the New button to create a new one.
On the Map Logins and Users page, you can, if needed, map a user to administer, read, or write to the Management Data Warehouse database.
Verify the Management Data Warehouse configuration summary and proceed with the configuration.
When the configuration of the Management Data Warehouse has been successfully finalized, you should see the following screen:

 AMR_picture2.png

The Management Data Warehouse setup is finished.

Configuration of the Data collection

Be aware that the SQL Server Agent has to be started on the instance that will collect the data.
To collect data, we will enable the new Transaction Performance Collection set which is composed of two new collection sets:

  • Stored Procedure Usage Analysis: captures statistics about Stored Procedures which could be migrated to Natively Compiled Stored Procedures
  • Table Usage Analysis: captures information about disk-based tables for a future migration to Memory Optimized tables.

 In order to configure the Data Collection, go to Object Explorer, then Management folder, right-click on Data Collection, select Tasks, and click on Configure Data Collection, as shown below:

AMR_picture3.png

After having skipped the Welcome page, you have to select a server and a database name that will host the Management Data Warehouse.

Now, you need to select the data collector sets. In the wizard, check “Transaction Performance Collection Set” in the list of collection sets. This will collect statistics for transaction performance issues.

If the Management Data Warehouse is located on a different SQL Server instance from the data collector, and SQL Server Agent is not running under a domain account which has dc_admin permissions on the remote instance, you have to use a SQL Server Agent proxy.

AMR_picture4.png

After having performed the Data Collection configuration with success, you will have an enabled Data Collection which will collect information about all user databases.
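As a quick sanity check, assuming the standard data collector catalog views in msdb, you can verify that the collection sets are defined and running:

-- list the collection sets with their mode and running state
SELECT name, collection_mode, is_running
FROM msdb.dbo.syscollector_collection_sets;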

Via the SQL Server Agent Jobs folder, you are now able to see the new collection jobs used to collect data from your workloads, with names like collection_set_N_collection, and the jobs used to populate our new Management Data Warehouse database, with names like collection_set_N_upload.

It is also good to know that the upload jobs run every thirty minutes for Stored Procedure Usage Analysis (job: collection_set_5_upload) and every fifteen minutes for Table Usage Analysis (job: collection_set_6_upload). So if you want to speed up your upload, you can execute these jobs manually, as shown below.
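For example, a minimal sketch using msdb's sp_start_job (job names as listed above):

-- trigger the upload jobs manually instead of waiting for the schedule
EXEC msdb.dbo.sp_start_job @job_name = N'collection_set_5_upload'; -- Stored Procedure Usage Analysis
EXEC msdb.dbo.sp_start_job @job_name = N'collection_set_6_upload'; -- Table Usage Analysis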
 

Reports

To access recommendations based on collected information about all user databases on the workload server, you have to right-click on your Management Data Warehouse database, select Reports, then Management Data Warehouse, and finally Transaction Performance Analysis.

In the Transaction Performance Analysis Overview report, you can choose among three reports, depending on what you want or need:

  • Usage analysis for tables
  • Contention analysis for tables
  • Usage analysis for Stored procedures

AMR_picture5.png

Usage analysis for tables

This report, based on table usage, shows you the best candidate tables to be pushed In-Memory.
On the left side of the report you have the possibility to select the database and the number of tables you would like to see.
The central part is a chart with two axes:

  • The horizontal axis represents the work needed to migrate a table to In-Memory OLTP, from significant to minimal
  • The vertical axis represents the increasing gain you will get after having moved the table to In-Memory OLTP

The best part of this graph is the top right corner, which shows tables that can easily be migrated In-Memory while giving you the best performance gain.

AMR_picture6.png

When you click a table point on the graph, you will access a more detailed statistics report.

This report shows the access characteristics (lookup statistics, range scan statistics, etc.) and also the contention statistics (latch statistics, lock statistics, etc.) of the table concerned, over the period during which your instance's workload was monitored with the Transaction Performance Collection Set.

AMR_picture7.png

Contention analysis for tables

This report is based on table contention instead of usage. It shows you the best candidate tables to be migrated In-Memory.
As before, on the left side of the report you have the possibility to select the database and the number of tables you would like to see.
The central part is a chart with two axes:

  • The horizontal axis represents the work needed to migrate a table to In-Memory OLTP, from significant to minimal
  • The vertical axis represents the increasing gain you will get after having moved the table to In-Memory OLTP

As before, the best part of this graph is the top right corner, showing tables that can easily be migrated In-Memory while giving you the best performance gain.

AMR_picture8.png

As for the usage analysis report, you can also click a table name on the graph to see the statistics details of the table.

Usage analysis for Stored Procedures

This report contains the top candidate Stored Procedures for an In-Memory OLTP migration with regard to their usage. This report is based on Total CPU Time.

You also have the possibility to select the database and the number of stored procedures you would like to see.

AMR_picture9.png

If you want to see the usage statistics for a specific stored procedure, you can click on the blue bar. You will then have a more detailed report.

AMR_picture10.png

Now you know which Tables and Stored Procedures will give you the best performance gain after a migration to In-Memory OLTP.
AMR provides two Advisors which will help you manage the transformation of your disk tables to Memory Optimized Tables, as well as of your Stored Procedures to Natively Compiled Stored Procedures. To learn more about those Advisors, please have a look at my blog.

DOAG Database 2015

Wed, 2015-06-17 00:17

It was my first time at DOAG Datenbank in Düsseldorf.

What is more efficient: arrays or single columns values? - oracle

Mon, 2015-06-15 06:38
In the last post on this topic it turned out that using an array as a column type needs more space than using a column per value in PostgreSQL. Now I'll do the same test case in Oracle.

What is more efficient: arrays or single column values?

Mon, 2015-06-15 05:59
In PostgreSQL (as well as in other RDBMS) you can define columns as arrays. What I wondered is: what is more efficient when it comes to space, creating several columns or just creating one column as an array? The result, at least for me, is rather surprising.
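To give an idea of the two designs compared here, a minimal sketch (the table names are hypothetical):

-- one column per value
create table t_columns ( v1 int, v2 int, v3 int );
-- one array column holding the same values
create table t_array ( v int[] );
insert into t_array values ( '{1,2,3}' );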

SQL Interpolation with psql

Mon, 2015-06-15 04:45

The PostgreSQL psql utility provides some really nice features. One of these features is SQL interpolation, which allows us to do interesting things, e.g. reading files and analyzing the results directly in the database. This post will show how to use this by reading and analyzing sar files of a Linux server.
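As a small taste of the feature (the variable name and file name are just examples), a psql variable set with \set can be interpolated into SQL:

-- define a psql variable and interpolate it into a query
\set sarfile 'sar_server1.txt'
select :'sarfile' as file_to_analyze;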

A free PostgreSQL cloud database?

Sun, 2015-06-14 04:10
Recently I was looking for a free PostgreSQL cloud database service for testing. Why? Because I'd like to use such a cloud instance for testing no matter which workstation or OS I am currently on. Another reason is that I could prepare some demos at home and use the same demos at work without needing to worry about taking the database with me each time.

PostgreSQL portable?

Fri, 2015-06-12 05:01
Initially I was looking for a free PostgreSQL cloud database which I can use from any workstation and any OS in case I need to do some tests or demos and do not have a PostgreSQL database available. In fact I found a free cloud service for PostgreSQL (there are several; more on that in another post), but then I needed a client, as I did not want to work with the tools the service provider listed on the website. So I did a search on "portable psql" as I was on a Windows machine:

Show the Top 10 events from latest AWR snapshot

Fri, 2015-06-12 02:17

Want to display a specific section from the latest AWR report? I'll share the script I use to get something like that:

SQL> @ LASTAWRSEC.sql 'Top 10 Foreground Events by Total Wait Time' ''

AWR_REPORT_TEXT
--------------------------------------------------------------------------------
Top 10 Foreground Events by Total Wait Time
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                                           Total Wait       Wait   % DB Wait
Event                                Waits Time (sec)    Avg(ms)   time Class
------------------------------ ----------- ---------- ---------- ------ --------
direct path write temp               7,146        8.1       1.14   67.3 User I/O
DB CPU                                            4.6              38.0
direct path read                     1,102         .3       0.32    2.9 User I/O
enq: KO - fast object checkpoi           1         .2     228.12    1.9 Applicat
db file sequential read                140          0       0.15     .2 User I/O
Disk file operations I/O                 4          0       0.66     .0 User I/O
log file sync                            2          0       0.68     .0 Commit
reliable message                         1          0       0.94     .0 Other
control file sequential read           122          0       0.01     .0 System I
SQL*Net break/reset to client            2          0       0.25     .0 Applicat

The script

The LASTAWRSEC.sql script used here gets the latest snapshot from DBA_HIST_SNAPSHOT for the current instance and calls DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_TEXT to get the report. It then adds a column with the section name (I've listed the sections available in 12.1.0.2 - you may have to modify that list). The script accepts two parameters:

  • &1 is a regexp to apply to the section name
  • &2 is a regexp applied to the report line (can be the empty string)
and they are combined with an 'or'. I use that to get the Top events section and a few instance statistics in addition to that.

So here is the script:

with snap as (
  select * from (
    select dbid,lead(snap_id)over(partition by instance_number order by end_interval_time desc) bid,snap_id eid,row_number() over(order by end_interval_time desc) n
    from dba_hist_snapshot where dbid=(select dbid from v$database)
  ) where n=1
),
awr as (
        select rownum line,output
        from table(
                dbms_workload_repository.awr_report_text(l_dbid=>(select dbid from snap),l_inst_num=>(select instance_number from v$instance),l_bid=>(select bid from snap),l_eid=>(select eid from snap),l_options=>1+4+8)
        )
),
awr_sections as (
        select
         last_value(case when regexp_replace(output,' *DB/Inst.*$') in (''
        ,'Database Summary'
        ,'Database Instances Included In Report'
        ,'Top Event P1/P2/P3 Values'
        ,'Top SQL with Top Events'
        ,'Top SQL with Top Row Sources'
        ,'Top Sessions'
        ,'Top Blocking Sessions'
        ,'Top PL/SQL Procedures'
        ,'Top Events'
        ,'Top DB Objects'
        ,'Activity Over Time'
        ,'Wait Event Histogram Detail (64 msec to 2 sec)'
        ,'Wait Event Histogram Detail (4 sec to 2 min)'
        ,'Wait Event Histogram Detail (4 min to 1 hr)'
        ,'SQL ordered by Elapsed Time'
        ,'SQL ordered by CPU Time'
        ,'SQL ordered by User I/O Wait Time'
        ,'SQL ordered by Gets'
        ,'SQL ordered by Reads'
        ,'SQL ordered by Physical Reads (UnOptimized)'
        ,'SQL ordered by Optimized Reads'
        ,'SQL ordered by Executions'
        ,'SQL ordered by Parse Calls'
        ,'SQL ordered by Sharable Memory'
        ,'SQL ordered by Version Count'
        ,'SQL ordered by Cluster Wait Time'
        ,'Key Instance Activity Stats'
        ,'Instance Activity Stats'
        ,'IOStat by Function summary'
        ,'IOStat by Filetype summary'
        ,'IOStat by Function/Filetype summary'
        ,'Tablespace IO Stats'
        ,'File IO Stats'
        ,'Checkpoint Activity'
        ,'MTTR Advisory'
        ,'Segments by Logical Reads'
        ,'Segments by Physical Reads'
        ,'Segments by Direct Physical Reads'
        ,'Segments by Physical Read Requests'
        ,'Segments by UnOptimized Reads'
        ,'Segments by Optimized Reads'
        ,'Segments by Physical Write Requests'
        ,'Segments by Physical Writes'
        ,'Segments by Direct Physical Writes'
        ,'Segments by DB Blocks Changes'
        ,'Segments by Table Scans'
        ,'Segments by Row Lock Waits'
        ,'Segments by ITL Waits'
        ,'Segments by Buffer Busy Waits'
        ,'Segments by Global Cache Buffer Busy'
        ,'Segments by CR Blocks Received'
        ,'Segments by Current Blocks Received'
        ,'In-Memory Segments by Scans'
        ,'In-Memory Segments by DB Block Changes'
        ,'In-Memory Segments by Populate CUs'
        ,'In-Memory Segments by Repopulate CUs'
        ,'Interconnect Device Statistics'
        ,'Dynamic Remastering Stats'
        ,'Resource Manager Plan Statistics'
        ,'Resource Manager Consumer Group Statistics'
        ,'Replication System Resource Usage'
        ,'Replication SGA Usage'
        ,'GoldenGate Capture'
        ,'GoldenGate Capture Rate'
        ,'GoldenGate Apply Reader'
        ,'GoldenGate Apply Coordinator'
        ,'GoldenGate Apply Server'
        ,'GoldenGate Apply Coordinator Rate'
        ,'GoldenGate Apply Reader and Server Rate'
        ,'XStream Capture'
        ,'XStream Capture Rate'
        ,'XStream Apply Reader'
        ,'XStream Apply Coordinator'
        ,'XStream Apply Server'
        ,'XStream Apply Coordinator Rate'
        ,'XStream Apply Reader and Server Rate'
        ,'Table Statistics by DML Operations'
        ,'Table Statistics by Conflict Resolutions'
        ,'Replication Large Transaction Statistics'
        ,'Replication Long Running Transaction Statistics'
        ,'Streams Capture'
        ,'Streams Capture Rate'
        ,'Streams Apply'
        ,'Streams Apply Rate'
        ,'Buffered Queues'
        ,'Buffered Queue Subscribers'
        ,'Persistent Queues'
        ,'Persistent Queues Rate'
        ,'Persistent Queue Subscribers'
        ,'Rule Set'
        ,'Shared Servers Activity'
        ,'Shared Servers Rates'
        ,'Shared Servers Utilization'
        ,'Shared Servers Common Queue'
        ,'Shared Servers Dispatchers'
        ,'init.ora Parameters'
        ,'init.ora Multi-Valued Parameters'
        ,'Cluster Interconnect'
        ,'Wait Classes by Total Wait Time'
        ,'Top 10 Foreground Events by Total Wait Time'
        ,'Top ADDM Findings by Average Active Sessions'
        ,'Cache Sizes'
        ,'Host Configuration Comparison'
        ,'Top Timed Events'
        ,'Top SQL Comparison by Elapsed Time'
        ,'Top SQL Comparison by I/O Time'
        ,'Top SQL Comparison by CPU Time'
        ,'Top SQL Comparison by Buffer Gets'
        ,'Top SQL Comparison by Physical Reads'
        ,'Top SQL Comparison by UnOptimized Read Requests'
        ,'Top SQL Comparison by Optimized Reads'
        ,'Top SQL Comparison by Executions'
        ,'Top SQL Comparison by Parse Calls'
        ,'Top SQL Comparison by Cluster Wait Time'
        ,'Top SQL Comparison by Sharable Memory'
        ,'Top SQL Comparison by Version Count'
        ,'Top Segments Comparison by Logical Reads'
        ,'Top Segments Comparison by Physical Reads'
        ,'Top Segments Comparison by Direct Physical Reads'
        ,'Top Segments Comparison by Physical Read Requests'
        ,'Top Segments Comparison by Optimized Read Requests'
        ,'Top Segments Comparison by Physical Write Requests'
        ,'Top Segments Comparison by Physical Writes'
        ,'Top Segments Comparison by Table Scans'
        ,'Top Segments Comparison by DB Block Changes'
        ,'Top Segments by Buffer Busy Waits'
        ,'Top Segments by Row Lock Waits'
        ,'Top Segments by ITL Waits'
        ,'Top Segments by CR Blocks Received'
        ,'Top Segments by Current Blocks Received'
        ,'Top Segments by GC Buffer Busy Waits'
        ,'Top In-Memory Segments Comparison by Scans'
        ,'Top In-Memory Segments Comparison by DB Block Changes'
        ,'Top In-Memory Segments Comparison by Populate CUs'
        ,'Top In-Memory Segments Comparison by Repopulate CUs'
        ,'Service Statistics'
        ,'Service Statistics (RAC)'
        ,'Global Messaging Statistics'
        ,'Global CR Served Stats'
        ,'Global CURRENT Served Stats'
        ,'Replication System Resource Usage'
        ,'Replication SGA Usage'
        ,'Streams by CPU Time'
        ,'GoldenGate Capture'
        ,'GoldenGate Capture Rate'
        ,'GoldenGate Apply Coordinator'
        ,'GoldenGate Apply Reader'
        ,'GoldenGate Apply Server'
        ,'GoldenGate Apply Coordinator Rate'
        ,'GoldenGate Apply Reader and Server Rate'
        ,'XStream Capture'
        ,'XStream Capture Rate'
        ,'XStream Apply Coordinator'
        ,'XStream Apply Reader'
        ,'XStream Apply Server'
        ,'XStream Apply Coordinator Rate'
        ,'XStream Apply Reader and Server Rate'
        ,'Table Statistics by DML Operations'
        ,'Table Statistics by Conflict Resolutions'
        ,'Replication Large Transaction Statistics'
        ,'Replication Long Running Transaction Statistics'
        ,'Streams by IO Time'
        ,'Streams Capture'
        ,'Streams Capture Rate'
        ,'Streams Apply'
        ,'Streams Apply Rate'
        ,'Buffered Queues'
        ,'Rule Set by Evaluations'
        ,'Rule Set by Elapsed Time'
        ,'Persistent Queues'
        ,'Persistent Queues Rate'
        ,'IOStat by Function - Data Rate per Second'
        ,'IOStat by Function - Requests per Second'
        ,'IOStat by File Type - Data Rate per Second'
        ,'IOStat by File Type - Requests per Second'
        ,'Tablespace IO Stats'
        ,'Top File Comparison by IO'
        ,'Top File Comparison by Read Time'
        ,'Top File Comparison by Buffer Waits'
        ,'Key Instance Activity Stats'
        ,'Other Instance Activity Stats'
        ,'Enqueue Activity'
        ,'Buffer Wait Statistics'
        ,'Dynamic Remastering Stats'
        ,'Library Cache Activity'
        ,'Library Cache Activity (RAC)'
        ,'init.ora Parameters'
        ,'init.ora Multi-Valued Parameters'
        ,'Buffered Subscribers'
        ,'Persistent Queue Subscribers'
        ,'Shared Servers Activity'
        ,'Shared Servers Rates'
        ,'Shared Servers Utilization'
        ,'Shared Servers Common Queue'
        ,'Shared Servers Dispatchers'
        ,'Database Summary'
        ,'Database Instances Included In Report'
        ,'Top ADDM Findings by Average Active Sessions'
        ,'Cache Sizes'
        ,'OS Statistics By Instance'
        ,'Foreground Wait Classes -  % of Total DB time'
        ,'Foreground Wait Classes'
        ,'Foreground Wait Classes -  % of DB time '
        ,'Time Model'
        ,'Time Model - % of DB time'
        ,'System Statistics'
        ,'System Statistics - Per Second'
        ,'System Statistics - Per Transaction'
        ,'Global Cache Efficiency Percentages'
        ,'Global Cache and Enqueue Workload Characteristics'
        ,'Global Cache and Enqueue Messaging Statistics'
        ,'SysStat and Global Messaging  - RAC'
        ,'SysStat and  Global Messaging (per Sec)- RAC'
        ,'SysStat and Global Messaging (per Tx)- RAC'
        ,'CR Blocks Served Statistics'
        ,'Current Blocks Served Statistics'
        ,'Global Cache Transfer Stats'
        ,'Global Cache Transfer (Immediate)'
        ,'Cluster Interconnect'
        ,'Interconnect Client Statistics'
        ,'Interconnect Client Statistics (per Second)'
        ,'Interconnect Device Statistics'
        ,'Interconnect Device Statistics (per Second)'
        ,'Ping Statistics'
        ,'Top Timed Events'
        ,'Top Timed Foreground Events'
        ,'Top Timed Background Events'
        ,'Resource Manager Plan Statistics'
        ,'Resource Manager Consumer Group Statistics'
        ,'SQL ordered by Elapsed Time (Global)'
        ,'SQL ordered by CPU Time (Global)'
        ,'SQL ordered by User I/O Time (Global)'
        ,'SQL ordered by Gets (Global)'
        ,'SQL ordered by Reads (Global)'
        ,'SQL ordered by UnOptimized Read Requests (Global)'
        ,'SQL ordered by Optimized Reads (Global)'
        ,'SQL ordered by Cluster Wait Time (Global)'
        ,'SQL ordered by Executions (Global)'
        ,'IOStat by Function (per Second)'
        ,'IOStat by File Type (per Second)'
        ,'Segment Statistics (Global)'
        ,'Library Cache Activity'
        ,'System Statistics (Global)'
        ,'Global Messaging Statistics (Global)'
        ,'System Statistics (Absolute Values)'
        ,'PGA Aggregate Target Statistics'
        ,'Process Memory Summary'
        ,'init.ora Parameters'
        ,'init.ora Multi-valued Parameters'
        ,'Database Summary'
        ,'Database Instances Included In Report'
        ,'Time Model Statistics'
        ,'Operating System Statistics'
        ,'Host Utilization Percentages'
        ,'Global Cache Load Profile'
        ,'Wait Classes'
        ,'Wait Events'
        ,'Cache Sizes'
        ,'PGA Aggr Target Stats'
        ,'init.ora Parameters'
        ,'init.ora Multi-valued Parameters'
        ,'Global Cache Transfer Stats'
        ,' Exadata Storage Server Model'
        ,' Exadata Storage Server Version'
        ,' Exadata Storage Information'
        ,' Exadata Griddisks'
        ,' Exadata Celldisks'
        ,' ASM Diskgroups'
        ,' Exadata Non-Online Disks'
        ,' Exadata Alerts Summary'
        ,' Exadata Alerts Detail'
        ,'Exadata Statistics'
) then output end ) ignore nulls over(order by line) section
        ,output
        from awr
)
select output AWR_REPORT_TEXT from awr_sections where regexp_like(section,'&1') or regexp_like(output,'&2')
/

If you have any improvement or modification, please don't hesitate to comment.

Parallel DML in 12c

Thu, 2015-06-11 12:48

Following a question from Randolf Geist (who can imagine that there is something about parallel query that Randolf doesn't know?), I got back to some notes I had taken when 12c came out and tested them again on 12.1.0.2 - it's about the ability to enable parallel DML at the query level.

Test case

In 12.1.0.2 I create two tables. DEMO1 has 1 million rows and is about 80MB. DEMO2 is empty. DEMO1 is parallel.

SQL> create table DEMO1 parallel 2 as
  2  select rownum id , ora_hash(rownum,10) a from xmltable('1 to 1000000');

Table created.

SQL> select table_name,num_rows,blocks from user_tables where table_name='DEMO';

TABLE_NAME   NUM_ROWS     BLOCKS
---------- ---------- ----------
DEMO           100000      10143

SQL>
SQL> create table DEMO2 as select * from DEMO1 where null is not null;

Table created.

SQL>
SQL> alter session set statistics_level=all;

Session altered.

insert ... select ...

Here is a simple insert as select:

SQL> insert into DEMO2 select * from DEMO1;

1000000 rows created.

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
SQL_ID  bx27xdnkr7dvw, child number 0
-------------------------------------
insert into DEMO2 select * from DEMO1

Plan hash value: 4271246053

-----------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |          |      1 |        |      0 |00:00:17.40 |   24311 |
|   1 |  LOAD TABLE CONVENTIONAL | DEMO2    |      1 |        |      0 |00:00:17.40 |   24311 |
|   2 |   PX COORDINATOR         |          |      1 |        |   1000K|00:00:04.49 |       5 |
|   3 |    PX SEND QC (RANDOM)   | :TQ10000 |      0 |   1000K|      0 |00:00:00.01 |       0 |
|   4 |     PX BLOCK ITERATOR    |          |      0 |   1000K|      0 |00:00:00.01 |       0 |
|*  5 |      TABLE ACCESS FULL   | DEMO1    |      0 |   1000K|      0 |00:00:00.01 |       0 |
-----------------------------------------------------------------------------------------------

Note                                                                                                                                
-----
   - Degree of Parallelism is 2 because of table property
   - PDML is disabled in current session

The select part is done in parallel (it's below the coordinator) but the insert part (LOAD TABLE) is above the coordinator, which means that it is done serially by the coordinator. In 12.1.0.2 you have no doubt: dbms_xplan has a note to tell you that PDML was not used, and it gives the reason: it's not enabled in the session.

When you have the Tuning Pack, parallel queries are monitored by default, so we can get the SQL Monitoring plan. You can get it as text, HTML or Flash, but I'll use Lighty here as I find it awesome for that as well:

PDML1.png

Look at the bottom right, which details the highlighted plan line: 100% of the load has been done by my session process.

Enable parallel dml

So we need to enable parallel DML for our session. Do you know why? Because inserting in parallel requires locking the table (or partition) it is inserted into, so the optimizer cannot decide that without our permission. So let's enable parallel DML:

SQL> alter session enable parallel dml;
ERROR:
ORA-12841: Cannot alter the session parallel DML state within a transaction

I cannot do that here because I have a transaction in progress. But in 12c you can also enable parallel DML at the query level, with the ENABLE_PARALLEL_DML hint. I had seen it when 12c came out, but it was undocumented. I discovered today that it is documented in the Enable Parallel DML Mode section of the Database VLDB and Partitioning Guide.

With the DISABLE_PARALLEL_DML hint you can disable PDML at the query level when you have enabled it in the session. And with the ENABLE_PARALLEL_DML hint you can enable PDML for one query even when it's not enabled in the session. And you can do that even if you have a transaction in progress:

SQL> insert /*+ enable_parallel_dml */ into DEMO2 select * from DEMO1;

1000000 rows created.

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------
SQL_ID  707bk8y125hp4, child number 0
-------------------------------------
insert /*+ enable_parallel_dml */ into DEMO2 select * from DEMO1

Plan hash value: 4271246053

-----------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT         |          |      1 |        |      0 |00:00:18.76 |   22343 |
|   1 |  LOAD TABLE CONVENTIONAL | DEMO2    |      1 |        |      0 |00:00:18.76 |   22343 |
|   2 |   PX COORDINATOR         |          |      1 |        |   1000K|00:00:04.22 |       5 |
|   3 |    PX SEND QC (RANDOM)   | :TQ10000 |      0 |   1000K|      0 |00:00:00.01 |       0 |
|   4 |     PX BLOCK ITERATOR    |          |      0 |   1000K|      0 |00:00:00.01 |       0 |
|*  5 |      TABLE ACCESS FULL   | DEMO1    |      0 |   1000K|      0 |00:00:00.01 |       0 |
-----------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 2 because of table property
   - PDML disabled because object is not decorated with parallel clause
   - Direct Load disabled because no append hint given and not executing in parallel

Ok, I've enabled PDML, but PDML occurs only when running in parallel. Here the table has no parallel degree and there is no PARALLEL hint. Once again dbms_xplan gives us the reason. And because it's not PDML and there is no APPEND hint, it's not loaded in direct-path.

Here is the SQL Monitoring plan. Note that it is the same as the previous one, except that the cost differs. I don't know why yet. If you have any idea, please comment.

PDML2.png

Enable parallel DML while in a transaction

I disable PDML and start a transaction:

SQL> commit;

Commit complete.

SQL> alter session disable parallel dml;

Session altered.

SQL> delete from DEMO1 where rownum<=1000;

1000 rows deleted.

SQL> select status,used_urec from v$transaction where ses_addr=(select saddr from v$session where sid=sys_context('userenv','sid'));

STATUS            USED_UREC
---------------- ----------
ACTIVE                 1000

And while I'm within that transaction, let's do the parallel insert enabled by the hint:

SQL> insert /*+ parallel enable_parallel_dml */ into DEMO2 select * from DEMO1;

999000 rows created.

SQL> select * from table(dbms_xplan.display_cursor(format=>'allstats last'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------
SQL_ID  2b8q4k902pbdx, child number 1
-------------------------------------
insert /*+ parallel enable_parallel_dml */ into DEMO2 select * from DEMO1

Plan hash value: 86785878

-------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name | Starts | A-Rows | Buffers | OMem |1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |      |      1 |      4 |     136 |      |     |          |
|   1 |  PX COORDINATOR                    |      |      1 |      4 |     136 |      |     |          |
|   2 |   PX SEND QC (RANDOM)              | :TQ10|      0 |      0 |       0 |      |     |          |
|   3 |    LOAD AS SELECT (HYBRID TSM/HWMB)|      |      0 |      0 |       0 |   33M|  33M| 2068K (0)|
|   4 |     OPTIMIZER STATISTICS GATHERING |      |      0 |      0 |       0 |      |     |          |
|   5 |      PX BLOCK ITERATOR             |      |      0 |      0 |       0 |      |     |          |
|*  6 |       TABLE ACCESS FULL            | DEMO1|      0 |      0 |       0 |      |     |          |
-------------------------------------------------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2

Here PDML occurred. We know that because of the load operator under the coordinator (note to self: HYBRID TSM/HWMB is something to investigate - once again, comments welcome). I've displayed the plan with 'allstats last', which shows only the coordinator activity. SQL Monitor can show all:

PDML5.png

Conclusion

Yes, you can enable PDML at the query level in 12c and it is documented. And you can do it even within a transaction, which is a restriction only for 'alter session enable parallel dml' but not for the hint.

SQL Server 2016 : availability groups and automatic failover enhancements

Thu, 2015-06-11 11:07

Let’s continue with this study of the new availability group enhancements. Others studies are available here:

This time we’ll talk about the possibility to enrol a third replica for automatic failover. It implies of course to configure synchronous replication between the 2 pairs of replicas and this is at the cost of degrading the overall performance. But it seems that in this area we can expect to have also some improvements. So maybe another future study.

First of all, my feeling is that this enhancement will be very interesting in terms of availability, but unfortunately introducing a third replica will not be affordable for some customers in terms of budget. The final package is surely not yet defined, and that could lead me to draw conclusions based on inaccurate information. So let's focus only on the technical aspects of this feature for the moment:

I have included a third replica (SQL163) to my existing availability group 2016Grp:

 

blog_51_-_1-_aag_config

 

In parallel, my cluster quorum is configured as follows:

 

blog_51_-_2-_clust_config

 

...

 

blog_51_-_3-_quorum_config

 

...

 

blog_51_-_4-_quorum_config

 

Basically, this is a Windows failover cluster, CLUST-2021, on a single subnet that includes three nodes (SQL161, SQL162 and SQL163) and is configured to use a file share witness as well as the dynamic quorum capability.

I simulated a lot of failure tests in my lab environment (shutdown of a replica, power-off of a replica, loss of a database file, disabling the network cards, and so on) and the automatic failover on the two pairs of replicas was successful in each case. However, this raised the following question: which secondary replica will be chosen by the system? I didn't see a configuration setting that controls the "failover priority order list" and I believe it would be a good addition here. After performing other tests and after discussing with some other MVPs like Christophe Laporte (@Conseilit), I noticed that the failover order seems to be related to the order of the preferred owners of the related availability group cluster role. Moreover, according to this very interesting article from Vijay Rodrigues, this order is set and changed dynamically by SQL Server itself, so changing the order directly from the cluster seems to be a very bad idea. Next, I decided to configure the order at the creation step of the availability group, during the adding operation of the replicas, and it seems to be the right solution.

To illustrate this point, here is the initial configuration I wanted to achieve:

 

  • SQL161 primary replica
  • SQL163 secondary replica (first failover partner)
  • SQL162 secondary replica (second failover partner)

After adding the replicas in this order from the wizard, here is the order inherited by the related cluster role:

 

blog_51_-_7-_preferred_owner_list

 

This order seems to be preserved according to the current context of the availability group.

The same test with a different order like:

  • SQL163 primary replica
  • SQL162 secondary replica (first failover partner)
  • SQL161 secondary replica (second failover partner)

… gives us a different result, and once again this order is preserved regardless of context changes:

 

blog_51_-_8-_preferred_owner_list

 

This idea of controlling the failover replica order comes from a specific scenario where you may have two secondary replicas across multiple sites. You may decide to fail over first to the secondary replica in the same datacenter, and then to the one located on the remote site.

 

blog_51_-_7-_preferred_owner_list_

 

 

But wait a minute… do you see the weakness in the above architecture? Let's deal with the node weights (in red). You may notice that you will have to introduce another replica in order to avoid losing the quorum in case of a datacenter 1 failure. Indeed, you won't get the majority with the current architecture if nothing remains but the file share witness and the replica on datacenter 2. So the new architecture may be the following:

 

blog_51_-_6-_archi

 

In this case we may or may not decide to use this additional node as a SQL Server replica on datacenter 2, but it is at least mandatory in the global cluster architecture to provide automatic failover capability for the availability group layer in case of a datacenter 1 failure. This is why I said earlier that introducing this new availability group capability may not be affordable for all customers, assuming that this additional replica must be licensed.

See you

SQL Server 2016: native support for JSON

Thu, 2015-06-11 10:00

A lot of discussion and, most importantly, a feature request on the Microsoft Connect site here with more than 1000 votes are at the origin of this new feature in SQL Server 2016.

 

A lot of NoSQL databases already have this functionality, and PostgreSQL, for example, has the json_extract_path_text function; you can always ask my colleague Daniel Westermann, one of our PostgreSQL experts, about this subject.
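As a first taste, here is a minimal example of the FOR JSON clause announced for SQL Server 2016; the exact syntax may still evolve between CTPs:

-- return a result set formatted as JSON
SELECT name, database_id
FROM sys.databases
FOR JSON AUTO;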

using dbms_server_alert in combination with a custom monitoring solution

Thu, 2015-06-11 06:35
Lots of companies do not use Grid or Cloud Control for monitoring their Oracle databases, for various reasons, but rather use open source tools like Nagios. And lots of those either implemented custom script frameworks or rely on plug-ins available for the monitoring solution of choice. This post shows how you can let Oracle check a lot of its own metrics so that you only need one script for alerting on various metrics.
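As an illustration of the idea, here is a minimal sketch that lets the server itself watch the 'tablespace percent full' metric via dbms_server_alert (the thresholds and tablespace name are arbitrary examples):

-- raise a server alert when the USERS tablespace exceeds
-- 85% (warning) or 95% (critical)
begin
  dbms_server_alert.set_threshold(
    metrics_id              => dbms_server_alert.tablespace_pct_full,
    warning_operator        => dbms_server_alert.operator_ge,
    warning_value           => '85',
    critical_operator       => dbms_server_alert.operator_ge,
    critical_value          => '95',
    observation_period      => 1,
    consecutive_occurrences => 1,
    instance_name           => null,
    object_type             => dbms_server_alert.object_type_tablespace,
    object_name             => 'USERS');
end;
/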

draft release notes for PostgreSQL 9.5 online

Thu, 2015-06-11 00:02
Bruce Momjian, one of the PostgreSQL core members, just compiled the first draft version of the release notes for the upcoming PostgreSQL 9.5.

Some of the goodies will show up there. If you want to test any of the new features, check this post.

Can you have pending system statistics?

Wed, 2015-06-10 09:08

Your system statistics seem to be wrong and you want to gather or set more relevant ones. But you don't want to see all your application execution plans switching between nested loops and hash joins. For object statistics, we can gather statistics in pending mode, test them in a few sessions, and publish them when we are happy with them. But can you do the same for system statistics? It can be risky to try, so I've done it for you in my lab.
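As a reminder, here is what the pending workflow looks like for object statistics (a minimal sketch on a hypothetical table T):

-- gather into pending stats instead of publishing them
exec dbms_stats.set_table_prefs(user,'T','PUBLISH','FALSE');
exec dbms_stats.gather_table_stats(user,'T');
-- test them in a session
alter session set optimizer_use_pending_statistics=true;
-- publish them when satisfied
exec dbms_stats.publish_pending_stats(user,'T');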

Test case in 11g

 

SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production

SQL> create table DEMO as
           select rownum id , ora_hash(rownum,10) a , ora_hash(rownum,10) b , lpad('x',650,'x') c 
           from xmltable('1 to 100000');

Table created.

Here are my system statistics:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2719
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 08:11
                     SYSSTATS_INFO    DSTOP                 06-10-2015 08:11
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

I check a full table scan cost:

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO1;

Execution Plan
----------------------------------------------------------
Plan hash value: 4000794843

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 88550 |    30M|  2752   (1)| 00:00:34 |
|   1 |  TABLE ACCESS FULL| DEMO | 88550 |    30M|  2752   (1)| 00:00:34 |
--------------------------------------------------------------------------

No surprise here. I have 10000 blocks in my table, SREADTIM = IOSEEKTIM + db_block_size / IOTFRSPEED = 12 ms and MREADTIM = IOSEEKTIM + db_block_size * MBRC / IOTFRSPEED = 26 ms. Then the cost, based on an MBRC of 8, is ( 26 * 10000 / 8 ) / 12 ≈ 2700.

 

Pending stats in 11g

I set 'PUBLISH' to false in order to have pending statistics:

SQL> exec dbms_stats.SET_GLOBAL_PREFS('PUBLISH', 'FALSE') ;

PL/SQL procedure successfully completed.

Then I set some system statistics manually to simulate a fast storage:

17:14:38 SQL> exec dbms_stats.set_system_stats('IOSEEKTIM',1);

PL/SQL procedure successfully completed.

17:14:38 SQL> exec dbms_stats.set_system_stats('IOTFRSPEED','204800');

PL/SQL procedure successfully completed.

and I run the same explain plan:

Execution Plan
----------------------------------------------------------
Plan hash value: 4000794843

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 88550 |    30M|  1643   (2)| 00:00:02 |
|   1 |  TABLE ACCESS FULL| DEMO | 88550 |    30M|  1643   (2)| 00:00:02 |
--------------------------------------------------------------------------

The cost is better. But I'm not using pending statistics, which means that the published stats have been changed - despite the PUBLISH global preference being set to FALSE:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2719
                     SYSSTATS_MAIN    IOSEEKTIM           1
                     SYSSTATS_MAIN    IOTFRSPEED     204800
                     SYSSTATS_INFO    DSTART                06-10-2015 08:14
                     SYSSTATS_INFO    DSTOP                 06-10-2015 08:14
                     SYSSTATS_INFO    FLAGS               1
                     SYSSTATS_INFO    STATUS                COMPLETED

As you can see, SYS.AUX_STATS$ shows my modified values (note that the date/time did not change, by the way). So be careful: when you set, gather, or delete system statistics in 11g, you don't have the pending/publish mechanism. It's the kind of change that may have a wide impact, changing all your execution plans.

 

With the values I've set, SREADTIM is near 1 ms and MREADTIM is about 1.3 ms, so the cost is ( 1.3 * 10000 / 8 ) / 1 = 1625, which is roughly what has been calculated by the CBO on my new not-so-pending statistics.

12c

If you look at 12c you will see new procedures in dbms_stats which suggest that you can have pending system statistics:

SQL> select banner from v$version where rownum=1;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

SQL> select procedure_name from dba_procedures where object_name='DBMS_STATS' and procedure_name like '%PENDING_SYSTEM_STATS';

PROCEDURE_NAME
--------------------------------------------
DELETE_PENDING_SYSTEM_STATS
EXPORT_PENDING_SYSTEM_STATS
PUBLISH_PENDING_SYSTEM_STATS

but be careful, they are not documented. Let's try it anyway. I start as I did above, with a demo table and default statistics:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2725
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 17:25
                     SYSSTATS_INFO    DSTOP                 06-10-2015 17:25
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO1;

Execution Plan
----------------------------------------------------------
Plan hash value: 4000794843

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  2752   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  2752   (1)| 00:00:01 |
--------------------------------------------------------------------------

I set PUBLISH to false and set manual system stats:

SQL> exec dbms_stats.SET_GLOBAL_PREFS('PUBLISH', 'FALSE') ;

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_system_stats('IOSEEKTIM',1);

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.set_system_stats('IOTFRSPEED','204800');

PL/SQL procedure successfully completed.

and I check the SYS.AUX_STATS$ table:

SQL> select '' savtime,sname,pname,pval1,pval2 from sys.aux_stats$ where pval1 is not null or pval2 is not null order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
                     SYSSTATS_MAIN    CPUSPEEDNW       2725
                     SYSSTATS_MAIN    IOSEEKTIM          10
                     SYSSTATS_MAIN    IOTFRSPEED       4096
                     SYSSTATS_INFO    DSTART                06-10-2015 17:25
                     SYSSTATS_INFO    DSTOP                 06-10-2015 17:25
                     SYSSTATS_INFO    FLAGS               0
                     SYSSTATS_INFO    STATUS                COMPLETED

Good! I still have the previous values here. The new stats have not been published.

 

The pending stats are stored in the history table, with a date in the future:

SQL> select savtime,sname,pname,pval1,pval2 from sys.wri$_optstat_aux_history where pval1 is not null or pval2 is not null and savtime>sysdate-30/24/60/60 order by 1,2 desc,3;

SAVTIME              SNAME            PNAME           PVAL1 PVAL2
-------------------- ---------------- ---------- ---------- --------------------
01-dec-3000 01:00:00 SYSSTATS_MAIN    CPUSPEEDNW       2725
01-dec-3000 01:00:00 SYSSTATS_MAIN    IOSEEKTIM          10
01-dec-3000 01:00:00 SYSSTATS_MAIN    IOTFRSPEED     204800
01-dec-3000 01:00:00 SYSSTATS_INFO    DSTART                06-10-2015 17:29
01-dec-3000 01:00:00 SYSSTATS_INFO    DSTOP                 06-10-2015 17:29
01-dec-3000 01:00:00 SYSSTATS_INFO    FLAGS               1
01-dec-3000 01:00:00 SYSSTATS_INFO    STATUS                COMPLETED

That's perfect. It seems that I can gather system statistics without publishing them. And I don't care about the Y3K bug yet.

 

12c use pending stats = true

First, I'll check that a session can use the pending stats if chosen explicitly:

SQL> alter session set optimizer_use_pending_statistics=true;

Session altered.

then I run the query:

SQL> set autotrace trace explain
SQL> select * from DEMO DEMO2;

Execution Plan
----------------------------------------------------------
Plan hash value: 4000794843

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  1308   (1)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  1308   (1)| 00:00:01 |
--------------------------------------------------------------------------

The cost is lower. This is exactly what I expected with my new - unpublished - statistics. Good. I don't know why it's lower than in 11g. Maybe the formula has changed. This is another place for comments ;)

 

12c use pending stats = false

Ok I checked that the published statistics are the same as before, but let's try to use them:

SQL> alter session set optimizer_use_pending_statistics=false;

Session altered.

and once again run the same query:

SQL> set autotrace trace explain

SQL> select * from DEMO DEMO3;

Execution Plan
----------------------------------------------------------
Plan hash value: 4000794843

----------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost  |
----------------------------------------------------------
|   0 | SELECT STATEMENT  |      | 80500 |    28M|  1541 |
|   1 |  TABLE ACCESS FULL| DEMO | 80500 |    28M|  1541 |
----------------------------------------------------------

Note
-----
   - cpu costing is off (consider enabling it)

Oh. There is a problem here. 'cpu costing is off' means that there are no system statistics. The cost has been calculated as in old versions without system statistics. This is bad. I have gathered pending statistics, not published them, but all sessions have their costing changed now.

 

10053

Just a look at the 10053 trace shows that I have a problem:

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
System Stats are INVALID.
...
  Table: DEMO  Alias: DEMO3
    Card: Original: 80500.000000  Rounded: 80500  Computed: 80500.000000  Non Adjusted: 80500.000000
  Scan IO  Cost (Disk) =   1541.000000
  Scan CPU Cost (Disk) =   0.000000
  Total Scan IO  Cost  =   1541.000000 (scan (Disk))
                       =   1541.000000
  Total Scan CPU  Cost =   0.000000 (scan (Disk))
                       =   0.000000
  Access Path: TableScan
    Cost:  1541.000000  Resp: 1541.000000  Degree: 0
      Cost_io: 1541.000000  Cost_cpu: 0
      Resp_io: 1541.000000  Resp_cpu: 0
  Best:: AccessPath: TableScan
         Cost: 1541.000000  Degree: 1  Resp: 1541.000000  Card: 80500.000000  Bytes: 0.000000

It seems that with pending statistics the optimizer can't simply get the published values, and falls back as if there were no system statistics. This is obviously a bug. I've not used the undocumented new procedures; they were used in the background, but it's totally supported to set PUBLISH to FALSE and then gather system statistics. The behavior should either be the same as in 11g - publishing the gathered stats - or gather into pending stats only, with sessions continuing to use the published ones by default.

 

Conclusion

In 11g, be careful, system statistic changes are always published.

In 12c, don't gather system statistics when PUBLISH is set to false. We can expect this nice new feature in future versions, but for the moment it messes up everything. I'll not open an SR yet, but I hope it will be fixed in future versions.

update

Further investigations were done by Stefan Koehler in this Twitter conversation:

@FranckPachot IOSEEKTIM=1 is not accepted/set. Reason for cost drop to 1308 in case of pending SYS stats … 1/2 pic.twitter.com/jIotn07lbP

— Stefan Koehler (@OracleSK) June 11, 2015

Flame Graph for quick identification of Oracle bug

Wed, 2015-06-10 04:28

Most of my performance stories start with a screenshot of Orachrome Lighty, my preferred tool to get a graphical view of database performance, in Standard and Enterprise Edition without any options:

b2ap3_thumbnail_CaptureHighParse.JPG

quickly exchange code or text between workstations or teams

Tue, 2015-06-09 17:20
In a recent project I faced the following situation: on the one hand I had to execute scripts on a customer's workstation, while on the other hand I had to integrate the results of these scripts into a report on my own workstation. The question was how to do this efficiently without sending dozens of mails to myself.

DOAG Middleware Day: WebLogic von allen Seiten beleuchtet

Tue, 2015-06-09 09:48

DOAG-MDW-SIG-img001

This year, I had the opportunity to participate in the Middleware Day organized by the "Deutsche Oracle Anwendergruppe" in Düsseldorf. As you would expect, all sessions were given in German. I was surprised to find that the German courses provided by dbi services and offered to their employees to improve language knowledge were not a waste of time: I understood all the sessions, I suppose ;) On the other hand, speaking and communicating with the other participants was more challenging.

 

Let’s get back to our concern, the DOAG Middleware Day. So, in this blog post, I will quickly describe the sessions from my point of view without going in the detail. To have more detail, just participate to the event

 

At the beginning, the "Begrüssung und Neues von der DOAG" (welcome and news from the DOAG) directly caught my attention, as there are two other interesting events in Q3 this year.

 

  • The 23rd of September: Usability and UX-Design mit ADF, Apex und Forms! Verträgt sich das mit Performance? - detail
 
  • The 24th of September: DOAG OpenStack Day 2015 - detail

 

And then the technical and interesting sessions began.

 

The first one gave us some tips and tricks for tuning ADF web applications in WLS 12c. Some interesting information was provided. The speaker really knew all the possibilities of ADF customization and optimization that can be applied declaratively in JDeveloper.

 

The second session gave us some information regarding the Oracle Cloud offering with the Java Cloud Service. The speaker described the Oracle Cloud offering and also the pricing. He made a demo of the usability and the features provided to an administrator to easily provision his cloud space, and showed live how simple it is to deploy a JEE application from his NetBeans IDE to the Java Cloud Service. He also gave an interesting demo of an HTML5 application on a WebLogic server set up in a cluster: WebLogic directly managed the synchronization of each node of the cluster for a WebSockets application hosted on the cluster.

 

The third session covered a practical SSO use case, demonstrating the architecture and the integration of Forms applications with a new CRM system. This covered the following components: Oracle Forms, Reports and Discoverer, Oracle Access Management, Oracle Internet Directory, the Kerberos authentication mechanism, and Microsoft Active Directory ASO.

 

The next session also covered a practical use case, including the drawbacks of implementing WebLogic on the ODA at a customer. The speaker covered the physical architecture and showed us some performance test results.

 

The last session concentrated on WebLogic cluster features and capabilities. Unfortunately, the main part was a demo which had a problem. The presenter remained cool and was able to manage it quite well.

 

It was quite a good day in Germany with interesting presentations.

SQL Server 2016 CTP2: Stretch database feature - Part 1

Tue, 2015-06-09 03:51

SQL Server 2016 CTP2 has introduced some interesting new features such as Always Encrypted, Stretch database, the configuration of tempdb in the SQL Server installation, and so on.

Regarding the configuration of tempdb in SQL Server 2016 CTP2, I recommend a good article called SQL Server 2016 CTP2: first thoughts about tempdb database from David Barbarin.

 

In this article, I will focus on the Stretch database feature!

 

What is the Stretch Database feature?

This new feature allows you to extend on-premise databases to Microsoft Azure. In other words, you can use the Microsoft cloud as additional storage for your infrastructure.

This can be useful if you have issues with your local storage, such as a lack of available space.

 

Prerequisites

First, you need to enable the option on the server by running the stored procedure named 'sp_configure'. This requires at least serveradmin or sysadmin permissions.
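The screenshots below show these two steps; as a sketch, the equivalent T-SQL is:

-- enable the Stretch feature at the instance level
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;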

remote_data_archive.png

 

remote_data_archive_reconfigure.png

 

Of course, you also need a valid Microsoft Azure subscription and your credentials. Be careful: a SQL Database server and Azure storage will be used for this new feature.

 

Enable Stretch for a database

After enabling this feature at the server level, you need to enable it for the desired database.

It requires at least db_owner and CONTROL DATABASE permissions.

By default, it will create a SQL Database server with the Standard service tier and the S3 performance level. To fit your needs, you can change the level of the service afterwards.
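For example, scaling the Azure SQL Database to another performance level afterwards could look like the following (the database name is hypothetical, and the statement is run against the Azure server):

-- change the service tier / performance level of the Azure SQL Database
ALTER DATABASE StretchDb MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0');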

Everything is done using a wizard in Management Studio. To open the wizard, proceed as follows:

enable_database_for_strech.png

 

Skip the 'Introduction' step to access the 'Microsoft Sign-In' step:

database_for_strech_wizard_step2.png

 

You need your Microsoft Azure credentials to access your subscription. Once this is done, you can click on 'Next'.

database_for_strech_wizard_step3.png

 

 

You have to select an Azure location. Of course, for better performance, you should select the closest location to your on-premise server.
You also need to provide credentials for the Azure SQL DB server which will be created through the wizard.
The last step is to configure the SQL Database firewall in Azure to allow connections from your on-premise server. To do this, you must specify a custom IP range or use the current IP of your instance.

Then, click the 'next' button. A summary of all your configuration is displayed. Click the 'next' button again.

database_for_strech_wizard_step5.png

 

The configuration is now complete! The feature is enabled for your database.

With Visual Studio, you can connect to the SQL Database server which is in Azure. You can see the SQL Database server recently created:

 

SQL_azure.png

 

At the moment, there is no table stored in Azure, because we have not enabled the feature for any table yet. In my next blog, I will show you how to do this!