
Feed aggregator

Delivering Enterprise Mobile Applications Webcast

Anshu Sharma - Thu, 2015-02-26 12:36

Please join us for a discussion of the opportunities and challenges in delivering enterprise mobile applications. Product management and an Oracle partner who has been successful in building a profitable mobile business will provide their perspectives.

 http://www.oracle.com/partners/secure/campaign/eblasts/enterprise-mobile-applications-2419241.html

3 things we learned from the Linux Collaboration Summit

Chris Foot - Thu, 2015-02-26 12:32

On February 13th, developers participating in the Linux project assembled in Santa Rosa, California to assess the kernel's development over the past year. It's been nearly 20 years since Linux 1.0 was released, and the solution has evolved considerably since then. 

InformationWeek contributor Charles Babcock acknowledged statements made by Dan Frye, IBM's vice president of open systems, who maintained that Linux is one of a kind, given the fact that the project has grown so much since its inception. During the conference, a number of Linux facts were brought up that some may not be aware of. Let's take a look at some of these findings.

1. Developer profile expands
According to Ars Technica, an estimated 2,000 developers were added to the Linux project over the past 15 months, making up nearly 50 percent of all the engineers currently writing code for the OS kernel. The news source cited a report released by the Linux Foundation around the time of the Collaboration Summit. 

"Linux kernel 3.15 was the busiest development cycle in the kernel's history," noted the Linux Foundation, in an announcement, as quoted by the news source. "The average number of changes accepted into the kernel per hour is 7.7, which translates to 185 changes every day and nearly 1,300 per week." 

2. Linux is reaching 19 million lines of code
Babcock noted that when Linus Torvalds first disclosed the open source kernel, Linux had 10,250 lines of code. In December of last year, exactly 18,997,848 lines of code were featured in the 3.18 kernel. Six releases were unveiled throughout 2014, resulting in 1 million lines being added. 

This speaks to just how much the kernel has grown over the past two decades. It also shows how much you can do with this software, and why it's the OS of choice among many system administrators. 

3. Its development rate is accelerating 
With more developers on board and a sizeable code portfolio, one could imagine kernel releases are hampered by the pains of exchanging ideas among so many contributors. However, this isn't the case. The Linux project has fully adopted the ideals behind open source development – that software should be continuously revised whenever changes are made. 

Now, a new Linux kernel is released about every 70 days or so. The 3.18 version was developed and released after 63 days of collaboration, according to Babcock. It's a testament to Linus Torvalds' mission of creating a best-in-class OS kernel. 

The post 3 things we learned from the Linux Collaboration Summit appeared first on Remote DBA Experts.

Parallel Execution -- 1 The PARALLEL Hint and AutoDoP

Hemant K Chitale - Thu, 2015-02-26 09:29
The behaviour of the PARALLEL Hint has changed subtly but significantly in 11.2.  From the documentation :
Beginning with Oracle Database 11g Release 2 (11.2.0.1), the PARALLEL and NO_PARALLEL hints are statement-level hints and supersede the earlier object-level hints .......... If you omit integer, then the database computes the degree of parallelism.

Further down, the documentation states :
This hint overrides the value of the PARALLEL_DEGREE_POLICY initialization parameter. 
and 
PARALLEL: The statement always is run in parallel, and the database computes the degree of parallelism, which can be 2 or greater.

It is important to note that the Statement Level PARALLEL Hint *overrides* the PARALLEL_DEGREE_POLICY.
So, even if PARALLEL_DEGREE_POLICY is set to MANUAL, implying that automatic degree of parallelism is disabled, the PARALLEL Hint, itself, allows Oracle to auto-compute a DoP.

What further complicates understanding of the behaviour is a reading of Oracle Support Note 1269321.1 that implies that if I/O calibration statistics are missing, Oracle does not use the automatic degree of parallelism feature.
So, one would assume that if I/O Calibration is not done, with the "automatic degree of parallelism" feature not being used, the PARALLEL Hint would not compute any Auto DoP!

Let's run a simple test case :

HEMANT>select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

HEMANT>
HEMANT>show parameter parallel;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 40
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 16
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
HEMANT>
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 10488 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 10488 (1)| 00:02:06 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 10488 (1)| 00:02:06 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
2

HEMANT>
HEMANT>select * from dba_rsrc_io_calibrate;

no rows selected

HEMANT>

Apparently, I/O Calibration statistics are not present ("are missing"). And yet, Oracle chose to use 2 PX Servers (not 4, not 1) for the query. Isn't this confusing? Is AutoDoP used or is it not used?

Let's make a change somewhere (else ?)

HEMANT>show parameter cpu_count

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 1

HEMANT>

Let's try on a server with more CPUs.  I reconfigure the same VM to run with 4 "CPUs" (cores) and restart the VM and database instance.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Thu Feb 26 23:18:47 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 4
SYS>show parameter parallel

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 135
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 64
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
SYS>

We can now see that CPU_COUNT, PARALLEL_MAX_SERVERS and PARALLEL_SERVERS_TARGET have all gone up.

SYS>connect hemant/hemant
Connected.
HEMANT>set serveroutput off
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2622 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>
HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
8

HEMANT>

Aaha! The same SQL statement, the same SQL_ID, the same Execution Plan (Plan Hash Value), and Oracle chose to use 8 (not 1, not 4, not 2) PX servers for the query!

So, there IS some way that AutoDoP is being used even when I/O calibration statistics are missing. Is this AutoDoP simply a function of CPU_COUNT x PARALLEL_THREADS_PER_CPU?
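
A quick way to sanity-check that hypothesis is simply to multiply the two parameter values. The query below is my own illustration and not part of the original test case (the HYPOTHETICAL_DOP alias is made up):

select to_number(a.value) * to_number(b.value) as hypothetical_dop
from   v$parameter a, v$parameter b
where  a.name = 'cpu_count'
and    b.name = 'parallel_threads_per_cpu'
/

On the single-CPU instance this returns 2, and on the reconfigured 4-CPU instance it returns 8, which matches the PX_SERVERS_EXECUTIONS values observed above.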

UPDATE: Herlindo QV also confirmed to me on Twitter: "it seems that way and in RAC the formula looks like (CPU_COUNT x PARALLEL_THREADS_PER_CPU) x RAC nodes" and, later, when I asked him to check with PARALLEL_FORCE_LOCAL=TRUE: "right, restricts parallel to just one instance. The tricky part is how to choose the best DOP for each unique scenario"
.
.
.

Categories: DBA Blogs

Webcast: Next Generation AP Invoice Automation

WebCenter Team - Thu, 2015-02-26 09:20

Inspyrus is a Silicon Valley software startup that provides large enterprises and small-to-medium businesses with cloud and on-premise technology solutions. Join this webcast to learn how Next-Gen A/P Process Automation can deliver significant results in terms of cost savings, processing time and resource efficiency to your existing Oracle and non-Oracle ERP applications.

Register Now!
Tuesday, March 3, 2015 9:00 am | Central Standard Time (Chicago, GMT-06:00)

Power Cord Replacement Notice (updated February 2015)

Joshua Solomin - Thu, 2015-02-26 09:16
Oracle published a Power Cord Replacement Notice to inform its customers that a small number of power cords from one specific manufacturer may develop a leakage current condition over an extended period of time (typically >4 years). If excessive enough, this leakage current could lead to a trip condition of the rack PDU or external circuit breaker. In a more severe case, there is the possibility of damage due to a thermal event. These cords shipped with certain systems from January 2004 to July 2011. Oracle is advising its customers to replace suspected power cords as soon as possible. For more information on this issue, including instructions on how to identify suspected power cords and obtain replacement power cords, see the Power Cord Replacement Notice.

Database security monitoring identifies employee wrongdoing [VIDEO]

Chris Foot - Thu, 2015-02-26 09:01

Transcript

Hi, welcome to RDX! When it comes to security, many think about external threats, but rarely regard internal forces. Yes, we’re talking about the disgruntled employees.

Don’t think that’s much of a concern? A survey conducted by Harris Interactive found that 19 percent of workers aged 18 to 34 said they would steal company information if they knew they were going to be fired. In addition, 16 percent of respondents noted they were able to access systems through old user authentications.

Not only do database monitoring programs assess systems for malware and spyware, they also record user activity. This means when unauthorized persons gain entry to a database, a team of DBAs is notified of the situation. In some cases, this technology can be adjusted to block those using outdated credentials from accessing the databases entirely.

Thanks for watching! Check back next time to learn how database monitoring services can help protect your information.

The post Database security monitoring identifies employee wrongdoing [VIDEO] appeared first on Remote DBA Experts.

Real time sql monitoring – bulk bound statement

Dominic Brooks - Thu, 2015-02-26 09:00

Moving through some observations from an ongoing investigation, tackling the easiest ones first.

Previously I observed the slow execution of a recursive delete of statistics history, coming not from MMON but from a number of application processes which should only have been calculating stats for their own partitions.

Statement was sql id 9v9n97qj8z1dg:

	
delete /*+ dynamic_sampling(4) */
from sys.wri$_optstat_histhead_history
where savtime < :1
and rownum <= NVL(:2, rownum);

From the real time sql monitoring output of a single execution (DBMS_SQLTUNE.REPORT_SQL_MONITOR EXEC_ID parameter) , I observed that EXECS was greater than 1 and so the A-Rows was much larger than I expected for a statement which was deliberately restricting the number of rows affected by a ROWNUM predicate.

Reminder:

SQL Plan Monitoring Details (Plan Hash Value=2348801730)
=======================================================================================================================================================================================================
| Id   |            Operation             |             Name              |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |            Activity Detail            |
|      |                                  |                               | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |              (# samples)              |
=======================================================================================================================================================================================================
|    0 | DELETE STATEMENT                 |                               |         |       |     50262 |   +101 |  1423 |        1 |      |       |          |                                       |
|    1 |   DELETE                         | WRI$_OPTSTAT_HISTHEAD_HISTORY |         |       |     50361 |     +2 |  1423 |        1 |      |       |    29.74 | enq: TX - row lock contention (14630) |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | buffer busy waits (231)               |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (82)                              |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (5)                |
|    2 |    COUNT                         |                               |         |       |     50262 |   +101 |  1423 |      14M |      |       |     0.00 | Cpu (1)                               |
|    3 |     FILTER                       |                               |         |       |     50266 |    +97 |  1423 |      14M |      |       |    30.14 | Cpu (15146)                           |
|    4 |      TABLE ACCESS BY INDEX ROWID | WRI$_OPTSTAT_HISTHEAD_HISTORY |      6M | 23218 |     50253 |   +100 |  1423 |       4G |   22 | 176KB |    23.17 | buffer busy waits (18)                |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (11627)                           |
| -> 5 |       INDEX RANGE SCAN           | I_WRI$_OPTSTAT_HH_ST          |    990K |  5827 |     50264 |   +101 |  1423 |       4G |   41 | 328KB |    16.94 | buffer busy waits (3)                 |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (8332)                            |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (180)              |
=======================================================================================================================================================================================================
   3 - filter(ROWNUM<=NVL(:2,ROWNUM))
   5 - access("WRI$_OPTSTAT_HISTHEAD_HISTORY"."SYS_NC00024$"<SYS_EXTRACT_UTC(:1))

Initially I was stumped by the fact that A-Rows in steps 2-3 was greater than the ROWNUM predicate restriction (10000), but that was before I noticed the number of EXECS: EXECS * 10000 = 14M, so was this a report of a bulk/batched statement?

So, this is just an illustration that such a bulk bound statement will produce an output as per above.

Create a table with a few rows to slow things down:

drop table t3;

create table t3
as
with x as
(select rownum col1
 from   dual
 connect by level <= 1000)
select x1.col1
,      rpad('X',100,'X') col2
from   x x1, x x2;

Then run an update with FORALL to bulk bind:

declare
 v1 sys.odcinumberlist := sys.odcinumberlist();
begin
 for i in 1 .. 1000
 loop
     v1.extend();
     v1(i) := i;
 end loop;
 forall i in 1 .. v1.count
   update /*+ monitor domtest */ t3
   set    col2 = rpad('Y',100,'Y')
   where  col1 = v1(i);
 commit;
end;
/

Hunt down my SQL execution and get RTSM output of statement.

V$SQL reports this as one execution updating 1M rows.

select executions, rows_processed, elapsed_time/1000/1000, sql_text from v$sql where sql_id = '3tkqtzjyaa02g';

EXECUTIONS ROWS_PROCESSED ELAPSED_TIME/1000/1000 SQL_TEXT
---------- -------------- ---------------------- ------------------------------------------------------------------------------
         1        1000000             109.596429 UPDATE /*+ monitor domtest */ T3 SET COL2 = RPAD('Y',100,'Y') WHERE COL1 = :B1

RTSM reports it as 1044 (!?) executions and doesn’t seem to deal with the number of rows updated.
We only get the bind for the initial bind – not unexpected.

select dbms_sqltune.report_sql_monitor('3tkqtzjyaa02g',sql_exec_id => '16777216') from dual;
SQL Monitoring Report

SQL Text
------------------------------
UPDATE /*+ monitor domtest */ T3 SET COL2 = RPAD('Y',100,'Y') WHERE COL1 = :B1

Global Information
------------------------------
 Status              :  DONE                 
 Instance ID         :  1                    
 SQL ID              :  3tkqtzjyaa02g        
 SQL Execution ID    :  16777216             
 Execution Started   :  02/26/2015 13:25:29  
 First Refresh Time  :  02/26/2015 13:25:29  
 Last Refresh Time   :  02/26/2015 13:27:19  
 Duration            :  110s                 
 Module/Action       :  SQL Developer/-      
 Service             :  SYS$USERS            
 Program             :  SQL Developer        

Binds
========================================================================================================================
| Name | Position |  Type  |                                           Value                                           |
========================================================================================================================
| :B1  |        1 | NUMBER | 1                                                                                         |
========================================================================================================================

Global Stats
=========================================
| Elapsed |   Cpu   |  Other   | Buffer |
| Time(s) | Time(s) | Waits(s) |  Gets  |
=========================================
|     110 |     109 |     0.16 |    16M |
=========================================

SQL Plan Monitoring Details (Plan Hash Value=669554690)
==========================================================================================================================
| Id |      Operation       | Name |  Rows   | Cost |   Time    | Start  | Execs |   Rows   | Activity | Activity Detail |
|    |                      |      | (Estim) |      | Active(s) | Active |       | (Actual) |   (%)    |   (# samples)   |
==========================================================================================================================
|  0 | UPDATE STATEMENT     |      |         |      |           |        |  1044 |          |          |                 |
|  1 |   UPDATE             | T3   |         |      |       109 |     +2 |  1044 |        0 |     0.92 | Cpu (1)         |
|  2 |    TABLE ACCESS FULL | T3   |     171 | 3409 |       110 |     +1 |  1044 |       1M |    99.08 | Cpu (108)       |
==========================================================================================================================

By contrast a non-bulk bound FOR LOOP statement would register as 1000 executions in V$SQL and each a separate SQL_EXEC_ID.
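
For reference, the row-by-row equivalent might look like the sketch below; this is my own illustration rather than something run as part of the original investigation (the 'Z' padding value is arbitrary):

begin
 for i in 1 .. 1000
 loop
     -- each loop iteration is a separate execution of this UPDATE cursor
     update /*+ monitor domtest */ t3
     set    col2 = rpad('Z',100,'Z')
     where  col1 = i;
 end loop;
 commit;
end;
/

Because each iteration executes the cursor individually, V$SQL should report 1000 executions for it and, as noted above, each monitored execution should get its own SQL_EXEC_ID.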

However this does nothing to address the concern about what the heck is going on such that application processes executing gather_table_stats for a particular partition would each end up running this bulk indiscriminate cleanup script… heading that way next.


The German ADF-Community-Book was released today

Gerd Volberg - Thu, 2015-02-26 06:00
The German ADF- (and Forms-) Community released their first book.

The "ADF book" is a compilation of German lectures, articles and workshop tutorials in Oracle ADF (some of the contributions are in English!). The authors are members of the German ADF community. The time frame covers the years 2010 to 2014. The project "ADF book" was implemented by a team of staff from partner companies and Oracle.

40 authors wrote 70 contributions on 1400 pages in this book.

It's a 110 MB PDF and can be downloaded for free from the ADF-Community-Page.

Here is the direct link
My part in the book has the title "Modernizing Oracle Forms", in which I demonstrate how easy it is to enhance Oracle Forms with modern frameworks like LAF (the Look and Feel framework from Francois Degrelle) and how to integrate ADF in Oracle Forms via OraFormsFaces (a framework from Wilfred van der Deijl).

Have fun with the book
Gerd

SQL*Plus Terminator Torture

The Anti-Kyte - Thu, 2015-02-26 05:42

“Leave that jar of Nutella alone, it’s got my name on it !”
The context in which Deb issued this injunction to me probably requires some explanation.
It was Friday evening.
Wales had just…well…come second in the latest installment of their eternal battle with the English through the medium of Rugby.
There was no alcohol left in the house.
And only one source of chocolate.
From the safety of the Cupboard under the stairs, to which I had retreated at kick-off – the Welsh do take their Rugby quite seriously – I wondered about my better half’s change of name.
Shorn of it’s chocolate hazelnut spread connotations, you might think that Nutella was quite an nice name for a girl.
It certainly seems appropriate if the “Girl” in question is slightly unhinged by a combination of wine and wounded national pride.

I was going to write something here about how Rugby players all look like the Terminator and use this as a way of introducing the topic at hand. However, I realise that this would simply be too contrived…even for me.
Instead, I’ll jump straight in…

The Nature of SQL*Plus

SQL*Plus is the command line interface for the Oracle RDBMS. As such, it supports three categories of statement:

  • SQL*Plus commands – which require no terminator
  • SQL commands – terminated by a “/” on a new line
  • PL/SQL blocks – containing statements that are terminated by “;”

…but that’s not the whole story as we are about to discover….

select 'Hasta La Vista' from dual
/

set serveroutput on size unlimited

begin
    dbms_output.put_line(q'[Ill be back...from PL/SQL]');
end;
/

select 'Just you wait until the World Cup!' 
from dual;

Here we can see an example of all three statement types – SQL, SQL*Plus and PL/SQL.
However the final SQL statement is a bit different. The terminator is not the "/" on a new line as with the first statement, but a ";".

It is this small quirk that can cause some interesting things to happen and leave you scratching your head for a good while until you figure out exactly what’s happened.

Spot the Terminator

Consider the following, run as an unattended batch job :

select to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') from dual;
declare
    e_not_exists exception; 
    pragma exception_init(e_not_exists, -942);
begin
    execute immediate 'drop table tenuous_links purge';
exception when e_not_exists then null;
end;
/
    
create table tenuous_links( arnie_quote varchar2(100));
/

Simple enough – drop the table ( ignoring the ORA-00942 error if it doesn’t exist in the first place), then create it again…


TO_CHAR(SYSDATE,'DD-MON-YYYYH
-----------------------------
21-FEB-2015 13:12:20


PL/SQL procedure successfully completed.


Table created.

create table tenuous_links( arnie_quote varchar2(100))
             *
ERROR at line 1:
ORA-00955: name is already used by an existing object


SQL> 

At first glance, it would seem that our PL/SQL block to drop the table didn’t work for some reason.
However, if we dig a bit deeper…

select to_char(created, 'DD-MON-YYYY HH24:MI:SS')
from user_objects
where object_name = 'TENUOUS_LINKS'
and object_type = 'TABLE'
/

TO_CHAR(CREATED,'DD-MON-YYYYH
-----------------------------
21-FEB-2015 13:12:20

So, the table was created at the time we ran our script. We know it still exists (and is not in the recycle bin) because it’s still in USER_OBJECTS.

If the table was dropped then re-created when we ran our script then why did we get the error ?

Let’s have a look at that CREATE TABLE statement again :

create table tenuous_links( arnie_quote varchar2(100));
/

Notice that “;” at the end of the first line ?
SQL*Plus takes this to mean “execute the last statement in the buffer”.
Then on the next line we have the same directive – expressed using a different syntax – i.e. “/”.

If we correct the script by removing the extraneous “;” all now works as expected :


select to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') from dual;
declare
    e_not_exists exception; 
    pragma exception_init(e_not_exists, -942);
begin
    execute immediate 'drop table tenuous_links purge';
exception when e_not_exists then null;
end;
/
    
create table tenuous_links( arnie_quote varchar2(100))
/

TO_CHAR(SYSDATE,'DD-MON-YYYYH
-----------------------------
21-FEB-2015 13:30:48


PL/SQL procedure successfully completed.


Table created.

Whilst debugging a small script like this is relatively straight forward, you’re likely to be confronted with a much longer script in the real world and spend considerably more time tracking down the problem.

Culling the Terminators

At this point we have a number of questions :

  1. How and why does SQL*Plus allow two terminator characters?
  2. Can we turn one of them off?
  3. Can we standardise on one of them and if so, which?

It’s probably worth taking a closer look at the two terminator characters as they behave slightly differently in
certain circumstances. For example…

create or replace function arnie
    return varchar2
as
begin
    return(q'[I'll be back!]');
end;
/

Function created.

However, we get a bit less success if we try :

create or replace function arnie
    return varchar2
as
    begin
        return(q'[I'll be back!]');
end;;
SQL> 
SQL> 
SQL> 

SQL*Plus doesn’t recognise the second “;” as a terminator. Putting it on a separate line fares no better.
From this then, we can infer that the “/” is mandatory when generating DDL for PL/SQL stored program units.

The next question is how we end up with two terminators, and whether we can switch one of them off.
Well, let’s take a look at the SQL*Plus settings, or one setting in particular :

SQL> show sqlterminator
sqlterminator ";" (hex 3b)
SQL> 

We can use this parameter to set the terminator to another single character…


SQL> set sqlterminator !

SQL> select 'Another Terminator reference'
  2  from dual
  3  !

'ANOTHERTERMINATORREFERENCE'
----------------------------
Another Terminator reference

SQL> 

SQL> select 'Another Terminator reference' from dual !

'ANOTHERTERMINATORREFERENCE'
----------------------------
Another Terminator reference

SQL> 

SQL> select 'Another Terminator reference!' from dual !

'ANOTHERTERMINATORREFERENCE!'
-----------------------------
Another Terminator reference!

SQL> 

…but not multiple characters…

SQL> set sqlterminator runbuffer
string "runbuffer" is too long. maximum size is 1 character.
SQL> 

Interestingly, we can also disable it altogether and then re-enable it…

SQL> set sqlterminator off
SQL> show sqlterminator
sqlterminator OFF
SQL> select sysdate
  2  from dual;
  3  
SQL> 
SQL> select sysdate
  2  from dual
  3  /

SYSDATE
------------------
20-FEB-15

SQL> set sqlterminator ;
SQL> select sysdate
  2  from dual;

SYSDATE
------------------
20-FEB-15

SQL> 

Whilst this can be quite useful, especially if you’re running scripts that contain only DDL for stored program units, it does feel a bit odd having to put the terminator on a new line.
Additionally, you may consider that standardising this would require some serious regression testing of any SQL scripts to make sure that they’re not using the “;” terminator, not to mention any scripts that get generated dynamically.

Missing Terminators

Just as vexing as having too many terminators is not having enough.

Consider:

create or replace package the_terminator as
    function favourite_terminator
        return varchar2;
end;

create or replace package body the_terminator as

    function favourite_terminator
        return varchar2
    is
    begin
        return q'[There's only one Arnie !]';
    end favourite_terminator;
end the_terminator;
/

Warning: Package created with compilation errors.

SQL> sho err
Errors for PACKAGE THE_TERMINATOR:

LINE/COL ERROR
-------- -----------------------------------------------------------------
6/1	 PLS-00103: Encountered the symbol "CREATE"

SQL> select object_type          
  2  from user_objects
  3  where object_name = 'THE_TERMINATOR'
  4  /

OBJECT_TYPE
-------------------
PACKAGE

SQL> 

Now consider this in a longer script ( which I’ve instrumented with prompt statements) :

prompt Recording Start Time

select to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') from dual;

prompt Dropping TENUOUS_LINKS

declare
    e_not_exists exception; 
    pragma exception_init(e_not_exists, -942);
begin
    execute immediate 'drop table tenuous_links purge';
exception when e_not_exists then null;
end;
/

prompt Creating table TENUOUS_LINKS
    
create table tenuous_links( arnie_quote varchar2(100))
/


prompt Creating Function ARNIE

create or replace function arnie
    return varchar2
as
begin
    return(q'[I'll be back !]');
end;

prompt Creating Package THE_TERMINATOR

create or replace package the_terminator as
    function favourite_terminator
        return varchar2;
end;
/

create or replace package body the_terminator as

    function favourite_terminator
        return varchar2
    is
    begin
        return q'[There's only one Arnie !]';
    end favourite_terminator;
end the_terminator;
/

This time, we’ve missed the terminator at the end of the function.
However, this causes the Package to error as well :

Recording Start Time

TO_CHAR(SYSDATE,'DD-MON-YYYYH
-----------------------------
21-FEB-2015 14:11:39

Dropping TENUOUS_LINKS

PL/SQL procedure successfully completed.

Creating table TENUOUS_LINKS

Table created.

Creating Function ARNIE

Warning: Function created with compilation errors.


Warning: Package Body created with compilation errors.

SQL> sho err
Errors for PACKAGE BODY THE_TERMINATOR:

LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0	 PL/SQL: Compilation unit analysis terminated
1/14	 PLS-00304: cannot compile body of 'THE_TERMINATOR' without its
	 specification

1/14	 PLS-00905: object MIKE.THE_TERMINATOR is invalid
SQL> select object_name, object_type
  2  from user_objects
  3  where object_name in ('ARNIE', 'THE_TERMINATOR')
  4  /

OBJECT_NAME		       OBJECT_TYPE
------------------------------ -------------------
ARNIE			       FUNCTION
THE_TERMINATOR		       PACKAGE
THE_TERMINATOR		       PACKAGE BODY

SQL> 

There are a couple of things to note here. First is that it may well be worth enhancing the instrumentation in the script by including SHOW ERRORS after each stored program unit creation statement. The second is that there is no “smoking gun” error for either too many terminators, or not enough.
With SHOW ERRORS in place, it becomes a bit easier to spot what’s going wrong :

prompt Recording Start Time

select to_char(sysdate, 'DD-MON-YYYY HH24:MI:SS') from dual;

prompt Dropping TENUOUS_LINKS

declare
    e_not_exists exception; 
    pragma exception_init(e_not_exists, -942);
begin
    execute immediate 'drop table tenuous_links purge';
exception when e_not_exists then null;
end;
/

prompt Creating table TENUOUS_LINKS
    
create table tenuous_links( arnie_quote varchar2(100))
/


prompt Creating Function ARNIE

create or replace function arnie
    return varchar2
as
begin
    return(q'[I'll be back !]');
end;

show error

prompt Creating Package THE_TERMINATOR

create or replace package the_terminator as
    function favourite_terminator
        return varchar2;
end;
/

show error


create or replace package body the_terminator as

    function favourite_terminator
        return varchar2
    is
    begin
        return q'[There's only one Arnie !]';
    end favourite_terminator;
end the_terminator;
/

show error

When you run this script, the problem is a bit easier to spot :

Recording Start Time

TO_CHAR(SYSDATE,'DD-MON-YYYYH
-----------------------------
21-FEB-2015 14:17:03

Dropping TENUOUS_LINKS

PL/SQL procedure successfully completed.

Creating table TENUOUS_LINKS

Table created.

Creating Function ARNIE

Warning: Function created with compilation errors.

Errors for FUNCTION ARNIE:

LINE/COL ERROR
-------- -----------------------------------------------------------------
8/1	 PLS-00103: Encountered the symbol "SHOW"

Warning: Package Body created with compilation errors.

Errors for PACKAGE BODY THE_TERMINATOR:

LINE/COL ERROR
-------- -----------------------------------------------------------------
0/0	 PL/SQL: Compilation unit analysis terminated
1/14	 PLS-00304: cannot compile body of 'THE_TERMINATOR' without its
	 specification

1/14	 PLS-00905: object MIKE.THE_TERMINATOR is invalid
SQL> 

There is another SQL*Plus setting that you might consider to at least reduce some of the pain caused by this tangle of terminators.
Remember, the terminator is a directive for SQL*Plus to run what's in its buffer. By default this is the last SQL or PL/SQL statement. SQL*Plus commands are not held in the buffer.
The CLEAR BUFFER command will, well, clear the contents of the buffer…

SQL> select 'I really must watch the movies again'
  2  from dual
  3  /

'IREALLYMUSTWATCHTHEMOVIESAGAIN'
------------------------------------
I really must watch the movies again

SQL> list
  1  select 'I really must watch the movies again'
  2* from dual
SQL> clear buffer
buffer cleared
SQL> list
SP2-0223: No lines in SQL buffer.
SQL> 

So, run a statement and we can then see it's in the buffer by issuing the list command (usually abbreviated to "l").
After the CLEAR BUFFER, there’s nothing to see.

Conclusion

SQL*Plus facilitates the use of two terminator characters which behave in the same way…most of the time.
Whilst it’s possible to disable, or even change the “;”, it may not be practical to go forward using just the “/” as a standard.

Deb has just “asked” me to add a disclaimer to this post.
A certain amount of artistic licence was used in the first couple of paragraphs.
After all, there’s so much junk in the cupboard under the stairs that I’d never fit in.
The Shed, Deb assures me, is an entirely different matter.


Filed under: Oracle, SQL Tagged: clear buffer, prompt, set sqlterminator, show errors, SQL*Plus, statement terminators

Why and How to use Oracle Metadata Management 12c. Part 2: Importing and Relating Metadata

Rittman Mead Consulting - Thu, 2015-02-26 05:27

In the first post of this series we saw how to install and configure OEMM to start working with it. In this new post we are going to see how we import metadata into OEMM from different sources like the Oracle Database, OBIEE, ODI and OWB, and then relate all of it inside OEMM.

oemm_main_page

After we have installed and configured OEMM, we need to start adding all the metadata from the different sources and applications that we use. In this example the sources will be some Oracle schemas, and our applications will be ODI, OWB and OBIEE. To import the metadata for all of them we need to create one model in OEMM for each. A model in OEMM holds all the connection details for a specific source or metadata provider (e.g. a database schema, an ODI repository, etc.) and is also the container for the metadata of that specific source after the import process. So one model connects to one specific source or application.

First, for organisational purposes, we will create a folder to contain the future models. You can also create your models first, then create the folders that you want and just move the models under the corresponding folders. In addition, you can create folders within another folder.

To create a folder, right-click on the Repository entry under the Repository panel in the OEMM main page. Select New > Folder in the pop-up menu, enter a name and press the Create button.

 

oemm_folder_comb

The next step is to create the models and import the metadata from the different sources. The import, or reverse engineering, process is named harvesting in OEMM. We will start with the model for the Oracle Database. In this particular example I used Oracle 12c.

To create a model, right-click on the folder or the repository entry and select New > Model. In the Create Model window that appears, enter a name for the new model and select the type of source that you want to import or, to be more precise, which Import Bridge this model will use.

The Import Bridge is part of the Meta Integration® Model Bridge (MIMB) software and is the way that OEMM connects to the sources and applications to reverse engineer the metadata. You will find import bridges for a wide range of technologies: different databases, Business Intelligence and Data Integration products from different vendors, Big Data stores, etc.

oemm_model_comb

For this first example we will select the Oracle Database (via JDBC) import bridge, and in the Import Setup tab we will add all the usual connection details: host, port, service, and the user and password to connect to the database. This user should have at least the CONNECT privilege and the SELECT_CATALOG_ROLE role. We can also restrict this model to specific schemas, either using the magnifying glass to choose the schemas shown or just writing the schemas (in uppercase) separated by ";". We can also decide whether stored procedures should be included in the imported metadata or not.

oemm_model_conn_db

After all the connection details have been set, we test the connection, wait for the Connection Successful message, and finally press the Create button. A message window will appear asking if we want to "Import a new version now?" Press yes to start the harvesting process. A log window will show the progress of the import process, which can take several minutes. After the process is finished, a new message window asks if we want to open the model.

oemm_imp_succ_open_model

Choose yes to see all the objects that were imported for this model, as shown in the figure below.

ORCL_training_model

We need to repeat the process explained above to create the models for the rest of the sources and applications that we are going to use in this example. The process is the same for all of them, but of course there are some differences in the connection details required once we choose the specific Import Bridge for each one.

In the next screenshot you will find the connection details for the ODI model after choosing the Oracle Data Integrator (ODI) Import Bridge. In the Import Setup tab, you need to select the appropriate driver to connect to the database where the ODI repository resides (which could be Oracle, SQL Server, DB2, etc.), the ODI home folder, the URL to connect to the database, the schema and password for the Master Repository, the user and password for the ODI user (SUPERVISOR, for example), the name of the Work Repository from which we want to select the ODI objects, and the Context.

oemm_odi_import_setup

We need to select the Scope for this model from two options: Projects, which includes packages and mappings (or interfaces for versions before 12c), or Load Plans and Scenarios, which includes the load plans and scenarios.

After we choose the Scope we can also filter the content of the scope by pressing the magnifying glass icon and selecting the specific objects that we want for this model.

After you press the Create button to start the harvesting process, open the created model; it will look similar to this if you chose Projects as the Scope.

oemm_odi_projects

For the connection details to create the OWB model, you need to take a couple of things into account. First, the version of OWB from which you want to import the metadata: if it is 11.2.0.3 or later, you will need to do these two steps beforehand:

  1. Copy the following .JAR files from: MetaIntegrationInstallationDir\java\ to %OWB_HOME%\owb\lib\ext\
  • jsr173_1.0_api.jar
  • MIR.jar
  • MIRModelBridge.jar
  • MIROracleWarehouseBuilderOmb.jar
  • MIRUtil.jar
  • stax-1.1.1-dev.jar
  2. Copy the MetaIntegrationInstallationDir\bin\mimbexec.bat file into the same OWB directory.

As the version I have is 11.2.0.4, I copied the files detailed above, set the connection parameters as shown in the following image, and tested the connection.

owb model config

When I started the import process, the following error message appeared in the log window:

error_owb2

 

After trying many things unsuccessfully, I asked David Allan for help and he sent me another mimbexec.bat, because apparently there were directory name changes between 11.2.0.3 and 11.2.0.4. This is a temporary fix and a proper one is being worked on.

I substituted the bat file and received another error message, as shown in the next screenshot.

error_importing_owb_externaltable

 

After a while, I realised that the issue OEMM reported was because I was using an external table in one of the mappings. I changed it for a regular table and the import process worked well. This has been reported as a bug and a solution is being worked on. I really want to thank David for all his invaluable help on this.

This is how the OWB model looks after the metadata import.

owb_model

The last model that we need to create is the one based on OBIEE. There are different import bridges depending on the metadata that we need to import from OBIEE: Oracle Business Intelligence (OBI) Server, Oracle Business Intelligence (OBI) Enterprise Edition and Oracle Business Intelligence (OBI) Answers.

The OBI Server import bridge needs the OBI repository in XML format as a parameter, and the resulting model will contain all the objects defined in the three layers of the repository (Presentation, Business Model, Physical) as well as the repository connections, the variables and the initialisation blocks defined in the OBI repository.

oemm_biserver_comb

To use the OBI Enterprise Edition import bridge, we need to set the login user and password to connect to OBIEE (usually weblogic or a user with admin privileges) and the repository file in XML format, and we can also filter the number of reports retrieved from the OBI Presentation Server.

There are a couple of interesting optional settings. One optimises the import of large models: if set to true, it does not return some objects such as joins, relationships, logical foreign keys, etc., so that less memory is consumed at run time. The other sets whether we want an incremental import that picks up only the changes in the source, or a full import every time.

oemm_obiee_comb

The last import bridge to use with OBI is OBI Answers, which imports the content of a particular analysis or KPI report. This bridge needs the specific analysis in XML format.

oemm_bi_analysis_comb

 

About models, there are a couple of additional things to take note of. First, if you want to see the configuration details, you need to right-click the model and choose the Settings option from the pop-up menu. If you want to open the model to see the objects it contains, double-click on it.

Another thing: for every parameter in a model you will find very detailed help on the right of the Import Setup tab, and if you click on the name of the Import Bridge in the same tab, you get the documentation for that particular bridge, which I find very useful.

There are two more tabs in the folder and model definition that we won't use in this example but will discuss in future posts: one for security and another for executing scripts when an event happens to the object. Models also have an additional Import Schedule tab, used to schedule the harvesting process.

Relate the models

Once we have defined our models, we need to relate them and validate their relationships. The automated process of relating these models through validation is named stitching. In order to do that, we must first create a Configuration. A configuration in OEMM is a collection of models and other objects, like mappings, glossaries, etc., that are related in some way.

According to the online documentation we need to consider a configuration as any of these options:

  • Repository workspace: a collection of Repository Objects to be analyzed together (search, browse, reports, etc.) as a technical scope, or business area under the same access permission scope.
  • Enterprise architecture – a collection of data store Models (ODS, data staging areas, data warehouses, data marts, etc.) and data process Models (ETL/DI, and BI) connected together through data flow stitching.
  • Design workflow – a collection of conceptual, logical and physical Models connected (semantically stitched) together through semantic mappings modeling the design process.

To create a Configuration, just right-click on a selected folder or the repository entry and choose New> Configuration. Enter a name for the configuration and press the Create button.

oemm_new_config

The configuration is opened, and you need to drag the models that you want to be stitched into this configuration, as shown in the following screenshot.

oemm_models_config

As you drag and drop your models, you can see that some of them show a warning icon after you include them in the configuration; that is because we need to connect those models with the appropriate source of data.

To do that, select the model in the configuration and press Edit Connection. Choose the corresponding store for each connection and press OK.

oemm_edit_conn_config

oemm_conn_edit1


After you finish with all the models, press the Validate button to start stitching, or relating, them.

oemm_validate_config

In most cases, OEMM can assign the corresponding default schema for each of the connections in the model. If in some cases it cannot, as with OWB, you need to do it manually.

oemm_conn_config_owb

In the following image you will see all the models validated. For this example, I've created four database models: one that contains the source (the transactional system), one for the staging schema, and another two that contain different data warehouses. There are also an ODI model, an OWB model and an OBIEE model.

oemm_models_validated

You can also see the relationships between the models that belong to a configuration in a graphical view in the Architecture Diagram tab of the Configuration. If the diagram looks a little messy, you can press the Edit button and then Layout to tidy the way the components are shown.

oemm_conf_arch_diag

In summary, we created and harvested (reverse-engineered) the models and then related, or stitched, them so that we can analyse them together. In the next post, we will see some interesting things that we can do with configurations and models, like tracing data lineage and data impact.

Categories: BI & Warehousing

COUNT STOPKEY – NVL

Dominic Brooks - Thu, 2015-02-26 04:50

Yesterday I mentioned issues with a recursive delete operation on statistics history.

This is a quick illustration of the last points I made on that post regarding the lack of a COUNT STOPKEY optimisation because of the use of NVL.

COUNT STOPKEY is an optimisation which allows processing to stop once the target number of rows has been reached.

For example:

create table t1 as select * from dba_objects;

alter session set statistics_level = all;

var rn number
exec :rn := 10;

select *
from   t1
where  rownum <= :rn
and    mod(object_id,5) = 0;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

Plan hash value: 3836375644

-------------------------------------------------------------------------------------
| Id  | Operation          | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |      1 |        |     10 |00:00:00.01 |       4 | 
|*  1 |  COUNT STOPKEY     |      |      1 |        |     10 |00:00:00.01 |       4 |
|*  2 |   TABLE ACCESS FULL| T1   |      1 |  26148 |     10 |00:00:00.01 |       4 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(ROWNUM<=:RN)
   2 - filter(MOD("OBJECT_ID",5)=0) 

COUNT STOPKEY kicks in after we have fetched the relevant rows and stops any unnecessary further execution – note in particular A-Rows & Buffers for STEP 2

However, if we use NVL around our ROWNUM limitation, then this optimisation is prevented.

var rn number
exec :rn := 10;

select *
from   t1
where  rownum <= nvl(:rn,rownum)
and    mod(object_id,5) = 0;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

Plan hash value: 624922415

--------------------------------------------------------------------------------------
| Id  | Operation           | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |      1 |        |     10 |00:00:00.09 |    2310 |
|   1 |  COUNT              |      |      1 |        |     10 |00:00:00.09 |    2310 |
|*  2 |   FILTER            |      |      1 |        |     10 |00:00:00.09 |    2310 |
|*  3 |    TABLE ACCESS FULL| T1   |      1 |  26148 |  29630 |00:00:00.08 |    2310 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(ROWNUM<=NVL(:RN,ROWNUM))
   3 - filter(MOD("OBJECT_ID",5)=0)
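
If the NVL is only there to mean "no limit when the bind is null", one way to keep the STOPKEY optimisation is to split the two cases into separate branches. The rewrite below is my own sketch, not something from the original statement, and its exact semantics would need checking against the real requirement:

select *
from   t1
where  rownum <= :rn
and    mod(object_id,5) = 0
union all
select *
from   t1
where  :rn is null
and    mod(object_id,5) = 0;

With a non-null :rn the second branch should be skipped at run time by a FILTER on :rn IS NULL while the first branch gets its COUNT STOPKEY back; with a null :rn the first branch returns no rows and the second returns everything.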


Meet the new face of bank robbery

Chris Foot - Thu, 2015-02-26 02:43

Bandits equipped with revolvers and bandoleers aren't that big of a threat compared to a small army of cybercriminals armed with laptops. Why go through the trouble of physically robbing a bank when you can hack into its databases?

Once a hacker successfully infiltrates a financial institution's database, a wealth of information is at his or her disposal. They're after customer data, enabling them to siphon money from checking and savings accounts, and use other data that could prove a sustainable source of income.

The work of groups
Much like in the Old West, the smart perpetrators aren't working alone. The more people you have on your side, the better chance you have of winning. At the Kaspersky Security Analyst Summit in Cancun, Mexico, which took place earlier this month, attendees unmasked Carbanak, a cybercrime gang with members located across Russia, Ukraine, other parts of Europe and China.

Carbanak has been responsible for pilfering approximately $1 billion over two years from 100 banks from Australia to the United Kingdom. The organization employs spear-phishing attacks that zero in on bank employees, and also favors using remote Trojan backdoors that spy on infected users, steal data and provide access to infiltrated machines.

Kaspersky Lab's Sergey Golovanov, principal security researcher at the firm's global research and analysis team, noted that one bank reported being hit multiple times by the gang. Its systems were ultimately infected by exploiting unpatched versions of Microsoft Office.

Invested in their work
What shocked InformationWeek's Susan Nunziata so much was that spear-phishing campaigns were so successful. The phishing emails were so well written that employees believed such messages were sent from fellow colleagues. Based on Kaspersky's insights, the Carbanak hackers dedicated an incredible amount of time and resources refining their tactics.

The security research firm's report showed that attackers infected video surveillance equipment to monitor bank clerk behaviors. In addition, Carbanak members used monitoring tools that enabled them to further mimic employee actions. Whenever the gang would successfully infiltrate a bank, it would assess its processes and networks for anywhere between two and four months, making away with an estimated $10 million per hack.

This situation underlines the need for database monitoring tools that can spot peculiar activity. It's only a matter of time before such malicious operations impact financial institutions in the U.S., and it's time they brace themselves.

About RDX
Since its inception in 1994, RDX has helped hundreds of organizations lower database administration support costs while increasing performance and availability. RDX provides 100 percent US-based, 24×7 support for Oracle, Oracle EBS, SQL Server, PostgreSQL, MySQL and DB2 databases as well as operating system support for all major Windows, Unix and Linux offerings.

RDX's highest priority is to safeguard its customers' sensitive data stores, and its expert staff of highly-trained professionals is backed by a monitoring and support infrastructure that has been continuously improved and enhanced throughout its 20-year history.

The post Meet the new face of bank robbery appeared first on Remote DBA Experts.

Additional resource: A-Team Chronicles

Jean-Philippe Pinte - Thu, 2015-02-26 02:31
The A-Team Chronicles site aggregates content produced by members of the A-Team: best practices, tips, advice, etc.

e-Literate TV Preview: Essex County College and changing role of faculty

Michael Feldstein - Wed, 2015-02-25 17:58

By Phil HillMore Posts (291)

As we get closer to the release of the new e-Literate TV series on personalized learning, Michael and I will be posting previews highlighting some of the more interesting segments from the series. When we first talked about the series with its sponsors, the Bill & Melinda Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent.

In this video preview (about 4:18 in duration), we hear from two faculty members who have first-hand experience in using a personalized learning approach as well as a traditional approach to remedial math. We also hear from students on what they are learning about learning. In our case studies so far, the real faculty issue is not that software is being designed to replace faculty, but rather that successful implementation of personalized learning necessarily changes the role of faculty. One of our goals with e-Literate TV is to allow faculty, staff and students to describe direct experiences in their own words. Take a look.

[Embedded video]

Stay tuned for the full episodes to be released on the In The Telling platform[1]. You can follow me (@PhilOnEdTech), Michael (@mfeldstein67), or e-Literate TV (@eLiterateTV) to stay up to date. You can also follow the e-Literate TV YouTube channel. We will also announce the release here on e-Literate.

  1. ITT is our partner in developing this series, providing video production as well as the platform.

The post e-Literate TV Preview: Essex County College and changing role of faculty appeared first on e-Literate.

Understanding vs Resolution – Statistics History Cleanup

Dominic Brooks - Wed, 2015-02-25 13:26

Today I helped resolve a problem quickly but to the detriment of my understanding of exactly what was going on and why.

And as part of the quicker resolution, I had to destroy the evidence which would have helped understand better.

So… I now need to go back and figure it out, if I have time to model it properly, etc.

Here’s what little I know so far.

What happened was that there were a number of application sessions experiencing slowness and contention when executing a recursive stats history cleanup statement.

I verified via ASH that this recursive delete was somehow being called by application ETL code (via the TOP_LEVEL_SQL_ID, PLSQL_ENTRY_OBJECT_ID and USER_ID columns). Four sessions were each running this slow statement and, because of the nature of the statement below, they were also being blocked by mode 6 TX locks from the leading execution.

Version is 11.2.0.3

Statement was sql id 9v9n97qj8z1dg:

delete /*+ dynamic_sampling(4) */ 
from sys.wri$_optstat_histhead_history 
where savtime < :1 
and rownum <= NVL(:2, rownum);

First up, the resolution was quick and easy and, by a quirk of coincidence, came straight from my last post:

https://orastory.wordpress.com/2015/02/16/dbms_stats-purge_stats/

We just got rid of all the statistics history using the magic PURGE_ALL truncate flag.
The history is of limited usefulness day-to-day anyway.
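
For reference, a minimal sketch of that purge, assuming the standard DBMS_STATS API (PURGE_ALL is the special value that makes PURGE_STATS truncate the underlying history tables rather than delete from them row by row):

-- Optional sanity check: how much histogram-header history is there?
select count(*) from sys.wri$_optstat_histhead_history;

-- Purge ALL statistics history (truncate rather than delete)
exec dbms_stats.purge_stats(dbms_stats.purge_all);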

The slowness was holding up ETL jobs which were just trying to calculate stats for their own partition.

I was brought in towards the end, but here's an illustration of the slowness:

SQL Text
------------------------------
delete /*+ dynamic_sampling(4) */ from sys.wri$_optstat_histhead_history where savtime < :1 and rownum <= NVL(:2, rownum)

Global Information
------------------------------
 Status              :  EXECUTING               
 Instance ID         :  1                       
 SQL ID              :  9v9n97qj8z1dg           
 SQL Execution ID    :  16777249                
 Execution Started   :  02/24/2015 19:11:25     
 First Refresh Time  :  02/24/2015 19:13:06     
 Last Refresh Time   :  02/25/2015 09:10:35     
 Duration            :  50351s                  
 Module/Action       :  JDBC Thin Client/-      
 Program             :  JDBC Thin Client        

Binds
========================================================================================================================
| Name | Position |  Type  |                                           Value                                           |
========================================================================================================================
| :2   |        2 | NUMBER | 10000                                                                                     |
========================================================================================================================

Global Stats
===============================================================================================
| Elapsed |   Cpu   |    IO    | Application | Concurrency |  Other   | Buffer | Read | Read  |
| Time(s) | Time(s) | Waits(s) |  Waits(s)   |  Waits(s)   | Waits(s) |  Gets  | Reqs | Bytes |
===============================================================================================
|   50359 |   35199 |     0.16 |       14669 |         254 |      237 |     1G |   63 | 504KB |
===============================================================================================

SQL Plan Monitoring Details (Plan Hash Value=2348801730)
=======================================================================================================================================================================================================
| Id   |            Operation             |             Name              |  Rows   | Cost  |   Time    | Start  | Execs |   Rows   | Read | Read  | Activity |            Activity Detail            |
|      |                                  |                               | (Estim) |       | Active(s) | Active |       | (Actual) | Reqs | Bytes |   (%)    |              (# samples)              |
=======================================================================================================================================================================================================
|    0 | DELETE STATEMENT                 |                               |         |       |     50262 |   +101 |  1423 |        1 |      |       |          |                                       |
|    1 |   DELETE                         | WRI$_OPTSTAT_HISTHEAD_HISTORY |         |       |     50361 |     +2 |  1423 |        1 |      |       |    29.74 | enq: TX - row lock contention (14630) |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | buffer busy waits (231)               |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (82)                              |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (5)                |
|    2 |    COUNT                         |                               |         |       |     50262 |   +101 |  1423 |      14M |      |       |     0.00 | Cpu (1)                               |
|    3 |     FILTER                       |                               |         |       |     50266 |    +97 |  1423 |      14M |      |       |    30.14 | Cpu (15146)                           |
|    4 |      TABLE ACCESS BY INDEX ROWID | WRI$_OPTSTAT_HISTHEAD_HISTORY |      6M | 23218 |     50253 |   +100 |  1423 |       4G |   22 | 176KB |    23.17 | buffer busy waits (18)                |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (11627)                           |
| -> 5 |       INDEX RANGE SCAN           | I_WRI$_OPTSTAT_HH_ST          |    990K |  5827 |     50264 |   +101 |  1423 |       4G |   41 | 328KB |    16.94 | buffer busy waits (3)                 |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | Cpu (8332)                            |
|      |                                  |                               |         |       |           |        |       |          |      |       |          | resmgr:cpu quantum (180)              |
=======================================================================================================================================================================================================
   3 - filter(ROWNUM<=NVL(:2,ROWNUM))
   5 - access("WRI$_OPTSTAT_HISTHEAD_HISTORY"."SYS_NC00024$"<SYS_EXTRACT_UTC(:1))

So, my first thoughts were:

1. I thought that MMON was responsible for purging old stats… clearly not in this case, as there were multiple concurrent application connections purging old data as part of their ETL process (see the retention sketch after this list).

2. The DELETE removes any history older than a parameterised date, the first 10000 rows thereof at a time. There is no connection to the object whose statistics the application ETL jobs were working on.

3. I would expect a COUNT STOPKEY operation but then the NVL predicate avoids this optimisation.

4. UPDATE: The Real-Time SQL Monitoring output is also reporting the metrics for 1423 executions of this statement. The clue is in the 14M rows reported in the COUNT and FILTER operations: 1423 * 10000 rows = 14M. But I requested the output for what I thought was a single execution id – 16777249 – strange… a bulk/batched delete?
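
As a side note referenced in point 1, a minimal sketch for checking (and, if appropriate, reducing) the automatic history retention that the maintenance purge works to, using the documented DBMS_STATS functions; the 7-day value below is just an example:

-- How long statistics history is kept (default is 31 days)
select dbms_stats.get_stats_history_retention from dual;

-- Oldest history timestamp still available
select dbms_stats.get_stats_history_availability from dual;

-- Reduce the retention so there is less history to purge (example value)
exec dbms_stats.alter_stats_history_retention(7);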

More questions than answers…


Forecast for SXSW: Mostly Cloudy with 100% Chance of Innovative Ideas

Linda Fishman Hoyle - Wed, 2015-02-25 12:23

South by Southwest (SXSW) is just around the corner, and we can’t wait for the five nerdiest days on the calendar. Thirty thousand of the top influencers from advertising, technology, marketing, and social media will converge on Austin, Texas, to swap ideas and build relationships.

Oracle’s lineup this year is all about the cloud. We have a lot of clouds at Oracle, and each one is designed to make our customers' lives easier. SXSW is a great opportunity to highlight our integrated solutions to these influencers. As Mark Hurd said, “We integrate the products for the customer so the customer doesn’t have to do it. We optimize the software for the solution, and it’s delivered us strong growth."

The best part? We're only getting better at what we do in the cloud. We have a great lineup of customer speakers sharing our stages to talk about their Oracle journeys plus a fun cocktail event again this year. So check out what we have planned for you at SXSW.

Friday, March 13th: A Full Day of Oracle Intelligence

Come by the Radisson Hotel at 111 Cesar Chavez and Congress to see a great day of presentations with Oracle customers. Also check out our Social Intelligence Center to follow all of the #SXSW online buzz.

Saturday, March 14 and Sunday, March 15:  The Accelerator at the SXSW Startup Village

For the second year in a row, we are the premier sponsor of the Accelerator, a competition of the top 48 most innovative ideas, technologies, products, and/or services. This competition will take place on Saturday and Sunday through a series of live demonstrations at the SXSW Startup Village. Don’t miss this!

Sunday, March 15th: More Great Sessions

Tuesday, March 17th: One Final Session Before Heading Home

If you aren’t able to go to Austin, follow the conversation on social:

  • Oracle (@Oracle)
  • Oracle Social Cloud (@OracleSocial)
  • Oracle Marketing Cloud (@OracleMktgCloud)
  • Oracle Data Cloud (@OracleDataCloud)
  • Oracle Customer Experience (@OracleCX)

Virtual CPUs with Google Compute Engine

Pythian Group - Wed, 2015-02-25 12:12

Continuing on my series of virtual CPU behavior in Amazon Web Services, Amazon Web Service HVM instances, and Microsoft Azure, I’m taking a look at Google Compute Engine (GCE). GCE is a relative newcomer to the public cloud world, having become generally available in December 2013. It does have some interesting features, including transparent maintenance through live migration, and automatic sustained-use discounts without upfront commitments.

Unlike Amazon or Microsoft, Google is very upfront about their vCPU definition.

For the n1 series of machine types, a virtual CPU is implemented as a single hyperthread on a 2.6GHz Intel Sandy Bridge Xeon or Intel Ivy Bridge Xeon (or newer) processor. This means that the n1-standard-2 machine type will see a whole physical core.

I still believe calling such a hyperthread a “virtual CPU” is misleading. When creating a virtual machine in a non-cloud VM platform, 1 virtual CPU = 1 physical core. Plain and simple. But when using a cloud platform, I need 2 virtual CPUs to get that same physical core.

[Image: CPU comparison]

Anyways, off to run some CPU tests. n1-standard-4 is a close match to the m3.xlarge instances previously tested, so I’ll try that.

Getting set up on Google Compute Engine

I already signed up for Google Compute Engine’s free trial and created a project I’m calling marc-cputest. Then I installed the gcloud compute command-line tools.

[marc@quartz ~]$ gcloud auth login --no-launch-browser
Go to the following link in your browser:

https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%(redacted)&access_type=offline

Enter verification code: (redacted)
Saved Application Default Credentials.

You are now logged in as [fielding@pythian.com].
Your current project is [None].  You can change this setting by running:
  $ gcloud config set project PROJECT
[marc@quartz ~]$ gcloud config set project marc-cputest
[marc@quartz ~]$ gcloud config set compute/zone us-central1-a
[marc@quartz ~]$ gcloud compute instances create cpu-test-n4 --image centos-6 --machine-type "n1-standard-4" --zone us-central1-a
Created [https://www.googleapis.com/compute/v1/projects/marc-cputest/zones/us-central1-a/instances/cpu-test-n4].
NAME        ZONE          MACHINE_TYPE  INTERNAL_IP    EXTERNAL_IP   STATUS
cpu-test-n4 us-central1-a n1-standard-4 10.240.222.194 104.154.75.96 RUNNING
[marc@quartz ~]$ gcloud compute ssh cpu-test-n4
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/marc/.ssh/google_compute_engine.
Your public key has been saved in /home/marc/.ssh/google_compute_engine.pub.
The key fingerprint is:
(redacted)
Updated [https://www.googleapis.com/compute/v1/projects/marc-cputest].
Warning: Permanently added '104.154.75.96' (RSA) to the list of known hosts.
Warning: Permanently added '104.154.75.96' (RSA) to the list of known hosts.

OK, instance all set and connected. As a CentOS 6 image it doesn’t allow SSH root logins by default, so attempting to set up a gcloud environment as a root user will get you “permission denied” errors on SSH. Serves me right for trying to run these tools as root in the first place :-).

Looking around

Checking what they got us:

[marc@cpu-test-n4 ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
processor       : 1
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
processor       : 2
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2
processor       : 3
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2

Google has redacted the exact CPU model numbers, but has clearly marked this as a 2-core system with core IDs 0 and 1.

The single-CPU case
[marc@cpu-test-n4 ~]$ taskset -pc 0 $$
pid 1558's current affinity list: 0-3
pid 1558's new affinity list: 0
[marc@cpu-test-n4 ~]$  dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 14.3427 s, 151 MB/s
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 29.3081 s, 74.1 MB/s
2170552320 bytes (2.2 GB) copied, 29.3065 s, 74.1 MB/s

We get a nice boost in raw CPU numbers compared to the 120 MB/s I saw in AWS. With two processes sharing this CPU, we see a tiny bit less than half the throughput.

Sharing the cores (or trying to)
[marc@cpu-test-n4 ~]$ taskset -pc 0,1 $$
pid 1558's current affinity list: 0
pid 1558's new affinity list: 0,1
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
[1] 1803
[2] 1805
[marc@cpu-test-n4 ~]$ 2170552320 bytes (2.2 GB) copied, 14.6959 s, 148 MB/s
2170552320 bytes (2.2 GB) copied, 14.7139 s, 148 MB/s

This is interesting; we see almost full-speed throughput even though processors 0 and 1 are said to share a core. With processors 0 and 2 the situation is the same.

[marc@cpu-test-n4 ~]$ taskset -pc 0,2 $$
pid 1558's current affinity list: 0,1
pid 1558's new affinity list: 0,2
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
[1] 1830
[2] 1833
[marc@cpu-test-n4 ~]$ 2170552320 bytes (2.2 GB) copied, 14.6683 s, 148 MB/s
2170552320 bytes (2.2 GB) copied, 14.6692 s, 148 MB/s

Is the CPU scheduler ignoring my taskset commands? I ran mpstat with 2-second samples during the test to see actual CPU usage:

[marc@cpu-test-n4 ~]$ mpstat -P ALL 2
...
06:08:44 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
06:08:46 PM  all   46.31    0.00    3.75    0.00    0.00    0.00    0.00    0.00   49.94
06:08:46 PM    0   93.50    0.00    6.50    0.00    0.00    0.00    0.00    0.00    0.00
06:08:46 PM    1   92.00    0.00    8.00    0.00    0.00    0.00    0.00    0.00    0.00
06:08:46 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
06:08:46 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
...
06:08:52 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
06:08:54 PM  all   46.75    0.00    3.25    0.00    0.00    0.00    0.00    0.00   50.00
06:08:54 PM    0   93.47    0.00    6.53    0.00    0.00    0.00    0.00    0.00    0.00
06:08:54 PM    1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
06:08:54 PM    2   93.50    0.00    6.50    0.00    0.00    0.00    0.00    0.00    0.00
06:08:54 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

So the taskset commands are working: when we ask for CPUs 0 and 1, we are getting them, but the throughput shows that cores aren’t actually being shared. It means that the CPUs in the virtual machine are not statically bound to hardware threads as seen under AWS. I’d call it a win, as it delivers more consistent performance even if the guest operating system is forced to make poor CPU scheduling decisions, as in this case.

[marc@cpu-test-n4 ~]$ taskset -pc 0-3 $$
pid 1558's current affinity list: 0,2
pid 1558's new affinity list: 0-3
[marc@cpu-test-n4 ~]$ for i in {1..4}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 22.9823 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 22.9914 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 22.9915 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 23.1333 s, 93.8 MB/s

This is more the throughput we would expect with two shared cores.

Lessons learned

Over the course of these tests, I’ve discovered a few things:

  • Although, like its competitors, Google implements virtual CPUs as hyperthreads that share physical cores, it is very upfront about this behavior.
  • Actual throughput for a simple gzip workload is excellent.
  • Google Compute Engine has an abstraction layer in front of CPUs that dynamically schedules tasks between CPU threads, in addition to the regular scheduler in the virtual machine. In my testing, it allocates tasks efficiently across CPU cores, even when the OS scheduler is configured suboptimally.
Categories: DBA Blogs

Exadata X5 – A Practical Point of View of the New Hardware and Licensing

Pythian Group - Wed, 2015-02-25 12:10

Oracle recently announced its latest iteration of Exadata – X5-2. It includes a refresh of the hardware to the most recent Xeon® E5-2699 v3 CPUs. These new CPUs boost the total core count in a full rack to 288, which is higher than the current 8-socket “big machine” version, the X4-8, with only 240 cores.

But the most exciting part is the all-flash version of Exadata. In the previous generation – X4 – Oracle had to switch from 15K drives to 10K drives in order to boost capacity from 600 GB to 1200 GB per hard drive and keep disk space higher than flash cache size. At the time of the X4 announcement, we were already wondering why Oracle was still offering high-speed disks instead of switching to all flash, and now we know why: that type of high-performance flash wasn’t quite ready.

Maintaining high IO rates over long periods of time required some changes to the ILOM in order to maintain cooling fan speed based on many individual temperature sensors inside the flash cards (details). Removing the SAS controller and using the new NVMe connectivity resulted in much higher bandwidth per drive – 3.2 GBytes/sec vs. the old 1.2 GBytes/sec SAS.

With temperature and bandwidth sorted out, we now have a super-high-performance option (EF – Extreme Flash) for Exadata, which delivers a stunning 263 GB/sec uncompressed scan speed in a full rack. The performance difference between the High Capacity and the Extreme Flash options is now much larger. The high-performance option in Exadata X5 is finally viable; in Exadata X4 it made so little difference that it was pointless.

[Image: X4 vs X5 comparison]

The one thing I wonder about with the X5 announcement is why the X5-2 storage server still uses the very old and quite outdated 8-core CPUs. I’ve seen many cases where a Smart Scan on an HCC table is CPU bound on the storage server even when reading from spinning disk. I am going to guess that there’s some old CPU inventory to clean up. But that may not end up being such a problem (see the “all columnar” flash cache feature).

But above all, the most important change is the incremental licensing option. With 36 cores per server, even the 1/8th rack configuration ran into multiple millions of dollars in licenses, and in many cases that was too much for the problem at hand.

The new smallest configuration is:

  • 1/8th rack, with 2 compute nodes
  • 8 cores enabled per compute node (16 total)
  • 256 GB RAM per node (upgradable to 768 GB per node)
  • 3 storage servers with only half the cores, disks and flash enabled

Then you can license additional cores as you need them, two cores at a time, similar to how the ODA licensing option works. You cannot reduce the number of licensed cores.
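
As a rough illustration, using only the core counts above and assuming that, before this change, every core present in a compute node had to be licensed: a two-node 1/8th rack previously meant 2 x 36 = 72 licensable compute cores, whereas the new minimum is 2 x 8 = 16 cores, with growth available in 2-core increments from there.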

The licensing rule changes go even further. Now you can mix and match compute and storage servers to create even more extreme options. Some non-standard examples:

  • Extreme Memory – more compute nodes with max RAM, reduced licensed cores
  • Extreme Storage – replace compute node with storage nodes, reduced licensed cores

[Image: X5 custom configurations (link to video)]

In conclusion, the Oracle Exadata X5 configuration options and the changes they bring to licensing allow an architect to craft a system that will meet any need and allow for easy, small-step increments in the future, potentially without any hardware changes.

There are many more exciting changes in Oracle 12c, Exadata X5 and the new storage server software which I may cover in the future as I explore them in detail.

Categories: DBA Blogs

Log Buffer #411, A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2015-02-25 12:00

This Log Buffer Edition brings you some blog posts from Oracle, SQL Server and MySQL.

Oracle:

Suppose you have a global zone with multiple zpools that you would like to convert into a native zone.

The digital revolution is creating abundance in almost every industry—turning spare bedrooms into hotel rooms, low-occupancy commuter vehicles into taxi services, and free time into freelance time.

Every time I attend a conference, the Twitter traffic about said conference is obviously higher.  It starts a couple weeks or even months before, builds steadily as the conference approaches, and then hits a crescendo during the conference.

Calling All WebLogic Users: Please Help Us Improve WebLogic Documentation!

Top Two Cloud Security Concerns: Data Breaches and Data Loss

SQL Server:

This article describes a way to identify the user who truncated the table & how you can recover the data.

When SQL Server 2014 was released, it included Hekaton, Microsoft’s much talked about memory-optimized engine that brings In-Memory OLTP into play.

Learn how you can easily spread your backup across multiple files.

Daniel Calbimonte has written a code comparison for MariaDB vs. SQL Server as it pertains to how to comment, how to create functions and procedures with parameters, how to store query results in a text file, how to show the top n rows in a query, how to use loops, and more.

The article shows a simple way we managed to schedule index rebuilds and reorgs for a SQL instance with 106 databases used by one application, using a scheduled job.

MySQL:

How to setup a PXC cluster with GTIDs (and have async slaves replicating from it!)

vCloud Air and business-critical MySQL

MySQL Dumping and Reloading the InnoDB Buffer Pool

How to benchmark MongoDB

MySQL Server on SUSE 12

Categories: DBA Blogs

SQL Server 2014 Cumulative Update 6

Pythian Group - Wed, 2015-02-25 11:59

Hello everyone,

Just a quick note to let you know that this week, while most of North America was enjoying a break, Microsoft released the 6th cumulative update for SQL Server 2014. This update contains fixes for 64 different issues, distributed as follows:

[Image: SQL Server 2014 Cumulative Update 6 fix distribution]

As the name implies, this is a cumulative update, which means it is not necessary to install the previous five if you don’t already have them. Please remember to test any update thoroughly before applying it to production.
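
As a quick aside, a minimal sketch for confirming which build an instance is on before and after applying the CU, using nothing beyond the standard SERVERPROPERTY function:

-- Report the current SQL Server build, patch level and edition
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('Edition')        AS Edition;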

The cumulative update and the full release notes can be found here: https://support.microsoft.com/kb/3031047/en-us?wa=wsignin1.0

Categories: DBA Blogs