
Feed aggregator

xml to csv in powershell

Laurent Schneider - Wed, 2014-11-12 11:59

PowerShell is very strong with XML; converting an XML document to something flat like a CSV file is incredibly powerful.

Let’s take a file called emp.xml


To get all employees, it is awfully easy


Now I actually want to have each employee together with his department. I create an object for each department add the ename and the deptno

$d=([xml](gc emp.xml)).EMPTABLE.DEPT | % { 
  foreach ($i in $_.EMPLIST.ENAME) {
    $o = New-Object Object
    Add-Member -InputObject $o -MemberType NoteProperty -Name DEPTNO -Value $_.DEPTNO
    Add-Member -InputObject $o -MemberType NoteProperty -Name ENAME -Value $i
    $o
  }
}
$d

DEPTNO     ENAME
------     -----
10         CLARK
10         MILLER
10         KING
20         SMITH
20         FORD
20         ADAMS
20         SCOTT
20         JONES
30         ALLEN
30         WARD
30         MARTIN
30         BLAKE
30         TURNER
30         JAMES

This can be converted to multiple formats. For example, $d | ConvertTo-Json produces:

[
    {
        "DEPTNO":  "10",
        "ENAME":  "CLARK"
    },
    {
        "DEPTNO":  "10",
        "ENAME":  "MILLER"
    },
    {
        "DEPTNO":  "10",
        "ENAME":  "KING"
    },
    {
        "DEPTNO":  "20",
        "ENAME":  "SMITH"
    },
    {
        "DEPTNO":  "20",
        "ENAME":  "FORD"
    },
    {
        "DEPTNO":  "20",
        "ENAME":  "ADAMS"
    },
    {
        "DEPTNO":  "20",
        "ENAME":  "SCOTT"
    },
    {
        "DEPTNO":  "20",
        "ENAME":  "JONES"
    },
    {
        "DEPTNO":  "30",
        "ENAME":  "ALLEN"
    },
    {
        "DEPTNO":  "30",
        "ENAME":  "WARD"
    },
    {
        "DEPTNO":  "30",
        "ENAME":  "MARTIN"
    },
    {
        "DEPTNO":  "30",
        "ENAME":  "BLAKE"
    },
    {
        "DEPTNO":  "30",
        "ENAME":  "TURNER"
    },
    {
        "DEPTNO":  "30",
        "ENAME":  "JAMES"
    }
]

Or even to xml with ($d|ConvertTo-XML).OuterXml

It is so lightning fast that you can process pretty large files (millions of lines) in just a few seconds.

#ChevyGuy Response is Authentic, Human, and Brilliant

Linda Fishman Hoyle - Wed, 2014-11-12 11:34

A Guest Post by Meg Bear, Group Vice President, Oracle Social Cloud

For those who didn’t hear, Chevrolet executive Rikk Wilde had a little trouble during his live television presentation of the World Series’ Most Valuable Player award to San Francisco Giants’ Madison Bumgarner. He nervously stumbled through his speech, sweating and looking down at his note cards as he mumbled about how the truck had “technology and stuff.”

It was awkward and painful to watch. Immediately social media exploded with jokes aimed at Chevrolet and the executive. #ChevyGuy and #technologyandstuff were trending. You couldn’t help but feel awful for #ChevyGuy, aka Rikk Wilde. And you certainly wondered how all this would play out.

Well, GM and Chevrolet played it brilliantly. They didn’t scold or fire Rikk Wilde, #ChevyGuy. They supported him, embraced the harmless gaffe, and turned what could have been an embarrassment into social media gold.

It’s a reminder that social is equal parts technology and humanity.

GM reacted swiftly, not only via social media but also with advertisements placed online and in newspapers, like USA Today, the following day. The response has garnered more than $2 million in free publicity, according to published reports. The website saw a 70 percent hike in visitors, according to a company spokesperson.

That’s what we call #winning.

Well done, GM and Chevrolet. Well done. We are thrilled to have such an innovative, smart, and authentic partner.

If you missed it, you can check out the story in the Detroit Free Press.

Parallel Fun

Jonathan Lewis - Wed, 2014-11-12 10:42

As I write, there’s an ongoing thread on Oracle-L that started with the (paraphrased) question: “I’ve got this query that returns 7 million rows; when I change it to ‘select count(*)’ it returns in 4 seconds but when we display the full result set on screen it takes hours, and every second or two the screen pauses; how do I make it go faster.”

The general rapid response was: “You shouldn’t be running 7M rows to a screen – the time is the time for the network traffic and display.”

The first part of the statement is right – the second part is quite likely to be wrong and there’s a very strong hint in the question that makes me say that, it’s the “pauses every second or two”. Of course we don’t know what the OP isn’t telling us, and we don’t know how accurate he is in what he is telling us, so any ideas we have may be completely wrong. For example, we haven’t been given any idea of how long a “pause” is, we don’t really know how accurate that “second or two” might be and whether “every” is an exaggeration, and maybe the query is returning CLOB columns (and that could make a big difference to what you can do to improve performance).

If we take the statement at face value, though, there is one very obvious inference: although some of the time will be due to network traffic time, most of the time is probably due to Oracle doing something expensive for a significant fraction of the rows returned. The pattern of activity probably looks like this:

  • client: call server to fetch next array of rows
  • server: spend some time populating array  — this is where the client sees a pause
  • client: display result array
  • client: call server to fetch next array of rows
  •  etc…
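
That loop can be modelled in a few lines. This is a toy sketch with invented numbers, just to show why a fixed per-row server cost produces a regular stop/go rhythm rather than random stalls:

```python
# Toy model of array fetching: the client asks for `arraysize` rows per
# round trip, and the server pays a fixed per-row cost to populate each
# batch. All numbers are invented for illustration.
ARRAYSIZE = 500
PER_ROW_COST = 0.002      # hypothetical seconds of server work per row
TOTAL_ROWS = 2000

def fetch_batches(total_rows, arraysize):
    """Yield (rows_in_batch, server_seconds) for each round trip."""
    fetched = 0
    while fetched < total_rows:
        batch = min(arraysize, total_rows - fetched)
        fetched += batch
        # This is the pause the client sees before each batch is displayed.
        yield batch, batch * PER_ROW_COST

pauses = [t for _, t in fetch_batches(TOTAL_ROWS, ARRAYSIZE)]
print(pauses)   # every pause is (roughly) the same length: a regular rhythm
```

Because the per-batch cost is constant, the pauses are evenly spaced, which is exactly the "every second or two" pattern reported by the OP and quite unlike random network congestion.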

Here’s a trivial example:

connect / as sysdba
set arraysize 500
set pagesize 40

select
        o1.spare1 ,
        (
        select  max((ctime))
        from    obj$    o2
        where   o2.owner# = o1.owner#
        and     o2.obj# < o1.obj#
        ) ct
from obj$ o1
;

On my laptop, running an instance with about 80,000 rows in obj$ (a lot of them owned by SYS), I can count seconds and find that I alternate (approximately) between one second watching results scroll up the screen and one second waiting as the server generates the next 500 rows.

Of course it’s possible to argue that the problem really is the network and nothing but the network, struggling to cope with the never-ending stream of little packets produced by 7M rows. Could there be a choke point that causes the data to stop and start with great regularity? Maybe – but previous experience says probably not. I have experienced bad network problems in the past, but when they’ve occurred I’ve always observed extremely random stop/go behaviour. The regularity implied in the question makes an Oracle-based problem seem far more likely.

Conveniently a couple of people asked for more clues – like the query text and the execution plan; even more conveniently the OP supplied the answers in this response. Since the email format makes them a little hard to read I’ve copied them here:

SELECT  bunch of stuff.....,

                ORDER BY sr1.RELATED_SID), '//text()' ) ) , ',' )
                FROM    service_relationship sr1
                WHERE   sr1.SID                    = slv.SID
                        AND sr1.RELATIONSHIP_LEVEL = '1'
                GROUP BY sr1.SID
                ORDER BY sr2.RELATED_SID), '//text()' ) ) , ',' )
                FROM    service_relationship sr2
                WHERE   sr2.SID                    = slv.SID
                        AND sr2.RELATIONSHIP_LEVEL = '2'
                GROUP BY sr2.SID
                ORDER BY sr3.RELATED_SID), '//text()' ) ) , ',' )
                FROM    service_relationship sr3
                WHERE   sr3.SID                    = slv.SID
                        AND sr3.RELATIONSHIP_LEVEL = '3'
                GROUP BY sr3.SID
                ORDER BY sr4.RELATED_SID), '//text()' ) ) , ',' )
                FROM    service_relationship sr4
                WHERE   sr4.SID                    = slv.SID
                        AND sr4.RELATIONSHIP_LEVEL = '4'
                GROUP BY sr4.SID
                ORDER BY sr5.RELATED_SID), '//text()' ) ) , ',' )
                FROM    service_relationship sr5
                WHERE   sr5.SID                    = slv.SID
                        AND sr5.RELATIONSHIP_LEVEL = '5'
                GROUP BY sr5.SID
FROM    service_lookup slv
        LEFT JOIN service_location sl
        ON      sl.service_location_id = slv.service_location_id;

Plan hash value: 1570133209

| Id  | Operation                        | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
|   0 | SELECT STATEMENT                 |                      |  7331K|  5593M|  1877   (5)| 00:00:01 |        |      |            |
|   1 |  SORT GROUP BY                   |                      |     1 |    22 |   368   (6)| 00:00:01 |        |      |            |
|   2 |   PX COORDINATOR                 |                      |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)           | :TQ10000             |    25 |   550 |   368   (6)| 00:00:01 |  Q1,00 | P->S | QC (RAND)  |
|   4 |     PX BLOCK ITERATOR            |                      |    25 |   550 |   368   (6)| 00:00:01 |  Q1,00 | PCWC |            |
|*  5 |      TABLE ACCESS STORAGE FULL   | SERVICE_RELATIONSHIP |    25 |   550 |   368   (6)| 00:00:01 |  Q1,00 | PCWP |            |
|   6 |  SORT GROUP BY                   |                      |     1 |    22 |   368   (6)| 00:00:01 |        |      |            |
|   7 |   PX COORDINATOR                 |                      |       |       |            |          |        |      |            |
|   8 |    PX SEND QC (RANDOM)           | :TQ20000             |    25 |   550 |   368   (6)| 00:00:01 |  Q2,00 | P->S | QC (RAND)  |
|   9 |     PX BLOCK ITERATOR            |                      |    25 |   550 |   368   (6)| 00:00:01 |  Q2,00 | PCWC |            |
|* 10 |      TABLE ACCESS STORAGE FULL   | SERVICE_RELATIONSHIP |    25 |   550 |   368   (6)| 00:00:01 |  Q2,00 | PCWP |            |
|  11 |  SORT GROUP BY                   |                      |     1 |    22 |   368   (6)| 00:00:01 |        |      |            |
|  12 |   PX COORDINATOR                 |                      |       |       |            |          |        |      |            |
|  13 |    PX SEND QC (RANDOM)           | :TQ30000             |    25 |   550 |   368   (6)| 00:00:01 |  Q3,00 | P->S | QC (RAND)  |
|  14 |     PX BLOCK ITERATOR            |                      |    25 |   550 |   368   (6)| 00:00:01 |  Q3,00 | PCWC |            |
|* 15 |      TABLE ACCESS STORAGE FULL   | SERVICE_RELATIONSHIP |    25 |   550 |   368   (6)| 00:00:01 |  Q3,00 | PCWP |            |
|  16 |  SORT GROUP BY                   |                      |     1 |    22 |   368   (6)| 00:00:01 |        |      |            |
|  17 |   PX COORDINATOR                 |                      |       |       |            |          |        |      |            |
|  18 |    PX SEND QC (RANDOM)           | :TQ40000             |    25 |   550 |   368   (6)| 00:00:01 |  Q4,00 | P->S | QC (RAND)  |
|  19 |     PX BLOCK ITERATOR            |                      |    25 |   550 |   368   (6)| 00:00:01 |  Q4,00 | PCWC |            |
|* 20 |      TABLE ACCESS STORAGE FULL   | SERVICE_RELATIONSHIP |    25 |   550 |   368   (6)| 00:00:01 |  Q4,00 | PCWP |            |
|  21 |  SORT GROUP BY                   |                      |     1 |    22 |   368   (6)| 00:00:01 |        |      |            |
|  22 |   PX COORDINATOR                 |                      |       |       |            |          |        |      |            |
|  23 |    PX SEND QC (RANDOM)           | :TQ50000             |    25 |   550 |   368   (6)| 00:00:01 |  Q5,00 | P->S | QC (RAND)  |
|  24 |     PX BLOCK ITERATOR            |                      |    25 |   550 |   368   (6)| 00:00:01 |  Q5,00 | PCWC |            |
|* 25 |      TABLE ACCESS STORAGE FULL   | SERVICE_RELATIONSHIP |    25 |   550 |   368   (6)| 00:00:01 |  Q5,00 | PCWP |            |
|  26 |  PX COORDINATOR                  |                      |       |       |            |          |        |      |            |
|  27 |   PX SEND QC (RANDOM)            | :TQ60002             |  7331K|  5593M|  1877   (5)| 00:00:01 |  Q6,02 | P->S | QC (RAND)  |
|* 28 |    HASH JOIN RIGHT OUTER BUFFERED|                      |  7331K|  5593M|  1877   (5)| 00:00:01 |  Q6,02 | PCWP |            |
|  29 |     PX RECEIVE                   |                      |  3175K|   920M|   366   (3)| 00:00:01 |  Q6,02 | PCWP |            |
|  30 |      PX SEND HASH                | :TQ60000             |  3175K|   920M|   366   (3)| 00:00:01 |  Q6,00 | P->P | HASH       |
|  31 |       PX BLOCK ITERATOR          |                      |  3175K|   920M|   366   (3)| 00:00:01 |  Q6,00 | PCWC |            |
|  32 |        TABLE ACCESS STORAGE FULL | SERVICE_LOCATION     |  3175K|   920M|   366   (3)| 00:00:01 |  Q6,00 | PCWP |            |
|  33 |     PX RECEIVE                   |                      |  7331K|  3467M|  1507   (5)| 00:00:01 |  Q6,02 | PCWP |            |
|  34 |      PX SEND HASH                | :TQ60001             |  7331K|  3467M|  1507   (5)| 00:00:01 |  Q6,01 | P->P | HASH       |
|  35 |       PX BLOCK ITERATOR          |                      |  7331K|  3467M|  1507   (5)| 00:00:01 |  Q6,01 | PCWC |            |
|  36 |        TABLE ACCESS STORAGE FULL | SERVICE_LOOKUP       |  7331K|  3467M|  1507   (5)| 00:00:01 |  Q6,01 | PCWP |            |

We have a simple two-table outer join, and five scalar subqueries in the select list. (Not being very familiar with the various XML calls I had no idea of what the scalar subqueries were doing, or how they produced a result, beyond the fact that they were querying and aggregating multiple rows. In fact the combination of calls does much the same as listagg(), though it allows for a CLOB result (which could be part of the performance problem, of course) rather than being limited to a varchar2() result).

Would you like to guess at this point why I constructed my demonstration query against obj$ the way I did when presenting the idea of high cost-per-row queries as a reason for regular pauses in the output ? The execution plan matched one of my two initial guesses about what the query was going to look like. When you “select count(*) from {this query}”, the optimizer will factor out the scalar subqueries and only have to count the result set from the hash join – and it might even manage to use a couple of parallel index fast full scans to get that result rather than doing the tablescans. When you run the query itself, though, you have to run the scalar subqueries.

If we trust the statistics, we have 5 subqueries to run for each row of the hash join – and the hash join is predicted to return 7.3 million rows. Given that the subqueries are all going to run parallel tablescans against a fairly large table (note – the cost of the tablescans on SERVICE_RELATIONSHIP is 368, compared to the cost of the tablescan on SERVICE_LOCATION which is 366 to return 3.1M rows) that’s an awful lot of work for each row returned – unless we benefit from an enormous amount of scalar subquery caching.
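
Scalar subquery caching, the mitigating factor mentioned above, is easy to model: Oracle remembers subquery results keyed on the correlating value(s), so repeated values cost (almost) nothing. Here is a rough Python sketch of the idea, with invented data; Oracle's real cache is a small hash table that can suffer collisions, which this ignores:

```python
# Rough model of scalar subquery caching: the result is remembered per
# distinct correlation key, so repeated keys don't re-run the subquery.
# Data and counts are invented for illustration.
executions = 0
cache = {}

def expensive_subquery(key):
    """Stand-in for a correlated aggregate subquery (e.g. a tablescan)."""
    global executions
    executions += 1
    return len(key)

def cached_subquery(key):
    if key not in cache:
        cache[key] = expensive_subquery(key)
    return cache[key]

# 3,000 driving rows but only 3 distinct correlation values:
driving_rows = ["sid1", "sid2", "sid3"] * 1000
results = [cached_subquery(sid) for sid in driving_rows]

print(executions)   # 3 executions instead of 3,000
```

The benefit collapses, of course, when the number of distinct correlating values is large relative to the cache, which is exactly the risk with 7.3M driving rows.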

Here’s another performance threat that the plan shows, though: notice where the PX SEND QC operation appears – that means the PX slaves send their (7M) rows to the Query Co-ordinator and the QC is responsible for doing all the work of running the scalar subqueries. Another interesting little threat visible in the plan shows up in the TQ column – the plan uses six “data flow operations” (using the original naming convention, though that changed some time ago but survived in the column names of v$pq_tqstat). In principle each DFO could allocate two sets of PX slaves (and every DFO could have a different degree of parallelism); in this example DFO number 6 (the driving hash join) uses two sets of slaves, and the other five DFOs (the scalar subqueries) use a single set each. The upshot of this is that if the default degree of parallelism in play is N this query will allocate 7N parallel query slaves. It gets a little nastier than that, though (based on checking the output from v$sql_plan_monitor), because each time one of the scalar subqueries runs Oracle seems to allocate and deallocate the slaves that are supposed to run it – which is probably going to cause some contention if there are other parallel queries trying to run at the same time.


So what could you do with this query ? It depends on how much change you want to make to the code.

It’s possible that an index on service_relationship(relationship_level, sid) – with compress 1 – might help if it’s very precise, and if the target table stays in the buffer cache for the duration of the query – but, in the absence of scalar subquery caching, that could still leave the query co-ordinator executing 35 million (5 queries x 7 million rows) subqueries in a serialised process.

A better bet may be to convert from subqueries to joins – remembering that the listagg() / xmlserialize() calls will require you to aggregate (which means sorting, in this case) an estimated 25 rows per driving row per relationship_level; in other words you may need to sort 7M * 125 = 875M rows – but at least you could do that in parallel, and there’s always the possibility that the estimated 25 drops off as you work through the different levels. You could choose to do 5 outer hash joins or (as Iggy Fernandez outlined in the thread) you could do a single outer join with a decode on the relationship_level. Another variation on this theme (which would probably have a plan showing ‘join then aggregate’) would be to ‘aggregate then join’. It’s possible that creating a non-mergeable inline view for the 5 values of relationship_level from a single table access, aggregating it to produce the five required columns, then using the result in an outer join, would be the most efficient option. In the absence of a detailed understanding of the data volume and patterns it’s hard to make any prediction of which strategy would work best.
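
The "aggregate then join" idea can be sketched in miniature with plain Python dictionaries. This is purely illustrative data, meant to show the shape of the rewrite rather than real SQL:

```python
# Miniature "aggregate then join": pre-aggregate the child rows once,
# keyed by (sid, relationship_level), then probe that dictionary while
# joining, instead of running a correlated subquery per driving row.
# The data is invented for illustration.
from collections import defaultdict

service_relationship = [
    # (sid, relationship_level, related_sid)
    (1, "1", 101), (1, "1", 102), (1, "2", 201),
    (2, "1", 103), (2, "3", 301),
]
service_lookup = [1, 2, 3]       # driving sids

# Aggregate once: a comma-separated list per (sid, level), mimicking the
# listagg()-style scalar subqueries.
groups = defaultdict(list)
for sid, level, related in service_relationship:
    groups[(sid, level)].append(str(related))
agg = {key: ",".join(sorted(vals)) for key, vals in groups.items()}

# Join: each driving row now costs five cheap dictionary probes instead
# of five correlated tablescans.
result = [(sid, [agg.get((sid, lvl)) for lvl in "12345"]) for sid in service_lookup]
print(result)
```

The single pass over service_relationship corresponds to the single table access feeding the non-mergeable inline view; the dictionary probe corresponds to the hash join.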


I may be wrong in my analysis of this problem. When I first saw the question, the performance pattern suggested an “obvious” design error in either the SQL or the infrastructure, and when I saw that the query and execution plan matched my prediction it became very hard for me to think that there might be some other significant cause.

There were a couple of interesting details in the execution plan that made me pursue the problem a little further. First I built a very simple model to get an estimate of the time needed to display 7M rows of a reasonable width in SQL*Plus running across a typical LAN (my estimate was in the order of 45 minutes – not hours); then I spent a little more time (about 10 minutes) building a model that reproduced the key features of the execution plan shown.

I then spent two or three hours playing with the model, and I’ll be writing a further blog with some of the results later on. One detail to carry away today, though, is that in 12c Oracle can do a new form of subquery unnesting which transformed the query from its 5 scalar subquery form into the seven table join form that was one of the suggestions made on the thread; even more interestingly, if I blocked the unnesting (to force the subquery execution) Oracle came up with a new operator (EXPRESSION EVALUATION) that allowed it to run the subqueries from the PX slaves before passing the results to the query co-ordinator – in other words eliminating the serialisation point.

To be continued …

Rittman Mead announces its course catalogue in Spanish and Portuguese.

Rittman Mead Consulting - Wed, 2014-11-12 07:55



We are pleased to announce that Rittman Mead now offers its complete course catalogue in Spanish and Portuguese. This is great news for those living in Latin America, Spain and Portugal, who can now receive the best training, including the theoretical and hands-on material, in their local language.

Courses are delivered both remotely, as live virtual classes through our web platform, and in person. Either way, each student receives the course material in electronic form and has access during the course to a dedicated virtual machine for the hands-on labs.

We offer a wide variety of Oracle Business Intelligence and Data Warehousing courses: from intensive 5-day courses (bootcamps) to sets of courses tailored to specific job roles. See the complete catalogue in Spanish, Portuguese or English.

Interested in our courses, or in becoming a training partner? Don't hesitate to contact us by email.

Categories: BI & Warehousing

Quick Links to Helpful Literature on Oracle Database In-Memory Column Store.

Kevin Closson - Wed, 2014-11-12 01:42

I’m surprised to find that Google is not cleanly ranking the helpful set of blog posts by Oracle’s Maria Colgan on the Oracle Database 12c In-Memory Column Store feature so I thought I’d put together this convenient set of links. Google search seems to only return a few of them in random order.

Over time I may add other helpful links regarding Oracle’s new, exciting caching technology.

Starter Information

Getting Started With Oracle Database In-Memory. Part I.

Getting Started With Oracle Database In-Memory. Part II.

Getting Started With Oracle Database In-Memory. Part III.

Getting Started With Oracle Database In-Memory. Part IV. 

In-Memory Column Store With Real Application Clusters

The following are links to information about Oracle Database In-Memory on Real Application Clusters:

Oracle Database In-Memory on RAC. Part I.

In-Memory Product That Requires Proprietary Storage?

How could the brand of storage matter for an in-memory cache feature? Good question.

Fellow Oaktable Network member Christian Antognini has produced a very important article regarding how Oracle Database 12c In-Memory Column Store with Real Application Clusters is questionable unless using Oracle storage (Exadata, SPARC SuperCluster).  I found Christian’s article very interesting because, after all, the topic at hand is an in-memory cache product (a.k.a., In-Memory Column Store). I fail to see any technical reason why Oracle wouldn’t support an in-memory product with blocks from any and all storage. It is in-memory after all, isn’t it? Please visit Christian’s article here: The Importance of the In-Memory DUPLICATE Clause for a RAC System.






Filed under: oracle

Mystery of java.sql.SQLRecoverableException: IO Error: Socket read timed out during adop/adpatch

Vikram Das - Tue, 2014-11-11 21:19
While applying the R12.2 upgrade driver, we faced an issue where WFXLoad.class failed in the adworker log but showed up as Running in adctrl:

Worker  Code      Context            Filename                    Status
------  --------  -----------------  --------------------------  --------------
     1  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     2  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     3  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     4  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     5  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     6  Run       AutoPatch R120 pl                              Wait        
     7  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     8  Run       AutoPatch R120 pl  WFXLoad.class               Running      
     9  Run       AutoPatch R120 pl  WFXLoad.class               Running      
    10  Run       AutoPatch R120 pl                              Wait        

adworker log shows:
Exception in thread "main" java.sql.SQLRecoverableException: IO Error: Socket read timed out
        at oracle.jdbc.driver.T4CConnection.logon(
        at oracle.jdbc.driver.PhysicalConnection.<init>(
        at oracle.jdbc.driver.T4CConnection.<init>(
        at oracle.jdbc.driver.T4CDriverExtension.getConnection(
        at oracle.jdbc.driver.OracleDriver.connect(
        at java.sql.DriverManager.getConnection(
        at java.sql.DriverManager.getConnection(
        at
        at
Caused by: Socket read timed out
        at
        at
        at oracle.jdbc.driver.T4CConnection.connect(
        at oracle.jdbc.driver.T4CConnection.logon(
        ... 8 more
This was happening again and again. The DBAs suspected a network issue, a cluster issue, a server issue, and all the other usual suspects. In the database alert log we saw these errors every few seconds:
Fatal NI connect error 12537, connecting to: (LOCAL=NO)

  VERSION INFORMATION:
        TNS for Linux: Version - Production
        Oracle Bequeath NT Protocol Adapter for Linux: Version - Production
        TCP/IP NT Protocol Adapter for Linux: Version - Production
  Time: 11-NOV-2014 19:58:19
  Tracing not turned on.
  Tns error struct:
    ns main err code: 12537
TNS-12537: TNS:connection closed
    ns secondary err code: 12560
    nt main err code: 0
    nt secondary err code: 0
    nt OS err code: 0
opiodr aborting process unknown ospid (26388) as a result of ORA-609

We tried changing the parameters in sqlnet.ora and listener.ora as instructed in the article Troubleshooting Guide for ORA-12537 / TNS-12537 TNS:Connection Closed (Doc ID 555609.1):

sqlnet.ora:   SQLNET.INBOUND_CONNECT_TIMEOUT=180
listener.ora: INBOUND_CONNECT_TIMEOUT_listener_name=120
However, the errors continued.  To rule out any network issues, I also restarted the network service on Linux:
service network restart
One thing I noticed was that each connection was taking an extra 4 seconds:

21:17:38 SQL> conn apps/apps
21:17:42 SQL>
Checked from a remote apps tier: the same 4 seconds.
Stopped the listener and checked on the DB server with a bequeath connection:

21:18:47 SQL> conn / as sysdba
21:18:51 SQL> conn / as sysdba
Again it took 4 seconds.
A few days back, I had seen connect time increase after setting the DB initialization parameter pre_page_sga to true in a different instance.  On a hunch, I checked this instance, and indeed pre_page_sga was set to true.  I set it back to false:

alter system set pre_page_sga=false scope=spfile;
shutdown immediate;
sqlplus /nolog
conn / as sysdba
SQL> set time on
22:09:46 SQL> conn / as sysdba
22:09:49 SQL>
The connections were happening instantly.  So I went ahead and resumed the patch after setting:
update fnd_install_processes set control_code='W', status='W';
I restarted the patch and all the workers completed successfully, and the patch ran significantly faster.  I then did a search to substantiate my solution with official Oracle documentation, and found the following articles:
Slow Connection or ORA-12170 During Connect when PRE_PAGE_SGA init.ora Parameter is Set (Doc ID 289585.1)
Health Check Alert: Consider setting PRE_PAGE_SGA to FALSE (Doc ID 957525.1)
The first article (289585.1) says: "PRE_PAGE_SGA can increase the process startup duration, because every process that starts must access every page in the SGA. This approach can be useful with some applications, but not with all applications. Overhead can be significant if your system frequently creates and destroys processes by, for example, continually logging on and logging off. The advantage that PRE_PAGE_SGA can afford depends on page size."
The second article (957525.1) says: "Having the PRE_PAGE_SGA initialization parameter set to TRUE can significantly increase the time required to establish database connections."
The golden words here are "Overhead can be significant if your system frequently creates and destroys processes by, for example, continually logging on and logging off.".  That is exactly what happens when you do adpatch or adop.
Keep this in mind: whenever you run adpatch or adop, make sure that pre_page_sga is set to false.  If you don't, you may get the error "java.sql.SQLRecoverableException: IO Error: Socket read timed out", and the patch will run significantly slower.  So set it to false and avoid these issues.

Categories: APPS Blogs

How to uninstall/remove EM12c Agent

Arun Bavera - Tue, 2014-11-11 15:59
./emctl stop agent

/opt/app/oracle/agent12c/core/ /opt/app/oracle/agent12c/core/ -agentHome /opt/app/oracle/agent12c

rm -Rf /opt/app/oracle/agent12c

From the host where emcli is installed:
emcli delete_target -name="" -type="oracle_emd" -delete_monitored_targets –async


Check for any leftover info:
select target_name,target_type from MGMT_TARGETS_DELETE where DELETE_COMPLETE_TIME is null

select target_name,target_type from MGMT_TARGETS_DELETE where target_name like '%spare%' and target_type='oracle_emd'

You can try:

exec mgmt_admin.delete_target('target_name','target_type');

If the agent install fails to register with OEM, then try:

     cd /opt/app/oracle/agent12c/agent_inst/bin/
    /opt/app/oracle/agent12c/agent_inst/bin/emctl secure agent
    /opt/app/oracle/agent12c/agent_inst/bin/emctl start agent
    /opt/app/oracle/agent12c/agent_inst/bin/emctl config agent addInternalTargets

   /opt/app/oracle/agent12c/agent_inst/bin/emctl upload
Categories: Development

AWR does not store explain plan predicates

Yann Neuhaus - Tue, 2014-11-11 14:49

What is the most important part in an execution plan? It's probably the predicate section. It helps to see implicit conversions. It helps to understand why an index is used or not. Or to see additional predicates coming from constraints. When you get an execution plan from shared pool, the dbms_xplan.display_cursor() shows the predicates. But when you retrieve a plan from the past, dbms_xplan.display_awr() does not show them.


With Statspack, you can store execution plans when you take a level 7 snapshot (which I often set as the default). But you don't have the predicate section, for the simple reason that the predicates are not collected.

Here is what you can find in spcpkg.sql:

insert into stats$sql_plan(
 plan_hash_value, ...
 access_predicates ,
 filter_predicates...) ... select ...
 new_plan.plan_hash_value, ...
 0 -- should be max(sp.access_predicates) (2254299),
 0 -- should be max(sp.filter_predicates)



AWR is no better. Finding the code is a bit more difficult: it's run from the compiled binaries. Let's find it:

$ strings $ORACLE_HOME/bin/oracle | grep -i "v[$]sql_plan"


SELECT /*+ leading(S) use_nl(P) PARAM('_cursor_plan_unparse_enabled','FALSE') */ remarks, operation, options, object_node, object_owner, object_name, object_alias, object_type, optimizer, id, parent_id, depth, position, cost, cardinality, bytes, other_tag, partition_start, partition_stop, partition_id, distribution, cpu_cost, io_cost, temp_space, access_predicates, filter_predicates,projection, time, qblock_name FROM v$sql_plan_statistics_all P, v$sqlarea_plan_hash S WHERE P.sql_id = :1 AND P.plan_hash_value = :2 AND P.con_dbid = :3 AND S.sql_id = :4 AND S.plan_hash_value = :5 AND S.con_dbid = :6 AND P.child_address = s.last_active_child_address


I've highlighted what is interesting here. The parameter description for _cursor_plan_unparse_enabled is 'enables/disables using unparse to build projection/predicates'; it is true by default but is set to false for this query. Thus, access_predicates and filter_predicates are null, just as in Statspack.


If you tried to use those access_predicates and filter_predicates columns in the days of 9i, you probably remember that it finished in ORA-7445 most of the time. Those columns are a bit special: the predicates are not stored as-is, they are 'unparsed' from the execution plan code. And because of a few bugs, Oracle has probably chosen to avoid automatic collection on them.

I think there were still some bugs until 11.1, but I query those columns very frequently (directly or through dbms_xplan.display_cursor) and I don't think I've seen any problem in current versions. I hope that one day this limitation will be lifted.


When I have a plan coming from AWR, the first thing I do is try to find it in the shared pool with dbms_xplan.display_cursor, using the same sql_id and the same plan_hash_value. Then I'll have the predicate section.

If it is not present anymore, then I try to reproduce it with an EXPLAIN PLAN. In order to get the same plan, I retrieve the outlines ('+OUTLINE' format); the bind variable values may help as well ('+PEEKED_BINDS' format). Then I reproduce it and check that I get the same plan_hash_value.


Let's see an example.

I have the following plan from AWR

SQL> select * from table(dbms_xplan.display_awr('7ws837zynp1zv',3722429161,format=>'basic +predicate'));

SQL_ID 7ws837zynp1zv

Plan hash value: 3722429161

| Id  | Operation                            | Name                |
|   0 | SELECT STATEMENT                     |                     |
|   1 |  COUNT STOPKEY                       |                     |
|   3 |    INDEX RANGE SCAN                  | CARDDETAILS_CUST_IX |

17 rows selected.

There is no predicate information here.

Then I try to get the plan with explain plan:

SQL> alter session set current_schema=SOE;

Session altered.

SQL> explain plan for
  ...
SQL> select * from table(dbms_xplan.display(format=>'basic +predicate'));

Plan hash value: 2597291669

| Id  | Operation          | Name         |
|   0 | SELECT STATEMENT   |              |
|*  1 |  COUNT STOPKEY     |              |

Predicate Information (identified by operation id):

   1 - filter(ROWNUM < TO_NUMBER(:B1))
   2 - filter("CUSTOMER_ID"=TO_NUMBER(:B2))

15 rows selected.

The problem is that I don't have the same plan. I want the predicates for the index access (plan hash value 3722429161).

So I get the maximum information from the AWR plan, with +OUTLINE and +PEEKED_BINDS:

SQL> select * from table(dbms_xplan.display_awr('7ws837zynp1zv',3722429161,format=>'basic +outline +peeked_binds'));

SQL_ID 7ws837zynp1zv

Plan hash value: 3722429161

| Id  | Operation                            | Name                |
|   0 | SELECT STATEMENT                     |                     |
|   1 |  COUNT STOPKEY                       |                     |
|   3 |    INDEX RANGE SCAN                  | CARDDETAILS_CUST_IX |

Outline Data


Peeked Binds (identified by position):

   1 - :B2 (NUMBER): 315821
   2 - :B1 (NUMBER): 15

39 rows selected.

And I can now do the explain plan with the hints coming from the outlines (I can also replace the variables with the bind values if I want to, as those are the values that were peeked to optimize the statement):

SQL> explain plan for
  /*+
      DB_VERSION('')
      ALL_ROWS
      OUTLINE_LEAF(@"SEL$1")
      INDEX_RS_ASC(@"SEL$1" "CARD_DETAILS"@"SEL$1"
              ("CARD_DETAILS"."CUSTOMER_ID"))
  */
  ...
SQL> select * from table(dbms_xplan.display(format=>'basic +predicate'));

Plan hash value: 3722429161

| Id  | Operation                            | Name                |
|   0 | SELECT STATEMENT                     |                     |
|*  1 |  COUNT STOPKEY                       |                     |
|*  3 |    INDEX RANGE SCAN                  | CARDDETAILS_CUST_IX |

Predicate Information (identified by operation id):

   1 - filter(ROWNUM < TO_NUMBER(:B1))
   3 - access("CUSTOMER_ID"=TO_NUMBER(:B2))

16 rows selected.

Bingo, now I have the predicates.


Time for a Wish

I wish that one day Oracle will lift that limitation so that we can get predicate information from AWR (when in EE + Diagnostic Pack) and Statspack (SE, and EE without the option).

I've posted the idea on OTN. Please vote for the idea here.

CDB views are now using CONTAINERS()

Yann Neuhaus - Tue, 2014-11-11 07:41

I've blogged about multitenant internals at the time when it was just released. Something has changed since then, and blogging about it was on my todo list for a long time. Now the occasion to do it has been triggered by a question on Oracle Forums about CDB_DATA_FILES not showing the PDB$SEED datafiles, because there is an important change we must be aware of.

In the previous blog, I described how you can query PDB information using 'container data objects', like the CDB_ views that are created by catcdbviews.sql, defined with the CDB$VIEW clause, which looks like a table function (but is not). It was not documented, and was implemented internally with parallel query, partitioning and a fixed table. Lots of internal details from Laurent Leturgez here. The newer release has introduced the CONTAINERS() table function, which does the same but is documented.

And the CDB_ views are now using the CONTAINERS() clause instead of the CDB$VIEW() one.

However, there is something different. By default the PDB$SEED is not queried. This comes from the initialization parameter exclude_seed_cdb_view which is true by default:

SQL> show parameter exclude_seed_cdb_view

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
exclude_seed_cdb_view                boolean     TRUE

SQL> select * from containers(dual);

D     CON_ID
- ----------
X          3
X          1

You don't see PDB$SEED here, which is container id 2.

We can get back to the previous behaviour by setting the parameter to false:

SQL> alter session set exclude_seed_cdb_view=false;

Session altered.

SQL> select * from containers(dual);

D     CON_ID
- ----------
X          2
X          1
X          3


So, what's the point of all that? If you are used to listing the database datafiles with a select from DBA_DATA_FILES, then you probably query CDB_DATA_FILES from the root in multitenant. And you expect to get all files. You add the control files and the log file members and you have an exhaustive list of your database files. Maybe you use that in a backup or maintenance script.

Then, in the newer release, you will miss the PDB$SEED files, except if you set exclude_seed_cdb_view to false. You can't rely on CDB_DATA_FILES, and that's the important point raised in the Oracle Forum post.
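
For scripts that must see every datafile from the root, two workarounds come to mind (a sketch, not from the original post; column lists may vary by version):

```sql
-- Option 1: make CDB_DATA_FILES include PDB$SEED for this session
alter session set exclude_seed_cdb_view = false;
select con_id, file_id, file_name from cdb_data_files order by con_id, file_id;

-- Option 2: V$DATAFILE is not filtered by that parameter and lists
-- the datafiles of all containers, including PDB$SEED
select con_id, file#, name from v$datafile order by con_id, file#;
```

Note that the ALTER SESSION only affects the current session, so a permanent script should set it explicitly each time.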

I usually prefer to use RMAN for that, and RMAN gets it right:

RMAN> connect target /

connected to target database: CDB1 (DBID=836194344)

RMAN> report schema;

using target database control file instead of recovery catalog
Report of database schema for database with db_unique_name CDB1

List of Permanent Datafiles
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1    790      SYSTEM               YES     /ZFS01/oradata/CDB1/system01.dbf
3    610      SYSAUX               NO      /ZFS01/oradata/CDB1/sysaux01.dbf
4    160      UNDOTBS1             YES     /ZFS01/oradata/CDB1/undotbs01.dbf
5    250      PDB$SEED:SYSTEM      NO      /ZFS01/oradata/CDB1/pdbseed/system01.dbf
6    5        USERS                NO      /ZFS01/oradata/CDB1/users01.dbf
7    490      PDB$SEED:SYSAUX      NO      /ZFS01/oradata/CDB1/pdbseed/sysaux01.dbf
8    270      PDB1:SYSTEM          NO      /ZFS01/oradata/CDB1/PDB1/system01.dbf
9    530      PDB1:SYSAUX          NO      /ZFS01/oradata/CDB1/PDB1/sysaux01.dbf
10   40       PDB1:USERS           NO      /ZFS01/oradata/CDB1/PDB1/SAMPLE_SCHEMA_users01.dbf
11   1345     PDB1:EXAMPLE         NO      /ZFS01/oradata/CDB1/PDB1/example01.dbf

List of Temporary Files
File Size(MB) Tablespace           Maxsize(MB) Tempfile Name
---- -------- -------------------- ----------- --------------------
1    275      TEMP                 32767       /ZFS01/oradata/CDB1/temp01.dbf
2    20       PDB$SEED:TEMP        32767       /ZFS01/oradata/CDB1/pdbseed/pdbseed_temp012014-11-01_08-57-07-AM.dbf
3    20       PDB1:TEMP            32767       /ZFS01/oradata/CDB1/PDB1/PDB1_temp012014-11-01_09-10-16-AM.dbf

Where CDB_DATA_FILES by default shows only:

SQL> select con_id,file_id,file_name from cdb_data_files;

    CON_ID    FILE_ID FILE_NAME
---------- ---------- ----------------------------------------------------
         1          1 /ZFS01/oradata/CDB1/system01.dbf
         1          3 /ZFS01/oradata/CDB1/sysaux01.dbf
         1          6 /ZFS01/oradata/CDB1/users01.dbf
         1          4 /ZFS01/oradata/CDB1/undotbs01.dbf
         3          8 /ZFS01/oradata/CDB1/PDB1/system01.dbf
         3          9 /ZFS01/oradata/CDB1/PDB1/sysaux01.dbf
         3         10 /ZFS01/oradata/CDB1/PDB1/SAMPLE_SCHEMA_users01.dbf
         3         11 /ZFS01/oradata/CDB1/PDB1/example01.dbf

More internals...

Now, what has changed about CONTAINERS() vs. CDB$VIEW()?

SQL> set autotrace trace explain
SQL> select * from cdb_data_files;

Execution Plan
Plan hash value: 1439328272

| Id  | Operation               | Name     | Rows  | Pstart| Pstop |
|   0 | SELECT STATEMENT        |          | 20004 |       |       |
|   1 |  PX COORDINATOR         |          |       |       |       |
|   2 |   PX SEND QC (RANDOM)   | :TQ10000 | 20004 |       |       |
|   3 |    PX PARTITION LIST ALL|          | 20004 |     1 |   254 |
|   4 |     FIXED TABLE FULL    | X$CDBVW$ | 20004 |       |       |

It's still using parallel processes on a partitioned fixed table.

SQL> alter session set "_px_cdb_view_enabled"=FALSE;

Session altered.

SQL> select * from cdb_data_files;

Execution Plan
Plan hash value: 2351439557

| Id  | Operation          | Name     | Rows  | Pstart| Pstop |
|   0 | SELECT STATEMENT   |          | 20004 |       |       |
|   1 |  PARTITION LIST ALL|          | 20004 |     1 |   254 |
|   2 |   FIXED TABLE FULL | X$CDBVW$ | 20004 |       |       |

SQL> alter session set "_partition_cdb_view_enabled"=FALSE;

Session altered.

SQL> select * from cdb_data_files;

Execution Plan
Plan hash value: 1784620524

| Id  | Operation        | Name     | Rows  | Bytes |
|   0 | SELECT STATEMENT |          | 20004 |  8185K|
|   1 |  FIXED TABLE FULL| X$CDBVW$ | 20004 |  8185K|

And the undocumented parameters _px_cdb_view_enabled and _partition_cdb_view_enabled still control it.

Each partition is estimated to return 10000 rows (that's hardcoded), so the estimation for 2 partitions (CDB$ROOT and PDB1) is about 20000.

However, that was after I tried to gather statistics for cdb views:

SQL> exec dbms_pdb.update_cdbvw_stats

and I got statistics:

SQL> select * from  cdbvw_stats$ where objname='DBA_DATA_FILES';

OBJNAME                        TIMESTAMP      FLAGS     ROWCNT     SPARE1
------------------------------ --------- ---------- ---------- ----------
DBA_DATA_FILES                 11-NOV-14          1          4

which is the number of datafiles in my CDB$ROOT. So I expected the estimation to be about 8. But that will probably be for another blog post...

What is important to know for the moment is that by default CDB_DATA_FILES doesn't show all your database files.

OLTP Compression, Drop Column, Partition Exchange

Dominic Brooks - Tue, 2014-11-11 06:02

With Basic Compression, you cannot drop a column.

create table t1
(col1 number
,col2 number)
compress;

table T1 created.

alter table t1 drop column col2;

SQL Error: ORA-39726: unsupported add/drop column operation on compressed tables
39726. 00000 -  "unsupported add/drop column operation on compressed tables"
*Cause:    An unsupported add/drop column operation for compressed table
           was attemped.
*Action:   When adding a column, do not specify a default value.
           DROP column is only supported in the form of SET UNUSED column
           (meta-data drop column).

But with Basic Compression, you can set a column as unused.

alter table t1a set unused column col2;  

table T1A altered

With OLTP Compression, on the face of it, you can drop a column.

drop table t1;

create table t1
(col1 number
,col2 number)
compress for oltp;

select table_name, column_name, hidden_column 
from   user_tab_cols 
where  table_name = 'T1' order by column_id;

TABLE_NAME COLUMN_NAME HIDDEN_COLUMN
---------- ----------- -------------
T1         COL1        NO
T1         COL2        NO

alter table t1 drop column col2;

table T1 altered

But this is a lie/misdirection.
In reality, the dropped column is just renamed.
We see the column is not dropped but hidden:

select table_name, column_name, hidden_column 
from   user_tab_cols 
where  table_name = 'T1' order by column_id;

TABLE_NAME COLUMN_NAME                HIDDEN_COLUMN
---------- -------------------------- -------------
T1         COL1                       NO
T1         SYS_C00002_14111103:30:14$ YES

1. The hidden column name is system generated and suffixed with YYMMDDHH24:MI:SS$
2. Hidden columns are not exposed via USER_TAB_COLUMNS hence usage of USER_TAB_COLS.

I’m not a fan of this behaviour.
It is intended and documented.

Not in the OLTP Compression master note 1223705.1,

but in Doc ID 1068820.1:

Only advanced compression, which requires Advanced Compression license, supports drop on compressed tables but even in this situation there is no real drop but internally the database sets the column UNUSED to avoid long-running decompression and recompression operations.

Why might this matter?

I can think of at least one time when this might matter – partition exchange.
Ok, the circumstances are probably niche and more relevant for a DEV environment than PROD.
But imagine you add a column to one side of the partition exchange operation and then think better of it and immediately drop it.

drop table t1;
drop table t2;

create table t1
(col1 number
,col2 number)
compress for oltp;
create table t2
(col1 number
,col2 number)
partition by range (col1) interval(1)
(partition p0 values less than (0))
compress for oltp;

Show that partition exchange works initially:

lock table t2 partition for (1) in exclusive mode;

lock succeeded.

alter table t2 exchange partition for (1) with table t1;

table T2 altered.

But if I add and drop a column

alter table t1 add col3 number;

table T1 altered.

alter table t1 drop column col3;

table T1 altered.

Then my partition exchange no longer works:

alter table t2 exchange partition for (1) with table t1;

SQL Error: ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
14097. 00000 -  "column type or size mismatch in ALTER TABLE EXCHANGE PARTITION"
*Cause:    The corresponding columns in the tables specified in the
           ALTER TABLE EXCHANGE PARTITION are of different type or size
*Action:   Ensure that the two tables have the same number of columns
           with the same type and size.

Until I recreate the table or use DBMS_REDEFINITION.
Or until I do the same to the other side of the exchange operation:

alter table t2 add col345 number;

table T2 altered.

alter table t2 drop column col345;

table T2 altered.


alter table t2 exchange partition for (1) with table t1;

table T2 altered.

Value Placeholder

Denes Kubicek - Tue, 2014-11-11 05:46
Did you know that you can use the Value Placeholder in your page items to display hints for the users? This is quite a neat feature you can implement without using plugins for displaying hints. It will also work with APEX ampersand substitution strings like


Here is how it looks. Be careful with this: it will only work in browsers supporting HTML5.
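
Under the hood, the Value Placeholder ends up as the HTML5 placeholder attribute on the generated input element. A minimal sketch of the resulting markup (the item name and hint text below are illustrative, not from the post):

```html
<input type="text" id="P1_ENAME" name="P1_ENAME"
       placeholder="Enter the employee name, e.g. SCOTT">
```

Browsers that do not support HTML5 simply ignore the unknown attribute and render a plain text field, so nothing breaks on older browsers.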

Categories: Development

Is Oracle Database 12c ( Faster Than Previous Releases?

I was wondering if the new Oracle Database 12c in-memory column store feature will SLOW performance when it is NOT being used. I think this is a fair question because most Oracle Database systems will NOT be using this feature.

While the new in-memory column store feature is awesome and significant, with each new Oracle feature there is additional kernel code. And if Oracle is not extremely careful, these new lines of Oracle kernel code can slow down the core of Oracle processing, that is, buffer processing in Oracle's buffer cache.

Look at it this way, if a new Oracle release requires 100 more lines of kernel code to be executed to process a single buffer, that will be reflected in how many buffers Oracle can process per second.

To put it bluntly, this article is the result of my research comparing core buffer processing rates between recent Oracle Database versions.

With postings like this, it is very important for everyone to understand that the results I publish are based on a very specific and targeted test and not on a real production load. Do not use my results in making a "should I upgrade" decision. That would be stupid and an inappropriate use of my experimental results. But because I publish every aspect of my experiment and it is easily reproducible, it is a valid data point with which to have a discussion and also highlight various situations that DBAs need to know about.

There are two interesting results from this research project. This article is about the first discovery and my next article will focus on the second. The second is by far the most interesting!

FYI: Back in August of 2013 I performed a similar experiment in which I compared earlier Oracle Database versions. I posted the article HERE.

Why "Faster" Means More Buffer Gets Processed Per Second
For this experiment when I say "faster" I am referring to raw buffered block processing. When a buffer is touched in the buffer cache it is sometimes called a buffer get or a logical IO. But regardless of the name, every buffer get increases the instance statistic, session logical reads.
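
As a sketch of how this statistic can be read (the statistic name is as it appears in v$statname; snapshot it before and after the workload and divide the delta by the elapsed time):

```sql
-- Current session's cumulative buffer get count
select sn.name, st.value
from   v$mystat st, v$statname sn
where  st.statistic# = sn.statistic#
and    sn.name = 'session logical reads';
```

The difference between two snapshots of this value, divided by the elapsed milliseconds, gives the buffer gets per millisecond rate used throughout this article.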

I like raw logical IO processing experiments because they are central to all Oracle Database processing. Plus, with each new Oracle release, as additional functionality is added, it is likely more lines of Oracle kernel code will exist. To maintain performance with added functionality is an incredible feat. It's more likely that core buffer processing will be slower because of the new features. Is this the case with Oracle's in-memory column store?

How I Setup The Experiment
I have included all the detailed output, scripts, R commands and output, data plots and more in the Analysis Pack that can be downloaded HERE.

There are a lot of ways I could have run this experiment. But two key items must exist for a fair comparison. First, all the processing must be in cache. There can be no physical read activity. Second, the same SQL must be run during the experiment and have the same execution plan. This implies the Oracle 12c column store will NOT be used. A different execution plan is considered "cheating," as a bad plan will clearly lose. Again, this is a very targeted and specific experiment.

The experiment compares the buffer get rates for a given SQL statement. For each Oracle version, I gathered 33 samples and excluded the first two, just to ensure caching was not an issue. The SQL statement runs for around 10 seconds, processes around 10.2M rows and touches around 5M buffers. I checked to ensure the execution plans are the same for each Oracle version. (Again, all the details are in the Analysis Pack for your reading pleasure.)

I ran the experiment on a Dell server. Here are the details:
$ uname -a
Linux sixcore 2.6.39-400.17.1.el6uek.x86_64 #1 SMP Fri Feb 22 18:16:18 PST 2013 x86_64 x86_64 x86_64 GNU/Linux
To make this easier for myself, I used my CPU Speed Test tool (version 1i) to perform the test. I blogged about this last month HERE. The latest version of this tool can be downloaded HERE.

The Results, Statistically
Shown below are the experimental results. Remember, the statistic I'm measuring is buffer gets per millisecond.

Details about the above table: The "Normal" column indicates whether the statistical distribution of the 31 samples is normal. If the p-value (far right column) is greater than 0.05, then I'll say they are normal. In all three cases, the p-value is less than 0.05. In fact, if you look at the histograms contained in the Analysis Pack, every histogram is clearly not visually normal. As you would expect, the "Average" and the "Median" are the statistical mean and median. The "Max" is the largest value in the sample set. The "Std Dev" is the standard deviation, which doesn't mean much since our sample sets are not normally distributed.

As I blogged about before, Oracle Database 12c buffer processing is faster than in Oracle Database 11g. However, the interesting part is that the 12c version with the in-memory column store is slower than the previous version of 12c. In fact, in my experiment the in-memory column store version is around 5.5% slower! This means the earlier version "out of the box" can process logical buffers around 5.5% faster! Interesting.

In case you're wondering, I used the default out-of-the-box in-memory column store settings. I checked the in-memory size parameter, inmemory_size, and it was indeed set to zero. Also, when I start up the Oracle instance there is no mention of the in-memory column store.

Statistically Comparing Each Version
As an important side bar, I did statistically compare the Oracle Database versions. Why? Because while a 5.5% decrease in buffer throughput may seem important, it may not be statistically significant, meaning the difference might not be distinguishable from noise given our sample sets.

So going around saying the newer version is "slower" by 5.5% could be misleading. But in my experiment, it is NOT misleading, because the differences in buffer processing are statistically significant. The relevant experimental details are shown below.

Version A  Version B  Statistically Different  p-value
---------  ---------  -----------------------  -------
                      YES                       0.0000
                      YES                       0.0000
                      YES                       0.0000

In all three cases the p-value was less than 0.05 signifying the two sample sets are statistically
different. Again, all the details are in the Analysis Pack.
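
The author's analysis was done in R (see the Analysis Pack). The same kind of nonparametric comparison can be sketched in pure Python with a permutation test; the sample values below are made up for illustration and are NOT the article's data.

```python
import random
import statistics

def permutation_test(a, b, n_perm=10_000, seed=42):
    """Two-sided permutation test on the difference of sample means.

    Returns an approximate p-value: the fraction of random relabelings
    whose absolute mean difference is at least the observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = list(a) + list(b)
    n_a = len(a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical buffer-get rates in gets/ms (NOT the article's data):
without_im = [554, 560, 548, 557, 551, 559, 555, 550]  # earlier release
with_im    = [524, 528, 519, 530, 522, 526, 521, 527]  # in-memory release
print(permutation_test(without_im, with_im))
```

A p-value below 0.05 here would say what the author's R analysis said: the two sample sets are unlikely to come from the same distribution, so the throughput difference is statistically significant.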

The chart above shows the histograms of both Oracle Database 12c version sample sets together. Visually they look very separated and different, with no data crossover. So from both a numeric and a visual perspective there is a real difference between the two versions.

What Does This Mean To Me
To me this is surprising. First, there is a clear buffer processing gain when upgrading from Oracle 11g to 12c. That is awesome news! But I was not expecting a statistically significant 5.5% buffer processing decrease when upgrading to the more recent 12c version. Second, this has caused me to do a little digging to perhaps understand the performance decrease. The results of my experimental journey are really interesting... I think more interesting than this posting! But I'll save the details for my next article.

Remember, if you have any questions or concerns about my experiment you can run the experiment yourself. Plus all the details of my experiment are included in the Analysis Pack.

All the best in your Oracle performance tuning work!


Categories: DBA Blogs

PeopleTools/Interaction Hub Dependencies Doc--Updated for Revision 3

PeopleSoft Technology Blog - Mon, 2014-11-10 17:50

The latest version (Revision 3) of the PeopleSoft Interaction Hub is now available.  Since Hub Revisions are aligned with PeopleTools releases, we provide a doc that helps explain how the Hub can best be implemented to take advantage of PeopleTools features.  This document contains information on how new customers can install PeopleSoft Interaction Hub and existing PeopleSoft Interaction Hub customers can apply PIH 9.1 Revision 3 functionality if they are on an earlier version. It also describes the general upgrade process within the Revision model, and how customers can expect this to work with subsequent revisions, maintenance packs, and bundles.

The document is available on the Portal Solutions home page in Oracle Support.

Dammit, the LMS

Michael Feldstein - Mon, 2014-11-10 16:07

Count De Monet: I have come on the most urgent of business. It is said that the people are revolting!

King Louis: You said it; they stink on ice.

- History of the World, Part I

Jonathan Rees discovered a post I wrote about the LMS in 2006 and, in doing so, discovered that I was writing about LMSs in 2006. I used to write about the future of the LMS quite a bit. I hardly ever do anymore, mostly because I find the topic to be equal parts boring and depressing. My views on the LMS haven’t really changed in the last decade. And sadly, LMSs themselves haven’t changed all that much either. At least not in the ways that I care about most. At first I thought the problem was that the technology wasn’t there to do what I wanted to do gracefully and cost-effectively. That excuse doesn’t exist anymore. Then, once the technology arrived as Web 2.0 blossomed[1], I thought the problem was that there was little competition in the LMS market and therefore little reason for LMS providers to change their platforms. That’s not true anymore either. And yet the pace of change is still glacial. I have reluctantly come to the conclusion that the LMS is the way it is because a critical mass of faculty want it to be that way.

Jonathan seems to think that the LMS will go away soon because faculty can find everything they need on the naked internet. I don’t see that happening any time soon. But the reasons why seem to get lost in the perennial conversations about how the LMS is going to die any day now. As near as I can remember, the LMS has been about to die any day now since at least 2004, which was roughly when I started paying attention to such things.

And so it comes to pass that, with great reluctance, I take up my pen once more to write about the most dismal of topics: the future of the LMS.

In an Ideal World…

I have been complaining about the LMS on the internet for almost as long as there have been people complaining about the LMS on the internet. Here’s something I wrote in 2004:

The analogy I often make with Blackboard is to a classroom where all the seats are bolted to the floor. How the room is arranged matters. If students are going to be having a class discussion, maybe you put the chairs in a circle. If they will be doing groupwork, maybe you put them in groups. If they are doing lab work, you put them around lab tables. A good room set-up can’t make a class succeed by itself, but a bad room set-up can make it fail. If there’s a loud fan drowning out conversation or if the room is so hot that it’s hard to concentrate, you will lose students.

I am a first- or, at most, second-generation internet LMS whiner. And that early post captures an important aspect of my philosophy on all things LMS and LMS-like. I believe that the spaces we create for fostering learning experiences matter, and that one size cannot fit all. Therefore, teachers and students should have a great deal of control in shaping their learning environments. To the degree that it is possible, technology platforms should get out of the way and avoid dictating choices. This is a really hard thing to do well in software, but it is a critical guiding principle for virtual learning environments. It’s also the thread that ran through the 2006 blog post that Jonathan quoted:

Teaching is about trust. If you want your students to take risks, you have to create an environment that is safe for them to do so. A student may be willing to share a poem or a controversial position or an off-the-wall hypothesis with a small group of trusted classmates that s/he wouldn’t feel comfortable sharing with the entire internet-browsing population and having indexed by Google. Forever. Are there times when encouraging students to take risks out in the open is good? Of course! But the tools shouldn’t dictate the choice. The teacher should decide. It’s about academic freedom to choose best practices. A good learning environment should enable faculty to password-protect course content but not require it. Further, it should not favor password-protection, encouraging teachers to explore the spectrum between public and private learning experiences.

Jonathan seems to think that I was supporting the notion of a “walled garden” in that post—probably because the title of the post is “In Defense of Walled Gardens”—but actually I was advocating for the opposite at the platform level. A platform that is a walled garden is one that forces particular settings related to access and privacy on faculty and students. Saying that faculty and students have a right to have private educational conversations when they think those are best for the situation is not at all the same as saying that it’s OK for the platform to dictate decisions about privacy (or, for that matter, that educational conversations should always be private). What I have been trying to say, there and everywhere, is that our technology needs to support and enable the choices that humans need to make for themselves regarding the best conditions for their personal educational needs and contexts.

Regarding the question of whether this end should be accomplished through an “LMS,” I am both agnostic and utilitarian on this front. I can imagine a platform we might call an “LMS” that would have quite a bit of educational value in a broad range of circumstances. It would bear no resemblance to the LMS of 2004 and only passing resemblance to the LMS of 2014. In the Twitfight between Jonathan and Instructure co-founder Brian Whitmer that followed Jonathan’s post, Brian talked about the idea of an LMS as a “hub” or an “aggregator.” These terms are compatible with what my former SUNY colleagues and I were imagining in 2005 and 2006, although we didn’t think of it in those terms. We thought of the heart of it as a “service broker” and referred to the whole thing in which it would live as a “Learning Management Operating System (LMOS).” You can think of the broker as the aggregator and the user-facing portions of the LMOS as the hub that organized the aggregated content and activity for ease-of-use purposes.

By the way, if you leave off requirements that such a thing should be “institution-hosted” and “enterprise,” the notion that an aggregator or hub would be useful in virtual learning environments is not remotely contentious. Jim Groom’s ds106 uses a WordPress-based aggregation system, the current generation of which was built by Alan Levine. Stephen Downes built gRSShopper ages ago. Both of these systems are RSS aggregators at heart. That second post of mine on the LMOS service broker, which gives a concrete example of how such a thing would work, mainly focuses on how much you could do by fully exploiting the rich metadata in an RSS feed and how much more you could do with it if you just added a couple of simple supplemental APIs. And maybe a couple of specialized record types (like iCal, for example) that could be syndicated in feeds similarly to RSS. While my colleagues and I were thinking about the LMOS as an institution-hosted enterprise application, there’s nothing about the service broker that requires it to be so. In fact, if you add some extra bits to support federation, it could just as easily form the backbone of a distributed network of personal learning environments. And that, in fact, is a pretty good description of the IMS standard in development called Caliper, which is why I am so interested in it. In my recent post about walled gardens from the series that Jonathan mentions in his own post, I tried to spell out how Caliper could enable either a better LMS, a better world without an LMS, or both simultaneously.

Setting aside all the technical gobbledygook, here’s what all this hub/aggregator/broker stuff amounts to:

  • Jonathan wants to “have it all,” by which he means full access to the wide world of resources on the internet. Great! Easily done.
  • The internet has lots of great stuff but is not organized to make that stuff easy to find or reduce the number of clicks it takes you to see a whole bunch of related stuff. So it would be nice to have the option of organizing the subset of stuff that I need to look at for a class in ways that are convenient for me and make minimal demands on me in terms of forcing me to go out and proactively look to see what has changed in the various places where there might be activity for my class.
  • Sometimes the stuff happening in one place on the internet is related to stuff happening in another place in ways that are relevant to my class. For example, if students are writing assignments on their blogs, I might want to see who has gotten the assignment done by the due date and collect all those assignments in one place that’s convenient for me to comment on them and grade them. It would be nice if I had options of not only aggregating but also integrating and correlating course-related information.
  • Sometimes I may need special capabilities for teaching my class that are not available on the general internet. For example, I might want to model molecules for chemistry or have a special image viewer with social commenting capabilities for art history. It would be nice if there were easy but relatively rich ways to add custom “apps” that can feed into my aggregator.
  • Sometimes it may be appropriate and useful (or even essential) to have private educational conversations and activities. It would be nice to be able to do that when it’s called for and still have access to the whole public internet, including the option to hold classes mostly “in public.”
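At heart, the aggregation described in these bullets is just feed merging. As a minimal sketch (hypothetical feed data, Python standard library only; a real hub would fetch the feeds over HTTP and carry much richer metadata), merging several RSS feeds into one chronological stream looks like this:

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Hypothetical student-blog feeds -- in practice these would be fetched over HTTP.
FEEDS = [
    """<rss><channel><title>Alice's blog</title>
         <item><title>Assignment 1</title>
           <pubDate>Mon, 10 Nov 2014 09:00:00 GMT</pubDate></item>
       </channel></rss>""",
    """<rss><channel><title>Bob's blog</title>
         <item><title>Assignment 1</title>
           <pubDate>Tue, 11 Nov 2014 14:30:00 GMT</pubDate></item>
       </channel></rss>""",
]

def aggregate(feeds):
    """Merge items from several RSS documents into one list, newest first."""
    items = []
    for doc in feeds:
        channel = ET.fromstring(doc).find("channel")
        source = channel.findtext("title")
        for item in channel.findall("item"):
            items.append({
                "source": source,
                "title": item.findtext("title"),
                "date": parsedate_to_datetime(item.findtext("pubDate")),
            })
    # One convenient place to see everything, sorted for the reader.
    return sorted(items, key=lambda i: i["date"], reverse=True)

for entry in aggregate(FEEDS):
    print(entry["date"].date(), entry["source"], "-", entry["title"])
```

From there, correlating entries against an assignment due date (who posted before the deadline?) is a simple filter over the merged list, which is exactly the kind of integration the bullets describe.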

In an ideal world, every class would have its own unique mix of these capabilities based on what’s appropriate for the students, teacher, and subject. Not every class needs all of these capabilities. In fact, there are plenty of teachers who find that their classes don’t need any of them. They do just fine with WordPress. Or a wiki. Or a listserv. Or a rock and a stick. And these are precisely the folks who complain the loudest about what a useless waste the LMS is. It’s a little like an English professor walking into a chemistry lab and grousing, “Who the hell designed this place? You have these giant tables which are bolted to the floor in the middle of the room, making it impossible to have a decent class conversation. And for goodness sake, the tables have gas jets on them. Gas jets! Of all the pointless, useless, preposterous, dangerous things to have in a classroom…! And I don’t even want to know how much money the college wasted on installing this garbage.”

Of course, today’s LMS doesn’t look much like what I described in the bullet points above (although I do think the science lab analogy is a reasonable one even for today’s LMS). It’s fair to ask why that is the case. Some of us have been talking about this alternative vision for something that may or may not be called an “LMS” for a decade or longer now. And there are folks like Brian Whitmer at LMS companies (and LMS open source projects) saying that they buy into this idea. Why don’t our mainstream platforms look like this yet?

Why We Can’t Have Nice Things

Let’s imagine another world for a moment. Let’s imagine a world in which universities, not vendors, designed and built our online learning environments. Where students and teachers put their heads together to design the perfect system. What wonders would they come up with? What would they build?

Why, they would build an LMS. They did build an LMS. Blackboard started as a system designed by a professor and a TA at Cornell University. Desire2Learn (a.k.a. Brightspace) was designed by a student at the University of Waterloo. Moodle was the project of a graduate student at Curtin University in Australia. Sakai was built by a consortium of universities. WebCT was started at the University of British Columbia. ANGEL at Indiana University.

OK, those are all ancient history. Suppose that now, after the consumer web revolution, you were to get a couple of super-bright young graduate students who hate their school’s LMS to go on a road trip, talk to a whole bunch of teachers and students at different schools, and design a modern learning platform from the ground up using Agile and Lean methodologies. What would they build?

They would build Instructure Canvas. They did build Instructure Canvas. Presumably because that’s what the people they spoke to asked them to build.

In fairness, Canvas isn’t only a traditional LMS with a better user experience. It has a few twists. For example, from the very beginning, you could make your course 100% open in Canvas. If you want to teach out on the internet, undisguised and naked, making your Canvas course site just one class resource of many on the open web, you can. And we all know what happened because of that. Faculty everywhere began opening up their classes. It was sunlight and fresh air for everyone! No more walled gardens for us, no sirree Bob.

That is how it went, isn’t it?

Isn’t it?

I asked Brian Whitmer the percentage of courses on Canvas that faculty have made completely open. He didn’t have an exact number handy but said that it’s “really low.” Apparently, lots of faculty still like their gardens walled. Today, in 2014.

Canvas was a runaway hit from the start, but not because of its openness. Do you know what did it? Do you know what single set of capabilities, more than any other, catapulted it to the top of the charts, enabling it to surpass D2L in market share in just a few years? Do you know what the feature set was that had faculty from Albany to Anaheim falling to their knees, tears of joy streaming down their faces, and proclaiming with cracking, emotion-laden voices, “Finally, an LMS company that understands me!”?

It was Speed Grader. Ask anyone who has been involved in an LMS selection process, particularly during those first few years of Canvas sales.

Here’s the hard truth: While Jonathan wants to think of the LMS as “training wheels” for the internet (like AOL was), there is overwhelming evidence that lots of faculty want those training wheels. They ask for them. And when given a chance to take the training wheels off, they usually don’t.

Let’s take another example: roles and permissions.[2] Audrey Watters recently called out inflexible roles in educational software (including but not limited to LMSs) as problematic:

Ed-tech works like this: you sign up for a service and you’re flagged as either “teacher” or “student” or “admin.” Depending on that role, you have different “privileges” — that’s an important word, because it doesn’t simply imply what you can and cannot do with the software. It’s a nod to political power, social power as well.

Access privileges in software are designed to enforce particular ways of working together, which can be good if and only if everybody agrees that the ways of working together that the access privileges are enforcing are the best and most productive for the tasks at hand. There is no such thing as “everybody agrees” on something like the one single best way for people to work together in all classes. If the access privileges (a.k.a. “roles and permissions”) are not adaptable to the local needs, if there is no rational and self-evident reason for them to be structured the way they are, then they end up just reinforcing the crudest caricatures of classroom power relationships rather than facilitating productive cooperation. Therefore, standard roles and permissions often do more harm than good in educational software. I complained about this problem in 2005 when writing about the LMOS and again in 2006 when reviewing an open source LMS from the UK called Bodington. (At the time, Stephen Downes mocked me for thinking that this was an important aspect of LMS design to consider.)

Bodington had radically open permissions structures. You could attach any permissions (read, write, etc.) to any object in the system, making individual documents, discussions, folders, and what have you totally public, totally private, or somewhere in between. You could collect sets of permissions and define them as any roles that you wanted. Bodington also, by the way, had no notion of a “course.” It used a geographical metaphor. You would have a “building” or a “floor” that could house a course, a club, a working group, or anything else. In this way, it was significantly more flexible than any LMS I had seen before.
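As a rough sketch of that Bodington-style idea (illustrative names in Python, not the actual Bodington implementation), attaching permissions to individual objects and letting each site define its own roles can be this simple:

```python
# Object-level permissions with user-defined roles, in the spirit of
# the Bodington model described above. Names are illustrative.

class Resource:
    def __init__(self, name):
        self.name = name
        self.acl = {}          # permission -> set of role names allowed

    def grant(self, role, *permissions):
        """Attach one or more permissions on this object to a role."""
        for p in permissions:
            self.acl.setdefault(p, set()).add(role)

    def allows(self, user_roles, permission):
        """True if any of the user's roles carries the permission here."""
        return bool(self.acl.get(permission, set()) & set(user_roles))

# Roles are just labels a site invents for itself -- not fixed
# "teacher"/"student" categories baked into the platform.
essay = Resource("essay-draft")
essay.grant("author", "read", "write")
essay.grant("peer-reviewer", "read", "comment")
essay.grant("anyone", "read")        # make the document public

print(essay.allows({"peer-reviewer"}, "comment"))  # True
print(essay.allows({"peer-reviewer"}, "write"))    # False
```

The point of the sketch is that nothing in the data model privileges “instructor” over “student”; the power relationships are whatever the people running the space decide they should be.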

Of course, I’m sure you’ve all heard of Bodington, its enormous success in the market, and how influential it’s been on LMS design.[3]

What’s that? You haven’t?


OK, but surely you’re aware of D2L’s major improvements in the same area. If you recall your LMS patent infringement history, then you’ll remember that roles and permissions were exactly the thing that Blackboard sued D2L over. The essence of the patent was this: Blackboard claimed to have invented a system where the same person could be given the role of “instructor” in one course site and the role of “student” in another. That’s it. And while Blackboard eventually lost that fight, there was a court ruling in the middle in which D2L was found to have infringed on the patent. In order to get around it, the company ripped out its predefined roles, making it possible (and necessary) for every school to create its own. As many as they want. Defined however they want. I remember Ken Chapman telling me that, even though it was the patent suit that pushed him to think this way, in the end he felt that the new way was a significant improvement over the old way of doing things.

And the rest, as you know, was history. The Chronicle and Inside Higher Ed wrote pieces describing the revolution on campuses as masses of faculty demanded flexible roles and permissions. Soon it caught the attention of Thomas Friedman, who proclaimed it to be more evidence that the world is indeed flat. And the LMS market has never been the same since.

That is what happened…right?


Do you want to know why the LMS has barely evolved at all over the last twenty years and will probably barely evolve at all over the next twenty years? It’s not because the terrible, horrible, no-good LMS vendors are trying to suck the blood out of the poor universities. It’s not because the terrible, horrible, no-good university administrators are trying to build a panopticon in which they can oppress the faculty. The reason that we get more of the same year after year is that, year after year, when faculty are given an opportunity to ask for what they want, they ask for more of the same. It’s because every LMS review process I have ever seen goes something like this:

  • Professor John proclaims that he spent the last five years figuring out how to get his Blackboard course the way he likes it and, dammit, he is not moving to another LMS unless it works exactly the same as Blackboard.
  • Professor Jane says that she hates Blackboard, would never use it, runs her own Moodle installation for her classes off her computer at home, and will not move to another LMS unless it works exactly the same as Moodle.
  • Professor Pat doesn’t have strong opinions about any one LMS over the others except that there are three features in Canvas that must be in whatever platform they choose.
  • The selection committee declares that whatever LMS the university chooses next must work exactly like Blackboard and exactly like Moodle while having all the features of Canvas. Oh, and it must be “innovative” and “next-generation” too, because we’re sick of LMSs that all look and work the same.

Nobody comes to the table with an affirmative vision of what an online learning environment should look like or how it should work. Instead, they come with this year’s checklists, which are derived from last year’s checklists. Rather than coming with ideas of what they could have, they come with their fears of what they might lose. When LMS vendors or open source projects invent some innovative new feature, that feature gets added to next year’s checklist if it avoids disrupting the rest of the way the system works and mostly gets ignored or rejected to the degree that it enables (or, heaven forbid, requires) substantial change in current classroom practices.

This is why we can’t have nice things. I understand that it is more emotionally satisfying to rail against the Powers That Be and ascribe the things that we don’t like about ed tech to capitalism and authoritarianism and other nasty isms. And in some cases there is merit to those accusations. But if we were really honest with ourselves and looked at the details of what’s actually happening, we’d be forced to admit that the “ism” most immediately responsible for crappy, harmful ed tech products is consumerism. It’s what we ask for and how we ask for it. As with our democracy, we get the ed tech that we deserve.

In fairness to faculty, they don’t always get an opportunity to ask good questions. For example, at Colorado State University, where Jonathan works, the administrators, in their infinite wisdom, have decided that the best course of action is to choose their next LMS for their faculty by joining the Unizin coalition. But that is not the norm. In most places, faculty do have input but don’t insist on a process that leads to a more thoughtful discussion than compiling a long list of feature demands. If you want to agitate for better ed tech, then changing the process by which your campus evaluates educational technology is the best place to start.

There. I did it. I wrote the damned “future of the LMS” post. And I did it mostly by copying and pasting from posts I wrote 10 years ago. I am now going to go pour myself a drink. Somebody please wake me again in another decade.

  1. Remember that term?
  2. Actually, it’s more of an extension of the previous example. Roles and permissions are what make a garden walled or not, which is another reason why they are so important.
  3. The Bodington project community migrated to Sakai, where some, but not all, of its innovations were transplanted to the Sakai platform.

The post Dammit, the LMS appeared first on e-Literate.

Selecting a Data Warehouse Appliance [VIDEO]

Chris Foot - Mon, 2014-11-10 11:57


Hi, welcome to RDX! Selecting a data warehouse appliance is a very important decision to make. The amount of data that companies store is continuously increasing, and DBAs now have many data storage technologies available to them. Uninformed decisions may cause a number of problems including limited functionality, poor performance, lack of scalability, and complex administration.

Oracle, Microsoft, and IBM understand the common data warehousing challenges DBAs face and offer data warehouse appliances that help simplify administration and help DBAs effectively manage large amounts of data.

Need help determining which data warehouse technology is best for your business? Be sure to check out RDX VP of Technology Chris Foot’s recent blog post, Data Warehouse Appliance Offerings, where he provides more details about each vendor’s architecture and the benefits of each.

Thanks for watching. See you next time!

The post Selecting a Data Warehouse Appliance [VIDEO] appeared first on Remote DBA Experts.

Presentations to go to at #DOAG2014

The Oracle Instructor - Mon, 2014-11-10 11:26

As every year, there’s a long list of great speakers with interesting talks to attend at the DOAG (German Oracle User Group) annual conference. Sadly I cannot attend them all, so I’ve got to make a choice:

First day

Database Upgrade to Oracle – Effort, Approach, Customers by Mike Dietrich, Oracle

Close Encounters of the Third Kind: XML DB for the DBA by Carsten Czarski, Oracle

Advanced RAC Programming Features by Martin Bach, Enkitec

Automatic Data Optimization, Heat Map, and Compression 12c Live by Ulrike Schwinn, Oracle

Second day

Understanding Oracle RAC Internals – The Cache Fusion Edition by Markus Michalewicz, Oracle

The Recovery Area: Why Using It Is Recommended – I have to go to that one because I present it myself :-)

Geodistributed Oracle GoldenGate and Oracle Active Data Guard: Global Data Services by Larry Carpenter, Oracle

Oracle Database In-Memory – a game changer for data warehousing? by Hermann Baer & Maria Colgan, Oracle

Oracle Distributed Transactions by Joel Goodman, Oracle

Third day

High Noon – Better Odds of Survival in the Database Security Shoot-Out by Heinz-Wilhelm Fabry, Oracle

Tuning Tools for Real Men and Bargain Hunters – Life Without EM12c by Björn Rost, portrix Systems

Best Practices in Managing Oracle RAC Performance in Real Time by Mark Scardina, Oracle

Maximum Availability with Oracle Multitenant: Seeing Is Believing by Larry Carpenter, Oracle

Tagged: #DOAG2014
Categories: DBA Blogs

Penguins and Conferences

Floyd Teter - Mon, 2014-11-10 10:13
I just came back from the East Coast Oracle User Group conference.  Good conference.  Lots of solid, technical knowledge being shared.  Being there got me to thinking...

Over the past few years, a big concern for people attending conferences is the need to justify their attendance.  It's a big deal.  And, in my own mind, the only real justification is what you bring back, share and apply post-conference.  Let me tell you a story (can you hear all of my children groaning in the background?).

All the penguins in my neighborhood get together for a little meeting every month.  They talk about the happenings around the neighborhood, complain about the weather, catch up with each other, share info on where the fish are, and all sorts of things.  It's just a little social gathering.  At least, it was until last month.

Last month, a new penguin stopped by.  He was on his way north, looking for better penguin weather.  And he was flying!  The local penguin crew was stunned because, as everybody knows, penguins can't fly.  But the new bird promised to teach them all to fly.  And, after about four hours of instruction and practice, all those penguins were flying.  Soaring.  Barrel rolls.  Loops.  Bomber dives.  Spins.  What a bunch of happy penguins, high-fiving each other and laughing about the new knowledge and skills they acquired.

After another four hours, those penguins were exhausted.  Huffing and puffing.  Soreness from muscles they didn't even know they had.  But they were exhilarated. They all agreed it was a spectacular day.

And then they all walked home...

You want to justify your attendance at a conference?  Be smarter than my local penguins.

The Cloud UX Lab

Oracle AppsLab - Mon, 2014-11-10 09:57

There’s a post over on VoX about a new OAUX lab at Oracle HQ, the Cloud UX Lab.


Jeremy Ashley, VP, in the new lab, image used with permission.

Finished just before OOW in September, this lab is a showcase for OAUX projects, including a few of ours.

The lab reminds me of a spacecraft from the distant future, the medical bay or the flight deck. It’s a very cool place, directly inspired and executed by our fearless leader, Jeremy Ashley (@jrwashley), an industrial designer by trade.

I actually got to observe the metamorphosis of this space from something that felt like a doctor’s office waiting room into the new hotness. Looking back on those first meetings, I never expected it would turn out so very awesome.

Anyway, the reason why I got to tag along on this project is because our team will be filling the control room for this lab with our demos. Noel (@noelportugal) and Jeremy have a shared vision for that space, which will be a great companion piece to the lab and equally awesome.

So, if you’re at Oracle HQ, book a tour and stop by the new Cloud UX Lab, experience the new hotness and speculate on what Noel is cooking up behind the glass.

Seeing slow startup of SOA, OSB, and other Java-based applications? Then verify entropy

Arun Bavera - Mon, 2014-11-10 09:20
We faced slow domain creation and slow domain startup, and resolved both with proper entropy settings.

You should be able to select the faster but slightly less secure /dev/urandom on Linux using:

-Djava.security.egd=file:/dev/urandom

By default /dev/urandom is already configured as the seed source, but as mentioned below, this exact value is ignored by Java.
It doesn’t work with Java 5 and later (Java Bug 6202721). The suggested workaround is to use -Djava.security.egd=file:/dev/./urandom (note the extra ‘/./’).
You can also set it in your environment, for example via USER_MEM_ARGS, which the domain environment script exports like this:

if [ "${USER_MEM_ARGS}" != "" ] ; then
  MEM_ARGS="${USER_MEM_ARGS}"
  export MEM_ARGS
fi

Or at runtime:
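A sketch of what the runtime check and override could look like (the entropy_avail counter is standard on Linux; the server start-script name is illustrative):

```shell
# How much entropy does the kernel pool have right now? (Linux)
# Low values (a few hundred bits or less) mean reads from /dev/random can block.
cat /proc/sys/kernel/random/entropy_avail 2>/dev/null || echo "entropy counter not available"

# Point the JVM at the non-blocking source for this run only;
# the extra '/./' works around Java Bug 6202721.
JAVA_OPTIONS="${JAVA_OPTIONS} -Djava.security.egd=file:/dev/./urandom"
export JAVA_OPTIONS
echo "JAVA_OPTIONS=${JAVA_OPTIONS}"

# ./startWebLogic.sh   # then start the server (script name illustrative)
```

If the entropy counter is consistently low, installing an entropy daemon such as rngd or haveged on the host is another common remedy.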

Categories: Development

PASS Summit 2014: My favorite sessions of this great event

Yann Neuhaus - Mon, 2014-11-10 08:19

The PASS Summit 2014 is now over and it’s time for us to go back home. I want to share my favorite sessions with you in this post.



The PASS Summit was really an amazing event regardless of the expertise area (BI or SQL Server engine). This was also a good opportunity to meet SQL Server guys in other countries as well as the French SQL Server community. We attended a lot of interesting sessions and I admit it was often difficult to choose between two or more sessions at the same time. Nevertheless, here is my list of favorite sessions I was able to participate in:



SQL Server 2014 AlwaysOn (high availability and disaster recovery) and troubleshooting – Kevin Farlee & Trayce Jordan

These sessions were a good refresher on what SQL Server AlwaysOn actually means. Indeed, AlwaysOn is only a label covering two main technologies: SQL Server FCI and availability groups. Kevin Farlee described some improvements SQL Server 2014 brings to availability groups, such as better diagnostics, availability of the read-only replicas, network resilience that reduces node evictions, use of cluster shared volumes with SQL Server 2014 FCI, and more. I have added a few blog post ideas to my to-do list. We also got interesting feedback from the support team about different availability group issues and how they resolved them. It was very funny when Trayce Jordan asked the room for our own experiences with availability group issues. In my case, I know I have already had to face some, and it seems several other people in the room have too :-)


Latches, locks, free structures and SQL Server IO – Bob Ward & Klaus Aschenbrenner

I have to admit my brain was sometimes burning during both sessions … which is what you can expect from a 500-level session :-). To summarize what I took away from them: on one side, we learned about latches and spinlocks, how they can be problematic for critical workloads, and how we can analyze and resolve them. On the other side, we learned about the low-level Windows APIs and structures SQL Server uses for I/O. We saw interesting demos with DMVs, extended events, and the Windows debugger. Personally, I’m a fan of these topics and I hope to give some sessions on this subject in the future.




Advanced data recovery techniques – Paul Randal

A very interesting session by Paul Rand in which he described different corruption cases and how to resolve them. The most part of the session was dedicated only to demos and I was able to see that some corruptions are relatively simple to repair while for others this is not the case, as they require strong internal structure skills concerning SQL Server pages. A powerful command that Paul used during the session: DBCC WRITPAGE - in order to corrupt database pages. I occasionally used this command in my blog. Be careful not to use it in production! Probably the most importance lesson to learn from this session: Practice! Practice! Practice!


Query Tuning Mastery: Manhandling Parallelism, 2014 Edition – Adam Machanic

A very impressive session on how to deal with parallelism in SQL Server. Adam already gave a session on this subject in 2012, and the new edition is on the same level as the previous one. This is the type of session you have to listen to more than once to hope to understand all the information. The same applies to Bob Ward’s and Paul Randal’s sessions. A bit of work in sight...




High Performance Infrastructure for SQL Server 2014 – Michael Frandson

My last session of this summit and a good surprise for me! I didn’t know Michael before; he specializes in storage, scalability, and virtualization and has contributed to a lot of projects with Microsoft and SQLCAT. Michael discussed new storage features in Windows Server 2012 / R2 and how they relate to SQL Server. We got an overview of InfiniBand, SMB, Multipath I/O, RDMA, RoCE, and NAND flash at a steady pace. Then we moved on to SQL Server features related to storage, like buffer pool extension, in-memory tables, and the use of local disks with SQL Server FCI. Finally, Michael finished his session with a pretty smart reminder: SQL Server performance is achieved both at the application level (developers) and at the infrastructure level (DBAs / architects).


Before leaving Seattle, we had our ceremonial breakfast at the Seattle marketplace and our traditional burger party at Johnny Rockets ...




See you at the next PASS Summit!