
Jonathan Lewis

Oracle Scratchpad

Flashback Logging

Wed, 2015-03-11 09:21

One of the waits that is specific to ASSM (automatic segment space management) is the “enq: FB – contention” wait. You find that the “FB” enqueue has the following description and wait information when you query v$lock_type and v$event_name:

SQL> execute print_table('select * from v$lock_type where type = ''FB''')
TYPE                          : FB
NAME                          : Format Block
ID1_TAG                       : tablespace #
ID2_TAG                       : dba
IS_USER                       : NO
DESCRIPTION                   : Ensures that only one process can format data blocks in auto segment space managed tablespaces

SQL> execute print_table('select * from v$event_name where name like ''enq: FB%''')
EVENT#                        : 806
EVENT_ID                      : 1238611814
NAME                          : enq: FB - contention
PARAMETER1                    : name|mode
PARAMETER2                    : tablespace #
PARAMETER3                    : dba
WAIT_CLASS_ID                 : 1893977003
WAIT_CLASS#                   : 0
WAIT_CLASS                    : Other

This tells us that a process will acquire the lock when it wants to format a batch of blocks in a segment in a tablespace using ASSM – and prior experience tells us that this is a batch of 16 consecutive blocks in the current extent of the segment; and when we see a wait for an FB enqueue we can assume that two sessions have simultaneously tried to format the same new batch of blocks and one of them is waiting for the other to complete the format. In some ways, this wait can be viewed (like the “read by other session” wait) in a positive light – if the second session weren’t waiting for the first session to complete the block format it would have to do the formatting itself, so the wait means the end-user sees a reduced response time. On the other hand the set of 16 blocks picked by a session is dependent on its process id, so the second session might have picked a different set of 16 blocks to format, which means in the elapsed time of one format call the segment could have had 32 blocks formatted – this wouldn’t have improved the end-user’s response time, but it would mean that more time would pass before another session had to spend time formatting blocks. Basically, in a highly concurrent system, there’s not a lot you can do about FB waits (unless, of course, you do some clever partitioning of the hot objects).
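If you want a quick measure of how much time an instance is losing to this wait, a simple check of v$system_event (or v$session_event for a single session) using the event name we saw above is enough:

select  event, total_waits, time_waited_micro
from    v$system_event
where   event = 'enq: FB - contention'
;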

There is actually one set of circumstances where you can have some control of how much time is spent on the wait, but before I mention it I’d like to point out a couple more details about the event itself. First, the parameter3/id2_tag is a little misleading: you can work out which blocks are being formatted (if you really need to), but the “dba” is NOT a data block address (which you might think if you look at the name and a few values). There is a special case when the FB enqueue is being held while you format the blocks in the 64KB extents that you get from system allocated extents, and there’s probably a special case (which I haven’t bothered to examine) if you create a tablespace with uniform extents that aren’t a multiple of 16 blocks, but in the general case the “dba” consists of two parts – a base “data block address” and a single (hex) digit offset identifying which batch of 16 blocks will be formatted.

For example: a value of 0x01800242 means start at data block address 0x01800240, count forward 2 * 16 blocks then format 16 blocks from that point onwards. Since the last digit can only range from 0x0 to 0xf this means the first 7 (hex) digits of a “dba” can only reference 16 batches of 16 blocks, i.e. 256 blocks. It’s not coincidence (I assume) that a single bitmap space management block can only cover 256 blocks in a segment – the FB enqueue is tied very closely to the bitmap block.
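If you ever do need to translate the “dba” into something recognisable, here’s a little sketch of the arithmetic using my example value 0x01800242 (the masking assumes the general case of 16 batches of 16 blocks described above):

-- mask off the low-order hex digit to get the base dba,
-- then use that digit as the batch offset within the 256-block range
select
        dbms_utility.data_block_address_file(
                trunc(to_number('01800242','xxxxxxxx')/16) * 16
        )                                                       base_file#,
        dbms_utility.data_block_address_block(
                trunc(to_number('01800242','xxxxxxxx')/16) * 16
        )                                                       base_block#,
        mod(to_number('01800242','xxxxxxxx'),16)                batch#,
        dbms_utility.data_block_address_block(
                trunc(to_number('01800242','xxxxxxxx')/16) * 16
        ) + 16 * mod(to_number('01800242','xxxxxxxx'),16)       batch_start_block#
from
        dual
;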

So now it’s time to ask why this discussion of the FB enqueue appears in an article titled “Flashback Logging”. Enable the 10704 trace at level 10, along with the 10046 trace at level 8, and you’ll see. Remember that Oracle may have to log the old version of a block before modifying it, and if it’s a block that’s being reused it may contribute to “physical reads for flashback new” – here’s a trace of a “format block” event:

*** 2015-03-10 12:50:35.496
ksucti: init session DID from txn DID:
        ksqlkdid: 0001-0023-00000014

*** 2015-03-10 12:50:35.496
*** ksudidTrace: ksqgtl
        ktcmydid(): 0001-0023-00000014
        ksusesdi:   0000-0000-00000000
        ksusetxn:   0001-0023-00000014
ksqgtl: RETURNS 0
WAIT #140627501114184: nam='db file sequential read' ela= 4217 file#=6 block#=736 blocks=1 obj#=192544 tim=1425991835501051
WAIT #140627501114184: nam='db file sequential read' ela= 674 file#=6 block#=737 blocks=1 obj#=192544 tim=1425991835501761
WAIT #140627501114184: nam='db file sequential read' ela= 486 file#=6 block#=738 blocks=1 obj#=192544 tim=1425991835502278
WAIT #140627501114184: nam='db file sequential read' ela= 522 file#=6 block#=739 blocks=1 obj#=192544 tim=1425991835502831
WAIT #140627501114184: nam='db file sequential read' ela= 460 file#=6 block#=740 blocks=1 obj#=192544 tim=1425991835503326
WAIT #140627501114184: nam='db file sequential read' ela= 1148 file#=6 block#=741 blocks=1 obj#=192544 tim=1425991835504506
WAIT #140627501114184: nam='db file sequential read' ela= 443 file#=6 block#=742 blocks=1 obj#=192544 tim=1425991835504990
WAIT #140627501114184: nam='db file sequential read' ela= 455 file#=6 block#=743 blocks=1 obj#=192544 tim=1425991835505477
WAIT #140627501114184: nam='db file sequential read' ela= 449 file#=6 block#=744 blocks=1 obj#=192544 tim=1425991835505985
WAIT #140627501114184: nam='db file sequential read' ela= 591 file#=6 block#=745 blocks=1 obj#=192544 tim=1425991835506615
WAIT #140627501114184: nam='db file sequential read' ela= 449 file#=6 block#=746 blocks=1 obj#=192544 tim=1425991835507157
WAIT #140627501114184: nam='db file sequential read' ela= 489 file#=6 block#=747 blocks=1 obj#=192544 tim=1425991835507684
WAIT #140627501114184: nam='db file sequential read' ela= 375 file#=6 block#=748 blocks=1 obj#=192544 tim=1425991835508101
WAIT #140627501114184: nam='db file sequential read' ela= 463 file#=6 block#=749 blocks=1 obj#=192544 tim=1425991835508619
WAIT #140627501114184: nam='db file sequential read' ela= 685 file#=6 block#=750 blocks=1 obj#=192544 tim=1425991835509400
WAIT #140627501114184: nam='db file sequential read' ela= 407 file#=6 block#=751 blocks=1 obj#=192544 tim=1425991835509841

*** 2015-03-10 12:50:35.509
ksqrcl: FB,16,18002c2
ksqrcl: returns 0

Note: we acquire the lock (ksqgtl), read 16 blocks by “db file sequential read”, write them to the flashback log (buffer), format them in memory, release the lock (ksqrcl). That lock can be held for quite a long time – in this case 13 milliseconds. Fortunately all the single block reads after the first have been accelerated by O/S prefetching; your timings may vary.
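For reference, this is the sort of thing I do to enable the two traces at the session level before running the test (and to switch them off again afterwards):

alter session set events '10704 trace name context forever, level 10';
alter session set events '10046 trace name context forever, level 8';

--      run the statement that formats new blocks here

alter session set events '10046 trace name context off';
alter session set events '10704 trace name context off';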

The higher the level of concurrent activity the more likely it is that processes will collide trying to format the same 16 blocks (the lock is exclusive, so the second will request and wait, then find that the blocks are already formatted when it finally gets the lock). This brings me to the special case where waits for the FB enqueue might have a noticeable impact … if you’re running parallel DML and Oracle decides to use “High Water Mark Brokering”, which means the parallel slaves are inserting data into a single segment instead of each using its own private segment and leaving the query co-ordinator to clean up round the edges afterwards. I think this is most likely to happen if you have a tablespace using fairly large extents and Oracle thinks you’re going to process a relatively small amount of data (e.g. small indexes on large tables) – the trade-off is between collisions between processes and wasted space from the private segments.
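If you want to catch the collisions as they happen, a quick check from a separate session is to look for FB enqueues in v$lock – the id1/id2 columns are the tablespace number and “dba” we saw in v$lock_type above, and a non-zero request means a session is waiting:

select  sid, type, id1 tablespace#, id2 dba, lmode, request, block
from    v$lock
where   type = 'FB'
;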

Flashback logging

Mon, 2015-03-09 08:44

When database flashback first appeared many years ago I commented (somewhere, but don’t ask me where) that it seemed like a very nice idea for full-scale test databases if you wanted to test the impact of changes to batch code, but I couldn’t really see it being a good idea for live production systems because of the overheads.

Features and circumstances change, of course, and someone recently pointed out that if your production system is multi-terabyte and you’re running with a Data Guard standby, and some minor catastrophe forces you to switch to the standby, then you don’t really want to be running without a standby for the time it would take to restore and recover an old backup to create a new one; in some cases you could flash back the original primary to a point before the catastrophe and turn it into the standby from then onward. Sounds like a reasonable argument to me – but you might still need to think very carefully about how to minimise the impact of enabling database flashback, especially if your database is a data warehouse, DSS, or mixed system.
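For completeness, checking and switching on database flashback is trivial – the sketch below assumes the recovery area parameters (db_recovery_file_dest and db_recovery_file_dest_size) are already set, and note that on older versions you have to be in mount mode to enable it:

select flashback_on from v$database;

alter database flashback on;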

Imagine you have batch processes that revolve around loading data into an empty table with a couple of indexes – it’s a production system so you’re running with archivelog mode enabled, and then you’re told to switch on database flashback. How much impact will that have on your current loading strategies ? Here’s a little bit of code to help you on your way – I create an empty table as a clone of the view all_objects, and create one index, then I insert 1.6M rows into it. I’ve generated 4 different sets of results: flashback on or off, then either maintaining the index during loading or marking it unusable then rebuilding it after the load. Here’s the minimum code:

create table t1
segment creation immediate
tablespace test_8k
as
select  *
from    all_objects
where   rownum < 1
;

create index t1_i1 on t1(object_name, object_id) tablespace test_8k_assm_auto;
-- alter index t1_i1 unusable;

insert /*+ append */ into t1
with object_data as (
        select --+ materialize
                *
        from    all_objects
        where   rownum <= 50000
),
counter as (
        select  --+ materialize
                rownum id
        from    dual
        connect by
                level <= 32
)
select
        /*+ leading (ctr obj) use_nl(obj) */
        obj.*
from
        counter         ctr,
        object_data     obj
;

-- 50,000 rows x 32 copies = 1.6M rows inserted

-- alter index t1_i1 rebuild;

Here’s a quick summary of the timings I got before I talk about the effects:

Flashback off:
Maintain index in real time: 138 seconds
Rebuild index at end: 66 seconds

Flashback on:
Maintain index in real time: 214 seconds
Rebuild index at end: 112 seconds

It is very important to note that these timings do not allow you to draw any generic conclusions about optimum strategies for your systems. The only interpretation you can put on them is that different circumstances may lead to very different timings, so it’s worth looking at what you could do with your own systems to find good strategies for different cases.

Most significant, probably, is the big difference between the two options where flashback is enabled – if you’ve got to use it, how do you do damage limitation ? Here are some key figures, namely the file I/O stats and some instance activity stats, I/O stats first:

"Real-time" maintenance
Tempfile Stats - 09-Mar 11:41:57
file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs    Writes      Blocks   Avg Csecs    File name
-----       -----      ------    --------   ---------     -------   ---------    ------      ------   ---------    -------------------
    1       1,088      22,454      20.638        .063         296        .000     1,011      22,455        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_938s5v4n_.tmp

Datafile Stats - 09-Mar 11:41:58
file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs         Writes      Blocks   Avg Csecs    File name
-----       -----      ------    --------   ---------     -------   ---------     -------   ---------         ------      ------   ---------    -------------------
    3      24,802      24,802       1.000        .315      24,802        .315           0        .000          2,386      20,379        .239    /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
    5         718      22,805      31.762        .001           5        .000         713        .002            725      22,814        .002    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
    6       8,485       8,485       1.000        .317       8,485        .317           0        .000            785       6,938        .348    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf

Mark Unusable and Rebuild
Tempfile Stats - 09-Mar 11:53:04
file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs    Writes      Blocks   Avg Csecs    File name
-----       -----      ------    --------   ---------     -------   ---------    ------      ------   ---------    -------------------
    1       1,461      10,508       7.192        .100           1        .017       407      10,508        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_938s5v4n_.tmp

Datafile Stats - 09-Mar 11:53:05
file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs         Writes      Blocks   Avg Csecs    File name
-----       -----      ------    --------   ---------     -------   ---------     -------   ---------         ------      ------   ---------    -------------------
    3          17          17       1.000       5.830          17       5.830           0        .000             28          49       1.636    /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
    5         894      45,602      51.009        .001           2        .002         892        .001            721      22,811        .026    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
    6       2,586       9,356       3.618        .313         264       3.064       2,322        .001          2,443       9,214        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf

There are all sorts of interesting differences in these results due to the different way in which Oracle handles the index. For the “real-time” maintenance the session accumulates the key values and rowids as it writes the table, then sorts them, then does a cache-based bulk update to the index. For the “rebuild” strategy Oracle simply scans the table after it has been loaded, sorts the key values and rowids, then writes the index to disc using direct path writes; you might expect the total work done to be roughly the same in both cases – but it’s not.

I’ve shown 4 files: the temporary tablespace, the undo tablespace, the tablespace holding the table and the tablespace holding the index; and the first obvious difference is the number of blocks written and read and the change in average read size on the temporary tablespace. Both sessions had to spill to disc for the sort, and both did a “one-pass” sort; the difference in the number of blocks written and read appears because the “real-time” session wrote the sorted data set back to the temporary tablespace one more time than it really needed to – it merged the sorted data in a single pass but wrote the data back to the temporary tablespace before reading it again and applying it to the index (for a couple of points on tracing sorts, see this posting). I don’t know why Oracle chose to use a much smaller read slot size in the second case, though.

The next most dramatic thing we see is that real-time maintenance introduced 24,800 single block reads with 20,000 blocks written to the undo tablespace (with a few thousand more that would eventually be written by dbwr – I should have included a “flush buffer_cache” in my tests), compared to virtually no activity in the “rebuild” case. The rebuild generates no undo; real-time maintenance (even starting with an empty index) generates undo because (in theory) someone might look at the index and need to see a read-consistent image of it. So it’s not surprising that we see a lot of writes to the undo tablespace – but where did the reads come from? I’ll answer that question later.

It’s probably not a surprise to see the difference in the number of blocks read from the table’s tablespace. When we rebuild the index we have to do a tablescan to acquire the data; but, again, we can ask why we saw 22,800 blocks read from the table’s tablespace when we were doing the insert with real-time maintenance. On a positive note those reads were multiblock reads, but what caused them? Again, I’ll postpone the answer.

Finally we see that the number of blocks read (reason again postponed) and written to the index’s tablespace are roughly similar. The writes differ because the rebuild is doing direct path writes, while the real-time maintenance is done in the buffer cache, so there are some outstanding index blocks still to be written. The reads are similar, though one test is exclusively single block reads and the other is doing (small) multiblock reads – which is just a little bit more efficient. The difference in the number of reads is because the rebuild was at the default pctfree=10 while the index maintenance was a massive “insert in order” which would have packed the index leaf blocks at 100%.

To start the explanation – here are the most significant activity stats – some for the session, a couple for the instance:

"Real-time" maintenance
Name                                                                     Value
----                                                                     -----
physical reads for flashback new                                        33,263
redo entries                                                           118,290
redo size                                                          466,628,852
redo size for direct writes                                        187,616,044
undo change vector size                                            134,282,356
flashback log write bytes                                          441,032,704

Mark Unusable and Rebuild
Name                                                                     Value
----                                                                     -----
physical reads for flashback new                                           156
redo entries                                                            35,055
redo size                                                          263,801,792
redo size for direct writes                                        263,407,628
undo change vector size                                                122,156
flashback log write bytes                                          278,036,480

The big clue is the “physical reads for flashback new”. When you modify a block, if it hasn’t been dumped into the flashback log recently (as defined by the hidden _flashback_barrier_interval parameter) then the original version of the block has to be written to the flashback log before the change can be applied; moreover, if a block is being “newed” (Oracle-speak for being reformatted for a new use) it will also be written to the flashback log. Given the way that the undo tablespace works it’s not surprising that virtually every block you modify in the undo tablespace has to be written to the flashback log before you use it. The 33,264 blocks read for “flashback new” consist of the 24,800 blocks read from the undo tablespace when we were maintaining the index in real-time plus a further 8,460 from “somewhere” – which, probably not coincidentally, matches the number of blocks read from the index tablespace as we create the index. The odd thing is that we don’t see the 22,800 reads on the table’s tablespace (which don’t occur when flashback is off) reported as “physical reads for flashback new”; this looks like a reporting error to me.

So the volume of undo requires us to generate a lot of flashback log as well as the usual increase in the amount of redo. As a little side note, we get confirmation from these stats that the index was rebuilt using direct path writes – there’s an extra 75MB of redo for direct writes.


If you are running with flashback enabled in a system that’s doing high volume data loading remember that the “physical reads for flashback new” could be a major expense. This is particularly expensive on index maintenance, which can result in a large number of single block reads of the undo tablespace. The undo costs you three times – once for the basic cost of undo (and associated redo), once for the extra reads, and once for writing the flashback log. Although you have to do tablescans to rebuild indexes, the cost of an (efficient, possibly direct path) tablescan may be much less than the penalty of the work relating to flashback.
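The statistics involved are easy to keep an eye on – a quick check of v$sysstat (or v$mystat for your own session) before and after a load will tell you how much flashback-related work the load generated:

select  name, value
from    v$sysstat
where   name in (
                'physical reads for flashback new',
                'flashback log write bytes',
                'redo size',
                'undo change vector size'
        )
order by
        name
;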

Footnote: since you can’t (officially) load data into a table with an unusable unique index or constraint, you may want to experiment with using non-unique indexes to support unique/PK constraints and disabling the constraints while loading.
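As a sketch of what I mean (the object names here are invented, and you should test the idea carefully on your own version before adopting it):

create table t3 (id number(10,0) not null, padding varchar2(100));

create index t3_i1 on t3(id);           -- deliberately non-unique

alter table t3 add constraint t3_pk primary key(id) using index t3_i1;

alter table t3 disable constraint t3_pk keep index;

--      load the data here

alter table t3 enable constraint t3_pk;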

255 columns

Wed, 2015-02-18 18:45

You all know that having more than 255 columns in a table is a Bad Thing ™ – and surprisingly you don’t even have to get to 255 to hit the first bad thing about wide tables. If you’ve ever wondered what sorts of problems you can have, here are a few:

  • If you’re still running 10g and gather stats on a table with more than roughly 165 columns then the query Oracle uses to collect the stats will only handle about 165 of them at a time; so you end up doing multiple (possibly sampled) tablescans to gather the stats. The reason why I can’t give you an exact figure for the number of columns is that it depends on the type and nullity of the columns – Oracle knows that some column types are fixed length (e.g. date types, char() types) and if any columns are declared not null then Oracle doesn’t have to worry about counting nulls – so for some of the table columns Oracle will be able to eliminate one or two of the related columns it normally includes in the stats-gathering SQL statement – which means it can gather stats on a few more table columns.  The 165-ish limit doesn’t apply in 11g – though I haven’t checked to see if there’s a larger limit before the same thing happens.
  • If you have more than 255 columns in a row Oracle will split it into multiple row pieces of 255 columns each plus one row piece for “the rest”; but the split counts from the end, so if you have a table with 256 columns the first row-piece has one column and the second row-piece has 255 columns. This is bad news for all sorts of operations because Oracle will have to expend extra CPU chasing the row pieces to make use of any column not in the first row piece. The optimists among you might have expected “the rest” to be in the last row piece. If you want to be reminded how bad row-chaining can get for wide tables, just have a look at an earlier blog note of mine (starting at this comment).
  • A particularly nasty side effect of the row split comes with direct path tablescans – and that’s what Oracle does automatically when the table is large. In many cases all the row pieces for a row will be in the same block; but they might not be, and if a continuation row-piece is in a different block Oracle will do a “db file sequential read” to read that block into the buffer cache and it won’t be cached (see 1st comment below).  As an indication of how badly this can affect performance, the results I got at a recent client site showed “select count(col1) from wide_table” taking 10  minutes while “select count(column40) from wide_table” took 22 minutes because roughly one row in a hundred required a single block read to follow the chain.
  • An important side effect of the split point is that you really need to put the columns you’re going to index near the start of the table to minimise the risk of this row chaining overhead when you create or rebuild an index.
  • On top of everything else, of course, it takes a surprisingly large amount of extra CPU to load a large table if the rows are chained. Another client test reported 140 CPU seconds to load 5M rows of 256 columns, but only 20 CPU seconds to load 255.

If you are going to have tables with more than 255 columns, think very carefully about column order – if you can get all the columns that are almost always null at the end of the row you may get lucky and find that you never need to create a secondary row piece. A recent client had about 290 columns in one table of 16M rows, and 150 columns were null for all 16M rows – unfortunately they had a mandatory “date_inserted” column at the end of the row, but with a little column re-arrangement they eliminated row chaining and saved (more than) 150 bytes storage per row.  Of course, if they have to add and back-fill a non-null column to the table they’re going to have to rebuild the table to insert the column “in the middle”, otherwise all new data will be chained and wasting 150 bytes per row, and any old data that gets updated will suffer a row migration/chain catastrophe.
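If you want to check whether a wide table is already costing you this extra work, one statistic worth watching is “table fetch continued row” – sample it for your session, query a column near the end of the row, then sample it again (wide_table and column40 below are just the hypothetical names from the tablescan example above):

select  sn.name, ms.value
from    v$statname sn, v$mystat ms
where   ms.statistic# = sn.statistic#
and     sn.name = 'table fetch continued row'
;

select /*+ full(t) */ count(column40) from wide_table t;

--      repeat the v$mystat query and compare the two values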

Parallel rownum

Thu, 2015-02-12 01:27

It’s easy to make mistakes, or overlook defects, when constructing parallel queries – especially if you’re a developer who hasn’t been given the right tools to make it easy to test your code. Here’s a little trap I came across recently that’s probably documented somewhere, which could be spotted easily if you had access to the OEM SQL Monitoring screen, but would be very easy to miss if you didn’t check the execution plan very carefully. I’ll start with a little script to generate some data:

create table t1 nologging
as
select * from all_objects where rownum <= 50000
;

insert /*+ append */ into t1 select * from t1;
commit;         -- each direct path insert must be committed before the table can be read again
insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;

begin
	dbms_stats.gather_table_stats(
		ownname		 => user,
		tabname		 => 'T1',
		method_opt	 => 'for all columns size 1'
	);
end;
/

create table t2 as select * from t1;
alter table t2 add id number(10,0);

All I’ve done is create some data – 800,000 rows – and then create a table to copy it to; and while I copy it I’m going to add a temporary id to the rows, which I’ll do with a call to rownum; and since there’s a lot of data I’ll use parallel execution:

alter session enable parallel dml;

insert /*+ parallel(t2 3) */ into t2
select /*+ parallel(t1 4) */ t1.* , rownum from t1;

For the purposes of experiment and entertainment I’ve done something a little odd by supplying two possible degrees of parallelism, but this lets me ask the query: will this statement run parallel 3, parallel 4, both of the above, or neither ? (You may assume that I have parallel execution slaves available when the statement runs.)
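If you don’t have the SQL Monitor screen available, one crude way to answer the question is to count the slaves attached to the query co-ordinator from another session while the statement is running:

select  qcsid, count(*) slave_count
from    v$px_session
where   sid <> qcsid
group by
        qcsid
;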

The answer is both – because that rownum does something nasty to the execution plan (I didn’t include the 50,000 limit in my first test, which is why the plan reports 993K rows instead of 800,000):

| Id  | Operation                  | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
|   0 | INSERT STATEMENT           |          |   993K|    92M|  1076   (1)| 00:00:13 |        |      |            |
|   1 |  PX COORDINATOR            |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)      | :TQ20001 |   993K|    92M|  1076   (1)| 00:00:13 |  Q2,01 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT          | T2       |       |       |            |          |  Q2,01 | PCWP |            |
|   4 |     PX RECEIVE             |          |   993K|    92M|  1076   (1)| 00:00:13 |  Q2,01 | PCWP |            |
|   5 |      PX SEND ROUND-ROBIN   | :TQ20000 |   993K|    92M|  1076   (1)| 00:00:13 |        | S->P | RND-ROBIN  |
|   6 |       COUNT                |          |       |       |            |          |        |      |            |
|   7 |        PX COORDINATOR      |          |       |       |            |          |        |      |            |
|   8 |         PX SEND QC (RANDOM)| :TQ10000 |   993K|    92M|  1076   (1)| 00:00:13 |  Q1,00 | P->S | QC (RAND)  |
|   9 |          PX BLOCK ITERATOR |          |   993K|    92M|  1076   (1)| 00:00:13 |  Q1,00 | PCWC |            |
|  10 |           TABLE ACCESS FULL| T1       |   993K|    92M|  1076   (1)| 00:00:13 |  Q1,00 | PCWP |            |

See that “P->S” (parallel to serial) at operation 8. The select statement runs in parallel (degree 4) to scan the data, and then sends it all to the query co-ordinator to supply the rownum; then the query co-ordinator re-distributes the data (including rownum) to another set of slaves (S->P) to do the parallel (degree 3) insert. The P->S at line 2 shows the parallel execution slaves passing details to the query co-ordinator of the private segments that they have created so that the query co-ordinator can stitch the segments together into a single data segment for the table. (If you watch closely you’ll see the query co-ordinator doing a few local writes as it tidies up the header blocks of those private segments.)

There are two threats to this rownum detail. The first, of course, is that the operation essentially serialises through the query co-ordinator so it’s going to take longer than you might expect; secondly an accident of this type is typically going to allocate twice as many parallel execution slaves as you might have expected – the select and the insert are two separate data flow operations (note how the Name column shows TQ1xxxx and TQ2xxxx), each gets its own slave sets, and both sets of slaves are held for the duration of the statement. If this statement is demanding twice the slaves it should be using, then you may find that other statements that start running at the same time get their degree of parallelism downgraded because you’ve run out of PX slaves. Although the rownum solution is nice and clean – it requires no further infrastructure – you probably need to introduce a sequence (with a large cache) to get the same effect without losing parallelism.
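Here’s the sort of thing I mean by the sequence approach – just a sketch, and remember that with a large cache each slave grabs its own range of values, so the ids will be unique but won’t reflect the order in which the rows are read:

create sequence t2_seq cache 10000;

insert /*+ parallel(t2 3) */ into t2
select  /*+ parallel(t1 4) */
        t1.*, t2_seq.nextval
from    t1
;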

If you look at v$pq_tqstat after running the original rownum statement the results are a little disappointing – there are a few problems connecting lines from the plan with rows in the view – here’s my original output (and you’ll now see why I chose to have two different degrees of parallelism):

DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- ---------- ---------- -----------
         1          0 Consumer               1 P000                331330   39834186         74         71           0
                                             1 P001                331331   39844094         75         72           0
                                             1 P002                330653   39749806         74         71           0

                    1 Producer               1 P000                     1        131       2263        396           0
                                             1 P001                     1        131       2238        417           0
                                             1 P002                     1        131       2182        463           0

         2          0 Producer               1 P003                247652   28380762         13          0           0
                                             1 P004                228857   26200574         13          1           0
                                             1 P005                267348   30496182         14          0           0
                                             1 P006                249457   28401982         13          0           0
                                             1 QC                  993314  119428086 4294967269 4294967286           0
                      Consumer               1 QC                  993314  113479500        125         65           0

                    1 Consumer               1 QC                       3        393          2          1           0

The first problem is that the DFO_Number reported in the view doesn’t match with the :TQ1xxxx and :TQ2xxxx reported in the plan – the parallel 4 bit is the select, which is covered by :TQ10000, but it’s listed under DFO_Number = 2 in the view, and the insert is the parallel 3 bit, which is covered by :TQ20000 and :TQ20001 but listed under DFO_Number = 1.

More confusing, potentially, is that all appearances of the query co-ordinator have been assigned to DFO_Number = 2. Ignoring the fact that the DFO_Number column switches the 1 and 2 from the plan, what we should see is as follows:

  • The consumer at line 16 is consuming from the 4 producers at lines 11 – 14.
  • The producer at line 15 is producing FOR the 3 consumers at lines 3 – 5
  • The consumer at line 18 is consuming from the producers at lines 7 – 9

Ideally (including the correction for the DFO_Number) I think the view content should be as follows:

DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- ---------- ---------- -----------
         1          0 Producer               1 P003                247652   28380762         13          0           0
                                             1 P004                228857   26200574         13          1           0
                                             1 P005                267348   30496182         14          0           0
                                             1 P006                249457   28401982         13          0           0
                      Consumer               1 QC                  993314  113479500        125         65           0

         2          0 Producer               1 QC                  993314  119428086 4294967269 4294967286           0
                      Consumer               1 P000                331330   39834186         74         71           0
                                             1 P001                331331   39844094         75         72           0
                                             1 P002                330653   39749806         74         71           0

                    1 Producer               1 P000                     1        131       2263        396           0
                                             1 P001                     1        131       2238        417           0
                                             1 P002                     1        131       2182        463           0
                      Consumer               1 QC                       3        393          2          1           0

Just don’t ask me why the waits and timeouts for the QC as producer seem to be counting backwards from 2^32.

Functions & Subqueries

Sat, 2015-02-07 22:12

I think the “mini-series” is a really nice blogging concept – it can pull together a number of short articles to offer a much better learning experience for the reader than they could get from the random collection of sound-bites that so often typifies an internet search; so here’s my recommendation for this week’s mini-series: a set of articles by Sayan Malakshinov a couple of years ago comparing the behaviour of Deterministic Functions and Scalar Subquery Caching.

Although I’ve labelled it as “this week’s” series, I wouldn’t want you to assume that I’ll be trying to find a new mini-series every week.

Footnote 2:
I had obviously expected to publish this note a long time ago – but must have forgotten about it. I was prompted to search my blog for “deterministic” very recently thanks to a recent note on the OTN database forum and discovered both this note and an incomplete note about improving the speed of creating function-based indexes by tweaking hidden parameters – which I might yet publish, although if you read all of Sayan’s articles you’ll find the solution anyway.