Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Home -> Community -> Usenet -> c.d.o.server -> Re: why CTAS faster than Insert select?
linda wrote:
> Jonathan,
>
> Here is the details:
> version of oracle: 9.2.0.5.0
> time recorded in v$session_event:
> CTAS:
> EVENT                            WAITS   TIMEOUTS   TOTAL_TIME   AVG
> ------------------------------   -----   --------   ----------   ---
> PX Deq: Execute Reply             1612         63        39707    25
...
> Insert Select:
> EVENT                            WAITS   TIMEOUTS   TOTAL_TIME   AVG
> ------------------------------   -----   --------   ----------   ---
> PX Deq: Execute Reply             1729        170        62951    36
...
> redo generated:
> CTAS:   2130240
> INSERT: 4753332
> -- I consider the 2 MB difference a minimum. The table is created
> NOLOGGING, so the INSERT generates little redo. The full redo size
> would be huge if the table were created LOGGING (I measured about
> 1 GB per minute before).
> CPU time recorded in v$sesstat:
> CTAS: Elapsed: 00:06:47.40
> NAME                               VALUE
> ------------------------------    ------
> recursive cpu usage                   62
> CPU used when call started        306439
> CPU used by this session          306439
> parse time cpu                        17
>
> INSERT: Elapsed: 00:10:43.61
> NAME                               VALUE
> ------------------------------    ------
> recursive cpu usage                   62
> CPU used when call started        437671
> CPU used by this session          437671
> parse time cpu                        24
> Framework of the two tests:
> CTAS:
> drop table tmp_pkt_colt_fncl_st_spst;
> alter session force parallel dml;
>
> CREATE TABLE TMP_PKT_COLT_FNCL_ST_SPST NOLOGGING
> PARALLEL 24   -- since 24 is used by Insert as well (oracle determines it)
> PARTITION BY ...
> AS SELECT ... FROM ... WHERE ...;
>
> INSERT Select:
> truncate table tmp_pkt_colt_fncl_st_spst;   -- same table created as NOLOGGING before
> alter session force parallel dml;
>
> INSERT /*+ APPEND */
> INTO TMP_PKT_COLT_FNCL_ST_SPST
> SELECT ... FROM ... WHERE ...;
>
> For both tests, the table is created as nologging.
>
> Thanks,
> Linda
I think the parallel is a red herring; the redo is the key (as an
interesting aside, see
http://metalink.oracle.com/metalink/plsql/ml2_documents.showFrameDocument?p_database_id=NOT&p_id=267330.1
), even though there is twice as much wait time for PX Deq: Execute
Reply (see
http://metalink.oracle.com/metalink/plsql/ml2_documents.showFrameDocument?p_database_id=NOT&p_id=270916.1
). But I could very well be talking out of my elbow, so try it without
parallel and see if you get similar results. My speculation is that
the PX wait comes from the slaves being too slow, and that the slaves
being too slow is itself the result of some other wait having to do
with redo generation or [something else that further research into
your system may clarify], rather than being the ultimate cause. In
addition, search the Metalink bug database for "PX Deq: Execute
Reply"; there appear to be some bugs related to cache locks and such
(so I speculate something there sets off the extra redo as well as
the waits).
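The serial re-run suggested above might look something like this (a
sketch only; the NOPARALLEL hints and the disable-parallel-DML step
are my assumptions about how to take PX out of the picture, and the
elided column/source lists are as in Linda's original):

```sql
-- Disable parallel DML for the session so neither statement uses PX slaves.
alter session disable parallel dml;

-- Serial CTAS (NOPARALLEL overrides any parallel setting on the table):
drop table tmp_pkt_colt_fncl_st_spst;
CREATE TABLE TMP_PKT_COLT_FNCL_ST_SPST NOLOGGING
NOPARALLEL
PARTITION BY ...
AS SELECT ... FROM ... WHERE ...;

-- Serial direct-path insert into the same (truncated) table:
truncate table tmp_pkt_colt_fncl_st_spst;
INSERT /*+ APPEND */
INTO TMP_PKT_COLT_FNCL_ST_SPST
SELECT ... FROM ... WHERE ...;
commit;
```

If the elapsed-time gap survives with PX disabled, that would support
redo (rather than the PX Deq waits) as the real difference.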
I also have a vague memory that PX wait strangeness has been discussed somewhere publicly before, but since it seems like every statspack post includes that event, it's hard to track down a specific thread.
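For what it's worth, per-session redo figures like the ones quoted
above can be captured with a query along these lines (a sketch; Linda
didn't show her measurement query, so this is just one common way to
get the 'redo size' statistic for the current session):

```sql
-- Snapshot the current session's redo before and after each test
-- statement; the delta between snapshots is the redo that statement
-- generated.
select sn.name, ms.value
  from v$mystat ms, v$statname sn
 where sn.statistic# = ms.statistic#
   and sn.name = 'redo size';

-- run the CTAS or the INSERT here, then re-run the query and subtract.
```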
jg
-- @home.com is bogus. The Hurd gets grilled.
http://www.signonsandiego.com/uniontrib/20050331/news_1b31hp.html

Received on Thu Mar 31 2005 - 16:04:17 CST