RE: Re: Very slow metadata-only expdp
Date: Wed, 19 Oct 2016 15:29:59 +0000
Message-ID: <CE70217733273F49A8A162EE074F64D90150344208_at_exmbx05.thus.corp>
I missed the first post, so replying via one of the responses:
The select from seg$ is, I believe, identifying the segment header blocks in a tablespace, of which you show in excess of 73,000. The insert/select is selecting from X$KTFBUE, the "used extents" view for locally managed segments; that view is generated by reference to the content of the segment header blocks, so it seems likely that a large fraction of your time is CPU time spent deriving (possibly very many) extent descriptions from your segment headers (a process that doesn't need to do any further I/O).
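To get a feel for how much extent-map work is involved, you can count segments and extents per tablespace from the dictionary. A rough sketch (run as a suitably privileged user; your counts will obviously differ):

```sql
-- Each row in DBA_SEGMENTS (ultimately seg$) has a segment header block
-- whose extent map X$KTFBUE has to unpack at export time; tablespaces
-- with very many segments/extents are the expensive ones.
select tablespace_name,
       count(*)     as segments,
       sum(extents) as extents
from   dba_segments
group  by tablespace_name
order  by count(*) desc;
```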
Regards
Jonathan Lewis
http://jonathanlewis.wordpress.com
_at_jloracle
From: oracle-l-bounce_at_freelists.org [oracle-l-bounce_at_freelists.org] on behalf of l.flatz_at_bluewin.ch [l.flatz_at_bluewin.ch] Sent: 19 October 2016 09:17
To: dmarc-noreply_at_freelists.org
Cc: dmarc-noreply_at_freelists.org; oracle-l_at_freelists.org Subject: Re: Re: Very slow metadata-only expdp
Hi,
there is something strange here: the wait event is db file sequential read, but the statement from your traces shows no disk reads.
Regards
Lothar
----Original Message----
From: dmarc-noreply_at_freelists.org
Date: 19/10/2016 - 09:54 (UTC)
To: dmarc-noreply_at_freelists.org
Cc: oracle-l_at_freelists.org
Subject: Re: Very slow metadata-only expdp
Surprised nobody has ever run into this!
On Oct 10, 2016, at 9:54 PM, Deepak Sharma (Redacted sender "sharmakdeep_oracle" for DMARC) <dmarc-noreply_at_freelists.org<mailto:dmarc-noreply_at_freelists.org>> wrote:
When exporting only "constraints" on a ~150TB DB (11.2.0.4) using the expdp metadata_only option, the job spends about 10 (of 10.5) hours just on the following SQL. The log with METRICS=Y shows that the constraints export itself took only about 30 minutes of the entire 10.5-hour run!
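For context, a constraints-only metadata export of this kind would look something like the following (directory, dumpfile and credentials are placeholders; parameter names as per the 11.2 Data Pump documentation):

```sql
expdp system directory=DPUMP_DIR dumpfile=constraints.dmp \
      full=y content=metadata_only include=constraint metrics=y
```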
INSERT INTO SYS.KU$XKTFBUE SELECT KTFBUESEGTSN, KTFBUESEGFNO, KTFBUESEGBNO, KTFBUEBLKS FROM SYS.X$KTFBUE;
Doing a 10046 trace (in another session, for about 10 mins) shows the most time spent on the following SQL:
SQL ID: 1n1bm7xa28vtq Plan Hash: 167961705
select file#, block#, type#, nvl(spare1,0), hwmincr, cachehint,
nvl(scanhint,0)
from
seg$ where ts# = :1
call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ---------- ----------
Parse        9      0.00       0.00          0          0          0          0
Execute      9      0.00       0.00          0          0          0          0
Fetch    73737      1.64       2.83          0     223521          0      73728
------- ------  -------- ---------- ---------- ---------- ---------- ----------
total    73755      1.65       2.84          0     223521          0      73728
Misses in library cache during parse: 1
Misses in library cache during execute: 1
Optimizer mode: RULE
Parsing user id: SYS (recursive depth: 3)
Number of plan statistics captured: 3
Rows (1st)  Rows (avg)  Rows (max)  Row Source Operation
----------  ----------  ----------  ---------------------------------------------------
       688         515         850  TABLE ACCESS CLUSTER SEG$ (cr=1550 pr=0 pw=0 time=15081 us)
       693         517         850   INDEX RANGE SCAN I_FILE#_BLOCK# (cr=518 pr=0 pw=0 time=2956 us)(object id 9)
...
Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                     74275        0.47       1477.70
  Disk file operations I/O                      130        0.00          0.01
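For reference, the 10046 trace mentioned above can be enabled against the Data Pump worker from another session. A sketch assuming you have already looked up the worker's SID and serial# in v$session:

```sql
-- Waits => true corresponds to 10046 level 8, which produces the
-- "db file sequential read" lines shown above.
exec dbms_monitor.session_trace_enable( -
       session_id => :sid, serial_num => :serial, -
       waits => true, binds => false);

-- ...let it run for ~10 minutes, then switch tracing off:
exec dbms_monitor.session_trace_disable( -
       session_id => :sid, serial_num => :serial);
```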
Metalink has a close match (https://support.oracle.com/epmos/faces/BugDisplay?_afrLoop=249069429018914&id=18389118&_afrWindowMode=0&_adf.ctrl-state=h2tjl3914_530), but in that case the workaround was to exclude table statistics (in our case we want to export only constraints).
Any thoughts or suggestions? We are opening an SR, but just wanted to see if anyone else has run into a similar issue.
--
http://www.freelists.org/webpage/oracle-l
Received on Wed Oct 19 2016 - 17:29:59 CEST