RE: Large index creation speed up / issues with direct path read temp / direct path write temp
Date: Sat, 4 Jul 2015 11:36:55 +0200 (CEST)
Message-ID: <2005046146.397679.1436002616004.JavaMail.open-xchange_at_app03.ox.hosteurope.de>
Hi,
@Jonathan:
You are right of course, thank you very much for the correction. I should check the Safari auto-correction more carefully - it should have read "it is
pretty avoidable i guess". However, the "real" data might be bigger than the 70 GB as GG is using index key compression. AFAIK this is only applied at
the index leaf block level and is not already achieved during the sort in some way. Am I right, or are there enhancements there as well?
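For what it's worth, key compression is a storage attribute declared on the index itself and applied as the leaf blocks are written, roughly along these
lines (index, table and column names are just placeholders):

    -- prefix (key) compression on the leading column; it is applied when the
    -- leaf blocks are built, not during the sort phase of the index creation
    CREATE INDEX ix_demo ON t_demo (col1, col2)
      COMPRESS 1
      PARALLEL 8
      NOLOGGING;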
@Lothar:
IMO increasing the parallelism can make the service vs. application wait time discrepancy even worse. 30-40 ms vs. 753 ms points pretty clearly to some
disk (and possibly HBA) queuing effect or a ZFS issue. SAN storage tempts you to publish only a few large LUNs, but that neglects the disk queuing :-)
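Just as a rough sketch of what I mean - the instance-wide averages for the two temp I/O events can be pulled from v$system_event (AWR/ASH per-interval
figures are of course more telling):

    -- average wait time per event, instance-wide since startup
    SELECT event,
           total_waits,
           ROUND(time_waited_micro / NULLIF(total_waits, 0) / 1000, 1) AS avg_wait_ms
    FROM   v$system_event
    WHERE  event IN ('direct path read temp', 'direct path write temp');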
Best Regards
Stefan Koehler
Freelance Oracle performance consultant and researcher
Homepage: http://www.soocs.de
Twitter: @OracleSK
> Jonathan Lewis <jonathan@jlcomp.demon.co.uk> wrote on 4 July 2015 at 11:00:
>
> Stefan,
>
> 2GB of memory to do a 70GB index should be an easy one-pass sort; the algorithm is "n-squared", viz: we might have to produce 140 streams of 1GB
> (allowing for various overheads out of 2GB), but then a merge of 140 streams at (say) 1MB per reload per stream needs only 140MB of memory. With 2GB
> you might be able to sort something in the order of 1TB in a one-pass.
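As a rough cross-check of the one-pass arithmetic above, the cumulative work area statistics show whether the sorts actually completed optimal, one-pass
or multi-pass (v$sql_workarea_active can also be watched while the index build is running):

    -- cumulative work area executions by pass count since instance startup
    SELECT name, value
    FROM   v$sysstat
    WHERE  name LIKE 'workarea executions%';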
--
http://www.freelists.org/webpage/oracle-l

Received on Sat Jul 04 2015 - 11:36:55 CEST