Oracle FAQ | Your Portal to the Oracle Knowledge Grid |
Oracle-L: Re: Using NetApp Filers for a DWH
I like the NetApp guys, etc. But it is really RAID-4 (no kidding), so
perhaps that might explain one or two things? Just guessing...
Mogens
Richmond Shee wrote:
> For your second question: the WAFL technology can have a disastrous effect on full table scans and fast full scans in Oracle 10g. You can build this simple mousetrap:
>
> 1. Create a table and load a bunch of data.
> 2. Flush the buffer cache.
> 3. Enable 10046 at level 8 for the session.
> 4. Set timing on.
> 5. Alter the session to set db_file_multiblock_read_count = 16.
> 6. Perform a full table scan against the table. (This is your baseline.)
> 7. exit
> 8. login again
> 9. run a procedure that updates the 1st row of every other block (i.e. odd or even blocks)
> 10. repeat steps 2 to 6. Note the elapsed times and compare the two trace files.
>
> According to my tests in Oracle 10g, many db file sequential read events showed up in the trace file after the update. I have seen as much as 89% degradation in performance. NetApp blamed this on Oracle as a 10g bug. Oracle kinda admitted it, but so far there is no resolution. However, when the same test was run in Oracle 9.2.0.4, it did not produce this problem.
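The quoted recipe can be sketched as a SQL*Plus script. This is a minimal sketch, not Richmond's actual test: the table name, row count, padding column, and the particular way of enabling event 10046 are my own illustrative choices.

```sql
-- Step 1: create a table and load a bunch of data (sizing is illustrative).
CREATE TABLE scan_test AS
SELECT level AS id, RPAD('x', 200, 'x') AS padding
FROM dual CONNECT BY level <= 100000;

ALTER SYSTEM FLUSH BUFFER_CACHE;                        -- step 2 (10g syntax)
ALTER SESSION SET events
  '10046 trace name context forever, level 8';          -- step 3
SET TIMING ON                                           -- step 4
ALTER SESSION SET db_file_multiblock_read_count = 16;   -- step 5

SELECT /*+ FULL(t) */ COUNT(padding) FROM scan_test t;  -- step 6: baseline scan

-- Steps 7-8: exit and log in again, then dirty the first row of every
-- other block (step 9). DBMS_ROWID exposes each row's block number.
UPDATE scan_test t
   SET padding = RPAD('y', 200, 'y')
 WHERE MOD(dbms_rowid.rowid_block_number(t.rowid), 2) = 0
   AND id IN (SELECT MIN(id)
                FROM scan_test
               GROUP BY dbms_rowid.rowid_block_number(rowid));
COMMIT;

-- Step 10: repeat steps 2-6 and compare the elapsed times and the
-- wait events (db file scattered read vs. db file sequential read)
-- in the two trace files.
```

If the filer's write-anywhere layout has scattered the updated blocks, the second scan should show single-block (db file sequential read) waits where the baseline showed multiblock reads.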
--
Archives are at http://www.freelists.org/archives/oracle-l/
FAQ is at http://www.freelists.org/help/fom-serve/cache/1.html
Received on Wed Aug 04 2004 - 18:50:04 CDT