wait events for raw devices vs file system
Oracle 8.1.6/Solaris. The server is very idle; during testing I was the only one logged on.
I'm testing file systems (fs) vs raw devices and have some questions about the results. I'm using imp to load ~4.3 million rows into a table without any indexes, FK constraints, etc., just a plain ol' table; the syntax is at the bottom of this note. The table is dropped and recreated prior to each run and is the only segment in each tablespace I'm testing against.
I have yet to get the raw device tablespace to outperform the fs tablespace. In most attempts raw was 10-30 seconds slower than fs: nothing to get excited about, but not the performance increase I had heard about. Both tablespaces are locally managed with a 50M uniform extent size.
I captured the results from v$session_event for the imp process every 10 seconds while it was running and used the final numbers to compare. I hope someone can help me understand why I'm seeing such different wait results and what I can do, if anything, to fix this.
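(For reference, a query along these lines against v$session_event is enough to capture the numbers below; the :imp_sid bind is just a placeholder for whatever SID v$session shows for the imp connection, and the SQL*Net filter is only an assumption to keep the output to the interesting waits.)

select event, total_waits, total_timeouts, time_waited
  from v$session_event
 where sid = :imp_sid              -- SID of the imp session, looked up in v$session
   and event not like 'SQL*Net%'   -- skip the SQL*Net round-trip waits
 order by time_waited desc;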
This is the result from one run but very typical of what every run looked like. I've only pulled the two most significant non-SQL*Net waits, both of which vary greatly depending on where the underlying datafile resides.
format is event, total_waits, total_timeouts, time_waited
RAW
log buffer space, 3282, 31, 9640
free buffer waits, 24, 2, 679

FILE SYSTEM
log buffer space, 5218, 0, 5997
free buffer waits, 54, 26, 3538
Why such an increase in timeouts and time waited for log buffer space when using raw vs fs, and vice-versa for free buffer waits? The 10-30 second lag on raw can usually be accounted for in the waits, but why?
I see log switches about every 5 minutes while the table is being loaded. I have 500M redo logs; I did have them at 100M, but they were switching about every minute, so I recreated them at 500M. The table is ~650M when loaded. Archive logging is off. Time to load the entire table is ~7 minutes. The log files reside on a file system, not raw.
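(The switch rate is easy to double-check from v$log_history with a rough query like this, assuming the history still covers the test window:)

select sequence#,
       to_char(first_time, 'DD-MON-YYYY HH24:MI:SS') switch_time
  from v$log_history
 order by sequence#;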
log_buffer                  65536
log_checkpoint_interval     1000000000
log_checkpoint_timeout      0
log_checkpoints_to_alert    TRUE
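(For completeness, these can be confirmed against the running instance with something like:)

select name, value
  from v$parameter
 where name in ('log_buffer', 'log_checkpoint_interval',
                'log_checkpoint_timeout', 'log_checkpoints_to_alert');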
Any insight would be great.
Thanks - Brian
Here is the imp syntax I use.
imp userid=x/x file=hourly.dmp fromuser=vpnsla_dmart_mgr touser=bsw ignore=y constraints=n indexes=n
Here is the table syntax.
drop table bsw.tunnel_hourly;
CREATE TABLE bsw.TUNNEL_HOURLY
(
FK_TIME_KEY        DATE          NOT NULL,
FK_CUSTOMER_KEY    integer       NOT NULL,
FK_VPN_KEY         integer       NOT NULL,
FK_PRODUCT_KEY     integer       NOT NULL,
FK_REGION_KEY      integer       NOT NULL,
FK_TUNNEL_ID       VARCHAR2(255) NOT NULL,
FK_FACT_TYPE_KEY   integer       NOT NULL,
FACT_STD_DEV       NUMBER        NOT NULL,
FACT_TOTAL         NUMBER        NOT NULL,
FACT_MIN           NUMBER        NOT NULL,
FACT_MAX           NUMBER        NOT NULL,
FACT_COUNT         NUMBER        NOT NULL,
FACT_AVG           NUMBER        NOT NULL,
FACT_P95           NUMBER        NOT NULL,
LOAD_DATE          DATE          NOT NULL
) tablespace aggr_data_1;
Received on Wed Sep 27 2000 - 10:28:30 CDT