Oracle export and import performance [message #292150]
Tue, 08 January 2008 01:11
kumarpavan
Messages: 3 Registered: January 2008 Location: India
Junior Member
Hi Friends,
During export, how can I determine what BUFFER value to use to improve performance? Can anyone provide the formula for calculating it?
Thanks,
Pavan
Re: Oracle export and import performance [message #292177 is a reply to message #292150]
Tue, 08 January 2008 02:37
Mohammad Taj
Messages: 2412 Registered: September 2006 Location: Dubai, UAE
Senior Member
> During export, how can I determine what BUFFER value to use to improve performance? Can anyone provide the formula for calculating it?
It is usually best to go with the default value. Increasing or decreasing the buffer size has side effects, so adjust it according to your requirements.
BUFFER
Default: operating system-dependent. See your Oracle operating system-specific documentation to determine the default value for this parameter.
Specifies the size, in bytes, of the buffer used to fetch rows. As a result, this parameter determines the maximum number of rows in an array fetched by Export. Use the following formula to calculate the buffer size:
buffer_size = rows_in_array * maximum_row_size
If you specify zero, the Export utility fetches only one row at a time.
Tables with columns of type LONG, LOB, BFILE, REF, ROWID, LOGICAL ROWID, or DATE are fetched one row at a time.
Note:
The BUFFER parameter applies only to conventional path Export. It has no effect on a direct path Export. For direct path Exports, use the RECORDLENGTH parameter to specify the size of the buffer that Export uses for writing to the export file.
Example: Calculating Buffer Size
This section shows an example of how to calculate buffer size.
The following table is created:
CREATE TABLE sample (name varchar(30), weight number);
The maximum size of the name column is 30, plus 2 bytes for the indicator. The maximum size of the weight column is 22 (the size of the internal representation for Oracle numbers), plus 2 bytes for the indicator.
Therefore, the maximum row size is 56 (30+2+22+2).
To perform array operations for 100 rows, a buffer size of 5600 should be specified.
http://download.oracle.com/docs/cd/B14117_01/server.101/b10825/exp_imp.htm#CEGFIAGE
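To make this concrete, the calculated value is simply passed on the exp command line. For example (a sketch only: the user, dump file name, and table name here are hypothetical):

exp scott/tiger TABLES=sample FILE=sample.dmp BUFFER=5600

For an arbitrary table, a rough upper bound on the row size can be estimated from the data dictionary. This query assumes simple scalar columns and, as in the example above, adds 2 indicator bytes per column:

-- rough maximum row size for one table (scalar columns only)
SELECT SUM(data_length + 2) AS max_row_size
  FROM user_tab_columns
 WHERE table_name = 'SAMPLE';

Multiply the result by the number of rows you want fetched per array operation to get a BUFFER value.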
Regards
Taj
[Updated on: Tue, 08 January 2008 02:38]
Re: Oracle export and import performance [message #447160 is a reply to message #292177]
Fri, 12 March 2010 01:59
ram12345ji
Messages: 7 Registered: March 2010 Location: INDIA
Junior Member
The buffer size calculation you explained is fine, but I need some clarification; I still have a doubt on this topic, so please explain.
In the example you gave there is only one table. Suppose you have 100 tables with different numbers of rows and different datatypes; then I suspect this is not the best way to find the buffer size. Please tell me if my assumption is right.
Also, can you tell me the maximum buffer size you can set while exporting (the largest value that will not cause any issues during the export)?
Re: Oracle export and import performance [message #447168 is a reply to message #447163]
Fri, 12 March 2010 02:39
ram12345ji
Messages: 7 Registered: March 2010 Location: INDIA
Junior Member
Thank you very much for the reply.
Your clarification cleared up some of my doubts, but I still have one more; please explain.
Of course, you said that a larger buffer size will increase performance; I agree with that. And you said that you often use 100MB; that is OK.
What I meant to ask in my last reply was this: suppose my database server is using 1000MB of memory. You said that you can set the buffer as large as the server memory allows. My question is, how can we know how much the server memory allows? Is there a calculation for finding that?
Please don't mind; I am struggling to find the answer to this. If my server has 1000MB of memory, what is the maximum size I can set for the buffer?
doubt in commit process [message #447456 is a reply to message #447177]
Mon, 15 March 2010 06:13
ram12345ji
Messages: 7 Registered: March 2010 Location: INDIA
Junior Member
Hi Michel,
How are you? I hope you are fine. I have a doubt about the commit process. In the database, a lot of transactions are going on; let's say 50 lakh (5 million) rows are inserted but not committed. In general, when does a commit take place if the user never commits the data?
Suppose you have automatic commit options like:
1. commit after every row
2. commit after every 1,000 rows
3. commit after every 1 lakh (100,000) rows
4. commit after every 5 lakh (500,000) rows
In the above 4 cases, which one is preferable? In which case will database performance be highest, and why?
Thanks & Regards,
ramji mulukoori.
Re: doubt in commit process [message #447489 is a reply to message #447456]
Mon, 15 March 2010 07:48
Michel Cadot
Messages: 68728 Registered: March 2007 Location: Saint-Maur, France, https...
Senior Member Account Moderator
Usually you commit only once, when the transaction is over.
If you cannot afford this, commit as little as possible, and you must have something that allows you to restart from where you stopped in case of failure.
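To illustrate the second case, here is a minimal sketch of a restartable batch load. Everything in it is hypothetical: the load_checkpoint, source_rows, and target_rows tables and the 100,000-row batch size are just placeholders for whatever your job actually uses.

-- A one-row checkpoint table remembers the last key processed,
-- so a failed run can resume from where it stopped instead of
-- starting over.
DECLARE
  v_last_id  load_checkpoint.last_id%TYPE;
  v_count    PLS_INTEGER := 0;
BEGIN
  SELECT last_id INTO v_last_id FROM load_checkpoint;

  FOR r IN (SELECT id, data FROM source_rows
             WHERE id > NVL(v_last_id, 0)
             ORDER BY id)
  LOOP
    INSERT INTO target_rows (id, data) VALUES (r.id, r.data);
    v_count := v_count + 1;
    IF v_count >= 100000 THEN                     -- commit rarely, in large batches
      UPDATE load_checkpoint SET last_id = r.id;  -- record the restart point
      COMMIT;
      v_count := 0;
    END IF;
  END LOOP;

  UPDATE load_checkpoint SET last_id = NULL;      -- finished; reset for the next run
  COMMIT;
END;
/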
Regards
Michel
[Updated on: Mon, 15 March 2010 07:48]
Re: doubt in commit process [message #447636 is a reply to message #447489]
Wed, 17 March 2010 00:18
ram12345ji
Messages: 7 Registered: March 2010 Location: INDIA
Junior Member
OK, Michel. Can you please explain whether there is any performance issue if we commit as little as possible?
As I explained in the earlier mail, suppose we are inserting records lakh by lakh (100,000 at a time). What will the performance difference be if we commit every one lakh rows rather than every two lakh rows?
You said to commit as little as possible; that is OK, but I need an explanation performance-wise.
Thanks & Regards,
ramji mulukoori.
Re: doubt in commit process [message #447647 is a reply to message #447644]
Wed, 17 March 2010 01:30
ram12345ji
Messages: 7 Registered: March 2010 Location: INDIA
Junior Member
SQL> SELECT * from v$version;

BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
PL/SQL Release 10.2.0.4.0 - Production
CORE    10.2.0.4.0      Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
============================================
The version you asked for is shown above.
Thank you very much for responding. Michel has already cleared my doubt and given very good explanations for my earlier mails as well; I am very grateful for that.
But I again have a doubt about the commit process, as I said in the earlier mail. Please kindly clarify it.
Thanks,
ramji.
Re: doubt in commit process [message #447656 is a reply to message #447636]
Wed, 17 March 2010 02:05
Michel Cadot
Messages: 68728 Registered: March 2007 Location: Saint-Maur, France, https...
Senior Member Account Moderator
ram12345ji wrote on Wed, 17 March 2010 06:18:
> OK, Michel. Can you please explain whether there is any performance issue if we commit as little as possible?
> As I explained in the earlier mail, suppose we are inserting records lakh by lakh (100,000 at a time). What will the performance difference be if we commit every one lakh rows rather than every two lakh rows?
> You said to commit as little as possible; that is OK, but I need an explanation performance-wise.
> Thanks & Regards,
> ramji mulukoori.

Committing means serialization: when you commit, the log buffer is locked until it is flushed to the log file, and ALL sessions making modifications wait for this flush to end. The more often you commit, the more you put those sessions in a waiting state.
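You can see the effect on a test system with a sketch like the following (the t_commit_test table is hypothetical, and exact timings will vary; inside PL/SQL the cost is softened by Oracle's internal commit optimizations, but committing per row is still visibly slower than committing once):

SET TIMING ON

-- Variant 1: commit per row; each COMMIT waits on a redo log flush.
BEGIN
  FOR i IN 1 .. 100000 LOOP
    INSERT INTO t_commit_test (id) VALUES (i);
    COMMIT;
  END LOOP;
END;
/

-- Variant 2: one commit at the end; a single log flush covers all rows.
BEGIN
  FOR i IN 1 .. 100000 LOOP
    INSERT INTO t_commit_test (id) VALUES (i);
  END LOOP;
  COMMIT;
END;
/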
Regards
Michel