Oracle export and import performance [message #292150] Tue, 08 January 2008 01:11
kumarpavan
Messages: 3
Registered: January 2008
Location: India
Junior Member
Hi Friends,

During export, how can I determine what buffer value to use to improve performance? Can anyone provide the formula for calculating it?

Thanks,
Pavan
Re: Oracle export and import performance [message #292177 is a reply to message #292150] Tue, 08 January 2008 02:37
Mohammad Taj
Messages: 2412
Registered: September 2006
Location: Dubai, UAE
Senior Member

> During export, how can I determine what buffer value to use to improve performance? Can anyone provide the formula for calculating it?

It is usually best to go with the default value.

Increasing or decreasing the buffer size has side effects, so adjust it according to your requirements.

BUFFER
Default: operating system-dependent. See your Oracle operating system-specific documentation to determine the default value for this parameter.

Specifies the size, in bytes, of the buffer used to fetch rows. As a result, this parameter determines the maximum number of rows in an array fetched by Export. Use the following formula to calculate the buffer size:

buffer_size = rows_in_array * maximum_row_size

If you specify zero, the Export utility fetches only one row at a time.

Tables with columns of type LOBs, LONG, BFILE, REF, ROWID, LOGICAL ROWID, or DATE are fetched one row at a time.

Note:
The BUFFER parameter applies only to conventional path Export. It has no effect on a direct path Export. For direct path Exports, use the RECORDLENGTH parameter to specify the size of the buffer that Export uses for writing to the export file.

Example: Calculating Buffer Size
This section shows an example of how to calculate buffer size.

The following table is created:
CREATE TABLE sample (name varchar(30), weight number);

The maximum size of the name column is 30, plus 2 bytes for the indicator. The maximum size of the weight column is 22 (the size of the internal representation for Oracle numbers), plus 2 bytes for the indicator.
Therefore, the maximum row size is 56 (30+2+22+2).

To perform array operations for 100 rows, a buffer size of 5600 should be specified.
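The arithmetic in the example can be checked with a short script. This is only an illustrative sketch: the column sizes, the 2-byte indicator overhead, and the 100-row array all come from the quoted documentation example, and the function names are my own.

```python
# Buffer size for a conventional-path export, per the quoted formula:
#   buffer_size = rows_in_array * maximum_row_size
# Column sizes follow the example table: VARCHAR(30) and an Oracle NUMBER
# (22-byte internal representation), each plus a 2-byte indicator.
INDICATOR = 2  # per-column indicator overhead, in bytes

def max_row_size(column_sizes):
    """Sum each column's maximum size plus its indicator bytes."""
    return sum(size + INDICATOR for size in column_sizes)

def buffer_size(rows_in_array, column_sizes):
    """Apply the documented formula for the BUFFER parameter."""
    return rows_in_array * max_row_size(column_sizes)

row = max_row_size([30, 22])       # name + weight -> 30+2+22+2 = 56
buf = buffer_size(100, [30, 22])   # 100-row array -> 5600
```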
http://download.oracle.com/docs/cd/B14117_01/server.101/b10825/exp_imp.htm#CEGFIAGE


Regards
Taj

[Updated on: Tue, 08 January 2008 02:38]


Re: Oracle export and import performance [message #292188 is a reply to message #292150] Tue, 08 January 2008 03:11
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
The larger it is, the better the performance.
So use the largest buffer your server memory allows.
I often use 100MB.

Regards
Michel
Re: Oracle export and import performance [message #447160 is a reply to message #292177] Fri, 12 March 2010 01:59
ram12345ji
Messages: 7
Registered: March 2010
Location: INDIA
Junior Member
The buffer size calculation you described is fine, but I need some clarification; I have a doubt on this topic, please explain.
In the example you specified one table. Suppose you have 100 tables with different numbers of rows and different datatypes; then I think this is not a practical way to find the buffer size. Please tell me if my assumption is right.
Also, can you tell me the maximum buffer size you can specify while exporting (how large can you go without causing issues during the export)?
Re: Oracle export and import performance [message #447163 is a reply to message #447160] Fri, 12 March 2010 02:06
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
What is not clear in my answer?

Michel Cadot wrote on Tue, 08 January 2008 10:11
The larger it is, the better the performance.
So use the largest buffer your server memory allows.
I often use 100MB.

Regards
Michel

Re: Oracle export and import performance [message #447168 is a reply to message #447163] Fri, 12 March 2010 02:39
ram12345ji
Messages: 7
Registered: March 2010
Location: INDIA
Junior Member
Thank you very much for the reply.
Your clarification cleared up some of my doubts, but I still have one more; please explain.
You said that a larger buffer size will increase performance, and I agree with that; you also said you often use 100MB, and that is fine.
What I meant in my last reply was this: suppose my database server is using 1000MB of memory. You said you can set the buffer as large as the server memory allows.
My question is: how can we know how much the server memory allows? Is there any calculation for finding that?
Please don't mind my asking; I am struggling to find the answer to this.
Please explain: if I am using 1000MB of server memory, what is the maximum size I can use for the buffer?
Re: Oracle export and import performance [message #447177 is a reply to message #447168] Fri, 12 March 2010 03:21
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Quote:
how can we know how much the server memory allows? Is there any calculation for finding that?

Use vmstat or any other tool that shows you the memory usage on your server. Whatever is not used is free for your export/import.
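As a rough illustration of "what is not used is free", here is a Linux-only sketch using Python's `os.sysconf` to estimate available physical memory. The one-tenth headroom factor and the 100MB cap are my own assumptions for the sketch (the cap echoes the 100MB figure mentioned earlier in the thread), not anything from Oracle:

```python
import os

def free_memory_bytes():
    """Estimate currently-available physical memory (Linux only).

    Reads the same kind of numbers that vmstat/free report; this is
    an illustrative sketch, not an Oracle-supplied formula.
    """
    page_size = os.sysconf("SC_PAGE_SIZE")      # bytes per memory page
    avail_pages = os.sysconf("SC_AVPHYS_PAGES") # pages currently available
    return page_size * avail_pages

# Hypothetical sizing rule: take a tenth of free memory as headroom,
# capped at 100MB, as a candidate for the exp BUFFER parameter.
candidate_buffer = min(free_memory_bytes() // 10, 100 * 1024 * 1024)
```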

Regards
Michel
Re: Oracle export and import performance [message #447183 is a reply to message #447177] Fri, 12 March 2010 04:06
ram12345ji
Messages: 7
Registered: March 2010
Location: INDIA
Junior Member
Thank you very much Michel...
doubt in commit process [message #447456 is a reply to message #447177] Mon, 15 March 2010 06:13
ram12345ji
Messages: 7
Registered: March 2010
Location: INDIA
Junior Member
Hi Michel,
How are you? I hope you are fine. I have one doubt about the commit process. Suppose a lot of transactions are going on in the database; let's say 50 lakh (5 million) rows are inserted but not committed. In general, when does the commit take place if the user does not commit the data?

Suppose you have automatic commit options like:
1. commit after every row
2. commit after every 1,000 rows
3. commit after every 1 lakh (100,000) rows
4. commit after every 5 lakh (500,000) rows

Of the above 4 cases, which is preferable?
In which case will database performance be highest?
And why is the performance high in that case?


Thanks&Regards,
ramji mulukoori.
Re: doubt in commit process [message #447489 is a reply to message #447456] Mon, 15 March 2010 07:48
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Usually you commit only once the transaction is over.
If you cannot afford this, you commit as rarely as possible, and you must have something that allows you to restart from where you stopped in case of failure.

Regards
Michel

[Updated on: Mon, 15 March 2010 07:48]


Re: doubt in commit process [message #447636 is a reply to message #447489] Wed, 17 March 2010 00:18
ram12345ji
Messages: 7
Registered: March 2010
Location: INDIA
Junior Member
OK Michel. Can you please explain whether there is any performance issue if we commit as rarely as possible?
As I explained in the earlier mail, suppose we are inserting records lakh by lakh: what will the performance difference be if we commit every one lakh rows rather than every two lakh rows?
You said to commit as rarely as possible, and that is fine, but I need an explanation performance-wise.

thanks&regards,
ramji mulukoori.
Re: doubt in commit process [message #447644 is a reply to message #447636] Wed, 17 March 2010 01:05
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
Post Operating System (OS) name & version for DB server system.
Post results of
SELECT * from v$version


It would be helpful if you followed Posting Guidelines - http://www.orafaq.com/forum/t/88153/0/

If V10 or higher, you should not use imp/exp; use expdp/impdp instead.
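For reference, a minimal Data Pump invocation looks like the sketch below. The directory object, dump file name, credentials, and schema are placeholders, not values from this thread; they must be adapted to your environment:

```shell
# Export one schema with Data Pump (a directory object must already exist):
expdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott.dmp SCHEMAS=scott

# Import the same dump file:
impdp scott/tiger DIRECTORY=dpump_dir DUMPFILE=scott.dmp
```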
Re: doubt in commit process [message #447647 is a reply to message #447644] Wed, 17 March 2010 01:30
ram12345ji
Messages: 7
Registered: March 2010
Location: INDIA
Junior Member
> SELECT * from v$version;
BANNER
----------------------------------------------------------------
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bi
PL/SQL Release 10.2.0.4.0 - Production
CORE 10.2.0.4.0 Production
TNS for Linux: Version 10.2.0.4.0 - Production
NLSRTL Version 10.2.0.4.0 - Production
============================================
As you asked for the version, please see it above.
Thank you very much for responding.
Michel has already cleared my doubt and given very good explanations for my earlier mails as well; I am very grateful for that.
But again I have a doubt about the commit process, as I said in the earlier mail.
Please kindly clarify my doubt.

thanks,
ramji.

Re: doubt in commit process [message #447650 is a reply to message #447647] Wed, 17 March 2010 01:43
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
The fewer COMMITs the better; one COMMIT at the end is best.
Re: doubt in commit process [message #447656 is a reply to message #447636] Wed, 17 March 2010 02:05
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
ram12345ji wrote on Wed, 17 March 2010 06:18
OK Michel. Can you please explain whether there is any performance issue if we commit as rarely as possible?
As I explained in the earlier mail, suppose we are inserting records lakh by lakh: what will the performance difference be if we commit every one lakh rows rather than every two lakh rows?
You said to commit as rarely as possible, and that is fine, but I need an explanation performance-wise.

thanks&regards,
ramji mulukoori.

Committing means serialization: when you commit, the log buffer is locked until it is flushed to the log file, and ALL sessions making modifications wait for this flush to end. The more often you commit, the more often you put these sessions into a waiting state.
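The principle above can be demonstrated outside Oracle as well. The sketch below uses SQLite (not Oracle, but its COMMIT is also a durability/flush point) to insert the same rows with per-row commits versus a single commit at the end; the table name, row count, and batch sizes are arbitrary choices for the demo:

```python
import os
import sqlite3
import tempfile
import time

def timed_insert(db_path, rows, batch):
    """Insert `rows` rows into a fresh table, committing every `batch` rows.

    Returns (elapsed_seconds, row_count). Each COMMIT forces a flush to
    durable storage -- the same kind of serialization point described
    above for Oracle's log buffer.
    """
    conn = sqlite3.connect(db_path)
    conn.execute("DROP TABLE IF EXISTS t")
    conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")
    conn.commit()
    start = time.perf_counter()
    for i in range(rows):
        conn.execute("INSERT INTO t VALUES (?, ?)", (i, "x" * 20))
        if (i + 1) % batch == 0:
            conn.commit()  # frequent commits -> more flush waits
    conn.commit()          # final commit covers any remainder
    elapsed = time.perf_counter() - start
    (count,) = conn.execute("SELECT COUNT(*) FROM t").fetchone()
    conn.close()
    return elapsed, count

# Compare committing every row against committing once at the end.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
t_every_row, n1 = timed_insert(path, 500, batch=1)
t_once, n2 = timed_insert(path, 500, batch=500)
```

On most systems the per-row-commit run is noticeably slower, because each commit pays the flush cost; the single final commit pays it once.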

Regards
Michel

Re: doubt in commit process [message #447657 is a reply to message #447656] Wed, 17 March 2010 02:17
ram12345ji
Messages: 7
Registered: March 2010
Location: INDIA
Junior Member
Thank you, Michel.
You cleared my doubt. I am very grateful for your valuable explanations, and thank you very much for responding to my mails.


thanks,
ramji.

Re: doubt in commit process [message #468626 is a reply to message #447657] Sat, 31 July 2010 04:46
vks0982
Messages: 1
Registered: July 2010
Junior Member
ram12345ji wrote on Wed, 17 March 2010 02:17
Thank you, Michel. You cleared my doubt. I am very grateful for your valuable explanations, and thank you very much for responding to my mails.

thanks,
ramji.

Hi Michel,


I have a doubt: how can I find out which user is doing frequent commits in the database, causing performance to go down?


Regards,

Vikas
Re: doubt in commit process [message #468629 is a reply to message #468626] Sat, 31 July 2010 05:31
Michel Cadot
Messages: 68728
Registered: March 2007
Location: Saint-Maur, France, https...
Senior Member
Account Moderator
Query v$sesstat for "user commits" statistic; you will find its number in v$statname.
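In SQL, that lookup can be sketched as the standard v$sesstat/v$statname join below. This is a sketch built from those well-known views; verify the column names against your version's reference before relying on it:

```sql
-- Sessions ordered by how many commits each has issued.
SELECT ss.sid, ss.value AS user_commits
  FROM v$sesstat  ss
  JOIN v$statname sn ON sn.statistic# = ss.statistic#
 WHERE sn.name = 'user commits'
 ORDER BY ss.value DESC;
```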

Regards
Michel