Performance issues [message #379107] Sun, 04 January 2009 23:19
nani_ar
Messages: 58
Registered: March 2006
Member
Hi ,

My application uses a lot of global temporary tables, and most of the activity against them is DML (INSERT and DELETE statements). Because of this activity, performance is degrading and my undo tablespace fills up very quickly. Since the deletes are unconditional, we are thinking of using TRUNCATE instead of DELETE. Before implementing TRUNCATE in the code, we need your advice on the following.

Ours is 10gR2 using RAC.
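
For context, the change we are considering is roughly this (the table name is just an example):

-- current approach: unconditional delete, which generates undo/redo for every row
DELETE FROM my_gtt;

-- proposed approach: truncate, which simply resets the segment
TRUNCATE TABLE my_gtt;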

1) Will we run into any other issues if we use TRUNCATE instead of DELETE on global temporary tables?

2) Does the concept of "two phase commit" have any effect on this?

3) My undo tablespace is almost 99% full most of the time. Since the undo tablespace frees up space as required, depending on the undo_retention parameter, I guess that is not a problem in itself. But will we still face a performance problem, since free space has to be created in the undo tablespace by overwriting expired undo before new undo can be generated?

4) Is there any way of clearing the UNDO tablespace manually?

5) I see a lot of "cache buffers chains" latch activity in the AWR report:

cache buffers chains
Get Requests:    1,122,700,292
Pct Misses:      2.87
Nowait Requests: 1,196,711

Does it have any impact on the performance? Please let me know what "cache buffers chains" exactly means, and how to reduce it.

All the replies are welcome and thanks in advance.

Regards
Nani

Re: Performance issues [message #379120 is a reply to message #379107] Mon, 05 January 2009 00:16
BlackSwan
Messages: 26766
Registered: January 2009
Location: SoCal
Senior Member
>Please let me know what "cache buffers chains" exactly means, and how to reduce it.

If you don't know what "cache buffers chains" exactly means,
then how/why do you conclude/desire to reduce it?

Ready, Fire, Aim!

How/when would an independent observer know (which metric & what value) that "your problem" has been solved?

Other than "degrading performance", I have no idea exactly what the problem actually is.
Re: Performance issues [message #379140 is a reply to message #379120] Mon, 05 January 2009 01:34
tahpush
Messages: 961
Registered: August 2006
Location: Stockholm/Sweden
Senior Member

I wonder if you have the correct approach. Why are you using such a large number of GTTs?

If you are doing a lot of DELETEs and UPDATEs on your GTTs, then there will probably be a lot of redo generation.
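
You can measure that from your own session before and after the GTT operations, for example with a query like this (joining v$mystat to v$statname):

SELECT n.name, s.value AS redo_bytes
FROM   v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
WHERE  n.name = 'redo size';

-- run the GTT DELETEs/UPDATEs, rerun the query, and compare the two values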
Re: Performance issues [message #379141 is a reply to message #379120] Mon, 05 January 2009 01:37
rleishman
Messages: 3728
Registered: October 2005
Location: Melbourne, Australia
Senior Member
TRUNCATE is DDL, so it performs an implicit commit of the current transaction, including any uncommitted DML on other tables. DELETE does not.
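
In other words, a sequence like the following (table names are made up) would silently commit the earlier insert:

INSERT INTO orders VALUES (1, 'PENDING');   -- uncommitted DML on a regular table
TRUNCATE TABLE my_gtt;                      -- DDL: implicitly commits the INSERT above
ROLLBACK;                                   -- too late, the INSERT has already been committed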

Two-phase commit is only relevant when a transaction spans two or more databases, I thought. I could be wrong...

I wouldn't worry about your UNDO tablespace unless it is giving you measurable problems (like errors). Space does not need to be 'deleted' in undo; when a transaction needs space, something old simply gets overwritten. If there is nothing old (e.g. only uncommitted or retained transactions), you get an error. There should be no overhead in "freeing up" UNDO as and when required.
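
If you want to see how much of that "99% full" is actually reusable, a quick check against DBA_UNDO_EXTENTS (assuming you have access to it) would be something like:

SELECT status, COUNT(*) AS extents, ROUND(SUM(bytes)/1024/1024) AS mb
FROM   dba_undo_extents
GROUP  BY status;

-- EXPIRED extents are free to be overwritten; only ACTIVE and UNEXPIRED space is really in use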

Ross Leishman
Re: Performance issues [message #380346 is a reply to message #379141] Sun, 11 January 2009 02:29
nani_ar
Messages: 58
Registered: March 2006
Member
Hi ALL,

Thanks for all the replies. I do have an update from my side. We did some testing with the GTTs: we dropped the GTTs and created them again. WOW! There was a huge boost in performance. The operation which used to take 45 minutes earlier has come down to 15 seconds. We tested over a period of 24 hours, under a lot of test load, and it is still performing very well. There was a drastic improvement in performance after we dropped and recreated the GTTs.

I don't know the exact reason behind this improvement, but performance clearly got better. No other tuning has been done. Kindly clarify the following.

1) Do Global Temporary Tables have a HWM (high-water mark), in the same way as regular tables?
2) What exactly happens when we drop and recreate the GTTs?
3) As per theory, GTTs occupy space in the user's temporary tablespace (e.g. TEMP). If we drop and recreate the GTTs, will it have any effect on the space used for them?
4) Does dropping and recreating the GTTs affect the application in any way?

Kindly clarify them and thanks in advance.

Regards
nani
Re: Performance issues [message #380362 is a reply to message #380346] Sun, 11 January 2009 05:51
rleishman
Messages: 3728
Registered: October 2005
Location: Melbourne, Australia
Senior Member
To my understanding, there shouldn't be a HWM issue.

Dropping and rebuilding the GTTs would cause all plans to be recalculated. That would affect anything with a WHERE clause, but it shouldn't affect an unqualified DELETE.

Another possibility is that you had a LOT more indexes on those GTTs than you thought.
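
Easy enough to check; something like this against USER_INDEXES (the table names are just examples) will list them:

SELECT table_name, index_name, uniqueness
FROM   user_indexes
WHERE  table_name IN ('MY_GTT1', 'MY_GTT2')
ORDER  BY table_name, index_name;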

Ross Leishman
Re: Performance issues [message #380437 is a reply to message #380362] Sun, 11 January 2009 23:26
nani_ar
Messages: 58
Registered: March 2006
Member
Hi,

Thanks for the reply. Yes, as far as I understand, there should not be any HWM issue either. My GTTs do not have any indexes. But after dropping and recreating the GTTs, performance got a big boost. No other tuning has been done.

Regards
Nani
Re: Performance issues [message #380746 is a reply to message #379120] Tue, 13 January 2009 03:45
rakeshgj_36
Messages: 3
Registered: January 2009
Location: INDIA
Junior Member
Cache Buffer Chain:
==================
Blocks in the buffer cache are placed on linked lists (cache buffers chains) which hang off a hash table. The hash chain that a block is placed on is determined by the DBA (data block address) and CLASS of the block. Each hash chain is protected by a single child latch. Processes need to get the relevant latch before they scan a hash chain for a buffer, so that the linked list does not change underneath them.

I am attaching one simple picture (a mobile photo), please review it.

Contention for these latches can be caused by:

++ Very long buffer chains.
++ Very heavy access to a single block.

Solution:
=========

Normally you need to break up the chains to reduce the contention.
Just change the size of the buffer cache, and the chains will be redistributed and broken up.

Identify the hot block (Metalink note 163424.1).
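
Roughly along the lines of that note, you can find the busiest child latches and then the buffers hanging off them (x$bh needs SYS access; this is only a sketch):

SELECT * FROM (
  SELECT addr, gets, misses, sleeps
  FROM   v$latch_children
  WHERE  name = 'cache buffers chains'
  ORDER  BY sleeps DESC
) WHERE ROWNUM <= 10;

SELECT hladdr, obj, dbarfil, dbablk, tch
FROM   x$bh
WHERE  hladdr = '&hot_latch_addr'   -- address from the first query
ORDER  BY tch DESC;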

Let me know if you need more information and I am happy to help you.

Kind Regards,
Rakesh


Re: Performance issues [message #380820 is a reply to message #380746] Tue, 13 January 2009 08:17
tahpush
Messages: 961
Registered: August 2006
Location: Stockholm/Sweden
Senior Member

@rakeshgj36

When you are posting referenced information, please post the source link in the future.


In your case I guess you got it from
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1229436447262
since you're not allowed to publish information from Metalink.
Re: Performance issues [message #381118 is a reply to message #380820] Thu, 15 January 2009 05:50
rakeshgj_36
Messages: 3
Registered: January 2009
Location: INDIA
Junior Member
No Message Body
