Use PCTfree/Initran/Maxtran to prevent contention [message #138875] Sat, 24 September 2005 17:00
mjschwenger
Messages: 137
Registered: July 2005
Location: US
Senior Member
I'm trying to decrease the high I/O caused by the contention and concurrency my app creates.
I tested with a 4K block size and, of course, the results were much better, but I ended up with a lot of unused memory over time because ASM doesn't apply to the 4K buffers. Usually that memory would be moved automatically to other areas of the SGA where it was needed; here it was hard allocated.
I decided to stick with the 8K block and to try to use PCTFREE/INITRANS/MAXTRANS instead.
Each process flow does 20-30 inserts, about 30-40 selects, and only 1 or 2 updates, then repeats. Three of the longest-running statements in Statspack are inserts. There are no triggers, FKs, or too many indexes on these tables... but there are about 70 to 90 processes doing this at the same time and selecting from the same tables at the same time. In Oracle 9i I had great success with setting high freelists/freelist groups, but this is not possible with auto undo management in 10g.
I need some help with the right sizing.
My setup is:
PCTFREE = 60, INITRANS = 96 - the number of processes I'm running,
MAXTRANS = 500. Are these reasonable numbers?
I know it means a lot of disk space, but disk is cheap today. I have plenty of server memory - 32GB, with 20 for SGA_TARGET.
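For reference, this is roughly the DDL I'm applying (the table and column names here are just placeholders, not my real schema):

-- placeholder table showing the storage settings being tested
CREATE TABLE load_data (
  id      NUMBER,
  payload VARCHAR2(4000)   -- rows average 3-4K
)
PCTFREE  60
INITRANS 96
MAXTRANS 500;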
What I'm seeing is that Oracle 10g ignores the MAXTRANS... I read somewhere on the web that this is possible, but I don't remember reading it in Oracle's 10g manuals.
What will be the downsides of this setup? Is there something I need to monitor carefully?
Thanks a lot, mj



Re: Use PCTfree/Initran/Maxtran to prevent contention [message #139112 is a reply to message #138875] Mon, 26 September 2005 10:54
smartin
Messages: 1803
Registered: March 2005
Location: Jacksonville, Florida
Senior Member
I would want to look more at your actual transactions. You do a ton of individual inserts and selects. Can these be combined into fewer overall statements? Can you do 10 larger selects instead of 40 smaller ones?

You may be too worried about instance tuning and not worried enough about application tuning. PCTFREE really just reserves space in each block so that a subsequent update has room if it increases the row size; that way Oracle won't have to "overflow" the row somewhere else, and the update is more likely to leave the row in the same spot. If you don't do many updates, and the ones you do don't grow column values - like a varchar2(90) going from 5 bytes of content to 90 bytes - then you don't have as much to worry about.
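One quick sanity check on whether row migration is even happening (just a sketch - the table name is a placeholder, and the ANALYZE here is only to get CHAIN_CNT populated, not to gather CBO stats):

-- Populates CHAIN_CNT in the dictionary for this table
ANALYZE TABLE load_data COMPUTE STATISTICS;

SELECT table_name, num_rows, chain_cnt
FROM   user_tables
WHERE  table_name = 'LOAD_DATA';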

Also, I wouldn't tie freelists to automatic undo management...freelists are replaced by automatic segment space management, which is a tablespace attribute, not an undo setting. You may have a situation where it would be better to stick with manual segment space management, but to me that should not be your primary focus.

I assume with all of those inserts you are using bind variables? And you have statistics gathered fully and regularly? And your data model and physical table structures (partitioning, IOT, compression, etc.) are appropriate for the work you are doing?

I don't understand what you mean about the unused memory with ASM and 4k blocks. I don't know much about ASM though.

You mention there isn't much in the way of foreign keys and indexes? Should there be? In general I've found it best to put constraints where appropriate, be they not null, foreign key, or primary key. They provide more information to the CBO and allow it to make optimizations it otherwise wouldn't be able to make.

With initrans, do you really have 96 separate concurrent transactions hitting the same block? Is concurrency really your issue, or is it more the amount of work you are doing to begin with?

From the 10gR1 docs:
Quote:



MAXTRANS Parameter

In earlier releases, the MAXTRANS parameter determined the maximum number of concurrent update transactions allowed for each data block in the segment. This parameter has been deprecated. Oracle now automatically allows up to 255 concurrent update transactions for any data block, depending on the available space in the block.

Existing objects for which a value of MAXTRANS has already been set retain that setting. However, if you attempt to change the value for MAXTRANS, Oracle ignores the new specification and substitutes the value 255 without returning an error.
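So MAXTRANS is effectively a no-op now. If you want to confirm what your segments actually carry, the dictionary shows it, and INITRANS can still be raised (sketch only - the table name is a placeholder, and a new INITRANS only applies to newly formatted blocks):

-- MAX_TRANS should show the 255 the docs mention for 10g-created objects
SELECT table_name, ini_trans, max_trans, pct_free
FROM   user_tables
WHERE  table_name = 'LOAD_DATA';

-- Raising INITRANS is still allowed; existing blocks keep their old setting
ALTER TABLE load_data INITRANS 96;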




Re: Use PCTfree/Initran/Maxtran to prevent contention [message #139229 is a reply to message #139112] Mon, 26 September 2005 22:31
mjschwenger
Messages: 137
Registered: July 2005
Location: US
Senior Member
Thanks a lot.
First, I cannot touch the app - it's 3rd party, compiled C++. We have been promised improvements, but when - who knows? I have to deal with it... For the moment Oracle doesn't show good performance because of the nature of the app, but we have done almost everything we are allowed to do to optimize the DB side.
The app is very fast - it loads about 6 million records, each 3-4K, over about 7-8 hours at an average of 3500 transactions per second, and audits them at the same time.
Everything uses bind variables, and the indexes are few but correct - reverse key, covering, IOT, etc. - with no triggers or constraints like FKs to slow down the process. Speed is everything the business values here.
I cannot touch the inserts, but after my experiment with the 4K block size, which showed less contention, I believe that less data per block will improve my inserts and especially the selects. When monitoring I could see 4-5 processes going after the same data at the same time - an insert, an update, and 3-4 selects waiting with the same bind variables. In the test I started yesterday I have much better results - the initial inserts jumped to 25,000 from 18,000, and the only change was the table storage.
ASM and 4K blocks - for the non-standard block size buffers ASM doesn't apply; you have to set aside separate memory for the 4K buffer cache, and it's hard allocated - when not in use it isn't transferred to other places where it's needed, the way ASM shifts memory between the buffer cache and the other SGA components. I find this odd... but as far as I understand that is not the purpose of non-standard block sizes... so maybe it's right.
Right now concurrency is the biggest issue because of the way the app behaves - all these simultaneous selects and inserts into the same tables by multiple processes. The second problem is the millions of single-row commits. As I cannot change this behavior, I'm playing with the size of the redo logs, etc. to reduce log file sync and log file parallel write waits.
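The numbers I'm watching for that come from the usual v$system_event query, something like this (just a sketch of it):

-- Cumulative waits since instance startup for the commit-related events
SELECT event, total_waits, time_waited, average_wait
FROM   v$system_event
WHERE  event IN ('log file sync', 'log file parallel write')
ORDER  BY time_waited DESC;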
There's another bad side: the same app runs with a different dataset on DB2 and performs 100 times better, but that comes down to the differences between the two databases. The app was not designed with Oracle in mind, which it should have been - an app should be written for the specific database it targets... but the business doesn't understand this...
So the only thing I can do is tune the database side as much as possible and wait for improvements in the app, whenever they come...
Thanks a lot for the help, mj



Re: Use PCTfree/Initran/Maxtran to prevent contention [message #139363 is a reply to message #138875] Tue, 27 September 2005 11:46
smartin
Messages: 1803
Registered: March 2005
Location: Jacksonville, Florida
Senior Member
Hmm.

In 10g there is a new feature, I think it is called SQL tuning sets. It's designed for situations like yours, where you can't control the app but can possibly "intercept" the SQL and tune it more to your liking. It may (or may not) help in your situation; I have not used it yet.
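From a quick look at the docs, capturing the SQL looks roughly like this (a sketch only - I haven't run it, and the set name and schema filter are placeholders):

-- Create a SQL tuning set and load it from the cursor cache
DECLARE
  cur DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(sqlset_name => 'APP_CAPTURE');
  OPEN cur FOR
    SELECT VALUE(p)
    FROM   TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
                   'parsing_schema_name = ''APPUSER''')) p;
  DBMS_SQLTUNE.LOAD_SQLSET(sqlset_name     => 'APP_CAPTURE',
                           populate_cursor => cur);
END;
/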

Also, you mentioned lots of processes essentially touching the same data at the same time. Could you set up hash partitioning on your tables? It would effectively scatter the rows across different segments/blocks. So instead of 10 new rows all being inserted into the same block, and then manipulated several more times by several processes contending for that block, each of those 10 rows would go into a different block.
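Something along these lines (sketch only - the table, columns, and partition count are placeholders; you'd size the partition count to your concurrency):

-- Hash partitioning on the key spreads concurrent inserts across 16 segments
CREATE TABLE load_data_part (
  id      NUMBER,
  payload VARCHAR2(4000)
)
PARTITION BY HASH (id)
PARTITIONS 16;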

With regard to less data on each block, as you mentioned, I don't see how this could help a select. Insert or modification contention, sure, but not selects. If anything it would hurt. I'd want to see a test showing that.

10gR2 has something called asynchronous commits, which may be worth researching. If the app commits all over the place, then it likely isn't committing at the end of business transactions anyway, which means an almost certain loss of data integrity and accuracy - something your business is apparently already OK with, given the app wasn't written correctly to begin with. Since they are already accepting some loss of data integrity, you might as well look into ways to make that work to your advantage.
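If I read the 10gR2 docs right, it's exposed both in the COMMIT statement and as an init parameter (sketch only - I haven't used it, and you'd want to weigh the durability trade-off first):

-- Per-statement form, only usable if you control the SQL
COMMIT WRITE BATCH NOWAIT;

-- Instance-wide default, useful when the app's commits can't be changed
ALTER SYSTEM SET commit_write = 'BATCH,NOWAIT';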

As far as loading records goes, SQL*Loader is incredibly fast at that. I'm guessing the app does more than that, but if it is just loading records then it sounds like they are using the wrong tool for the job. Likewise, if speed really is that important, then they need to write the DB access code for the database it is accessing. But SQL*Loader direct path is massively fast.
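For comparison, a direct path load is just a control file and one command (sketch only - the file names, table, and columns are placeholders):

-- load_data.ctl
LOAD DATA
INFILE 'load_data.dat'
APPEND INTO TABLE load_data
FIELDS TERMINATED BY ','
(id, payload)

-- run it with:  sqlldr userid=app/app control=load_data.ctl direct=true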