general query tuning [message #163769] Sun, 19 March 2006 18:57
mjschwenger
Messages: 137
Registered: July 2005
Location: US
Senior Member
I have the query below, issued by an application multiple times.
It is one of the most expensive statements in the whole run, and the DBA wants me to tune it.
But the statement is already using the appropriate index with a range scan and accesses the table by those rowids.
I do not know what else I could do.
Please help me resolve it. Any idea is greatly appreciated.

Thanks a lot,mj

Statement:
SELECT ENT_ID, ADDR1, CITY, STATE, POSTAL_CODE, COUNTRY, ADDR_ID, ACCT_ID, VALID_FROM_DT, VALID_THRU_DT FROM ADDRESS WHERE ADDR_H = :1 AND SYS_DELETE_DT IS NULL
buffer_gets: 240704355
executions: 33240728
disk_reads: 19594599
rows_processed: 118447387
reason: Executions, Disk Reads, Rows Processed
Optimizer Mode: MULTIPLE CHILDREN PRESENT



Re: general query tuning [message #163771 is a reply to message #163769] Sun, 19 March 2006 19:31
rleishman
Messages: 3728
Registered: October 2005
Location: Melbourne, Australia
Senior Member
33 MILLION executions. I think I found your problem.

In giving this suggestion, I'll have to make a couple of assumptions:
- These are not the aggregated run stats of the last 300 years.
- Your application does not have >1 Million online users.

Given these assumptions, it seems likely that this statement is called from within a loop in a batch process.

Get rid of SQL statements within loops. Try to merge the SQL into the main cursor. If the cardinalities are incompatible, use the Concurrent Cursors technique.
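As a rough sketch of what "merge the SQL into the main cursor" means here (column names taken from the statement above; the `staging` driving table is hypothetical):

```sql
-- Instead of a per-row lookup inside a loop:
--   FOR rec IN (SELECT ... FROM staging) LOOP
--     SELECT ... INTO ... FROM address
--     WHERE addr_h = rec.addr_h AND sys_delete_dt IS NULL;
--   END LOOP;
-- join the lookup table into the driving cursor once:
SELECT s.addr_h,
       a.ent_id, a.addr1, a.city, a.state, a.postal_code,
       a.country, a.addr_id, a.acct_id, a.valid_from_dt, a.valid_thru_dt
FROM   staging s                      -- hypothetical driving table
LEFT   JOIN address a
       ON  a.addr_h = s.addr_h
       AND a.sys_delete_dt IS NULL;   -- a NULL a.addr_id marks "not found"
```

One joined cursor does a single pass over both row sources, instead of 33 million separate index probes with their parse/execute/fetch round trips.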
______________
Ross Leishman
Re: general query tuning [message #163779 is a reply to message #163769] Sun, 19 March 2006 22:50
mjschwenger
Messages: 137
Registered: July 2005
Location: US
Senior Member
33 mln executions is normal for this application. Currently it processes/loads about 20 mln records every 2 days.
I know it sounds strange, but that is the nature of the application: it takes records from an XML file and runs selects to check whether they are already in the database. If yes, it does verification/comparison and some more work to decide what to do with the records, and inserts a log row for that. If not, it is a simple insert into 15 tables. I average 3000 inserts per minute into the final log table, and I usually have between 18 and 24 instances of this application running together.
In this situation, is there something I can do for statements like the one above?
I also have a lot of inserts whose speed is not the best either, but this is a 3rd-party application and I cannot change anything other than indexing... I can also make suggestions to the development team, but it takes time to see anything implemented.

Thanks a lot again, mj
Re: general query tuning [message #163789 is a reply to message #163779] Mon, 20 March 2006 00:33
rleishman
Messages: 3728
Registered: October 2005
Location: Melbourne, Australia
Senior Member
You're, um, ... stuffed.

You want batch performance from a real-time asynchronous interface. Not going to happen.

You may be able to make some small tweaks though:
- Notice that you have 19M disk reads for your 33M executions - roughly 60% as many disk reads as executions, so a large fraction of executions cannot be resolved entirely from the rows in cache. How big is the table? Is it small enough to cache permanently? You can create the table and index with the CACHE option, which *may* keep the rows cached longer (at the expense of other tables).
- If you were to store the table in a single-table HASH cluster (cluster key of ADDR_H), you would get faster reads, but if the table is growing quickly it might need to be rebuilt too frequently to avoid chaining, and the amount of whitespace required for growth would slow down full table scans.
- If ADDR_H is the Primary Key, you could store the table as an Index-Organized Table (IOT). All of the columns you need for this query must reside in the primary segment of the IOT - other, less frequently used columns could go in the Overflow Segment to improve clustering.
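Hedged DDL sketches of the three options above (column lists abbreviated; the HASHKEYS/SIZE values are illustrative and would have to be derived from real row counts and row length):

```sql
-- Option 1: bias the buffer cache to keep the blocks longer (may or may not help)
ALTER TABLE address CACHE;

-- Option 2: single-table hash cluster keyed on ADDR_H
CREATE CLUSTER address_cl (addr_h NUMBER)
  SINGLE TABLE HASHKEYS 300000000 SIZE 1400;   -- sized from row count and row length
-- the table is then created with the clause: CLUSTER address_cl (addr_h)

-- Option 3: IOT, only applicable if ADDR_H is the primary key
CREATE TABLE address_iot (
  addr_h        NUMBER PRIMARY KEY,
  ent_id        NUMBER,
  -- ... remaining frequently used columns ...
  sys_delete_dt DATE
) ORGANIZATION INDEX
  INCLUDING sys_delete_dt   -- columns after this one go to the overflow segment
  OVERFLOW;
```

A hash cluster turns the lookup into a single hashed block read with no index probe at all, which is why it can beat an index range scan for exact-match predicates.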

The only other option is to get the application redesigned so that it processes the rows in batches (say 1000 at a time), e.g. accept 1000 rows from XML and dump them to an EOT, then do everything you need with bulk SQL by joining database tables to the EOT. Any statement that acts on a single row is your enemy.
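One way to sketch that batch variant (the staging table here is a hypothetical global temporary table standing in for the "EOT"):

```sql
-- Hypothetical staging table, loaded with each 1000-row XML batch
CREATE GLOBAL TEMPORARY TABLE xml_batch (
  addr_h NUMBER,
  addr1  VARCHAR2(100)
  -- ... other incoming columns ...
) ON COMMIT DELETE ROWS;

-- One set-based probe replaces 1000 single-row SELECTs:
SELECT b.addr_h,
       CASE WHEN a.addr_id IS NULL THEN 'NEW' ELSE 'EXISTING' END AS status
FROM   xml_batch b
LEFT   JOIN address a
       ON  a.addr_h = b.addr_h
       AND a.sys_delete_dt IS NULL;
```

The subsequent inserts and log writes can then also be done as bulk INSERT ... SELECT against the staging table rather than row by row.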

_____________
Ross Leishman
Re: general query tuning [message #163824 is a reply to message #163789] Mon, 20 March 2006 04:30
Mahesh Rajendran
Messages: 10708
Registered: March 2002
Location: oracleDocoVille
Senior Member
Account Moderator
Just to add: this caught me a little off guard - never seen this before.
>>Optimizer Mode:MULTIPLE CHILDREN PRESENT
A further search in metalink revealed this
Quote:


Finding this value in optimizer_mode column of v$sql means that the same query
was executed at least twice with a slightly different executing environment
which caused the optimizer to choose a different execution path.

Some examples:

- Two people selecting from different private tables named emp.

- Same query but are using different session optimizer_mode setting.

- Both queries are using cost based, but with a different session value for
sort_area_size or other CBO related parameters.
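To see why multiple child cursors exist for this statement, a diagnostic query along these lines can be run (v$sql and v$sql_shared_cursor are real 10g views; the sql_text filter is taken from the statement above):

```sql
-- List the statement's child cursors and the sharing-failure flags
SELECT s.child_number, s.optimizer_mode, s.plan_hash_value,
       c.optimizer_mismatch, c.bind_mismatch, c.language_mismatch
FROM   v$sql s
JOIN   v$sql_shared_cursor c
       ON  c.address      = s.address
       AND c.child_number = s.child_number
WHERE  s.sql_text LIKE 'SELECT ENT_ID, ADDR1%';
```

A 'Y' in any mismatch column names the environment difference that forced the optimizer to build an extra child cursor.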

Could you also please post your OS, Oracle version, and plan/stats/trace information? Am I allowed to guess this is 8i?
Re: general query tuning [message #163902 is a reply to message #163769] Mon, 20 March 2006 11:16
mjschwenger
Messages: 137
Registered: July 2005
Location: US
Senior Member
I'm so sorry to have missed this - it's Oracle 10.1.0.4 running on AIX 5.2, 64-bit.
I saw Metalink note 1013747.102, but none of the conditions is applicable to me, so I do not know how to interpret this.
Changing CURSOR_SHARING from SIMILAR to FORCE solved the problem for the update statements, but all inserts and selects like the one below, with the same type of bind variables, still have the reparsing problem.

The biggest problem is that I do not have control over the 3rd-party application - I can make suggestions to the dev team but nothing more... And it really is batch processing, row by row!
The reason it cannot be changed is the nature of the app - every row MUST be evaluated alone against all data in the database; the comparison and the "YES-NO" logic is the most valuable part and delivers the results the application is designed for.
From the database point of view this is not good, but...

Here is what I have in mind - the table (actually there are 7 base tables: name, address, numbers, etc.) has constant inserts at a high rate - it depends on the number of app instances running, but generally say 3000 per minute. The row size for the Address table is 1366 bytes - 38 fields, with about 300 mln records. How much of it could I cache to satisfy very random selects? What buffer size would I need to hold it?
Below are my major questions. For a couple of months I've been planning to run a big test with some of these options, but it's very hard to get a test server for yourself - you have to request it and wait, you know how it is...
1. What if I cache only the indexes in a buffer with a bigger page size? Can I expect better performance?
2. I have several log tables that see only inserts, never selects/updates. I tried to put these tables and their indexes into the RECYCLE buffer pool and got worse results: my insert rate dropped more than 10%. My colleagues said that this will get us nowhere... but I still think it is the way to go. I believe the reason for the slower inserts was that in order to create the RECYCLE buffer pool I had to make the SGA smaller, otherwise I had no memory on the box. Any thoughts on that?
3. There are 2 ways the app queries the Address table - one is by ADDR_H, and the other is by the actual PK, which consists of 4 other fields. If I create an IOT and include all primary fields needed (the rest in the overflow segment), I can create a secondary index for ADDR_H. My small initial test (2-3 mln records) fell short of my expectations and showed that the gain is not much. Can I really expect a performance increase from the IOT plus 1 more index?
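For questions 1 and 2, the knobs involved can be sketched as follows (a hedged sketch: the pool and file sizes are illustrative, the datafile path and the `addr_log` table name are hypothetical, and everything must fit inside the available SGA):

```sql
-- Back-of-envelope: caching the whole table is not realistic -
-- 300,000,000 rows x 1366 bytes/row is roughly 410 GB before block overhead.

-- Question 1: a separate cache with a bigger block size, for hot indexes (10g syntax)
ALTER SYSTEM SET db_16k_cache_size = 512M;           -- illustrative size
CREATE TABLESPACE idx_16k BLOCKSIZE 16K
  DATAFILE '/u01/oradata/idx_16k_01.dbf' SIZE 4G;    -- hypothetical path
-- then rebuild the hot indexes into the idx_16k tablespace

-- Question 2: direct the insert-only log tables at the RECYCLE pool
ALTER TABLE addr_log STORAGE (BUFFER_POOL RECYCLE);  -- addr_log is hypothetical
```

The arithmetic in the first comment is the key point: with a ~410 GB table, only the index blocks (and perhaps the hottest table blocks) can realistically be kept in cache.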

I know that most of these questions need to be tested, and you do not have enough information to answer them, but I would appreciate any thoughts and shared experience. It will at least give me a good guideline on what to expect and where to start.
When testing resources are limited, you always want to start with whatever you believe will give the best results.


Thanks a lot, mj


Re: general query tuning [message #163929 is a reply to message #163902] Mon, 20 March 2006 18:24
rleishman
Messages: 3728
Registered: October 2005
Location: Melbourne, Australia
Senior Member
Forget the IOT if ADDR_H is not the PK. Secondary indexes on an IOT must use a "rowid guess", not a physical ROWID, so they are subject to performance problems. Also, since the secondary index and the IOT live in different segments, access via the secondary index will be no better than with a normal table and index.

I have no further suggestions on tuning the application. What it boils down to is this: the application is not scalable. You are batch processing using a transaction-processing paradigm. It will never scale.

You can fiddle with database settings all you like. Some of the tweaks you try will give you a small performance improvement, but it will never be enough. A small increase in volume and you'll be back to square 1.

This is good news if you want assured permanent employment, but bad news if you value your sanity. If you continue down the path of attempting to tune this app, it will NEVER end. You commenced this process with an untested assumption that there was a solution. It's time now to revisit that assumption.
_____________
Ross Leishman
Re: general query tuning [message #164285 is a reply to message #163769] Wed, 22 March 2006 15:11
mjschwenger
Messages: 137
Registered: July 2005
Location: US
Senior Member
Thanks a lot for the help.
I know I'm in a hard spot, but I'll try to do as much as I can to keep it running and scaling as well as possible.
Appreciate your input.
Thanks again, mj