Merge Performance (wide open question here) (10g release (4?) and 11g release 1) [message #460093] Wed, 09 June 2010 10:06
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
Hi all. I have an open question about tuning MERGE on large data sets in a data warehouse.

I had a meeting with some guys at my company (these may be the best Oracle guys I have met in my 15 years here), and they showed me a batch system whose performance they hope to improve so that it fits its batch window (currently 8 hours). They are very performance oriented and have built a great deal of instrumentation into their code to help them with that. These guys are not idiots; they rival your level of expertise.

Their system is essentially all MERGE statements: very little surrounding PL/SQL, and lots of merge commands. These merge commands are set-based, not one-row-at-a-time.

They load data from a source system into work tables. This input data is crappy, so they do lots of editing before they merge.

They load data incrementally, so after every X rows they update their statistics. They typically deal with 50 million rows a day, merged in many places.

They execute all their merge statements using dynamic SQL to ensure the query plans are the best possible, because their data is changing rapidly as they load.
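To make that concrete, here is a minimal sketch of the pattern as I understand it (all object names below are my own invention, not theirs):

DECLARE
  v_sql VARCHAR2(4000);
BEGIN
  -- Build the MERGE at run time so each execution can be parsed against
  -- the statistics as they stand at that point in the load.
  -- fact_orders and work_orders are hypothetical names.
  v_sql := 'MERGE INTO fact_orders t
            USING work_orders s
            ON (t.order_id = s.order_id)
            WHEN MATCHED THEN UPDATE SET t.amount = s.amount
            WHEN NOT MATCHED THEN INSERT (order_id, amount)
              VALUES (s.order_id, s.amount)';
  EXECUTE IMMEDIATE v_sql;
  COMMIT;
END;
/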

They use parallel query like it is going out of style; the machine has 24 CPUs.

I will not get access to their system until Thursday (tomorrow) at the earliest, but I did get a demo of it to see some of what they do. They got my name from a VP of the company who knows me, so I guess I have at least some reputation worth noting. This VP told them "this guy can help you", and there is a job here for me if I want it, so I would like to impress them, since my current job is on the outsourcing chopping block (twelve weeks, I figure, until I am looking for work).

So my open-ended question is this: knowing only that merge statements are being executed against large target and source tables (many millions of rows), what tuning advice might you come up with to make these go fast? Be creative. Here are my ideas so far as to what I can look at:

compression (reduced IO when writing)
result set caching (don't look up data twice)
partition-wise joins (might make parallel query better)
basic top-N query tuning (including function-based indexes and index-only access)
combine multiple merges (avoid multiple scans of data; see the sketch below)
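On that last idea, what I have in mind is collapsing separate passes like these (tables and columns invented just to show the shape):

-- two scans of the same source and target ...
MERGE INTO customer_dim t USING work_cust s ON (t.cust_id = s.cust_id)
WHEN MATCHED THEN UPDATE SET t.cust_name = s.cust_name;

MERGE INTO customer_dim t USING work_cust s ON (t.cust_id = s.cust_id)
WHEN MATCHED THEN UPDATE SET t.cust_addr = s.cust_addr;

-- ... become one scan that does all the work
MERGE INTO customer_dim t
USING work_cust s
ON (t.cust_id = s.cust_id)
WHEN MATCHED THEN UPDATE SET t.cust_name = s.cust_name,
                             t.cust_addr = s.cust_addr
WHEN NOT MATCHED THEN INSERT (cust_id, cust_name, cust_addr)
  VALUES (s.cust_id, s.cust_name, s.cust_addr);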


As you can see I am looking for creative ideas that might address various problems overall. The fact is these guys have tuned the snot out of the process already, so I am going to need some heavy guns and advanced techniques to best them. Still, being the pompous fathead that I am, my goal is to cut their batch time in half, and I think I can do it.

Additionally, they told me that they had to do a lot of parameter tuning with SGA/PGA and other memory parameters in order to make their merge commands work well. Memory tuning is an area I am weak in, so if you have commentary on that, it might help me also. I know about hash joins, which seem to be common with merge.

I have read about 200 posts on OraFAQ that have to do with merge, and spent ten hours of quality Google time looking for ideas (found a few I think), but there is nothing like going to my friends on OraFAQ who have done it all.

So, have I broken too many posting rules BlackSwan?

Sorry I have no specific code to offer, but I will keep you guys posted as I go along.

With much thanks, Kevin
Re: Merge Performance (wide open question here) [message #460155 is a reply to message #460093] Wed, 09 June 2010 22:57
rleishman
Messages: 3728
Registered: October 2005
Location: Melbourne, Australia
Senior Member
Questions:
  • You didn't mention whether they used conventional path MERGE, or MERGE /*+ APPEND */ (see the sketch after this list).
  • You should also determine the split between INSERTS and UPDATES in the MERGE. If UPDATES are rare, MERGE is going to fly a lot faster.
  • Are indexes invalidated before the load?
  • Are the tables heavily indexed?
  • Are there any BITMAP indexes?
  • Are there any RI constraints or triggers?
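On the first point, what I mean is the difference between a plain MERGE and something like this (table names invented; the hint only affects the insert arm of the MERGE):

MERGE /*+ APPEND */ INTO sales_fact t
USING stage_sales s
ON (t.sale_id = s.sale_id)
WHEN MATCHED THEN UPDATE SET t.qty = s.qty
WHEN NOT MATCHED THEN INSERT (sale_id, qty)
  VALUES (s.sale_id, s.qty);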


Not so sure about gathering stats as you load. Sounds like a waste of time to me. Changing stats on the target is not going to alter insert/update performance.

If there are a significant number of updates, you need to carefully consider the wisdom of merging as opposed to replacing the data with DDL.

A technique I have used quite successfully for large MERGE type operations is to identify those rows susceptible to update (in my case, OPEN orders are subject to updates, CLOSED orders are static) and partition the table: VOLATILE rows in one partition, STATIC rows in another. Then use EXCHANGE PARTITION or TRUNCATE PARTITION to replace the VOLATILE data. This way you can convert all of your UPDATEs to direct path INSERTs.
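A rough sketch of the shape of it (names and the partitioning key are invented; you would use whatever distinguishes volatile rows from static ones):

CREATE TABLE orders (
  order_id NUMBER,
  status   VARCHAR2(10),
  amount   NUMBER
)
PARTITION BY LIST (status) (
  PARTITION p_volatile VALUES ('OPEN'),
  PARTITION p_static   VALUES ('CLOSED')
);

-- scratch table with the same shape, loaded with direct-path inserts
CREATE TABLE orders_open_new AS
  SELECT * FROM orders WHERE 1 = 0;

-- swap the rebuilt volatile data in; the static partition is never touched
ALTER TABLE orders
  EXCHANGE PARTITION p_volatile WITH TABLE orders_open_new
  INCLUDING INDEXES WITHOUT VALIDATION;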

I'm also concerned about the in-place editing/cleansing of data. This will involve updates yet again, which are - of course - slow.

Knowing a little of your personal preferences regarding MERGE, I wouldn't expect any measurable difference converting the MERGE to FORALL INSERTs and UPDATEs in PL/SQL. The only possibility I can see for improvement there is that if you knew your table was somewhat clustered on a particular key (e.g. if the key was sequentially allocated on INSERT), then you could pre-sort the updates and reduce disk IO. However, if there are THAT many updates, I would be looking into a replace scenario.

Ross Leishman
Re: Merge Performance (wide open question here) [message #460201 is a reply to message #460155] Thu, 10 June 2010 02:56
tahpush
Messages: 961
Registered: August 2006
Location: Stockholm/Sweden
Senior Member

Any thoughts on upgrading to 11.2?
There are a few very interesting performance features.
Have a look at In-Memory Parallel Execution.
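As I understand it, in 11.2 it kicks in when the automatic degree policy is on, e.g.:

alter session set parallel_degree_policy = auto;  -- 11.2: enables auto DOP and in-memory parallel execution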

Re: Merge Performance (wide open question here) [message #460252 is a reply to message #460201] Thu, 10 June 2010 06:19
John Watson
Messages: 8960
Registered: January 2010
Location: Global Village
Senior Member
Quote:
They incrementally load data so after X rows they update their statistics.
Are they actually using dbms_stats.gather_something? If so, and if you know what X is, might you save a bit of time by using dbms_stats.set_whatever instead?
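For example (all the numbers are invented; you would plug in what you know about X and the table's shape):

BEGIN
  dbms_stats.set_table_stats(
    ownname => user,
    tabname => 'WORK_ORDERS',  -- hypothetical work table
    numrows => 50000000,       -- you already know roughly what you loaded
    numblks => 800000,
    avgrlen => 120);
END;
/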

Quote:
they had to do a lot of parameter tuning with sga/pga and other memory parameters
I have sometimes had good results by going back to older memory management techniques:
alter session set workarea_size_policy=manual;
alter session set sort_area_size = 524288000;   -- bytes; many hundreds of MB, value is just an example
alter session set hash_area_size = 1048576000;  -- even more hundreds of MB, again an example

I believe (unproven) that the limits the automatic PGA management mechanism puts on any single workarea, no matter what you set the target to, can restrict the optimizer a bit.

I also wonder about this:
Quote:
They load data from a source system into work tables
I remember trying to eliminate the use of staging tables by using pipelined functions, so that in effect several steps in the ETL process run concurrently rather than consecutively. But my PL/SQL knowledge isn't in your class, so I won't try to go further.
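From memory, the shape of it is something like this (the types, names, and the trivial cleansing rule are all invented):

CREATE OR REPLACE TYPE clean_row_t AS OBJECT (order_id NUMBER, amount NUMBER);
/
CREATE OR REPLACE TYPE clean_row_tab AS TABLE OF clean_row_t;
/
CREATE OR REPLACE FUNCTION clean_feed(p_src SYS_REFCURSOR)
  RETURN clean_row_tab PIPELINED
AS
  v_id  NUMBER;
  v_amt NUMBER;
BEGIN
  LOOP
    FETCH p_src INTO v_id, v_amt;
    EXIT WHEN p_src%NOTFOUND;
    -- the editing that currently happens in the work tables goes here
    PIPE ROW (clean_row_t(v_id, NVL(v_amt, 0)));
  END LOOP;
  CLOSE p_src;
  RETURN;
END;
/

-- then merge straight from the function, with no intermediate table
MERGE INTO fact_orders t
USING (SELECT order_id, amount
       FROM TABLE(clean_feed(CURSOR(SELECT order_id, amount FROM src_orders)))) s
ON (t.order_id = s.order_id)
WHEN MATCHED THEN UPDATE SET t.amount = s.amount
WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount);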

Just a couple of thoughts. Have you any code examples yet? What do the wait events and db_time figures look like?
Re: Merge Performance (wide open question here) [message #460278 is a reply to message #460252] Thu, 10 June 2010 08:03
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
I knew you guys would have lots to say. This is great stuff and I will check it out as I go. I will update you with results when I reach any great moments. Here is what I know today:

1) COMPRESSION: I have reviewed compression and I am beginning to think it is not going to help me. 11gR2 compression seems heavily weighted toward reducing disk at the expense of CPU: some people report a 50%-80% increase in CPU time, or more, when enabling 11g compression, so that won't help me. And 10g compression will be undone by updates, so if a large percentage of updates occur then that is going backwards as well.
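For reference, these are the two flavours I was weighing up (table name hypothetical):

ALTER TABLE fact_orders COMPRESS;           -- basic: only direct-path loads compress, updates decompress rows
ALTER TABLE fact_orders COMPRESS FOR OLTP;  -- 11gR2: conventional DML compresses too, at a real CPU cost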

ROSS:

I have not seen an APPEND hint yet; will investigate.
I will get more info on the split between INSERT vs. UPDATE.
Indexes are not invalidated before the load.
Tables are well indexed.
There are no BITMAP indexes (I know these are horrid for update situations).
To my knowledge RI constraints are in force (I'll get more info on whether this is everywhere or just in certain places).

TAHPUSH:
They have plans to upgrade to 11gR2. I'll get a date.
I will research IN-MEMORY PARALLEL EXECUTION today to see where it might take me.

JOHN:
I will review DBMS_STATS.SET with these guys. I am thinking, though, that the reason they collect stats after X operations is that they have no clue what to set the values to. This is a creative idea though, which is what I am looking for.

I will verify their use of MANUAL and see what they are doing with their memory. I suspect they did as you have: gone back to the old way so they have control.

Pipelined functions sound very interesting. I will see how much time is being taken by writing/reading the work tables, and have one of these dudes experiment with pipelined functions if there looks to be opportunity in that part of the process.

I will feed code examples through next week after my second meeting with them. I am trying to get them to agree to a four hour working session early next week. We will see how bad they want help I guess.

Thanks guys. I knew your caliber of minds would have ideas. Kevin
Re: Merge Performance (wide open question here) [message #460566 is a reply to message #460278] Sat, 12 June 2010 08:36
rleishman
Messages: 3728
Registered: October 2005
Location: Melbourne, Australia
Senior Member
The purpose of the APPEND hint is fairly obvious.

MERGE, and indeed INSERT, will maintain indexes row by row - even with direct path. SQL*Loader defers index maintenance until after a direct path load; that costs more TEMP, but can reduce logical IO.

I don't think there is a SQL equivalent of what is possible with direct path SQL*Loader. If you have plenty of indexes, this could be a factor.

If the proportion of data being loaded is large compared to the current size of the table, it might be better to invalidate and rebuild the indexes anyway.
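That is, something along these lines around the load (index name invented):

ALTER SESSION SET skip_unusable_indexes = TRUE;
ALTER INDEX sales_fact_ix1 UNUSABLE;
-- ... run the MERGE / direct-path load here ...
ALTER INDEX sales_fact_ix1 REBUILD PARALLEL 8 NOLOGGING;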

RI constraints will disable direct-path loads. If you disable them to perform a direct path load, you then have to re-validate the WHOLE table. The only way around this is Partition Exchange.

Good luck.

Ross Leishman
Re: Merge Performance (wide open question here) [message #460568 is a reply to message #460093] Sat, 12 June 2010 08:48
Kevin Meade
Messages: 2103
Registered: December 1999
Location: Connecticut USA
Senior Member
Thanks again Ross.

Yes, they are doing things with Partition Exchange.

Yes, they are very interested in their RI, since the incoming data has none. I believe they validate their data at the time they move it to their work tables, so they may or may not have RI turned on in the warehouse; I will check that. I suspect that if they trust their loads, they will have the RI turned on but "trusted". This can be good and bad: good if it makes loading faster, bad because it disables removal of DUNSEL JOINS.
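By "trusted" I am guessing at something like RELY NOVALIDATE on the foreign keys, e.g. (table and constraint names hypothetical):

ALTER TABLE fact_orders MODIFY CONSTRAINT fk_orders_cust ENABLE NOVALIDATE;
ALTER TABLE fact_orders MODIFY CONSTRAINT fk_orders_cust RELY;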

It was not mentioned in our meeting if they rebuilt indexes or not. Something for me to look at.

Good stuff. I am looking forward with anticipation to getting into full swing with these guys next week. One way or another it is going to be a blast, because I am going to learn things and be exposed to things that are new for me. And my fat head still tells me I can help them reduce their runtimes by half (how full of myself I am at times, but I think that goes with the job).

Kevin