Performance degrades as dB size increases [message #569800]
Thu, 01 November 2012 13:03
segan1415
Messages: 16   Registered: October 2012
Junior Member
I am running an ODBC loader application on the same system as my RDBMS. It is fed records from a test application across a lab LAN (<1ms latency). Currently I am using DIRECT PATH insert, and I am testing the impact of running multiple instances of the loader simultaneously to improve throughput. With one loader and DIRECT PATH I get around 7k records/second, but my system resources (CPU, memory, etc.) are largely idle. With two loaders I run into contention (all inserts go into one table, so while that table is locked by one loader the other has to wait); each CPU cycle is therefore used less efficiently, but the extra capacity more than makes up for it, and I see linear growth from 7k to 14k records/second. With three DIRECT PATH loaders, resource contention becomes the governing factor and throughput actually drops.
I then ran the same test with CONVENTIONAL PATH insert: two loaders improved throughput from 5k to 10k records/second, and three loaders from 10k to nearly 15k. At that point, however, the system is almost 100% taxed, and the gain over two loaders with DIRECT PATH is marginal. So for now I consider two loaders with DIRECT PATH to be the optimum configuration.
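For anyone unfamiliar with the distinction, here is a minimal sketch of the two insert modes at the SQL level. The table and column names are placeholders, and my loader actually drives this through the ODBC driver rather than issuing literal statements like these:

-- Conventional-path insert: rows pass through the buffer cache and can
-- reuse free space in existing blocks (placeholder table and columns).
INSERT INTO load_target (id, payload, created_at)
VALUES (:1, :2, :3);

-- Direct-path insert of the same rows: data is written above the table's
-- high-water mark, and the session holds an exclusive TM lock on the table,
-- which is why a second direct-path loader has to wait.
INSERT /*+ APPEND_VALUES */ INTO load_target (id, payload, created_at)
VALUES (:1, :2, :3);
COMMIT;  -- a direct-path insert must be committed before the same session can read the table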
The issue is that this performance gain degrades strongly as the size of the database increases. For each test I begin with an empty table and generate a large but, in theory, indefinitely sustainable load with my generator application. Each test runs for an hour: during the first fifteen minutes I can insert ~14,434 records/second, but by the last fifteen minutes throughput has dropped to ~10,052 records/second.
From the AWRSQRPT for the relevant INSERT statement, first 15 minutes:
Elapsed Time / exec = 57.6 ms
CPU Time / exec = 46.1 ms
Disk Reads / exec = 0.0
Disk Reads total = 107

Last 15 minutes:
Elapsed Time / exec = 115.6 ms
CPU Time / exec = 56.2 ms
Disk Reads / exec = 54.9
Disk Reads total = 513,154
That is a lot of disk reads. The AWRRPT shows they are "db file sequential read" waits, which took 0.08% of my DB time in the first window but a whopping 19.49% in the last. "enq: TM - contention" due to table locks also more than doubles as a share of DB time, from 10.54% to 23.8%.
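A quick way to watch those two waits grow during a run, outside of AWR, is to sample the cumulative figures in v$system_event at intervals and diff them; something like:

SELECT event,
       total_waits,
       time_waited_micro / 1e6 AS seconds_waited
FROM   v$system_event
WHERE  event IN ('db file sequential read', 'enq: TM - contention')
ORDER  BY time_waited_micro DESC;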
Observing the Windows perf monitor confirms that something strange is going on with the disk: for the first few minutes of any given test it looks steady, then for no obvious reason it starts to fluctuate chaotically. At the same time the queue on my loaders (a count of how many records are waiting in the loader cache to be inserted) starts filling and doesn't stop (eventually it overflows and records are lost).
"enq: TM - contention" obviously appears because I am using DIRECT PATH insert which requires exclusive lock on the table. Why it increases overtime is unclear, but so be it. The real question is why the "db file sequential reads"? Internet research suggests this is often linked to indexes, so I dropped 3 of the 4 indexes from this table (leaving the primary key index), which improved the numbers somewhat but the same trend is still observed. I then reasoned that it may be reading from the disk so often because not enough memory is available to Oracle. So I took it off Automated Memory Management and increased max SGA size to 8912M from 4912M. In the Windows perf monitor I can see that Oracle is now consuming around 4GB mem, instead of 3GB previously. This made another gain, but the overall trend remains. Disk reads increase dramatically and performance degrades.
So why? It is clearly linked to the size of the table: if I stop the load generator, truncate the table, and restart the generator, performance immediately returns to the 14k mark. Just as inevitably, though, something happens in the database whereby the disk suddenly starts taking a lot of physical reads and performance drops off. Your thoughts are appreciated. I have the full AWRRPT and AWRSQRPT for the relevant time periods, and I can run other tests too if you like.
(I have become aware that ODBC is not recommended by... anyone, and I welcome your votes on the best alternative: OCI, JDBC, etc. But I cannot make changes like that at the drop of a hat; for now I am stuck with ODBC.)
Re: Performance degrades as dB size increases [message #569828 is a reply to message #569800]
Fri, 02 November 2012 03:52
John Watson
Messages: 8960   Registered: January 2010   Location: Global Village
Senior Member
Hash partitioning would solve the concurrent direct load issue, perhaps scaling linearly as far as you want. But: firstly, you would have to build the partitioning algorithm into the application to be sure of achieving an affinity between loaders and partitions; secondly, it is expensive.
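Something like this, with placeholder names and an arbitrary partition count; the application would have to route each row to the loader that owns the partition its key hashes to:

CREATE TABLE load_target (
    id         NUMBER        NOT NULL,
    payload    VARCHAR2(200),
    created_at DATE
)
PARTITION BY HASH (id)
(PARTITION p1, PARTITION p2, PARTITION p3, PARTITION p4);

-- With that affinity in place, each loader's direct-path insert only needs
-- the lock on its own partition rather than on the whole table, e.g.:
INSERT /*+ APPEND_VALUES */ INTO load_target PARTITION (p1) (id, payload, created_at)
VALUES (:1, :2, :3);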
I can't see why inserting a few million rows would make I/Os per conventional insert increase that much. It could be space allocation. Is it the same if, when you truncate, you use the REUSE STORAGE option so that the extents remain? Or you could pre-allocate extents before each run with ALTER INDEX|TABLE...ALLOCATE EXTENT commands. Creating the segments in a tablespace with a large (1G?) uniform extent size might help, too. Rough examples of those three options, with placeholder object names and sizes, follow below.
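-- Keep the already-allocated extents when clearing the table between runs:
TRUNCATE TABLE load_target REUSE STORAGE;

-- Or pre-allocate space before a run starts:
ALTER TABLE load_target ALLOCATE EXTENT;
ALTER INDEX load_target_pk ALLOCATE EXTENT;

-- Or create the segments in a tablespace with large uniform extents:
CREATE TABLESPACE load_ts
    DATAFILE 'load_ts01.dbf' SIZE 32G
    EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1G;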
Post the reports for the first and last 15 mins. Someone might see something.