using data compression on CLOBS [message #54131]
Thu, 31 October 2002 05:01
charles lanfair
Messages: 1 Registered: October 2002
Junior Member
We use a CLOB data type to store an XML document representing an insurance quote. Periodically, the XML document has to change as new data needs to be collected in the app. The conversion program, written in Java, reads a CLOB, makes the appropriate decisions regarding conversion, converts, and writes the data back to the database: 15,000 rows in 1.4 hours.
The minimum, average, and maximum sizes of the XML are 20 KB, 33 KB, and 200 KB.
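For what it's worth, a quick back-of-envelope calculation on the figures above (15,000 rows in 1.4 hours, average 33 KB per document) shows how little data is actually moving per second. The class and variable names here are illustrative, not from the original program:

```java
// Back-of-envelope throughput from the figures in the post:
// 15,000 rows converted in 1.4 hours, average CLOB size 33 KB.
public class Throughput {
    public static void main(String[] args) {
        double rows = 15_000;
        double hours = 1.4;
        double seconds = hours * 3600;        // 5040 seconds total
        double rowsPerSec = rows / seconds;   // ~2.98 rows per second
        double avgKb = 33;
        double kbPerSec = rowsPerSec * avgKb; // ~98 KB/s moved, each way
        System.out.println(rowsPerSec + " rows/s, " + kbPerSec + " KB/s");
    }
}
```

At roughly 98 KB/s of payload in each direction, the wire time for an average document is a small fraction of a second, which suggests the bulk of the 1.4 hours is spent elsewhere.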
One of the app people suggests that the XML be compressed and put back in the CLOB (or perhaps a BLOB), and that this will reduce I/O and network time. The app would uncompress on reads and compress on writes. I think I've proved that I/O accounts for only 6-9 minutes of that time, but now this compression scheme is being pitched as a remedy for "network latency," on the theory that less data moves faster.

My thought is that (barring any tools to test with) 33 KB is less than 5 blocks. How long can it take to move 5 blocks of data, and why oh why would compressing the data in the database be a good idea? I'm open-minded on this, but extremely leery. Anyone have comments or experience?

Tim
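For concreteness, here is a minimal sketch of what the proposed compress-on-write / uncompress-on-read scheme would look like in Java using the standard `java.util.zip` GZIP classes. The class name, method names, and sample XML are hypothetical, not from the actual app:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch of the suggested scheme: gzip the XML before
// storing it (presumably in a BLOB), gunzip it after reading.
public class XmlCompression {

    // Compress an XML string to GZIP bytes suitable for a BLOB column.
    static byte[] compress(String xml) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(xml.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    // Decompress GZIP bytes back into the original XML string.
    static String decompress(byte[] data) throws Exception {
        try (GZIPInputStream gz =
                new GZIPInputStream(new ByteArrayInputStream(data))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        String xml = "<quote><insured>ACME</insured>"
                   + "<premium>123.45</premium></quote>";
        byte[] packed = compress(xml);
        // Round trip must be lossless or the scheme is unusable.
        if (!decompress(packed).equals(xml)) {
            throw new AssertionError("round trip failed");
        }
        System.out.println("original=" + xml.length()
                + " bytes, compressed=" + packed.length + " bytes");
    }
}
```

Note that every read and write now pays a CPU cost for the (de)compression, and the compressed column is opaque to any SQL or XML tooling in the database, which is part of why the trade-off deserves skepticism.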