Re: Big Update/DML
Date: Thu, 27 Aug 2020 03:22:39 +0300
Message-ID: <CAOVevU6_tdPLbJeME0-cZj_FXni7+vNpvNZ20UUv7Qq8ZZe1+Q_at_mail.gmail.com>
Hi Sanjay,
It would be better if you provided more details about your update - the exact update statement would be helpful. Is this column nullable or NOT NULL?
On Thu, Aug 27, 2020 at 1:12 AM Sanjay Mishra <dmarc-noreply_at_freelists.org> wrote:
> Andy
>
> Yes, it looks like an option if we are doing the work online - it takes
> more time but does not require downtime. In our case multiple DDLs are
> being run against the existing environment due to an application upgrade,
> so all the work has to be done with downtime. So the challenge is to reduce
> the time of DML operations on big tables containing a few billion rows.
>
> Tx
> Sanjay
>
> On Wednesday, August 26, 2020, 11:20:55 AM EDT, Andy Sayer <
> andysayer_at_gmail.com> wrote:
>
>
> It does sound like a virtual column could be the ideal solution. But if
> data needs to be physically stored or cannot be calculated
> deterministically at any point in time then Connor has a great demo of
> using dbms_redefinition to create a new table online with a function to map
> the new column. There’s obviously some overhead with context switching but
> it may be far better than some of the obstacles you might be facing at the
> moment:
>
> https://connor-mcdonald.com/2016/11/16/performing-a-large-correlated-update/
> (and you might be able to help it with pragma udf in the right
> circumstances).
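> As a rough sketch of that approach (table, column, and expression names here
> are illustrative, not from this thread), the mapping function and the
> redefinition calls look something like:
>
> ```sql
> -- Hypothetical names; the real expression depends on the actual update.
> CREATE OR REPLACE FUNCTION map_new_col(p_a NUMBER, p_b NUMBER)
>   RETURN NUMBER DETERMINISTIC
> IS
>   PRAGMA UDF;  -- may reduce the SQL-to-PL/SQL context-switch cost
> BEGIN
>   RETURN p_a * p_b;
> END;
> /
>
> BEGIN
>   DBMS_REDEFINITION.start_redef_table(
>     uname       => USER,
>     orig_table  => 'BIG_TABLE',
>     int_table   => 'BIG_TABLE_INT',
>     col_mapping => 'a a, b b, map_new_col(a, b) new_col');
>   DBMS_REDEFINITION.finish_redef_table(USER, 'BIG_TABLE', 'BIG_TABLE_INT');
> END;
> /
> ```
>
> The interim table BIG_TABLE_INT must be created up front with the new
> column; Connor's post linked above walks through the full sequence.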
>
> Obviously, how helpful this is depends where the work is currently going
> and how online this needs to be.
>
>
> Thanks,
> Andrew
>
> On Wed, 26 Aug 2020 at 16:00, Jonathan Lewis <jlewisoracle_at_gmail.com>
> wrote:
>
>
> Is that 3-4 billion rows each, or total ?
>
> I would be a little suspicious of an update which populates a new column
> with a value derived from existing columns. What options might you have for
> declaring a virtual column instead - one which you could index if needed?
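> For example (illustrative names - the real expression depends on the actual
> update), a virtual column derived from existing columns costs no rewrite of
> the existing rows at all:
>
> ```sql
> -- Hypothetical table and columns; metadata-only, no rows are updated.
> ALTER TABLE big_table ADD (
>   new_col GENERATED ALWAYS AS (col_a * col_b) VIRTUAL
> );
>
> CREATE INDEX big_table_new_col_idx ON big_table (new_col);
> ```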
>
> Be extremely cautious about calculating space requirements - if you're
> updating every row of old data you might find that you're causing a
> significant fraction of the rows in each block to migrate, and there's a
> peculiarity of bulk row migration that can effectively "waste" 25% of the
> space in every block that becomes the target of a migrated row.
>
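> One way to gauge how much migration such an update has caused (a sketch
> using the old-style ANALYZE mechanism, with a hypothetical table name):
>
> ```sql
> -- Assumes the CHAINED_ROWS table exists (created by utlchain.sql).
> ANALYZE TABLE big_table LIST CHAINED ROWS INTO chained_rows;
>
> SELECT COUNT(*) FROM chained_rows
> WHERE  table_name = 'BIG_TABLE';
> ```
>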
> These effects can be MUCH worse when the table is compressed (even for OLTP)
> since the update has to decompress the row before updating and then only
> "re-compresses" intermittently as the block becomes full. The CPU cost can
> be horrendous and you still have the problem of migration if the addition
> means the original rows can no longer fit in the block.
>
> If it is necessary to add the column you may want to review what "alter
> table move online" can do in the latest versions (in case you can make it
> add the column as you move), or review the options for dbms_redefinition -
> maybe running several redefinitions concurrently rather than trying to do a
> parallel update on any single table.
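> As a minimal sketch of the move variant (hypothetical table name; MOVE
> ONLINE requires 12.2 or later, and the move rebuilds the table and packs
> the rows, eliminating earlier migration):
>
> ```sql
> ALTER TABLE big_table ADD (new_col NUMBER);  -- metadata-only if nullable
> ALTER TABLE big_table MOVE ONLINE;
> ```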
>
> Regards
> Jonathan Lewis
>
--
Best regards,
Sayan Malakshinov
Oracle performance tuning engineer
Oracle ACE Associate
http://orasql.org