Re: Fastest way to count exact number of rows in a very large table

From: Andy Sayer <andysayer_at_gmail.com>
Date: Fri, 2 Oct 2020 20:09:29 +0100
Message-ID: <CACj1VR6trYYtEq6sq65QPNTdct31s9cA5sZYFTvUk26Cnt1aYQ_at_mail.gmail.com>



Just because a table has the same number of rows, it doesn’t mean it has the same data. With 108 billion rows, your data is going to be changing quickly; in order to get accurate counts at the right point in time, you’re going to end up keeping your application offline for a window before and after your migration.

What you need to do is determine where you expect data to go missing and work out a way to check.

This will depend on how you’re doing your migration. I would suggest you use Cross-Platform Transportable Tablespaces (Doc ID 371556.1), as that would allow you to do a physical import and just convert the files to the right endianness. This starts by making sure all data has been written to your data files (so they can be made read only on the source system). As you’re working with the physical data files rather than the logical data (rows in tables), the only way you’re going to lose rows is by corrupting your files. You can check for corruption using RMAN once you’ve imported the converted files. No need to count all your rows, and no need to hope that a count is all you need to compare.
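As a very rough sketch of that flow (the tablespace name, file paths and platform string below are placeholders for illustration only; the MOS note has the full procedure), the convert-and-check steps could look something like this:

-- Source (Solaris): stop changes and make the files consistent
SQL> ALTER TABLESPACE big_data READ ONLY;

-- Source: export the tablespace metadata in transportable mode
$ expdp system DIRECTORY=dp_dir DUMPFILE=xtts_meta.dmp \
    TRANSPORT_TABLESPACES=big_data TRANSPORT_FULL_CHECK=YES

-- Destination (Linux): convert the copied file to the local endian format
RMAN> CONVERT DATAFILE '/stage/big_data01.dbf'
        FROM PLATFORM 'Solaris[tm] OE (64-bit)'
        FORMAT '/u01/oradata/ORCL/big_data01.dbf';

-- Destination: plug the tablespace in, then check the file for corruption
$ impdp system DIRECTORY=dp_dir DUMPFILE=xtts_meta.dmp \
    TRANSPORT_DATAFILES='/u01/oradata/ORCL/big_data01.dbf'

RMAN> VALIDATE DATAFILE '/u01/oradata/ORCL/big_data01.dbf';

The comparison then becomes "are the files physically intact?", which RMAN can answer far faster than counting 108 billion rows (and you can add logical block checking with the CHECK LOGICAL option if you want it).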

Hope that helps,
Andy

On Fri, 2 Oct 2020 at 19:38, ahmed.fikri_at_t-online.de <ahmed.fikri_at_t-online.de> wrote:

> Hi Ashoke,
>
> Could you send the execution plan of the query too? I think there is no
> general approach for that; it depends on several factors: whether the table
> has indexes (normal/bitmap) and, if it does, the size of the table compared
> to the existing indexes. But generally, parallel processing should help.
>
> Best regards
>
>
> Ahmed
>
> -----Original Message-----
>
> Subject: Fastest way to count exact number of rows in a very large table
>
> Date: 2020-10-02T19:45:19+0200
>
> From: "Ashoke Mandal" <ramukam1983_at_gmail.com>
>
> To: "ORACLE-L" <oracle-l_at_freelists.org>
>
> Dear All,
>
> I have a table with 108 billion rows and am migrating this database from
> Oracle 11g on Solaris to Oracle 12c on Linux.
>
> After the migration I need to compare the row count of this table in both
> the source DB and the destination DB. It takes more than two hours to get
> the row count from this table.
>
>
> SQL> select to_char(count(*), '999,999,999,999') from test_data;
>
> TO_CHAR(COUNT(*)
> ----------------
> 108,424,262,144
> Elapsed: 02:22:46.18
>
> Could you please suggest some tips to get the row count faster, so that it
> reduces the cut-over downtime?
>
> Thanks,
>
>
> Ashoke

--
http://www.freelists.org/webpage/oracle-l
Received on Fri Oct 02 2020 - 21:09:29 CEST
