Re: Question on IO consideration
Date: Wed, 22 Sep 2021 09:47:35 +0200
Message-ID: <738f2e10-b419-3e2f-f1d2-e145b968696b_at_bluewin.ch>
In addition, once you use Hybrid Columnar Compression (HCC), the data is
stored in columnar format. (Am I mistaken? I am surprised that nobody
has mentioned that simple fact.)
In an ETL job there are phases where data is checked for valid keys and
additional data is added (e.g. from dimension tables) to simplify querying.
In such phases columnar storage is most likely to help, because you often
use only a few columns of a table. You could consider using HCC on
selected tables.
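As a minimal sketch of what that could look like (the table name "sales" is hypothetical, and HCC requires Exadata or other HCC-capable storage; QUERY HIGH is just one of the available compression levels):

```sql
-- Rebuild an existing table as HCC-compressed (requires Exadata,
-- ZFS Storage Appliance, or other HCC-capable storage).
-- Levels: QUERY LOW, QUERY HIGH, ARCHIVE LOW, ARCHIVE HIGH.
ALTER TABLE sales MOVE COMPRESS FOR QUERY HIGH;

-- Note: only direct-path (e.g. INSERT /*+ APPEND */) loads into the
-- table get HCC compression; conventional inserts do not.
```

Remember that conventional DML against HCC segments decompresses affected compression units, so this fits read-mostly ETL targets better than hot OLTP tables.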
Regards
Lothar
On 21.09.2021 at 23:41, Tanel Poder wrote:
Columnar storage is likely to help less when you are doing aggregation
of some sort (although nothing is certain unless you check it).
Some queries on DBA_HIST_ACTIVE_SESS_HISTORY should tell you where the
time is spent.
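A starting point for such a query might look like the following (the MODULE filter value is hypothetical; adjust it, or filter on SQL_ID or username, to match your ETL jobs):

```sql
-- Where did a given ETL job spend its time over the last day?
-- DBA_HIST_ACTIVE_SESS_HISTORY holds ~10-second ASH samples from AWR.
SELECT session_state, event, COUNT(*) AS samples
FROM   dba_hist_active_sess_history
WHERE  sample_time > SYSDATE - 1
AND    module = 'MY_ETL_JOB'        -- hypothetical module name
GROUP  BY session_state, event
ORDER  BY samples DESC;
```

Each sample approximates ~10 seconds of DB time, so the top rows show whether the job is mostly ON CPU or waiting on specific I/O or cluster events.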
> Exadata storage cells (starting from cellsrv v12.1.2.1.0 / Jan 2015)
> can use fully columnar flash cache for greatly speeding up reads (the
> cache is fully columnar, not hybrid like the datafile storage).
>
> You can look into slides 17-20 in this presentation (from 2015), some
> things may have changed/improved by now:
>
> https://tanelpoder.com/2015/03/24/oracle-exadata-performance-latest-improvements-and-less-known-features/
>
> This would speed up queries doing lots of scanning /if/ your current
> SQL performance bottleneck is about reading too many datafile blocks
> (and not somewhere else, like having a large fact-fact hash join spill
> to temp). This columnar flash caching won't speed up writes & large
> data loads.
>
> So, if the goal is to speed up your ETL processing, you should first
> measure (if you haven't done so already) where the response time of
> your ETL jobs is spent... and see if it's in /smart scan data
> retrieval/, where the storage cells can't somehow keep up with the
> data ingest demand of the DB layer and they wait for disk reads a lot
> (but I'd say it's unlikely... depending on how your flash cache
> allowance is set up. You might already be benefitting from the
> columnar flash cache, the slides have metrics that show how much).
>
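One way to check whether the columnar flash cache is already being used is to look at the relevant cell statistics (statistic names vary by version, so treat these as examples and check V$STATNAME for the exact names on your system):

```sql
-- Cell-level I/O statistics that hint at flash cache and smart scan
-- effectiveness; names below exist on recent Exadata versions but
-- should be verified against V$STATNAME on your release.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('cell flash cache read hits',
                'physical read total bytes',
                'cell physical IO interconnect bytes returned by smart scan');
```

Comparing interconnect bytes returned by smart scan against total physical read bytes gives a rough sense of how much offloading and caching is already happening.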
> Otherwise, just to reduce I/O (which kind of I/O - data retrieval for
> scans or data processing for GROUP BY/HASH JOINs?), you could review
> your top SQLs' execution plans and see if a better partitioning,
> subpartitioning (and maybe even Exadata-aware indexing) scheme would
> help you do less I/O. I would look into fancier things like attribute
> clustering (and zone maps) only after the basics are done.
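For reference, attribute clustering with a zone map can be declared on an existing table roughly like this (table and column names are hypothetical; requires Oracle 12.1+ and, for zone maps, Exadata or licensed options):

```sql
-- Hypothetical fact table: physically cluster rows by commonly
-- filtered columns and maintain a zone map, so scans can prune
-- whole ranges of blocks that cannot contain matching rows.
ALTER TABLE sales
  ADD CLUSTERING BY LINEAR ORDER (region_id, order_date)
  WITH MATERIALIZED ZONEMAP;

-- Clustering only takes effect for direct-path loads and moves:
ALTER TABLE sales MOVE;
```

The zone map then lets scans skip extents whose min/max values for region_id or order_date fall outside the query predicates, which is the "clustering kind of technique" the original question asked about.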
>
> --
> Tanel Poder
> https://tanelpoder.com
>
>
>
> On Tue, Sep 21, 2021 at 2:48 PM Lok P <loknath.73_at_gmail.com> wrote:
>
> Hello Listers, We have Oracle Exadata databases which mostly
> perform warehousing or batch-type data processing. A few are
> hybrid, i.e. a combination of OLTP and warehousing/analytics
> processing. ETL jobs run on a few of these and
> move/read/write billions of rows daily. The databases are
> currently 50TB to ~150TB in size. A few architecture team members
> suggested evaluating whether we can use a columnar-database type
> of offering for I/O reduction, and thus better performance,
> considering future growth. As per my understanding, Oracle stores
> data in row format only, so is there any other offering from
> Oracle for a columnar storage format or columnar database that we
> should evaluate? Or is there any clustering kind of technique
> which could be evaluated that would help reduce I/O? I want to
> understand experts' views on this.
>
> Regards
> Lok
>
--
http://www.freelists.org/webpage/oracle-l
Received on Wed Sep 22 2021 - 09:47:35 CEST