Re: Question on IO consideration
Date: Tue, 21 Sep 2021 17:41:33 -0400
Message-ID: <CAMHX9JL1vNEpS5ikHw_vaytBuYOrZ274gGpQDjnMm+fBiyK0Ng_at_mail.gmail.com>
Exadata storage cells (since cellsrv v12.1.2.1.0 / Jan 2015) can cache data in flash in a fully columnar format, which can greatly speed up reads (the flash cache copy is fully columnar, unlike the hybrid columnar format used in the datafiles).
You can look at slides 17-20 in this presentation (from 2015); some details may have changed/improved by now:
This would speed up queries that do lots of scanning *if* your current SQL performance bottleneck is reading too many datafile blocks (and not somewhere else, like a large fact-to-fact hash join spilling to temp). Columnar flash caching won't speed up writes or large data loads.
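To see whether scans are already being offloaded and served from flash, you can compare a few system statistics. A minimal sketch (these v$sysstat statistic names exist on Exadata, but the exact set available depends on your database version):

```sql
-- Sketch: how much scan I/O is eligible for offload, and how much
-- is served from the cell flash cache (cumulative since instance start)
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('cell flash cache read hits',
                'cell physical IO bytes eligible for predicate offload',
                'cell physical IO interconnect bytes returned by smart scan');
```

If the interconnect bytes returned by smart scan are a small fraction of the bytes eligible for offload, the cells are filtering/projecting effectively; the flash cache read hits tell you how much of the physical I/O never had to touch disk.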
So, if the goal is to speed up your ETL processing, you should first measure (if you haven't done so already) where the response time of your ETL jobs is spent... and see whether it's in *smart scan data retrieval*, where the storage cells somehow can't keep up with the data ingest demand of the DB layer and wait a lot on disk reads. I'd say that's unlikely, depending on how your flash cache allowance is set up - you might already be benefiting from the columnar flash cache (the slides show metrics that tell you how much).
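One quick way to get that response-time breakdown is ASH (needs the Diagnostics Pack license). A minimal sketch - &sql_id is a placeholder you'd substitute with one of your ETL job's SQL_IDs:

```sql
-- Sketch: where did this SQL spend its time over the recent ASH window?
-- "ON CPU" samples vs. I/O waits (cell smart table scan, direct path
-- read temp, etc.) tell you whether the bottleneck is data retrieval
-- at the cells or processing/spilling at the DB layer.
SELECT session_state, event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  sql_id = '&sql_id'
GROUP  BY session_state, event
ORDER  BY samples DESC;
```

If most samples are "ON CPU" or temp I/O rather than cell smart scan waits, faster scan retrieval (columnar flash cache included) won't move the needle much.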
Otherwise, just to reduce I/O (which kind of I/O - data retrieval for scans, or data processing for GROUP BYs/hash joins?), you could review your top SQLs' execution plans and see whether a better partitioning/subpartitioning (and maybe even Exadata-aware indexing) scheme would allow doing less I/O. I would look into fancier things like attribute clustering (and zone maps) only after the basics are done.
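For reference, attribute clustering with a zone map (available since 12.1.0.2) looks like this - the table and column names below are made up for illustration:

```sql
-- Hypothetical example: cluster rows by the columns your scans filter on,
-- and maintain a zone map so scans can skip non-matching zones entirely
ALTER TABLE sales
  ADD CLUSTERING BY LINEAR ORDER (customer_id, sale_date)
  WITH MATERIALIZED ZONEMAP;

-- clustering only takes effect on data movement (direct path loads,
-- partition moves, etc.), e.g.:
ALTER TABLE sales MOVE;
```

This complements Exadata storage indexes: well-clustered data makes both zone maps and storage indexes far more selective, so full scans read fewer regions.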
--
Tanel Poder
https://tanelpoder.com

Received on Tue Sep 21 2021 - 23:41:33 CEST

On Tue, Sep 21, 2021 at 2:48 PM Lok P <loknath.73_at_gmail.com> wrote:
> Hello Listers, We have Oracle Exadata databases which mostly perform
> warehousing or batch-type data processing, and a few are hybrid, i.e. a
> combination of both OLTP and warehousing/analytics processing. ETL jobs
> run on a few of these, which move/read/write billions of rows daily.
> The databases are currently 50TB to ~150TB in size. A few architecture
> team members suggested evaluating whether we can use a columnar database
> type of offering for I/O reduction and thus better performance,
> considering future growth. As per my understanding, Oracle stores data
> in row format only, so is there any other offering from Oracle for a
> columnar datastore format or columnar database that we should evaluate?
> Or is there any clustering kind of technique which can be evaluated that
> would help reduce I/O? Want to understand experts' views here on this.
>
> Regards
> Lok
>
-- http://www.freelists.org/webpage/oracle-l