Re: db file multiblock read count
Date: Sun, 17 Jan 2021 16:40:38 +0100
Message-Id: <8934B8DC-4318-4ACB-805A-5B3CBA16D71E_at_gmail.com>
I too agree with most previous replies.
At some point in the past, setting db_file_multiblock_read_count to 0 would make it Oracle-controlled and dynamic. However, since it's not documented what that mechanism takes into account and how it responds to it, I never used that setting: dynamic settings are unpredictable, especially if the whole mechanism is undocumented. I just read the db_file_multiblock_read_count parameter description in the Oracle 19 documentation, and any mention of it being dynamic is gone; instead it says that the default value is the 'maximum I/O size that can be performed efficiently', which is a rather clumsy and non-descriptive formulation.
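If you want to see whether the parameter is still at its default on your system, a quick check (a minimal sketch, using v$parameter) is:

  select value, isdefault
  from   v$parameter
  where  name = 'db_file_multiblock_read_count';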
Setting the value higher gives Oracle the ability to read a large chunk in one go, which is useful if all the layers underneath Oracle support that IO size and the optimal path for your execution is full scans. The larger you set db_file_multiblock_read_count, the cheaper a full segment scan becomes for the optimizer. It's up to you to determine a setting that is optimal. My takeaway would be to have db_file_multiblock_read_count at 128 (with an 8k block size) and work with that. I don't have confidence in 'leave it and let Oracle deal with it', as some suggested, because Oracle doesn't know what your system looks like.
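For what it's worth, a minimal sketch of setting it explicitly (assuming an spfile is in use) would be:

  alter system set db_file_multiblock_read_count = 128 scope=both;

and resetting it back to the default:

  alter system reset db_file_multiblock_read_count scope=spfile sid='*';

(the reset takes effect at the next restart).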
If doing large IOs is optimal for whatever you do with that database, and it makes sense to spend time tuning it (the choice whether to tune something should be a cost-based analysis too ;-), there are a couple of specifics I found that are good to know; a quick way to verify the actual IO sizes on your own system follows the list below. I didn't check them for the very latest versions (12.2+); that's up to you.
- There is no maximum IO size in linux to my knowledge. There used to be one, but that was a long time ago.
- That doesn’t mean filesystems, storage drivers and storage hardware do not have limits (obviously).
- A buffered multiblock IO (administered under the wait event 'db file scattered read') cannot go beyond 1MB (128 blocks at 8k, 64 at 16k), even if you set DMRC higher.
- A direct path multiblock IO (administered under the wait event 'direct path read') can go as high as 4096 blocks (DMRC; 32MB at 8k).
- ASM will not perform a larger IO than the AU size set for the diskgroup, because that is the potential maximum size of adjacent blocks (4MB by default on Exadata).
- I found multiple IO mechanisms in Oracle that have 1MB as the maximum IO size (on Linux); one of them is the database writer. Oracle sometimes refers to this limit as 'SSTIOMAX'. It is not a parameter and is nowhere to be found in the database, but some trace messages mention it, so it may be a constant defined in a header file (one of the functions involved is kcbb_current_space()).
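To verify what size multiblock IOs your system actually performs, a rough sketch (assuming a test table t1 you can afford to full scan) is to enable extended SQL trace and look at the wait lines:

  alter session set events '10046 trace name context forever, level 8';
  select /*+ full(t1) */ count(*) from t1;
  alter session set events '10046 trace name context off';

In the resulting trace file, 'db file scattered read' waits show a blocks= field and 'direct path read' waits show a block cnt= field with the actual number of blocks per IO.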
Please mind this doesn't mean I'm saying you should set it this way. Actually, probably not. Like anything, there is a 'goldilocks zone' where the setting performs best: setting it too high makes it inefficient, just as setting it too low does.
So set DMRC to 128 (when at an 8k blocksize), and if you are doing data warehousing/OLAP on Exadata, set the special Exadata mode for system statistics to further prefer full segment scans, and it's probably tuned well enough for its use.
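For reference, a minimal sketch of what that looks like (dbms_stats with the 'EXADATA' gathering mode; check whether your version supports it):

  exec dbms_stats.gather_system_stats('EXADATA');

As Neil points out below, only do this for a dedicated data warehouse on Exadata, because it makes full scans a lot more attractive across the board.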
Frits Hoogland
http://fritshoogland.wordpress.com
frits.hoogland_at_gmail.com
Mobile: +31 6 14180860
> On 17 Jan 2021, at 12:36, Neil Chandler <neil_chandler_at_hotmail.com> wrote:
>
> I agree with all the previous replies - and generally you should leave it alone - but it's worth understanding that setting it explicitly to 128 (8K block size default) is not the same as leaving it to default. Changing it could affect EVERY SQL statement in the system.
>
> Multi-block reads are (if left to default) controlled by 2 hidden parameters:
> _db_file_optimizer_read_count for costing (8)
> _db_file_exec_read_count for execution (128 assuming 8k block size, 64 for a 16k block size. Anything else is probably the wrong block size.)
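> For illustration, a sketch of a query to look at those hidden parameters (as SYS, since it reads x$ tables):
>
>   select x.ksppinm name, y.ksppstvl value
>   from   x$ksppi x, x$ksppcv y
>   where  x.indx = y.indx
>   and    x.ksppinm in ('_db_file_optimizer_read_count', '_db_file_exec_read_count');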
>
> Explicitly setting the MBRC will override the optimizer cost parameter "_db_file_optimizer_read_count", which is 8 by default.
>
> This will have an effect on your system statistics (assuming you've left them to default***), and therefore an effect on your execution plans.
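> A quick way to see which system statistics the optimizer is currently working with (a sketch; with default noworkload statistics the MBRC and read-time values show up as NULL):
>
>   select pname, pval1 from sys.aux_stats$ where sname = 'SYSSTATS_MAIN';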
>
> MBRC=default = a per-block multiblock cost (MBC) to single block cost (SBC) ratio of 0.270833 (i.e. for costing purposes, almost 4 blocks read via multiblock reads cost the same as 1 single block read)
> MBRC=128 = a per-block ratio of 0.1732, about 6 blocks via multiblock reads per single block read. Tablescans just got more attractive to the optimizer.
> MBRC=1024 = a per-block ratio of 0.1675, so tablescans are even more attractive (the arithmetic behind these ratios is shown below).
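> For reference, those ratios fall out of the standard noworkload costing formulas (assuming the default IOSEEKTIM=10ms, IOTFRSPEED=4096 bytes/ms and an 8k block size):
>
>   SREADTIM = IOSEEKTIM + blocksize/IOTFRSPEED        = 10 + 2 = 12
>   MREADTIM = IOSEEKTIM + MBRC*blocksize/IOTFRSPEED
>   per-block ratio = MREADTIM / (MBRC * SREADTIM)
>
>   MBRC=8:    (10 + 8*2)    / (8    * 12) =   26/96    = 0.2708
>   MBRC=128:  (10 + 128*2)  / (128  * 12) =  266/1536  = 0.1732
>   MBRC=1024: (10 + 1024*2) / (1024 * 12) = 2058/12288 = 0.1675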
>
> ***always leave your system statistics to default unless you have a dedicated data warehouse as the only database on an exadata.
> Under those circumstances there's a chance that gathering with 'exadata' might be better for you.
> As well as explicitly setting the MBRC in the system stats, it also adjusts the multiblock read time because IOTFRSPEED is raised from 4096, making the per-block ratio 0.022.
> That means roughly 50 blocks read via multiblock reads cost the same as one single block read. Your indexes need to be surgically precise to be used.
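> If you have gathered system statistics and want to go back to the defaults, a sketch of the reset (after deletion the optimizer falls back to the default noworkload values):
>
>   exec dbms_stats.delete_system_stats;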
>
> regards
>
> Neil Chandler
> Database Guy.
>
>
> From: oracle-l-bounce_at_freelists.org <oracle-l-bounce_at_freelists.org> on behalf of Jonathan Lewis <jlewisoracle_at_gmail.com>
> Sent: 17 January 2021 09:58
> To: oracle-l_at_freelists.org <oracle-l_at_freelists.org>
> Subject: Re: db file multiblock read count
>
>
> There are only two points you have to consider -
>
> a) does a non-default value have any effect on the actual activity at the time of execution
> b) does a non-default value have any effect on the cost of the query, and would that make any critical plans change in an undesirable way.
>
> Both points can be investigated with some very simple modelling, though the "actual activity" testing may require a little thought to cover variations like parallel execution, direct path reads in general, and possible effects on "small" tables; and the "costing" testing needs to consider the effects of system statistics (dbms_stats.xxx_system_stats) and/or the calibrate_io calls.
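> As a minimal sketch of the costing side (assuming a test table t1 large enough for a full scan to be plausible), you can set the parameter at session level and compare the cost of the same statement:
>
>   alter session set db_file_multiblock_read_count = 128;
>   explain plan for select /*+ full(t1) */ count(*) from t1;
>   select * from table(dbms_xplan.display);
>
> Repeat with different values (or with the parameter at its default) and watch how the full scan cost, and therefore the plan choice, moves.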
>
> Once you've done the tests that are most relevant to your application and setup you may still find that leaving everything to default is the safest strategy.
>
> Regards
> Jonathan Lewis
>
>
>
> On Fri, 15 Jan 2021 at 22:17, Moustafa Ahmed <moustafa_dba_at_hotmail.com> wrote:
> Hello
>
> There are many opinions out there about the right value for
> db file multiblock read count
> And the more I look, the more it seems to have become less significant than it was 10 years ago!
> Say a value of 1024 which is considerably high
> On Exadata and a DW system
> What would be the concerns with that?
--
http://www.freelists.org/webpage/oracle-l