Re: ASM and VMWARE

From: Chris Taylor <christopherdtaylor1994_at_gmail.com>
Date: Fri, 14 Jun 2024 22:08:06 -0500
Message-ID: <CAP79kiTUstASXzhm8b6Abfr2P3W37QFpqW-PUMf7Fh5V9JuKww_at_mail.gmail.com>



Also, I forgot to mention: you need to know the device limit from the storage array to the OS. It's often 256 (or 255) devices, so plan ahead and make each device large enough (32G, 256G, etc.) that you don't burn through the limit too quickly. Larger devices mean fewer total devices have to be presented.
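To put rough numbers on that trade-off, here is a minimal sketch. The 256-device cap and the reserve for non-database devices are assumptions for illustration; check the limits of your own array and OS stack:

```python
# Sketch: how far a per-host device-count budget stretches at different
# per-device sizes. DEVICE_LIMIT and RESERVED are assumed values.
DEVICE_LIMIT = 256   # assumed array-to-OS device cap (sometimes 255)
RESERVED = 16        # assumed devices held back for boot, non-DB use, growth

def max_capacity_gb(device_size_gb: int,
                    device_limit: int = DEVICE_LIMIT,
                    reserved: int = RESERVED) -> int:
    """Total capacity reachable before the device count runs out."""
    return (device_limit - reserved) * device_size_gb

for size_gb in (32, 256):
    print(f"{size_gb}G devices -> up to {max_capacity_gb(size_gb)}G "
          f"before hitting the device limit")
```

With these assumed numbers, 32G devices top out around 7.5T while 256G devices reach about 60T, which is the sense in which larger devices postpone the limit.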

On Fri, Jun 14, 2024, 9:32 PM Chris Taylor <christopherdtaylor1994_at_gmail.com> wrote:

> In addition to Mark's and Mladen's points, I have worked with _really_ good
> Linux and storage admins over the years including on Unity, 3par, and some
> other EMC storage units.
>
> We pretty much came to the conclusion that on modern storage you can have
> however many logical devices you want and use all the available disks in
> the array.
>
> For example:
>
> Let's say you want 30x32G devices that look like disks to the Linux
> system. On the storage array, each of these 30 devices would be built
> across all available physical disks in the array. To ASM it looks like
> 30 32G disks.
>
> In this way you get ASM advantages and storage advantages.
>
> HTH,
> Chris
>
>
> On Fri, Jun 14, 2024, 9:01 PM Mark W. Farnham <mwf_at_rsiz.com> wrote:
>
>> The biggest single problem with virtual disk infrastructure is whether or
>> not physical channels and physical disks will be isolated for use by the
>> Oracle database only.
>>
>> Engage the linux admin in that discussion about how to make sure other
>> uses of the disk farm cannot intermittently destroy database performance.
>>
>> Knowing the exact topology of the physical disk farm being managed
>> virtually is essential before carving things up. And that includes i/o
>> channels, both for solid state and spinning rust.
>>
>> In your project plan, build in time to build and throw away each of the
>> suggested ways, most especially including the system administrator's pet.
>> Some sys admins are bent to handling 50 bazillion little files, opened, used,
>> and closed by many individual users and applications, because that is what
>> they usually do. Some sys admins also know and have good advice for RDBMS
>> systems with a small number of large volumes or files.
>>
>> Also be sure not to shoot yourself in the foot by making the intermittent
>> writes to logs and alerts conflict with database volumes/files, control
>> files, temp, and undo.
>>
>> So you build something, slobber all over it and record the results. Throw
>> that away, and build another recommendation, and slobber all over that.
>> Rinse and repeat for at least three flavors: Your pet, Oracle's doc pet,
>> and the sys admin pet.
>>
>> Read up thoroughly on Kevin Closson's SLOB. Check for results other
>> people may have already done with hardware similar to yours.
>>
>> Also seriously question whether or not RAC is optimal for your project,
>> and what (if any) recovery and business continuation requirements you have.
>> See Mladen's reply for another build you may want to make and slobber on.
>>
>> Good luck,
>>
>> mwf
>>
>> -----Original Message-----
>> From: oracle-l-bounce_at_freelists.org [mailto:oracle-l-bounce_at_freelists.org]
>> On Behalf Of Ed Lewis
>> Sent: Friday, June 14, 2024 2:57 PM
>> To: <oracle-l_at_freelists.org>
>> Subject: ASM and VMWARE
>>
>> Hi,
>>
>> I recently took on a project with a new client.
>>
>> I’m tasked with building several databases in a RAC environment (19.23)
>>
>>
>>
>> ASM is being used with VMWARE on an EMC Unity array. I recommended
>> creating a few disk groups with a minimum of 4 LUNs (same size) for each
>> group.
>>
>>
>>
>> The unix admin is against that, saying to just use one large disk for
>> each group.
>>
>> He says it’s a disadvantage when using a virtual disk infrastructure like
>> we have with our EMC Unity disk farm.
>>
>> He states it is actually a disadvantage to carve up such small physical
>> disks at the SAN storage array processor level, and that it is not even
>> possible, as only whole disks can be assigned to a particular use at
>> that level.
>>
>>
>>
>> Although I have not worked much with VMWARE, I have never heard of these
>> restrictions when using ASM, so I have my doubts.
>>
>> Any thoughts or experiences on this would be greatly appreciated.
>> --
>> http://www.freelists.org/webpage/oracle-l
>>
>>
>>
>>
>>

Received on Sat Jun 15 2024 - 05:08:06 CEST