Experiences/thoughts about hardware recommendations

From: Karl Arao <karlarao_at_gmail.com>
Date: Sat, 17 Oct 2009 20:42:10 +0800
Message-ID: <12ee65600910170542k71556953xbbad66ff9eef583f_at_mail.gmail.com>



This is for the DBAs, architects, technical pre-sales, or anyone involved in recommending servers.

Often I encounter situations where I'm expected to help the pre-sales team come up with a hardware recommendation to be included in the technical proposal for a client (thorough scoping having already been done).
My ex-boss came up with a capacity planning template for storage, CPU, and RAM that the sales/pre-sales team can quickly use for hardware recommendations and technical proposals.

The storage sizing part of the template was okay, and we've been using it for quite a while.

However, I don't completely agree with the CPU and RAM capacity template. I think there's a more scientific way of doing it, or I should say, several ways of doing it.
In my opinion this could be achieved in 3 ways:

1) ROTs (rules of thumb)

ROTs are not applicable to every environment or every project you'll have; they may not consider the system's workload, such as the ratio of batch jobs to CPU utilization and the ratio of OLTP to CPU utilization. Most likely there will also be vendor-specific ROTs for specific applications. So if it's coming from Oracle, and I have an E-Business Suite application with specific ROTs that are current, then I may have to follow them. By "current" I mean that ROTs from the 1990s are not applicable today, given that the trend now is to go multicore.

2) Benchmarks

This is where you actually run your application on specific hardware with X number of CPUs. We see more of this at TPC.org. Oracle, together with hardware vendors, publishes benchmark statistics from time to time. We can infer from the results, but they still may not be applicable to other applications; it will still depend on the workload (OLTP/batch), the number of databases, and the number of users.
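To make the kind of inference I mean concrete, here is a minimal sketch of scaling a published benchmark number down to a sizing estimate. Every number in it is hypothetical (not taken from any real TPC result), and the derating factors are assumptions you'd have to justify per engagement:

```python
# Hypothetical sizing arithmetic from a published benchmark figure.
# All numbers are made up for illustration; real published results differ.

published_tpm = 250_000   # assumed benchmark throughput on a 16-CPU box
benchmark_cpus = 16
derate = 0.5              # assume our app is heavier per txn than the benchmark kit
target_util = 0.65        # don't plan to run CPUs above 65% busy

usable_tpm_per_cpu = published_tpm / benchmark_cpus * derate * target_util
required_tpm = 80_000     # peak workload scoped for the client (assumed)

cpus_needed = required_tpm / usable_tpm_per_cpu
print(f"~{cpus_needed:.1f} CPUs")  # round up to the next real core count
```

The point is not the exact answer but that each factor (derate, target utilization) is explicit and debatable, instead of being buried in a guess.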

3) Models

Models are very useful if you can't afford to run benchmarks. But you could also model data coming from benchmarks, which is even better, because the math lets you see into the future. I've read in a book that Oracle performance engineers use ratio modeling for quick estimates of CPU requirements. Linear regression can be useful to know when you will run out of gas, I mean, when a given resource will hit its limit. And don't forget the concept of effective CPUs: you may have, say, 12 CPUs but only effectively use 6 because of factors such as coherence, contention, and concurrency. So even if you add 20 more CPUs, don't expect your application's performance to scale linearly with your CPU power. This may all sound theoretical, but some people are already working on these models.
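Both ideas in this paragraph can be sketched in a few lines. The sample utilization data and the 5% contention fraction below are hypothetical assumptions for illustration; the second part uses an Amdahl-style contention model as one simple way to express "effective CPUs":

```python
# 1) Linear regression: fit monthly avg CPU utilization (hypothetical data)
#    and extrapolate to the month it would cross 100% busy.
months = [1, 2, 3, 4, 5, 6]          # assumed sample points
cpu_util = [42, 46, 51, 55, 60, 64]  # assumed avg CPU busy %

n = len(months)
mean_x = sum(months) / n
mean_y = sum(cpu_util) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(months, cpu_util)) \
        / sum((x - mean_x) ** 2 for x in months)
intercept = mean_y - slope * mean_x
month_at_100 = (100 - intercept) / slope
print(f"growing ~{slope:.1f}%/month; out of gas around month {month_at_100:.0f}")

# 2) "Effective CPUs": Amdahl-style speedup with a contention fraction.
#    Even a small serialized fraction means 12 CPUs do not behave like 12.
def speedup(cpus, contention=0.05):
    return cpus / (1 + contention * (cpus - 1))

for p in (6, 12, 32):
    print(f"{p} CPUs -> effective ~{speedup(p):.1f}")
```

With the assumed 5% contention, 12 CPUs deliver well under 12x, which is exactly the non-linear scaling the paragraph warns about.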

So what am I trying to say here?
Well, for my part... I may have to research and learn more in this area, but...

  • I don't like the feeling of being on a project where I ask the project manager how they came up with the hardware specs and the only answer is "the winning bidder came up with it..", only to find out soon after setting everything up that the hardware is either overpowered or underpowered.
  • I don't like the feeling of guessing, or of just handing something over for the sake of producing a hardware recommendation and technical proposal.

So I have to ask the good people here, people in user groups, and people with similar roles: how do they do it?... how do you do it?

I hope we can build on each other's ideas in this area of performance :) (And I hope I've made you curious about it..)

Share your experiences/thoughts about it.. :)

Received on Sat Oct 17 2009 - 07:42:10 CDT
