Re: tough choices
Daniel Morgan wrote:
> Mark A wrote:
>
>>> Please correct me if you think I am incorrect. But losing a single
>>> node with RAC can not deprive users of access to data. The system
>>> continues to run with no effect other than the loss of a few CPUs
>>> and their associated RAM.
>>>
>>> With DB2 I could lose a node and either lose access to some of the
>>> data or, worst case, lose the entire database application.

Daniel, you are still measuring with two metrics :-)
When an Oracle RAC node goes down it has information that is needed by
the other nodes. All the remaining nodes are affected during the
timeframe in which RAC regains its balance. I take your word that this
is in the ballpark of seconds.
Now in a DB2 + DPF scenario, if a DB partition goes down, all clients
connected to that partition get kicked.
>> Do you mean lose the database permanently or just until a failover
>> can be accomplished or the hardware repaired? I don't know of a
>> situation where data would be lost permanently unless there was a
>> multiple disk failure affecting both the data and logs.
> I meant only until it is brought back on-line. DB2 is far too robust
> to become ashes ... just toast. ;-)
Right, so now the question is the race against time to get the down
partition up again. On the same hardware, different hardware, doesn't
matter.
>>>>> Correct me if I am wrong ... but it seems DPF is an extra cost
>>>>> add-in only available on ESE systems and that some functionality
>>>>> only works on AIX. One question ... is it supported by the
>>>>> optimizer?
Yes, it is. For a query coming in with a store name, the work is:
1. Send the query to the DB partition holding that store id data
2. Do a local join between the inventory and product table, doing the
   local aggregation
3. Pass the result back to the coordinator
At most two nodes were involved.
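
To make that concrete, here is a rough sketch of what such a schema
could look like. The column names and the exact DDL are assumptions of
mine, not the actual example; the syntax shown is the V8-era
PARTITIONING KEY clause (later releases spell it DISTRIBUTE BY HASH).

   -- Both tables are hashed on store_id across the DB partitions, so
   -- all rows for a given store live on the same partition and the
   -- join plus the aggregation can be done locally there.
   CREATE TABLE inventory (
       store_id    INTEGER NOT NULL,
       product_id  INTEGER NOT NULL,
       quantity    INTEGER
   ) PARTITIONING KEY (store_id) USING HASHING;

   CREATE TABLE product (
       store_id    INTEGER NOT NULL,
       product_id  INTEGER NOT NULL,
       list_price  DECIMAL(10,2)
   ) PARTITIONING KEY (store_id) USING HASHING;

Because both tables share the partitioning key, the join is collocated:
no rows have to travel between partitions to answer it.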
If the software associating clients with DB partitions has any brains,
it uses the same mapping as DB2, and the coordinator is the right
partition to begin with.
Let's assume the query comes in with the store id rather than the store
name (more realistic actually). In this case a feature called "local
bypass" kicks in. The coordinator would simply pass the query on to the
right node.
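
A hypothetical query against the tables sketched above (again, the
names and the store id value are placeholders of mine):

   -- The equality predicate is on the partitioning key, so the whole
   -- statement can be directed to the single partition owning store
   -- 42; that partition joins, aggregates, and returns one row.
   SELECT SUM(i.quantity)
     FROM inventory i, product p
    WHERE i.store_id   = 42
      AND p.store_id   = i.store_id
      AND p.product_id = i.product_id;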
Now does this require a bit of DB schema design work? Absolutely.
But in a BI environment, which is where DB2 + DPF plays, you have that
knowledge. In return for that effort the user gets near-linear
scalability once the homework has been done.
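
To show what the homework amounts to, here is the same (assumed)
product table partitioned on a different key; the example is only
illustrative.

   -- Partitioned on product_id instead of store_id: a join to
   -- inventory on store_id/product_id is no longer collocated, so
   -- rows must be repartitioned or broadcast between partitions
   -- before the join, which costs network traffic on every query.
   CREATE TABLE product (
       store_id    INTEGER NOT NULL,
       product_id  INTEGER NOT NULL,
       list_price  DECIMAL(10,2)
   ) PARTITIONING KEY (product_id) USING HASHING;

Picking a common partitioning key for the tables that get joined
together is exactly that kind of homework.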
DB2 + DPF is not sold as an HA solution; it is sold for scale-out.
>>>> To the best of my knowledge DPF is offered on all supported
>>>> platforms of DB2 for LUW.
>>>
>>> Not according to:
>>> http://www.developer.ibm.com/tech/faq/individual?oid=2:82779
If you just want a simple HA solution, then with DB2 Stinger you will
use the HADR feature.
For currently available releases, HA is handled with general HA
solutions. I'm no expert in this ... too far away from my area.
For HA you need to compare: DB2 + HA solutions, or soon DB2 + HADR.
To learn about it, best ask in the Informix newsgroup about HDR;
HADR is stage one of porting that feature.
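
For what it is worth, here is a rough sketch of how HADR setup looks in
the releases that later shipped it (DB2 8.2 and up). The database,
host, and port names are placeholders, and the standby first has to be
initialized from a backup of the primary.

   -- db2 command line, on the primary (the standby uses the mirrored
   -- local/remote settings):
   UPDATE DB CFG FOR sales USING HADR_LOCAL_HOST hostA HADR_LOCAL_SVC 55001
   UPDATE DB CFG FOR sales USING HADR_REMOTE_HOST hostB HADR_REMOTE_SVC 55002
   UPDATE DB CFG FOR sales USING HADR_REMOTE_INST db2inst1 HADR_SYNCMODE NEARSYNC
   -- Start the standby first, then the primary:
   START HADR ON DATABASE sales AS STANDBY
   START HADR ON DATABASE sales AS PRIMARY
   -- If the primary is lost, promote the standby:
   TAKEOVER HADR ON DATABASE sales BY FORCE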
Just to wrap up:
The points being made are:
1. DB2 + DPF is for near-unlimited scale-out
   (DB2 supports 999 DB partitions, and there are >100-partition
   installations out there)
2. DB2 + DPF _supports_ HA solutions if needed
3. DB2 + DPF is not an HA feature and never was meant to be one.
My personal thoughts on RAC are:
Oracle RAC is an HA feature with neat, but limited, scale-out ability.
Oracle RAC has yet to prove how far it can scale out.
I don't believe that near linear scale out can be achieved without a divide and conquer strategy of sorts. That strategy requires schema/app changes.
Cheers
Serge
--
Serge Rielau
DB2 SQL Compiler Development
IBM Toronto Lab