Re: A real world example

From: Keith H Duggar <duggar_at_alum.mit.edu>
Date: 11 Aug 2006 10:43:55 -0700
Message-ID: <1155318235.130778.245060_at_m79g2000cwm.googlegroups.com>


Brian Selzer wrote:
> In an earlier thread, Marshall asked for a real-world
> example, and while this isn't exactly what was asked for,
> it illustrates the problem.

(Brian, note that I'm just a novice. So any questions or points I raise are not meant to argue a position but rather to provoke further detail or explanation for my Level 6 Fraudulent Education.)

Later you say that it illustrates not "the" but rather "a similar" problem. Can you provide a real-world example of "the" actual problem as Marshall asked?

[snip example of a "similar" but different problem]

> My point is that here is a real-world example where the
> universes of two databases overlap, but the set of
> attributes used to identify something common to the two
> universes is different for different databases.

So isn't the actual problem that the two databases employ different models of the same data? In other words, it seems that in that example (and in "the" problem) the problem is not different /values/ but different /models/?
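
For concreteness, here is a minimal sketch of how I picture that situation (Python, with every attribute name and value invented purely for illustration):

  # Minimal sketch (invented attributes/values): the same real-world person
  # recorded in two databases that identify her by different attributes.
  db_a = {  # keys people by national id
      "123-45-6789": {"national_id": "123-45-6789", "name": "Alice"},
  }
  db_b = {  # keys people by email address
      "alice@example.com": {"email": "alice@example.com", "name": "Alice"},
  }

  # The universes overlap (both contain Alice), but neither database records
  # the attribute the other uses for identification, so the overlap cannot
  # be established from the keys alone.
  print(db_a.keys() & db_b.keys())  # set() -- no correspondence visible

If that reading is right, then the mismatch lives in the two schemas, not in any particular values.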

> the fact remains that a similar situation can occur within
> a single database. With natural keys, that is, keys whose
> value can be different in successive database states, it
> is possible for the values of one set of identifying
> attributes of something in the universe to be different in
> successive universe states, and when this new information
> is imparted to the database, it is possible for there to
> be propositions in successive database states that should
> correspond, but cannot because the only relevant set of
> identifying attributes of something is different in
> successive universe states.
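
To check that I follow, here is a minimal sketch of how I read that paragraph (Python again; the identifying attribute and all values are invented for illustration):

  # Minimal sketch (invented key/values): two successive database states for
  # a relation whose natural key is 'email'. Between the states, the key
  # value of the same real-world person changes.
  state_1 = {"alice@old.example": {"email": "alice@old.example", "dept": "sales"}}
  state_2 = {"alice@new.example": {"email": "alice@new.example", "dept": "sales"}}

  # The two propositions are about the same person and so "should correspond",
  # but the only identifying attribute the database holds has changed value,
  # so nothing in the successive states links them.
  print(state_1.keys() & state_2.keys())  # set() -- correspondence lost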

Why "should [they] correspond"? Are you envisioning some meta-data external to the data model that remains constant? Some property not modeled that corresponds? If this external information and "correspondence" is important, then why is it not internalized explicitly as part of the data model?

Just as in the previous example, it seems that in your mind you maintain a /different/ model from the one in the database. So again the problem seems not to be the changing values but rather that the two DBMSs (in this case the machine and your mind) hold /different models/. No?

Can you give a real-world example of two entities sharing the /same/ data model having such a correspondence mismatch?
