Re: Normalization by Composing, not just Decomposing
Date: Fri, 09 Apr 2004 00:02:20 GMT
Message-ID: <gKldc.65103$2I5.4217884_at_phobos.telenet-ops.be>
Dawn M. Wolthuis wrote:
>
> Do you HAVE a description of any type of normalization or other data
> modeling theories related to PICK?
That's what the Arenas & Libkin paper was. I know the math is tough, but
that's to a large extent because normalization in such data models is
inherently more complex than in the relational model. Explaining it in
detail would take more time than I have. If you want things to be
simpler, then I suggest you stick to the NFNF relational model. See, for
example:
http://citeseer.ist.psu.edu/ozsoyoglu87new.html
It is not a very good paper, but it contains more references if you want
them. If you can find them, look at the Fischer and Van Gucht paper, and
the Roth, Korth and Silberschatz paper.
And if that is too difficult for you, just stick to the usual dependencies (functional, multi-valued, join), which basically means you are dealing with the good old flat relational model. Or, for starters, perhaps even just the functional dependencies.
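To make the starting point concrete: a functional dependency X -> Y just says that rows agreeing on X must also agree on Y. A small illustrative sketch (the function name and the dict-per-row representation are my own choices, not anything from the papers above):

```python
def satisfies_fd(rows, lhs, rhs):
    """Return True iff the FD lhs -> rhs holds in `rows`.

    `rows` is a list of dicts (one per tuple); `lhs` and `rhs` are
    lists of attribute names. The FD holds when no two rows agree on
    all of `lhs` but disagree on some attribute of `rhs`.
    """
    seen = {}  # maps lhs-values already encountered to their rhs-values
    for row in rows:
        key = tuple(row[a] for a in lhs)
        val = tuple(row[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False  # same lhs-values, different rhs-values
    return True


emps = [
    {"emp": 1, "dept": "A", "mgr": "x"},
    {"emp": 2, "dept": "A", "mgr": "x"},
    {"emp": 3, "dept": "B", "mgr": "y"},
]
print(satisfies_fd(emps, ["dept"], ["mgr"]))  # dept -> mgr holds here
```

Normalization theory then asks which such dependencies hold by design (not just by accident in one instance) and decomposes the schema accordingly.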
I had the impression from your attempted formalization of your denormalization rule that this is what you were doing anyway. I just hope you realize that this is just a very tiny tip of a really huge iceberg... I'm sure you can come up with a nice pun on the word "pick" here somewhere... ;-)
> But there is nothing that tells you to put them back together, right? You
> can obey all the rules of normalization and have a set of N resulting
> relations when someone else also has a normalized set of the same attributes
> but with N-500 relations, right?
Only if you normalize in a very naive way, i.e., by splitting off offending FDs one by one. The usual algorithm that gets you to 3NF in one step (the one using the minimal cover) splits as little as possible. See, for example, sheet 46 on:
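The synthesis idea can be sketched roughly as follows. This is my own illustrative code, and it assumes the input FDs already form a minimal cover; FDs are (lhs, rhs) pairs of frozensets:

```python
def closure(attrs, fds):
    """Attributes functionally determined by `attrs` under `fds`."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return frozenset(result)


def find_key(attributes, fds):
    """Greedily shrink the full attribute set to one candidate key."""
    key = set(attributes)
    for a in sorted(key):
        if closure(key - {a}, fds) == frozenset(attributes):
            key.discard(a)  # attribute was redundant for the key
    return frozenset(key)


def synthesize_3nf(attributes, fds):
    """Bernstein-style 3NF synthesis, assuming `fds` is a minimal cover:
    one relation per distinct left-hand side, plus a key relation only
    if no generated schema already contains a key."""
    schemas = []
    for lhs in {l for l, _ in fds}:
        # group ALL FDs with this lhs into one relation: that is exactly
        # where the "split as little as possible" behaviour comes from
        rhs = frozenset().union(*(r for l, r in fds if l == lhs))
        schemas.append(lhs | rhs)
    if not any(closure(s, fds) == frozenset(attributes) for s in schemas):
        schemas.append(find_key(attributes, fds))
    return schemas
```

For instance, with A -> B and A -> C over ABC the grouping step produces the single relation ABC rather than splitting into AB and AC; with A -> B and B -> C it produces AB and BC, and since AB already contains the key A, no extra key relation is added.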
- Jan Hidders