Re: Schema Design.
Date: Thu, 9 Dec 2010 09:18:33 -0800 (PST)
Message-ID: <00e2b9bf-0d40-4370-ba66-6b0090c980a3_at_j32g2000prh.googlegroups.com>
On Dec 9, 2:28 am, "Preston" <dontwant..._at_nowhere.invalid> wrote:
> We've got a few applications that currently all use the same schema
> (let's call it "FRED"). There's a main application that all our clients
> have, & a few add-on optional apps. We're about to start building a
> couple of new apps (both in APEX), which again will only be available
> to clients with the main app, but which will have some new tables (&
> plenty of procedures) that are only used by them.
>
> Some of the new objects will be used by both the new apps, & some by
> just one of them. Additionally we're also planning to re-write the
> existing main app (again in APEX), at which point it too will use some
> of the new tables/procedures, but mainly it will stick with what's in
> the current "FRED" schema. With me so far..?
>
> So what I'm trying to figure out is the best way to map applications to
> schemas, bearing in mind this is a database & applications that we
> install & maintain at various client sites. I don't want to stick with
> using a single schema for various reasons.
>
> A separate schema for each app is one option, but who gets the
> tables/procs used by multiple apps? We could use a "SHARED" schema for
> those, but I suspect that would get messy as people develop
> app-specific objects in the future then realise they could be used by
> other apps too & move them to "SHARED". Either option could be a
> nightmare when it comes to remembering to put schema. in front of the
> object names, not to mention the potential re-work needed if objects
> are moved to different schemas.
>
> Thoughts anyone?
>
> --
> Preston.
This winds up being an intractable problem. I work on an enterprise app that was originally a couple of apps/schemata; various modules within those were later spun off as separate options. Over time, one schema became dominant, and an argument could be made that everything should have been lumped in there to begin with. As Fred pointed out, people think of these schemata as separate databases (which is what they're actually called in non-Oracle DBMSs and in tools like Excel), so strange things happen: different definitions of the same field or table, or sometimes the same definition but with one of the tables unused, and so forth. Module boundaries wind up being hazy at times, with things being executed in modules you haven't bought, or bizarro naming conventions as things go from one module to many or are converted to general library routines or engines.
I created another "database" when I wrote an extension to an analysis module for DSS purposes. The idea was that, since people would be doing all this weird stuff, they could put it on their own PC, accessed remotely from the server, and not load down the OLTP server with the processing; by layering this way, only the extract program would need to change on upgrades. What wound up happening is that those programs got bureaucratized and run en masse by an admin on the server, and the few people who do more extensive analysis bought some SS-based program that (horribly!) accesses the original schema's live transaction tables and sucks down everything, with the analytics then run on PCs. You can't win. Fortunately, relatively cheap servers are very powerful these days.
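On the schema-prefix headache Preston raised: if you do split objects across schemas, private synonyms plus object grants keep the application code free of hard-coded owner names. A minimal Oracle sketch, assuming hypothetical schemas SHARED and APP1 and hypothetical objects CLIENTS and PKG_COMMON (not from Preston's actual system):

```sql
-- As SHARED (or a DBA): let the app schema use the shared objects.
GRANT SELECT, INSERT, UPDATE, DELETE ON shared.clients TO app1;
GRANT EXECUTE ON shared.pkg_common TO app1;

-- As APP1: private synonyms so app code can reference the objects
-- unqualified, with no "shared." prefix scattered through the code.
CREATE SYNONYM clients    FOR shared.clients;
CREATE SYNONYM pkg_common FOR shared.pkg_common;

-- If an object later moves to a different schema, you re-point the
-- synonym instead of re-working every reference in the app:
CREATE OR REPLACE SYNONYM clients FOR newowner.clients;
```

Another option along the same lines is ALTER SESSION SET CURRENT_SCHEMA at connect time, which changes the default name resolution for the session without changing privileges. Neither makes the "who owns the shared stuff" question go away, but both make moving objects between schemas much less painful.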
jg
--
_at_home.com is bogus.
http://www.signonsandiego.com/news/2010/dec/08/opting-out-theres-app-course/