Re: Calculations in maximizing rows per block
Date: Tue, 15 Aug 2017 08:52:07 -0500
Message-ID: <bea02a3589f6f77865ccd9913dd76973_at_society.servebeer.com>
On 2017/08/14 22:09, Mark W. Farnham wrote:
Intriguing idea! I would love to try that, but I don't think JD Edwards would like it, as I've seen it occasionally UPDATE by ROWID. For the record, this is the table in question (ignore the numeric datatypes, as the SQL generated is likely not targeted for Oracle): http://www.jdetables.com/?schema=910&table=F42199
The first 254 columns of our implemented table have no defaults, but even then, I have not seen any NULLs in this table -- ever. That's not to say that JDE couldn't INSERT with NULLs, then immediately populate with an UPDATE, but I doubt it. I need to mine redos to verify...
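If I do get around to it, this is roughly what I have in mind -- just a sketch, with the archived log path left as a placeholder and the dictionary taken from the online catalog:

-- Register an archived log covering a window of JDE activity (placeholder path).
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/arch/redo_0001.arc',
                          options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- An INSERT into F42199 immediately followed by an UPDATE would show up
-- here under the same transaction (XIDUSN/XIDSLT/XIDSQN).
SELECT xidusn, xidslt, xidsqn, scn, operation, sql_redo
  FROM v$logmnr_contents
 WHERE table_name = 'F42199'
   AND operation IN ('INSERT', 'UPDATE')
 ORDER BY xidusn, xidslt, xidsqn, scn;

EXEC DBMS_LOGMNR.END_LOGMNR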
> Putting the least frequently used columns late physically *MIGHT* pay off if your queries routinely only reference the columns used rather than "*".
>
> Even without clustering, that might mean you don't need the second row piece for a query.
>
> I mention this because my experience in the field shows that tables with over 255 columns *MOSTLY* have a lot of unpopulated columns, a lot of rarely or never used columns, and sometimes columns that are effectively an audit trail and could be maintained as a second logical object. IF you can get rid of enough of them to get under 255, that is plan A.
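If the columns really were unpopulated, a quick stats-based pass is probably how I'd hunt for candidates -- just a sketch, assuming reasonably fresh optimizer stats, with PRODDTA as a placeholder schema:

-- Columns in F42199 that appear never populated, or that sit past position 255.
-- Verify with real counts before acting on it.
SELECT c.column_name,
       c.column_id,
       c.num_nulls,
       t.num_rows
  FROM dba_tab_columns c
  JOIN dba_tables t
    ON t.owner = c.owner
   AND t.table_name = c.table_name
 WHERE c.owner = 'PRODDTA'          -- placeholder schema
   AND c.table_name = 'F42199'
   AND (c.num_nulls = t.num_rows OR c.column_id > 255)
 ORDER BY c.column_id;

But it's moot here: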
JDE maintains its own data dictionary that it uses to generate DDL. It gets most upset when table definitions (and to a much lesser extent, indexes) in the DB differ from its repository.
Perfect! That's what I was hoping for. But I still need to calculate that free space to help weigh the cost of chaining/migrating row pieces across blocks against lower block density.
So there's no ASSM info in each block?
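The closest I've found so far is the segment-level ASSM bitmap summary from DBMS_SPACE.SPACE_USAGE -- a sketch below, with the owner as a placeholder -- but that only gives bucketed block counts per segment, not free bytes per block:

SET SERVEROUTPUT ON
DECLARE
  l_unf   NUMBER; l_unf_b  NUMBER;
  l_fs1   NUMBER; l_fs1_b  NUMBER;
  l_fs2   NUMBER; l_fs2_b  NUMBER;
  l_fs3   NUMBER; l_fs3_b  NUMBER;
  l_fs4   NUMBER; l_fs4_b  NUMBER;
  l_full  NUMBER; l_full_b NUMBER;
BEGIN
  -- ASSM "freeness" buckets: FS1 = 0-25% free ... FS4 = 75-100% free.
  DBMS_SPACE.SPACE_USAGE(
    segment_owner      => 'PRODDTA',   -- placeholder schema
    segment_name       => 'F42199',
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf,  unformatted_bytes => l_unf_b,
    fs1_blocks  => l_fs1,  fs1_bytes  => l_fs1_b,
    fs2_blocks  => l_fs2,  fs2_bytes  => l_fs2_b,
    fs3_blocks  => l_fs3,  fs3_bytes  => l_fs3_b,
    fs4_blocks  => l_fs4,  fs4_bytes  => l_fs4_b,
    full_blocks => l_full, full_bytes => l_full_b);
  DBMS_OUTPUT.PUT_LINE('FS1 (0-25% free):   ' || l_fs1);
  DBMS_OUTPUT.PUT_LINE('FS2 (25-50% free):  ' || l_fs2);
  DBMS_OUTPUT.PUT_LINE('FS3 (50-75% free):  ' || l_fs3);
  DBMS_OUTPUT.PUT_LINE('FS4 (75-100% free): ' || l_fs4);
  DBMS_OUTPUT.PUT_LINE('Full blocks:        ' || l_full);
END;
/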
> All the stuff mentioned earlier in the thread is probably gold.
As is your post. Thanks!
Rich
--
http://www.freelists.org/webpage/oracle-l