large table [message #675535]
Thu, 04 April 2019 03:34
reym21
Messages: 241 Registered: February 2010 Location: Philippines
Senior Member
Gurus,
Please help: what can we do about a table that takes a very long time to return its record count when we run:
select count(*) from user_ledger_trans;
This table has a PCT_FULL of 99.95% (ALLOCATED_MB = 12,680; USED_MB = 12,673.87).
We only identified its NUM_ROWS (141,704,779) by executing:
select * from dba_tables
where table_name='USER_LEDGER_TRANS';
Our end-users can no longer process any updates on this table.
The table size is already around 2GB.
Thank you in advance.
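(For what it's worth, the full select * from dba_tables is overkill; a narrower query pulls just the statistics that matter here. A sketch using standard DBA_TABLES columns, with the owner ABC taken from the Toad output shown later in the thread:)

[code]
-- Just the size-related statistics for the one table
SELECT num_rows, blocks, empty_blocks, avg_row_len, chain_cnt
FROM   dba_tables
WHERE  owner = 'ABC'
AND    table_name = 'USER_LEDGER_TRANS';
[/code]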
Re: large table [message #675536 is a reply to message #675535]
Thu, 04 April 2019 03:56
John Watson
Messages: 8960 Registered: January 2010 Location: Global Village
Senior Member
Quote: "cannot proceed anymore to process any update"
This is not an error message that I have seen before. Perhaps you have some other messages that might be more helpful?
Re: large table [message #675538 is a reply to message #675536]
Thu, 04 April 2019 04:08
cookiemonster
Messages: 13959 Registered: September 2008 Location: Rainy Manchester
Senior Member
2GB isn't actually that much. The largest table I've got is 14GB.
If you've got slow queries/updates you need to look at the execution plan and see what it's doing.
We can't make any useful suggestions with the information you've provided so far.
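(One way to capture that plan is EXPLAIN PLAN into PLAN_TABLE. A minimal sketch, assuming a PLAN_TABLE already exists (it is created by utlxplan.sql) and using an arbitrary statement id:)

[code]
EXPLAIN PLAN SET STATEMENT_ID = 'ult_cnt' FOR
SELECT COUNT(*) FROM user_ledger_trans;

-- Classic hierarchical display of the captured plan
SELECT LPAD(' ', 2 * (LEVEL - 1)) || operation || ' ' ||
       options || ' ' || object_name AS plan_step
FROM   plan_table
START  WITH id = 0 AND statement_id = 'ult_cnt'
CONNECT BY PRIOR id = parent_id AND statement_id = 'ult_cnt';
[/code]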
Re: large table [message #675576 is a reply to message #675548]
Sun, 07 April 2019 21:49
reym21
Messages: 241 Registered: February 2010 Location: Philippines
Senior Member
Sirs,
Thank you for your prompt replies.
I got some details using Toad (sorry, I can't format it; please refer to the attachment):
Table Owner Table Name Mb Allocated Mb Used Pct Full (before next extend) Tablespace
--------------- ----------------------- --------------- --------------- ------------------------------- ----------------
ABC USER_LEDGER_TRANS 18580 18560 99.89 USR_DATA_LRG
Parameter Value
--------------- -----------------
Size 18 Gb
Number Extents 830
OWNER ABC
TABLE_NAME USER_LEDGER_TRANS
TABLESPACE_NAME USR_DATA_LRG
CLUSTER_NAME
PCT_FREE 20
PCT_USED 60
INI_TRANS 1
MAX_TRANS 255
INITIAL_EXTENT 2 Gb
NEXT_EXTENT 20 Mb
MIN_EXTENTS 1
MAX_EXTENTS 1200
PCT_INCREASE 0
FREELISTS 1
FREELIST_GROUPS 1
BACKED_UP N
NUM_ROWS 141704779
BLOCKS 2375678
EMPTY_BLOCKS 2560
AVG_SPACE 1661
CHAIN_CNT 2
AVG_ROW_LEN 14
DEGREE 1
INSTANCE 1
Also, what our users experienced were the two errors below:
"F 02084: Batch driven execution failed. Cancel to exit." and
"I 01025: A fatal error has occurred. Contact Help Desk."
Do I need to recreate the table or the tablespace?
Thank you very much for your time.
@Michel: I came up with the 2Gb size by multiplying AVG_ROW_LEN by NUM_ROWS, which gives 1.84 Gb.
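(That figure is NUM_ROWS * AVG_ROW_LEN, so it measures raw row data only, not the space allocated to the segment. The same arithmetic in SQL, as a sketch:)

[code]
-- 141,704,779 rows * 14 bytes ~= 1.85 GB of raw row data,
-- versus the ~18 GB allocated to the segment shown above
SELECT num_rows * avg_row_len / 1024 / 1024 / 1024 AS approx_data_gb
FROM   dba_tables
WHERE  owner = 'ABC'
AND    table_name = 'USER_LEDGER_TRANS';
[/code]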
*BlackSwan added {code} tags. PLEASE do so yourself in the future!
-
Attachment: abc_toad.jpg
(Size: 24.32KB, Downloaded 2428 times)
[Updated on: Sun, 07 April 2019 22:01] by Moderator
Re: large table [message #675584 is a reply to message #675576]
Mon, 08 April 2019 01:22
John Watson
Messages: 8960 Registered: January 2010 Location: Global Village
Senior Member
Quote:"F 02084: Batch driven execution failed. Cancel to exit." and
"I 01025: A fatal error has occurred. Contact Help Desk." These do not look like Oracle error messages to me, more like something generated by your application. Better ask your developers or application vendor what they mean.
Re: large table [message #675586 is a reply to message #675584]
Mon, 08 April 2019 02:28
Roachcoach
Messages: 1576 Registered: May 2010 Location: UK
Senior Member
>>Oracle 7.3.4.5.0, Solaris 8
I'll be surprised if the application devs are still in the game and even more surprised if they can remember this.
Re: large table [message #675591 is a reply to message #675590]
Mon, 08 April 2019 04:04
cookiemonster
Messages: 13959 Registered: September 2008 Location: Rainy Manchester
Senior Member
It's generally better if you post stuff directly in [code] tags rather than attaching a file.
Not everyone can/will open files.
Nevertheless, I've had a look, and what you've got in the file doesn't make sense.
You've got an explain plan that is supposedly for
select count(*) from user_ledger_trans
but it can't possibly be, since user_ledger_trans appears multiple times in the explain plan along with a lot of other stuff.
That explain plan is for some much more complex query.
According to the tkprof output the select count(*) is taking 48 seconds.
An explain plan for the actual query would help.
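(Since tkprof output already exists, the same tracing route works for the real statement. A sketch of the usual SQL_TRACE cycle:)

[code]
ALTER SESSION SET SQL_TRACE = TRUE;
-- run the actual application query here
ALTER SESSION SET SQL_TRACE = FALSE;
[/code]

then format the resulting trace file from user_dump_dest with something like tkprof <tracefile>.trc out.txt sys=no.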
Re: large table [message #675593 is a reply to message #675589]
Mon, 08 April 2019 04:30
John Watson
Messages: 8960 Registered: January 2010 Location: Global Village
Senior Member
Having seen the DDL, the quickest way to run select count(*) from user_ledger_trans; (if that really is the query you want to run) would likely be an index fast full scan of abc.USER_LEDGER_TRANS_N2 or abc.USER_LEDGER_TRANS_N1. If the CBO isn't doing that, what happens if you hint it? If the index FFS is still not quick enough, the only other possibility I can think of is to throw parallel query at it.
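(A sketch of what those two approaches might look like; the index name comes from John's post, and whether the hints are honored depends on the release and optimizer mode:)

[code]
-- Ask for an index fast full scan explicitly
SELECT /*+ INDEX_FFS(t user_ledger_trans_n1) */ COUNT(*)
FROM   abc.user_ledger_trans t;

-- Or brute-force the full scan with parallel query (degree 4 is arbitrary)
SELECT /*+ FULL(t) PARALLEL(t, 4) */ COUNT(*)
FROM   abc.user_ledger_trans t;
[/code]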
Re: large table [message #675596 is a reply to message #675594]
Mon, 08 April 2019 04:39
John Watson
Messages: 8960 Registered: January 2010 Location: Global Village
Senior Member
Hints were there. However, you may well be right about it being set to use the RBO.
@reym21, you need to be sure that you have optimizer_goal=all_rows and that you have analyzed all your objects.
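(On a release this old, 7.3, there is no DBMS_STATS, so analyzing means ANALYZE. A sketch, with the sample size an arbitrary choice:)

[code]
ALTER SESSION SET OPTIMIZER_GOAL = ALL_ROWS;

ANALYZE TABLE abc.user_ledger_trans
ESTIMATE STATISTICS SAMPLE 10 PERCENT;
[/code]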