Unable to extend table (database server, 10.2.0.4, linux 64 rac asm)
Unable to extend table [message #421901]
Fri, 11 September 2009 09:47
maxgro
Messages: 4  Registered: September 2009
Junior Member
Sometimes (once or twice a month), during the daily database load (it's a data warehouse), I get errors like these:
*** Error -3233: ORA-03233: unable to extend table xxxx subpartition SYS_SUBP2094 by 2048 in tablespace yyy
*** Error -1653: ORA-01653: unable to extend table xxx by 256 in tablespace yyy
The error happens on different tables (partitioned and not, compressed and not) and on tables of different sizes, from 100 MB to 50 GB, while the database is doing an INSERT or MERGE on them; it also happens on tablespaces that contain just one (sub)partitioned table.
I don't think it's a space problem: for example, during the load of one table, the tablespace was 3 GB before the execution and grew to 32 GB by the time I got the error. The load creates a lot of extents and blocks (empty blocks, perhaps due to a rollback).
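To double-check that claim, a query along these lines shows the free space left in the tablespace when the error fires (a sketch; 'YYY' is a placeholder for the real tablespace name):

```sql
-- Free space remaining in the tablespace at the moment of the ORA-01653;
-- 'YYY' is a placeholder for the real tablespace name.
SELECT tablespace_name,
       SUM(bytes) / 1024 / 1024 AS free_mb,
       MAX(bytes) / 1024 / 1024 AS largest_free_extent_mb
FROM   dba_free_space
WHERE  tablespace_name = 'YYY'
GROUP  BY tablespace_name;
```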
After moving the table (ALTER TABLE ... MOVE to another tablespace, shrinking the original tablespace, and moving the table back) and redoing the same load (same task, same statements, same number of records), I didn't get any errors and the tablespace stayed at about 3 GB (a very small increase).
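The move/shrink workaround, as a sketch (the scratch tablespace, datafile, and index names are placeholders):

```sql
-- Move the table out of the bloated tablespace; the move rebuilds the
-- segment, releasing the empty blocks. It also marks the table's indexes
-- UNUSABLE, so they must be rebuilt afterwards.
ALTER TABLE xxxx MOVE TABLESPACE scratch_ts;    -- scratch_ts is a placeholder

-- Shrink the original tablespace's datafile back to its usual size
-- (datafile name taken from DBA_DATA_FILES)
ALTER DATABASE DATAFILE '<datafile_name>' RESIZE 4096M;

-- Move the table back and rebuild the invalidated indexes
ALTER TABLE xxxx MOVE TABLESPACE yyy;
ALTER INDEX <index_name> REBUILD;
```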
Some more details: all tablespaces are extent management local autoallocate and segment space management auto; RAC with ASM on Linux 64-bit, 10.2.0.4.
Any help would be appreciated.
Thanks
Re: Unable to extend table [message #421914 is a reply to message #421905]
Fri, 11 September 2009 11:55
maxgro
Below you can find:
1) the DDL of the table (partitioned by range/hash, compressed)
2) the tablespace
3) an extract of the daily load procedure for one of the tables where I get the "unable to extend ..." error message
This is a summary of the load:
- drop index
- truncate ST02_VENDUTO_TMP
- load from as400 with dblink into ST02_VENDUTO_TMP
- update some columns in ST02_VENDUTO_TMP
- delete st02_venduto (same id_lotto_estrazione of as400 loaded in ST02_VENDUTO_TMP)
- insert into st02_venduto from st02_venduto_tmp (join with a lookup table)
- delete fact table f001_venduto (only the dates i'll insert in next step)
- insert f001_venduto from st02_venduto (only some dates, the ones in st02_venduto_tmp) group by...
(here is where the "unable to extend ..." error occurs, on a subpartition of f001_venduto)
- create index
4) When I get the error, the two extensible datafiles of the tablespace grow from 11 to 16 GB and from 16 to 32 GB (their maximum), and one subpartition fills all this space. This is a bit strange: the normal size of a subpartition is under 550 MB. After some cleaning (inserting the subpartition records into another table, shrinking the tablespace, copying the records back into the subpartition), I execute the same procedure and it finishes without error, with only a small space increase in the tablespace (a few hundred MB, not 20 GB!).
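To find the bloated subpartition I use a query along these lines (a sketch; for subpartition segments, DBA_SEGMENTS reports the subpartition name in the PARTITION_NAME column):

```sql
-- Subpartition segment sizes for f001_venduto, biggest first;
-- the normal size here is under 550 MB, so the outlier stands out.
SELECT partition_name        AS subpartition_name,
       bytes / 1024 / 1024   AS mb,
       extents
FROM   dba_segments
WHERE  segment_name = 'F001_VENDUTO'
AND    segment_type = 'TABLE SUBPARTITION'
ORDER  BY bytes DESC;
```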
Thanks
DDL OF TABLE F001_VENDUTO
-------------------------
size 25gb
records 175m
maximum subpartition size 550mb
CREATE TABLE F001_VENDUTO
(
COD_AZIENDA VARCHAR2(2 BYTE) NULL,
COD_SCOP VARCHAR2(6 BYTE) NULL,
COD_PUNTO_VENDITA NUMBER(6) NULL,
........... other 60 or 70 fields...
)
TABLESPACE DWH_VENDUTO_PART_1
NOLOGGING
PARTITION BY RANGE (DAT_COMPETENZA)
SUBPARTITION BY HASH (COD_PUNTO_VENDITA)
(PARTITION P2006_01 VALUES LESS THAN (TO_DATE(' 2006-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS',
'NLS_CALENDAR=GREGORIAN'))
NOLOGGING
COMPRESS
TABLESPACE DWH_VENDUTO_PART_1
SUBPARTITIONS 8 STORE IN
(DWH_VENDUTO_PART_1, DWH_VENDUTO_PART_1, DWH_VENDUTO_PART_1, DWH_VENDUTO_PART_1,
DWH_VENDUTO_PART_1, DWH_VENDUTO_PART_1, DWH_VENDUTO_PART_1, DWH_VENDUTO_PART_1),
...
.... other partition and sub..
...
PARTITION P2011_12 VALUES LESS THAN (TO_DATE(' 2012-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
NOLOGGING
COMPRESS
TABLESPACE DWH_VENDUTO_PART_1
SUBPARTITIONS 8 STORE IN (DWH_VENDUTO_PART_1,...),
PARTITION P_MAX VALUES LESS THAN (MAXVALUE)
NOLOGGING
COMPRESS
TABLESPACE DWH_VENDUTO_PART_1
SUBPARTITIONS 8 STORE IN (DWH_VENDUTO_PART_1,...),
)
NOCOMPRESS
PARALLEL ( DEGREE 8 INSTANCES 1 )
MONITORING;
TABLESPACE
-----------
f001_venduto is the only table in tablespace
'+DG2/dwcci/datafile/dwh_venduto_part_1.304.660331695' SIZE 4096M AUTOEXTEND OFF,
'+DG2/dwcci/datafile/dwh_venduto_part_1.305.660331753' SIZE 4096M AUTOEXTEND OFF,
'+DG2/dwcci/datafile/dwh_venduto_part_1.306.660331811' SIZE 4096M AUTOEXTEND OFF,
'+DG2/dwcci/datafile/dwh_venduto_part_1.386.681151247' SIZE 11264M AUTOEXTEND ON NEXT 100M MAXSIZE 16384M,
'+DG2/dwcci/datafile/dwh_venduto_part_1.383.681373697' SIZE 16384M AUTOEXTEND ON NEXT 1024M MAXSIZE 32768M
LOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 32K
SEGMENT SPACE MANAGEMENT AUTO
FLASHBACK ON;
EXTRACT OF THE LOAD PROCEDURE
-----------------------------
... drop 4 indexes ...
EXECUTE IMMEDIATE 'TRUNCATE TABLE ST02_VENDUTO_TMP';
INSERT /*+ APPEND */INTO st02_venduto_tmp
SELECT bgensom AS id_lotto_estrazione,
......other 60 or 70 fields...
bgenfil AS id_riga_rzarca
FROM mersy_dbst.soibac1l@as4prod -- dblink to as400
WHERE bgensom = rec_port.id_key;
UPDATE st02_venduto_tmp
SET id_articolo = ...,
id_promozione = ....,
id_campagna = ....;
COMMIT;
DELETE FROM dwh_staging.st02_venduto
WHERE id_lotto_estrazione = nid_lotto_estrazione;
INSERT /*+ append */INTO dwh_staging.st02_venduto
(id_lotto_estrazione,
id_riga_rzarca,
cod_azienda,
...other 60 or 70 fields..
)
SELECT v.id_lotto_estrazione,
v.id_riga_rzarca,
v.cod_azienda,
......other 60 or 70 fields...
FROM st02_venduto_tmp v,
v040_segni_venduto s
WHERE ......;
COMMIT;
DELETE /*+ index (f001_venduto) */dwh_cci.f001_venduto
WHERE dat_competenza IN(SELECT DISTINCT dat_competenza FROM st02_venduto_tmp a)
AND cod_scop = v_new_id_scop;
INSERT /*+ parallel (f001_venduto 0) */INTO dwh_cci.f001_venduto
(cod_azienda,
cod_scop,
cod_punto_vendita,
......other 60 or 70 fields...)
SELECT /*+ parallel (st02_venduto, 0) */
cod_azienda,
cod_scop,
cod_punto_vendita,
......other 60 or 70 fields...
FROM st02_venduto
WHERE dat_competenza IN(SELECT DISTINCT dat_competenza FROM st02_venduto_tmp a)
AND cod_scop = v_new_id_scop
GROUP BY cod_azienda, cod_scop, cod_punto_vendita, ......other 60 or 70 fields...;
COMMIT;
build 4 indexes
Re: Unable to extend table [message #421919 is a reply to message #421916]
Fri, 11 September 2009 13:22
maxgro
The first step loads from the network; the last one, which is the most time-consuming, does not.
I ran EXPLAIN PLAN on the statement (see below) and it isn't executed in parallel (the hint specifies parallel 0); it already runs serially.
I was able to execute it in parallel only after some changes in the procedure (enable/force parallel DML and removing the hint; see the second explain plan).
Also, I can't understand why the same procedure usually doesn't grow the datafiles, yet when I get the error (about once a month) it grows them by 20 GB for only one subpartition, even though the number of records to load is about the same. And if I re-execute after some cleaning, there is no (or very little) increase.
I see the same problem on other tables with no parallelism, no partitions, and less than 1 GB of data: every now and then a datafile grows until it hits its maximum size.
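For the record, the change that got the insert running in parallel was along these lines (a sketch; the degree of 8 is an example):

```sql
-- Parallel DML is disabled by default at session level; without enabling
-- it, the INSERT part of an INSERT ... SELECT stays serial even if the
-- SELECT is parallelized.
ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION FORCE PARALLEL DML PARALLEL 8;   -- degree 8 is an example

-- Hints removed; the degree now comes from the session setting.
INSERT INTO dwh_cci.f001_venduto (...)
SELECT ... FROM st02_venduto ...;

-- A parallel-DML transaction must be committed before the session
-- can query the modified table again.
COMMIT;
```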
Thanks
1 explain plan for
2 INSERT /*+ parallel (f001_venduto 0) */INTO dwh_cci.f001_venduto
3 (cod_azienda,
4 cod_scop,
5 cod_punto_vendita,
6 cod_articolo,
7 cod_var_articolo,
8 dat_competenza,
9 cod_tipo_vendita,
10 cod_azione_prezzo,
11 tip_reparto_vendita_casse,
12 tip_categoria_sva,
13 num_perc_iva,
14 cod_um_vendita,
15 qta_vendita,
16 qta_vendita_pezzi,
17 qta_vendita_kg,
18 imp_valore_lsco_incliva,
19 imp_valore_lsco_iva,
20 imp_valore_nsco_incliva,
21 imp_valore_nsco_iva,
22 imp_valore_al_costo_escliva,
23 imp_valore_al_costo_iva,
24 num_clienti,
25 num_operazioni_annullamento,
26 num_clienti_annullamento,
27 imp_valore_annullamento_lsco,
28 num_operazioni_storno,
29 num_clienti_storno,
30 imp_valore_storno_lsco,
31 num_operazioni_reso,
32 num_clienti_reso,
33 imp_valore_reso_lsco,
34 flg_riga_di_storno,
35 flg_riga_in_errore,
36 cod_smea,
37 cod_smeb,
38 cod_smec,
39 cod_smed,
40 cod_smee,
41 cod_srep,
42 cod_sres,
43 cod_cdpa,
44 cod_cdpb,
45 cod_cdpc,
46 cod_ruolo_coordinatore,
47 cod_ruolo_capo_reparto,
48 cod_ruolo_altro_resp,
49 cod_pers_coordinatore,
50 cod_pers_capo_reparto,
51 cod_pers_altro_resp,
52 cod_srob,
53 cod_sroc,
54 cod_piano_mktg_anno,
55 cod_piano_mktg_progr,
56 cod_evento_commerciale,
57 cod_campagna_anno,
58 cod_campagna_progr,
59 cod_iniziativa,
60 cod_regola,
61 cod_for_prevalente,
62 cod_var_for_prevalente,
63 cod_for_consegna,
64 cod_var_for_consegna,
65 cod_causale_amm,
66 cod_classe_causale_amm,
67 id_articolo,
68 id_promozione,
69 cod_piano_mktg_anno2,
70 cod_piano_mktg_progr2,
71 cod_evento_commerciale2,
72 id_campagna
73 )
74 SELECT /*+ parallel (st02_venduto, 0) */
75 cod_azienda,
76 cod_scop,
77 cod_punto_vendita,
78 cod_articolo,
79 cod_var_articolo,
80 dat_competenza,
81 cod_tipo_vendita,
82 cod_azione_prezzo,
83 tip_reparto_vendita_casse,
84 tip_categoria_sva,
85 num_perc_iva,
86 cod_um_vendita,
87 SUM(qta_vendita),
88 SUM(qta_vendita_pezzi),
89 SUM(qta_vendita_kg),
90 SUM(imp_valore_lsco_incliva),
91 SUM(imp_valore_lsco_iva),
92 SUM(imp_valore_nsco_incliva),
93 SUM(imp_valore_nsco_iva),
94 SUM(imp_valore_al_costo_escliva),
95 SUM(imp_valore_al_costo_iva),
96 SUM(num_clienti),
97 SUM(num_operazioni_annullamento),
98 SUM(num_clienti_annullamento),
99 SUM(imp_valore_annullamento_lsco),
100 SUM(num_operazioni_storno),
101 SUM(num_clienti_storno),
102 SUM(imp_valore_storno_lsco),
103 SUM(num_operazioni_reso),
104 SUM(num_clienti_reso),
105 SUM(imp_valore_reso_lsco),
106 flg_riga_di_storno,
107 flg_riga_in_errore,
108 cod_smea,
109 cod_smeb,
110 cod_smec,
111 cod_smed,
112 cod_smee,
113 cod_srep,
114 cod_sres,
115 cod_cdpa,
116 cod_cdpb,
117 cod_cdpc,
118 cod_ruolo_coordinatore,
119 cod_ruolo_capo_reparto,
120 cod_ruolo_altro_resp,
121 cod_pers_coordinatore,
122 cod_pers_capo_reparto,
123 cod_pers_altro_resp,
124 cod_srob,
125 cod_sroc,
126 cod_piano_mktg_anno,
127 cod_piano_mktg_progr,
128 cod_evento_commerciale,
129 cod_campagna_anno,
130 cod_campagna_progr,
131 cod_iniziativa,
132 cod_regola,
133 cod_for_prevalente,
134 cod_var_for_prevalente,
135 cod_for_consegna,
136 cod_var_for_consegna,
137 cod_causale_amm,
138 cod_classe_causale_amm,
139 id_articolo,
140 id_promozione,
141 cod_piano_mktg_anno2,
142 cod_piano_mktg_progr2,
143 cod_evento_commerciale2,
144 id_campagna
145 FROM st02_venduto
146 WHERE dat_competenza IN(SELECT DISTINCT dat_competenza
147 FROM st02_venduto_tmp a) --AND cod_scop = p_id_scop
148 AND cod_scop = 'CITSUP'
149 GROUP BY cod_azienda,
150 cod_scop,
151 cod_punto_vendita,
152 cod_articolo,
153 cod_var_articolo,
154 dat_competenza,
155 cod_tipo_vendita,
156 cod_azione_prezzo,
157 tip_reparto_vendita_casse,
158 tip_categoria_sva,
159 num_perc_iva,
160 cod_um_vendita,
161 flg_riga_di_storno,
162 flg_riga_in_errore,
163 cod_smea,
164 cod_smeb,
165 cod_smec,
166 cod_smed,
167 cod_smee,
168 cod_srep,
169 cod_sres,
170 cod_cdpa,
171 cod_cdpb,
172 cod_cdpc,
173 cod_ruolo_coordinatore,
174 cod_ruolo_capo_reparto,
175 cod_ruolo_altro_resp,
176 cod_pers_coordinatore,
177 cod_pers_capo_reparto,
178 cod_pers_altro_resp,
179 cod_srob,
180 cod_sroc,
181 cod_piano_mktg_anno,
182 cod_piano_mktg_progr,
183 cod_evento_commerciale,
184 cod_campagna_anno,
185 cod_campagna_progr,
186 cod_iniziativa,
187 cod_regola,
188 cod_for_prevalente,
189 cod_var_for_prevalente,
190 cod_for_consegna,
191 cod_var_for_consegna,
192 cod_causale_amm,
193 cod_classe_causale_amm,
194 id_articolo,
195 id_promozione,
196 cod_piano_mktg_anno2,
197 cod_piano_mktg_progr2,
198 cod_evento_commerciale2,
199* id_campagna
19:59:42 dwh_staging@DWCCI> /
Explained.
Elapsed: 00:00:01.96
19:59:46 dwh_staging@DWCCI> SELECT * FROM TABLE(dbms_xplan.display);
PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 1621356690
------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 83145 | 18M| | 29280 (1)| 00:08:48 |
| 1 | HASH GROUP BY | | 83145 | 18M| 59M| 29280 (1)| 00:08:48 |
|* 2 | TABLE ACCESS BY INDEX ROWID | ST02_VENDUTO | 5197 | 1136K| | 27620 (1)| 00:08:18 |
| 3 | NESTED LOOPS | | 83145 | 18M| | 27620 (1)| 00:08:18 |
| 4 | SORT UNIQUE | | 300K| 2351K| | 1226 (1)| 00:00:23 |
| 5 | TABLE ACCESS FULL | ST02_VENDUTO_TMP | 300K| 2351K| | 1226 (1)| 00:00:23 |
| 6 | BITMAP CONVERSION TO ROWIDS| | | | | | |
|* 7 | BITMAP INDEX SINGLE VALUE | BI02_ST02_VENDUTO | | | | | |
------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("COD_SCOP"='CITSUP')
7 - access("DAT_COMPETENZA"="DAT_COMPETENZA")
20 rows selected.
Elapsed: 00:00:00.79
20:01:05 dwh_staging@DWCCI> ed
Wrote file afiedt.buf
1* SELECT * FROM TABLE(dbms_xplan.display)
20:01:30 dwh_staging@DWCCI> spool off
Explain plan after ALTER SESSION FORCE PARALLEL DML and with the hint removed:
---------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------------------
| 0 | INSERT STATEMENT | | 83145 | 18M| | 8882 (4)| 00:02:40 | | | |
| 1 | PX COORDINATOR | | | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10005 | 83145 | 18M| | 8882 (4)| 00:02:40 | Q1,05 | P->S | QC (RAND) |
| 3 | INDEX MAINTENANCE | F001_VENDUTO | | | | | | Q1,05 | PCWP | |
| 4 | PX RECEIVE | | 83145 | 18M| | 8882 (4)| 00:02:40 | Q1,05 | PCWP | |
| 5 | PX SEND RANGE | :TQ10004 | 83145 | 18M| | 8882 (4)| 00:02:40 | Q1,04 | P->P | RANGE |
| 6 | LOAD AS SELECT | F001_VENDUTO | | | | | | Q1,04 | PCWP | |
| 7 | PX RECEIVE | | 83145 | 18M| | 8882 (4)| 00:02:40 | Q1,04 | PCWP | |
| 8 | PX SEND PARTITION (KEY) | :TQ10003 | 83145 | 18M| | 8882 (4)| 00:02:40 | Q1,03 | P->P | PART (KEY) |
| 9 | HASH GROUP BY | | 83145 | 18M| 59M| 8882 (4)| 00:02:40 | Q1,03 | PCWP | |
| 10 | PX RECEIVE | | 83145 | 18M| | 8881 (4)| 00:02:40 | Q1,03 | PCWP | |
| 11 | PX SEND HASH | :TQ10002 | 83145 | 18M| | 8881 (4)| 00:02:40 | Q1,02 | P->P | HASH |
|* 12 | HASH JOIN | | 83145 | 18M| | 8881 (4)| 00:02:40 | Q1,02 | PCWP | |
| 13 | PX RECEIVE | | 300K| 2351K| | 43 (3)| 00:00:01 | Q1,02 | PCWP | |
| 14 | PX SEND BROADCAST | :TQ10001 | 300K| 2351K| | 43 (3)| 00:00:01 | Q1,01 | P->P | BROADCAST |
| 15 | SORT UNIQUE | | 300K| 2351K| | 43 (3)| 00:00:01 | Q1,01 | PCWP | |
| 16 | PX RECEIVE | | 300K| 2351K| | 43 (3)| 00:00:01 | Q1,01 | PCWP | |
| 17 | PX SEND HASH | :TQ10000 | 300K| 2351K| | 43 (3)| 00:00:01 | Q1,00 | P->P | HASH |
| 18 | PX BLOCK ITERATOR | | 300K| 2351K| | 43 (3)| 00:00:01 | Q1,00 | PCWC | |
| 19 | TABLE ACCESS FULL| ST02_VENDUTO_TMP | 300K| 2351K| | 43 (3)| 00:00:01 | Q1,00 | PCWP | |
| 20 | PX BLOCK ITERATOR | | 124M| 25G| | 8823 (4)| 00:02:39 | Q1,02 | PCWC | |
|* 21 | TABLE ACCESS FULL | ST02_VENDUTO | 124M| 25G| | 8823 (4)| 00:02:39 | Q1,02 | PCWP | |
---------------------------------------------------------------------------------------------------------------------------------------------
Re: Unable to extend table [message #421950 is a reply to message #421928]
Sat, 12 September 2009 02:45
maxgro
>> Some more details: all tablespaces are extent mgmt local autoallocate and segment space mgmt automatic,
You make this claim without providing proof.
Even if tablespace matches above, it does not necessarily mean tables or partitions have same configuration.
>I don't think it's a space problem:
Then what type of problem is it?
Do you think Oracle's error message is incorrect?
If so, submit bug report to Oracle.
----
I posted the scripts to create the tablespace and the table in a previous message. What else can I post to describe the situation?
Oracle's message is correct.
I'd like to know why the same insert I repeat every day (I load about the same number of records daily) usually grows the table by a few MB, but once or twice a month (at random) it grows the table (only one subpartition of it) by 20 or more GB, until I get "unable to extend ...". After the error, and after shrinking the tablespace back to its usual size, I re-execute the insert: I don't get "unable to extend ...", and the space increase of the table is a few MB.
These are the detailed steps:
1) Tablespace is 40 GB, with only one table, f001_venduto
2) Run the daily procedure with the insert into f001_venduto
3) Usually OK; yesterday I got "unable to extend ..." and the tablespace grew to 60 GB. At this point I have the same data in f001_venduto as before the run (error --> rollback) but different space usage (from 40 to 60 GB)
4) Detect the subpartition of f001_venduto with an unusual size (usual is <= 550 MB; yesterday the detected subpartition was > 20 GB) and its parent partition
5) Create a new empty table xxx (CTAS), same structure as f001_venduto
6) Insert into xxx from the partition detected at 4)
7) Empty the partition detected at 4) (ALTER TABLE ... TRUNCATE PARTITION ...)
8) Shrink the tablespace from 60 GB to 40 GB (... RESIZE ...)
9) Insert from xxx back into f001_venduto
10) Re-run the procedure of step 2; the tablespace is 40 GB, as it was at step 1. Table f001_venduto has the same data, the procedure inserts the same records and completes successfully, and the space increase of the table is a few MB, not 20 GB. The tablespace will probably stay at 40 GB for many days, and in those days the procedure will run without error (the usual behavior), but within a week or a month I'll get "unable to extend ..." again and find the datafiles full (60 GB).
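Steps 5) to 9) above, as a SQL sketch (the table name xxx, the partition name, and the datafile name are placeholders):

```sql
-- 5) Empty copy of the table, same structure
CREATE TABLE xxx AS
SELECT * FROM dwh_cci.f001_venduto WHERE 1 = 0;

-- 6) Save the rows of the affected partition (p2009_09 is a placeholder)
INSERT INTO xxx
SELECT * FROM dwh_cci.f001_venduto PARTITION (p2009_09);
COMMIT;

-- 7) Release the space held by the bloated subpartition
ALTER TABLE dwh_cci.f001_venduto TRUNCATE PARTITION p2009_09;

-- 8) Shrink the autoextended datafile (name from DBA_DATA_FILES)
ALTER DATABASE DATAFILE '<datafile_name>' RESIZE 16384M;

-- 9) Copy the rows back
INSERT INTO dwh_cci.f001_venduto
SELECT * FROM xxx;
COMMIT;
```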
Thanks