Tom Kyte
Unable to Perform ONLINE DDLs on tables when Supplemental Logging is enabled
Dear Tom,
In our ERP system we actively use both EBR and supplemental logging: EBR for near-zero-downtime upgrades, and supplemental logging mainly for CDC, LogMiner and GoldenGate. But we encounter errors when ALTER TABLE statements are executed against normal tables in ONLINE mode while supplemental logging is enabled.
The error we are getting is:
<i>ORA-14416: Online DDL's cannot be used with certain types of tables.</i>
Quick Test Steps:
-- enable minimal supplemental logging (from CDB)
<code>
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;</code>
-- create first table and its constraints
<code>
CREATE TABLE ORDER_TABLE
(
ID VARCHAR2(50),
DESCRIPTION VARCHAR2(100),
ORDER_DATE DATE,
CUSTOMER_ID VARCHAR2(50),
CF_ID VARCHAR2(50)
);
ALTER TABLE ORDER_TABLE
ADD CONSTRAINT ORDER_PK PRIMARY KEY (ID);
ALTER TABLE ORDER_TABLE
ADD CONSTRAINT ORDER_CFK UNIQUE (CF_ID)
USING INDEX;
</code>
-- create second table and its constraints
<code>
CREATE TABLE ORDER_CF_TABLE
(
CF_ID VARCHAR2(50),
AUTH_ID VARCHAR2(100),
AUTH_DATE DATE
);
ALTER TABLE ORDER_CF_TABLE
ADD CONSTRAINT ORDER_CF_PK PRIMARY KEY (CF_ID);
ALTER TABLE ORDER_CF_TABLE
ADD CONSTRAINT ORDER_CF_RK FOREIGN KEY (CF_ID)
REFERENCES ORDER_TABLE (CF_ID) ON DELETE CASCADE;
ALTER TABLE ORDER_CF_TABLE
ADD CONSTRAINT ORDER_CF_TABLE_CF_ID_NN
CHECK ("CF_ID" IS NOT NULL);
</code>
Now try to execute the following:
<code>ALTER TABLE ORDER_CF_TABLE DROP CONSTRAINT ORDER_CF_RK KEEP INDEX ONLINE;</code>
Error ORA-14416 is raised.
Since both ONLINE mode for table DDLs and supplemental logging are key Oracle Database features, we believe it should be possible to use them at the same time. Could you please explain this behaviour, and suggest any possible ways to achieve online DDLs on tables for upgrades while supplemental logging is enabled?
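For reference, the only workaround we could sketch so far (a sketch only, and it assumes the downstream CDC/GoldenGate capture processes can tolerate a supplemental-logging gap) is to drop minimal supplemental logging for the DDL window and re-enable it afterwards:
<code>
-- Sketch of a possible workaround; untested assumption: capture
-- processes can tolerate the supplemental-logging gap.
SELECT supplemental_log_data_min FROM v$database;

ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
ALTER TABLE ORDER_CF_TABLE DROP CONSTRAINT ORDER_CF_RK KEEP INDEX ONLINE;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
</code>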
Thanks & Kind Regards,
Navinth
Categories: DBA Blogs
Authid current user functionality
Hi Connor,
Let me describe the situation. Our client is running a warehouse management system. There are two schemas, wh1 and wh2. All the packages and procedures are created in schema wh1 with AUTHID CURRENT_USER. Today I faced an issue with the SKU master. Both schemas have the SKU master table, and the two copies should ideally be identical. When a particular procedure of a package was called from schema wh2 and looked for an SKU that was present in schema wh2 but missing from wh1, it flagged an error that the SKU was missing. When I created the SKU in schema wh1, it processed successfully. This is really puzzling. To the best of my knowledge, when the procedure is called from schema wh2 it should access the wh2 tables by default, since we are not prefixing the table names with a schema name. Am I missing something? Please share your view.
Let me try with a sample code:
tablename : sku
schemas : wh1 and wh2
The table is created in both schemas. Let's say SKU 'SAMPLE1' exists in schema wh2 but not in schema wh1.
<code>
create or replace package wh1.sync_sku
  authid current_user
is
  procedure upsert_sku(p_sku varchar2);
end;
/
create or replace package body wh1.sync_sku is
  procedure upsert_sku(p_sku varchar2) is
    v_found char(1) := 'N';
  begin
    select 'Y'
      into v_found
      from sku
     where sku = p_sku;
  exception
    when no_data_found then
      raise_application_error(-20001, 'SKU does not exist');
    when others then
      raise;
  end upsert_sku;
end sync_sku;
/
</code>
When the procedure upsert_sku is executed from schema wh2 with parameter 'SAMPLE1', it shows the error 'SKU does not exist', although the SKU exists in schema wh2.
As soon as we insert the SKU into schema wh1, the procedure executes successfully.
Schema wh2 has all the rights required to execute the procedure in schema wh1.
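As a sanity check (a hypothetical query, since the sample above is simplified), the dictionary shows how the unit actually compiled; if AUTHID comes back as DEFINER, unqualified names resolving in wh1 is exactly the expected behaviour:
<code>
-- Check how the package really compiled. Note that AUTHID must appear
-- before IS in the spec; without the clause, a package defaults to
-- definer's rights.
SELECT owner, object_name, authid
FROM   dba_procedures
WHERE  owner = 'WH1'
AND    object_name = 'SYNC_SKU';
</code>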
Categories: DBA Blogs
segregation of duties template for Oracle Database
Oracle has published following document for MySQL:
https://blogs.oracle.com/mysql/why-your-application-should-not-use-one-mysql-user-for-everything.
I have not found a similar document for Oracle Database. Has Oracle documented something similar for Oracle Database?
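The closest Oracle-side analogue of that MySQL advice is plain schema separation with least privilege; a minimal sketch using hypothetical names:
<code>
-- Hypothetical sketch: data_owner owns the tables and an API package;
-- app_login is the account the application connects as, limited to
-- CREATE SESSION plus EXECUTE on the API.
CREATE USER app_login IDENTIFIED BY "ChangeMe_1";
GRANT CREATE SESSION TO app_login;
GRANT EXECUTE ON data_owner.app_api TO app_login;
</code>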
Thanks.
Categories: DBA Blogs
Oracle errors 1408 and 6502
How can I find which field raised error 6502 or 1408?
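A minimal sketch of the usual technique (the procedure name is hypothetical): ORA-06502 does not name the offending field, but the PL/SQL error backtrace gives the failing line number, which normally identifies the assignment or column involved:
<code>
-- Hypothetical wrapper: capture the line where the error was raised.
BEGIN
  some_proc;  -- hypothetical call that raises the error
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line(dbms_utility.format_error_stack);
    dbms_output.put_line(dbms_utility.format_error_backtrace);
    RAISE;
END;
/
</code>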
Categories: DBA Blogs
include heading while downloading an interactive report into pdf
Hi,
I have created an interactive report. I want to include a heading, e.g. "Amountwise advances as on". How can I do this?
I also want to include the heading when the report is downloaded as PDF.
Please help.
Categories: DBA Blogs
Index logging
What is the difference between LOGGING and NOLOGGING when creating an index?
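An illustrative sketch (table t and its columns are hypothetical): the clause only affects the redo generated by the index build itself; normal DML against the indexed table is fully logged either way:
<code>
-- LOGGING: the build writes full redo, so the index is recoverable
-- from the moment it is created.
CREATE INDEX t_ix1 ON t (col1) LOGGING;

-- NOLOGGING: the build generates minimal redo (faster, less redo),
-- but the index is unrecoverable until the next backup.
CREATE INDEX t_ix2 ON t (col2) NOLOGGING;
</code>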
Categories: DBA Blogs
add a new column to a table
The column will be of type NUMBER(19,0) and nullable (NULL by default).
My question: will the table be blocked during the column add, given that this is Standard Edition rather than Enterprise Edition?
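A sketch with hypothetical names: adding a nullable column with no default is a near-instant, dictionary-only change in both editions, but the DDL still needs a brief exclusive lock on the table, which DDL_LOCK_TIMEOUT lets the session wait for:
<code>
-- Wait up to 30 seconds for the brief exclusive lock instead of
-- failing immediately with ORA-00054 on a busy table.
ALTER SESSION SET ddl_lock_timeout = 30;
ALTER TABLE some_table ADD (new_col NUMBER(19,0));
</code>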
Categories: DBA Blogs
M5 Cross-Endian Platform Migration - KB144840
We are planning to migrate our database from on-premises infrastructure to ODA. The source platform is Solaris SPARC and the target platform is x86. We intend to use the M5 Cross-Endian Platform Migration approach.
However, we have a question regarding whether APEX and ORDS will be migrated as part of this process. On the source system, APEX and ORDS are installed in dedicated tablespaces rather than in SYSTEM or SYSAUX.
We would like to verify whether the M5 migration is fully compatible with APEX and ORDS, and whether any issues are expected.
Both source and target databases are on 19.25; APEX is 22.1.
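A hypothetical pre-check that may help frame the question: confirm exactly which tablespaces the APEX and ORDS schemas occupy, so they can be matched against the transport set (the schema-name patterns are assumptions; adjust to the actual installation):
<code>
SELECT owner, tablespace_name, COUNT(*) AS segments
FROM   dba_segments
WHERE  owner LIKE 'APEX%' OR owner LIKE 'ORDS%'
GROUP  BY owner, tablespace_name
ORDER  BY owner, tablespace_name;
</code>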
Categories: DBA Blogs
On-premises MCP server
Is there a way to create an on-premises MCP server for Oracle Database without using OCI as a gateway? I want to deploy a real server (VM, container, etc.) rather than running a local instance of SQLcl on a user desktop, so that we can configure the agent framework code (multiple chat bots) to talk to the database.
Categories: DBA Blogs
drop table without purge
I conducted an experiment with the recyclebin parameter.
First, I set recyclebin = OFF to observe the behavior of a regular DROP TABLE.
As I understand the architecture:
There is a tablespace and a segment, for example T1. Information about this segment is stored in the data dictionary. The tablespace also uses a space management mechanism (Segment Space Management AUTO), which tracks free blocks.
Therefore, when I drop a table with recyclebin = OFF, the entry for the segment is removed from the data dictionary, and all blocks of the segment are marked as free and can be reused by other objects. I performed this experiment and indeed observed exactly this behavior.
Next, I enabled the parameter recyclebin = ON.
In this case, the information about the table is not fully removed. The table is marked as inaccessible and renamed, after which it appears in the RECYCLEBIN view. At the same time, the segment continues to exist.
Then I read the following statement in the documentation:
<i>Unless you specify the PURGE clause, the DROP TABLE statement does not result in space being released back to the tablespace for use by other objects, and the space continues to count toward the user's space quota.
</i>
However, in my experiment I observe the following behavior.
Suppose initially the tablespace had 500 MB of free space.
I created a table and filled it with data totaling 200 MB.
As a result:
free space = 300 MB
user quota used = 200 MB out of 500 MB
After that, I executed a regular DROP TABLE table_name;
And I observed that the 200 MB returned to free space (free space became 500 MB again).
At the same time:
the object still exists in the RECYCLEBIN
the segment size is approximately 200 MB
the user's quota still shows 200 MB used
I expected the free space to remain unchanged: 300 MB
because the documentation states that the space is not released for use by other objects.
Question:
Where is the flaw in my understanding of the logic?
Why does the free space in the tablespace increase even though the segment remains in the RECYCLEBIN and the user's quota is still consumed?
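For context, a sketch of the views involved (the owner and tablespace names are hypothetical): DBA_FREE_SPACE appears to count recycle-bin segments as free because Oracle can reclaim them on demand, while the segment itself and the quota usage persist until the object is purged, which would explain the observation:
<code>
-- Space still held by the dropped object (SPACE is in blocks):
SELECT owner, object_name, original_name, space
FROM   dba_recyclebin
WHERE  owner = 'SCOTT';           -- hypothetical owner

-- Free space as reported to other objects; recycle-bin segments are
-- included here because they are reclaimable on demand.
SELECT tablespace_name, SUM(bytes)/1024/1024 AS free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'USERS'  -- hypothetical tablespace
GROUP  BY tablespace_name;
</code>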
Categories: DBA Blogs
ARRAY PROCESSING
Hi Tom,
I have created a TYPE as:
Create or replace type NUM_ARRAY as table of varchar2(1000);
Then I have a procedure which accepts an array of SQL statements and executes them one at a time. This takes a lot of time to process a batch of SQL statements.
I was wondering if I can use FORALL to execute them in bulk, as these are all insert/update/delete statements.
Here is the Procedure:
<code>
CREATE OR REPLACE procedure give_me_an_array( p_array in num_array )
as
begin
  for i in 1 .. p_array.count loop
    begin
      execute immediate p_array(i);
    exception
      when others then null;
    end;
    --dbms_output.put_line( p_array(i) );
  end loop;
  commit;
end;
/
</code>
We are using Java as the front end. Your input will be highly appreciated, as I am competing against SQL Server, which is doing this extremely fast without using any arrays. I am in the process of migrating from SQL Server to Oracle, and the condition is that Oracle must run faster than SQL Server.
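A sketch of the kind of rewrite FORALL allows (target table t is hypothetical): FORALL cannot run a collection of different statement texts, so the win comes from keeping one statement and passing only the values in the collection:
<code>
CREATE OR REPLACE PROCEDURE give_me_values( p_vals in num_array )
AS
BEGIN
  -- One parse and one round trip to the SQL engine for the whole batch.
  FORALL i IN 1 .. p_vals.count
    INSERT INTO t (val) VALUES (p_vals(i));  -- hypothetical table t
  COMMIT;
END;
/
</code>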
Thanks
Nirmal
Categories: DBA Blogs
SQL profile and SQL plan baseline in Standard Edition 2 (SE2)
Hello,
If we are using Oracle SE2 there is no Tuning Pack or Diagnostic Pack available.
So I don't think it's possible to create a SQL profile with SQL Tuning Advisor. Would it be possible with the SQLTXPLAIN coe_xfr_sql_profile.sql script?
I know it's possible to create a SQL plan baseline from the cursor cache, but what if the good plan is not in the cursor cache?
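If baselines are indeed usable on your edition, a sketch of the cursor-cache load (the sql_id and plan_hash_value are hypothetical); one common trick when the good plan is not cached is to run a hinted variant of the statement first so its plan lands in the cursor cache, then capture it:
<code>
DECLARE
  n PLS_INTEGER;
BEGIN
  n := dbms_spm.load_plans_from_cursor_cache(
         sql_id          => 'abcd1234wxyz9',  -- hypothetical
         plan_hash_value => 123456789);       -- hypothetical
  dbms_output.put_line(n || ' plan(s) loaded');
END;
/
</code>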
Thanks.
Categories: DBA Blogs
Oracle AQ Scaling with PL/SQL Callbacks, DBMS_SCHEDULER Jobs
Hello,
I have an Oracle Advanced Queuing queue and would like to be able to process it from inside the database, as opposed to using an external app server. However, I am concerned about the scalability of internal solutions.
Please assume the following:
1. The queue receives an arbitrary number of messages.
2. Each message results in a PL/SQL procedure being called, which can take an arbitrary amount of time.
3. You want to limit the number of messages that can be processed at once to some value N.
---
Solution #1: Run N permanent DBMS_SCHEDULER jobs that loop and call DBMS_AQ.DEQUEUE with WAIT_FOREVER. This is good because you can easily cap how many jobs process the queue by adjusting N, the number of permanent DBMS_SCHEDULER jobs. It is bad because all of these permanent jobs reduce the available JOB_QUEUE_PROCESSES. That is fine if you only need a handful of jobs to process your queue, but as you scale the number of jobs up, eventually you will degrade the other, unrelated jobs that need to run in the system.
Does calling DBMS_LOCK.SLEEP or DBMS_AQ.DEQUEUE inside a DBMS_SCHEDULER job free up a JOB_QUEUE_PROCESSES slot while the job is sleeping? My guess is no.
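For concreteness, a sketch of the Solution #1 worker (the queue name, payload type and handler are hypothetical). As far as I know the answer above is indeed no: a job blocked in DBMS_AQ.DEQUEUE or DBMS_LOCK.SLEEP keeps its slave process, so the JOB_QUEUE_PROCESSES slot stays occupied:
<code>
DECLARE
  l_dq_opts  dbms_aq.dequeue_options_t;
  l_props    dbms_aq.message_properties_t;
  l_msgid    RAW(16);
  l_payload  my_payload_type;  -- hypothetical payload object type
BEGIN
  l_dq_opts.wait := dbms_aq.forever;  -- block until a message arrives
  LOOP
    dbms_aq.dequeue(queue_name         => 'my_queue',  -- hypothetical
                    dequeue_options    => l_dq_opts,
                    message_properties => l_props,
                    payload            => l_payload,
                    msgid              => l_msgid);
    process_message(l_payload);  -- hypothetical handler
    COMMIT;
  END LOOP;
END;
/
</code>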
---
Solution #2: Use a PL/SQL callback, and in the callback, create a one-time DBMS_SCHEDULER job per message, and use a common resource constraint, such that only N scheduler jobs can run at once. For example, if you set a cap of 128 jobs in your resource constraint, and you receive 1000 messages, the PL/SQL callback will create 1000 jobs, but only 128 jobs will be running at once, and the rest will be blocked.
The downside here is that you have to create a whole dbms_scheduler job to process a message. This will increase the time between receiving a message and starting to process it, and just seems like an overall heavy solution. Lightweight jobs won't help because the resource constraints don't work for lightweight jobs.
In fact, you might as well not use AQ at all if you go down this route. Instead of writing messages to a queue, which later calls DBMS_SCHEDULER.CREATE_JOB, you could simply call DBMS_SCHEDULER.CREATE_JOB directly with a resource constraint.
---
Solution #3: Use an external app server. Run N threads, where each thread grabs a connection from a connection pool, loops, and calls DBMS_AQ.DEQUEUE with WAIT_FOREVER. This is the best approach because you can set N to cap the number of connections processing messages in parallel, and you do not have to tie up slots in JOB_QUEUE_PROCESSES.
However, this has downsides. Your app server often has much more downtime than your database due to releases, network partitions, and various other issues. If a session executing a long-running PL/SQL procedure is terminated, you cannot assume whether the procedure on the server will complete or be stopped. While this is also true for DBMS_SCHEDULER jobs that end up getting kil...
Categories: DBA Blogs
PL/SQL Execution Time
We have PL/SQL stored procedures that perform poorly. Using DBA_HIST or any other AWR/ASH metrics, is it possible to determine the runtimes of PL/SQL procedures?
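A sketch of one way to approximate this from ASH (note that DBA_HIST/ASH views require the Diagnostic Pack license; AWR retains roughly one ASH sample every 10 seconds, hence the multiplier):
<code>
-- ASH samples carry the entry-point PL/SQL unit, so grouping by it
-- approximates where procedure time was spent.
SELECT p.owner, p.object_name, p.procedure_name,
       COUNT(*) * 10 AS approx_db_seconds
FROM   dba_hist_active_sess_history h
       JOIN dba_procedures p
         ON  p.object_id     = h.plsql_entry_object_id
         AND p.subprogram_id = h.plsql_entry_subprogram_id
GROUP  BY p.owner, p.object_name, p.procedure_name
ORDER  BY approx_db_seconds DESC;
</code>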
Categories: DBA Blogs
Partitions evicting sequence cached values
Recently we observed that cached sequence values were being lost in significant numbers, appearing as large gaps in the persisted values.
Our system does not expect gapless sequence values, however we are trying to understand root cause.
gv$rowcache shows high getmisses for histograms, objects and segments, followed by sequences.
SGA dump shows many grows in shared pool and shrinks in buffer cache, and gv$db_object_cache shows high loads.
We built a trend of sequence value jumps (loss from cache) based on the gaps found in persisted values over time.
I will list the events that may have contributed to the gaps. Please let me know if this is an incorrect hypothesis.
1. The onset of moderately large sequence jumps aligns with the migration to a multitenant database.
2. Prior to the multitenant migration, we never had histogram collection as part of stats. It appears a DBA ran stats with histogram collection turned on at the time of the multitenant migration; sequence gaps were silently occurring.
3. A few months after the multitenant migration, a tablespace rebuild activity followed by stats collection was done. This time the standard stats collection script was run; it removed stats from many tables, but not from a few core tables that are extensively used in the application. So never-before-seen histogram traffic is still arriving at the dictionary cache.
4. After the multitenant migration, another effort started in which many tables were partitioned. There were two large one-time efforts that created several thousand partitions, followed by regularly scheduled jobs creating a few hundred partitions for historical data management.
5. The sequence jumps (and loss from cache) continued for many months, unnoticed, as the application is not affected by gaps.
6. Some of the regularly scheduled batch jobs were missed, so there was a long period where tables were not monthly-partitioned as expected. On discovery, a one-time catch-up activity was performed in which around 800 partitions were created.
7. The sequence jump phenomenon exploded uncontrollably and was discovered by a partner system.
We pinned the sequence in memory to calm it down.
Here is my draft hypothesis for a root cause; please correct it if it does not hold:
a) The multitenant migration increased traffic to the dictionary cache; sequence metadata is evicted and reloaded constantly, so values have jumps/gaps.
b) As many partitions are created, more traffic arrives at the dictionary cache and pressure increases; sequence jumps occur silently meanwhile.
c) Cumulative traffic to the dictionary cache increases every month as hundreds of partitions are added by the monthly job; pressure becomes intolerable and sequence evictions are on the rise.
d) The one-time gap-covering exercise for the missed partition creation pushed the traffic over a threshold; sequence jumps skyrocketed and are now visible to the application as a very large gap.
Do these dots connect as a root cause?
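A sketch of the checks and mitigations this hypothesis points to (the sequence name is hypothetical):
<code>
-- Dictionary-cache pressure on sequence entries:
SELECT parameter, gets, getmisses
FROM   v$rowcache
WHERE  parameter = 'dc_sequences';

-- Direct mitigation (as done above): pin the sequence so shared-pool
-- pressure cannot evict its cached range; 'Q' is the flag for sequences.
EXEC dbms_shared_pool.keep('APP_OWNER.ORDER_SEQ', 'Q')
</code>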
Categories: DBA Blogs
PDB Snapshot Copy/Carousel
Hello Chris/Connor,
Hope you are doing well.
We are using ExaC@C with 19c databases at work and are exploring whether we can use PDB Snapshot Carousel and/or PDB Snapshot Copy feature.
It might be just me but I am somewhat confused with the "art of possible" while using ASM.
PDB Snapshot Copy Process (Doc ID 2730771.1) appears to suggest that we can use the sparse disk group feature on Exadata to create either a PDB snapshot copy or a PDB Snapshot Carousel.
However, ORA-65227 during pluggable database snapshot (Doc ID 3024542.1) appears to suggest that the feature is simply not supported in 19c and is only available from 21c onwards.
https://www.dbarj.com.br/en/2021/09/creating-a-snapshot-sparse-clone-from-a-different-release-update/ appears to even provide an example of how PDB Snapshot Copy can be used to patch a 19c database.
So, are we able to use PDB Snapshot Carousel with an ASM sparse disk group on ExaC@C, using only the sparse disk group feature of ASM (and no file system)? I am confused...
Thanks in advance,
Narendra
Categories: DBA Blogs
Flashback Schema
Hello Tom,
I use the Oracle Flashback features a lot. We have several development and test environments here, and the combination of flashback database + replay is priceless. My question is this: why don't we have a "flashback schema" feature? I know you can simulate it with PL/SQL, but that's just for tables. A schema is much more than that: PL/SQL code, grants, etc. If you consolidate databases into schemas inside a large machine, you lose the ability to flash them back; to keep this ability you need virtualization (or pluggable databases :)). So, why was it never done? Is it impossible, and I fail to see the reason?
Thank you for your time.
Categories: DBA Blogs
SQL macro with different table
Hi,
I have a database with thousands of tables containing the same kind of information.
I need to write a program to aggregate this information and thought about using a SQL macro.
<code>
-- This is a very simplified concept
create or replace
function get_val (p_table_name varchar2)
return varchar2 sql_macro
is
begin
  return 'select col1, col2 from p_table_name';
end;
/
select col1, col2
from get_val(t.table_name)
, table_list t; --Table_list contains the list of the table to take
</code>
It always reports that the table doesn't exist.
The documentation talks about DBMS_TF.TABLE_T, which works if you pass the table itself as the parameter (not the table's name).
How can I do that? Do I have to write a function returning the rows from the table?
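A sketch of the DBMS_TF.TABLE_T form the documentation describes (tables tab1 and tab2 are hypothetical): the macro receives the table itself, so each table still has to appear literally in the query; driving the list from the rows of table_list would need dynamic SQL or a polymorphic table function instead:
<code>
create or replace function get_val (p_tab dbms_tf.table_t)
  return varchar2 sql_macro
is
begin
  -- p_tab is substituted with whatever table the caller passes in.
  return 'select col1, col2 from p_tab';
end;
/
select col1, col2 from get_val(tab1)
union all
select col1, col2 from get_val(tab2);
</code>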
Thank you
Categories: DBA Blogs
user_tab_identity_cols.generation_type and BY DEFAULT ON NULL
<code>SQL> select banner from v$version where rownum=1;
BANNER
--------------------------------------------------------------------------------
Oracle Database 19c EE Extreme Perf Release 19.0.0.0.0 - Production
SQL> create table t1(id int generated BY DEFAULT ON NULL as identity);
Table created.
SQL> create table t2(id int generated BY DEFAULT as identity);
Table created.
SQL> create table t3(id int generated ALWAYS as identity);
Table created.
SQL> select table_name, generation_type
2 from user_tab_identity_cols utic
3 where utic.table_name in ('T1', 'T2', 'T3');
TABLE_NAME GENERATION
---------- ----------
T1         BY DEFAULT
T2         BY DEFAULT
T3         ALWAYS</code>
Why doesn't user_tab_identity_cols.generation_type show "BY DEFAULT ON NULL" for T1?
The behaviour differs from T2, so where can I see the difference (besides DBMS_METADATA)?
<code>
SQL> set long 5000 lines 300 pages 5000
SQL> select dbms_metadata.get_ddl('TABLE', table_name) from user_tables where table_name in ('T1', 'T2');
DBMS_METADATA.GET_DDL('TABLE',TABLE_NAME)
--------------------------------------------------------------------------------
CREATE TABLE "YYY"."T1"
( "ID" NUMBER(*,0) GENERATED BY DEFAULT ON NULL AS IDENTITY MINVALUE 1 MAXVAL
UE 9999999999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NO
CYCLE NOKEEP NOSCALE NOT NULL ENABLE
) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
TABLESPACE "XXX"
CREATE TABLE "YYY"."T2"
( "ID" NUMBER(*,0) GENERATED BY DEFAULT AS IDENTITY MINVALUE 1 MAXVALUE 99999
99999999999999999999999 INCREMENT BY 1 START WITH 1 CACHE 20 NOORDER NOCYCLE N
OKEEP NOSCALE NOT NULL ENABLE
) SEGMENT CREATION DEFERRED
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
NOCOMPRESS LOGGING
TABLESPACE "XXX"
</code>
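Apart from DBMS_METADATA, the ON NULL part appears to be exposed as a column property rather than in the identity view; a quick check, which should show YES only for T1:
<code>
SELECT table_name, column_name, default_on_null
FROM   user_tab_columns
WHERE  table_name IN ('T1', 'T2', 'T3')
AND    column_name = 'ID';
</code>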
Categories: DBA Blogs
Reducing SS contention with Local Temp Tablespace fails
We use Business Objects against a database set up just for generating reports. This is an Exadata RAC with 2 nodes and ASM storage, and all of the BO sessions log in/connect as the same Oracle user.
During our last month end, which coincided with quarter end, we saw many sessions with the "enq: SS - contention" wait event. We also intermittently saw "buffer busy waits" as they all waited for access to the shared temporary tablespace, as indicated by the P1 value.
Searching for answers on how to reduce these wait events led us to local temporary tablespaces. So we set up a local temp tablespace in our development environment...
<code>CREATE LOCAL TEMPORARY TABLESPACE FOR ALL temp_reporting_local
TEMPFILE '+DTADVQ1/.../TEMPFILE/temp_reporting_local.dbf'
SIZE 10G AUTOEXTEND OFF
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M;
</code>
We assigned it to REPORT_USER as its default local temp tablespace...
<code>ALTER USER report_user LOCAL TEMPORARY TABLESPACE temp_reporting_local;
SELECT username, default_tablespace, temporary_tablespace, local_temp_tablespace
FROM DBA_USERS
WHERE username = 'REPORT_USER';
USERNAME DEFAULT_TABLESPACE TEMPORARY_TABLESPACE LOCAL_TEMP_TABLESPACE
REPORT_USER TBE_REPORT_USER_01 TEMP_REPORTING TEMP_REPORTING_LOCAL
</code>
Then we ran some large queries while logged in as REPORT_USER. The query fails with the same error message as before: "ORA-01652: unable to extend temp segment by 256 in tablespace TEMP_REPORTING".
Monitoring free space, the local temp tablespaces do not appear to have been used at all.
<code>SELECT tablespace_name, inst_id,
tablespace_size/1024/1024 AS total_mb,
allocated_space/1024/1024 AS allocated_mb,
free_space/1024/1024 AS free_mb
FROM dba_temp_free_space
WHERE tablespace_name LIKE 'TEMP_REPORTING%';
TABLESPACE_NAME       INST_ID  TOTAL_MB  ALLOCATED_MB  FREE_MB
TEMP_REPORTING                 10240     10240         0 (assumed to be zero at the instant the report died)
TEMP_REPORTING_LOCAL  1        10240     2             10238
TEMP_REPORTING_LOCAL  2        10240     2             10238
</code>
A hash join exceeded the 10 GB of shared temp but did not use any of the local temp.
So, how can we get these queries to use local temp once shared temp "overflows"? I'm thinking it cannot split the hash results between the two, which makes me wonder how it will ever use local temp tablespaces.
Second question: why is it not set up to use the local temp first and then overflow into the shared temp if needed? That seems like a more logical approach if you want to mitigate these wait events.
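To see where spills actually land while a report runs, a monitoring sketch (the user filter is hypothetical): temp segments show up here per session, with the tablespace they were allocated in:
<code>
SELECT s.sid, s.username, u.tablespace, u.segtype, u.blocks
FROM   v$tempseg_usage u
       JOIN v$session s ON s.saddr = u.session_addr
WHERE  s.username = 'REPORT_USER';
</code>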
Categories: DBA Blogs