Tom Kyte
Problem with loading data from XML decimal point with german nls-settings
<code>-- Settings
select banner from v$version;
--Oracle Database 23c Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
select * from v$nls_parameters where parameter in ('NLS_NUMERIC_CHARACTERS','NLS_TERRITORY','NLS_LANGUAGE');
-- NLS_LANGUAGE GERMAN 3
-- NLS_TERRITORY GERMANY 3
-- NLS_NUMERIC_CHARACTERS ,. 3
create table asktom_gpx_tab
(i number
,xmldata xmltype
);
INSERT INTO asktom_gpx_tab
-- (i,xmldata)
VALUES (1
,xmltype(
'<?xml version="1.0" encoding="UTF-8" standalone="no" ?>
<gpx>
<trkpt lat="49.773210" lon="8.930600"><ele>340.8</ele></trkpt>
<trkpt lat="49.772980" lon="8.930280"><ele>342.0</ele></trkpt>
<trkpt lat="49.772740" lon="8.929900"><ele>341.5</ele></trkpt>
</gpx>')
);
commit;
select elevation
from asktom_gpx_tab t,
xmltable('/gpx/trkpt' passing t.xmldata
columns elevation varchar2(20) path 'ele') x1;
-- 340,8 -- converted because NLS sets decimal sign to comma
-- 342.0 -- why is this value not converted to comma???
-- 341,5 -- converted because NLS sets decimal sign to comma
-- I found a question here with a supposed solution, but it doesn't work correctly
select to_number(elevation,'FM9999D99','nls_numeric_characters = '',.''')
from asktom_gpx_tab t,
xmltable('/gpx/trkpt' passing t.xmldata
columns elevation varchar2(20) path 'ele') x1;
-- 340,8 -- ok
-- 3420 -- This is definitely a wrong value
-- 341,5 -- ok</code>
Shouldn't all values be treated the same way?
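For what it's worth, a sketch of two ways the conversion could be made consistent, assuming the source values always use a period as the decimal separator (same table as above):
<code>-- Option 1: let XMLTABLE do the cast by declaring the column as NUMBER;
-- the XML-side conversion uses a period regardless of NLS settings
select elevation
from asktom_gpx_tab t,
     xmltable('/gpx/trkpt' passing t.xmldata
              columns elevation number path 'ele') x1;

-- Option 2: convert explicitly, telling TO_NUMBER that the INPUT uses a period
-- as the decimal separator (note '.,' rather than ',.')
select to_number(elevation, 'FM9999D99', 'nls_numeric_characters = ''.,''')
from asktom_gpx_tab t,
     xmltable('/gpx/trkpt' passing t.xmldata
              columns elevation varchar2(20) path 'ele') x1;</code>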
Categories: DBA Blogs
What are the recommended BIG-IP F5 settings for routing HTTP traffic to ORDS 24.4 without SSL in an Oracle EBS-integrated environment?
Categories: DBA Blogs
impdp commit size parameter
Is a commit size parameter planned for the Data Pump import (impdp) utility in the future? Currently there is none. I see that direct-path insert and parallel processing are available; if a commit size parameter were added, would it help make data loading faster?
Categories: DBA Blogs
Performance Analysis of SQL over DB Link
Hello and thank you for your response.
Sorry I could not provide a live example SQL.
I have an application running in JBoss connecting to a database Reporting_DB and executes SQLs there.
The SQLs actually would access views in the Reporting_DB.
The views are accessing tables which are in a remote DB, say ReadOnlyStandby_DB accessed over a DB link to a remote database.
Here is a simple schematic:
Application --Network--> Reporting_DB ---DB Link--> ReadOnlyStandby_DB
ORM sql ---> View --> Table
SQL Signature:
Example of SQL seen in Reporting_DB:
The application uses some ORM (hibernate), so the SQLs are always changing as seen in the Reporting_DB:
select this_.Column1 as Column1_485_0_, this_.Column2 as Column2_485_0_, this_.Column3 as AUDIT3_485_0 etc.
Example of SQL seen in ReadOnlyStandby_DB
The sql seen in the ReadOnlyStandby_DB looks like this:
SELECT "A1"."col1" ... FROM "TABLE_1" where CONDITION ... UNION ALL ...
Users are complaining that performance is slow, and developers are sharing application logs showing SQLs timing out.
The final database where SQL is executed is a Data Guard read only Standby database.
I have OEM on Reporting_DB and ReadOnlyStandby_DB, but the app server (JBoss) is outside my jurisdiction and I have no access to or insight into it.
How can I get the following details:
1) Connect the dots for a session: how to follow a request from the application, to its DB session on Reporting_DB, to the corresponding DB session on ReadOnlyStandby_DB
2) Session trace: how to trace a session coming in over the DB link on ReadOnlyStandby_DB (a rough sketch follows this list)
3) SQL analysis: the SQL on Reporting_DB is not the same as on ReadOnlyStandby_DB; it seems to change. How can I connect SQL "a" on Reporting_DB with its related SQL "b" on ReadOnlyStandby_DB?
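For items 1 and 2, a rough sketch of the kind of correlation and tracing that is often used, assuming suitable privileges on both databases (bind variables and literal values are placeholders):
<code>-- On Reporting_DB: note the server process (SPID) of the session running the views
select s.sid, s.serial#, p.spid, s.machine, s.sql_id
from   v$session s
join   v$process p on p.addr = s.paddr
where  s.sid = :reporting_sid;

-- On ReadOnlyStandby_DB: sessions arriving over the DB link usually show the source
-- database's server process as their client, so PROCESS/MACHINE can be matched
-- against the SPID and host noted above
select sid, serial#, process, machine, program, sql_id
from   v$session
where  program like 'oracle@%';

-- Trace the remote session once identified (values taken from the query above)
exec dbms_monitor.session_trace_enable(session_id => 1234, serial_num => 56789, waits => true, binds => false);</code>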
Categories: DBA Blogs
Materialized view based on View expanded to underlying objects
Hello All
I have created a Materialized view using the following code
<code>CREATE MATERIALIZED VIEW "APPS"."XXBST_UNPACK_PENDING_MV" ("CUST_ACCOUNT_ID", "CUSTOMER_NAME", "SUPPLIER_ID", "SUPPLIER_NAME", "SHIPMENT_ID", "SHIPMENT_NUMBER", "UNPACK_DATE", "DAYS_IN_UNPACK_PENDING_STATUS")
SEGMENT CREATION IMMEDIATE
ORGANIZATION HEAP PCTFREE 10 PCTUSED 0 INITRANS 2 MAXTRANS 255
NOCOMPRESS NOLOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "APPS_TS_TX_DATA"
BUILD DEFERRED
USING INDEX
REFRESH COMPLETE ON DEMAND START WITH sysdate+0 NEXT sysdate + (1/24/60)*30
USING DEFAULT LOCAL ROLLBACK SEGMENT
USING TRUSTED CONSTRAINTS EVALUATE USING CURRENT EDITION DISABLE ON QUERY COMPUTATION DISABLE QUERY REWRITE
AS SELECT * FROM xxbst_unpack_pending_v;</code>
This is in a non-prod environment that has just been restored from a backup, because the code in prod was changed to the following fully expanded (lowest-level) form:
<code>CREATE MATERIALIZED VIEW "APPS"."XXBST_UNPACK_PENDING_MV" ("CUST_ACCOUNT_ID", "CUSTOMER_NAME", "SUPPLIER_ID", "SUPPLIER_NAME", "SHIPMENT_ID", "SHIPMENT_NUMBER", "UNPACK_DATE", "DAYS_IN_UNPACK_PENDING_STATUS")
SEGMENT CREATION IMMEDIATE
ON PREBUILT TABLE WITHOUT REDUCED PRECISION
USING INDEX
REFRESH FORCE ON DEMAND
USING DEFAULT LOCAL ROLLBACK SEGMENT
USING ENFORCED CONSTRAINTS EVALUATE USING CURRENT EDITION DISABLE ON QUERY COMPUTATION DISABLE QUERY REWRITE
AS SELECT "A1"."CUST_ACCOUNT_ID" "CUST_ACCOUNT_ID","A1"."CUSTOMER_NAME" "CUSTOMER_NAME","A1"."SUPPLIER_ID" "SUPPLIER_ID","A1"."SUPPLIER_NAME" "SUPPLIER_NAME","A1"."SHIPMENT_ID" "SHIPMENT_ID","A1"."SHIPMENT_NUMBER" "SHIPMENT_NUMBER","A1"."UNPACK_DATE" "UNPACK_DATE","A1"."DAYS_IN_UNPACK_PENDING_STATUS" "DAYS_IN_UNPACK_PENDING_STATUS" FROM (SELECT "A7"."CUST_ACCOUNT_ID" "CUST_ACCOUNT_ID","APPS"."XXBST_DEV_UTILS_PKG"."GET_CUST_SHORT_NAME"("A7"."CUST_ACCOUNT_ID") "CUSTOMER_NAME","A3"."VENDOR_ID" "SUPPLIER_ID","A3"."VENDOR_NAME" "SUPPLIER_NAME","A10"."SHIPMENT_ID" "SHIPMENT_ID","A10"."SHIPMENT_NUMBER" "SHIPMENT_NUMBER",MAX("A7"."CREATION_DATE") "UNPACK_DATE",TRUNC(SYSDATE)-TRUNC(MAX("A7"."CREATION_DATE")) "DAYS_IN_UNPACK_PENDING_STATUS" FROM "XXBST"."XXBST_TNT_SHPMNT" "A10","XXBST"."XXBST_TNT_MV_PLAN" "A9","XXBST"."XXBST_TNT_MV" "A8","XXICE"."XXICE_SM_RCV_HEADERS" "A7","XXBST"."XXBST_TNT_HEADERS_ALL" "A6","XXBST"."XXBST_TNT_LINES_ALL" "A5","XXBST"."XXBST_BOM_ORDER_HEADERS_ALL" "A4", (SELECT "A12"."VENDOR_ID" "VENDOR_ID","A12"."VENDOR_NAME" "VENDOR_NAME" FROM "AP"."AP_SUPPLIERS" "A12","AR"."HZ_PARTIES" "A11" WHERE "A12"."PARTY_ID"="A11"."PARTY_ID") "A3","XXBST"."XXBST_TNT_MV_PLAN" "A2" WHERE "A10"."SHIPMENT_ID"="A7"."SHIPMENT_ID" AND "A10"."SHIPMENT_ID"="A9"."SHIPMENT_ID" AND "A9"."PLAN_ID"="A8"."PLAN_ID" AND "A8"."MV_TYPE"='DELIVERY' AND "A8"."MV_STATUS"='UNPACK_PENDING' AND "A6"."SHIPM...
Categories: DBA Blogs
Export Oracle Label Security info from one DB to be imported into another DB
Hi Sir,
I'm trying to copy the Oracle Label Security information from one database and apply it to another one.
I tried to use expdp/impdp and dbms_metadata.get_ddl, but I didn't have success using them.
Is there a special package or tool to get this work done?
Thanks in advance.
Valerio Almeida
Categories: DBA Blogs
Design decision on database tables oracle DB
Hello experts,
I have a UNIQUE constraint on (col1, col2) for a table. But, due to new functionality, I need a conditional unique constraint based on col 3.
So something like this:
if col3 = 'Val1':
UNIQUE constraint on (col1, col2)
else:
UNIQUE constraint on (col4, col5)
I'm on Oracle DB 19c and found that creating a unique index with CASE expressions helps (a sketch follows below). Can you guide me on alternative options and the pros and cons of this design? I do not want to create any new table, and I want the best way to achieve conditional unique constraints.
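For reference, a minimal sketch of that CASE-based unique index approach, with illustrative table and column names; rows where the CASE expressions all evaluate to NULL are simply not indexed, so each rule applies only to its own col3 branch:
<code>create unique index t_uk_when_val1 on my_table
( case when col3 = 'Val1' then col1 end
, case when col3 = 'Val1' then col2 end
);

create unique index t_uk_when_other on my_table
( case when col3 <> 'Val1' then col4 end
, case when col3 <> 'Val1' then col5 end
);
-- note: rows where col3 is NULL are not constrained by either index</code>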
Thanks
Categories: DBA Blogs
SQL developer hanging
I am using version 23.1.1 and when I start up the application, it just hangs. I have no idea what to look at to even begin to diagnose the problem.
Categories: DBA Blogs
Social Sign-In with Azure
Dear Experts, Social Sign-in with Microsoft Azure/Office 365 in Oracle APEX applications is working well. I used this How-To: https://tm-apex.hashnode.dev/implementing-social-sign-in-with-microsoft-azureoffice-365-in-apex
When I use the substitution variable in APEX (&APP_USER.) I get the correct name. So far so good!
But we need the samAccountName for checking Authorization. I tried #samAccountName#, #sam_account_name# and #sam#. It doesn't work! :(
APEX's help references the site https://openid.net/specs/openid-connect-basic-1_0.html#Scopes, where I found other keys ("claims"). Not all of them work; for instance, #sub# and #family_name# work, but #preferred_username# does not.
With the help of Google I found other keys like #upn# (https://promatis.com/ch/en/build-a-secure-oracle-apex-app-with-microsoft-azure-ad-login-and-delegated-calendar-access/), which works fine and is not mentioned on the website above.
But my question is: how do I get the samAccountName from Azure? What is the correct name/key/claim? Do I perhaps have to configure something other than "profile,email" in the scope text field?
Categories: DBA Blogs
Oracle returns default value for column on rows inserted before the column was added
<code>create table add_column_default ( id number )
/
insert into add_column_default ( id ) values ( 0 )
/
alter table add_column_default add col1 number default 5
/
insert into add_column_default ( id, col1 ) values ( 11, null )
/
select * from add_column_default order by id
/
        ID       COL1
---------- ----------
         0          5
        11

2 rows selected.
drop table add_column_default
/</code>
<b>Assumptions:</b>
My understanding is that the data block is not modified during the ALTER. That is, the row with id = 0 is not updated to add a col1 value of 5.
https://docs.oracle.com/en/database/oracle/oracle-database/19/sqlrf/ALTER-TABLE.html#:~:text=If%20you%20specify%20the%20DEFAULT,subject%20to%20the%20following%20restrictions%3A
<i>"If you specify the DEFAULT clause for a column, then the default value is stored as metadata but the column itself is not populated with data. However, subsequent queries that specify the new column are rewritten so that the default value is returned in the result set."</i>
Note: This used to not be the case for nullable columns in 11.2.
https://docs.oracle.com/cd/E18283_01/server.112/e17118/statements_3001.htm#CJAHHIBI:~:text=When%20you%20add%20a%20column%2C%20the,a%20default%20value%20or%20NULL.
<i>"When you add a column, the initial value of each row for the new column is null.
...
If you specify the DEFAULT clause for a nullable column, then the default value is added to existing rows as part of this ALTER TABLE statement, and any update triggers defined on the table are fired. This behavior also results if you change a NOT NULL column with a default value to be nullable."</i>
My understanding is that the data block does not store any information (not even the length byte) regarding col1 when the value is null and col1 is the last column in the table.
<b>Therefore:</b>
Rows inserted before the ALTER do not have col1 information.
Rows inserted after the ALTER may not have col1 information (inserting null into the last fixed width column in a table).
<b>Confusion/Question:</b>
If both rows look the same (with respect to col1 information) in the data block, then how does Oracle know to return a col1 value of 5 for the row with id = 0 and return a col1 value of null for the row with id = 11?
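One way to see part of the answer yourself, assuming the 12c-or-later behaviour described in the quoted documentation: the default is kept in the dictionary, and Oracle typically adds a hidden, system-generated column it can use to tell rows that existed before the ALTER apart from rows inserted afterwards. A sketch:
<code>select column_name, hidden_column, user_generated, data_default
from   user_tab_cols
where  table_name = 'ADD_COLUMN_DEFAULT'
order  by internal_column_id;</code>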
Categories: DBA Blogs
XMLTYPE returning unknown special character
Hello Sir,
We're using Oracle Database 19C Enterprise edition.
We're getting an XML tag in VARCHAR2 format, and we're using XMLTYPE to convert it to XML. However, when there is a special character in the XML tag, we get an unknown special character in the output. What can we do to get the same output?
Sample SQL:
<code>select XMLTYPE('<tag1> a''bc </tag1>') from dual;</code>
Output:
<code><tag1> a'bc </tag1></code>
Expected output:
<code><tag1> a'bc </tag1></code>
Categories: DBA Blogs
Sql Plan Baseline
Hi Dear Experts,
I want to create a SQL plan baseline for one of my system's queries on a two-node RAC 19.21, as user SYS inside a PDB, with the following commands:
<code>var v_num number;
exec :v_num:=dbms_spm.load_plans_from_cursor_cache(sql_id => '0b3...............',plan_hash_value => 2170529430 ,fixed=>'YES');</code>
It gives a successful response, but the call returns 0. And when I check the plan baselines with the following command, I see that nothing new has been created:
<code>select * from dba_sql_plan_baselines</code>
I check with the following command and see that plan_hash_value equal to 2170529430 exists in memory:
<code>select sql_id, plan_hash_value, parsing_schema_name, sql_text
from v$sql where sql_id = '0b3...............';</code>
What is the problem?
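One quick check worth sketching here: the function returns the number of plans it loaded, so 0 suggests nothing matched. Also, DBA_SQL_PLAN_BASELINES has no SQL_ID column, so it can be easier to look the statement up by its text signature:
<code>select sql_handle, plan_name, origin, enabled, accepted, fixed, created
from   dba_sql_plan_baselines
where  signature = (select exact_matching_signature
                    from   v$sql
                    where  sql_id = '0b3...............'
                    and    rownum = 1);</code>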
Categories: DBA Blogs
Deleting duplicate records without using rowid and rownum
Hi Tom,
If there are duplicate records in a table, we know how to eliminate the duplicate rows using rowid.
But is there any way to delete the duplicate records in a table without using rowid or rownum?
My friend, who has been working with Oracle for around 4 years, says it is not possible, but I think there should be some way to do this (one possible approach is sketched below).
Please give me some suggestions, Tom. I would be thankful if you could illustrate with examples.
Thanks in Advance
Prakash
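One possible approach, just as a sketch, assuming a table T whose duplicate rows are complete copies of each other and which has no reliable key:
<code>-- Rebuild the table's content from its distinct rows, without rowid or rownum
create table t_dedup as select distinct * from t;

truncate table t;                  -- or drop T and rename T_DEDUP instead
insert into t select * from t_dedup;
commit;

drop table t_dedup;</code>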
Categories: DBA Blogs
Table Design
Hello,
I work as a dba and get requests from developers to create tables which we often review and sometimes change for better design and/or implementation.
One of the developers recently sent a request to create a table such as the following:
table_name:t1_relation
Column_names:
c1_master_id_pk (foreign key to t1_master table)
c1_attribute
c1_value
primary key all 3 columns.
They explained that the data in all of these columns is non-nullable and that all three columns are needed to uniquely identify a record.
Having all the columns of a table in the primary key didn't look quite right to me, so
I suggested we create a surrogate key and put a unique constraint on the 3 columns, with NOT NULL constraints on all of them. They initially said yes, then came back and asked to change it back to what they had requested originally.
I'm messaging to ask whether this is proper database design and what would be the ideal way to implement it.
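Just to make the two alternatives concrete, a sketch (data types are guesses, t1_master is assumed to exist with a primary key, and only one variant would actually be created):
<code>-- Option A: composite (natural) primary key, as the developers requested
create table t1_relation
( c1_master_id_pk  number        not null references t1_master
, c1_attribute     varchar2(30)  not null
, c1_value         varchar2(100) not null
, constraint t1_relation_pk primary key (c1_master_id_pk, c1_attribute, c1_value)
);

-- Option B: surrogate key plus a unique constraint on the natural key
create table t1_relation
( relation_id      number generated always as identity primary key
, c1_master_id_pk  number        not null references t1_master
, c1_attribute     varchar2(30)  not null
, c1_value         varchar2(100) not null
, constraint t1_relation_uk unique (c1_master_id_pk, c1_attribute, c1_value)
);</code>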
Categories: DBA Blogs
CONNECT_TIME with reset on action
We have an Oracle Forms / Database application and were asked to limit users' maximum online time to 15 minutes.
Following this, we set the profiles' CONNECT_TIME to 15, and this works well.
How is it possible to reset this limit if the user performs any action on the front end?
Maybe v$session.seconds_in_wait can help?
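For reference, a hedged sketch of the two profile limits that usually come up here (the profile name is illustrative): CONNECT_TIME caps total session time regardless of activity, while IDLE_TIME caps continuous inactivity and is effectively reset by any round trip from the front end.
<code>-- Total connected time, in minutes, regardless of user activity
alter profile forms_users limit connect_time 15;

-- Continuous idle time, in minutes; any database call starts the clock again
alter profile forms_users limit idle_time 15;</code>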
Thanks in advance
Helmut
Categories: DBA Blogs
PL/SQL package
Which PL/SQL package is primarily used for interacting with Generative AI services in Oracle Database 23ai?
DBMS_AI or DBMS_ML or DBMS_VECTOR_CHAIN or DBMS_GENAI?
Categories: DBA Blogs
Agent OEM 24ai
Hi there!
I'm having some trouble trying to upload data from the agent to the OMS server. Everything seems correct, but:
<b>emctl upload agent
Oracle Enterprise Manager 24ai Release 1
Copyright (c) 1996, 2024 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD upload error:full upload has failed: uploadXMLFiles skipped :: OMS version not checked yet. If this issue persists check trace files for ping to OMS related errors. (OMS_DOWN)</b>
The agent is running, and so is the OMS. Telnet from the agent shows:
telnet myserver.com 4903
Trying 10.8.0.65...
Connected to myserver.com.
Escape character is '^]'.
curl output:
curl -vk myserver.com:4903/empbs/upload
* Trying 10.8.0.65...
* TCP_NODELAY set
* Connected to myserver.com (10.8.0.**) port 4903 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=myserver.com
* start date: Jul 10 16:57:40 2025 GMT
* expire date: Jul 9 16:57:40 2035 GMT
* issuer: O=EnterpriseManager on myserver.com; OU=EnterpriseManager on myserver.com; L=EnterpriseManager on myserver.com; ST=CA; C=US; CN=myserver.com
* SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
> GET /empbs/upload HTTP/1.1
> Host: myserver.com:4903
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 03 Sep 2025 18:53:47 GMT
< X-ORCL-EM-APIGW-CONSOLIDATED-BY: apigateway
< X-ORCL-EM-APIGW-ERR: 0
< X-ORACLE-DMS-ECID: 21f06e98-2895-465a-b3f3-17be919babe9-00001673
< X-ORCL-EMOA: true
< X-ORCL-EM-APIGW-GUID: 6113d58d-fde2-8b19-f054-c5eee6216d13
< X-ORACLE-DMS-RID: 0
< Date: Wed, 03 Sep 2025 18:53:47 GMT
< Content-Type: text/html;charset=utf-8
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Vary: Accept-Encoding
< Content-Length: 306
<
<HTML><HEAD><TITLE>
Http XML File receiver
</TITLE></HEAD><BODY bgcolor="#FFFFFF">
<H1>Http XML File receiver</H1>
<H2> Http Receiver Servlet active!</h2>
<H2> Product version is: 24ai </H2>
<H2> Product release version is: 24.1.0.0.0 </H2>
<H2> Core release version is: 24.1.0.0.0 </H2>
</BODY></HTML>
* Connection #0 to host myserver.com left intact
thank you very much!
Categories: DBA Blogs
General Question for your thought or Experience with the Outbox Pattern and viable alternatives
Question
We're evaluating different approaches to implement the Outbox Pattern in Oracle 19c for reliable event publishing in our microservices architecture, but we're concerned about the significant I/O overhead and performance implications. Could you provide guidance on the best practices and alternatives?
Current Implementation Options We're Considering
1. Traditional Polling Approach
Method: Standard outbox table with application polling using SELECT ... FOR UPDATE SKIP LOCKED (a rough sketch of this follows the list of options below)
Concerns:
Constant polling creates unnecessary database load
Potential for high latency in event delivery
Resource consumption even when no events exist
2. Change Data Capture (CDC) with Debezium
Method: Using Debezium to mine Oracle redo logs for outbox table changes
Concerns:
Additional complexity in deployment and monitoring
Dependency on external CDC infrastructure
Potential log mining overhead on the database
3. Oracle Advanced Queuing (AQ) with Sharded Queues
Method: Leveraging Oracle's native messaging with 19c sharded queue improvements
Concerns:
Learning curve for development teams familiar with table-based approaches
Potential vendor lock-in
Queue management complexity
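For option 1, a minimal sketch of what a SKIP LOCKED polling consumer often looks like (table and column names are illustrative):
<code>declare
  cursor c is
    select id, payload
    from   outbox_events
    where  status = 'NEW'
    for update skip locked;
  n pls_integer := 0;
begin
  for r in c loop
    -- hand r.payload to the message broker here, then mark the row as sent
    update outbox_events
    set    status  = 'SENT',
           sent_at = systimestamp
    where  current of c;
    n := n + 1;
    exit when n >= 100;   -- simple per-poll batch cap
  end loop;
  commit;
end;
/</code>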
Primary Concerns
I/O Impact: All approaches seem to significantly increase database I/O:
Polling creates constant read operations
CDC requires continuous log scanning
Queuing systems add their own storage and processing overhead
Scalability: As our event volume grows, we're worried about:
Database performance degradation
Increased storage requirements for outbox/queue tables
Network bandwidth consumption
Specific Questions
Performance Optimization: What Oracle 19c specific features or configurations can minimize the I/O overhead of outbox pattern implementations?
Alternative Architectures: Are there Oracle-native alternatives to the traditional outbox pattern that provide similar transactional guarantees with better performance characteristics?
Hybrid Approaches: Would a combination approach (e.g., AQ for high-priority events, polling for batch operations) be advisable?
Monitoring and Tuning: What specific metrics should we monitor, and what tuning parameters are most critical for outbox pattern performance in Oracle 19c?
Resource Planning: How should we size our database resources (I/O capacity, storage, memory) when implementing outbox patterns at scale?
Environment Details
Oracle Database 19c Enterprise Edition
Microservices architecture with moderate to high event volume
Requirements for exactly-once delivery semantics
Mixed OLTP and event-driven workloads
Any insights on Oracle-specific optimizations, alternative patterns, or architectural recommendations would be greatly appreciated.
Categories: DBA Blogs
Remote Procedure Call (RPC) Dependency Management - where is remote timestamp stored?
Hi AskTom Team!
According to https://docs.oracle.com/en/database/oracle/oracle-database/19/adfns/schema-object-dependency.html#GUID-B99E885E-900F-4F29-A188-A617A301FDCE :
"Whenever a procedure is compiled, its time stamp is recorded in the data dictionary."
Is it possible to see this recorded timestamp?
Marcin
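For what it's worth, a sketch of how that recorded timestamp can be viewed in the dictionary (here against the local dictionary; the same views exist on the remote side):
<code>select object_name, object_type, timestamp, last_ddl_time
from   user_objects
where  object_type in ('PROCEDURE', 'FUNCTION', 'PACKAGE', 'PACKAGE BODY')
order  by object_name;</code>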
Categories: DBA Blogs
Bring consecutive (start_val, end_val) records for an id together as one record, and add a new record when the consecutive sequence breaks
https://livesql.oracle.com/next/?compressed_code=H4sIAAAAAAAACo3PwQqCQBCA4fvCvsPcVBjBXTWLboXHCoLOMupCgm2wO%252Fr8oXbo1jKn%252BflgmM4ZYgNM7WiAjeemX3YpYikAAIYeZnLdk5yOVZbgVj2T42amEa6Py6m%252Bf7Ox%252FRrBTq%252FWOCmSoxTbDNYbxzBYfv%252BcmWmcjI8hoggV5gv%252FDwvchcEKVRYmlUZVBtIdqkMY1RlqHUbzEvMqkO6xWN9K0%252Fp2%252FgB1bHIywAEAAA%253D%253D&code_language=PL_SQL&code_format=false
<code>create table test_date
(
id varchar2(10),
start_val NUMBER,
end_val number
);</code>
Input data:
<code>insert into test_date values( 'a',1,3);
insert into test_date values( 'a',4,6);
insert into test_date values( 'a',7,10);
insert into test_date values( 'a',12,15);
insert into test_date values( 'a',16,19);
insert into test_date values( 'a',20,22);
insert into test_date values( 'a',35,37);
insert into test_date values( 'a',38,40);</code>
Output data:
'a' , 1, 10
'a' , 12, 19
'a' , 35 , 40
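A sketch of the usual "gaps and islands" approach using analytic functions, assuming "consecutive" means the next start_val equals the previous end_val + 1 (under that rule the (12,15), (16,19) and (20,22) rows would merge into a single 12-22 range):
<code>select id, min(start_val) as start_val, max(end_val) as end_val
from  (select id, start_val, end_val,
              sum(brk) over (partition by id order by start_val) as grp
       from  (select t.*,
                     case
                       when start_val = lag(end_val) over (partition by id order by start_val) + 1
                       then 0
                       else 1
                     end as brk
              from   test_date t))
group by id, grp
order by id, start_val;</code>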
Categories: DBA Blogs



