DBA Blogs
Discover the major announcements from Oracle AI World 2025, including the transformation to Oracle Database 26ai, GoldenGate's AI-ready capabilities, and the game-changing OCI GoldenGate on Azure integration. Learn how Oracle is architecting AI into the core of data management and why these innovations position organizations for the AI revolution.
The post Reflections from Oracle AI World 2025: Innovation, Community, and the Road Ahead appeared first on DBASolved.
Today is the day dedicated to the late Joel Kallman, a mentor and friend, so I thought it would be nice to show off my APEX application hosted on OCI working with LLMs run via the Ollama server located at home.
It's an interesting exercise because one learns a lot about how these "things" work: the strong and weak points of each model, which one works best for a certain task, which one has the best "vision" capability, and which one creates the best embeddings. So, here are a few short videos showing some "vibe" coding that I'll need to revise tomorrow, an analysis of two versions of the same procedure created by the same LLM, and then a "live" comparison showing how other models answer the same question differently.
So far, so good. #JoelKallmanDay #orclAPEX #AI
As I have recently gone through a course on LLMs for Chatbots and RAG on OCI (Oracle Cloud Infrastructure), I thought about putting together a quick demo that uses:
A. The Hugging Face all-MiniLM-L12-v2 model (with some augmentations done by Oracle for download)
B. The ONNX (Open Neural Network Exchange) Runtime Engine that runs in Oracle 23ai (and, yes, in the 23ai Free downloadable edition, not requiring the Oracle Cloud)
C. Vectors stored in an Oracle Database using the Vector Datatype
D. The Hugging Face model to generate Vectors for Similarity / Semantic Searches
but DOES NOT USE Oracle Cloud Infrastructure. (To use OCI for Embedding -- generating "Vectors" from "Tokens" -- I would have to book a Dedicated AI Cluster to "Fine Tune" an existing model, so I am currently staying away from this expensive method.) [Similarly, I cannot currently build a Generative AI demo, as I would have to set up a model and server resources.]
The sequence is :
1. Set up an Oracle 23ai database (either in the Oracle Cloud OR on an Exadata machine OR the Free edition for On-Premises testing)
2. Create a User to hold the source data and vectors
3. Import the augmented Hugging Face model
4. Load the Source Data as words or sentences or chunks -- here I use a few sentences
5. Compute Vectors for the target column
6. Optional but recommended : Build a Vector Index -- for this demo with very few rows I haven't built a Vector Index, but if and/or when I build a demo with a few hundred or thousand sentences or chunks of data, I will create an Index on the Vectors (a sketch of the syntax appears below, after the vectors are loaded)
7. Use Oracle's VECTOR_DISTANCE Function for a Semantic Similarity Search
So here goes :
User and Grants :
Using the 23ai FREEPDB downloadable VM, I created the user with these grants :
grant connect to vector_demo identified by password ;
alter user vector_demo default tablespace users ;
alter user vector_demo quota unlimited on users ;
grant DB_DEVELOPER_ROLE to vector_demo ;
grant CREATE MINING MODEL to vector_demo;
CREATE OR REPLACE DIRECTORY DM_DUMP as '/tmp/Load_Model'; -- on the Linux VM
GRANT READ ON DIRECTORY dm_dump TO vector_demo;
GRANT WRITE ON DIRECTORY dm_dump TO vector_demo;
Download and Import the Model :
Download the ZIP file from
https://adwc4pm.objectstorage.us-ashburn-1.oci.customer-oci.com/p/VBRD9P8ZFWkKvnfhrWxkpPe8K03-JIoM5h_8EJyJcpE80c108fuUjg7R5L5O7mMZ/n/adwc4pm/b/OML-Resources/o/all_MiniLM_L12_v2_augmented.zip
It also contains a README
Place the extracted ONNX file on the Linux Server at /tmp/Load_Model
SQL> !ls -l /tmp/Load_Model
total 249932
-rw-r--r--. 1 oracle oinstall 122597346 Jul 15 2024 all_MiniLM_L12_v2_augmented.zip
-rw-r--r--. 1 oracle oinstall 133322334 Jul 15 2024 all_MiniLM_L12_v2.onnx
-rw-r--r--. 1 oracle oinstall 4232 Jul 15 2024 README-ALL_MINILM_L12_V2-augmented.txt
SQL>
If necessary, delete the previously imported model from the database (logged in to the SQL command line as vector_demo) :
exec DBMS_VECTOR.DROP_ONNX_MODEL(model_name => 'ALL_MINILM_L12_V2_AUGMENTED', force => true);
Import the model file :
SQL> EXECUTE DBMS_VECTOR.LOAD_ONNX_MODEL(-
directory=>'DM_DUMP',-
file_name=>'all_MiniLM_L12_v2.onnx',-
model_name=>'ALL_MINILM_L12_V2_AUGMENTED',-
metadata => JSON('{"function" : "embedding", "embeddingOutput" : "embedding", "input": {"input": ["DATA"]}}')-
);
The "model_name" is the User Defined Name (i.e. defined by the DBA).
In many online demos and the online demo, I see the model name specified as "doc_model".
I preferred to use the name "ALL_MINILM_L12_V2_AUGMENTED". You could use any meaningful name
Verify the imported model with these queries :
SQL> l
1 SELECT model_name, attribute_name, attribute_type, data_type, vector_info
2 FROM user_mining_model_attributes
3 WHERE model_name = 'ALL_MINILM_L12_V2_AUGMENTED'
4* ORDER BY ATTRIBUTE_NAME
SQL> /
MODEL_NAME ATTRIBUTE_NAME ATTRIBUTE_TY DATA_TYPE
-------------------------------- ---------------- ------------ ----------------
VECTOR_INFO
--------------------------------------------------------
ALL_MINILM_L12_V2_AUGMENTED DATA TEXT VARCHAR2
ALL_MINILM_L12_V2_AUGMENTED ORA$ONNXTARGET VECTOR VECTOR
VECTOR(384,FLOAT32)
SQL>
SQL> l
1 SELECT MODEL_NAME, MINING_FUNCTION, ALGORITHM,
2 ALGORITHM_TYPE, MODEL_SIZE
3 FROM user_mining_models
4 WHERE model_name = 'ALL_MINILM_L12_V2_AUGMENTED'
5* ORDER BY MODEL_NAME
SQL> /
MODEL_NAME MINING_FUNCTION ALGORITHM ALGORITHM_ MODEL_SIZE
-------------------------------- ------------------------------ ------------------------------ ---------- ----------
ALL_MINILM_L12_V2_AUGMENTED EMBEDDING ONNX NATIVE 133322334
SQL>
SQL>SELECT * FROM DM$VMALL_MINILM_L12_V2_AUGMENTED ORDER BY NAME ;
NAME VALUE
------------------ ------------------------------------------------
Graph Description Graph combining tokenizer and main_graph
tokenizer
main_graph
Graph Name tokenizer_main_graph
Input[0] input:string[?]
Output[0] embedding:float32[?,384]
Producer Name onnx.compose.merge_models
Version 1
6 rows selected.
SQL>
SQL> SELECT * FROM DM$VPALL_MINILM_L12_V2_AUGMENTED ORDER BY NAME;
NAME VALUE
------------------ ------------------------------------------------
embeddingOutput embedding
function embedding
SQL>
SQL> SELECT * FROM DM$VJALL_MINILM_L12_V2_AUGMENTED;
METADATA
--------------------------------------------------------------------------------
{"function":"embedding","embeddingOutput":"embedding","input":{"input":["DATA"]}
SQL>
Test generation of Embeddings with the Model :
SQL> SELECT VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'The quick brown fox jumped' as DATA) AS embedding;
EMBEDDING
------------------------------------------------------------------------------------------------------------------------------------
[1.65517051E-002,3.19098569E-002,-1.96293015E-002,-3.56926955E-002,
SQL>
SQL> SELECT VECTOR_DIMS(VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'The quick brown fox jumped' as DATA)) as Dimensions_Count;
DIMENSIONS_COUNT
----------------
384
SQL>
Note : The output is actually a list of 384 dimensions. I copied the whole output from SQL Developer, put it into a Python list, and
then did a len(list) to count the number of elements.
Create the Table for the source data (sentences of text in my demo) and load some data (sentences)
SQL> drop table my_data_source purge;
Table dropped.
SQL> create table my_data_source(id number primary key, my_sentence varchar2(4000));
Table created.
SQL> insert into my_data_source values (1,'The quick brown fox jumped over the lazy dog');
1 row created.
SQL> insert into my_data_source values (2,'she sells sea-shells at the seashore');
1 row created.
SQL> insert into my_data_source values (3,'the fox and dog are brown friends');
1 row created.
SQL> insert into my_data_source values (4,'the elephant knows the fox and dog');
1 row created.
SQL> insert into my_data_source values (5,'the fox, dog and elephant live together');
1 row created.
SQL> insert into my_data_source values (6,'aeroplanes fly in the sky');
1 row created.
SQL> commit;
Commit complete.
SQL>
SQL> select * from my_data_source;
ID
----------
MY_SENTENCE
------------------------------------------------------------------------------------------------------------------------------------
1
The quick brown fox jumped over the lazy dog
2
she sells sea-shells at the seashore
3
the fox and dog are brown friends
4
the elephant knows the fox and dog
5
the fox, dog and elephant live together
6
aeroplanes fly in the sky
6 rows selected.
SQL>
Create the Table to hold the computed Vectors, using the model that has been loaded
SQL> drop table my_data_vectors;
Table dropped.
SQL> create table my_data_vectors (id number primary key, my_sentence varchar2(4000), sentence_vector vector);
Table created.
SQL>
SQL> insert into my_data_vectors
select ds.id as id, my_sentence, to_vector(et.embed_vector) sentence_vector
from
my_data_source ds,
dbms_vector_chain.utl_to_embeddings(
dbms_vector_chain.utl_to_chunks(dbms_vector_chain.utl_to_text(ds.my_sentence), json('{"normalize":"all"}')),
json('{"provider":"database", "model":"ALL_MINILM_L12_V2_AUGMENTED"}')) t,
JSON_TABLE(t.column_value, '$[*]' COLUMNS (embed_id NUMBER PATH '$.embed_id', embed_data VARCHAR2(4000) PATH '$.embed_data', embed_vector CLOB PATH '$.embed_vector')) et;
2 3 4 5 6 7 8
6 rows created.
SQL>
SQL> commit
2 /
Commit complete.
SQL> select * from my_data_vectors;
ID
----------
MY_SENTENCE
------------------------------------------------------------------------------------------------------------------------------------
SENTENCE_VECTOR
------------------------------------------------------------------------------------------------------------------------------------
1
The quick brown fox jumped over the lazy dog
[1.70537992E-003,5.95331714E-002,-2.32173726E-002,3.02353837E-002,
2
she sells sea-shells at the seashore
[1.2790652E-002,5.97022101E-002,8.24511051E-002,3.93822305E-002,
3
the fox and dog are brown friends
[7.67209902E-002,-7.82399923E-002,-2.40087509E-002,-2.37390138E-002,
4
the elephant knows the fox and dog
[4.04452085E-002,-4.15055361E-003,-1.71641614E-002,2.63017584E-002,
5
the fox, dog and elephant live together
[9.44276601E-002,-4.74944711E-002,2.31287945E-002,2.9055763E-002,
6
aeroplanes fly in the sky
[1.1147093E-001,2.18752325E-002,-4.58196662E-002,-2.64751501E-002,
6 rows selected.
SQL>
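Step 6 above (the optional Vector Index) is something I would only add with a larger data set. A minimal sketch of the syntax, assuming the table and column names from this demo (I have not created this index for these few rows):
CREATE VECTOR INDEX my_data_vectors_ivf_idx
  ON my_data_vectors (sentence_vector)
  ORGANIZATION NEIGHBOR PARTITIONS  -- IVF-style index; INMEMORY NEIGHBOR GRAPH would give an HNSW-style index instead
  DISTANCE COSINE
  WITH TARGET ACCURACY 95;
With such an index in place, the similarity queries below could also be written as approximate searches (FETCH APPROX FIRST n ROWS ONLY) to take advantage of it.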
Do a Semantic Similarity Search for certain phrases :
SQL> l
1 SELECT VECTOR_DISTANCE(
2 VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'The quick brown fox jumped' as DATA),
3 VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'The quicker yellow fox jumped' as DATA),
4* COSINE) as Vector_Distance
SQL> /
VECTOR_DISTANCE
---------------
1.76127E-001
SQL>
SQL> l
1 SELECT VECTOR_DISTANCE(
2 VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'The quick brown fox jumped' as DATA),
3 VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'seashells' as DATA),
4* COSINE) as Vector_Distance
SQL> /
VECTOR_DISTANCE
---------------
9.81778E-001
SQL>
SQL> l
1 SELECT VECTOR_DISTANCE(
2 VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'The quick brown fox jumped' as DATA),
3 VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'moon elephants' as DATA),
4* COSINE)as Vector_Distance
SQL> /
VECTOR_DISTANCE
---------------
7.51297E-001
SQL>
--- "the quicker yellow fox jumped" is not very far away from "the quick brown fox jumped"
--- "moon elephants" is far away from the "the quick brown fox jumped".
--- "seashells" is much further away.
--- likely because the prebuilt model recognises tnat "fox" and "elephant" have some similarity being animals
SQL> l
1 SELECT VECTOR_DISTANCE(
2 VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'The quick brown fox jumped' as DATA),
3 VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'houshold electricals' as DATA),
4* COSINE)as Vector_Distance
SQL> /
VECTOR_DISTANCE
---------------
9.97313E-001
SQL>
-- "household" electricals is further away
SQL> l
1 SELECT id , my_sentence, vector_distance(sentence_vector , VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'sea-shells' as DATA), COSINE) as Vector_Distance
2 FROM my_data_vectors
3 ORDER BY vector_distance(sentence_vector , VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'sea-shells' as DATA), COSINE)
4* FETCH FIRST 2 ROWS ONLY
SQL> /
ID MY_SENTENCE VECTOR_DISTANCE
------------ ------------------------------------------------ ---------------
2 she sells sea-shells at the seashore 3.65937E-001
6 aeroplanes fly in the sky 8.43676E-001
SQL>
SQL> l
1 SELECT id , my_sentence, vector_distance(sentence_vector , VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'she sells seashells' as DATA), COSINE) as Vector_Distance
2 FROM my_data_vectors
3 ORDER BY vector_distance(sentence_vector , VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'she sells seashells' as DATA), COSINE)
4* FETCH FIRST 2 ROWS ONLY
SQL> /
ID MY_SENTENCE VECTOR_DISTANCE
------------ ------------------------------------------------ ---------------
2 she sells sea-shells at the seashore 1.46406E-001
6 aeroplanes fly in the sky 9.26212E-001
SQL>
--- If I refine my query from "sea-shells" to "she sells seashells", I get a better match, with the Vector Distance dropping from about 0.366 to 0.146
SQL> l
1 SELECT id , my_sentence, vector_distance(sentence_vector , VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'quick fox' as DATA), COSINE) as Vector_Distance
2 FROM my_data_vectors
3 ORDER BY vector_distance(sentence_vector , VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'quick fox' as DATA), COSINE)
4* FETCH FIRST 2 ROWS ONLY
SQL> /
ID MY_SENTENCE VECTOR_DISTANCE
------------ ------------------------------------------------ ---------------
1 The quick brown fox jumped over the lazy dog 3.22912E-001
4 the elephant knows the fox and dog 5.21092E-001
SQL>
--- "quick fox" has a better similarity with sentence 1 than sentence 4
What LLMs do when "generating" the next word (known as a "token") is to find the "token" with the highest probability -- which corresponds to a lower COSINE Distance.
COSINE Distance measures the Angle between the Vectors. There are other methods such as DOT PRODUCT, EUCLIDEAN (L1/L2), etc.
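For completeness, here is a quick sketch (not re-run against the demo database) comparing the same pair of phrases under two of these metrics -- COSINE and EUCLIDEAN are both accepted as the third argument to VECTOR_DISTANCE:
SELECT
  VECTOR_DISTANCE(
    VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'The quick brown fox jumped' as DATA),
    VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'The quicker yellow fox jumped' as DATA),
    COSINE)    AS cosine_distance,
  VECTOR_DISTANCE(
    VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'The quick brown fox jumped' as DATA),
    VECTOR_EMBEDDING(ALL_MINILM_L12_V2_AUGMENTED USING 'The quicker yellow fox jumped' as DATA),
    EUCLIDEAN) AS euclidean_distance;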
We have an Oracle Forms / Database application and were asked to limit a user's maximum online time to 15 minutes.
Following this, we set the profile's CONNECT_TIME to 15 -- this works well.
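A minimal sketch of what we did (the profile and user names here are just placeholders):
CREATE PROFILE online_15min LIMIT CONNECT_TIME 15;   -- value is in minutes
ALTER USER app_user PROFILE online_15min;
-- CONNECT_TIME is only enforced when RESOURCE_LIMIT is TRUE
ALTER SYSTEM SET resource_limit = TRUE;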
Is it possible to reset this limit if the user performs any action on the front end?
Maybe v$session.seconds_in_wait can help?
Thanks in advance
Helmut
Which PL/SQL package is primarily used for interacting with Generative AI services in Oracle Database 23ai?
DBMS_AI or DBMS_ML or DBMS_VECTOR_CHAIN or DBMS_GENAI?
Hi there!
I'm having some trouble trying to upload data from the agent to the OMS server. Everything seems correct, but:
<b>emctl upload agent
Oracle Enterprise Manager 24ai Release 1
Copyright (c) 1996, 2024 Oracle Corporation. All rights reserved.
---------------------------------------------------------------
EMD upload error:full upload has failed: uploadXMLFiles skipped :: OMS version not checked yet. If this issue persists check trace files for ping to OMS related errors. (OMS_DOWN)</b>
The agent is running, and so is the OMS. telnet from the agent gives:
telnet myserver.com 4903
Trying 10.8.0.65...
Connected to myserver.com.
Escape character is '^]'.
curl output:
curl -vk myserver.com:4903/empbs/upload
* Trying 10.8.0.65...
* TCP_NODELAY set
* Connected to myserver.com (10.8.0.**) port 4903 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: CN=myserver.com
* start date: Jul 10 16:57:40 2025 GMT
* expire date: Jul 9 16:57:40 2035 GMT
* issuer: O=EnterpriseManager on myserver.com; OU=EnterpriseManager on myserver.com; L=EnterpriseManager on myserver.com; ST=CA; C=US; CN=myserver.com
* SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
> GET /empbs/upload HTTP/1.1
> Host: myserver.com:4903
> User-Agent: curl/7.61.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Wed, 03 Sep 2025 18:53:47 GMT
< X-ORCL-EM-APIGW-CONSOLIDATED-BY: apigateway
< X-ORCL-EM-APIGW-ERR: 0
< X-ORACLE-DMS-ECID: 21f06e98-2895-465a-b3f3-17be919babe9-00001673
< X-ORCL-EMOA: true
< X-ORCL-EM-APIGW-GUID: 6113d58d-fde2-8b19-f054-c5eee6216d13
< X-ORACLE-DMS-RID: 0
< Date: Wed, 03 Sep 2025 18:53:47 GMT
< Content-Type: text/html;charset=utf-8
< X-Content-Type-Options: nosniff
< X-XSS-Protection: 1; mode=block
< Vary: Accept-Encoding
< Content-Length: 306
<
<HTML><HEAD><TITLE>
Http XML File receiver
</TITLE></HEAD><BODY bgcolor="#FFFFFF">
<H1>Http XML File receiver</H1>
<H2> Http Receiver Servlet active!</h2>
<H2> Product version is: 24ai </H2>
<H2> Product release version is: 24.1.0.0.0 </H2>
<H2> Core release version is: 24.1.0.0.0 </H2>
</BODY></HTML>
* Connection #0 to host myserver.com left intact
thank you very much!
Question
We're evaluating different approaches to implement the Outbox Pattern in Oracle 19c for reliable event publishing in our microservices architecture, but we're concerned about the significant I/O overhead and performance implications. Could you provide guidance on the best practices and alternatives?
Current Implementation Options We're Considering
1. Traditional Polling Approach
Method: Standard outbox table with application polling using SELECT ... FOR UPDATE SKIP LOCKED (a minimal sketch follows this list)
Concerns:
Constant polling creates unnecessary database load
Potential for high latency in event delivery
Resource consumption even when no events exist
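A minimal sketch of what we mean by the polling approach (table, column, and batch-size values are hypothetical, not from our actual system):
CREATE TABLE outbox_events (
  event_id     NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  aggregate_id VARCHAR2(100),
  payload      CLOB,
  created_at   TIMESTAMP DEFAULT SYSTIMESTAMP,
  published    CHAR(1)   DEFAULT 'N'
);

-- Poller: claim and mark a batch without blocking concurrent pollers
DECLARE
  CURSOR c_events IS
    SELECT event_id, payload
    FROM   outbox_events
    WHERE  published = 'N'
    ORDER  BY event_id
    FOR UPDATE SKIP LOCKED;
BEGIN
  FOR r IN c_events LOOP
    -- publish r.payload to the message broker here
    UPDATE outbox_events SET published = 'Y' WHERE CURRENT OF c_events;
    EXIT WHEN c_events%ROWCOUNT >= 100;  -- batch size
  END LOOP;
  COMMIT;
END;
/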
2. Change Data Capture (CDC) with Debezium
Method: Using Debezium to mine Oracle redo logs for outbox table changes
Concerns:
Additional complexity in deployment and monitoring
Dependency on external CDC infrastructure
Potential log mining overhead on the database
3. Oracle Advanced Queuing (AQ) with Sharded Queues
Method: Leveraging Oracle's native messaging with 19c sharded queue improvements (a minimal enqueue sketch follows this list)
Concerns:
Learning curve for development teams familiar with table-based approaches
Potential vendor lock-in
Queue management complexity
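A minimal sketch of the transactional guarantee we are after with AQ -- the business DML and the enqueue commit together (names are hypothetical; classic AQ with a RAW payload shown for brevity, since as far as we understand a sharded queue is created differently via DBMS_AQADM but enqueued with the same DBMS_AQ.ENQUEUE call):
BEGIN
  DBMS_AQADM.CREATE_QUEUE_TABLE(queue_table => 'outbox_qt', queue_payload_type => 'RAW');
  DBMS_AQADM.CREATE_QUEUE(queue_name => 'outbox_q', queue_table => 'outbox_qt');
  DBMS_AQADM.START_QUEUE(queue_name => 'outbox_q');
END;
/

DECLARE
  l_enq_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
  l_msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  l_msgid     RAW(16);
BEGIN
  -- business change and event publication in a single transaction
  INSERT INTO orders (order_id, status) VALUES (42, 'CREATED');   -- hypothetical business table
  DBMS_AQ.ENQUEUE(
    queue_name         => 'outbox_q',
    enqueue_options    => l_enq_opts,
    message_properties => l_msg_props,
    payload            => UTL_RAW.CAST_TO_RAW('{"order_id":42,"event":"OrderCreated"}'),
    msgid              => l_msgid);
  COMMIT;
END;
/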
Primary Concerns
I/O Impact: All approaches seem to significantly increase database I/O:
Polling creates constant read operations
CDC requires continuous log scanning
Queuing systems add their own storage and processing overhead
Scalability: As our event volume grows, we're worried about:
Database performance degradation
Increased storage requirements for outbox/queue tables
Network bandwidth consumption
Specific Questions
Performance Optimization: What Oracle 19c specific features or configurations can minimize the I/O overhead of outbox pattern implementations?
Alternative Architectures: Are there Oracle-native alternatives to the traditional outbox pattern that provide similar transactional guarantees with better performance characteristics?
Hybrid Approaches: Would a combination approach (e.g., AQ for high-priority events, polling for batch operations) be advisable?
Monitoring and Tuning: What specific metrics should we monitor, and what tuning parameters are most critical for outbox pattern performance in Oracle 19c?
Resource Planning: How should we size our database resources (I/O capacity, storage, memory) when implementing outbox patterns at scale?
Environment Details
Oracle Database 19c Enterprise Edition
Microservices architecture with moderate to high event volume
Requirements for exactly-once delivery semantics
Mixed OLTP and event-driven workloads
Any insights on Oracle-specific optimizations, alternative patterns, or architectural recommendations would be greatly appreciated.
Hi AskTom Team!
According to https://docs.oracle.com/en/database/oracle/oracle-database/19/adfns/schema-object-dependency.html#GUID-B99E885E-900F-4F29-A188-A617A301FDCE :
"Whenever a procedure is compiled, its time stamp is recorded in the data dictionary."
Is it possible to see this recorded timestamp ?
Marcin
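For concreteness, the timestamp being asked about appears to be the one exposed in the *_OBJECTS dictionary views; a minimal sketch, with MY_PROC as a placeholder procedure name:
SELECT object_name, object_type, timestamp, last_ddl_time
FROM   user_objects
WHERE  object_name = 'MY_PROC'
AND    object_type = 'PROCEDURE';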
Bring consecutive-number (start_val, end_val) records for an id together as one record, and start a new record whenever the consecutive sequence breaks.
https://livesql.oracle.com/next/?compressed_code=H4sIAAAAAAAACo3PwQqCQBCA4fvCvsPcVBjBXTWLboXHCoLOMupCgm2wO%252Fr8oXbo1jKn%252BflgmM4ZYgNM7WiAjeemX3YpYikAAIYeZnLdk5yOVZbgVj2T42amEa6Py6m%252Bf7Ox%252FRrBTq%252FWOCmSoxTbDNYbxzBYfv%252BcmWmcjI8hoggV5gv%252FDwvchcEKVRYmlUZVBtIdqkMY1RlqHUbzEvMqkO6xWN9K0%252Fp2%252FgB1bHIywAEAAA%253D%253D&code_language=PL_SQL&code_format=false
<code>create table test_date
(
id varchar2(10),
start_val NUMBER,
end_val number
);</code>
input data :
<code>insert into test_date values( 'a',1,3);
insert into test_date values( 'a',4,6);
insert into test_date values( 'a',7,10);
insert into test_date values( 'a',12,15);
insert into test_date values( 'a',16,19);
insert into test_date values( 'a',20,22);
insert into test_date values( 'a',35,37);
insert into test_date values( 'a',38,40);</code>
output data:
'a' , 1, 10
'a' , 12, 19
'a' , 35 , 40
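One possible way to produce that output (a sketch only; the gaps-and-islands grouping via an analytic running sum is an illustration checked against the sample rows above, not something from the original question):
<code>select id, min(start_val) as start_val, max(end_val) as end_val
from (
  select t.*,
         sum(case
               when start_val = lag(end_val) over (partition by id order by start_val) + 1
               then 0 else 1
             end) over (partition by id order by start_val) as grp
  from test_date t
)
group by id, grp
order by id, min(start_val);</code>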
Why does DML on a materialized view's master table cause the materialized view staleness to change from FRESH to NEEDS_COMPILE? I would have thought it would have changed to STALE. My understanding about NEEDS_COMPILE is that it is supposed to reflect structural changes (DDL), so I must have some gap in my understanding because I am getting NEEDS_COMPILE when only performing DML (not DDL).
<code>SQL>column mview_name format a20
SQL>create table T ( id NUMBER
2 )
3 /
Table created.
SQL>insert into T ( id ) values ( 1 )
2 /
1 row created.
SQL>create materialized view T_MV
2 ( id
3 )
4 as select id
5 from T
6 /
Materialized view created.
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV'
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV FRESH VALID COMPLETE
SQL>select * from T_MV order by id
2 /
ID
-----------
1
SQL>insert into T ( id ) values ( 2 )
2 /
1 row created.
SQL>commit
2 /
Commit complete.
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV'
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV NEEDS_COMPILE NEEDS_COMPILE COMPLETE
SQL>select * from T_MV order by id
2 /
ID
-----------
1
SQL>begin
2 dbms_snapshot.refresh( list => user || '.T_MV'
3 , purge_option => 2
4 , atomic_refresh => false
5 , out_of_place => true
6 )
7 ;
8 end;
9 /
PL/SQL procedure successfully completed.
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV'
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV FRESH VALID COMPLETE
SQL>select * from T_MV order by id
2 /
ID
-----------
1
2
SQL>insert into T ( id ) values ( 3 )
2 /
1 row created.
SQL>commit
2 /
Commit complete.
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name = 'T_MV'
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV NEEDS_COMPILE NEEDS_COMPILE ...
It appears USING TRUSTED CONSTRAINTS causes a materialized view to have staleness=UNKNOWN when it would otherwise be FRESH.
Is it possible to have a materialized view with staleness=FRESH when USING TRUSTED CONSTRAINTS?
If not, would the optimizer be less likely to consider a materialized view with staleness=UNKNOWN for query rewrite if query_rewrite_integrity=TRUSTED and query_rewrite_enabled=TRUE? How about if query_rewrite_integrity=ENFORCED?
<code>SQL>column mview_name format a20
SQL>create table T ( id NUMBER
2 )
3 /
Table created.
SQL>insert into T ( id ) values ( 1 )
2 /
1 row created.
SQL>create materialized view T_MV
2 ( id
3 )
4 -- refresh using trusted constraints
5 as select id
6 from T
7 /
Materialized view created.
SQL>create materialized view T_trusted_MV
2 ( id
3 )
4 refresh using trusted constraints
5 as select id
6 from T
7 /
Materialized view created.
SQL>
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name like 'T\_%MV' escape '\' order by 1
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV FRESH VALID COMPLETE
T_TRUSTED_MV UNKNOWN VALID COMPLETE
SQL>begin
2 dbms_snapshot.refresh( list => user || '.T_MV'
3 , purge_option => 2
4 , atomic_refresh => false
5 , out_of_place => true
6 )
7 ;
8 dbms_snapshot.refresh( list => user || '.T_TRUSTED_MV'
9 , purge_option => 2
10 , atomic_refresh => false
11 , out_of_place => true
12 )
13 ;
14 end;
15 /
PL/SQL procedure successfully completed.
SQL>select mview_name, staleness, compile_state, last_refresh_type from user_mviews where mview_name like 'T\_%MV' escape '\' order by 1
2 /
MVIEW_NAME STALENESS COMPILE_STATE LAST_REF
-------------------- ------------------- ------------------- --------
T_MV FRESH VALID COMPLETE
T_TRUSTED_MV UNKNOWN VALID COMPLETE
SQL>select * from T_MV order by id
2 /
ID
-----------
1
SQL>select * from T_trusted_MV order by id
2 /
ID
-----------
1
SQL>drop materialized view T_MV
2 /
Materialized ...
Hi Tom.
Is it possible to return rows from a table in the order that they were inserted?
I have an old query, pre 8.1.6, that always returned rows in the order they were inserted. Now, in version 8.1.7 & 9 they are returned in a seemingly random order.
Thanks
Hi tom,
I have a table with compression that also has partitions and subpartitions. You can see the CREATE DDL below. As you can see, all of my objects are either compressed or nocompressed, but I can't see this information in the all_tables view: COMPRESSION and COMPRESS_FOR come back NULL. Why can I see that the table is compressed in the DDL but not there? What is the point I am missing?
<code>CREATE TABLE EFSSALES1
(
sale_id NUMBER,
sale_date DATE,
region VARCHAR2(10),
amount NUMBER
)
COMPRESS BASIC
PARTITION BY RANGE (sale_date)
SUBPARTITION BY HASH (region)
(
PARTITION sales_2024 VALUES LESS THAN (TO_DATE('01-JAN-2025','DD-MON-YYYY'))
(
SUBPARTITION sp_east1 COMPRESS FOR OLTP,
SUBPARTITION sp_west1 NOCOMPRESS
),
PARTITION sales_2025 VALUES LESS THAN (TO_DATE('01-JAN-2026','DD-MON-YYYY'))
(
SUBPARTITION sp_east2 COMPRESS FOR OLTP,
SUBPARTITION sp_west2 COMPRESS FOR OLTP
)
);
SELECT compression, COMPRESS_FOR FROM all_tables WHERE table_name = 'EFSSALES1' AND owner='COPYCATLIVE'</code>
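For partitioned tables, the compression attributes are tracked at the partition/subpartition level rather than in ALL_TABLES, so a query along these lines (a sketch using the names from the DDL above) shows them:
<code>SELECT table_name, partition_name, subpartition_name, compression, compress_for
FROM   all_tab_subpartitions
WHERE  table_owner = 'COPYCATLIVE'
AND    table_name  = 'EFSSALES1'
ORDER  BY partition_name, subpartition_name;</code>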
Hello Tom,
I am wondering what is the benefit of using a recovery catalog over a control file? What are some of the decisions one should consider before going either way?
Please and thank you
I am using Windows 11 64-bit / Oracle 19c. Everything works fine except when I create a nested table and associate it with my table as a column. I can neither query the table nor drop it. When I query it, I get a message that the connection to the database was reset, and when I try to drop it I get a message such as:
ORA-00600: internal error code, arguments: [kkpo_rcinfo_defstg:objnotfound], [78561], [], [], [], [], [], [], [], [], [], []
00600. 00000 - "internal error code, arguments: [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s], [%s]"
*Cause: This is the generic internal error number for Oracle program
exceptions. It indicates that a process has encountered a low-level,
unexpected condition. The first argument is the internal message
number. This argument and the database version number are critical in
identifying the root cause and the potential impact to your system.
How do I resolve this?
I encountered inexplicable freezing of a PL/SQL procedure while trying to fetch and insert data into a table; however, the same operation runs successfully when called as a SQL query in any IDE.
I am trying to implement an ETL process in a data warehouse database.
I need to load data from a remote database's table into a local one. I have low privileges on the remote database, meaning I can't tune the network or create materialized view logs there.
The target table is 'long': the daily data that must be transferred is about one million rows, 7 columns wide, with mostly numeric data and no large objects.
I created a db link and a simple view that represents the remote table in the local database, and granted the select privilege to the <code>'DATAW'</code> schema designed for data storage.
In <code>'DATAW'</code> I created a procedure for merging data into an identical local table. Something like this:
<code>
create procedure merge_t_data
(p_days_offset in number)
as
begin
merge /*+ APPEND */ into t_data_local tgt
using (
select col_id, col2, ..., col7
from pdbadmin.v_data_remote
where updated_at >= trunc(sysdate) - interval '1' day * p_days_offset ) src on (tgt.col_id = src.col_id)
when matched then update
set ...
when not matched then
insert
...;
end;
</code>
When I run the procedure, the session acquires the wait event 'SQL*Net message from dblink' or 'SQL*Net more data from dblink', which stays the same forever.
When I check incoming traffic on the server while the procedure is running, I see that it is not being used at all.
<b>HOWEVER</b>
When I run the same merge operation using a query like:
<code>
merge /*+ APPEND */ into t_data_local tgt
using (
select col_id, col2, ..., col7
from pdbadmin.v_data_remote
where updated_at >= trunc(sysdate) - interval '1' day * 3 ) src on (tgt.col_id = src.col_id)
when matched then update
set ...
when not matched then
insert
...;
</code>
it runs successfully: I see incoming traffic up to 2 MB, and the query finishes after roughly 10 minutes.
I am the only user of the database; no one else works with it for now.
I have also tried inserting the data into a temporary table, using a cursor fetch with bulk collect, and running execute immediate in the procedure; the result was the same -- execution freezes.
Also worth mentioning: I successfully implemented the ETL process for a second table. It is much wider: the daily data needed for transfer is about 50k rows and the number of columns is more than 20. I did it with a similar merge procedure that runs successfully, unlike the one discussed above.
I want to know whether it is possible to get my merge procedure for the 'long' table to run, or what other solutions to this problem there might be.
Hey Tom,
I have a Java application that runs thousands of different types of queries against an Oracle database millions of times. I wanted to save the query preparation time by using the oracle.jdbc.implicitStatementCacheSize JDBC property to cache prepared queries. But I easily end up with an error ORA-01000: maximum open cursors exceeded, even when running a single query at a time and reading it to completion.
In my mind, an open cursor represents a way to scroll through the results of a query via communication with the database server. I don't immediately see a correlation between a statement and a cursor beyond the idea that the statement yields a cursor when executed. But it appears to be deeper than that in the Oracle JDBC driver.
See the following example code that can quickly reproduce what I am experiencing:
<code>
public class OracleCursorExhaustionThroughStatementCaching
{
public static void main(String[] args)
{
try
{
Class.forName("oracle.jdbc.driver.OracleDriver");
final Properties props = new Properties();
props.put("user", "scott");
props.put("password", "tiger");
// CURSORS on the remote system are set currently to 300. Just run more unique queries than there are cursors to reproduce.
// This cache should only be holding statement information and cursors should only be used during an individual query
props.put("oracle.jdbc.implicitStatementCacheSize", "1500"); // commenting/removing this line allows this test to run without error
try (Connection c = DriverManager.getConnection("jdbc:oracle:thin:@someserver:1521/mydb", props))
{
DatabaseMetaData meta = c.getMetaData();
System.out.println("Product: " + meta.getDatabaseProductName());
System.out.println("Version: " + meta.getDatabaseProductVersion());
System.out.println("Driver: " + meta.getDriverVersion());
System.out.println("JVM Version: " + System.getProperty("java.runtime.version"));
for(int i = 0; i < 1000; i++)
{
// Each statement will be closed after executing
try(PreparedStatement ps = c.prepareStatement("select " + i + " from dual")) // for demo a unique query against dual is enough
{
// Being explicit with closing the result set after execution because logically this is the end of the cursor. (Statement close closes it anyway)
try(ResultSet rs = ps.executeQuery())
{
while(rs.next())
; // just read each result set fully, which should bring the cursor to its end
}
}
}
}
} catch(Exception ex)
{
ex.printStackTrace();
}
}
}
</code>
So on my machine and database this code yields the following:
Product: Oracle
Version: Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.22.0.0.0
Driver: 23.8.0.25.04
JVM Vers...
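A sketch of a query that could show what is holding cursors for the session (the SID bind is a placeholder; the CURSOR_TYPE column should distinguish genuinely open cursors from session-cached ones):
<code>SELECT cursor_type, COUNT(*) AS cnt
FROM   v$open_cursor
WHERE  sid = :sid
GROUP  BY cursor_type
ORDER  BY cnt DESC;</code>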
If you are wondering why your APEX page is not shown in printer-friendly mode after specifying "Yes" in the relevant parameter of the URL, the reason is simply that "Yes", as reported in the documentation of APEX 24.x, is wrong: you must specify "YES" in uppercase.
I believe Oracle should accept a case-insensitive value for that parameter and avoid a lot of headaches; I mean, it's just a matter of taking the UPPER value and that's it, end of story.
If, for some reason, you cannot change the URL or you don't want to use the URL to activate printer-friendly mode, there is still the option to use a conditional before-header process where you set the variable: APEX_APPLICATION.g_printer_friendly := TRUE;
The condition could be based on a REQUEST value or on some other expression of your choosing. If, for some reason, you are struggling to understand whether the page is in printer-friendly mode or not, the easy way to know is to temporarily add a region with the condition "Current page is in printer-friendly mode" (or the opposite, whichever you prefer).
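A minimal sketch of such a before-header process (the 'PRINT' request value is just an example of a condition you might choose):
begin
  if :REQUEST = 'PRINT' then
    apex_application.g_printer_friendly := true;
  end if;
end;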
If you are hitting this weird problem when trying to log in to your APEX app running on Oracle ADB 23ai -- "Item ID (P9999_USERNAME) is not an item defined on the current page" -- according to Oracle APEX development team members it seems to be related to an issue with the database result cache mechanism that can be fixed by executing this procedure as SYSDBA (the ADMIN user on ADB): begin dbms_result_cache.flush; end;
You can find the whole story about the problem on this forum thread. Now, I am not completely clear whether this problem was fixed at some point and then popped up again in a more recent version of Oracle 23ai; in my case the ADB is version 23.9.0.25.08 and APEX has recently been upgraded to 24.2.8.
I am glad I quickly found the workaround this morning, as it was really driving me crazy. PS: The same caching bug seems to also affect APEX_EXEC.OPEN_QUERY_CONTEXT; that is, if you change the query in parameter p_sql_query, the new query will be ignored and the cached one will continue to be executed.