DBA Blogs
In today's data-driven world, businesses rely on robust databases to manage their mission-critical workloads. Oracle Database 23ai Free offers a streamlined experience of this industry-leading database, with resource limits of up to 2 CPUs for foreground processes, 2 GB of RAM, and 12 GB of user data on disk. This free version is designed for ease of use and simple download, making it an ideal starting point for exploring the capabilities of Oracle Database.
A key feature that sets Oracle Database 23ai apart is its AI Vector Search capability. But what exactly are vectors? In simple terms, vectors are mathematical representations of data that capture complex relationships and patterns. They are a way to encode data, such as text, images, or audio, into numerical values that can be easily processed and analyzed by machines. Vectors enable computers to understand the semantic meaning and context of data, allowing for more accurate and efficient searching and analysis.
Vector search takes this concept a step further. It is a technique used to quickly identify similar data points within a vast dataset. Traditional search methods rely on keyword matching or exact phrase searches, but vector search enables more nuanced and intuitive queries. By comparing the vector representations of different data points, vector search can identify patterns and relationships that would be missed by traditional search methods. Oracle AI Vector Search builds on this technology, introducing a converged database capability that revolutionizes the way businesses interact with their data.
By storing vectors as a native data type and utilizing vector indexes and SQL functions, AI Vector Search enables fast and simple similarity search queries on both structured and unstructured data. This means that customers can quickly identify similar information across documents, images, and other unstructured data sources. Furthermore, AI Vector Search allows prompts to large language models (LLMs) to be augmented with private business data or domain knowledge, unlocking new possibilities for data-driven insights and decision-making.
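For illustration, here is a minimal sketch of what this looks like in SQL (the table and column names are hypothetical; VECTOR and VECTOR_DISTANCE are the 23ai features described above):

-- A table storing 768-dimensional embeddings as a native VECTOR column
CREATE TABLE docs (
  id        NUMBER PRIMARY KEY,
  content   VARCHAR2(4000),
  embedding VECTOR(768, FLOAT32)
);

-- Top-5 similarity search; :query_vec is an embedding produced by your model
SELECT id, content
FROM   docs
ORDER  BY VECTOR_DISTANCE(embedding, :query_vec, COSINE)
FETCH  FIRST 5 ROWS ONLY;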
With Oracle AI Vector Search, businesses can unlock the full potential of their data, uncovering hidden patterns and relationships that drive innovation and growth. Whether you're working with text, images, or other data types, Oracle Database 23ai's AI Vector Search capability is poised to transform the way you search, analyze, and interact with your data.

Installation:

Install it with Docker:

docker pull container-registry.oracle.com/database/free:latest

Install it with Oracle VirtualBox:

Oracle_Database_23ai_Free_Developer.ova

Install it on Linux / Windows:

oracle-database-free-23ai-1.0-1.el8.x86_64.rpm
WINDOWS.X64_236000_free.zip

Connecting to Oracle Database Free:

For PDB: sqlplus sys@localhost:1521/FREEPDB1 as sysdba
For CDB: sqlplus sys@localhost:1521/FREE as sysdba

In Python:

import oracledb

conn = oracledb.connect(user="[Username]", password="[Password]",
                        dsn="localhost:1521/FREEPDB1")
with conn.cursor() as cur:
    cur.execute("SELECT 'Hello World!' FROM dual")
    res = cur.fetchall()
    print(res)

In Go:

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/godror/godror"
)

func main() {
    // connectString format: [hostname]:[port]/[DB service name]
    dsn := `user="[Username]" password="[Password]" connectString="localhost:1521/FREEPDB1"`
    db, err := sql.Open("godror", dsn)
    if err != nil {
        panic(err)
    }
    defer db.Close()

    rows, err := db.Query("SELECT 'Hello World!' FROM dual")
    if err != nil {
        panic(err)
    }
    defer rows.Close()

    var strVal string
    for rows.Next() {
        if err := rows.Scan(&strVal); err != nil {
            log.Fatal(err)
        }
        fmt.Println(strVal)
    }
    if err := rows.Err(); err != nil {
        log.Fatal(err)
    }
}
In today's data-driven world, AI needs fuel to power innovative applications. Oracle Database 23ai brings AI directly to your data, making it effortless to develop cutting-edge apps and tackle mission-critical tasks. But what makes this possible? Enter GraphQL, a query language that is changing the way we interact with data.

GraphQL is an open-source data query and manipulation language developed internally by Facebook and open-sourced in 2015. It allows clients to specify exactly what data they need, eliminating unnecessary requests and improving performance. GraphQL's declarative nature makes it a perfect fit for modern, data-driven applications. Since the 2015 open-source release it has seen widespread adoption by tech giants like GitHub, Pinterest, and Airbnb.

GraphQL solves several pain points that have plagued developers for years. By allowing clients to receive only the data they request, GraphQL reduces data transfer and minimizes bandwidth usage. This results in improved performance, as fewer requests and optimized data retrieval lead to faster responses. Additionally, GraphQL supports multiple data sources, making integration seamless. Its self-documenting nature and intuitive queries simplify development, making it a favorite among developers.

Traditionally, relational databases like Oracle used SQL for querying. However, SQL can be restrictive, often requiring multiple queries to fetch related data. GraphQL changes this by simplifying complex queries and enabling real-time data retrieval, which makes it a good fit for applications requiring instant updates. Oracle's integration of GraphQL into its database takes this power to the next level, offering native support, optimized queries, and robust security features.

With Oracle Database 23ai and GraphQL, developers can build innovative applications faster and more efficiently. GraphQL's nested queries and relationships make fetching complex data easier, while Oracle's database engine optimizes queries for peak performance. This powerful combination enables developers to focus on building exceptional user experiences.

Imagine querying a movie database to get personalized recommendations. With GraphQL, you can fetch exactly what you need. Here are some examples:

Query 1: Get Movie Details

query Movies {
  movies {
    title
    director
    genres
    cast {
      actor_name
      character_name
    }
  }
}

Query 2: Find Movies by Actor

query MoviesByActor {
  movies {
    title
    release_year
    actors {
      actor_name
      movies_actor_id {
        title
      }
    }
  }
}

Query 3: Discover Movie Recommendations

query Recommendations {
  movies {
    title
    rating
    similar_movies {
      title
      genre
    }
  }
}
These examples illustrate the potential of Oracle Database 23ai and GraphQL. By combining AI-powered data analysis with intuitive querying, developers can unlock new possibilities in application development.
With Oracle Database 23ai and GraphQL, building innovative movie apps is faster, easier, and more powerful than ever.
Hope this helps.
As an Oracle database administrator, managing sessions is crucial for maintaining performance and availability. This script provides a comprehensive solution for monitoring and cleaning up idle and blocking sessions.
The script identifies blocking sessions exceeding a specified threshold (default: 60 minutes), kills them, and sends notification emails; it does the same for idle sessions exceeding their own threshold (default: 60 minutes). Key components include session identification using V$SESSION, V$PROCESS, and V$TRANSACTION, threshold settings, notification email functionality using TRACK.SEND_EMAIL (a site-specific helper package), and error handling. The script declares variables for the threshold settings (minutes), the notification list, and other working variables, then monitors blocking sessions in a FOR loop, killing each one and recording it for notification. A similar loop handles idle sessions.

DECLARE
  -- Threshold settings (minutes)
  in_blocker_threshold_minutes NUMBER := 60;
  in_idle_threshold_minutes    NUMBER := 60;
  -- Notification list
  in_notification_list VARCHAR2(100) := 'your_email@example.com';
  -- Other variables
  v_Body              CLOB;
  any_blockers_killed NUMBER := 0;
  any_idlers_killed   NUMBER := 0;
  -- ORA-00031: session marked for kill
  marked_for_kill EXCEPTION;
  PRAGMA EXCEPTION_INIT(marked_for_kill, -31);
BEGIN
  -- Monitor blocking sessions
  FOR bses IN (
    SELECT s.sid, s.serial#, p.spid, s.username, s.program, machine, osuser,
           logon_time, last_call_et,
           NVL(sbc.ses_blocking_cnt, 0) ses_blocking_cnt,
           NVL2(t.used_ublk, TO_CHAR(used_ublk), 'none') used_ublk,
           sa.sql_text last_command
    FROM   v$session s, v$process p, v$transaction t, v$sqlarea sa,
           (SELECT blocking_session, COUNT(*) ses_blocking_cnt
            FROM   v$session
            WHERE  blocking_session IS NOT NULL
            GROUP  BY blocking_session) sbc
           -- join predicates below are assumed; the original listing omitted them
    WHERE  s.paddr = p.addr
    AND    s.taddr = t.addr(+)
    AND    s.sql_id = sa.sql_id(+)
    AND    s.sid = sbc.blocking_session   -- only sessions actually blocking others
    AND    last_call_et > in_blocker_threshold_minutes * 60
    AND    s.username IS NOT NULL
    AND    s.type <> 'BACKGROUND'
    AND    s.blocking_session IS NULL
  )
  LOOP
    -- Kill blocking session and record it for notification
    BEGIN
      EXECUTE IMMEDIATE 'ALTER SYSTEM KILL SESSION ''' || bses.sid || ',' ||
                        bses.serial# || ''' IMMEDIATE';
      any_blockers_killed := 1;
      v_Body := v_Body || 'Killed blocker ' || bses.sid || ',' || bses.serial# ||
                ' (' || bses.username || ')' || CHR(10);
    EXCEPTION
      WHEN marked_for_kill THEN
        DBMS_OUTPUT.PUT_LINE(bses.sid || ',' || bses.serial# || ' marked for kill.');
    END;
  END LOOP;

  -- Monitor idle sessions
  FOR ises IN (
    SELECT s.sid, s.serial#, p.spid, s.username, s.program, machine, osuser,
           logon_time, last_call_et,
           NVL(sbc.ses_blocking_cnt, 0) ses_blocking_cnt,
           NVL2(t.used_ublk, TO_CHAR(used_ublk), 'none') used_ublk,
           sa.sql_text last_command
    FROM   v$session s, v$process p, v$transaction t, v$sqlarea sa,
           (SELECT blocking_session, COUNT(*) ses_blocking_cnt
            FROM   v$session
            WHERE  blocking_session IS NOT NULL
            GROUP  BY blocking_session) sbc
    WHERE  s.paddr = p.addr
    AND    s.taddr = t.addr(+)
    AND    s.sql_id = sa.sql_id(+)
    AND    s.sid = sbc.blocking_session(+)
    AND    last_call_et > in_idle_threshold_minutes * 60
    AND    s.username IS NOT NULL
    AND    s.type <> 'BACKGROUND'
  )
  LOOP
    -- Kill idle session and record it for notification
    BEGIN
      EXECUTE IMMEDIATE 'ALTER SYSTEM KILL SESSION ''' || ises.sid || ',' ||
                        ises.serial# || ''' IMMEDIATE';
      any_idlers_killed := 1;
      v_Body := v_Body || 'Killed idler ' || ises.sid || ',' || ises.serial# ||
                ' (' || ises.username || ')' || CHR(10);
    EXCEPTION
      WHEN marked_for_kill THEN
        DBMS_OUTPUT.PUT_LINE(ises.sid || ',' || ises.serial# || ' marked for kill.');
    END;
  END LOOP;

  -- Send notification emails
  IF any_blockers_killed = 1 OR any_idlers_killed = 1 THEN
    TRACK.SEND_EMAIL('oracle@your_host', in_notification_list,
                     'Killed sessions on your_instance',
                     '<pre>' || v_Body || '</pre>');
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error checking idle and blocking sessions in your_instance');
    TRACK.SEND_EMAIL('oracle@your_host', in_notification_list,
                     'Error checking idle and blocking sessions in your_instance',
                     '<pre>' || SQLERRM || '</pre>');
    RAISE;
END;
/
To maximize the effectiveness of this script, consider the following best practices:
- Schedule the script to run regularly (e.g., every 30 minutes).
- Adjust threshold settings according to your database requirements.
- Monitor notification emails for killed sessions.
Hope this helps.
The "link attributes" of an interactive report allows a developer to specify additional attributes for a column displayed as a link in a interactive report. A tooltip that will be displayed when a user hovers over the link text with the pointer can be specified using the attribute TITLE, for instance: TITLE="open page xyz". This column attribute however is not picked up by the translation process of APEX, so it's not something that can be be found in the translation repository. An easy way to work around the problem if you need to translate the text is as follows: - add a new column to the IR report containing:
APEX_LANG.MESSAGE('MSGOPENXYZ') as MSG1 - Make the column hidden.
- Update the LINK ATTRIBUTES column attribute adding TITLE="#MSG1#"
- Add the message MSGOPENXYZ to the message repository for the main language and for the additional languages.
- Repeat the steps 1-4 if you have more links needing this.
- Seed you application
- Publish the application in the additional languages
- Enjoy the translated tooltip.
Tom:
I'm not sure whether to use a PreparedStatement or just the regular Statement if a query is going to be executed only once before the statement is closed.
I have read through some documentation regarding this topic and still not exactly clear on which way is better for the following scenario.
I have an EJB (stateless session bean) that needs to retrieve a record from the database and return an Object (an Object created using the ResultSet getXXX methods).
If I use a PreparedStatement, connection.prepareStatement(sql) would have to dip to the database to see if a cached statement is available or need to precompile this 'sql'. And another dip to the database to execute this prepared statement. Once I have constructed my Object using the resultset, both the resultset and the prepared statement are closed (connection is returned to the pool). The same steps happen for every request to the EJB (prepareStatement(), execute(), close()). Is there any benefit in using preparedStatement for this scenario ?
In contrast, if I used the regular Statement, my guess is that connection.createStatement() just returns a new object reference (instead of the database dip in the case of the PreparedStatement). My execute will dip to the database. The only problem I see with this approach is that the SQL statement has to be parsed every time before execution (or will this be cached???). Are there any guidelines on when to use PreparedStatement vs. Statement?
Thanks,
Ram.
Hi Tom,
Hope you are doing well!
I have 2 cursor loops: cursor C1, and cursor C2 (parameterised on C1's results).
Below is pseudo code depicting the situation:
<code>declare
  cursor C1 is
    select distinct a, b, c
    from table1;            -- fetches 18K+ records
  cursor c2(p_a, p_b, p_c) is
    select *
    from table1
    where a = p_a
    and   b = p_b
    and   c = p_c;          -- fetches 20K+ records
begin
  loop c1
    -- minor select queries and a few validations...
    loop c2(a, b, c)
      -- actual validations
      if insert_flag = 'Y' then
        insert into table1 ...;
      else
        update table1 ...;
      end if;
    end loop c2;
  end loop c1;
end;</code>
I am unable to use bulk collect and FORALL as I have to insert or update conditionally.
The performance impact is too high.
Question:
How can I achieve the above with better performance? Kindly help.
Best Regards,
Srabanee
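One set-based alternative worth considering: a single MERGE handles the conditional insert-or-update without row-by-row loops. A sketch (the source query and column d stand in for the C1/C2 logic and are hypothetical):

<code>
MERGE INTO table1 tgt
USING (SELECT a, b, c, d FROM staging_table) src
ON (tgt.a = src.a AND tgt.b = src.b AND tgt.c = src.c)
WHEN MATCHED THEN
  UPDATE SET tgt.d = src.d
WHEN NOT MATCHED THEN
  INSERT (a, b, c, d) VALUES (src.a, src.b, src.c, src.d);
</code>

If the validations cannot be expressed in SQL, bulk collecting into a staging collection and driving a MERGE with FORALL is another option.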
I was puzzled when I got an error message allegedly occurring at a line containing an "end loop" statement and it took me a while to figure out that this occurs when either bound of the loop is NULL. In my case both the initial and final bounds are variables and they were supposed to be not null or so I thought...
Here is a code snippet reproducing the error:

begin
  for i in null..100
  loop
    null;
  end loop;
end;
Error report -
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 5
06502. 00000 - "PL/SQL: numeric or value error%s"
*Cause: An arithmetic, numeric, string, conversion, or constraint error
occurred. For example, this error occurs if an attempt is made to
assign the value NULL to a variable declared NOT NULL, or if an
attempt is made to assign an integer larger than 99 to a variable
declared NUMBER(2).
*Action: Change the data, how it is manipulated, or how it is declared so
that values do not violate constraints.
So, if you see this error reported at this unusual location, you know what you are up against.
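A simple defensive pattern is to validate the bounds before entering the loop; a sketch (the variable names are made up):

declare
  v_lo number;         -- imagine these are populated from a query
  v_hi number := 100;
begin
  if v_lo is null or v_hi is null then
    raise_application_error(-20001, 'loop bounds must not be NULL');
  end if;
  for i in v_lo .. v_hi loop
    null;
  end loop;
end;

This turns the obscure ORA-06502 at the "end loop" line into an explicit, searchable error message.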
This video shows how to locally install Tencent Hunyuan3D-1 model for 3D generation.
Code:

conda create -n ai python=3.9 -y && conda activate ai
conda remove cuda-compiler
conda install gcc_linux-64=11.2.0 gxx_linux-64=11.2.0 -y
conda install cuda=11.8 -c nvidia -y
conda install pytorch=2.0.1 torchvision==0.15.2 pytorch-cuda=11.8 -c pytorch -c nvidia -y
conda install -c iopath iopath -y
conda install -c bottler nvidiacub -y
conda install pytorch3d -c pytorch3d -y
conda install anaconda::cmake -y
conda install conda-forge::lit-nlp -y
conda install anaconda::numpy=1.23.5 -y

git clone https://github.com/tencent/Hunyuan3D-1 && cd Hunyuan3D-1
# Remove pytorch3d from the env_install.sh file before running it.
chmod a+x env_install.sh
./env_install.sh

pip install huggingface_hub
huggingface-cli login
mkdir weights
huggingface-cli download tencent/Hunyuan3D-1 --local-dir ./weights
mkdir weights/hunyuanDiT
huggingface-cli download Tencent-Hunyuan/HunyuanDiT-v1.1-Diffusers-Distilled --local-dir ./weights/hunyuanDiT

python3 main.py --text_prompt "a lovely rabbit" --save_folder ./outputs/test/ --max_faces_num 90000 --do_texture_mapping --do_render
python3 main.py --image_prompt "/home/Ubuntu/images/komodo.png" --save_folder ./outputs/test/ --max_faces_num 90000 --do_texture_mapping --do_render
I'd like to switch from Apache Tomcat datasources to UCP.
With Tomcat datasources we use
Datasource.Initsql=ALTER SESSION SET CURRENT_SCHEMA=XXXXXXX
Datasource.tomcat.jdbcInterceptors=ConnectionPoolLogger(MODULE=XXXX, ACTION=YYYY)
Are there ways to do the same with UCP? Something like:
PoolDataSource.setModule('XXXX')
PoolDataSource.setAction('YYYY')
PoolDataSource.initSQL('ALTER SESSION SET CURRENT_SCHEMA=XXXXXXX')
We are using Oracle Database 19c Enterprise Edition Release 19.0.0.0.0
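Whichever initialization hook UCP ends up offering you, the server-side calls that actually set MODULE and ACTION are in DBMS_APPLICATION_INFO; as a sketch, the block below could be issued once per borrowed connection (the 'XXXX'/'YYYY' values are the placeholders from the interceptor config above), alongside the ALTER SESSION SET CURRENT_SCHEMA statement:

BEGIN
  DBMS_APPLICATION_INFO.SET_MODULE(module_name => 'XXXX', action_name => 'YYYY');
END;
/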
Hi Tom! In Pro*C, when working with pointers to dynamic arrays, you can specify the FOR clause to indicate what is the size of your dynamic array, like:
<code>
EXEC SQL BEGIN DECLARE SECTION;
int* ids = (int*)malloc(sizeof(int) * n); // n is some runtime variable
int nrows = n;
EXEC SQL END DECLARE SECTION;
EXEC SQL FOR :nrows insert into some_table (id) values (:ids);
</code>
However, the Pro*C documentation says that you can't use the FOR syntax for SELECT statements, because you will get an error:
<code>
PCC-E-0056: FOR clause not allowed on SELECT statement at ...
</code>
This sentence (and all other sentences stating the same thing in different points of the document) has been in the Pro*C and Oracle Precompiler docs since at least version 8.
However, at least in 19c (and I think in 11 as well, but I'm not fully sure), that's no longer the case and you can use FOR in SELECT statements without problems.
<code>
EXEC SQL BEGIN DECLARE SECTION;
int* output = (int*)malloc(sizeof(int) * n); // n is some runtime variable
int nrows = n;
EXEC SQL END DECLARE SECTION;
EXEC SQL FOR :nrows select id into :output from some_table;
</code>
and it works just fine and does what I intend without issues. However, I wonder: starting from which version was the restriction relaxed?
I have two choices here: assume the FOR clause working in SELECTs is actually a "bug" (it shouldn't work), or that it's a feature and the docs are buggy. I'm inclined toward the latter because there are plenty of instances where the Pro*C docs are outdated:
- The doc claims you need to use CODE=CPP if you want to use C++-style comments (//...). That's false. Pro*C accepts that kind of comment just fine with PARSE=ALL. I guess the claim dates from when Pro*C only cared about C89 and C90 (C++-style comments were only added to the C standard in C99), even though that's no longer the case. At some point they extended the support to C99 but forgot to update the docs.
- In the release notes of version 12, it says they have added a new option called trim_password. However, the option is totally missing from the documented list of options all the way up to 23c (but using the option on the command line "compiles", at least in 19c).
- It claims that host variables of type `char` and `char[n]` use the same default mapping dictated by `CHAR_MAP`. False: host variables of type `char` are always mapped to the external type `VARCHAR2` (discovered first by trying to solve a bug in my code, and then confirmed by looking at the type code assigned to each host variable in the sqlcud0 array of the generated code).
All of this tells me that you have to rely on your own experiments on top of what the docs say, because the docs are, at best, only marginally maintained.
But even if my experiments say otherwise, I would like to have at least "something", "somewhere" (some Oracle post, some A...
Hi Team,
We are doing some testing on heavy loads, but we are experiencing strange behavior.
We have a process that is doing a lot of updates and inserts, primarily on 2 tables. But some of the aggregate queries have a tendency to turn slow (from being very fast) while doing way more logical I/O than seems normal.
During testing we are running on quite homogenous data so we should not see very much difference between each run. I have also tested the scripts on similar data and never gotten slow runs.
All code is run from PL/SQL here. During the test, we are running 5 sessions that each work with one entity, which will have from 2000 to 400000 transactions in the run (always the same number for each entity in each run, although this will not be the case when we come to production).
For 2000, 10000, 20000 and 40000 per entity (10000, 50000, 100000 and 200000 total runs), we have no problems. Most of the scripts for select and update run within seconds (even less than a second).
When we enter a 500000 run (100000 per entity), we start seeing problems. The first 2-3 times the scripts are run, they take seconds, and then the next runs take minutes (for this amount, up to 12-18 minutes). Logical I/O rises from almost 0 to 60000000.
When we enter a 1000000 run (200000 per entity), things get even worse. Then almost all of the runs take more than 20 minutes, and more often than not some of the runs never end properly, as we have a 1-hour timeout.
Here is an example code:
<code>WITH source AS (
SELECT
MAX(lpad(t2.column6, 7, '0') || '0000') AS aggregate_reference,
t1.column1,
t1.column2,
t1.column3,
t1.column4,
t1.column5
FROM
table1 t1
INNER JOIN table2 t2 ON t1.column4 = t2.column4
AND t1.initiatorbankid = t2.initiatorbankid
AND t1.column3 = t2.column3
AND t1.column1 = t2.column1
AND t1.column2 = t2.column2
AND t1.column5 = t2.column5
WHERE
t1.column1 <> 'T'
AND t1.column7= '2544'
AND t1.column4 = 2081
GROUP BY
t1.column3,
t1.column1,
t1.column2,
t1.column4,
t1.column5
)
SELECT
appdata.type_for_reference(pk_column, aggregate_reference)
BULK COLLECT
INTO v_ref
FROM
table1 tx
INNER JOIN source ON tx.column4 = source.column4
AND tx.column3 = source.column3
AND tx.column1 = source.column1
AND tx.column2 = source.column2
AND tx.column5 = source.column5;
</code>
In a 500000 run, table1 has 500000 (100000 pr. entity) records, table2 has 200000 (40000 pr. entity) records. The with clause would return about 50000 records (10000 pr. entity) and the total statement ab...
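When a statement flips from seconds to minutes with a huge jump in logical I/O, the usual suspect is a changed execution plan (for example, nested loops where a hash join used to be). A first diagnostic step, as a sketch (it assumes the query runs with the gather_plan_statistics hint, or statistics_level = ALL, and that you substitute the real sql_id):

<code>
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
         sql_id => '&sql_id',
         format => 'ALLSTATS LAST'));
</code>

Comparing the actual row counts (A-Rows) against the optimizer's estimates (E-Rows) for a fast run versus a slow run should show where the plan goes wrong.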
Hi AskTom Team,
I do not know why my <b>view</b> is not using the <b>/*+ result_cache */</b> hint.
How can I look for the reason?
<code>
select /*+ result_cache */ * from (
WITH pttc as
(
SELECT pttc.pttc_code AS term_code,
pttc.pttc_cts_value AS term_cts_value,
pttc.pttc_create_date AS term_create_date,
pttc.pttc_create_user AS term_create_user,
pttc.pttc_desc AS term_description,
pttc.pttc_id AS term_id,
pttc.pttc_modify_date AS term_modify_date,
pttc.pttc_modify_user AS term_modify_user,
pttc.pttc_pdtc_id AS term_pdtc_id,
pttc.pttc_prftc_id AS term_prftc_id,
pttc.pttc_sort AS term_sort,
pttc.pttc_term_id AS term_term_id,
pttc.pttc_valid_from AS term_valid_from,
pttc.pttc_valid_to AS term_valid_to,
pttc.pttc_wfttc_id AS term_wfttc_id,
proc.proc_create_date AS create_date,
proc.proc_create_user AS create_user,
proc.proc_desc AS description,
proc.proc_id AS id,
lngc.lngc_iso_639_1 AS lngc_code,
lngc.lngc_id,
proc.proc_modify_date AS modify_date,
proc.proc_modify_user AS modify_user,
proc.proc_name AS name,
proc.proc_revision_number AS rev,
proc.proc_sort AS sort,
ccsc.ccsc_code AS state_code,
ccsc.ccsc_id AS state_id,
ccsc.ccsc_name AS state_name,
proc.proc_valid_from AS valid_from,
proc.proc_valid_to AS valid_to,
PROC_EXCHANGE_DATE AS exchange_date,
PROC_version AS version
FROM cat_proc_type_term pttc
INNER JOIN cat_proc_type proc ON proc.proc_pttc_id = pttc.pttc_id
INNER JOIN ctl_language lngc ON proc.proc_lngc_id = lngc.lngc_id
INNER JOIN cat_ctl_cat_state ccsc
ON proc.proc_ccsc_id = ccsc.ccsc_id
AND TRUNC (SYSDATE) BETWEEN pttc.pttc_valid_from
AND NVL (
pttc.pttc_valid_to,
TO_DATE ('31-12-9999',
'dd-mm-yyyy'))
AND TRUNC (SYSDATE) BETWEEN proc.proc_valid_from
AND NVL (
proc.proc_valid_to,
TO_DATE ('31-12-9999',
'dd-mm-yyyy'))
AND ccsc.ccsc_c...
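A few checks that often explain a missing result cache, as a sketch (the object name filter is a placeholder):

<code>
-- Is the result cache enabled and sized?
SELECT name, value FROM v$parameter WHERE name LIKE 'result_cache%';

-- Did the query ever produce a cache object, and is it still valid?
SELECT id, type, status, name
FROM   v$result_cache_objects
WHERE  name LIKE '%cat_proc_type%';

-- Overall cache state
EXEC DBMS_RESULT_CACHE.MEMORY_REPORT;
</code>

Note also that the documented result cache restrictions rule out queries referencing SYSDATE and similar time functions, which may be relevant here given the TRUNC (SYSDATE) predicates in the view.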
Unlocking Oracle Database 23ai's JSON Relational Duality: Simplifying Data Handling for Modern Applications
With the release of Oracle Database 23ai, Oracle has introduced a suite of new features designed to empower DBAs and developers. One of the most revolutionary additions is JSON Relational Duality, which offers unprecedented flexibility by allowing users to access the same data as both relational rows and JSON documents, seamlessly switching between formats without duplication or additional processing. This article explores JSON Relational Duality and demonstrates how it simplifies data management for applications that need to handle both structured and unstructured data.
What Is JSON Relational Duality?

JSON Relational Duality in Oracle Database 23ai allows the same data to be represented as both relational and JSON simultaneously. This feature is particularly valuable for applications where structured relational data needs to be accessed as JSON, and vice versa. For example, a company with a traditional relational schema may want to integrate with a modern application that uses JSON-based APIs. JSON Relational Duality bridges this gap, enabling applications to leverage both formats without complex transformations or performance compromises.
Why JSON Relational Duality Matters

In the past, handling both relational and JSON data often meant duplicating data or creating intricate ETL processes to convert between formats. JSON Relational Duality eliminates these challenges by allowing a single, consistent view of data that works natively with both JSON and relational formats.
- Reduced Complexity: You avoid complex ETL processes when transforming data formats.
- Enhanced Flexibility: Developers can work with the format best suited to their application needs.
- Improved Performance: Access data in the format you need without extra processing, reducing load on the system.
Getting Started with JSON Relational Duality

To demonstrate JSON Relational Duality, consider a DBA who manages employee data in a relational structure but needs to provide a JSON format for API consumption. Suppose we have a table 'employees' with the following schema:
SQL
CREATE TABLE employees (
  employee_id NUMBER PRIMARY KEY,
  first_name  VARCHAR2(50),
  last_name   VARCHAR2(50),
  department  VARCHAR2(50),
  salary      NUMBER
);
Step 1: Enabling JSON Relational Duality
In Oracle Database 23ai, duality is exposed through a JSON-relational duality view, which lets 'employees' be read and written as JSON documents without changing the schema. A minimal duality view over the table might look like this (the view name and JSON key names are illustrative):

SQL
CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW employees_dv AS
  SELECT JSON {'_id'        : e.employee_id,
               'firstName'  : e.first_name,
               'lastName'   : e.last_name,
               'department' : e.department,
               'salary'     : e.salary}
  FROM employees e WITH INSERT UPDATE DELETE;
Step 2: Querying Data in JSON Format

Now the same data can be retrieved in JSON format, either from the duality view or directly with JSON_OBJECT(*):

SQL
SELECT e.data FROM employees_dv e;

SELECT JSON_OBJECT(*) FROM employees WHERE department = 'Sales';
The query above will output each row as a JSON object, making it compatible with any API or application that requires JSON.
Step 3: Using JSON Data with Relational Functions

JSON Relational Duality allows JSON data to be used in SQL operations as if it were relational. This means you can join, filter, and manipulate JSON data using SQL. For example, let's filter employees by salary and project the result as JSON:

SQL
SELECT JSON_OBJECT(*) FROM employees WHERE salary > 70000;
Use Case: Integrating Relational Data with a JSON-Based Application
Imagine you are tasked with integrating an Oracle database with a cloud-based application that consumes JSON via a REST API. JSON Relational Duality simplifies this process. Instead of transforming data into JSON on the fly or storing duplicates, you can directly query relational data in JSON format and feed it into the API, streamlining integration.
Example API Call with JSON Duality Data

Using JSON output from Oracle, you can now make API calls without extra data transformation:
JSON: { "employee_id": 101, "first_name": "John", "last_name": "Doe", "department": "Sales", "salary": 75000 } JSON Relational Duality in Complex Queries Let’s explore a more complex query in which the 'user' needs to get department-wise average salaries, outputting the results in JSON format.
SQL
SELECT JSON_OBJECT(
         'department'     VALUE department,
         'average_salary' VALUE AVG(salary)
       ) AS department_summary
FROM   employees
GROUP  BY department;
This query enables a modern, JSON-based interface to summarize data traditionally stored in relational format, making it accessible to front-end applications that prefer JSON.
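Writes flow in the other direction too. Here is a sketch of updating a relational row through its JSON document (it assumes the employees_dv view defined above; JSON_TRANSFORM and JSON_VALUE are standard functions):

SQL
UPDATE employees_dv e
SET    e.data = JSON_TRANSFORM(e.data, SET '$.salary' = 80000)
WHERE  JSON_VALUE(e.data, '$._id' RETURNING NUMBER) = 101;

The change is applied to the underlying employees row, so relational and JSON consumers keep seeing the same data.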
Benefits of JSON Relational Duality
For DBAs, developers, and system architects, JSON Relational Duality fits the broader trend toward hybrid database management: dual-format data handling and native JSON support inside a relational database make it ideal for applications with mixed data needs. Its key benefits, performance optimization, reduced data duplication, and simplified data management, address the most common pain points of running relational and document workloads side by side.
Summary: Embracing the Duality Advantage in Oracle Database 23ai

Oracle Database 23ai's JSON Relational Duality is a groundbreaking feature, particularly for DBAs who must manage complex data structures efficiently. With JSON Relational Duality, Oracle offers flexibility without sacrificing performance, enabling seamless data integration for applications that demand both structured and unstructured data. For anyone managing databases in hybrid environments, JSON Relational Duality is a powerful tool that reduces workload, enhances performance, and provides a competitive edge in the data-driven landscape.
As an Oracle database administrator, you've likely encountered errors that make your heart skip a beat. One such error is ORA-00240: control file enqueue held for more than 120 seconds. But before you panic, let's break down what this error means and how to address it. This error occurs when the control file enqueue is held for an extended period (over 120 seconds). The control file is a critical component of the Oracle database, managing database structure and integrity. If you see this error occasionally, and your instance remains up and running, it's likely a fleeting glitch. Ignore it and move on. However, if:

- The error occurs frequently
- Your instance hangs or crashes
- Performance is severely impacted
then you need to worry about it. In my experience, ORA-00240 can be triggered by:

- High session counts conflicting with OS ulimits
- Shared pool latch contention (as noted in some MOS documents)
- Bugs in the Oracle software (resolvable with PSUs or one-off patches)
What you should check:

- Check alert logs for frequency and patterns.
- Verify OS ulimits are adequately set.
- Monitor shared pool latch contention using:

SELECT name, gets, misses, sleeps, immediate_gets, immediate_misses
FROM   v$latch
WHERE  name = 'shared pool';

Don't panic over occasional ORA-00240 errors. However, frequent occurrences warrant immediate attention. By understanding the causes and taking proactive steps, you'll minimize downtime and ensure your Oracle database runs smoothly.
Dear Tom,
Hi !
Thanks for the previous answers. They were as perfect as always.
Question: I have got around 50 .csv files (Excel files) containing some data which needs to be imported into the related tables in the database.
I have a script which I am running from the command prompt (Windows NT), which makes use of the sqlldr command. I am creating a .ctl file. For every table I am creating a single .ctl file.
The sample of script is as follows
[---c1.ctl---]
LOAD DATA
INFILE MT_srvc_ctgry_rel.csv
INTO TABLE MT_srvc_ctgry_rel
append
fields terminated by ","
( SRVC_CTGRY_CD,
SRVC_CD,
DEL_F,
DEL_DT DATE 'YYYY-MM-DD',
UPDT_USER_ID,
CRT_DT DATE 'YYYY-MM-DD',
UPDT_DT DATE 'YYYY-MM-DD')
I am making use of following command to execute the script
c:\> SQLLDR user1/user1@dbmatch c1.ctl c1.log
In this script MT_srvc_ctgry_rel is the Table name which is
empty and getting data from MT_srvc_ctgry_rel.csv file.
Now I am creating such a script for every table because the csv file and of course the COLUMNS in every table are different.
Consider I have tables T1, T2, T3 and T4, and files c1.csv, c2.csv, c3.csv and c4.csv containing data.
Can you suggest a method through which the data in all these 4 tables can be inserted in one go from their respective .csv files?
Is there any keyword which can be used to mark all the COLUMNS in the table, instead of writing every column in the script in parentheses? And is it possible to mention more than one .csv file name in the script?
Kindly answer !
Thanks & Regards
OP Lalwani
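One angle worth exploring as an alternative (a sketch; the directory path and datatypes are illustrative): an external table makes each .csv queryable, so the load becomes a plain INSERT ... SELECT, and the LOCATION clause accepts several file names at once. A SQL*Loader control file may likewise contain multiple INFILE clauses, though there is no keyword that expands to "all columns".

CREATE DIRECTORY csv_dir AS 'c:\data';

CREATE TABLE mt_srvc_ctgry_rel_ext (
  srvc_ctgry_cd VARCHAR2(10),
  srvc_cd       VARCHAR2(10),
  del_f         VARCHAR2(1),
  del_dt        VARCHAR2(10),
  updt_user_id  VARCHAR2(30),
  crt_dt        VARCHAR2(10),
  updt_dt       VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY csv_dir
  ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE FIELDS TERMINATED BY ',')
  LOCATION ('MT_srvc_ctgry_rel.csv')   -- several files may be listed here
);

INSERT INTO mt_srvc_ctgry_rel
SELECT srvc_ctgry_cd, srvc_cd, del_f,
       TO_DATE(del_dt, 'YYYY-MM-DD'),
       updt_user_id,
       TO_DATE(crt_dt, 'YYYY-MM-DD'),
       TO_DATE(updt_dt, 'YYYY-MM-DD')
FROM   mt_srvc_ctgry_rel_ext;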
As a database developer, I try to ensure column names in tables are not something like "TYPE", or "OWNER", or "PRIMARY". It seems redundant to a Java developer to name the column that indicates the OWNER of a PRODUCT as PRODUCT_OWNER... after all, in the Java Entity, the property is "owner", and it's obviously the Product Owner.
The problem comes in when the business wants to have multiple Owners, so we have to go find out all the code that references OWNER, but just for this table, and we have multiple tables with OWNER columns.
Or the column is something like TYPE, and the word TYPE is all over the database objects.
Or we want to drop the column, and want to see where it's used (this one is easier, drop the column in a test database, and see what breaks.)
There are 100 different cases where, as a database developer, I need to know where a column is used. I know there are ways you can find certain things, like in SYS.ALL_DEPENDENCIES, or searching other tables where COLUMN_NAME = 'OWNER', or looking in USER_SOURCE, etc. Some are helpful, others just return a lot of noise.
Obviously, the Oracle compiler knows which columns are used where, otherwise, the syntactic compile wouldn't work. Is it possible to find this information via a data dictionary view?
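For PL/SQL code, PL/Scope comes close to this, as a sketch (it requires recompiling the objects with the session setting below; my_pkg is a placeholder, and views or SQL outside PL/SQL are not covered):

ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL, STATEMENTS:ALL';
ALTER PACKAGE my_pkg COMPILE;

SELECT object_name, object_type, usage, line, col
FROM   user_identifiers
WHERE  name = 'OWNER'
AND    type = 'COLUMN'
ORDER  BY object_name, line;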
I have a consistent storage snapshot of an Oracle database. I want to restore the snapshot to a mount host and roll the archivelogs forward to a specific point in time or SCN. I always get an error when trying to open the database on the mount host.
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '+DATA/ORCL/DATAFILE/system.256.1183393031'
What I have done:
Production Host
- Take consistent storage snapshot of online database including DATA,REDO,ARCHIVE
- Make database changes
- Backup archivelogs
Mount Host:
- Restore storage snapshot
- Attach ASM disk groups
- Restore archivelogs taken after storage snapshot was made.
- Mount the database and try to recover it to an SCN, a specific time, or just until cancel
Always got ORA-01113 and ORA-01110. Tried to recover using backup controlfile, same result.
What are the correct steps to roll the archivelogs forward on top of storage snapshots? Thanks!
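For what it's worth, since 12c the RECOVER command has a clause specifically for crash-consistent storage snapshots taken without BEGIN BACKUP; a sketch (the timestamps are placeholders, and SNAPSHOT TIME must be when the snapshot was taken):

STARTUP MOUNT;
RECOVER DATABASE UNTIL TIME '2024-05-01:12:00:00' SNAPSHOT TIME '2024-05-01:10:00:00';
ALTER DATABASE OPEN RESETLOGS;

The SNAPSHOT TIME clause tells Oracle the point from which the snapshot is guaranteed consistent, which may be what the plain recovery attempts reporting ORA-01113 are missing.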
I need to pass an array as a parameter to a stored procedure, like this:
<code>
--##############################################################################################
-- P A C K A G E S P E C
--##############################################################################################
CREATE OR REPLACE PACKAGE TEST_PKG"
IS
type chef_type is RECORD
(
p_numero_chef varchar2(3),
p_type_infraction_chef varchar2(1),
p_code_infraction_chef varchar2(4),
p_modele_chef varchar2(7),
p_nature_chef varchar2(6),
p_reglement_chef varchar2(20),
p_article_chef varchar2(20),
p_type_date_chef varchar2(1),
p_date_infraction_chef varchar2(8),
p_date_infraction_fin_chef varchar2(8)
);
type chef_array is varray(100) of chef_type;
??????
--##############################################################################################
-- PROC
--##############################################################################################
PROCEDURE TEST_PROC (
p_var...
p_nombre_chef in number
p_chef????????????in chef_array,
p_var...
);
--##############################################################################################
-- P A C K A G E B O D Y
--##############################################################################################
CREATE OR REPLACE PACKAGE BODY TEST_PKG"
IS
--##############################################################################################
-- (2) S I C O U R M E T A J O U R S I C M A
--##############################################################################################
PROCEDURE TEST_PROC (
????????????p_var...
????????????p_nombre_chef in number,
????????????p_chef IN chef_array,
????????????p_var...
??????)
??????IS
??????v_var...
??????BEGIN
??????FOR i IN 1 .. p_nombre_chef
LOOP
????????????v_numero_chef := p_chef.p_numero_chef(i);
????????????v_type_infraction_chef := p_chef.p_type_infraction_chef(i);
????????????...
????????????insert ...
??????END LOOP;
??????END SICOUR_MAJ_SICMA;???
error message
[Error] PLS-00302 (382: 30): PLS-00302: component 'P_NUMERO_CHEF' must be declared
[Error] PLS-00302 (387: 39): PLS-00302: component 'P_TYPE_INFRACTION_CHEF' must be declared
</code>
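The PLS-00302 errors come from how the collection is dereferenced: the varray element must be indexed first, and the record field referenced afterwards. A sketch of the corrected loop body:

<code>
FOR i IN 1 .. p_nombre_chef
LOOP
  -- index the varray element, then reference the record field
  v_numero_chef          := p_chef(i).p_numero_chef;
  v_type_infraction_chef := p_chef(i).p_type_infraction_chef;
  -- ...
END LOOP;
</code>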
Greetings,
Mongo JSON/BSON provides what seem like two cool improvements:
1.) Denormalization through embedding documents: https://www.mongodb.com/docs/manual/tutorial/model-embedded-one-to-many-relationships-between-documents/
2.) Design patterns to optimize access to data: https://www.mongodb.com/blog/post/building-with-patterns-a-summary
I would think that with OSON, we can denormalize/embed documents by simply creating multiple JSON data types within a single table. Is there a limit to the number of JSON data types per table? Are there any potential performance problems to watch out for?
Regarding Mongo design patterns to optimize access to data, does Oracle OSON have its own design patterns? Finally, since design patterns seem to optimize data access, why hasn't Oracle RDBMS implemented its own design patterns?
Thanks,
John
How do I create a pluggable database in Oracle Linux?
<code>
Enter user-name: sys as sysdba
Enter password:
Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.5.0.24.07
SQL> alter session set container=cdb$root;
Session altered.
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
SQL> CREATE PLUGGABLE DATABASE pdb1
ADMIN USER pdb1admin IDENTIFIED BY welcome
ROLES = (dba)
DEFAULT TABLESPACE users
DATAFILE '/opt/oracle/oradata/FREE/pdb1users01.dbf' SIZE 250M AUTOEXTEND ON
FILE_NAME_CONVERT = ('/opt/oracle/oradata/FREE/pdbseed/',
'/opt/oracle/oradata/FREE/pdb1/');
CREATE PLUGGABLE DATABASE pdb1
*
ERROR at line 1:
ORA-01017: invalid credential or not authorized; logon denied
Help: https://docs.oracle.com/error-help/db/ora-01017/
SQL> show user
USER is "SYS"
SQL> select * from session_privs;
....
PRIVILEGE
----------------------------------------
EXECUTE DYNAMIC MLE
USE ANY JOB RESOURCE
LOGMINING
CREATE ANY CREDENTIAL
CREATE CREDENTIAL
ALTER LOCKDOWN PROFILE
DROP LOCKDOWN PROFILE
CREATE LOCKDOWN PROFILE
SET CONTAINER
CREATE PLUGGABLE DATABASE
INHERIT ANY REMOTE PRIVILEGES
....
</code>
Please help me to solve this problem
Thanks :)