Feed aggregator

How Can a Data Breach of an Oracle Database be Managed and Analysed?

Pete Finnigan - Wed, 2024-11-06 17:46
No one wants to be hacked, to have their data stolen or leaked, or to have their database breached, but it can happen. If you become the latest victim of a data hack and your Oracle database is compromised then what do....[Read More]

Posted by Pete On 06/11/24 At 11:23 AM

Categories: Security Blogs

Unlocking Oracle 23ai's JSON Relational Duality: Simplifying Data Handling for Modern Applications

Learn DB Concepts with me... - Wed, 2024-11-06 09:15



With the release of Oracle Database 23ai, Oracle has introduced a suite of new features designed to empower DBAs and developers. One of the most significant additions is JSON Relational Duality, which offers new flexibility in handling data by allowing users to access the same data as both relational rows and JSON documents, switching between formats without duplication or additional processing.

This article explores JSON Relational Duality and demonstrates how it simplifies data management for applications that need to handle both structured and unstructured data. 


What Is JSON Relational Duality?

JSON Relational Duality in Oracle 23ai allows the same data to be represented as both relational and JSON simultaneously. This feature is particularly valuable for applications where structured relational data needs to be accessed as JSON documents, and vice versa.

For example, a company with a traditional relational structure may want to integrate with a modern application that uses JSON-based APIs. JSON Relational Duality bridges this gap, enabling applications to leverage both formats without complex transformations or performance compromises.


Why JSON Relational Duality Matters

In the past, handling both relational and JSON data often meant duplicating data or creating intricate ETL processes to convert between formats. JSON Relational Duality eliminates these challenges by allowing a single, consistent view of data that works natively with both JSON and relational formats.


- Reduced Complexity: You avoid complex ETL processes when transforming data formats.

- Enhanced Flexibility: Developers can work with the format best suited to their application needs.

- Improved Performance: Access data in the format you need without extra processing, reducing load on the system.


Getting Started with JSON Relational Duality

To demonstrate JSON Relational Duality, consider a DBA who manages employee data in a relational structure but needs to provide a JSON format for API consumption.

Suppose we have a table 'employees' with the following schema:


SQL

CREATE TABLE employees (
    employee_id NUMBER PRIMARY KEY,
    first_name  VARCHAR2(50),
    last_name   VARCHAR2(50),
    department  VARCHAR2(50),
    salary      NUMBER
);



Step 1: Enabling JSON Relational Duality


In Oracle 23ai, duality is enabled by creating a JSON-relational duality view over the table rather than by altering the table itself. The view maps relational columns to JSON fields (the '_id' field must map to the primary key) and declares which DML operations the view allows:


SQL

CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW employees_dv AS
SELECT JSON {
    '_id'        : e.employee_id,
    'firstName'  : e.first_name,
    'lastName'   : e.last_name,
    'department' : e.department,
    'salary'     : e.salary
}
FROM employees e WITH INSERT UPDATE DELETE;

Independently of the view, JSON generation functions such as JSON_OBJECT can also render individual rows of the base table as JSON, which the next steps use.


Step 2: Querying Data in JSON Format

Now the same data can be retrieved in JSON format using a simple query:

SQL

SELECT JSON_OBJECT(*)
FROM employees
WHERE department = 'Sales';


The query above will output each row as a JSON object, making it compatible with any API or application that requires JSON.
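For illustration, an application consuming that output might parse a returned row like this (a Python sketch; the sample document is hypothetical, and the uppercase keys reflect that JSON_OBJECT(*) derives field names from the column names):

```python
import json

# Hypothetical row string as returned by SELECT JSON_OBJECT(*) FROM employees ...
# JSON_OBJECT(*) derives the keys from the column names, so they are
# uppercase unless the columns were created with quoted identifiers.
row = ('{"EMPLOYEE_ID":101,"FIRST_NAME":"John","LAST_NAME":"Doe",'
       '"DEPARTMENT":"Sales","SALARY":75000}')

doc = json.loads(row)
full_name = f'{doc["FIRST_NAME"]} {doc["LAST_NAME"]}'
print(full_name, doc["SALARY"])  # John Doe 75000
```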


Step 3: Using JSON Data with Relational Functions

JSON Relational Duality allows JSON data to be used in SQL operations as if it were relational. This means you can join, filter, and manipulate JSON data using SQL.

For example, let’s filter employees by salary and project it as JSON:

SQL

SELECT JSON_OBJECT(*)
FROM employees
WHERE salary > 70000;



Use Case: Integrating Relational Data with a JSON-Based Application


Imagine a DBA tasked with integrating an Oracle database with a cloud-based application that consumes JSON via a REST API. JSON Relational Duality simplifies this process. Instead of transforming data into JSON on the fly or storing duplicates, they can directly query relational data in JSON format and feed it to the API, streamlining integration.
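As a sketch of that integration (the payload shape and the helper function are hypothetical; fetching the rows from Oracle, e.g. with a database driver, is assumed to happen elsewhere):

```python
import json

def build_payload(json_rows):
    """Bundle JSON documents (as returned by SELECT JSON_OBJECT(*) ...)
    into a single request body for a hypothetical REST endpoint."""
    return json.dumps({"employees": [json.loads(r) for r in json_rows]})

# Rows as they might come back from the database driver
rows = ['{"employee_id": 101, "first_name": "John", "department": "Sales"}']
payload = build_payload(rows)
# payload can now be POSTed to the cloud application's REST API as-is
print(payload)
```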


Example API Call with JSON Duality Data

Using JSON output from Oracle, API calls can now be made without extra data transformation:


JSON:

{
    "employee_id": 101,
    "first_name": "John",
    "last_name": "Doe",
    "department": "Sales",
    "salary": 75000
}

 

JSON Relational Duality in Complex Queries

Let’s explore a more complex query that computes department-wise average salaries, outputting the results in JSON format.


SQL

SELECT JSON_OBJECT(
    'department' VALUE department,
    'average_salary' VALUE AVG(salary)
) AS department_summary
FROM employees
GROUP BY department;



This query enables a modern, JSON-based interface to summarize data traditionally stored in relational format, making it accessible to front-end applications that prefer JSON.
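To make the aggregation concrete, here is the same department-wise average computed in plain Python over hypothetical rows; each resulting string corresponds to one JSON object emitted by the query above:

```python
import json
from collections import defaultdict

# Hypothetical rows mirroring the employees table
employees = [
    {"department": "Sales", "salary": 70000},
    {"department": "Sales", "salary": 80000},
    {"department": "IT",    "salary": 90000},
]

# Equivalent of GROUP BY department with AVG(salary)
by_dept = defaultdict(list)
for e in employees:
    by_dept[e["department"]].append(e["salary"])

summaries = [
    json.dumps({"department": d, "average_salary": sum(s) / len(s)})
    for d, s in by_dept.items()
]
for s in summaries:
    print(s)
```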


Key Benefits of JSON Relational Duality


For DBAs, developers, and system architects, JSON Relational Duality in Oracle 23ai fits the trend toward hybrid database management: dual-format data handling and native JSON support inside a relational database make it well suited to applications with mixed data needs. Reduced data duplication, simplified data management, and access to data in the format each consumer expects are the headline benefits, whether the consumer is a reporting tool that wants rows or a front end that wants documents.


Summary: Embracing the Duality Advantage in Oracle 23ai

Oracle 23ai’s JSON Relational Duality is a groundbreaking feature, particularly for DBAs who must manage complex data structures efficiently. With JSON Relational Duality, Oracle offers flexibility without sacrificing performance, enabling seamless data integration for applications that demand both structured and unstructured data.

For anyone managing databases in hybrid environments, JSON Relational Duality is a powerful tool that reduces workload, enhances performance, and provides a competitive edge in the data-driven landscape.



Categories: DBA Blogs

Structured Output Example with Sparrow UI Shell

Andrejus Baranovski - Mon, 2024-11-04 00:52
Structured output is all you need. I deployed a Sparrow demo UI with Gradio to demonstrate the output Sparrow can produce by running a JSON schema query. You can see examples for the Bonds table, Lab results, and Bank statement. 

 

How to import data from more than one .csv (Excel) file into the related tables

Tom Kyte - Fri, 2024-11-01 12:06
Dear Tom, Hi! Thanks for the previous answers. They were as perfect as always. Question: I have around 50 .csv files (Excel files) containing data which needs to be imported into the related tables in the database. I have a script which I run from the command prompt (Windows NT), which makes use of the sqlldr command. For every table I am creating a single .ctl file. A sample script is as follows:

[---c1.ctl---]
LOAD DATA
INFILE MT_srvc_ctgry_rel.csv
INTO TABLE MT_srvc_ctgry_rel
append
fields terminated by ","
(
SRVC_CTGRY_CD,
SRVC_CD,
DEL_F,
DEL_DT DATE 'YYYY-MM-DD',
UPDT_USER_ID,
CRT_DT DATE 'YYYY-MM-DD',
UPDT_DT DATE 'YYYY-MM-DD'
)

I am making use of the following command to execute the script:

c:\> SQLLDR user1/user1@dbmatch c1.ctl c1.log

In this script MT_srvc_ctgry_rel is the table name, which is empty and gets its data from the MT_srvc_ctgry_rel.csv file. I am creating such a script for every table because the csv file and of course the COLUMNS in every table are different. Consider I have tables T1, T2, T3 and T4 and files c1.csv, c2.csv, c3.csv, c4.csv containing data. Can you suggest a method through which the data in all these 4 tables can be inserted in one go through their respective .csv files? Is there any keyword which can be used to mark all the COLUMNS in the table instead of writing every column in the script in the parentheses? And is it possible to mention more than one (.csv) file name in the script? Kindly answer! Thanks & Regards, OP Lalwani
Categories: DBA Blogs

Can you find where a specific column from a specific table is used in Oracle?

Tom Kyte - Fri, 2024-11-01 12:06
As a database developer, I try to ensure column names in tables are not named something like "TYPE", or "OWNER", or "PRIMARY". It seems redundant to a Java developer to add a column to a table that indicates the OWNER of a PRODUCT to a column named PRODUCT_OWNER... after all, in the Java Entity, the property is "owner", and it's obviously the Product Owner. The problem comes in when the business wants to have multiple Owners, so we have to go find out all the code that references OWNER, but just for this table, and we have multiple tables with OWNER columns. Or the column is something like TYPE, and the word TYPE is all over the database objects. Or we want to drop the column, and want to see where it's used (this one is easier, drop the column in a test database, and see what breaks.) There are 100 different cases where, as a database developer, I need to know where a column is used. I know there are ways you can find certain things, like in SYS.ALL_DEPENDENCIES, or searching other tables where COLUMN_NAME = 'OWNER', or looking in USER_SOURCE, etc. Some are helpful, others just return a lot of noise. Obviously, the Oracle compiler knows which columns are used where, otherwise, the syntactic compile wouldn't work. Is it possible to find this information via a data dictionary view?
Categories: DBA Blogs

recover using storage snapshot

Tom Kyte - Fri, 2024-11-01 12:06
I have a consistent storage snapshot for an Oracle database. I want to restore the snapshot to a mount host and roll the archivelogs forward to a specific point in time or SCN. I always get an error when trying to open the database on the mount host:

SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '+DATA/ORCL/DATAFILE/system.256.1183393031'

What I have done:

Production Host:
- Take a consistent storage snapshot of the online database including DATA, REDO, ARCHIVE
- Make database changes
- Backup archivelogs

Mount Host:
- Restore storage snapshot
- Attach ASM disk groups
- Restore archivelogs taken after the storage snapshot was made
- Mount database, try to recover the database to an SCN or specific time, or just until cancel

I always get ORA-01113 and ORA-01110. I tried to recover using a backup controlfile, with the same result. What are the correct steps to roll the archivelogs on top of storage snapshots? Thanks!
Categories: DBA Blogs

Table array in parameter

Tom Kyte - Fri, 2024-11-01 12:06
I need to pass an array as a parameter to a stored procedure, like this: <code>
--##############################################################################################
-- P A C K A G E   S P E C
--##############################################################################################
CREATE OR REPLACE PACKAGE TEST_PKG IS

type chef_type is RECORD (
    p_numero_chef varchar2(3),
    p_type_infraction_chef varchar2(1),
    p_code_infraction_chef varchar2(4),
    p_modele_chef varchar2(7),
    p_nature_chef varchar2(6),
    p_reglement_chef varchar2(20),
    p_article_chef varchar2(20),
    p_type_date_chef varchar2(1),
    p_date_infraction_chef varchar2(8),
    p_date_infraction_fin_chef varchar2(8)
);

type chef_array is varray(100) of chef_type;

--##############################################################################################
-- PROC
--##############################################################################################
PROCEDURE TEST_PROC (
    p_var...
    p_nombre_chef in number,
    p_chef in chef_array,
    p_var...
);

--##############################################################################################
-- P A C K A G E   B O D Y
--##############################################################################################
CREATE OR REPLACE PACKAGE BODY TEST_PKG IS

--##############################################################################################
-- (2) S I C O U R   M E T   A   J O U R   S I C M A
--##############################################################################################
PROCEDURE TEST_PROC (
    p_var...
    p_nombre_chef in number,
    p_chef IN chef_array,
    p_var...
)
IS
    v_var...
BEGIN
    FOR i IN 1 .. p_nombre_chef LOOP
        v_numero_chef := p_chef.p_numero_chef(i);
        v_type_infraction_chef := p_chef.p_type_infraction_chef(i);
        ...
        insert ...
    END LOOP;
END SICOUR_MAJ_SICMA;

The error messages:

[Error] PLS-00302 (382: 30): PLS-00302: The component 'P_NUMERO_CHEF' must be declared
[Error] PLS-00302 (387: 39): PLS-00302: The component 'P_TYPE_INFRACTION_CHEF' must be declared </code>
Categories: DBA Blogs

Oracle OSON: Denormalization and Design Patterns?

Tom Kyte - Fri, 2024-11-01 12:06
Greetings, Mongo JSON/BSON provides what seem like two cool improvements: 1.) Denormalization through embedding documents: https://www.mongodb.com/docs/manual/tutorial/model-embedded-one-to-many-relationships-between-documents/ 2.) Design patterns to optimize access to data: https://www.mongodb.com/blog/post/building-with-patterns-a-summary I would think that with OSON, we can denormalize/embed documents by simply creating multiple JSON data types within a single table. Is there a limit to the number of JSON data types per table? Are there any potential performance problems to watch out for? Regarding Mongo design patterns to optimize access to data, does Oracle OSON have their own design patterns? Finally, design patterns seem like they optimize data access, why hasn't Oracle RDBMS implemented their own design patterns? Thanks, John
Categories: DBA Blogs

Create Pluggable Database

Tom Kyte - Fri, 2024-11-01 12:06
How do I create a pluggable database in Oracle Linux? <code>
Enter user-name: sys as sysdba
Enter password:
Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.5.0.24.07

SQL> alter session set container=cdb$root;

Session altered.

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> CREATE PLUGGABLE DATABASE pdb1
  ADMIN USER pdb1admin IDENTIFIED BY welcome
  ROLES = (dba)
  DEFAULT TABLESPACE users
  DATAFILE '/opt/oracle/oradata/FREE/pdb1users01.dbf' SIZE 250M AUTOEXTEND ON
  FILE_NAME_CONVERT = ('/opt/oracle/oradata/FREE/pdbseed/', '/opt/oracle/oradata/FREE/pdb1/');
CREATE PLUGGABLE DATABASE pdb1
*
ERROR at line 1:
ORA-01017: invalid credential or not authorized; logon denied
Help: https://docs.oracle.com/error-help/db/ora-01017/

SQL> show user
USER is "SYS"

SQL> select * from session_privs;
....
PRIVILEGE
----------------------------------------
EXECUTE DYNAMIC MLE
USE ANY JOB RESOURCE
LOGMINING
CREATE ANY CREDENTIAL
CREATE CREDENTIAL
ALTER LOCKDOWN PROFILE
DROP LOCKDOWN PROFILE
CREATE LOCKDOWN PROFILE
SET CONTAINER
CREATE PLUGGABLE DATABASE
INHERIT ANY REMOTE PRIVILEGES
....
</code> Please help me to solve this problem. Thanks :)
Categories: DBA Blogs

Oracle JSON versus Mongo JSON

Tom Kyte - Thu, 2024-10-31 17:46
Greetings, Has there been any performance comparison between Oracle 23 and Mongo 7 on saving and accessing JSON data? If yes, do you have a link to that test? Thanks, John
Categories: DBA Blogs

High Water Mark

Tom Kyte - Thu, 2024-10-31 17:46
Hello Tom, I have gone through the Oracle documentation and some of your articles and Q&As about the High Water Mark. I am kind of confused with the HWM. According to my knowledge, "all Oracle segments have an upper boundary containing the data within the segment. This upper boundary is called the 'high water mark' or HWM". That means a table cannot contain data beyond the HWM (if that is not correct, please correct me). If this is correct, and we determine the high water mark with the help of this query:

SELECT blocks, empty_blocks, num_rows FROM user_tables WHERE table_name = <tablename>;

where:
1. BLOCKS represents the number of blocks 'ever' used by the segment.
2. EMPTY_BLOCKS represents only the number of blocks above the 'HWM'.

Now, if according to the HWM definition we cannot go beyond the HWM, then:

Question: What do EMPTY_BLOCKS represent?
Question: How do I determine the HWM? Please give me an example.
Question: For which objects is an HWM possible?
Question: How do I determine the real space used by a table (below the HWM)? I was using:

SELECT COUNT(DISTINCT SUBSTR(rowid,15,4)||SUBSTR(rowid,1,8)) "Used" FROM schema.table;
or
SELECT COUNT (DISTINCT DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)||DBMS_ROWID.ROWID_RELATIVE_FNO(rowid)) "Used" FROM schema.table;
or
SELECT COUNT (DISTINCT SUBSTR(rowid,1,15)) "Used" FROM schema.table;

but all three queries are giving different results. Could you please also shed some light on this. THX, Monica
Categories: DBA Blogs

Oracle's JSON implementation compared to Mongo

Tom Kyte - Thu, 2024-10-31 17:46
Greetings, I recently heard that Oracle 23 will support JSON! I am excited about this since I have been taken away from supporting Oracle for the past three years to work on Mongo. As I have been working with Mongo, I've realized a few big weaknesses over RDBMS design. I wonder if Oracle's JSON implementation avoids these weaknesses:

1.) Storage overhead from repeating field names in each document.
a. Let's say I have a collection that will contain 1000 fields. In this collection we will have 1 billion documents. Each document will be required to add a value for each of the 1000 fields. I calculated the number of total characters used for the 1000 field names to be 10,000 characters. Does Oracle save the field names for the 1000 fields in each document? That will result in 10,000 characters times a billion documents = 10 trillion characters stored! In relational database tables, only 10,000 characters would be stored for the table since they only store the field names in the table definition.
b. Do you know if Oracle's JSON implementation won't store the repeating field names in each document of the same collection? There are cases where we don't need the "flexible schema" option. We want all documents to have the same fields and set data types.

2.) Mongo has a two-replicaset requirement for each Mongo primary server instance. Is Oracle going to require the same, which is to have two standby database servers for each primary server?

3.) Mongo highly recommends that we have enough memory to store the whole index and collection working set in memory. The weakness here is that Mongo licensing is based on memory, and Mongo's decision to store the field names per document significantly bloats memory with useless information. Will Oracle continue to license based on CPU for JSON implementations?

Thanks, John
Categories: DBA Blogs
