DBA Blogs
Unlocking Oracle Database 23ai's JSON Relational Duality: Simplifying Data Handling for Modern Applications
With the release of Oracle Database 23ai, Oracle has introduced a suite of new features designed to help DBAs and developers. One of the most significant additions is JSON Relational Duality, which lets the same data be accessed as both relational rows and JSON documents, switching between the two formats without duplication or extra processing. This article explains JSON Relational Duality and shows how it simplifies data management for applications that must handle both structured and document-style data.
What Is JSON Relational Duality?
JSON Relational Duality in Oracle Database 23ai allows the same data to be presented both relationally and as JSON at the same time, through JSON-relational duality views defined over ordinary tables. This is particularly valuable when structured relational data needs to be consumed as JSON documents, and vice versa. For example, a company with a traditional relational schema may want to integrate with a modern application that uses JSON-based APIs. JSON Relational Duality bridges this gap, letting applications use both formats without complex transformations or a performance penalty.
Why JSON Relational Duality Matters
In the past, handling both relational and JSON data often meant duplicating data or building ETL processes to convert between formats. JSON Relational Duality removes those steps by providing a single, consistent view of the data that works natively with both JSON and relational access.
- Reduced Complexity: you avoid ETL processes whose only job is to convert between data formats.
- Enhanced Flexibility: developers can work with the format best suited to their application needs.
- Improved Performance: data is served in the format you need without extra processing, reducing load on the system.
Getting Started with JSON Relational Duality
To demonstrate JSON Relational Duality, let's take a DBA who manages employee data in a relational structure but needs to provide a JSON format for API consumption. Suppose we have a table employees with the following schema:
SQL
CREATE TABLE employees (
  employee_id NUMBER PRIMARY KEY,
  first_name  VARCHAR2(50),
  last_name   VARCHAR2(50),
  department  VARCHAR2(50),
  salary      NUMBER
);
Step 1: Enabling JSON Relational Duality
With Oracle Database 23ai, duality is not switched on with a table-level clause; there is no 'DUPLICATE JSON' or ALTER TABLE ... ENABLE JSON RELATIONAL DUALITY syntax. Instead, you create a JSON-relational duality view over employees. The table's schema is unchanged, and the duality view exposes the same rows as updatable JSON documents.
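A minimal sketch of such a duality view, following the documented CREATE JSON RELATIONAL DUALITY VIEW syntax; the view name employees_dv and the JSON field names are illustrative choices, not anything mandated by the feature:
SQL
CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW employees_dv AS
SELECT JSON {
         '_id'        : e.employee_id,   -- document identifier mapped to the primary key
         'firstName'  : e.first_name,
         'lastName'   : e.last_name,
         'department' : e.department,
         'salary'     : e.salary
       }
FROM employees e WITH INSERT UPDATE DELETE;
Querying SELECT data FROM employees_dv then returns each employee as a JSON document, and inserts or updates written through the view flow back into the relational rows.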
Step 2: Querying Data in JSON Format
Now the same data can be retrieved in JSON format using a simple query:
SQL
SELECT JSON_OBJECT(*)
FROM   employees
WHERE  department = 'Sales';
The query above will output each row as a JSON object, making it compatible with any API or application that requires JSON.
Step 3: Using JSON Data with Relational Functions
JSON Relational Duality lets the same data take part in SQL operations as if it were purely relational, so you can join, filter, and manipulate it with ordinary SQL and still project the result as JSON. For example, let's filter employees by salary and project the rows as JSON:
SQL
SELECT JSON_OBJECT(*)
FROM   employees
WHERE  salary > 70000;
Use Case: Integrating Relational Data with a JSON-Based Application
Imagine being tasked with integrating an Oracle database with a cloud-based application that consumes JSON via a REST API. JSON Relational Duality simplifies this process: instead of transforming data into JSON on the fly or storing duplicate copies, you can query the relational data directly in JSON format and feed it into the API, streamlining the integration.
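Continuing with the employees_dv duality view sketched above (an illustrative name, not something defined by the original schema), the JSON documents destined for the API could be pulled with a query such as:
SQL
SELECT data
FROM   employees_dv
WHERE  json_value(data, '$.department') = 'Sales';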
Example API Call with JSON Duality Data
Using the JSON output from Oracle, the API call can now be made without any extra data transformation:
JSON: { "employee_id": 101, "first_name": "John", "last_name": "Doe", "department": "Sales", "salary": 75000 } JSON Relational Duality in Complex Queries Let’s explore a more complex query in which the 'user' needs to get department-wise average salaries, outputting the results in JSON format.
SQL
SELECT JSON_OBJECT(
         'department'     VALUE department,
         'average_salary' VALUE AVG(salary)
       ) AS department_summary
FROM   employees
GROUP  BY department;
This query enables a modern, JSON-based interface to summarize data traditionally stored in relational format, making it accessible to front-end applications that prefer JSON.
Key Benefits of JSON Relational Duality
For DBAs, developers, and system architects, JSON Relational Duality fits squarely into the trend toward hybrid database management and native JSON support inside the relational database. It gives applications with mixed data needs dual-format access to a single copy of the data, and its main benefits of better performance, reduced data duplication, and simplified data management address exactly the pain points teams hit when combining document and relational workloads.
Summary: Embracing the Duality Advantage in Oracle Database 23ai
JSON Relational Duality is a significant feature for DBAs who must manage complex data structures efficiently. It offers flexibility without sacrificing performance, enabling seamless data integration for applications that demand both relational and document-style access. For anyone managing databases in hybrid environments, JSON Relational Duality is a powerful tool that reduces workload, enhances performance, and provides a competitive edge in a data-driven landscape.
Dear Tom,
Hi !
Thanks for the previous answers. They were as perfect as always.
Question: I have got around 50 .csv files (exported from Excel) containing
some data which needs to be imported into the related tables
in the database.
I have a script which I am running from the command prompt
(Windows NT), which makes use of the sqlldr command. I am creating
a .ctl file; for every table I am creating a single .ctl file.
A sample of the script is as follows:
[---c1.ctl---]
LOAD DATA
INFILE MT_srvc_ctgry_rel.csv
INTO TABLE MT_srvc_ctgry_rel
append
fields terminated by ","
( SRVC_CTGRY_CD,
SRVC_CD,
DEL_F,
DEL_DT DATE 'YYYY-MM-DD',
UPDT_USER_ID,
CRT_DT DATE 'YYYY-MM-DD',
UPDT_DT DATE 'YYYY-MM-DD')
I am making use of the following command to execute the script:
c:\> SQLLDR user1/user1@dbmatch c1.ctl c1.log
In this script MT_srvc_ctgry_rel is the table name, which is
empty and gets its data from the MT_srvc_ctgry_rel.csv file.
Right now I am creating such a script for every table because
the csv file and of course the COLUMNS in every table are different.
Consider I have tables T1, T2, T3 and T4, and files c1.csv, c2.csv, c3.csv and c4.csv containing data.
Can you suggest a method through which the data for all
these 4 tables can be inserted in one go from their
respective .csv files?
Is there any keyword which can be used to mean all the
COLUMNS in the table, instead of writing every column in
the script inside the parentheses? And is it possible to
mention more than one (.csv) file name in the script?
Kindly answer !
Thanks & Regards
OP Lalwani
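For context, one commonly suggested alternative for this kind of bulk CSV load is an external table per CSV file, loaded with a set-based INSERT ... SELECT; several of those INSERTs can then run "in one go" from a single SQL script. The sketch below is only an illustration and its specifics are assumptions (the directory object data_dir, the reuse of the column names from the control file above, and the datatypes are guesses, not taken from the question):
<code>-- hypothetical sketch: one external table per CSV, then a single INSERT per target table
CREATE TABLE mt_srvc_ctgry_rel_ext (
  srvc_ctgry_cd VARCHAR2(10),
  srvc_cd       VARCHAR2(10),
  del_f         VARCHAR2(1),
  del_dt        VARCHAR2(10),     -- kept as text, converted during the load
  updt_user_id  VARCHAR2(30),
  crt_dt        VARCHAR2(10),
  updt_dt       VARCHAR2(10)
)
ORGANIZATION EXTERNAL (
  TYPE oracle_loader
  DEFAULT DIRECTORY data_dir      -- assumed directory object pointing at the CSV folder
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    MISSING FIELD VALUES ARE NULL
  )
  LOCATION ('MT_srvc_ctgry_rel.csv')
);

INSERT /*+ APPEND */ INTO mt_srvc_ctgry_rel
SELECT srvc_ctgry_cd, srvc_cd, del_f,
       TO_DATE(del_dt, 'YYYY-MM-DD'),
       updt_user_id,
       TO_DATE(crt_dt, 'YYYY-MM-DD'),
       TO_DATE(updt_dt, 'YYYY-MM-DD')
FROM   mt_srvc_ctgry_rel_ext;
COMMIT;</code>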
As a database developer, I try to ensure columns in tables are not given names like "TYPE", or "OWNER", or "PRIMARY". To a Java developer it seems redundant to name the column that indicates the OWNER of a PRODUCT something like PRODUCT_OWNER... after all, in the Java Entity the property is "owner", and it's obviously the Product Owner.
The problem comes in when the business wants to have multiple Owners, so we have to go find all the code that references OWNER, but just for this table, and we have multiple tables with OWNER columns.
Or the column is something like TYPE, and the word TYPE is all over the database objects.
Or we want to drop the column, and want to see where it's used (this one is easier, drop the column in a test database, and see what breaks.)
There are 100 different cases where, as a database developer, I need to know where a column is used. I know there are ways you can find certain things, like in SYS.ALL_DEPENDENCIES, or searching other tables where COLUMN_NAME = 'OWNER', or looking in USER_SOURCE, etc. Some are helpful, others just return a lot of noise.
Obviously, the Oracle compiler knows which columns are used where, otherwise, the syntactic compile wouldn't work. Is it possible to find this information via a data dictionary view?
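Not an authoritative answer, but one dictionary-based avenue worth noting here: PL/Scope (12.2 onward) records the columns referenced by static SQL inside PL/SQL units once those units are compiled with identifier and statement collection switched on. A minimal sketch, assuming the code of interest can be recompiled under these settings:
<code>-- collect identifier and statement metadata for subsequently compiled PL/SQL
ALTER SESSION SET plscope_settings = 'IDENTIFIERS:ALL, STATEMENTS:ALL';

-- recompile the packages/procedures of interest, then ask where a given column name is used
SELECT owner, object_name, object_type, usage, line, col
FROM   all_identifiers
WHERE  type = 'COLUMN'
AND    name = 'OWNER';   -- the column being hunted; 'OWNER' is just the example from above</code>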
I have a consistent storage snapshot of an Oracle database. I want to restore the snapshot to a mount host and roll the archive logs forward to a specific point in time or SCN. I always get an error when trying to open the database on the mount host.
SQL> alter database open resetlogs;
alter database open resetlogs
*
ERROR at line 1:
ORA-01113: file 1 needs media recovery
ORA-01110: data file 1: '+DATA/ORCL/DATAFILE/system.256.1183393031'
What I have done:
Production Host
- Take consistent storage snapshot of online database including DATA,REDO,ARCHIVE
- Make database changes
- Backup archivelogs
Mount Host:
- Restore storage snapshot
- Attach ASM disk groups
- Restore archivelogs taken after storage snapshot was made.
- Mount the database and try to recover it to an SCN, a specific time, or just until cancel
Always got ORA-01113 and ORA-01110. Tried to recover using backup controlfile, same result.
What should the correct steps be to roll the archive logs forward on top of storage snapshots? Thanks!
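For reference, the usual manual sequence for this kind of roll-forward on a mounted copy looks roughly like the sketch below; the SCN is a placeholder, and this illustrates the standard commands rather than diagnosing the ORA-01113 itself:
<code>STARTUP MOUNT;

-- apply archived logs up to the chosen SCN (or use UNTIL TIME / UNTIL CANCEL)
RECOVER DATABASE UNTIL CHANGE 1234567 USING BACKUP CONTROLFILE;
-- supply the restored archive logs when prompted, or answer AUTO

-- only once media recovery has been applied can the database be opened
ALTER DATABASE OPEN RESETLOGS;</code>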
I need to pass an array as a parameter to a stored procedure, like this:
<code>
--##############################################################################################
-- P A C K A G E S P E C
--##############################################################################################
CREATE OR REPLACE PACKAGE TEST_PKG
IS
   type chef_type is RECORD
   (
      p_numero_chef              varchar2(3),
      p_type_infraction_chef     varchar2(1),
      p_code_infraction_chef     varchar2(4),
      p_modele_chef              varchar2(7),
      p_nature_chef              varchar2(6),
      p_reglement_chef           varchar2(20),
      p_article_chef             varchar2(20),
      p_type_date_chef           varchar2(1),
      p_date_infraction_chef     varchar2(8),
      p_date_infraction_fin_chef varchar2(8)
   );

   type chef_array is varray(100) of chef_type;
--##############################################################################################
-- PROC
--##############################################################################################
   PROCEDURE TEST_PROC (
      p_var...
      p_nombre_chef in number,
      p_chef        in chef_array,
      p_var...
   );
END TEST_PKG;
/
--##############################################################################################
-- P A C K A G E   B O D Y
--##############################################################################################
CREATE OR REPLACE PACKAGE BODY TEST_PKG
IS
--##############################################################################################
-- (2) S I C O U R   M E T   A   J O U R   S I C M A
--##############################################################################################
   PROCEDURE TEST_PROC (
      p_var...
      p_nombre_chef in number,
      p_chef        IN chef_array,
      p_var...
   )
   IS
      v_var...
   BEGIN
      FOR i IN 1 .. p_nombre_chef
      LOOP
         v_numero_chef := p_chef.p_numero_chef(i);
         v_type_infraction_chef := p_chef.p_type_infraction_chef(i);
         ...
         insert ...
      END LOOP;
   END TEST_PROC;
END TEST_PKG;
/
Error message:
[Error] PLS-00302 (382: 30): PLS-00302: The component 'P_NUMERO_CHEF' must be declared
[Error] PLS-00302 (387: 39): PLS-00302: The component 'P_TYPE_INFRACTION_CHEF' must be declared
</code>
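For what it's worth, a hedged sketch of how a varray of records is normally dereferenced: the collection is indexed first and the record field second, which is what the PLS-00302 above is complaining about (the variable names follow the question's declarations):
<code>FOR i IN 1 .. p_chef.COUNT LOOP
   -- index the varray element first, then reference its record fields
   v_numero_chef          := p_chef(i).p_numero_chef;
   v_type_infraction_chef := p_chef(i).p_type_infraction_chef;
   -- ...
END LOOP;</code>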
Greetings,
Mongo JSON/BSON provides what seem like two cool improvements:
1.) Denormalization through embedding documents: https://www.mongodb.com/docs/manual/tutorial/model-embedded-one-to-many-relationships-between-documents/
2.) Design patterns to optimize access to data: https://www.mongodb.com/blog/post/building-with-patterns-a-summary
I would think that with OSON, we can denormalize/embed documents by simply creating multiple JSON data types within a single table. Is there a limit to the number of JSON data types per table? Are there any potential performance problems to watch out for?
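As an illustration of the embedding idea in the question (not a statement about limits), a single table can carry several JSON columns alongside ordinary relational ones; every name below is hypothetical:
<code>CREATE TABLE orders (
  order_id   NUMBER PRIMARY KEY,
  customer   JSON,                      -- embedded customer snapshot
  line_items JSON,                      -- embedded array of order lines
  created_at TIMESTAMP DEFAULT SYSTIMESTAMP
);

INSERT INTO orders (order_id, customer, line_items)
VALUES (1,
        JSON('{"name":"Acme","tier":"gold"}'),
        JSON('[{"sku":"A1","qty":2},{"sku":"B7","qty":1}]'));</code>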
Regarding Mongo design patterns to optimize access to data, does Oracle OSON have its own design patterns? Finally, since design patterns seem to optimize data access, why hasn't Oracle RDBMS implemented its own design patterns?
Thanks,
John
How do I create a pluggable database in Oracle Linux?
<code>
Enter user-name: sys as sysdba
Enter password:
Connected to:
Oracle Database 23ai Free Release 23.0.0.0.0 - Develop, Learn, and Run for Free
Version 23.5.0.24.07
SQL> alter session set container=cdb$root;
Session altered.
SQL> show con_name
CON_NAME
------------------------------
CDB$ROOT
SQL> CREATE PLUGGABLE DATABASE pdb1
ADMIN USER pdb1admin IDENTIFIED BY welcome
ROLES = (dba)
DEFAULT TABLESPACE users
DATAFILE '/opt/oracle/oradata/FREE/pdb1users01.dbf' SIZE 250M AUTOEXTEND ON
FILE_NAME_CONVERT = ('/opt/oracle/oradata/FREE/pdbseed/',
'/opt/oracle/oradata/FREE/pdb1/');
CREATE PLUGGABLE DATABASE pdb1
*
ERROR at line 1:
ORA-01017: invalid credential or not authorized; logon denied
Help: https://docs.oracle.com/error-help/db/ora-01017/
SQL> show user
USER is "SYS"
SQL> select * from session_privs;
....
PRIVILEGE
----------------------------------------
EXECUTE DYNAMIC MLE
USE ANY JOB RESOURCE
LOGMINING
CREATE ANY CREDENTIAL
CREATE CREDENTIAL
ALTER LOCKDOWN PROFILE
DROP LOCKDOWN PROFILE
CREATE LOCKDOWN PROFILE
SET CONTAINER
CREATE PLUGGABLE DATABASE
INHERIT ANY REMOTE PRIVILEGES
....
</code>
Please help me to solve this problem
Thanks :)
Greetings,
Has there been any performance comparison between Oracle 23 and Mongo 7 on saving and accessing JSON data? If yes, do you have a link to that test?
Thanks,
John
Hello Tom,
I have gone through the Oracle documentation and some of your articles and questions/answers about the High Water Mark. I am kind of confused about the HWM.
As I understand it, "all Oracle segments have an upper boundary containing the data within the segment. This upper boundary is called the high water mark, or HWM." That means a table cannot contain data beyond the HWM (if that is not correct, please correct me). If this is correct, and we determine the high water mark with the help of this query:
SELECT blocks, empty_blocks, num_rows
FROM user_tables
WHERE table_name = <tablename>;
Where,
1. BLOCKS represents the number of blocks 'ever' used by the segment.
2. EMPTY_BLOCKS represents only the number of blocks above the 'HWM'.
Now, if according to the HWM definition we cannot go beyond the HWM, then:
Question: What does EMPTY_BLOCKS represent?
Question: How is the HWM determined? Please give me an example.
Question: For which objects is an HWM maintained?
Question: How do I determine the real space used by a table (below the HWM)?
and I was using:
SELECT COUNT(DISTINCT SUBSTR(rowid,15,4)||SUBSTR(rowid,1,8)) "Used"
FROM schema.table;
or
SELECT COUNT (DISTINCT DBMS_ROWID.ROWID_BLOCK_NUMBER(rowid)||DBMS_ROWID.ROWID_RELATIVE_FNO(rowid)) "Used"
FROM schema.table;
or
SELECT COUNT (DISTINCT SUBSTR(rowid,1,15)) "Used"
FROM schema.table;
or
but all three of these queries are giving different results. Could you please also throw some light on this?
THX
Monica
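For reference on the "how to determine the HWM" part of the question above, a minimal sketch using the documented DBMS_SPACE.UNUSED_SPACE procedure; the blocks below the HWM are the total blocks minus the unused blocks it reports (the segment name EMP is just an example):
<code>DECLARE
  l_total_blocks  NUMBER;  l_total_bytes   NUMBER;
  l_unused_blocks NUMBER;  l_unused_bytes  NUMBER;
  l_last_file_id  NUMBER;  l_last_block_id NUMBER;
  l_last_block    NUMBER;
BEGIN
  DBMS_SPACE.UNUSED_SPACE(
    segment_owner             => user,
    segment_name              => 'EMP',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_last_file_id,
    last_used_extent_block_id => l_last_block_id,
    last_used_block           => l_last_block);
  DBMS_OUTPUT.put_line('Blocks below the HWM: ' || (l_total_blocks - l_unused_blocks));
END;
/</code>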
Greetings,
I recently heard that Oracle 23 will support JSON! I am excited about this since I have been taken away from supporting Oracle for the past three years to work on Mongo.
As I have been working with Mongo, I've realized it has a few big weaknesses compared with RDBMS design. I wonder if Oracle's JSON implementation avoids these weaknesses:
1.) Storage overhead from repeating field names in each document.
a. Let's say I have a collection that will contain 1000 fields. In this collection we will have 1 billion documents. Each document will be required to add a value for each of the 1000 fields. I calculated the total number of characters used for the 1000 field names to be 10,000 characters. Does Oracle save the field names for the 1000 fields in each document? That would result in 10,000 characters times a billion documents = 10 trillion characters stored! In relational database tables, only 10,000 characters would be stored for the table, since they only store the field names in the table definition.
b. Do you know if Oracle's JSON implementation won't store the repeating field names in each document of the same collection? There are cases where we don't need the "flexible schema" option. We want all documents to have the same fields and set data types.
2.) Mongo has a two replicaset requirement for each Mongo primary server instance. Is Oracle going to require the same, which is to have two standby database servers for each primary server?
3.) Mongo highly recommends that we have enough memory to store the whole index and collection working set in memory. The weakness here is that Mongo licensing is based on memory and Mongo's decision to store the field names per document significantly bloats memory with useless information. Will Oracle continue to license based off of CPU for JSON implementations?
Thanks,
John
In my previous post I mentioned that I took a machine learning class based on Python and a library called PyTorch. Since the class ended, I have been working on a useful application of the PyTorch library and machine learning ideas to my work with Oracle databases. I do not have a fully baked script to share today but I wanted to show some things I am exploring about the relationship between the current date and time and database performance metrics such as host CPU utilization percentage. I have an example that you can download here: datetimeml2.zip
There is a relationship between the day of the week and the hour of the day and database performance metrics on many Oracle database systems. This is a graph from my hostcpu.py script that shows CPU utilization on a production Oracle database by date and hour of the day:
During the weekdays the CPU peaks at a certain hour and on the weekends, there are some valleys. So, I thought I would use PyTorch to model this relationship. Here is what the graph looks like of actual host CPU used versus predicted by PyTorch:
It’s the same source database but an earlier date. The prediction is close. I guess the big question after I got to this point was, so what? I’m not sure exactly what to do with it now that I have a model of the database CPU. I guess if the CPU is at 100% for an entire day instead of going up and down, I should throw an alert? What if CPU stays near 0% for a day during the week? It must be helpful to have a prediction of the host CPU but exactly how to alert on deviations from the predicted value is still a work in progress.
I thought it would be helpful to talk about the date and time inputs to this model. If you look at datetimeoscpu.sql in the zip it has this SQL for getting the date and time values:
select
to_char(sn.END_INTERVAL_TIME,'D') day_of_week,
to_char(sn.END_INTERVAL_TIME,'DD') day_of_month,
to_char(sn.END_INTERVAL_TIME,'DDD') day_of_year,
to_char(sn.END_INTERVAL_TIME,'HH24') hour_of_day,
to_char(sn.END_INTERVAL_TIME,'MM') month,
to_char(sn.END_INTERVAL_TIME,'Q') quarter,
to_char(sn.END_INTERVAL_TIME,'WW') week_of_year,
to_char(sn.END_INTERVAL_TIME,'W') week_of_month,
to_char(sn.END_INTERVAL_TIME,'YYYY') year,
...
I ended up ignoring year because it is not a cyclical value. The rest have a range like 1 to 7 for day of week or 1 to 31 for day of month. Having all eight of these is probably overkill. I could just focus on day of week and hour of day and forget the other six. We have 6 weeks of AWR history so I’m not sure why I care about things like month, quarter, day of year because I don’t have multiple years of history to find a pattern.
My input data looks like this:
4 17 199 03 07 3 29 3 2024 15.4210261
4 17 199 04 07 3 29 3 2024 15.5799532
4 17 199 05 07 3 29 3 2024 26.2080227
4 17 199 06 07 3 29 3 2024 24.1532019
4 17 199 07 07 3 29 3 2024 23.2947767
4 17 199 08 07 3 29 3 2024 25.5198717
4 17 199 09 07 3 29 3 2024 26.029638
4 17 199 10 07 3 29 3 2024 27.1160204
4 17 199 11 07 3 29 3 2024 27.7076496
Each line represents an AWR snapshot. The first 8 values are the cyclical date and time inputs, or X, the ninth is the year (which I ignore), and the last value is the host CPU utilization percentage, or Y. The point of the program is to create a model based on this data that will take in the 8 values and put out a predicted CPU percentage. This code was used to make the predictions for the graph at the end of the train.py script:
predictions = model(new_X)
When I first started working on this script it was not working well at all. I talked with ChatGPT about it and discovered that cyclical values like hour of day would work better with a PyTorch model if they were run through sine and cosine to transform them into the range -1 to 1. Otherwise PyTorch thinks that an hour like 23 is far apart from the hour 0 when really, they are adjacent. Evidently if you have both cosine and sine their different phases help the model use the cyclical date and time values. So, here is the code of the function which does sine and cosine:
import numpy as np

def sinecosineone(dttmval,period):
"""
Use both sine and cosine for each of the periodic
date and time values like hour of day or
day of month
"""
# Convert dttmval to radians
radians = (2 * np.pi * dttmval) / period
# Apply sine and cosine transformations
sin_dttmval = np.sin(radians)
cos_dttmval = np.cos(radians)
return sin_dttmval, cos_dttmval
def sinecosineall(X):
"""
Column Number
day_of_week 7
day_of_month 31
day_of_year 366
hour_of_day 24
month 12
quarter 4
week_of_year 52
week_of_month 5
"""
...
The period is how many are in the range – 24 for hour of the day. Here is the hour 23 and hour 0 example:
>>> sinecosineone(23.0,24.0)
(-0.25881904510252157, 0.9659258262890681)
>>> sinecosineone(0.0,24.0)
(0.0, 1.0)
Notice how the two values are close together for the two hours that are close in time. After running all my input data through these sine and cosine procedures they are all in the range -1.0 to 1.0 and these were fed into the model during training. Once I switched to this method of transforming the data the model suddenly became much better at predicting the CPU.
You can play with the scripts in the zip. I really hacked the Python script together based on code from ChatGPT, and I think one function from my class. It isn’t pretty. But you are welcome to it.
I’m starting to get excited about PyTorch. There are a bunch of things you must learn initially but the fundamental point seems simple. You start with a bunch of existing data and train a model that can be used as a function that maps the inputs to the outputs just as they were in the original data. This post was about an example of inputting cyclical date and time values like time of day or day of week and outputting performance metrics like CPU used percentage. I don’t have a perfect script that I have used to great results in production, but I am impressed by PyTorch’s ability to predict database server host CPU percent based on date and time values. More posts to come, but I thought I would get this out there even though it is imperfect. I hope that it is helpful.
Bobby
hi,
I have a question regarding external tables and macros.
We have an external table, and there are a couple of complex SQL expressions (CASE, REGEXP, SUBSTR, ...) that I would like to 'hide' in a view, so we can reuse those expressions in PL/SQL.
With a view I cannot pass the location, so I thought a SQL macro could help me out, but so far no luck.
<code>select banner_full from v$version;
BANNER_FULL
--------------------------------------------------------------------------------
Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.24.0.0.0
create table test_table (
a varchar2(10),
b varchar2(10),
c varchar2(10)
)
organization external (
type oracle_loader
default directory dir_temp
access parameters (
fields csv without embedded
missing field values are null
reject rows with all null fields
)
location ( 'test.csv' )
);
-- this works
select *
from test_table
external modify (
location('test.csv')
);</code>
<code>create or replace function external_table_macro_test
(
p_tab in dbms_tf.table_t
,p_filename in varchar2
) return clob sql_macro is
begin
return q'{select a, case a when 'test' then b end as just_an_example from p_tab external modify(location('p_filename'))}';
end external_table_macro_test;
/</code>
<code>-- this returns an error, ORA-30663: An EXTERNAL MODIFY clause can only be specified for an external or hybrid-partitioned table.
select * from external_table_macro_test(p_tab => test_table, p_filename => 'test.csv' );</code>
Even without the filename parameter it fails. Is this combination even possible?
thx
Dear all,
I would like to download files from a file server and view them in Oracle APEX. So far I have been unable to do so.
I have been able to export a file uploaded from an Oracle APEX page to the file server, but I would also like to do the reverse, i.e. read files from the file server and view them in Oracle APEX.
Note that I am saving only the file ID and filename in an Oracle table.
The objective is to avoid saving files as BLOBs in the database.
Thanking you,
I have an Oracle 19c server on a cloud VM, and from my laptop I am able to connect to the database using SQL Developer.
I have also installed the Oracle 19c client on my laptop, and the intention is to connect to Oracle from a Python script. When I try
to connect to the database using SQL*Plus from the command prompt, I get the error "ORA-12547: TNS: lost contact".
Kindly let me know how to resolve this.
sqlplus userid/pwd@oracleserver:port/servicename
Is there any documentation for distributing (promoting) code from a dev to a prod environment?
Regards
Eduardo
Hi,
We have a case which is depicted in the code below. The requirement is that the SQL has three input values in the WHERE clause, which are passed using an EXECUTE IMMEDIATE ... USING clause. There may be cases where the user does not pass one or more of the values, and then the block raises an error when executed. Is there a way to handle this requirement?
<code>declare
l_empno number := &l_empno;
l_deptno number:= &l_deptno;
l_mgr number:=&l_mgr;
l_stm varchar2(1000);
begin
l_stm := 'select * from emp where empno = :1 and deptno = :2 and mgr = :3';
execute immediate l_stm using l_empno,l_deptno,to_number(l_mgr);
end;</code>
1) When all values are passed as input, the PL/SQL block completes successfully.
2) When any one of the values is not passed, it errors. Is there a way to do this?
Thanks,
Girish
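One common way to handle this, sketched under the assumption that a missing value arrives as NULL: keep all three binds but let NULL mean "no filter", so the statement text and the USING list never change shape. Note that native dynamic SQL binds by position, so a placeholder used twice must be supplied twice:
<code>l_stm := 'select * from emp
           where (:1 is null or empno  = :1)
             and (:2 is null or deptno = :2)
             and (:3 is null or mgr    = :3)';
-- each placeholder appears twice, so each value is bound twice
execute immediate l_stm using l_empno, l_empno, l_deptno, l_deptno, l_mgr, l_mgr;</code>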
<b>Hi Tom!</b>
I have a string in this format: "xxx/yyy/zzz"
I want to replace words using these rules:
if we meet "xxx" then change it to "x1"
if we meet "yyy" then change it to "y1"
if we meet "zzz" then change it to "z1"
And in the end I want to get a string like this: "x1/y1/z1".
My first variant is:
<code>With Example(str) as
(select 'xxx/yyy/zzz' from dual)
select
LISTAGG( Case RegExp_SubStr(str,'[^/]+',1,level)
When 'xxx' then 'x1'
When 'yyy' then 'y1'
When 'zzz' then 'z1'
End,
'/') as NewString
from Example
connect by RegExp_SubStr(str,'[^/]+',1,level) is not null</code>
It works! But it is so long!
I want to make the change in one pass using RegExp_Replace,
but I don't know how to do it!
I want to use a backreference like this:
<code>select
RegExp_Replace(str, '(xxx|yyy|zzz)',
Case '\1'
When 'xxx' then 'x1'
When 'yyy' then 'y1'
When 'zzz' then 'z1'
End
) as NewString
from Example </code>
But it doesn't work. The result is "//".
Can you explain why? Can I not use a backreference in this example?
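For comparison, a sketch of a single-statement alternative that sidesteps the backreference question rather than answering it: when the substitutions are fixed literals, nested REPLACE calls do the whole thing in one pass.
<code>With Example(str) as
(select 'xxx/yyy/zzz' from dual)
select Replace(Replace(Replace(str, 'xxx', 'x1'), 'yyy', 'y1'), 'zzz', 'z1') as NewString
from Example;</code>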
I have a materialized view in my Oracle database (v$instance.version=19.0.0.0.0) from which a user is able to select records. However, I can't find any permissions that would give the user that ability.
Using the following queries, I have checked for privileges, but I haven't found any. Where else should/could I look?
In this post, I've changed the values for anonymity.
Username = 'ADAM'
MView Owner = 'SCHEMANAME'
MView Name = 'MVIEWNAME'
-- Check for system privileges
<code>SELECT * FROM DBA_SYS_PRIVS WHERE GRANTEE='ADAM';</code>
/* There are no privileges, such as SELECT ANY TABLE, that would grant this access */
-- Check for direct object privileges
<code>SELECT * FROM DBA_TAB_PRIVS WHERE TABLE_NAME='MVIEWNAME' AND GRANTEE='ADAM';</code>
/* There are no direct privs granted for this user */
-- Check for direct object privileges for roles
<code>SELECT * FROM DBA_TAB_PRIVS WHERE TABLE_NAME='MVIEWNAME' AND GRANTEE IN (SELECT ROLE FROM DBA_ROLES);</code>
/* There are no direct privs granted for any roles. This rules out ADAM being a member of a role with permission */
Where else could ADAM's access be coming from, and how would I revoke it?
Thanks for your help.
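Two more places worth checking, sketched below: grants made to PUBLIC (which apply to every user), and grants made to roles that reach ADAM indirectly through role-to-role grants; the role query above only covers roles as direct grantees, not which roles ADAM actually holds:
<code>-- grants to PUBLIC apply to every user
SELECT * FROM DBA_TAB_PRIVS WHERE TABLE_NAME='MVIEWNAME' AND GRANTEE='PUBLIC';

-- grants to any role ADAM holds, directly or via nested roles
SELECT * FROM DBA_TAB_PRIVS
WHERE  TABLE_NAME = 'MVIEWNAME'
AND    GRANTEE IN (SELECT granted_role
                   FROM   dba_role_privs
                   START WITH grantee = 'ADAM'
                   CONNECT BY grantee = PRIOR granted_role);</code>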
There are at least two ways of configuring facets in a faceted report in a hierarchical fashion, using different attributes for the child facet. One is straightforward, the other more convoluted. Let's find out in this video, which I prepared for #JoelKallmanDay to remember a mentor and friend.
This video is a step-by-step tutorial on fine-tuning or fully training the voice models F5-TTS and E2-TTS on your own custom voice dataset locally.
Code:
git clone https://github.com/SWivid/F5-TTS.git && cd F5-TTS
cd ckpts
mkdir F5TTS_Base
wget https://huggingface.co/SWivid/F5-TTS/resolve/main/F5TTS_Base/model_1200000.safetensors?download=true

-- In train.py, in Line 75, make sure that path points to your model's directory
-- In models/trainer.py, in Line 94, make sure that path points to your model's directory

conda create -n ai python=3.11 -y && conda activate ai
pip install torch torchaudio
pip install git+https://github.com/huggingface/transformers
pip install git+https://github.com/huggingface/accelerate
pip install huggingface_hub
pip install pandas datasets

import pandas as pd
from datasets import load_dataset
dataset = load_dataset("amphion/Emilia-Dataset")
dataset.save_to_disk("/home/Ubuntu/mydataset/emilia_subset")

# prepare custom dataset up to your need
# download corresponding dataset first, and fill in the path in scripts, you may tailor your own one along with a Dataset class in model/dataset.py.
# Prepare the Emilia dataset
python scripts/prepare_emilia.py
# Prepare the Wenetspeech4TTS dataset
python scripts/prepare_wenetspeech4tts.py

Training
Once your datasets are prepared, you can start the training process.
# setup accelerate config, e.g. use multi-gpu ddp, fp16
# will be to: ~/.cache/huggingface/accelerate/default_config.yaml
accelerate config
accelerate launch train.py