DBA Blogs
Here is the test.
<code>create table t1(id int, name char(10));
create table t2(id int, name char(12));
insert into t1 values(100, 'yang');
insert into t2 values(200, 'yang');
commit;
-- return 1 row
select * from t1, t2 where t1.name = t2.name and t1.name = 'yang';
-- return 1 row
select * from t1, t2 where t1.name = t2.name and t1.name = rpad('yang', 10);
-- return 1 row
with tmp as (
select id, name from t2 where name = rpad('yang', 12)
)
select * from t1, tmp where t1.name = tmp.name and t1.name = 'yang';
-- return 0 row
with tmp as (
select id, name from t2 where name = rpad('yang', 12)
)
select * from t1, tmp where t1.name = tmp.name and t1.name = rpad('yang', 10);</code>
Why does the last query return 0 rows?
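A diagnostic sketch that may help, reusing the tables above. The key observation is that rpad() returns VARCHAR2, so any comparison against it uses non-padded semantics; if the optimizer derives the transitive predicate tmp.name = rpad('yang', 10), the 12-character CHAR value in t2 cannot match the 10-character string. The explanation is an inference, but the query below at least shows what is stored:
<code>-- t2.name is CHAR(12): 'yang' blank-padded to 12 characters.
-- Comparing it to the 10-character VARCHAR2 from rpad() uses
-- non-padded semantics, so the values differ.
select length(name) as stored_len,
       case when name = rpad('yang', 10) then 'match' else 'no match' end as cmp
from   t2;</code>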
I am able to work out reference partitioning with the DDL below, but I can't seem to figure out how to also make the child table an IOT. The IOT examples I have seen needed the PK defined in the CREATE TABLE DDL, while my solution for making the child table's PK a local partitioned index required building the PK via an ALTER. I cannot seem to figure out how to get both.
<code>
CREATE TABLE orders
( ORD_ID INTEGER GENERATED ALWAYS AS IDENTITY,
order_date DATE,
order_mode VARCHAR2(500)
)
partition by range (ORD_ID) interval (100)
(partition empty values less than (1)
)
;
CREATE UNIQUE INDEX PK_ORDERS_PK ON ORDERS (ORD_ID) local ;
ALTER TABLE ORDERS ADD CONSTRAINT "PK_ORDERS_PK" PRIMARY KEY (ORD_ID) ;
CREATE TABLE ORDER_ITEMS
( order_items_ID INTEGER GENERATED ALWAYS AS IDENTITY NOT NULL,
ORD_ID INTEGER NOT NULL,
product_id NUMBER(6) NOT NULL,
unit_price NUMBER(8,2),
quantity NUMBER(8),
CONSTRAINT FK_order_items_ORD_ID
FOREIGN KEY(ORD_ID) REFERENCES orders(ORD_ID)
)
PARTITION BY REFERENCE(FK_order_items_ORD_ID);
CREATE UNIQUE INDEX PK_ORDER_ITEMS_PK ON ORDER_ITEMS (ORD_ID,order_items_ID) local ;
ALTER TABLE ORDER_ITEMS ADD CONSTRAINT "PK_ORDER_ITEMS_PK" PRIMARY KEY (ORD_ID,order_items_ID)
USING INDEX "PK_ORDER_ITEMS_PK" ENABLE;</code>
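For reference, a sketch of the inline-PK variant. This is untested and hedged: it assumes the PK must be declared inside CREATE TABLE so that ORGANIZATION INDEX can be specified, and whether the database accepts ORGANIZATION INDEX together with PARTITION BY REFERENCE is exactly the open question here:
<code>-- Sketch only: declares the PK inline so ORGANIZATION INDEX
-- can be specified in the same CREATE TABLE.
CREATE TABLE order_items_iot
( order_items_id INTEGER GENERATED ALWAYS AS IDENTITY NOT NULL,
  ord_id         INTEGER NOT NULL,
  product_id     NUMBER(6) NOT NULL,
  unit_price     NUMBER(8,2),
  quantity       NUMBER(8),
  CONSTRAINT pk_order_items_iot PRIMARY KEY (ord_id, order_items_id),
  CONSTRAINT fk_order_items_iot_ord_id
    FOREIGN KEY (ord_id) REFERENCES orders (ord_id)
)
ORGANIZATION INDEX
PARTITION BY REFERENCE (fk_order_items_iot_ord_id);</code>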
Hi,
I have a column in a table that contains free text. Some fields end with text like: distance: R1: a R2: b R3: c mm mm mm
I want to insert d/a before the first mm, d/b before the second mm, and d/c before the third mm. The result would be: distance R1: a R2: b R3: c d/a mm d/b mm d/c mm
Please can you help me write the UPDATE statement, as in update table set column = ??.
For example: in items we have R1: 43.5 R2: 44 R3: 45 mm mm mm. There are about 17,000 rows containing this information. I want to introduce a calculation before each mm: 337.5/43.5 = 7.76 mm for the first one. The final result would be R1: 43.5 R2: 44 R3: 45 7.76 mm 7.67 mm 7.5 mm.
I think perhaps I must use regexp_replace in the update:
update table set column = regexp_replace (column, ....);
regexp_replace is very hard to understand. That is my problem.
Thanks.
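A sketch of one possible shape for that update, hedged: it assumes the text always ends in exactly "mm mm mm", the constant is 337.5, and the table and column are named items and descr (substitute the real names). Each Rn value is pulled out with the capture-group form of regexp_substr, divided into the constant, and spliced back in place of the trailing markers:
<code>-- For "R1: 43.5 R2: 44 R3: 45 mm mm mm" this yields
-- "R1: 43.5 R2: 44 R3: 45 7.76 mm 7.67 mm 7.5 mm".
update items t
set    descr = regexp_replace(
         descr,
         'mm mm mm$',
         round(337.5 / to_number(regexp_substr(descr, 'R1: (\S+)', 1, 1, null, 1)), 2) || ' mm ' ||
         round(337.5 / to_number(regexp_substr(descr, 'R2: (\S+)', 1, 1, null, 1)), 2) || ' mm ' ||
         round(337.5 / to_number(regexp_substr(descr, 'R3: (\S+)', 1, 1, null, 1)), 2) || ' mm')
where  regexp_like(descr, 'mm mm mm$');</code>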
I would like to create a table in an Oracle 19c database (on premises) that is optimally designed for many parallel DML statements (INSERT/UPDATE).
What do I have to pay attention to?
Does partitioning help, perhaps with different tablespaces?
How should sequences be created, or should sequences be omitted?
How should indexes be created?
...
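On the sequence point, one 19c option often discussed for parallel-insert hot spots is a scalable sequence: it prefixes values with an instance/session component so concurrent inserts land in different index leaf blocks. A minimal sketch (the name is made up, and whether it helps depends on the workload):
<code>-- SCALE EXTEND widens the generated values with an
-- instance/session prefix to spread index insertions.
CREATE SEQUENCE order_id_seq
  START WITH 1
  INCREMENT BY 1
  CACHE 1000
  SCALE EXTEND;</code>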
Hi Tom,
I have a question about adding constraints to a table.
I have 2 tables defined as:
<code>
create table TableA (
fieldA varchar2(10),
fieldB varchar2(10),
fieldC varchar2(5),
fieldD varchar2(10) not null,
fieldE number(38) not null,
Constraint TableAPK primary key (fieldE));
</code>
<code>
create table TableB(
categorie varchar2(255),
soort varchar(255) not null,
code varchar(255) not null,
status varchar(255),
Constraint TableB primary key (code));
</code>
In TableA, the values from fieldA, fieldB, fieldC and fieldD are used to populate the code column of TableB.
The column soort in TableB is filled depending on which TableA column the value comes from.
As example:
<code>
TableA
fieldA fieldB fieldC fieldD fieldE
------ ------ ------ ------ ------
PAS304 344555 PG1AA 36415 305
TableB
categorie soort code status
--------- ----- ------ ------
[null] ALT PAS304 free
[null] MMS 344555 free
[null] ROE PG1AA free
[null] TOR 36415 free
</code>
My question:
Is it possible to create new foreign key constraints in TableA for fieldA, fieldB, fieldC and fieldD referencing the TableB columns (soort, code)?
I tried something like this, but I am not sure it is what I want.
<code>
ALTER TABLE TableA
ADD CONSTRAINT fieldA_FK
FOREIGN KEY (fieldA)
REFERENCES TableB (CODE);
ALTER TABLE TableA
ADD CONSTRAINT fieldB_FK
FOREIGN KEY (fieldB)
REFERENCES TableB (CODE);
etc
</code>
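For completeness, a sketch of a pattern that would let the database validate the (soort, code) pairs. It is hedged on one assumption: that each TableA column always maps to one fixed soort value (e.g. 'ALT' for fieldA, as in the example). A foreign key must reference a primary or unique key, so (soort, code) first needs a unique constraint, and a virtual column can supply the constant soort:
<code>-- Assumption: fieldA rows always carry soort = 'ALT' in TableB.
ALTER TABLE TableB ADD CONSTRAINT TableB_UK UNIQUE (soort, code);

ALTER TABLE TableA ADD (fieldA_soort VARCHAR2(255)
  GENERATED ALWAYS AS ('ALT') VIRTUAL);

ALTER TABLE TableA ADD CONSTRAINT fieldA_FK
  FOREIGN KEY (fieldA_soort, fieldA)
  REFERENCES TableB (soort, code);</code>
The same pattern would repeat for fieldB/'MMS', fieldC/'ROE' and fieldD/'TOR'.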
Thanks in advance for your answer.
With kind regards,
Ricardo Barrera Ramirez
Hi Tom,
I am trying to move the USERS tablespace using the transportable tablespace feature. First I do:
alter tablespace users read only;
Then do :
exp sys/change_on_install file=02072001.dmp transport_tablespace=y tablespaces=users triggers=n
constraints=n
But I got the error as:
Export: Release 8.1.5.0.0 - Production on Fri Feb 9 13:19:08 2001
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Connected to: Oracle8i Enterprise Edition Release 8.1.5.0.0 - Production
With the Partitioning and Java options
PL/SQL Release 8.1.5.0.0 - Production
Export done in US7ASCII character set and US7ASCII NCHAR character set
Note: table data (rows) will not be exported
Note: constraints on tables will not be exported
About to export transportable tablespace metadata...
EXP-00008: ORACLE error 29341 encountered
ORA-29341: The transportable set is not self-contained
ORA-06512: at "SYS.DBMS_PLUGTS", line 1067
ORA-06512: at line 1
EXP-00000: Export terminated unsuccessfully
I checked the packages under the SYS account and the package named 'DBMS_PLUGTS' is there.
Could you point out how to solve the problem?
Thanks.
Pauline
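For context on the error above: ORA-29341 means some object in USERS depends on objects outside the transportable set (for example, an index in USERS on a table in another tablespace). The standard check, run as SYS, lists the offending objects:
<code>-- Report what makes the set non-self-contained, then move or drop
-- those objects (or add their tablespace to the set) and re-run exp.
EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('USERS', TRUE);
SELECT * FROM SYS.TRANSPORT_SET_VIOLATIONS;</code>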
Hi TOM,
Sorry for asking this basic question (after two hours of searching, I am unable to find the answer).
Let's say I have run opatch in a 19c Oracle home to patch the binaries.
But then I "forgot" to run datapatch.
Is there any dictionary view to tell me that I "forgot"?
BR
Peter
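One place to look is DBA_REGISTRY_SQLPATCH, which records every datapatch action in the database. A patch that shows up in opatch lsinventory but has no matching SUCCESS row here suggests datapatch is still outstanding:
<code>-- Each datapatch run is recorded here; compare against
-- the binary-level inventory from opatch lsinventory.
SELECT patch_id, patch_type, action, status, action_time, description
FROM   dba_registry_sqlpatch
ORDER  BY action_time;</code>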
I am working on a business SaaS solution and my data is siloed using a tenant_id column. I am setting up REST APIs using ORDS (v24). The application will provide the ability to POST and GET the subscribers' data. Each subscriber will be provided their own set of credentials for the REST API. In calls to the API, I am requiring the tenant_id to be passed in via the headers.
It occurs to me that even though I am securing the API behind a set of credentials, there is nothing to prevent a subscriber from using their credentials to access the API but spoofing their tenant_id in the headers by simply guessing a different tenant_id. Is there a way in ORDS to identify the user accessing the API so that I can map that user to the tenant_id?
If not, I already activate a fake application user, primarily for the purpose of logging the transactions, so I've considered securing that user with a password and requiring the password to also be passed in the headers. I appreciate your advice on the best approach to use.
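One avenue worth exploring here, sketched with made-up module and table names: ORDS exposes the authenticated user through the implicit :current_user bind, so the tenant can be resolved server-side from a username-to-tenant mapping instead of trusting a client-supplied header:
<code>-- Sketch: tenant_users is a hypothetical table mapping REST
-- usernames to tenant_id. The handler derives the tenant from
-- :current_user, so a spoofed header is simply ignored.
BEGIN
  ORDS.define_handler(
    p_module_name => 'tenant_api',
    p_pattern     => 'orders/',
    p_method      => 'GET',
    p_source_type => ORDS.source_type_collection_feed,
    p_source      => q'[
      select o.*
      from   orders o
      join   tenant_users tu on tu.tenant_id = o.tenant_id
      where  tu.username = :current_user ]');
  COMMIT;
END;
/</code>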
Dear Ask Tom Team,
can you suggest the best way to have my "<b>datamarts</b>" tables, which summarize data (coming from the <b>OLTP</b> database) using <b>analytical functions</b>, refreshed daily?
Thanks
Alessandra
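If the summaries can be expressed as queries, the classic answer is a materialized view refreshed on a schedule (complete refresh, since analytic functions generally rule out fast refresh). A sketch with made-up names:
<code>-- Complete-refreshed once a day; DBMS_SCHEDULER is an
-- alternative for more elaborate refresh orchestration.
CREATE MATERIALIZED VIEW sales_summary_mv
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE NEXT TRUNC(SYSDATE) + 1 + 22/24
AS
SELECT product_id,
       SUM(amount)                             AS total_amount,
       RANK() OVER (ORDER BY SUM(amount) DESC) AS sales_rank
FROM   sales
GROUP BY product_id;</code>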
How can I insert and read a numeric value larger than 1,000 digits into a BLOB column in Oracle?
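Oracle's NUMBER type caps at 38 significant digits, so a 1,000+ digit value has to be stored as text; a CLOB is the natural fit (a BLOB works too if the digits are converted to RAW). A sketch with illustrative names:
<code>-- Store the digits as a character LOB rather than a NUMBER.
CREATE TABLE big_numbers (id NUMBER PRIMARY KEY, val CLOB);

DECLARE
  v_big CLOB := RPAD('9', 1001, '7');  -- a 1001-digit value as a string
BEGIN
  INSERT INTO big_numbers VALUES (1, v_big);
  COMMIT;
END;
/

-- Reading it back:
SELECT DBMS_LOB.GETLENGTH(val) AS digits,
       DBMS_LOB.SUBSTR(val, 20, 1) AS first_20
FROM   big_numbers;</code>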
I have a first table, MIT_DOCUMENTS_LIST_MASTER, with these columns and data:
<code>
ID DOCUMENTS_LIST
1 Passport Copy
2 Visa Application Form
3 Company Cover Letter
4 Invitation Letter
</code>
The second table, VISA_ON_ARRIVAL_COUNTRIES, has these columns and data:
<code>
id COUNTRY_ID COUNTRY_NAME VISA_REQUIRED VISA_ON_ARRIVAL_COUNTRIES DOCUMENTS_LIST CREATED_ON LAST_MODIFIED
1 45 Germany Y N 1:2:3:5:4:6:7:9 01/15/2026 01/15/2026
</code>
The third table (columns only):
<code>
CREATE TABLE VISA_DOCUMENTS_UPLOADS (
ID NUMBER GENERATED BY DEFAULT AS IDENTITY,
COUNTRY_ID NUMBER,
DOCUMENT_ID NUMBER,
FILENAME VARCHAR2(4000 BYTE),
FILE_MIMETYPE VARCHAR2(512 BYTE),
FILE_BLOB BLOB,
FILE_CHARSET VARCHAR2(512 BYTE),
SAMPLE_DOCUMENT BLOB,
FORM_TEMPLATE BLOB,
UPLOADED_DOCUMENT BLOB,
PREVIEW_DOCUMENT VARCHAR2(4000),
FILE_COMMENTS VARCHAR2(4000 BYTE),
CREATED TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP,
CREATED_BY VARCHAR2(255 BYTE),
UPDATED TIMESTAMP(6) DEFAULT CURRENT_TIMESTAMP,
UPDATED_BY VARCHAR2(255 BYTE),
CONSTRAINT pk_visa_documents_uploads PRIMARY KEY (ID)
);
</code>
Now I have to create a report: when I select a country, that country's documents list should be shown in the report like this:
<code>
Docid Document Name SampleDocuments FormTemplates Upload Documents Preview Uploaded Documents Mimetype File Size
1 Passport Copy Download Download chose_file Preview
2 Visa Application Form Download Download chose_file Preview
</code>
As you can see in the report format above, under Upload Documents I need to upload a file for each document and store it in the VISA_DOCUMENTS_UPLOADS table. There is one Save button; when I click it, the uploaded files should be stored in the database.
This is the report query:
<code>
SELECT
d.ID AS DOCUMENT_ID,
d.DOCUMENTS_LIST AS DOCUMENT_NAME,
MAX(CASE WHEN vu.SAMPLE_DOCUMENT IS NOT NULL THEN
'<a href="download_sample?p_id=' || vu.ID || '" target="blank">Download</a>'
ELSE 'Sample not Available' END) AS SAMPLE_DOCUMENTS,
MAX(CASE WHEN vu.FORM_TEMPLATE IS NOT NULL THEN
'<a href="download_template?p_id=' || vu.ID || '" target="blank">Download</a>'
ELSE 'Template not Available' END) AS FORM_TEMPLATES,
'<input type="file">' AS UPLOAD_FILLED_FORM_BUTTON,
MAX(CASE WHEN vu.PREVIEW_DOCUMENT IS NOT NULL THEN
'<a href="' || vu.PREVIEW_DOCUMENT || '" target="_blank" id="preview_document_' || vu.ID || '">Preview</a>'
ELSE 'No Preview Available' END) AS PREVIEW_UPLOADED_DOCUMENTS,
MAX(vu.FILE_MIMETYPE) AS Mimetype,
...
</code>
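One piece that may help with the query above: resolving the colon-delimited DOCUMENTS_LIST into rows of the master table. A sketch, assuming the '1:2:3...' format shown earlier:
<code>-- Match each master-list id inside the ':'-delimited string.
SELECT m.id, m.documents_list AS document_name
FROM   visa_on_arrival_countries c
JOIN   mit_documents_list_master m
  ON   ':' || c.documents_list || ':' LIKE '%:' || m.id || ':%'
WHERE  c.country_id = 45;</code>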
Hello,
I accidentally dropped an index on a table (Oracle 19c) that was built on three columns; one of them is a foreign key column.
The index definition:
<code>CREATE INDEX "dba"."INDEX_SOP" ON "dba"."USR_SOP_STD" ("TS_ID", "TS_ASSET_SUBDOMAIN", "TS_TITLE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "TT_DATA" ;</code>
The column TS_ASSET_SUBDOMAIN is a foreign key.
The impact on performance was notable. I have a backup of the database from before the index was dropped. How can I check what exactly happened during the index drop? How can I fix this?
Many thanks.
Kind regards,
Ioana Deaconu
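Since the full definition is captured above, the index can simply be recreated; restoring from backup should not be necessary for this. A sketch (ONLINE keeps DML available during the build, on Enterprise Edition):
<code>CREATE INDEX "dba"."INDEX_SOP"
  ON "dba"."USR_SOP_STD" ("TS_ID", "TS_ASSET_SUBDOMAIN", "TS_TITLE")
  TABLESPACE "TT_DATA"
  ONLINE;</code>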
Oracle packages support an initialization block that automatically runs when the package is first accessed within a session. My question: is there a method that gets called when the session ends?
The reason I am asking is that I have a package that connects to LDAP. I have been having trouble because batch jobs can easily exceed the allowed number of connections. Something about the unbind does not execute and free the connection right away.
To that end I am making a change to allow the session to have just one connection, but it is possible someone will not call the close routine (to unbind). So I was wanting a way to ensure that the close routine would automatically get called at the end of the session.
<code>
-- example of the initialization block
create or replace package body LDAP_Valmont is

  ldap_session DBMS_LDAP.SESSION;
  Session_Open BOOLEAN;

  procedure SetSessionVariables is
  begin
    Session_Open := FALSE;
  end;

begin
  SetSessionVariables;
end;
</code>
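There is no package-level "finalizer" in PL/SQL, but a database logoff trigger can invoke the cleanup routine. A sketch, assuming the package exposes an unbind-if-open routine (called Close_Session here, a made-up name); note that logoff triggers do not fire for killed or aborted sessions:
<code>-- Runs for each session as it logs off; never let cleanup
-- errors block the logoff itself.
CREATE OR REPLACE TRIGGER ldap_cleanup_trg
BEFORE LOGOFF ON DATABASE
BEGIN
  LDAP_Valmont.Close_Session;  -- hypothetical cleanup routine
EXCEPTION
  WHEN OTHERS THEN NULL;
END;
/</code>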
Hi Tom,
I'm trying to get rid of a performance problem in APEX with a pipelined function in a database package. Reading the Oracle documentation and searching the web/communities wasn't successful. Sorry if my question has already been answered - I haven't found it.
My situation:
1) For different facilities of our company there are physically separated Oracle databases which hold production data.
2) Each of the databases from 1) holds the same data model.
3) There is a central APEX info page for requesting data from each of the databases in 1). Depending on the user (groups), the APEX page should extract the data from the proper database and put it into a dynamic report (about 5,000 to 10,000 rows).
4) To meet the requirements of 3) and to avoid programming the same function several times, I programmed the following on the database side (Oracle DB 12c Enterprise Ed. Rel. 12.1.0.2.0 - 64bit Production):
4.1) created DB links to each database from 1).
4.2) created a user-defined object type (create or replace type xyz_t as object ...) and a collection type (create or replace type xyz_ct as table of xyz_t;).
4.3) created a package with a pipelined function (FUNCTION get_prod_data(facility varchar2) RETURN xyz_ct PIPELINED).
5) The function body:
5.1) declare a ref cursor
5.2) build a dynamic statement to switch between the DB links ('select a, b, c from prod_tbl@' || dblink || ', prod_tbl2@' || dblink || ' where ...')
5.3) open the cursor for the statement from 5.2), then:
loop
  fetch cursor into variables;
  exit when cursor%NOTFOUND;
  pipe row (xyz_t(...));
end loop;
close cursor;
Confusion:
c1) Calling the function directly in the database - select * from table(package.get_prod_data('niceCity1')); - returns the data within 1 second.
c2) Calling the function from APEX for a dynamic report, the data is returned in more than 2 minutes (!!!).
c3) Changing the implementation of the function to a static cursor loop (no dynamic statement):
for runvar1 in (
  select a, b, c
  from prod_tbl@dblink_c1, prod_tbl2@dblink_c1
  where ...
)
loop
  pipe row (xyz_t(runvar1.a, runvar1.b, runvar1.c));
end loop;
Effect: the data is returned within seconds both in the database and in APEX (instead of minutes in APEX).
c4) Extracting the data for the dynamic report directly in APEX with the original select statement ('select a, b, c from prod_tbl@dblink_c1, prod_tbl2@dblink_c1 where ...') also gives good performance.
So it seems to be an issue with the cursor inside the package/function when called from APEX. Why is the function so fast when called directly in the database but so slow when called from APEX?
Any hint is appreciated.
Cheers
Peter
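One thing commonly tried with this pattern - a sketch using the question's own type names, with the WHERE clause elided as in 5.2: bulk-fetch inside the dynamic-cursor version, so row-by-row fetching across the DB link can be ruled out as the bottleneck. Validating the facility parameter against a fixed list before concatenating it is also advisable:
<code>
-- Sketch of step 5.3 with BULK COLLECT ... LIMIT instead of
-- single-row fetches (xyz_t/xyz_ct as defined in 4.2).
FUNCTION get_prod_data(facility VARCHAR2) RETURN xyz_ct PIPELINED IS
  c_rows SYS_REFCURSOR;
  TYPE row_tab IS TABLE OF xyz_t;
  l_rows row_tab;
BEGIN
  -- validate "facility" against a whitelist before using it here
  OPEN c_rows FOR
    'select xyz_t(a, b, c) from prod_tbl@' || facility;
  LOOP
    FETCH c_rows BULK COLLECT INTO l_rows LIMIT 500;
    FOR i IN 1 .. l_rows.COUNT LOOP
      PIPE ROW (l_rows(i));
    END LOOP;
    EXIT WHEN c_rows%NOTFOUND;
  END LOOP;
  CLOSE c_rows;
  RETURN;
END;
</code>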
I completed the course Database for Developers: Foundations. I did not receive an email with the certificate, but I did have the option to print the certificate. However, I want a URL that I can show as proof of credentials. Where do I get it? Please help - I invested a lot of time doing this course over the past two weeks. Thank you.
Hello All,
I have a PL/SQL function that takes two INTERVAL DAY TO SECOND parameters and returns a value of the same data type.
It works correctly as a regular PL/SQL function.
I tried to convert it to an equivalent SQL macro, both as a TABLE macro (in 19c and 23ai) and as a SCALAR macro (in 23ai). They compile without error, but when I try to use them from a SELECT statement they both fail.
I cannot perform tracing to debug the problem.
Also, DBMS_UTILITY.EXPAND_SQL_TEXT cannot be used in this case, because it works only for a TABLE SQL macro and only when the SQL runs successfully.
As an enhancement suggestion, it would be extremely useful to have the ability to see the transformed SQL query
( after the macro replacement ) in all cases, TABLE or SCALAR, successful or not, either by using
DBMS_UTILITY.EXPAND_SQL_TEXT or by any other method.
I created a script for the test case, under the link below:
https://livesql.oracle.com/ords/livesql/s/czlvwpdwtohm5u5l6njsmw4h9
and, although it appears as "Publicly shareable", the submission page DOES NOT allow me to enter it in the URL box !
Thanks a lot in advance for your help !
Iudith Mentzel
( 37 years of using Oracle, not just 35 as the dropdown on this submit page allows :) )
Starting with 21c (and continuing in 23ai), I've noticed that Oracle Database does not "automatically sort" data when working with set operators such as UNION, MINUS, and INTERSECT, while the output used to come back sorted up to 19c.
Is this behavioral change documented somewhere?
<b>Example Code:</b>
<code>
WITH
sample_data_1(c_id, c_val) AS(
SELECT 1002,'V2' FROM dual
UNION
SELECT 1001,'V1' FROM dual
),
sample_data_2(c_id, c_val) AS(
SELECT 1004,'V4' FROM dual
UNION
SELECT 1003,'V3' FROM dual
)
SELECT * FROM sample_data_1
UNION
SELECT * FROM sample_data_2
;
</code>
<b>OUTPUT in 21c and 23ai:</b>
<code>C_ID | C_VAL
---- + --
1002 | V2
1001 | V1
1004 | V4
1003 | V3</code>
<b>OUTPUT in 19c and below:</b>
<code>C_ID | C_VAL
---- + --
1001 | V1
1002 | V2
1003 | V3
1004 | V4</code>
Do you like coding but hate the scaffolding and prep work? As a developer, I often spend a considerable amount of time setting up development environments and project scaffolding before I even write a single line of code. Configuring dependencies, installing tools, and making sure everything runs smoothly across different machines can be tedious. If you find this prep work time-consuming and constraining, then...
Enter GitHub Codespaces
GitHub Codespaces is a cloud-based development environment that lets you start coding instantly in your browser, without the hassle of setting up a local machine! Whether you're working on an open-source project, collaborating with a team, or quickly prototyping an idea, Codespaces provides a streamlined workflow with minimal scaffolding.
Why GitHub Codespaces?
- Instant Development Environments: With a few clicks, you get a fully configured development environment in the cloud. No need to install dependencies manually - just launch a Codespace, and it's ready to go.
- Pre-configured for Your Project: Codespaces can use Dev Containers (.devcontainer.json) to define dependencies, extensions, and runtime settings. This means every team member gets an identical setup, reducing "works on my machine" issues.
- Seamless GitHub Integration: Since Codespaces runs directly on GitHub, pushing, pulling, and collaborating on repositories is effortless. No need to clone and configure repositories locally.
- Access from Anywhere: You can code from a browser, VS Code desktop, or even an iPad, making it an excellent option for developers who switch devices frequently.
- Powerful Compute Resources: Codespaces provides scalable cloud infrastructure, so even resource-intensive projects can run smoothly without overloading your local machine.
A Real-World Example
Imagine you’re starting a new Streamlit project on their community. Normally, you’d:
- Install Streamlit and other packages
- Set up a virtual environment
- Configure dependencies
- Ensure all team members have the same setup
With GitHub Codespaces, you can define everything in a requirements.txt and .devcontainer.json file and launch your environment in seconds. No more worrying about mismatched Python versions or missing dependencies - just open a browser and start coding. See below how I obtained this coding environment to build a Weather Streamlit app quickly and for FREE using the Streamlit Community Cloud.
All in one browser page, using GitHub, the browser edition of VS Code, and a free machine on Streamlit Community Cloud, which uses GitHub Codespaces for development.
Final Thoughts
I think GitHub Codespaces is a game-changer for modern development. It eliminates the friction of setting up environments, making collaboration effortless and speeding up development cycles. If you haven't tried it yet, spin up a Codespace for your next project - you might never go back to a traditional setup on your laptop.
Have you used GitHub Codespaces? Let me know your experience in the comments!
If you are building an AI-powered application, especially one with tools and function calling, you know that using JSON can greatly improve the accuracy of your application and help the LLM give more grounded responses. That is where this blog post comes in: it shows how easy and powerful it is to use JSON with Oracle PL/SQL, which is still very much relevant in today's AI world. Oracle Database provides native support for JavaScript Object Notation (JSON) data, allowing you to store, index, and query JSON data using standard SQL and PL/SQL.
Benefits of Using JSON in Oracle Database
- Schemaless development: Quickly react to changing application requirements without needing to change storage schemas.
- Flexible data analysis and reporting: Leverage the power of SQL and relational databases for complex data analysis and reporting.
- Rock-solid data protection and access control: Ensure data integrity and security with Oracle Database's robust features.
JSON data can be stored, indexed, and queried without defining a schema. Oracle Database supports JSON natively, providing features like transactions, indexing, declarative querying, and views. JSON data is stored using standard SQL data types such as VARCHAR2, CLOB, and BLOB. It is recommended to use an is_json check constraint to ensure column values are valid JSON instances.
PL/SQL supports SQL code, including SQL code that accesses JSON data. You can use SQL/JSON functions and conditions as built-in PL/SQL functions. Additionally, PL/SQL provides object types for JSON, allowing for fine-grained construction and manipulation of in-memory JSON data.
Let's say we have a JSON object that represents a list of books (note that JSON_ARRAY_T positions are zero-based, and elements are retrieved with get() plus a TREAT cast):
<code>
declare
  v_json   clob;
  v_parsed json_object_t;
  v_books  json_array_t;
  v_book   json_object_t;
  v_title  varchar2(100);
  v_author varchar2(100);
  v_price  number;
begin
  -- Load JSON data
  v_json := '{ "books": [
    { "title": "Book 1", "author": "Author 1", "price": 10.99 },
    { "title": "Book 2", "author": "Author 2", "price": 9.99 },
    { "title": "Book 3", "author": "Author 3", "price": 12.99 }
  ] }';

  -- Parse JSON
  v_parsed := json_object_t.parse(v_json);
  v_books  := v_parsed.get_array('books');

  -- Loop through books (array positions start at 0)
  for i in 0 .. v_books.get_size - 1 loop
    v_book   := treat(v_books.get(i) as json_object_t);
    v_title  := v_book.get_string('title');
    v_author := v_book.get_string('author');
    v_price  := v_book.get_number('price');

    -- Output book details
    dbms_output.put_line(v_title || ' by ' || v_author || ', Price: ' || v_price);
  end loop;
end;
/
</code>
The output of this script would be:
<code>
Book 1 by Author 1, Price: 10.99
Book 2 by Author 2, Price: 9.99
Book 3 by Author 3, Price: 12.99
</code>
By leveraging Oracle Database's native support for JSON data, you can efficiently store, query, and analyze JSON data using standard SQL and PL/SQL.
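A sketch of the is_json check constraint recommended above, on an illustrative table:
<code>-- Column values must be valid JSON instances.
CREATE TABLE book_orders (
  id      NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  payload CLOB CONSTRAINT book_orders_json CHECK (payload IS JSON)
);</code>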