I am using the following:
IIS as the application server
ODP.NET
Oracle 10.2.0
After 30 minutes of idle time I get the error ORA-03135: connection lost contact.
The error is the same even if I use SQL*Plus.
I asked the network group to increase the firewall idle timeout to 4 hours, which they did on port 1521.
I am not sure whether timeout values can also be set for the return ports.
Since it's a web service on IIS, the w3wp.exe worker process opens an active connection, and with the default connection pooling settings one connection is maintained.
After 30 minutes idle, the next call gets the error. I tried the same with SQL*Plus and got the same error.
Is there a way or a workaround for this?
Appreciate your help.
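A common mitigation (a sketch, not a guaranteed fix, since some firewalls ignore probe traffic) is server-side Dead Connection Detection: the database sends a small probe packet at a fixed interval so the firewall sees activity on otherwise idle connections. In the database server's sqlnet.ora:
<code>
# sqlnet.ora on the database server:
# probe idle sessions every 10 minutes so the firewall
# does not count the pooled connection as idle
SQLNET.EXPIRE_TIME = 10
</code>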
Dear Tom,
I have a question related to Oracle EBR (Edition-Based Redefinition).
We are using Oracle 19c (Enterprise Edition) and are now moving to become EBR-compliant.
Currently we have DML triggers defined on tables.
These tables have foreign key constraints defined with the ON DELETE CASCADE option.
For instance, ORDER_LINE_TABLE has a foreign key referring to ORDER_TABLE with ON DELETE CASCADE, so deleting a record in ORDER_TABLE also deletes its child rows.
As per the Oracle guidelines, the recommended approach is to define triggers on editioning views (EVs) instead of on the real tables, for many good reasons.
But in this case it is not possible to create the triggers on EVs, because the cascade operation initiated by the FK constraint is performed at the real-table level and is therefore not visible to triggers on the EV.
This limitation is described by Oren Nakdimon in the following post:
https://db-oriented.com/2025/01/08/ebr-part-13-the-trouble-with-foreign-keys-with-on-delete-clause-and-related-triggers/
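For concreteness, a minimal sketch of the setup described (table names from the question; columns are illustrative):
<code>
create table order_table (
  order_id number primary key
);

create table order_line_table (
  line_id  number primary key,
  order_id number not null,
  constraint order_line_fk foreign key (order_id)
    references order_table (order_id)
    on delete cascade  -- the cascade fires at the real-table level,
                       -- bypassing any trigger on the editioning view
);
</code>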
As a workaround for this, we have been testing defining the DML triggers on the real tables in scenarios like this. But we see some challenges when they are present, especially when using crossedition (XE) triggers during upgrades: since XE triggers operate at the real-table level, the DML triggers would see the DML operations happening on the real tables, so it is hard for us to maintain isolation during the upgrade period.
Even if the above post suggests otherwise, I doubt that having FKs with cascade together with DML triggers is a rare combination.
Do you have a better approach for defining DML triggers in such scenarios, instead of the approach I mentioned above?
Thank you & Kind Regards,
Navinth
Hello,
We have a project to migrate from Oracle Database 11g on an Exadata server to 19c on an ODA server. When we tested queries on the new server with client tools such as SQL Developer, SQL*Plus, and SQLcl, the connection was lost after 1 hour of inactivity. The only difference between the Exadata and the ODA is the network architecture: a traceroute to the Exadata goes through switch A, while to the ODA it goes first through a firewall and then through switch B. We increased the firewall timeout for port 1521 to 24 hours, but the issue remained. When we connect to the ODA directly, there is no timeout.
I added this line to my sqlnet.ora file: SQLNET.EXPIRE_TIME = 10, and it seems to work perfectly. My question: is this the right solution, or is there a more efficient one? If it is the only solution, do I need additional configuration for connections that do not go through sqlnet.ora, such as SSIS (which uses ODBC) and others that use JDBC?
Thanks in advance
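One hedged note: since SQLNET.EXPIRE_TIME is a server-side parameter, the probe packets come from the database itself, so ODBC and JDBC sessions should be covered without client changes. If a client-side keepalive is wanted instead, one option is the (ENABLE=BROKEN) clause in the connect descriptor, which also works inside a full JDBC thin URL. A sketch (host and service names are placeholders):
<code>
# tnsnames.ora entry (names are illustrative)
ODADB =
  (DESCRIPTION =
    (ENABLE = BROKEN)  # enable TCP keepalive for this connection
    (ADDRESS = (PROTOCOL = TCP)(HOST = oda-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = mydb))
  )
</code>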
We use OCI GoldenGate Microservices to replicate tables from an Oracle database to BigData.
I was tasked with creating a process to replicate any "ALTER TABLE ADD COLUMN" statements so the BigData team can modify their schema safely.
A solution using a database trigger in a dedicated schema, tested in the test environment, was rejected because:
1. Triggers are evil.
2. Database patching can have unpredictable consequences when a database trigger is present. I wonder if that is true.
I used a similar trigger to log my own DDL for journaling for 5 years in a database that went through multiple patchings without any issues.
Still, my experience is not statistically valid proof.
My colleague suggested using the AUD$UNIFIED table, but that involves creating a procedure executed by a frequently running scheduled job, because releases are not 100% regular.
Is there a better approach?
With regards,
Alex
Here is the setup:
prompt create table utility.ddl_log
--drop table utility.ddl_log purge;
create table utility.ddl_log (
ddl_time timestamp(6) default systimestamp not null
, ddl_by varchar2(30) default sys_context('userenv', 'current_user')
, os_user varchar2(100) default sys_context('userenv','os_user')
, host varchar2(100) default sys_context('userenv','host')
, ip_address varchar2(100) default sys_context('userenv','ip_address')
, module varchar2(100) default sys_context('userenv','module')
, terminal varchar2(100) default sys_context('userenv','terminal')
, operation varchar2(100)
, owner varchar2(50)
, object_name varchar2(50)
, object_type varchar2(50)
, sqltext clob
);
create or replace trigger utility.after_gg_table_ddl
after alter on database
declare
l_2run binary_integer := 0;
l_sql_id varchar2(16);
l_sql clob;
begin
--p('ora_dict_obj_name = '||ora_dict_obj_name);
select count(*)
into l_2run
from dba_users u
where 1 = 1
and profile = 'APP_USER' -- only application schemas to exclude any other schemas
and ora_dict_obj_type = 'TABLE'
and exists (
select 1
from all_log_groups a
where a.owner = u.username
and a.table_name = ora_dict_obj_name
and log_group_type is not null
)
and ora_dict_obj_owner = u.username
;
--p('l_2run = '||to_char(l_2run)||'; sid = '|| sys_context('userenv','sid'));
if l_2run > 0 then
select sql_id
into l_sql_id
from gv$session
where sid = sys_context('userenv','sid')
and rownum = 1
;
select lower(sql_fulltext)
into l_sql
from gv$sql
where sql_id = l_sql_id
and rownum = 1; --to be on the safe side as many sessions and children can have the same sql_id
-- the remaining columns are populated by their defaults
insert into utility.ddl_log (operation, owner, object_name, object_type, sqltext)
values (ora_sysevent, ora_dict_obj_owner, ora_dict_obj_name, ora_dict_obj_type, l_sql);
end if;
end;
/
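For the unified-audit alternative mentioned above, a rough sketch (assuming a unified audit policy that captures ALTER TABLE is enabled and a scheduler job polls the trail; the SQL_TEXT predicate is illustrative):
<code>
select event_timestamp, dbusername, object_schema, object_name, sql_text
from   unified_audit_trail
where  action_name = 'ALTER TABLE'
and    upper(sql_text) like '%ADD%'
order  by event_timestamp;
</code>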
Attempting to learn and make use of JSON data within a table. Stuck on the following and would really appreciate some help...
Looking for help to create a list of ITEM_IDs where a match is found in the JSON object.
Sample column data - (column: recall_control_json)
<code>{
"recall_control":
[
{
"item_id":"item1",
"item_cd1":"CA",
"item_cd2":"AP",
"item_cd3":"CO",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL3", "VAL5"]
},
{
"item_id":"item2",
"item_cd1":"CA",
"item_cd2":"AP",
"item_cd3":"EX",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL2"]
},
{
"item_id":"item3",
"item_cd1":"CA",
"item_cd2":"TW",
"item_cd3":"CO",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL2", "VAL3"]
},
{
"item_id":"item4",
"item_cd1":"CA",
"item_cd2":"TW",
"item_cd3":"EX",
"descr":"Description text here...",
"recall_list": ["VAL1", "VAL2", "VAL4"]
}
]
}</code>
I'm trying to write a query that returns a list of ITEM_IDs where the RECALL_LIST contains VAL3, or, failing that, to learn why it can't be done and/or how it could be done.
So far, I have been able to return all the ITEM_ID values with this query:
<code>select json_query(rt.recall_control_json, '$.recall_control.item_id' with wrapper) as recallList
from recallTbl rt
where recall_id = 502;</code>
RETURNS -- ["item1","item2","item3","item4"]
But I have not been able to write the WHERE clause so that I get only the ones I need.
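One possible approach (a sketch using JSON_TABLE; table and column names from the question, data types assumed): unnest the recall_control array together with its recall_list array, then filter on the list value.
<code>select jt.item_id
from   recallTbl rt,
       json_table(rt.recall_control_json, '$.recall_control[*]'
         columns (
           item_id varchar2(50) path '$.item_id',
           nested path '$.recall_list[*]'
             columns (recall_val varchar2(20) path '$')
         )) jt
where  rt.recall_id = 502
and    jt.recall_val = 'VAL3';</code>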
I am trying to integrate Microsoft Office 365 Outlook's SMTP with Oracle APEX 22.1 (running on Amazon RDS for Oracle 19c).
I have used the following configuration on Apex:
SMTP server name: smtp.office365.com
SMTP port: 587
Username: <my-email-id@outlook.com>
Password: <generated App password after 2 factor authentication>
use TLS: Yes
I have uploaded the correct MS root and intermediate certificates to the auto-login wallet and created the required network ACLs.
When I try to send email using APEX (via SQL*Plus), I do not get any error, but the email never reaches the inbox; instead, the mail queue in APEX gets populated.
<code>begin
apex_util.set_security_group_id (p_security_group_id => 5140893825405459);
APEX_MAIL.SEND(p_from => 'awsrahul2024@outlook.com',
p_to => 'awsrahul2024@outlook.com',
p_subj => 'Test Mail2',
p_body => 'This is a test email');
apex_mail.push_queue();
END;
/</code>
PL/SQL procedure successfully completed.
When I try to validate the email configuration, I get the following error.
<code>SQL> begin
  2    APEX_INSTANCE_ADMIN.validate_email_config;
  3  end;
  4  /
begin
*
ERROR at line 1:
ORA-29279: SMTP permanent error: 535 5.7.139 Authentication unsuccessful, basic
authentication is disabled. [BM1P287CA0002.INDP287.PROD.OUTLOOK.COM
2025-05-21T21:47:12.528Z 08DD989AFE00C8F6]
ORA-06512: at "APEX_220100.WWV_FLOW_MAIL", line 1284
ORA-06512: at "SYS.UTL_SMTP", line 57
ORA-06512: at "SYS.UTL_SMTP", line 142
ORA-06512: at "SYS.UTL_SMTP", line 446
ORA-06512: at "APEX_220100.WWV_FLOW_MAIL", line 1268
ORA-06512: at "APEX_220100.WWV_FLOW_MAIL", line 2588
ORA-06512: at "APEX_220100.WWV_FLOW_INSTANCE_ADMIN", line 1671
ORA-06512: at line 2</code>
Kindly advise. Thank you!
From the Oracle 10g Concepts Guide:
"Oracle recommends that unique indexes be created explicitly, and not through
enabling a unique constraint on a table.
Alternatively, you can define UNIQUE integrity constraints on the desired columns.
Oracle enforces UNIQUE integrity constraints by automatically defining a unique
index on the unique key. However, it is advisable that any index that exists for
query performance, including unique indexes, be created explicitly."
What's the difference between creating a unique constraint while creating the table (the unique constraint will create a unique index to enforce integrity anyway) and creating the unique index explicitly?
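For reference, a minimal sketch of the two approaches (table, constraint, and index names are illustrative):
<code>
-- constraint declared inline; Oracle creates the unique index implicitly
create table t1 (x number, constraint t1_uk unique (x));

-- index created explicitly, then the constraint is told to use it;
-- dropping or disabling the constraint later will not drop this index
create table t2 (x number);
create unique index t2_uk_ix on t2 (x);
alter table t2 add constraint t2_uk unique (x) using index t2_uk_ix;
</code>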
With row-based replication (binlog_format = ROW), the function generates duplicate sequence values; in mixed mode (binlog_format = MIXED), no duplicates occur. We are using AWS RDS MySQL 8.0.22.
CREATE DEFINER=`abc`@`%` FUNCTION nextval(seq_name varchar(100)) RETURNS bigint
BEGIN
DECLARE cur_val bigint(20);
SELECT
sequence_cur_value INTO cur_val
FROM
sequence_data
WHERE
sequence_name = seq_name
;
IF cur_val IS NOT NULL THEN
UPDATE
sequence_data
SET
sequence_cur_value = IF (
(sequence_cur_value + sequence_increment) > sequence_max_value,
IF (
sequence_cycle = TRUE,
sequence_min_value,
NULL
),
sequence_cur_value + sequence_increment
)
WHERE
sequence_name = seq_name
;
END IF;
RETURN cur_val;
END
Hello, I'm having a rather strange problem:
A single Windows user is unable to connect to a database hosted on the WINDOWSPROD server. The error is: ORA-12637: Unable to receive packet; in the alert.log:
ORA-00609: Could not attach to incoming connection
ORA-12699: Native service internal error
The same user can connect to the database hosted on the other Windows server; both servers are in the same domain.
Does anyone have any ideas?
Dear Tom,
I have a question. I have a profile, say MYPROFILE, in the DBA_USERS view.
I'm selecting which users have this profile:
select username from dba_users where profile='MYPROFILE';
User1
User2
User3
Now I want to check whether all these 3 users have the CONNECT role, using DBA_ROLE_PRIVS.
Suppose USER1 and USER2 are granted the CONNECT role; how do I check that in a single query, along with the unmatched one, which is USER3?
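One way (a sketch using an outer join; profile and role names as in the question):
<code>select u.username,
       nvl2(rp.granted_role, 'YES', 'NO') as has_connect
from   dba_users u
       left join dba_role_privs rp
         on  rp.grantee = u.username
         and rp.granted_role = 'CONNECT'
where  u.profile = 'MYPROFILE'
order  by u.username;</code>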
Thanks
Nite
Hello,
In this query :
<code>select id, to_char(txn_date, 'yyyymmdd'), count(*)
from table1
where to_char(txn_date, 'yyyymmdd') > '20250101'
group by id, to_char(txn_date, 'yyyymmdd')
order by to_char(txn_date, 'yyyymmdd');</code>
Will the to_char(txn_date, 'yyyymmdd') be resolved four times or only once?
If four times, then will the query below be more efficient?
<code>with q1 as
(
select /*+ materialize */
id, to_char(txn_date, 'yyyymmdd') dt_yyyymmdd
from table1
where to_char(txn_date, 'yyyymmdd') > '20250101'
)
select q1.id, q1.dt_yyyymmdd, count(*)
from q1
group by q1.id, q1.dt_yyyymmdd
order by q1.dt_yyyymmdd;</code>
Thank you for your time and expertise.
I"m looking for guidelines on how to architect a
new APEX solution for migrating legacy
Oracle reports to APEX.
We have 67 seperate municipal government entities
in 67 seperate Oracle instances. Some are 12c some
are 19c. They are spread across 4 servers.
Table structures are identical across all 67
instances.
Do we need to install APEX in all 67 instances?
How would we configure ORDS?
Can anyone point me to any whitepapers and/or
use cases for how to architect the best
APEX framework for this ?
I have a package that, in case Oracle APEX is installed, calls a procedure attempting to send a PWA notification, and this procedure requires certain APEX objects to exist.
In order to avoid errors at compile time in case Oracle APEX is not installed, I embedded the code within a block that is compiled only if a PLSQL_CCFLAGS identifier returns TRUE, and this flag indicates whether or not APEX is installed.
Now, this leads to a potential problem: the flag must be initialized at the session level before compiling the code, and I would like to avoid the code being silently excluded just because I forgot to set the flag in advance. I wish there were special constants for optional components of the database such as APEX, so that I could avoid this type of trick, but until then I stick to the following method.
Here the $ERROR directive comes in handy. Right after the BEGIN statement of the procedure, I put this block of pseudo-code:
<code>begin
  -- ensure conditional compilation is based on proper PLSQL_CCFLAGS configuration
  $IF $$APEX IS NULL $THEN
    $ERROR q'[PLSQL_CCFLAGS named APEX is undefined, execute ALTER SESSION SET PLSQL_CCFLAGS = 'APEX:TRUE|FALSE']' $END
  $END
  -- conditional compilation in case APEX is installed
  $IF $$APEX $THEN
    ...
  $ELSE
    -- APEX not installed
    ...
  $END
end;</code>
After compiling the code, you can ensure that it contains the expected logic by calling the following procedure:
<code>set serveroutput on
EXEC DBMS_PREPROCESSOR.PRINT_POST_PROCESSED_SOURCE('PROCEDURE','TOOLS','SEND_NOTIFICATION');</code>
In case you forgot to set the flag, here is what you get at compile time:
PLS-00179: $ERROR: PLSQL_CCFLAGS named APEX is undefined, execute ALTER SESSION SET PLSQL_CCFLAGS = 'APEX:TRUE|FALSE'
I ran into this problem with 1% of our records while converting views from SQL Server to Oracle: my testing found different results when doing a compare during UAT.
We have a system that needs to express time rounded to tenths of hours because of industry norms in aviation.
In the example below you can see I am finding the difference between two dates, but the result seems to round incorrectly. The usual way to test these issues is to convert the output to a char to see if there is an unseen precision issue.
In this case, I know that Oracle is computing the RAW_DIFF value as .649999999..., which is why it appears to round incorrectly. What I found surprising is that RAW_CHAR did not return .649999999.
<b>My question is: what is the best practice for avoiding this case? I can see how an analyst writing a query would expect ROUND(<val>,1) to just work, since this is a 1% kind of issue.</b>
START_DATE 5/7/2025 10:46:00 PM
START_CHAR 2025-05-07 22:46:00
END_DATE 5/7/2025 10:07:00 PM
END_CHAR 2025-05-07 22:07:00
RAW_DIFF 0.65
RAW_CHAR .650000000000 -- actually .649999999 depending on the query tool
SINGLE_ROUND 0.6
DOUBLE_ROUND 0.7
<code>
create table ROUND_TEST
(
id NUMBER generated always as identity,
start_date DATE,
end_date DATE
);
insert into round_test
(start_date, end_date)
values
(to_date('5/7/2025 10:46:00 PM','MM/DD/YYYY HH:MI:SS PM'), to_date('5/7/2025 10:07:00 PM','MM/DD/YYYY HH:MI:SS PM'));
select t.start_date,
to_char(t.start_date, 'YYYY-MM-DD HH24:MI:SS') start_char,
t.end_date,
to_char(t.end_date, 'YYYY-MM-DD HH24:MI:SS') end_char,
(t.start_date - t.end_date) * 24 as raw_diff,
to_char((t.start_date - t.end_date) * 24,'9,999.999999999999') as raw_char,
round((t.start_date - t.end_date) * 24, 1) as single_round,
round(round((t.start_date - t.end_date) * 24, 3), 1) as double_round
from round_test t
</code>
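One common workaround (a sketch, assuming elapsed times are always whole minutes): round to integer minutes first, so the division and the final ROUND operate on an exact decimal value instead of the truncated expansion of 39/1440.
<code>
-- 39 minutes -> round(39)/60 = 0.65 exactly -> rounds to 0.7
select round(round((t.start_date - t.end_date) * 24 * 60) / 60, 1) as tenth_hours
from   round_test t;
</code>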
Tom,
Is there a way I can find the first day and last day of a given quarter?
In a given year, there are 4 quarters.
If I select first quarter then I should be able to get
the first day and last day of the first quarter
example:
quarter = 1 and year is 2007. expected result is 01/01/2007 - 03/31/2007.
similarly, quarter = 2 and year 2007, then the expected result is 04/01/2007 - 06/30/2007.
Thanks in advance.
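One common approach (a sketch; :y is the year and :q the quarter number, 1 through 4):
<code>select add_months(to_date(:y || '-01-01', 'YYYY-MM-DD'), (:q - 1) * 3) as first_day,
       add_months(to_date(:y || '-01-01', 'YYYY-MM-DD'), :q * 3) - 1  as last_day
from   dual;</code>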
Hi, Tom
I wrote the following procedure. There are two placeholders, which means two bind variables,
and the two placeholders have the same name; that is to say, the names of the placeholders are identical:
they are both MYTEST.
In the procedure, I assign a value to the placeholders via DBMS_SQL.BIND_VARIABLE only once,
but if I use EXECUTE IMMEDIATE to execute the dynamic SQL, I have to assign values
to the two placeholders separately even though their names are the same.
I'd like to know whether this DBMS_SQL usage (two placeholders with the same name, bound only once) is correct.
I have done many experiments with this usage and I got the correct result.
Thanks a lot
Best Regards
<code>create table test(id1 number,id2 number);
insert into test values(1,2);
insert into test values(2,1);
insert into test values(2,3);
commit;
SQL> select * from test;
ID1 ID2
---------- ----------
1 2
2 1
2 3
set serveroutput on
declare
v_sql varchar2(32767);
v_cur pls_integer;
v_execute pls_integer;
v_column pls_integer;
v_count pls_integer;
V_ID pls_integer := 1;
begin
v_cur := dbms_sql.open_cursor;
v_sql := 'select count(*)
from (select *
from test
where id1=:MYTEST
or id2=:MYTEST)';
dbms_sql.parse(v_cur,v_sql,dbms_sql.native);
dbms_sql.bind_variable(v_cur,'MYTEST',V_ID);
dbms_sql.define_column(v_cur,1,v_column);
v_execute := dbms_sql.execute(v_cur);
if dbms_sql.fetch_rows(v_cur) > 0 then
dbms_sql.column_value(v_cur, 1, v_count);
end if;
dbms_sql.close_cursor(v_cur);
dbms_output.put_line('count is '||to_char(v_count));
end;
/
count is 2
PL/SQL procedure successfully completed.</code>
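For comparison, a sketch of the EXECUTE IMMEDIATE version mentioned above: each placeholder occurrence needs its own USING value, even when the names are identical.
<code>declare
  v_count pls_integer;
  v_id    pls_integer := 1;
begin
  execute immediate
    'select count(*) from (select * from test where id1 = :MYTEST or id2 = :MYTEST)'
    into v_count
    using v_id, v_id;  -- one value per placeholder occurrence, matched by position
  dbms_output.put_line('count is ' || to_char(v_count));
end;
/</code>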
Hello, a few questions:
1. All of the scripts I have written years ago use both address and hash_value to join v$session and v$sqlarea.
What I don't quite understand is why it's not sufficient to just join on hash_value (or sql_id).
Why was/is address also needed?
Doc ID 43716.1 seems to indicate both should be included when doing the join.
SQL_ADDRESS RAW(4|8) Used with SQL_HASH_VALUE to identify the SQL statement that is currently being executed.
----> Joins to <<View:V$SQL>> . ADDRESS / HASH_VALUE
SQL_HASH_VALUE NUMBER Used with SQL_ADDRESS to identify the SQL statement that is currently being executed.
----> Joins to <<View:V$SQL>> . ADDRESS / HASH_VALUE
2. If I wanted to instead use v$sqlstats, it seems the best way to join v$session to that view is on sql_id?
Note my use case for doing these joins in the above two questions is to get the full SQL text.
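For question 2, a minimal sketch of the sql_id join (restricted to sessions currently executing SQL):
<code>select s.sid, s.serial#, st.sql_fulltext
from   v$session s
       join v$sqlstats st on st.sql_id = s.sql_id
where  s.sql_id is not null;</code>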
3. Why was Bug 27760729 (SQL_FULLTEXT in v$sqlstats not actually showing the full SQL text) closed out as not a bug?!
Hi, I have a sequence number-skipping problem.
I have this sequence. The cache is 20 and it is NOORDER.
<code>
create sequence seq_serial_no minvalue 1 maxvalue 999 start with 2 increment by 1 cycle cache 20 noorder;
</code>
The current situation that has arisen is:
I have two servers A and B. They connect to the same database instance.
Sometimes the sequence increments normally, but sometimes it jumps, and every time it jumps, it jumps to the LAST_NUMBER. Here are two examples.
The first example:
server value -- comment
B 201 2025/04/23 9:14
A 202 2025/04/23 9:22
A 221 2025/04/23 10:35 --skip
B 222 2025/04/23 11:08
A 241 2025/04/23 13:22 --skip
B 242 2025/04/23 15:13
A 261 2025/04/23 17:41 --skip
The second example:
server value -- comment
A 541 2025/05/08 14:36
B 542 2025/05/08 15:12
A 561 2025/05/09 9:28 -- skip
B 562 2025/05/09 10:18
A 563 2025/05/09 10:46
A 581 2025/05/12 9:17 -- skip
B 582 2025/05/12 9:23
A 583 2025/05/12 10:30
B 601 2025/05/12 14:53 -- skip
A 602 2025/05/12 15:19
The skipping of numbers looks like it has something to do with the cached values being lost. May I ask if my guess is correct? If it is, what would cause the cached sequence values to be flushed from the SGA?
Hope you're doing well.
I'm reaching out to get your inputs on statistics gathering in our Oracle Autonomous Database (version 19c), which we use to support our integration workloads.
We have a few very large tables (approximately 70 million records each) that serve as history tables for archiving and auditing purposes. On a daily basis, we dump data from staging tables into these history tables. Since these tables are not frequently queried, we have historically not gathered statistics on them regularly.
However, we've recently observed some performance degradation when querying these tables.
To address this, we are considering running GATHER_SCHEMA_STATS nightly on the entire schema. We assume this could help improve performance.
But we are not very familiar with the DBMS_STATS APIs. Based on the documentation, we are planning to run the following:
<code>
DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'XXDB'
, estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE
, method_opt => 'FOR ALL COLUMNS SIZE AUTO'
, cascade => TRUE);
</code>
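A per-table variant may also be worth considering (a sketch; ORDER_HISTORY is a hypothetical table name), so the nightly job targets only the history tables touched by the daily load instead of the whole schema:
<code>
begin
  dbms_stats.gather_table_stats(
    ownname          => 'XXDB',
    tabname          => 'ORDER_HISTORY',  -- hypothetical history table
    estimate_percent => dbms_stats.auto_sample_size,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => true);
end;
/
</code>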
Could you please let us know if this is the right approach to gathering schema stats? And by any chance, do you have any automation scripts created for this purpose that we could refer to?
Thanks in advance.
I want to upload multiple XLSX files, each containing multiple sheets. The data from each sheet should be inserted into a table with the same name as the sheet (the tables already exist) using Oracle APEX. Is this possible?