Feed aggregator

2025 Tax Change – Middle Class Still Screwed

Michael Dinh - 9 hours 17 min ago

What is the percentage difference between 12% and 22%? Many get this wrong.

This is why the middle class is screwed.

Retrieved from https://www.irs.gov/newsroom/irs-releases-tax-inflation-adjustments-for-tax-year-2025 Mar 30, 2025
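The arithmetic behind the question, spelled out (the 12% and 22% figures are from the post; the query form is just illustrative):

```sql
-- The jump from a 12% to a 22% rate is 10 percentage points,
-- but as a relative increase it is (22 - 12) / 12 = 83.3%.
SELECT 22 - 12                        AS point_difference,
       ROUND((22 - 12) / 12 * 100, 1) AS pct_increase
FROM   dual;
-- point_difference = 10, pct_increase = 83.3
```

Confusing the two is exactly the mistake the post is pointing at: "10%" understates the relative jump by a factor of more than eight.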

Flashback a Pluggable Database in RAC with Data Guard

Hemant K Chitale - Sat, 2025-03-29 01:48

I have published a new video, Flashback PDB in RAC with DG.

Here are some important considerations and commands when you want to use Flashback:

1.  Ensure that you have FLASHBACK_ON in the database (at both the Primary and the Standby).  If you don't have FLASHBACK_ON, ensure that you create a restore point with GUARANTEE FLASHBACK DATABASE at both the Primary and the Standby.  (In the latter case, the Flashback SCN / point in time of the Standby must be lower than that of the Primary.)

2.  Monitor the FRA (Recovery Area, in V$FLASH_RECOVERY_AREA_USAGE) and the oldest Flashback time (in V$FLASHBACK_DATABASE_LOG).  If necessary, increase DB_FLASHBACK_RETENTION_TARGET and DB_RECOVERY_FILE_DEST_SIZE to ensure that you don't run out of space for the Flashback logs!

3.  If you use dgmgrl -- i.e. a Data Guard Broker configuration -- all commands to stop/resume Redo Shipping and to stop/resume Redo Apply must be issued through dgmgrl.  Do not use SQL commands.

4.  At the Standby, the Flashback command uses the STANDBY clause to indicate that you are flashing back a Standby.
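The considerations above can be sketched as commands.  This is a minimal sketch, assuming a PDB named pdb1 and a restore point named before_change (both names are illustrative); confirm the exact syntax against your version's documentation:

```sql
-- Check whether Flashback logging is on, and watch space / retention:
SELECT flashback_on FROM v$database;
SELECT * FROM v$flash_recovery_area_usage;
SELECT oldest_flashback_scn, oldest_flashback_time FROM v$flashback_database_log;

-- If FLASHBACK_ON is not enabled, create guaranteed restore points
-- (at both Primary and Standby; the Standby's SCN must be lower):
CREATE RESTORE POINT before_change GUARANTEE FLASHBACK DATABASE;

-- Flash back the PDB; on the Standby the command carries the STANDBY clause
-- (stop/resume Redo Apply through dgmgrl if the Broker is in use):
FLASHBACK PLUGGABLE DATABASE pdb1 TO RESTORE POINT before_change;          -- at the Primary
FLASHBACK STANDBY PLUGGABLE DATABASE pdb1 TO RESTORE POINT before_change;  -- at the Standby
```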


Some older blog posts on FLASHBACK DATABASE are at Blog Series on Flashback Database.


Categories: DBA Blogs

SQL effectivity metrics

Tom Kyte - Fri, 2025-03-28 02:25
Dear Ask Tom team members. Is it possible to measure SQL effectiveness via a computed throw-away row count? It's clear that the optimizer tries to build the plan to minimize throw-away rows for the next steps in the plan, so it tries to send only relevant rows to the next operation. I get that a huge throw-away at the beginning is very beneficial for the next steps. My intent is to find big SQLs (in size of queried data and running time) that do a lot of ineffective work on irrelevant data, in order to find terrible design of tables, partitioning, indexes and all data access paths. So the findings should be something like a step in the plan and a ratio of rowsource_result (rows_sent_from_plan_step) / rowsource_out (rows_sent_to_next_step, i.e. the parent operation). I know it's possible to see it in the plan statistics, via the actual_rows metric in the plan step. But is there a method to get those metrics for all schema SQLs, in a similar way to the top ten SQLs measured by CPU, I/O or memory consumption? I know that such a metric could be misleading; it should be for finding ineffective database design and SQLs that waste a lot of work and resources. I tried some SQLs based on SQL plan statistics, but I'm not sure if they're precise. Thank You. Tomas.
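One hedged sketch of such a query (the views and columns are standard, but the ratio definition and the size threshold are my assumptions; note that LAST_OUTPUT_ROWS is only populated when plan statistics are gathered, e.g. with STATISTICS_LEVEL=ALL or the gather_plan_statistics hint):

```sql
-- For each plan step, compare the rows it produced with the rows its
-- parent step passed on; a high ratio suggests throw-away work.
SELECT c.sql_id,
       c.id               AS step_id,
       c.operation,
       c.object_name,
       c.last_output_rows AS rows_produced,
       p.last_output_rows AS rows_kept_by_parent,
       ROUND(c.last_output_rows / NULLIF(p.last_output_rows, 0), 1) AS throwaway_ratio
FROM   v$sql_plan_statistics_all c
       JOIN v$sql_plan_statistics_all p
         ON p.sql_id       = c.sql_id
        AND p.child_number = c.child_number
        AND p.id           = c.parent_id
WHERE  c.last_output_rows > 100000        -- arbitrary "big row source" cut-off
ORDER  BY throwaway_ratio DESC NULLS LAST;
```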
Categories: DBA Blogs

GenAI: How to get started

DBASolved - Thu, 2025-03-27 13:14

In this post we are going to look at some items related to getting started with Generative AI. This is mostly […]

The post GenAI: How to get started appeared first on DBASolved.

Categories: DBA Blogs

PLS-00201: identifier 'APEX_DATA_EXPORT.T_EXPORT' must be declared

Tom Kyte - Wed, 2025-03-26 14:18
Hello everyone, I am trying to create a CSV file, attach it to an email and send the email, but I am receiving the following error: PLS-00201: identifier 'APEX_DATA_EXPORT.T_EXPORT' must be declared. Below is the code I am attempting to run:

<code>declare
    l_context      apex_exec.t_context;
    l_export       apex_data_export.t_export;
    l_mail_id      NUMBER;
    l_workspace_id NUMBER;
    name           varchar2(225);
begin
    select LNAME || '_' || FNAME || '_' || MNAME
      into name
      from IN_PROCESSING
     where :P23_PERS_ID = PERS_ID;

    l_workspace_id := apex_util.find_security_group_id (p_workspace => 'OPERATIONS MANAGEMENT');
    apex_util.set_security_group_id (p_security_group_id => l_workspace_id);

    l_context := apex_exec.open_query_context (
        p_location  => apex_exec.c_location_local_db,
        p_sql_query => 'select i.EMPLOYEEID,
                               i.LNAME || '', '' || i.FNAME || '' '' || i.MNAME as NAME,
                               ASSIGNED_ORG, PHONE_NUM,
                               (select p.LNAME || '', '' || p.FNAME || '' '' || p.MNAME
                                  from PERSONS p
                                 where i.SUPV = p.DISPLAY_NAME) as SUPERVISOR_NAME,
                               i.OFFICE_SYM, i.POSITION_TYPE,
                               i.DUTY_TITLE || '' '' || i.RANK_GRADE as DUTY_TITLE_RANK_GRADE,
                               CYBERAWARE_DATE,
                               decode(i.GAIN_TYPE, ''Foreign National'', ''Yes'', NULL) as FOREIGN_NATIONAL,
                               COMPANY_NAME, CONTR_NO, CONTR_EXP_DATE
                          from IN_PROCESSING i
                         where PERS_ID = '||:P23_PERS_ID||';',
        p_file_name => ''||name||'_test.csv');

    l_export := apex_data_export.export (
        p_context => l_context,
        p_format  => apex_data_export.c_format_csv);

    apex_exec.close (l_context);

    l_mail_id := APEX_MAIL.SEND (
        p_to        => 'me@mail',
        p_from      => 'csv_test@us.af.mil',
        p_body      => 'words',
        p_body_html => '<h2>Report is attached</h2>',
        p_subj      => 'Example Report');

    APEX_MAIL.ADD_ATTACHMENT (
        p_mail_id    => l_mail_id,
        p_attachment => l_export.content_blob,
        p_filename   => ''||name||'_test.csv',
        p_mime_type  => 'text/csv');

    apex_mail.push_queue;
end;</code>

And below is the error received:

ORA-06550: line 3, column 17: PLS-00201: identifier 'APEX_DATA_EXPORT.T_EXPORT' must be declared
ORA-06550: line 0, column 0: PL/SQL: Compilation unit analysis terminated
ORA-06512: at "SYS.DBMS_SQL", line 1721
1. declare 2. l_c...
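A common first check for this PLS-00201 (my assumption, not confirmed from the thread): APEX_DATA_EXPORT only exists in APEX 20.2 and later, and the parsing schema must be able to see the APEX packages. A quick version check:

```sql
-- If this returns something older than 20.2, APEX_DATA_EXPORT.T_EXPORT
-- does not exist yet in this instance:
SELECT version_no FROM apex_release;
```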
Categories: DBA Blogs

Oracle DB 23ai Free Connection Pool in Python

Andrejus Baranovski - Tue, 2025-03-25 14:56
I describe how to connect to Oracle DB from Python, and explain why a DB connection pool is important for better performance. The connection is made in python-oracledb's thin mode, without installing Oracle Client.


Impdp with remap_table has run a full import

Tom Kyte - Tue, 2025-03-25 02:10
Hi Tom, an impdp command with remap_table in a parfile was run (with sqlplus, on Oracle Linux), but it ran a full import. We've lost the impdp logs. How could we get information about the impdp command that was run and clarify the issue? Thanks a lot. Regards!
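With the logs gone, one hedged starting point is whatever the database still remembers about the job; if the Data Pump master table survived, the job is still listed:

```sql
-- JOB_MODE shows whether the job really ran as FULL rather than TABLE/SCHEMA:
SELECT owner_name, job_name, operation, job_mode, state
FROM   dba_datapump_jobs;
```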
Categories: DBA Blogs

Oracle Scheduler - Lethargic

Tom Kyte - Tue, 2025-03-25 02:10
On my test database, the Oracle Scheduler seems a bit lethargic. In my case, I am inserting data into 6 different tables (a million-plus rows in each table); these are done as insert into / select from via a stored procedure. I have a package with 6 procedures, and in the package I create 6 jobs to do the insertion. At any given point only 4 jobs are running, and they seem much slower than if I run these 6 procedures from a SQL*Plus command line. I created a new job class for these jobs and my job_queue_processes parameter is currently set to 160. Is there something I can do to make these move faster via the Scheduler? Thanks BC, CT, MI
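A few hedged diagnostics for this kind of throttling (the parameter names are standard; whether the Resource Manager is the actual limiter depends on the active plan):

```sql
-- What could be capping concurrent jobs:
SELECT name, value
FROM   v$parameter
WHERE  name IN ('job_queue_processes', 'resource_manager_plan', 'cpu_count');

-- How many Scheduler jobs are actually running right now:
SELECT COUNT(*) AS running_jobs FROM dba_scheduler_running_jobs;
```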
Categories: DBA Blogs

Sub queries not working if db link used

Tom Kyte - Tue, 2025-03-25 02:10
hi Tom, when a remote table is referred to in a sub query, it does not retrieve the data and throws an error. Refer to the SQL below: I am trying to get the latest record, so I am ordering by DATEM in the sub query and taking the first record in the outer SQL. Here, OWNERDOCUMENTS used in the FROM clause is a synonym created for a remote table.

The SQL below is not working:

<code>select ownercode, img_path_back, img_path_front, DELETED_FLG, useridm, DATEM
FROM (
    select ownercode, img_path_back, img_path_front, customer_name,
           NVL(DELETED_FLG,'N') DELETED_FLG, useridm, DATEM
    from OWNERDOCUMENTS
    where OWNERCODE = NVL('xxxx', OWNERCODE)
    Order By DATEM DESC
)
WHERE ROWNUM = 1;</code>

The SQL below is working:

<code>select ownercode, img_path_back, img_path_front, customer_name,
       NVL(DELETED_FLG,'N') DELETED_FLG, useridm, DATEM
from OWNERDOCUMENTS
where OWNERCODE = NVL('xxxx', OWNERCODE)
Order By DATEM DESC;</code>

What could be the issue? Please advise.
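One workaround sketch (an assumption on my part, not from the thread): on 12c and later, the row-limiting clause avoids the inline view + ROWNUM pattern over the database link entirely:

```sql
-- Same "latest record" intent, without wrapping the ORDER BY in an inline view:
SELECT ownercode, img_path_back, img_path_front,
       NVL(deleted_flg, 'N') AS deleted_flg, useridm, datem
FROM   ownerdocuments
WHERE  ownercode = NVL('xxxx', ownercode)
ORDER  BY datem DESC
FETCH  FIRST 1 ROW ONLY;
```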
Categories: DBA Blogs


Subscribe to Oracle FAQ aggregator