Feed aggregator

temporary tablespace

Tom Kyte - 4 hours 33 min ago
I have two questions about the temporary tablespace. 1) If I'm running out of space in the temporary tablespace, how can I find out which user or transaction is using it? 2) Sometimes, even though there are obviously no transactions running, the temporary tablespace never gets released unless I shut down the database. It seems to me that some dead transactions never release the resource, and their status is "pseudo". Any idea what has happened, and how should I handle this other than shutting down the database?
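
For question 1, a starting point (a sketch, not part of the original question) is to join V$TEMPSEG_USAGE to V$SESSION to see which sessions currently hold temporary segments, and how much space each one is using:

<code>select s.sid, s.serial#, s.username, s.program,
       u.tablespace, u.segtype,
       round(u.blocks * t.block_size / 1024 / 1024) as mb_used
from   v$tempseg_usage u
       join v$session s
         on  s.saddr   = u.session_addr
         and s.serial# = u.session_num
       join dba_tablespaces t
         on  t.tablespace_name = u.tablespace
order  by mb_used desc;</code>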
Categories: DBA Blogs

SUCCESSFUL_ROW_COUNT and ERROR_ROW_COUNT in APEX_AUTOMATION_LOG

Tom Kyte - 4 hours 33 min ago
I have a simple table with a numeric column: <code>create table foo (i integer); /</code> In my APEX application I have a very simple automation (scheduled, always, daily at midnight), and my action code is: <code>begin insert into foo values (1); commit; execute immediate 'insert into foo values (sysdate)'; commit; end; /</code> When I execute this and then query <b>FOO</b>, there is one value (the 1). <b>APEX_AUTOMATION_LOG</b> displays the value "FAILURE" in <b>STATUS</b> (because of inconsistent datatypes). But why do the columns <b>SUCCESSFUL_ROW_COUNT</b> and <b>ERROR_ROW_COUNT</b> both show 0?
Categories: DBA Blogs

Timestamp appends "000" when formatting.

Tom Kyte - 4 hours 33 min ago
I have the following query. I am wondering why Oracle suffixes "000" to the given timestamp value when I format it.

SQL> alter session set NLS_TIMESTAMP_TZ_FORMAT='dd/mm/yyyy hh24:mi.ssxff';
SQL> alter session set nls_timestamp_format='dd/mm/yyyy hh24:mi.ssxff';
SQL> select systimestamp from dual;

SYSTIMESTAMP
----------------------------------------------------------------------
27/02/2002 15:51.12.539880

SQL> select to_timestamp('27/02/2002 15:51.12.539880', 'dd/mm/yyyy hh24:mi.ss.ff') from dual;

TO_TIMESTAMP('27/02/200215:51.12.539880','DD/MM/YYYYHH24:MI.SS.FF')
---------------------------------------------------------------------
27/02/2002 15:51.12.539880000

Why do I get the last three zeroes, and how do I suppress them? Note: I return the "systimestamp" from a procedure and the returned value always has "000" suffixed. I need to suppress the last three 0s. How do I do it? Thanks.
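
The trailing zeroes appear because TO_TIMESTAMP returns a TIMESTAMP with the default fractional precision of 9 digits, while SYSTIMESTAMP carries only 6 (microseconds), so the FF format element pads to nine digits. Two ways to limit the output to six digits (a sketch, not part of the original question):

<code>select to_char(to_timestamp('27/02/2002 15:51.12.539880', 'dd/mm/yyyy hh24:mi.ss.ff'),
               'dd/mm/yyyy hh24:mi.ss.ff6') from dual;

select cast(to_timestamp('27/02/2002 15:51.12.539880', 'dd/mm/yyyy hh24:mi.ss.ff')
            as timestamp(6)) from dual;</code>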
Categories: DBA Blogs

Quarantined SQL Plans for PS/Queries

David Kurtz - Mon, 2025-02-24 11:46

This follows on from my previous post, Management of Long Running PS/Queries Cancelled by Resource Manager.

From 19c, on Engineered Systems only (such as Exadata and Exadata Cloud Service), 'Oracle Database automatically quarantines the plans for SQL statements terminated by … the Resource Manager for exceeding resource limits.

The Resource Manager can set a maximum estimated execution time for a SQL statement, for example, 20 minutes. If a statement execution exceeds this limit, then the Resource Manager terminates the statement. However, the statement may run repeatedly before being terminated, wasting 20 minutes of resources each time it is executed.

Starting in Oracle Database 19c, if a statement exceeds the specified resource limit, then the Resource Manager terminates the execution and “quarantines” the plan. To quarantine the plan means to put it on a blacklist of plans that the database will not execute. Note that the plan is quarantined, not the statement itself.'

[Oracle SQL Tuning Guide: 4.7 About Quarantined SQL Plans]

When an attempt is made to execute a quarantined execution plan an error is produced: ORA-56955: quarantined plan used.

Oracle does not log timed-out or quarantined queries.  On V$SQL and V$SQLSTAT, AVOIDED_EXECUTIONS records the number of times a SQL query has been prevented from running.  However, cursors do not stay in the library cache for long on a PeopleSoft system, due to the continual parsing of dynamically generated SQL statements.  As of Oracle 19.20, AVOIDED_EXECUTIONS is not recorded in AWR on DBA_HIST_SQLSTAT.
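
While a cursor is still in the library cache, a quick check (a sketch, not from the original post) is:

select sql_id, plan_hash_value, avoided_executions, sql_quarantine
from   v$sql
where  avoided_executions > 0
or     sql_quarantine is not null;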

If an error condition occurs during a PSQUERY process run on the process scheduler, the process terminates with an error.  The SQL statement and the error message are recorded in the Application Engine message log.  As demonstrated in the previous blog, we can detect such failures by inspecting the message log of any PSQUERY process that did not finish successfully (i.e. it has an end time, but its run status does not indicate success).

Matching Quarantine Directives to Cancelled Queries

Quarantine directives are visible via DBA_SQL_QUARANTINE, including SQL text and execution plan hash value.
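
For example, the directives can be listed with a query such as this (a sketch, not the full script from this post):

select name, signature, plan_hash_value, created, last_executed, enabled
from   dba_sql_quarantine
order by created desc;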

It would be useful to know which quarantine directive relates to which query.  However, it is not easy to match the SQL in the PeopleSoft message log entries with that in the quarantine entries. The SQL text in the message log can have multiple spaces. These are stripped out in the DBA_SQL_QUARANTINE view where the normalised SQL statement is visible.  

The timestamps of the creation and last execution of the quarantine directive are stored on it, but matching these to when the query was running can result in false positives.

Also, you cannot tell which quarantine directive was created by which consumer group.  The maximum CPU timeout is recorded on DBA_SQL_QUARANTINE.  In my example, it is only possible to distinguish the originating consumer group because the two consumer groups happen to have different timeouts.

A method that matches exactly, though only for some queries, is to:

  • Obtain ASH data for queries terminated by the resource manager.  It can be matched by timestamp, MODULE, and ACTION (provided that EnableAEMonitoring is enabled).
    • Profile the ASH data to find the statement that took the longest during each PSQUERY process; that is almost certain to be the SQL query.  Thus we obtain the SQL_ID, SQL plan hash value and consumer group ID.  It is also possible to determine the total database time for the query, and the database time spent on CPU.
    • The consumer group name can then be obtained from DBA_HIST_RSRC_CONSUMER_GROUP.
  • Obtain the SQL text for the long-running query.  It would also have to be captured by an AWR snapshot.  This does often occur, because it was a long-running SQL statement, but it is not certain.
  • The signature for the SQL statement (not the force-matching signature) can be derived using the SQLTEXT_TO_SIGNATURE function in DBMS_SQLTUNE.  This can be matched to the signature recorded in DBA_SQL_QUARANTINE (see the sketch after this note).
  • You can have multiple quarantine directives for the same signature (i.e. the same SQL statement), each with a different plan hash value.  
    • NB: The plan hash value on DBA_SQL_QUARANTINE is the adaptive plan hash value (with all of its possible plan alternatives), and therefore it matches SQL_FULL_PLAN_HASH_VALUE in the ASH data, and not SQL_PLAN_HASH_VALUE (the plan that actually executed).
Note that:
  • When a query executes until it times out, producing ORA-00040, you can usually find the SQL statement in the AWR repository and so generate the signature to exactly match the quarantine record.
  • When an attempt is made to run a quarantined statement and execution plan, you usually cannot find the SQL statement because it hasn't run for long enough to produce an ASH sample.  Even when it has, you also have to rely on the statement having been captured previously by AWR.  Those conditions only come together occasionally.
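
A sketch of the signature match, assuming the statement was captured by AWR (this is not the full script referenced below):

select t.sql_id, q.name, q.plan_hash_value
from   dba_hist_sqltext t
       join dba_sql_quarantine q
         on q.signature = dbms_sqltune.sqltext_to_signature(t.sql_text)
where  t.sql_id = '&sql_id';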
This matching process is done by this query: message_log_checker-psquery2.sql.  Below is a sample of its output.
  • We can see the quarantine directives that were created when the resource manager cancelled a query, raising error ORA-00040: active time limit exceeded - call aborted.  
  • However, where quarantine directives have prevented SQL from executing, raising error ORA-56955: quarantined plan used, the ASH data from the event that originally created the directive has since been purged, so we cannot use it to match directives.
Mon Feb 24                                                                                                                                                           page    1
                                                     PS/Queries terminated by Resource Manager/quarantined Execution Plan

                                                          Public/                                                              ASH
                                                          Private                                Ru Oracle      Exec    ASH    CPU Message Log
     P.I. DBNAME  OPRID    RUNCNTLID                      Query   QRYNAME                        St Err. #      Secs   Secs   Secs Date/Time Stamp              SQL_ID
--------- ------- -------- ------------------------------ ------- ------------------------------ -- --------- ------ ------ ------ ---------------------------- -------------
   SQL Plan   Full Plan                                                                                      CPU
 Hash Value  Hash Value Consumer Group Name                   SIGNATURE Quarantine Name                      Time  Quarantine Created           Quarantine Last Executed
----------- ----------- ------------------------- --------------------- ------------------------------------ ----- ---------------------------- ----------------------------
…
 31452465 FSPROD  USR0001  GBR_JRNL_LN_DETAIL_ACCT        Public  GBR_JRNL_LN_DETAIL_ACCT        10 ORA-56955     36     10     10 20-FEB-25 06.28.03.578218 PM 0wm9g6xkys12h
 4009529842   653370716 PSQUERY_BATCH_GROUP         5584654620166156419

 31451318 FSPROD  USR0002  GBR_JRNL_LN_DETAIL_ACCT        Public  GBR_JRNL_LN_DETAIL_ACCT        10 ORA-56955     36               20-FEB-25 02.36.38.590841 PM


 31451292 FSPROD  USR0002  GBR_JRNL_LN_DETAIL_ACCT        Public  GBR_JRNL_LN_DETAIL_ACCT        10 ORA-56955     36               20-FEB-25 02.30.51.777351 PM


 31438602 FSPROD  USR0003  1                              Private DK_GBR_GL_DETAIL_NEW           10 ORA-00040  28316  28275  14203 18-FEB-25 11.39.19.502412 PM 5qrbrf775whky
 3446094907  3491308607 PSQUERY_BATCH_GROUP        16266909742923016361 SQL_QUARANTINE_f3gxc76u48u59d019243f 14400 18-FEB-25 11.49.33.091081 PM

 31437925 FSPROD  USR0004  16                             Private TB_TEST2                       10 ORA-00040  17684  17654  17541 18-FEB-25 06.09.14.060615 PM 06xqrgj18wp05
 4256462904  2062199471 PSQUERY_BATCH_GROUP         6341132966559464532 SQL_QUARANTINE_5h01uuscnrg2n7aeaaaaf 14400 18-FEB-25 06.17.20.679769 PM

 31437907 FSPROD  USR0004  16                             Private TB_TEST2                       10 ORA-00040  17694  17695  17592 18-FEB-25 06.04.05.942470 PM 4yurn75y2p0t2
 3232504707   121066138 PSQUERY_BATCH_GROUP         4966087806133732884 SQL_QUARANTINE_49usqjjc001hn0737529a 14400 18-FEB-25 06.17.24.869185 PM
 …
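
If a directive turns out to be unwanted, it can be disabled or dropped with DBMS_SQLQ. A sketch, reusing one of the directive names from the output above:

begin
  dbms_sqlq.alter_quarantine(
    quarantine_name => 'SQL_QUARANTINE_f3gxc76u48u59d019243f',
    parameter_name  => 'ENABLED',
    parameter_value => 'NO');
  -- or remove the directive entirely:
  -- dbms_sqlq.drop_quarantine('SQL_QUARANTINE_f3gxc76u48u59d019243f');
end;
/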

Virtual Nuisance

Jonathan Lewis - Mon, 2025-02-24 11:09

Here’s a note that’s been sitting as a draft for the last 7 years – finally dusted down, retested, and published. Following my new publishing paradigm, I’m going to tell you the conclusion at the start, then show the working that demonstrates the point.

Summary

When cloning a table, or a subset of a table, the simplest method in the “good old days” was to execute a call to: “create table cloned_table as select * from source_table where …” (CTAS), then create any indexes that you needed on the clone. Unfortunately, if you’ve created any (visible) virtual columns, the CTAS will generate real columns holding the calculated values.

There are various workarounds to this, of course, though not all viable for all systems.

  • The “obvious and sensible” option is (used to be) to identify just the real columns in the select list. But then you’d have to remember to check and add all the virtual bits afterwards (including the invisible bits).
  • A “lazy” option that I’ve seen suggested a few times is to drop the virtual columns, clone the table, then recreate the virtual columns – a slightly better bet might be to make them invisible before cloning, then make them visible (on the source) afterwards. Either way, though, this isn’t a viable strategy for a production/shared developer table, and there are cases where it simply can’t be used and, again, you have to recreate the virtual columns on the clone.

A fairly nice strategy appeared in the 12.2 feature “create table XX for exchange …” which doesn’t have to be aimed at a partitioned table. This will create the virtual, including invisible, columns. You still have to insert the data, however, which means you still have to create a select list that covers all the “real” columns in the right order; and you still have to recreate the indexes and some constraints.

Discussion and demonstration

Quoting from the SQL Language Reference manual 23c for the Create Table statement, you will find under the subtitle “for exchange with table” the statement:

This clause lets you create a table that matches the structure of an existing partitioned table.

This is true – but not the whole truth. It will do what it says, but it can also be used to create a table that matches the structure of an existing non-partitioned table. This means that when you want to clone a table (or subset of a table) that includes virtual columns and hidden columns, it’s a little easier to get it right. You still have to get the data into the table, though, and recreate indexes and some constraints.

The following script shows two attempts at cloning – first using a simple “create as select * ..”, the second using a “create for exchange” followed by steps to populate the copy – first the wrong way then the right way.

We start by creating a source table (a subset of the view all_objects) then add a couple of indexes, a primary key (which also means not null) constraint, and a couple of virtual columns (one invisible).

rem
rem     Script:         virtual_nuisance.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Feb 2018
rem     Purpose:        
rem
rem     Last tested 
rem             23.4.0.0 (23ai)
rem             19.11.0.0
rem             12.2.0.1
rem             11.2.0.4

prompt  ====================
prompt  Basic Table Creation
prompt  ====================

create table source_table nologging
as 
select  * 
from    sys.all_objects 
where   rownum <= 10000
;

-- alter table source_table modify object_id not null;
alter table source_table add constraint st_pk primary key(object_id);

create unique index st_u1 on source_table(object_type, object_id);
create        index st_i1 on source_table(object_name);

prompt  ===================
prompt  Add virtual columns
prompt  ===================

alter table source_table 
        add (
                new_name_visible 
                        generated always as ( initcap(object_name) ) virtual,
                new_name_invisible invisible 
                        generated always as ( lower(object_name) ) virtual
        )
;

Now we create two copies of the table, one with a simple “create as select *”, the other as a “create for exchange” which means we then have to populate it, which we try in two ways – first using “insert select *” which fails, leading us to insert a list of named columns:

prompt  ============================================================
prompt  Clone table - which instantiates the visible virtual column
prompt  ============================================================

create table clone_table as select * from source_table;

prompt  ====================================
prompt  Create for exchange (12.2+)
prompt  ====================================

create table exchange_table for exchange with table source_table;

insert into exchange_table select * from source_table;
-- *
-- ERROR at line 1:
-- ORA-54013: INSERT operation disallowed on virtual columns
-- Help: https://docs.oracle.com/error-help/db/ora-54013/
--

insert into exchange_table (
        owner, object_name, subobject_name, object_id, data_object_id, object_type,
        created, last_ddl_time, timestamp, status, temporary, generated,
        secondary, namespace, edition_name, sharing, editionable, oracle_maintained,
        application, default_collation, duplicated, sharded, created_appid, 
        created_vsnid, modified_appid, modified_vsnid
)
select
        owner, object_name, subobject_name, object_id, data_object_id, object_type,
        created, last_ddl_time, timestamp, status, temporary, generated,
        secondary, namespace, edition_name, sharing, editionable, oracle_maintained,
        application, default_collation, duplicated, sharded, created_appid, 
        created_vsnid, modified_appid, modified_vsnid
from
        source_table
/

-- 10000 rows created.

As a check we can select a few rows in the exchange table to see that the virtual columns exist and hold the expected data. (Remember that the invisible columns can be selected by name, even though they are invisible in a describe or a “select *”.)

column object_name              format a24
column new_name_visible         format a24
column new_name_invisible       format a24

select
        object_name, object_type, new_name_visible, new_name_invisible
from
        exchange_table
where
        rownum <= 5
/

OBJECT_NAME              OBJECT_TYPE             NEW_NAME_VISIBLE         NEW_NAME_INVISIBLE
------------------------ ----------------------- ------------------------ ------------------------
ALL_OBJECT_TABLES        VIEW                    All_Object_Tables        all_object_tables
ALL_OBJECT_TABLES        SYNONYM                 All_Object_Tables        all_object_tables
ALL_ALL_TABLES           VIEW                    All_All_Tables           all_all_tables
ALL_ALL_TABLES           SYNONYM                 All_All_Tables           all_all_tables
DBA_TABLES               VIEW                    Dba_Tables               dba_tables

5 rows selected.

Now check which of the three interesting columns exist in the three tables, and what their state is:

break on table_name skip 1

select
        table_name, column_name, 
        virtual_column "Virtual", 
        hidden_column  "Hidden" 
from    user_tab_cols
where   table_name in ( 'SOURCE_TABLE', 'CLONE_TABLE', 'EXCHANGE_TABLE')
and     column_name in ('OBJECT_NAME', 'NEW_NAME_VISIBLE','NEW_NAME_INVISIBLE')
order by
        table_name desc, column_name desc
/

TABLE_NAME                COLUMN_NAME          Vir Hid
------------------------- -------------------- --- ---
SOURCE_TABLE              OBJECT_NAME          NO  NO
                          NEW_NAME_VISIBLE     YES NO
                          NEW_NAME_INVISIBLE   YES YES

EXCHANGE_TABLE            OBJECT_NAME          NO  NO
                          NEW_NAME_VISIBLE     YES NO
                          NEW_NAME_INVISIBLE   YES YES

CLONE_TABLE               OBJECT_NAME          NO  NO
                          NEW_NAME_VISIBLE     NO  NO


8 rows selected.

Critically, the clone (CTAS) table doesn’t have the invisible virtual column, and the visible virtual column is no longer virtual. Both the virtual columns have appeared in the exchange table, both virtual, and the invisible column is still invisible (hidden).

But what of the indexes (including the implicit primary key index):

select 
        table_name, index_name, uniqueness
from
        user_indexes
where   table_name in ( 'SOURCE_TABLE', 'CLONE_TABLE', 'EXCHANGE_TABLE')
order by
        table_name desc, index_name
;

TABLE_NAME                INDEX_NAME           UNIQUENES
------------------------- -------------------- ---------
SOURCE_TABLE              ST_I1                NONUNIQUE
                          ST_PK                UNIQUE
                          ST_U1                UNIQUE


3 rows selected.

Neither of the copies has any indexes – which is not a surprise for the clone (CTAS) table but is a little disappointing for a table we have (nominally) created in anticipation of an exchange with partition. But that is exactly what the manual would have told us would happen – if we’d looked.

There is one other detail to be careful of when cloning tables – will columns that report as NOT NULL in the source table be declared NOT NULL in the clone/exchange table? The answer depends on why they show as NOT NULL.

In my example the object_id in the source table shows up as NOT NULL when described, but doesn’t in either of the copies because it is a side-effect of the primary key constraint and that constraint has not been recreated in the copying process.
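
Whichever method you use, the final step is to put the constraints and indexes back by hand. A sketch for the exchange table (the constraint and index names here are mine, mirroring the originals):

alter table exchange_table add constraint et_pk primary key(object_id);

create unique index et_u1 on exchange_table(object_type, object_id);
create        index et_i1 on exchange_table(object_name);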

Find the Dependencies

Tom Kyte - Mon, 2025-02-24 09:23
Hi, We are looking to take a schema (abc) from our database to a different server with a new schema name. I am looking to find what the dependencies on the abc schema are in our packages/procedures/functions, and whether there is anything else I should be looking at. Regards, Balaji
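
A starting point (a sketch, not part of the original question) is DBA_DEPENDENCIES, which lists objects outside the schema that reference objects inside it:

<code>select owner, name, type, referenced_name, referenced_type
from   dba_dependencies
where  referenced_owner = 'ABC'
and    owner <> 'ABC'
order  by owner, name;</code>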
Categories: DBA Blogs

ORA-06502 exception_dump tracing does not work??

Tom Kyte - Mon, 2025-02-24 09:23
Hello Connor/Chris, Hope you are doing well. I came across the MOS note ORA-6502 Research Events (Doc ID 2911572.1), which recommends an option to enable a trace to get more details about the ORA-06502 error. However, I am not able to reproduce the behaviour on 19.25 and 19.24 databases. Despite enabling the trace as described in the MOS note using the command below, no trace file is created. <code>alter session set events '6502 trace name exception_dump forever'; </code> I am using the PL/SQL block below to reproduce the ORA-06502 error: <code>declare rt NUMBER(1); begin rt := 11; end; /</code> What am I missing? Thanks in advance
Categories: DBA Blogs

Multitenant Architecture and Application Container: Drop failed install

Tom Kyte - Mon, 2025-02-24 09:23
Using Oracle RDBMS 19.19, APEX 24.2.2, ORDS 24.4.0.r3451601. Goal: create an application container to contain APEX and ORDS. Issue: one of the install steps fails, and I am not able to get rid of the application in "installing" status. I have tried doing a <code>ALTER PLUGGABLE DATABASE APPLICATION BEGIN UNINSTALL; ... Then a Miracle Occurs ALTER PLUGGABLE DATABASE APPLICATION END UNINSTALL; </code> That places the application in "UNINSTALLED" status, and as such, I then can't attempt to install it again, which is documented behavior. So what I've been having to do is drop the application container and try again. My goal is to install APEX, patch APEX, and install ORDS. I've hit errors in each step and resolved them, and then I hit errors in a following step. I have tried several things without success. Is there any way to get rid of the in-process application install?
Categories: DBA Blogs

DDL Event Trigger and alter package compile

Tom Kyte - Mon, 2025-02-24 09:23
I have a DDL trigger (after ddl on database) for analyzing package (spec) source code. It works perfectly if I write the whole command, like <code>create or replace package xyz as procedure foo; end; /</code> I can then pick up the source code inside my trigger with something like <code> ... <<get_source_code>> declare n binary_integer; sql_text ora_name_list_t; code clob; begin n:=ora_sql_txt(sql_text); for i in 1..n loop code:=code||sql_text(i); end loop; dbms_output.put_line( substr(code,1,1000) ); end get_source_code; ... </code> But when I programmatically recompile the package with... <code>alter package xyz compile specification;</code> ...my "code" is just this command ("alter package xyz compile specification"). Is there a way to get the compiled source code when I manually run the alter package command? (It seems that dictionary information is only available after the trigger, not during execution... so this is not an option.) Any ideas?
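
One possible workaround (a sketch, not from the original question): for an ALTER ... COMPILE the source text is unchanged and already present in the dictionary, so the trigger can fall back to reading it from DBA_SOURCE using the event attribute functions:

<code>select text
from   dba_source
where  owner = ora_dict_obj_owner
and    name  = ora_dict_obj_name
and    type  = ora_dict_obj_type
order  by line;</code>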
Categories: DBA Blogs

Cannot create index on virtual column of type RAW or RAW(30)

Tom Kyte - Mon, 2025-02-24 09:23
Dear Tom, I cannot create an index on a virtual column of type RAW or RAW(30). I've used virtual columns several times in the past and I also indexed them, with no issues so far. This is the first attempt to do so with a RAW column, but I fail. Why do I want to index a virtual column of type RAW(30)? - Please read the section at the bottom. When I try to create the index on a virtual column of type RAW, then Oracle returns "ORA-01450: maximum key length (6397) exceeded" even though the actual keys are shorter. I guess this is due to the fact that the length of a RAW column is only limited by the technical length limit of type RAW (which happens to be 32767). <code>create or replace function calc_sort_key return raw deterministic is l_result raw(30) := hextoraw('41424344'); -- 'ABCD' for the sake of simplicity begin -- the "real" function calculates a sort key of 30 bytes return l_result; end; / create table x ( id number(38) primary key, sort_key as (calc_sort_key()) ) / create unique index sort_key_ui on x (sort_key) / -- ORA-01450: maximum key length (6398) exceeded </code> Ok, then let's try to create a virtual column of limited size, like RAW(30). When I do so, then Oracle returns "ORA-12899: value too large for column "SORT_KEY" (actual: 32767, maximum: 30)". What confuses me with ORA-12899 is: according to the description of ORA-12899 (https://docs.oracle.com/en/error-help/db/ora-12899/?r=19c) this generally happens upon insert or update. But I did not even attempt to insert or update any rows in the table; in fact, the table is empty - it happened after a DDL statement (create table or alter table)! <code>drop table x / create table x ( id number(38) primary key, sort_key raw(30) as (calc_sort_key()) ) / -- ORA-12899: value too large for column "SORT_KEY" (actual: 32767, maximum: 30) </code> I also compared the behaviour with a physical column of type RAW(30) <code>drop table x / create table x ( id number(38) primary key, physical_column raw(30) ) / create unique index physical_column_ui on x (physical_column) / </code> No problem so far. I re-declare the function calc_sort_key such that it returns x.physical_column%type instead of RAW, add a virtual column of type RAW(30) and index it: <code>create or replace function calc_sort_key return x.physical_column%type deterministic is l_result raw(30) := hextoraw('41424344'); -- 'ABCD' begin return l_result; end; / alter table x add ( sort_key raw(30) as (calc_sort_key()) ) / -- ORA-12899: value too large for column "SORT_KEY" (actual: 32767, maximum: 30) alter table x add ( sort_key as (calc_sort_key()) ) / create unique index sort_key_ui on x (sort_key) / -- ORA-01450: maximum key length (6398) exceeded </code> It seems impossible to create a virtual column of type RAW and have an index on that column. Let's see if I can create a functi...
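
One workaround to experiment with (an untested sketch, not from the original question) is to bound the virtual column's derived length by wrapping the function call in a CAST, instead of declaring the column type explicitly:

<code>alter table x add (
  sort_key as (cast(calc_sort_key() as raw(30)))
)
/
create unique index sort_key_ui on x (sort_key)
/</code>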
Categories: DBA Blogs

Backing Up your Linux Desktop to Google Drive using Deja Dup on Ubuntu

The Anti-Kyte - Mon, 2025-02-24 01:30

Let’s face it, ensuring that you have regular backups of your home computer to a remote location can seem like a bit of a chore…right up to the time when you’ve managed to lose your data and need a fast and reliable way to get it back.
If you’re using Ubuntu, this task is made much easier by Deja Dup Backups.
Additionally, the cloud storage that comes free with mail accounts from most of the major tech companies can offer a reliable remote home for your backups until you need them. Handy when you don’t have a spare large-capacity thumb drive kicking around.

Using Ubuntu 24.04.02 LTS, what we’re going to look at here is :

  • Making your Google Drive accessible from the Ubuntu Gnome Desktop
  • Configuring Deja Dup to backup files to the Google Drive
  • Testing the backup
  • Scheduling backups to run regularly

For this purpose we’ll be using Ubuntu’s standard backup tool – Deja Dup ( listed as “Backups” if you need to search for it in the Gnome Application Menu).
We’ll also be making use of a Gmail account other than that used as the main account on an Android phone.
You may be relieved to know that we’ll be able to accomplish all of the above without opening a Terminal window ( well, not much anyway).

Accessing Google Drive from the Gnome Desktop

I’ve chosen to use a gmail account in this example because :

  • Google offers 15GB of free space compared to the 5GB you get with Microsoft or Apple
  • You don't have to use the Google account you use for your Android phone, so you don't have to share the space with the phone's data backups.

First you need to open Settings – either from the Gnome Application Menu, or from the Panel at the top of the screen – and select Online Accounts :

Now sign in with your Google account…

At this point you’ll be asked to specify what Gnome is allowed to access. In this case I’ve enabled everything :

If you now open Files on your computer, you should see your Google Drive listed :

In this case, I can also see that I have the full 15GB available :

Now we have a remote location to backup to, it’s time to figure out what to backup and how often…

Choosing what to Backup

OK, we’ve got some space, but it’s not unlimited so we want to take some care to ensure we backup only files we really need.
By default, the Backup tool will include everything under the current user’s $HOME apart from the Rubbish Bin and the Downloads folder :

Lurking under your $HOME are a number of hidden folders. In the main, these contain useful stuff such as application configuration ( e.g. web browser bookmarks etc). However, there may also be stuff that you really don’t need to keep hold of.

You remember what I said about not having to open a Terminal window ? Well, if you really want to know how big each of these directories is, you can just right-click each one in Files and then look at its properties. Alternatively, if you're happy to open a Terminal Window, you can run :

du --max-depth=1 --human-readable $HOME |sort -hr

This will output the total size of $HOME followed by the total size of each immediate child directory in descending order :

In this case, I can see that the whole of my home is a mere 80M so I’m not going to worry about excluding anything else from the backup for the moment.
One final point to consider. By default, Deja Dup encrypts backup files using GPG (Gnu Privacy Guard), which will compress files as well as encrypting them. This means that the actual space required for the backup may be considerably less than the current size on disk.

Configuring the Backup

Before we start, we’re going to create a test file so that we can make sure that our backup is working once we’ve configured it.
I’ve created a file in $HOME/Documents...

…which contains…

Now look for “Backups” in the Gnome Application Menu :

Open the tool and click on Create Your First Backup :

Next, we need to select which folders to backup. As mentioned above, Deja Dup uses a reasonably sensible default :

You can add folders to both the Folders to Back Up and Folders to Ignore lists by clicking on the appropriate “+” button and entering either an absolute path, or a relative path from your $HOME.

Next, we need to specify the destination for the backup.
Looks like Deja Dup has been clever enough to detect our mapped Google Drive :

The destination folder name defaults to the name of the current machine.
When the backup runs, the folder will be created automatically if it doesn’t already exist.

When we click the Forward button, Deja Dup may well ask for the installation of additional packages :

…before asking for access to your Google account :

The next screen asks for an Encryption password for your backup. A couple of important points to note here are :

  1. Password protecting the backup is an outstandingly good idea. Whilst this screen does allow you to turn this off, I’d recommend against it – especially if the backup is being stored on a remote server.
  2. Whilst you can ask Deja Dup to remember the password so that you won’t need it if you need to restore files from a backup on the current system, the password is essential if you want to restore the backed up files on another system…or the current system after some major disaster renders your existing settings unavailable. MAKE SURE YOU REMEMBER IT.

When we click Forward, we should now be running our first backup :

Once the backup completes, we’ll have the option to schedule them at regular intervals. First though, let’s take a look at the files that have been created :

Let’s make sure that we can restore files from this backup.
Click on the Restore tab in Deja Dup…

… and you should be able to see all of the files in our backup, including the test file we created earlier :

If we navigate to the Documents folder we can select files to restore :

Click on Restore and select the location to restore to ( in this case, we’ll put it directly into $HOME ) :

Again, Click Restore and you’ll get a confirmation that your file is back :

…which we can confirm in Files :

Scheduling Backups

Deja Dup can perform backups automatically at a set interval.
In the Overview tab, select Back Up Automatically.

If you then open the “hamburger” menu and select Preferences, you get additional options, namely :

  • Automated Backup Frequency
  • Keep Backups

Once you’ve configured them, these backups will take place automatically on the schedule you’ve specified.

Guide on Taking Social Security: 62 vs. 67 vs. 70

Michael Dinh - Sun, 2025-02-23 11:55

https://www.schwab.com/learn/story/guide-on-taking-social-security

https://www.schwab.com/learn/story/can-you-afford-to-retire-early

House is paid off and property tax is $2,100.

Please do your own research for your situation, as there is no one-size-fits-all.

Building a Real-Time Weather App with Streamlit and Open-Meteo

Kubilay Çilkara - Sun, 2025-02-23 02:02
see app here: https://data-exchange.streamlit.app/

I recently embarked on a project to build a real-time weather application, and I wanted to share my experience using Streamlit and Open-Meteo. The goal was to create a dynamic web app that provides users with up-to-date weather information and 10-day weather forecasts, all while leveraging the convenience of cloud-based development.

Streamlit: Rapid Web App Development in the Browser

One of the most compelling aspects of Streamlit is its ability to facilitate rapid web application development directly within the browser. For this project, I utilized GitHub Codespaces, which provided a seamless development environment. This eliminated the need for complex local setups and allowed me to focus solely on coding.

Key Advantages of Using GitHub Codespaces:

  • Browser-Based Workflow: All development activities were performed within a web browser, streamlining the process.
  • Dependency Management: Installing necessary Python packages was straightforward using pip install.
  • Version Control: Integrating with Git enabled efficient version control with git commit and push commands.

Data Acquisition with Open-Meteo

To obtain accurate and current weather data, I employed the Open-Meteo API. This API offers a comprehensive set of weather parameters, allowing for detailed data visualization.

Visualizing Weather Data with Streamlit's Graph Capabilities

Streamlit's built-in graph visualization tools proved to be highly effective. Creating dynamic charts to represent weather data was quick and efficient. The clarity and responsiveness of these visualizations significantly enhanced the user experience.

Technical Implementation:

The application was developed using Python, leveraging Streamlit for the front-end and the requests library to interact with the Open-Meteo API. The workflow involved:

  1. Fetching weather data from the Open-Meteo API.
  2. Processing the data to extract relevant information.
  3. Utilizing Streamlit's charting functions to create graphical representations.
  4. Deploying the application via the Streamlit sharing platform.

Observations:

  • Streamlit's simplicity and ease of use allowed for rapid prototyping and development.
  • GitHub Codespaces provided a consistent and reliable development environment.
  • Open-Meteo API provided accurate data.
  • Streamlit sharing made deployment very easy.

Conclusion:

This project demonstrated the power of Streamlit and Open-Meteo for building data-driven web applications. The ability to develop entirely within a browser, combined with powerful visualisation tools, made the development process efficient and enjoyable.

You can view the final app here: https://data-exchange.streamlit.app/


Categories: DBA Blogs

Little Things That Excite Me

Michael Dinh - Sat, 2025-02-22 10:50

In addition to maintaining databases, I maintain cars (just the simple stuff).

Which one is the best?

Oracle DDL: Duplicate not null constraints

Tom Kyte - Fri, 2025-02-21 09:13
Hi Tom, This is a DDL question/observation which puzzles me a bit. Given the following successful creation of a table in a schema PL, on Oracle 19c: <code>create table pl.constraint_test ( id integer, end_date date, constraint end_date_not_null check (end_date is not null) ); Table PL.CONSTRAINT_TEST created. </code> Now, consider the following two DDL statements: -- --DDL Statement 1: --I would expect this to fail, since the table has a constraint with that name already. <code>alter table pl.constraint_test add constraint END_DATE_NOT_NULL check (END_DATE is not null);</code> --DDL Statement 2: --I would expect this to fail, since this constraint (the semantics: that "END_DATE is not null") is on the table already. <code>alter table pl.constraint_test add constraint END_DATE_NOT_NULL2 check (END_DATE is not null);</code> The results are these: --DDL Statement 1: <code>alter table pl.constraint_test add constraint END_DATE_NOT_NULL check (END_DATE is not null); Error starting at line : 50 in command - alter table pl.constraint_test add constraint END_DATE_NOT_NULL check (END_DATE is not null) Error report - ORA-02264: name already used by an existing constraint 02264. 00000 - "name already used by an existing constraint" *Cause: The specified constraint name has to be unique. *Action: Specify a unique constraint name for the constraint. --DDL Statement 2: alter table pl.constraint_test add constraint END_DATE_NOT_NULL2 check (END_DATE is not null); Table PL.CONSTRAINT_TEST altered.</code> Consequences and thoughts: The first constraint addition fails because Oracle does not allow two constraints with the same name to be created. This is completely as expected! The second constraint addition, in my expectation, really should fail, since it is a repetition of the requirement that "END_DATE is not null". But it does not: now the table has not one but two check constraints requiring that "END_DATE is not null". For this table it will mean slower inserts and updates, since the "same" constraint will have to be checked not once but twice. With indexing, Oracle is very careful, not allowing the same set of columns, in the same order, and of the same index type, to be indexed twice. Apparently, a similar thing is not enforced for constraints. Are the results above expected, or do I have a point here? Cheers, Kåre Kristoffersen, Database Expert, Nordea Bank.
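
A sketch for spotting such duplicates (not part of the original question), using the VARCHAR2 rendering of the search condition available in recent releases:

<code>select table_name, search_condition_vc,
       count(*) as dup_count,
       listagg(constraint_name, ', ') within group (order by constraint_name) as constraints
from   user_constraints
where  constraint_type = 'C'
group  by table_name, search_condition_vc
having count(*) > 1;</code>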
Categories: DBA Blogs

How to use Union of two select queries for a database export?

Tom Kyte - Fri, 2025-02-21 09:13
I need to union two database queries for an export job. Any idea how this is formatted in a par file?
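
Data Pump does not accept a UNION directly, but from 12c the VIEWS_AS_TABLES parameter can export the result of a view. A sketch (the object names are hypothetical, not from the original question) - first create a view over the union:

<code>create view scott.my_union_v as
select * from scott.t1
union all
select * from scott.t2;</code>

then reference it in the par file:

<code>directory=DATA_PUMP_DIR
dumpfile=union_export.dmp
views_as_tables=scott.my_union_v</code>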
Categories: DBA Blogs

Delphix Data Virtualization and Masking

Yann Neuhaus - Thu, 2025-02-20 09:55

After successfully completing several customer projects involving Delphix products, Data Virtualization and Data Masking, I decided to set up a complete infrastructure internally to test the combination of both products.

Delphix Data Virtualization creates only a virtual copy of your database blocks instead of a full physical copy. For example, this allows a new, large database to be made available in only a few seconds.

Delphix Data Masking protects sensitive data by replacing it with realistic but non-sensitive values, ensuring compliance with privacy regulations while maintaining data usability for development, testing, and analytics. For reference, dbi-services has developed Swiss masking rules for customers to cover all their specific needs.

Setup

As always, setting up the prerequisites took some time, but once the environment was built, it was truly rewarding to demonstrate the combined power of these tools to my colleagues. It’s a real pleasure, like a game, building database environments in a flash, where the data is directly masked.

This allows you to rapidly provision databases to your internal or external developers, without taking any risk of a data phishing attack!

Watch this video! It's only a minute long, and you'll see how easy it is to create a new masked Oracle pluggable database, VPDB2_masked, on another server. (These tasks can also be fully automated in a pipeline using an API.)

Relevant outcomes

The Delphix pricing model is based on the size of your databases, so you can apply the same Virtualization and Masking capabilities, at least for PostgreSQL, SQL Server, and Sybase, at no additional cost. Are you interested in making your database development environments fully secure? Don't wait, dbi services has your solution!

For those who are interested in more details

1. Table data from the source database PDB1

Original data of the table

2. Table data in the new created and masked virtual database VPDB2_Masked

masked data table

3. Output of the masking job

output from a masking job

Useful information related to Delphix

Blog that presents Data Masking in detail

Customer success story

Webinar that presents Data Virtualization

The article Delphix Data Virtualization and Masking appeared first on the dbi Blog.

Management of Long Running PS/Queries Cancelled by Resource Manager CPU Limit

David Kurtz - Thu, 2025-02-20 05:46

I have written previously about using the Oracle database resource manager to prioritise the allocation of CPU to different processes in a PeopleSoft system.  I proposed a sample resource plan that can be used as a starting point to build a resource plan that meets a system's specific objectives and requirements.

This post looks at:

  • How to configure the resource manager to cancel long-running queries,
  • What happens when it does, and what PeopleSoft users experience,
  • How system administrators can monitor such queries,
  • What action they could take.

Configuring SQL Cancellation in a Resource Manager Consumer Group

The sample resource plan, PSFT_PLAN, contains various consumer groups.  It relies upon MODULE and ACTION being set by enabling PeopleSoft instrumentation (EnableAEMonitoring=1) and/or the psftapi_store_prcsinstance trigger on PSPRCSRQST (see Effective PeopleSoft Performance Monitoring).
  • PSQUERY_BATCH_GROUP
    • Applies to scheduled PSQUERY Application Engine Programs
    • Limited to 4 hours on CPU (or estimated at >= 4 hours)
  • PSQUERY_ONLINE_GROUP
    • Applies to queries run online via the PeopleSoft Internet Architecture (PIA).
    • There is no resource manager limit for this consumer group.  
      • The PIA has a session timeout (default 20 minutes).  
      • The ICQuery Tuxedo service that runs the queries also has a timeout (default 20 minutes)
    • When the resource manager cancels a SQL call, it simply raises an Oracle error that appears in a message box in the PIA without any further explanation.  I think it is better to let the PIA timeouts handle online queries in a more controlled fashion.
  • LOW_LIMITED_GROUP
    • Applies to SQL*Plus, SQL Developer and Toad.
    • Limited to 2 hours on CPU (or estimated at >= 2 hours)

Recommendations:

  • Users should generally be encouraged to schedule queries that will take more than a few minutes to run on the process scheduler.
  • Resist the temptation to increase either the PIA or ICQuery service timeout from the delivered setting of 20 minutes.

Plan Directives

Plan directives are created with DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE:
DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
  'PSFT_PLAN', 'PSQUERY_BATCH_GROUP'
  ,mgmt_p6 => 1
  ,switch_group => 'CANCEL_SQL'
  ,switch_time => 14400
  ,switch_estimate => TRUE
  ,switch_for_call => TRUE
);

DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
  'PSFT_PLAN', 'PSQUERY_ONLINE_GROUP'
  ,mgmt_p6 => 90
);

DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
  'PSFT_PLAN', 'LOW_LIMITED_GROUP'
  ,mgmt_p8 => 1
  ,switch_group => 'CANCEL_SQL'
  ,switch_time => 7200
  ,switch_elapsed_time => 7200
  ,switch_estimate => TRUE
  ,switch_for_call => TRUE
);
Four parameters control cancellation behaviour:
  • SWITCH_GROUP specifies the consumer group to which the session is switched when a switch condition is met.  If the group switches to CANCEL_SQL, the current call is cancelled, raising error ORA-00040.
  • SWITCH_TIME specifies the number of seconds on CPU (not elapsed time).
  • If SWITCH_ESTIMATE is true, the resource manager also switches group if the estimated run time is greater than the switch time.
  • SWITCH_FOR_CALL is set to true so that if the consumer group is switched, it is restored to the original consumer group at the end of the top call.  Thus a persistent session is not permanently switched.  This is important when switching an application engine server (PSAESRV) session.
Cancellation Behaviour

The resource manager can cancel long-running queries in these consumer groups, raising ORA-00040: active time limit exceeded - call aborted.
  • The query may be cancelled immediately because the estimated execution time is greater than the limit.
  • Otherwise, it is quite likely to run for an elapsed time that is greater than the CPU time limit.  Some time will be consumed in the client process, during which the database will be idle, waiting for the next fetch request from the client.
  • Some of the database time may not be on CPU because the session may be doing something else, such as physical IO (in which case the session will report being on an event other than NULL).
  • The database session may be held back by the resource manager allocating CPU to higher-priority processes, in which case the session will again not be on CPU, and will report being on event resmgr: cpu quantum.
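
While that is happening, the affected sessions and their consumer groups can be seen (a sketch, not from the original post) with:

select sid, username, resource_consumer_group, event
from   v$session
where  event = 'resmgr: cpu quantum';
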
Querying the PeopleSoft Message Log

The full message text is stored in multiple pieces in PS_MESSAGE_LOGPARM and must be reconstructed so that it can be searched for the error code.  I demonstrated this technique in another blog post: Querying the PeopleSoft Message Log with SQL.

For this analysis, I have made some alterations to the message log query (see message_log_checker-psquery.sql).

  • This query is restricted to messages generated by PSQUERY processes that did not run to success (not run status 9).
  • PeopleSoft messages are typically defined with up to 9 substitution variables, but long SQL statements can have many more entries in PS_MESSAGE_LOGPARM.  So the PL/SQL function in this query simply appends any additional log parameter rows beyond the 9 substitution variables to the end of the generated string.
  • Once the message has been generated, we can look for one of the error messages associated with the resource manager terminating a query:
    • ORA-00040: active time limit exceeded - call aborted
    • ORA-56955: quarantined plan used

It is necessary to filter by message number because, even in PS/Query, users can write invalid SQL that produces other error messages.  However, all this text processing for each row retrieved makes the query quite slow.

Here is an example output.  User USR001 has run a private query MY_TEST2 with run control 42.  It ran for 20772s (5h 46m) until it was terminated by the resource manager.  As explained above, the 4-hour limit is on CPU time, which will be less than the elapsed time.
                                                             Public/
                                                              Private                                Ru   Exec                              Msg Msg  Msg
         P.I. DBNAME  OPRID    RUNCNTLID                      Query   QRYNAME                        St   Secs DTTM_STAMP_SEC               Seq Set  Nbr
    --------- ------- -------- ------------------------------ ------- ------------------------------ -- ------ ---------------------------- --- --- ----
    MSG
    ---------------------------------------------------------------------------------------------------------------------------------------------------- 
     12395311 FSPROD  USR001   42                             Private MY_TEST2                       10  20772 10-FEB-25 04.41.47.384694 PM   1  65   30
    File: C:\PT860P13B_2403250500-retail\peopletools\src\pssys\qpm.cppSQL error. Stmt #: 8526  Error Position: 189  Return: 40 - ORA-00040: active time
     limit exceeded - call abortedFailed SQL stmt: SELECT A.LEDGER, A.FISCAL_YEAR, A.BUSINESS_UNIT, …
                                                                                                               10-FEB-25 04.41.47.421800 PM   2  50  380
    Error in running query because of SQL Error, Code=40, Message=ORA-00040: active time limit exceeded - call aborted
    …
From Oracle 19c on Exadata, timed-out statements are automatically quarantined.  If a quarantined statement is run and a quarantined execution plan is generated, then error ORA-56955 is raised immediately.  It can therefore also be detected in the logs, so the query searches for both messages.
                                                              Public/
                                                              Private                                Ru   Exec                              Msg Msg  Msg
         P.I. DBNAME  OPRID    RUNCNTLID                      Query   QRYNAME                        St   Secs DTTM_STAMP_SEC               Seq Set  Nbr
    --------- ------- -------- ------------------------------ ------- ------------------------------ -- ------ ---------------------------- --- --- ----
    MSG
    ----------------------------------------------------------------------------------------------------------------------------------------------------
     12319513 FSPROD  USR002   Transactions                   Public  GBR_JRNL_LINE_DTL_ACCT         10     25 13-FEB-25 11.13.35.746644 PM   1  65   30
    File: C:\PT860P13B_2403250500-retail\peopletools\src\pssys\qpm.cppSQL error. Stmt #: 8526  Error Position: 2783  Return: -8581 - ORA-56955: quarant
    ined plan usedFailed SQL stmt: SELECT A.FISCAL_YEAR, A.ACCOUNTING_PERIOD, A.JRNL_HDR_STATUS, B.LED
    …
                                                                                                               13-FEB-25 11.13.35.814112 PM   2  50  380
    Error in running query because of SQL Error, Code=-8581, Message=ORA-56955: quarantined plan used
I will discuss automatic SQL quarantine for PS/Query in a subsequent blog.

Opinion

So far, I have explained how to set a maximum CPU time limit for PS/Queries in a resource manager consumer group, and then how to detect the cancelled PS/Queries by examining the message log.

The final stage is to close the feedback loop: go back to the users who are producing the queries, find out what they are trying to do, and why the queries are running for such a long time.
