The Anti-Kyte

Oracle - for when it was like that when you got there
Caching in – Oracle Sequences and Performance

Fri, 2015-09-04 14:00

Several years ago, the BBC found itself a little financially embarrassed. Appropriately enough, given the topic at hand, it was a bit strapped for cache…er…cash.

In an attempt to reduce expenditure, the decision was taken to re-broadcast the entire four series of Blackadder.
Happily, this guaranteed must-see viewing every Friday evening for about six months.

Possibly as a result of this, the dialogue from the show became something of a lingua franca within the IT community, and probably still is for geeks of a certain age.

Just think, how often have you been presented with “a cunning plan” or maybe found yourself in a situation that was “stickier than sticky the stick insect stuck on a sticky bun”.

Oh, just me then.

It is with Wisdom of the Ancients distilled through the dialogue from this classic show that we will now explore the wacky world of Sequence Caching in Oracle.
What’s that ? You don’t see anything wacky about it ? Hmmm, let’s take a closer look then…

Reasons for not caching sequence values

When it comes to sequences, there are a couple of reasons usually advanced for not caching :

  • We don’t want gaps in the values sourced from the sequence
  • We don’t want to “waste” any cached values that may be discarded if not used – what if the sequence runs out of numbers ?

In addition to this, the documentation has this to say on the matter :

” NOTE – Oracle recommends using the CACHE setting to enhance performance if you are using sequences in an Oracle Real Application Clusters environment.”

From this, you may infer that caching isn’t something you really need to consider in a non-RAC environment.

We’ll come back to this shortly.

In the meantime however, a quick demonstration of …

Why values generated from sequences aren’t necessarily gap-free
create table shows
(
    id number primary key,
    year_first_shown number(4),
    title varchar2(100)
)
/

As you can see, this table has a synthetic key. Typically, we’ll use a sequence to generate the values for it.
NOTE – in 12c you can associate the sequence with the column you’re using it to populate.
For now though, we’ll just take the traditional approach…

create sequence shows_id_seq
/
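Incidentally, the 12c column association mentioned in the note above might look something like the following sketch. It won’t run on 11g, and the table name here is invented purely for illustration :

```sql
-- 12c only - an identity column takes care of the sequence for us...
create table shows_12c
(
    id number generated always as identity primary key,
    year_first_shown number(4),
    title varchar2(100)
)
/

-- ...or, alternatively, an existing sequence can be used as the column default :
-- id number default shows_id_seq.nextval primary key
```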

Now let’s add some values to the table…

insert into shows ( id, year_first_shown, title)
values( shows_id_seq.nextval, 1983, 'The Black Adder')
/

commit;

insert into shows ( id, year_first_shown, title)
values( shows_id_seq.nextval, 1986, 'Blackadder II')
/

commit;

insert into shows ( id, year_first_shown, title)
values( shows_id_seq.nextval, 1987, 'Blackadder The Third')
/

rollback;

insert into shows ( id, year_first_shown, title)
values( shows_id_seq.nextval, 1989, 'Blackadder Goes Forth')
/

commit;

insert into shows ( id, year_first_shown, title)
values( shows_id_seq.nextval, 1987, 'Blackadder The Third')
/

commit;

Notice that rollback in the middle there ? The sequence has been invoked and a value obtained but it has not actually been used.
Predictably, once we run this, we have a gap in our ID values…

select *
from shows
order by id
/

	ID YEAR_FIRST_SHOWN TITLE
---------- ---------------- ----------------------------------------------------------------------------------------------------
	 1	       1983 The Black Adder
	 2	       1986 Blackadder II
	 4	       1989 Blackadder Goes Forth
	 5	       1987 Blackadder The Third

As our sequence is specifically for the generation of Primary Key values, it has to be set to NOCYCLE (the default).
If we discard chunks of numbers through caching, doesn’t that increase the chances of the sequence reaching its maximum value and ceasing to function ?

How long will my sequence last ?

Let’s take a closer look at the sequence :

 select max_value, cache_size, cycle_flag
  2  from user_sequences
  3  where sequence_name = 'SHOWS_ID_SEQ' 
  4  /


			      MAX_VALUE CACHE_SIZE C
--------------------------------------- ---------- -
	   9999999999999999999999999999 	20 N

SQL> 

That MAX_VALUE is a big number. I wonder how long it would take to reach it if we were to use, say, 100 million values per day ?

SQL> select floor( max_value / (100000000 * 365.25)) as years    
  2  from user_sequences
  3  where sequence_name = 'SHOWS_ID_SEQ'
  4  /

				  YEARS
---------------------------------------
		     273785078713210130

SQL> 

Should be alright for a bit then.

Coming back to not worrying about caching in a non-RAC environment, it might be interesting to do a quick test.

Do you need some cache ?

I’ve got a staging table containing one million rows that I need to insert into an application table…

create table maths_stg
(
    black_adder varchar2(100),
    baldrick varchar2(100)
)
/

begin
    for i in 1..1000000 loop
        insert into maths_stg( black_adder, baldrick)
        values
        (
            'I have two beans and I add two more beans. What does that make ?',
            'A very small casserole'
        );
    end loop;
end;
/

commit;

The target table looks like this …

create table maths_questions
(
    id number primary key,
    question varchar2(100),
    answer varchar2(100)
)
/

…and the sequence is created like this ….

create sequence mathq_id_seq
    nocache
/

select cache_size
from user_sequences
where sequence_name = 'MATHQ_ID_SEQ';

CACHE_SIZE
----------
	 0

SQL> 

Despite my earlier statements about sequences, the neat-freak in me has decided to create this one with nocache specified.
After all, I’m not running this on a RAC environment, so it’s no biggie.

Anyway, let’s see just how long this load will take.
NOTE – The steps followed for this and each subsequent test are :

  1. Drop the sequence (if it exists)
  2. Drop the MATHS_QUESTIONS table (if it exists) and purge from the Recycle Bin
  3. Create the sequence with the appropriate CACHE setting
  4. Create the MATHS_QUESTIONS table
  5. Bounce the database
  6. Issue an ALTER SYSTEM SWITCH LOGFILE to minimize the effect of checkpointing
  7. Run the load

So, for a sequence with a cache size of 0 (i.e. nocache) :

set timing on
insert into maths_questions( id, question, answer)
    select mathq_id_seq.nextval, black_adder, baldrick
    from maths_stg
/

1000000 rows created.

Elapsed: 00:03:15.97

Now, just for fun, let’s try that again with the sequence cache set to the default (i.e. 20).

set timing on
insert into maths_questions( id, question, answer)
    select mathq_id_seq.nextval, black_adder, baldrick
    from maths_stg
/

1000000 rows created.

Elapsed: 00:00:31.04

Interesting. The load time has gone down from 195 seconds to 31.

Increasing the cache size to 100, the results are even faster :

set timing on
insert into maths_questions( id, question, answer)
    select mathq_id_seq.nextval, black_adder, baldrick
    from maths_stg
/

1000000 rows created.

Elapsed: 00:00:17.26

Now, I could go on. Indeed, I did : I tested with cache sizes of up to 1 million.
Unfortunately, the results weren’t consistent. In other words, a cache size of 1000 was often faster than one of 1 million, but not always.
This is despite the database re-start between runs and the switching of logfiles.
This may well have something to do with the limited resources on my test database. It’s Express Edition, remember, which means it will access at most 1GB of RAM and a single CPU core.

That said, there remains the question of exactly why increasing the sequence cache size has such a dramatic effect on performance.
This is where things get a bit…strange…

Curious results and how not to make sense of them

The next logical step in finding out what’s happening would be to do a bit of tracing. So, before executing the insert statement, I ran :

select value
from v$diag_info
where name = 'Default Trace File'
/

…to establish the name and location of the trace file of the current session. This was followed by…

exec dbms_monitor.session_trace_enable( null, null, true, true)

…to start tracing in the current session.

Finally, after the insert, I stopped tracing with :

exec dbms_monitor.session_trace_disable

I then generated tkprof output from the resulting tracefiles using the following command on the OS :

tkprof tracefile_name.trc tkp_output.txt explain=uid/mypwd@db

…where tracefile_name.trc is the name of the tracefile and tkp_output.txt is the name of the file into which the tkprof output is saved.

Even before I looked at the tkprof output, it became apparent that tracing has a rather dramatic effect on runtime, as the table below shows :

Cache Size   No Trace Time   Trace Time
----------   -------------   ----------
NOCACHE      03:15.97        18:01.22
20           00:31.04        01:24.28
100          00:17.26        00:31.01

Perhaps the most illuminating information to be gleaned from the tkprof output is this recursive statement, which is listed immediately after our insert statement :

SQL ID: 4m7m0t6fjcs5x Plan Hash: 1935744642

update seq$ set increment$=:2,minvalue=:3,maxvalue=:4,cycle#=:5,order$=:6,
  cache=:7,highwater=:8,audit$=:9,flags=:10 
where
 obj#=:1

Looking at the tkprof output for this statement is somewhat revealing.

Where the sequence is set to NOCACHE :

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute 1000000    909.24     913.93          2    1000464    2034042     1000000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total   1000001    909.24     913.93          2    1000464    2034042     1000000

With a cache size of 20 :

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  50000     58.14      58.66          0      50000     100016       50000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    50001     58.14      58.66          0      50000     100016       50000

Finally, for a cache size of 100 :

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute  10000     12.17      12.37          0      10107      21362       10000
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total    10001     12.17      12.37          0      10107      21362       10000

Given the massive distortion in the results caused by the simple act of tracing the session, these results are unlikely to be truly representative of exactly what goes on in a normal ( i.e. non-traced) session. However, the number of executions of this statement does provide an indication of just why the increase in cache size reduces runtime in this example.

I think that last statement should be qualified – an increase in sequence cache size will reduce the runtime…up to a point.
Beyond a certain cache size, you will start to encounter diminishing returns and possibly even performance degradation.
The optimum setting for the sequence cache will depend on the resources (particularly CPU) available in your target environment.

So, what, if anything, can we conclude from all this ?

Conclusions

If you have a data load that’s taking longer than expected and you can see that a statement with an SQL ID of 4m7m0t6fjcs5x is particularly active, then you may well want to look at the relevant sequence and review its cache setting.

Additionally, you may well consider whether tracing in an environment other than production would be prudent if you’re trying to track down a problem in a load job which has this characteristic.

On the plus side, unless you’re doing something particularly unusual, any NOCYCLE sequences with a default MAX_VALUE are unlikely ever to run out on you.



What’s that Skippy ? Mike’s doing too much typing to Instrument his PL/SQL code ?

Sat, 2015-08-15 06:53

Australian readers will be pleased to note that, despite the Antipodean flavour of this post, there will be no mention of The Ashes. It is a well known fact that Kangaroos are not interested in cricket.

My brother used to run a motorcycling school. One of his teaching techniques, out on the road, was to say things like “What’s that Skippy ? Mike’s left his indicator on after the turn ?”
This is in reference to Skippy the Bush Kangaroo – a children’s TV program about the adventures of the eponymous hero, with an uncanny knack of communicating life-threatening situations to humans simply by means of a few tongue-clicking sounds.
My son spent quite a bit of time with his Uncle Steve.
Uncle Steve had quite a bit of influence on said child.
As a result, I’d often be on the receiving end of the distilled wisdom of Skippy…
“What’s that Skippy ? Dad’s left his keys on the table ?”
“What’s that Skippy ? Dad’s left the eight-ball over the pocket ?”
“What’s that Skippy ? Pocket money should be going up in line with inflation ?”

Over the years, this began to seep into my internal monologue… “What’s that Skippy ? I’ve forgotten to close the cursor ?”
It is with thanks to “Uncle Steve” and the help of a know-it-all marsupial with a unique linguistic talent that I will be looking at logging in PL/SQL applications and ways of…well…doing less typing to achieve the same level of instrumentation.
Specifically, what we’ll cover is :

  • Why logging in PL/SQL is special
  • Logging error messages by magic
  • Using OWA_UTIL.WHO_CALLED_ME
  • Using PL/SCOPE to figure out where you are
  • A neater way to log runtime parameter values
  • A logging package that incorporates these techniques

The ultimate combination of all of these changes may well not be ideal in every situation.
However, Skippy has tried to reduce the instrumentation code required to an absolute minimum. After all, kangaroos don’t like typing.

Like most of the Oracle world, Skippy and I are still on 11gR2.
The sunny uplands of 12c remain, for the moment, the province of messing about in VirtualBox.
Therefore, we won’t be covering any of the 12c utilities ( e.g. UTL_CALL_STACK) here.

What’s that Skippy ? Oh yes, if you are considering a replacement for your existing logging sub-system, or even planning one from scratch, then you might want to check out the OraOpenSource Logger.

The Logging Application

Typically, a PL/SQL logging application will consist of a table that looks something like this :

create table application_message_logs
(
    log_id number not null,
    log_ts timestamp not null,
    username varchar2(30) not null,
    session_id number not null,
    program_unit varchar2(30),
    sub_program varchar2(30),
    location number,
    message_type varchar2(1) not null,
    message varchar2(4000) not null
)
/

…complete with a sequence to generate the LOG_ID values…

create sequence aml_log_id_seq
/

NOTE – I’ve kept things relatively simple here. Of course, there may be instances where you want to log a message of more than 4000 characters ( maybe when debugging a dynamic SQL statement for example). For such eventualities you’d probably have an “overflow” column for a continuation of the message.

The logging package would probably resemble this :

create or replace package log_pkg
as

    --
    -- Set the logging level
    --
    -- E(rror) = just log error messages
    -- I(nfo) = log Error and Information messages
    -- D(ebug) = log everything
    
    g_log_level varchar2(1) := 'E';
    
    procedure write_pr
    (
        i_program_unit in application_message_logs.program_unit%type default null,
        i_sub_program in application_message_logs.sub_program%type default null,
        i_location in application_message_logs.location%type,
        i_message_type in application_message_logs.message_type%type,
        i_message in application_message_logs.message%type
    );
end log_pkg;
/

create or replace package body log_pkg
as

    function log_level_to_num_fn( i_level in varchar2)
        return pls_integer
    is
        --
        -- Private function to convert the log level or message type
        -- from a character to a number
        --
    begin
        return
        case i_level
            when 'E' then 1
            when 'I' then 2
            when 'D' then 3
        end;
    end log_level_to_num_fn;
    
    procedure write_pr
    (
        i_program_unit in application_message_logs.program_unit%type default null,
        i_sub_program in application_message_logs.sub_program%type default null,
        i_location in application_message_logs.location%type,
        i_message_type in application_message_logs.message_type%type,
        i_message in application_message_logs.message%type
    )
    is
    --
    -- Check the message_type against the current log level setting
    -- (g_log_level) and if, appropriate, write the message to the log table
    --
        pragma autonomous_transaction;
    begin
        if log_level_to_num_fn( nvl(i_message_type, 'E')) > log_level_to_num_fn( g_log_level)
        then
            --
            -- Nothing to do
            --
            return;
        end if;
        insert into application_message_logs
        (
            log_id, log_ts, username, session_id,
            program_unit, sub_program, location,
            message_type, message
        )
        values
        (
            aml_log_id_seq.nextval, systimestamp, user, sys_context('userenv', 'sessionid'),
            i_program_unit, i_sub_program, i_location,
            i_message_type, substr(i_message, 1, 4000)
        );
        commit;
    exception
        when others then
            rollback;
    end write_pr;

end log_pkg;
/

Once again, for the sake of simplicity, I’ve used a package variable to control the logging level rather than something a bit more elaborate.

What’s that Skippy ? A programmer’s trapped in the Standards Document ?

Before we go any further, it’s worth stopping to consider this package.
As well as using an Autonomous Transaction, it also employs another much-despised feature of PL/SQL – the WHEN OTHERS exception handler without a re-raise of the error.

In other contexts, use of these features would be a sure-fire way of inviting the opprobrium of the QA department.
Here, however, both are entirely reasonable, not to say necessary.

If you’re adding a log record in a job that’s part way through a transaction, you definitely want to save the log record, without affecting that ongoing transaction. Indeed, even if that transaction is subsequently rolled back, you want to keep the log record.
Therefore, the autonomous transaction is, in this context, entirely appropriate.

The WHEN-OTHERS with no re-raise is probably a bit more contentious at first sight. However, consider that the package’s one procedure is used to log any messages, not simply errors.
Say, for the sake of argument that the logging table is on a different tablespace from your application tables.
There might be an occasion where an insert into this table would cause an error due to the tablespace being full.
Do you really want your batch data load to fail because there has been an unexpected error logging a message to say the job has started ?
If the answer is no, then such errors are by definition non-fatal. Therefore, the lack of a re-raise is sensible in this context.

It’s not just the code itself that is somewhat out of the ordinary when it comes to logging.
Consider the following code, in which the logging package is called.
Incidentally, this isn’t the actual algorithm used by the UK Met Office to produce a weather forecast…although sometimes I wonder…

create or replace package forecast_pkg
as
    procedure tomorrow_pr
    (
        i_forecast_date date,
        i_detail in varchar2,
        i_just_guess in boolean default true,
        o_forecast out varchar2
    );
end forecast_pkg;
/

create or replace package body forecast_pkg
as
    procedure tomorrow_pr
    (
        i_forecast_date date,
        i_detail in varchar2,
        i_just_guess in boolean default true,
        o_forecast out varchar2
    )
    is
        lc_proc_name constant application_message_logs.sub_program%type := 'TOMORROW_PR';
        l_params application_message_logs.message%type;
        l_loc pls_integer;
        l_forecast varchar2(10);
    begin
        -- record the fact that we're starting...
        log_pkg.write_pr
        (
            i_program_unit => $$plsql_unit, 
            i_sub_program => lc_proc_name, 
            i_location => $$plsql_line, 
            i_message_type => 'I', 
            i_message => 'Starting forecast for tomorrow'
        );
        -- ...and the parameter values we've been passed
        l_params := 'i_forecast_date = '||to_char(i_forecast_date, sys_context('userenv', 'nls_date_format'))||', '
            ||' i_detail = '||i_detail||', '
            ||' i_just_guess = '||case i_just_guess when true then 'TRUE' else 'FALSE' end;
        log_pkg.write_pr
        ( 
            i_program_unit => $$plsql_unit, 
            i_sub_program => lc_proc_name, 
            i_location => $$plsql_line, 
            i_message_type => 'I', 
            i_message => substr(l_params,1,4000)
        );
        l_loc := $$plsql_line;
        --
        -- Do some weather forecasting here... and throw in a debug message as well..
        --
        log_pkg.write_pr
        (
            i_program_unit => $$plsql_unit, 
            i_sub_program => lc_proc_name, 
            i_location => $$plsql_line, 
            i_message_type => 'D', 
            i_message => 'Running complicated algorithm to get forecast'
        );
        if floor( dbms_random.value(1,3)) = 1 then
            o_forecast := 'SUNNY';
        else
            o_forecast := 'SOGGY';
        end if;
        -- then...
        --
        log_pkg.write_pr
        (
            i_program_unit => $$plsql_unit, 
            i_sub_program => lc_proc_name, 
            i_location => $$plsql_line, 
            i_message_type => 'I', 
            i_message => 'Forecast completed outlook is '||o_forecast
        );
    exception when others then
        log_pkg.write_pr
        ( 
            i_program_unit => $$plsql_unit, 
            i_sub_program => lc_proc_name, 
            i_location => l_loc, 
            i_message_type => 'E', 
            i_message => sqlerrm||chr(10)||dbms_utility.format_error_backtrace
        );
        -- What's that Skippy ? Oh, yes...
        raise;
    end tomorrow_pr;
end forecast_pkg;
/

Aside from the fact that the input parameters are there purely for the purposes of demonstration, we have three distinct logging scenarios in this package :

  • Recording Parameter values
  • recording where we are in the procedure at various points
  • Logging an error from an exception handler

If we set the log level to Debug and run the procedure…

set serveroutput on size unlimited
declare
    l_forecast varchar2(100);
begin
    log_pkg.g_log_level := 'D';
    forecast_pkg.tomorrow_pr
    (
        i_forecast_date => to_date('12052015', 'DDMMYYYY'),
        i_detail => 'Some random value',
        i_just_guess => false,
        o_forecast => l_forecast
    );
    dbms_output.put_line('The forecast is '||l_forecast);
end;
/

The forecast is SOGGY

PL/SQL procedure successfully completed.

SQL> 

…we then get the following entries in our log table :

select program_unit, sub_program, location,
    message_type, message
from application_message_logs
order by log_id
/

PROGRAM_UNIT	SUB_PROGRAM	LOCATION MESSA MESSAGE
--------------- --------------- -------- ----- --------------------------------------------------------------------------------
FORECAST_PKG	TOMORROW_PR	      21 I     Starting forecast for tomorrow
FORECAST_PKG	TOMORROW_PR	      33 I     i_forecast_date = 12-MAY-15 i_detail = Some random value i_just_guess = FALSE
FORECAST_PKG	TOMORROW_PR	      45 D     Running complicated algorithm to get forecast
FORECAST_PKG	TOMORROW_PR	      60 I     Forecast completed outlook is SOGGY

Now, the coding standards being applied here include :

  • DATE and BOOLEAN IN parameter values must be explicitly converted to VARCHAR2 prior to logging
  • Calls to stored program units must pass parameters using named notation (one parameter per line)
  • The naming convention is that packages are suffixed pkg and procedures suffixed pr

Whilst, of themselves, these standards are fairly reasonable, they do add up to an awful lot of typing to do some fairly standard instrumentation.
This, in turn, serves to make it that bit harder to spot the actual application code.

So, how can we make logging more readable and less of an overhead in terms of the amount of code required, whilst still capturing the same level of information ?

Well, for a start, we apply some common sense in lieu of standards.

To digress for a moment, I’m firmly of the opinion that standards documents should always contain a get-out-clause for situations such as this.

Firstly, we don’t really need those suffixes on the logging package and procedure names.
If you see a call to logs.write in a package, you can be reasonably sure that it’s a call to a packaged procedure that’s writing to a logging table. OK, it could be a type method, but either way, it’s clear what’s going on.
In case you’re wondering, log is a reserved word which is why I’m proposing to call the package logs.

The next thing we can do is to re-order the parameters so that we can pass arguments by position rather than by name.
We can do this by putting the optional parameters (i.e. those with default values) last.
We can also apply a reasonable default value for the message type.

Therefore, we could change our procedure’s signature to this :

procedure write
(
    i_message in message_logs.message%type,
    i_location in message_logs.location%type,
    i_program_unit in message_logs.program_unit%type default null,
    i_sub_program in message_logs.sub_program%type default null,
    i_message_type in message_logs.message_level%type default 'E'
);

…which means that the call to it could be simplified (at minimum) to something like …

logs.write( 'Logging a message', $$plsql_line);

It’s a start, I suppose. However, such a call isn’t going to result in a particularly useful log record. We wouldn’t know the program unit or the sub program name.
It would be good if we could get rid of the need for some of these parameters altogether but still be able to record the relevant information.

It would be even better if we could find a way not to have to type in the code to format the error stack in every exception handler in the application.

What’s that Skippy? Do I want to see a magic trick ?

Logging the Error Stack – the lazy way

Skippy’s knocked up a quick example :

create or replace procedure nothing_up_my_sleeves
as
begin
    dbms_output.put_line(sqlerrm||chr(10)||dbms_utility.format_error_backtrace);
end;
/

On the face of it, this doesn’t look that impressive. However, there’s more to SQLERRM and SQLCODE than meets the eye.

When we invoke this procedure…

set serveroutput on size unlimited
begin
    raise_application_error(-20000, q'[What's that Skippy ?]');
exception
    when others then
        nothing_up_my_sleeves;
        -- for demonstration purposes only - no raise here.
end;
/

ORA-20000: What's that Skippy ?
ORA-06512: at line 2



PL/SQL procedure successfully completed.

SQL> 

All of which means that, with the appropriate changes to the logging procedure, we could just use the following code in our exception handlers :

exception when some_error then
    logs.err;
    raise;

But hang on, that still won’t give us the name of the program or the location. How can we get this information without passing it in from the calling program ?

What’s that Skippy? Why don’t we use OWA_UTIL.WHO_CALLED_ME ?

OWA_UTIL.WHO_CALLED_ME provides information about the program unit from which the current program has been called. This includes both the name of the program unit and its current line number.
This procedure takes no in parameters and populates four out parameters :

  • owner – the owner of the calling program unit
  • name – the name of the calling program unit
  • lineno – the line number within the program unit where the call was made
  • caller_t – the type of program unit that made the call

To demonstrate :

create or replace function get_caller_fn
    return varchar2 
is
    l_owner varchar2(30);
    l_name varchar2(30);
    l_line number;
    l_type varchar2(30);
begin
    owa_util.who_called_me
    (
        owner => l_owner,
        name => l_name,
        lineno => l_line,
        caller_t => l_type
    );
    
    return 'Called from line '||l_line||' of a program of type '||l_type;
end;
/

If I now call this from an anonymous block…

set serveroutput on size unlimited
begin
    dbms_output.put_line(get_caller_fn);
end;
/

Called from line 2 of a program of type ANONYMOUS BLOCK

PL/SQL procedure successfully completed.

SQL> 

One point to bear in mind is that, when called from a package, the procedure returns the package name in the name out parameter, rather than the name of the individual package member. This is something we’ll come back to in a moment.

In the meantime, however, we can now eliminate the need for both the program name and the location parameters. The signature of the write procedure now looks like this :

procedure write
(
    i_message in message_logs.message%type,
    i_sub_program in message_logs.sub_program%type default null,
    i_message_type in message_logs.message_level%type default 'E'
);

One thing to be aware of : this procedure returns details of its immediate caller, even if that caller is another member of the current package. Therefore, the call to it needs to go in-line in our logs.write and logs.err procedures, rather than in a shared helper. This will enable us to capture the details of the program unit we’re logging from.
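To illustrate, here’s a sketch of how the body of the slimmed-down write procedure might now look. Note that the MESSAGE_LOGS table and its MESSAGE_LEVEL column are the proposed (hypothetical) replacements for APPLICATION_MESSAGE_LOGS, the AML_LOG_ID_SEQ sequence is re-used from earlier, and the log-level check has been trimmed for brevity :

```sql
procedure write
(
    i_message in message_logs.message%type,
    i_sub_program in message_logs.sub_program%type default null,
    i_message_type in message_logs.message_level%type default 'E'
)
is
    pragma autonomous_transaction;
    l_owner varchar2(30);
    l_name varchar2(30);
    l_line number;
    l_type varchar2(30);
begin
    --
    -- Called in-line - we want the details of OUR caller, so this must
    -- not be wrapped in a helper within this package
    --
    owa_util.who_called_me
    (
        owner => l_owner,
        name => l_name,
        lineno => l_line,
        caller_t => l_type
    );
    insert into message_logs
    (
        log_id, log_ts, username, session_id,
        program_unit, sub_program, location,
        message_level, message
    )
    values
    (
        aml_log_id_seq.nextval, systimestamp, user, sys_context('userenv', 'sessionid'),
        l_name, i_sub_program, l_line,
        i_message_type, substr(i_message, 1, 4000)
    );
    commit;
exception
    when others then
        rollback;
end write;
```

A logs.err procedure would follow the same pattern, defaulting i_message to sqlerrm||chr(10)||dbms_utility.format_error_backtrace.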

At this point, you may consider that the logging of the sub-program name is superfluous. After all, we know the line number from which the logging call originated so it should be simple enough to figure out which package member it came from.
To do this programmatically would no doubt require interrogation of the _SOURCE views, together with the application of some clever regular expressions.
On the other hand, you could avoid the top row of your keyboard…

PL/SCOPE

In case you are unfamiliar with it, PL/SCOPE is, to quote the documentation
“… a compiler driven tool that collects data about identifiers in PL/SQL source code at program-unit compilation time…”

In short, when you set the plscope_settings parameter appropriately and then compile a program unit, information about the program unit is written to the _IDENTIFIERS views.

Now, you may have heard that there were some issues with PL/SCOPE in 11.1.
It looks like 11gR2 doesn’t have this problem, as the STANDARD and DBMS_STANDARD packages are compiled with PL/SCOPE enabled.
To verify this, run the following :

select distinct object_name, object_type
from dba_identifiers
where owner = 'SYS'
and object_name in ('STANDARD', 'DBMS_STANDARD');

OBJECT_NAME		       OBJECT_TYPE
------------------------------ -------------
DBMS_STANDARD		       PACKAGE
STANDARD		       PACKAGE

In any event, we do not need to touch these packages. For our present purposes, we’re only interested in our application code.
However, it’s probably worth reviewing the aforementioned documentation before deciding to enable PL/SCOPE across your entire database.

So, if we now enable PL/SCOPE in our current session…

alter session set plscope_settings = 'identifiers:all'
/

…and recompile our package…

alter package forecast_pkg compile
/
alter package forecast_pkg compile body
/

…we should now have some information about the package in USER_IDENTIFIERS…

select name, type, usage,
    line, usage_id, usage_context_id
from user_identifiers
where object_name = 'FORECAST_PKG'
and object_type = 'PACKAGE BODY'
and usage = 'DEFINITION'
order by line
/

NAME			       TYPE		  USAGE 	    LINE   USAGE_ID USAGE_CONTEXT_ID
------------------------------ ------------------ ----------- ---------- ---------- ----------------
FORECAST_PKG		       PACKAGE		  DEFINITION	       1	  1		   0
TOMORROW_PR		       PROCEDURE	  DEFINITION	       3	  2		   1

We can see from this that the FORECAST_PKG package body has a single top-level procedure.
We can tell that TOMORROW_PR is a top-level procedure because the usage_context_id is the same value as the usage_id for the package definition ( i.e. 1).
We can also see that the definition of this procedure begins on line 3 of the package body.
As we’ll already have a line number in the logging procedure, all we’ll need to do is to work backwards and find the last top-level definition prior to the call to the logging procedure. This query should do the trick :

with sub_progs as
(
    select name, line
    from user_identifiers
    where object_name = 'FORECAST_PKG'
    and object_type = 'PACKAGE BODY'
    and type in ('FUNCTION', 'PROCEDURE')
    and usage = 'DEFINITION'
    and usage_context_id = 1
)
select name 
from sub_progs
where line = (select max(line) from sub_progs where line < 17)
/

NAME
------------------------------
TOMORROW_PR

SQL> 

What’s that Skippy ? Oh, if you want to find out how much space is being taken up to store all of this additional metadata, you can run the following query :

select space_usage_kbytes 
from v$sysaux_occupants
where occupant_name = 'PL/SCOPE'
/

By setting aside our coding standards and getting Oracle to do more of the work, we’ve got almost all of the components we need to build our logging procedure with a more streamlined interface.
There is, however, one issue that we still need to address.

Logging parameter values

It is possible to find the parameters defined for a given program unit, or sub-program by means of the _ARGUMENTS views.
However, getting the runtime values of those parameters is something that’s only really practical from within the program unit being executed.
Additionally, given that we want to log the runtime parameter values as a string, we’re faced with the issue of conversion.
Implicit conversion from NUMBER to VARCHAR2 is probably not an issue in this instance. However, DATE and BOOLEAN values present something more of a challenge.
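To illustrate the point, a BOOLEAN can’t be concatenated directly into a string at all – the line won’t even compile – so an explicit conversion is unavoidable :

set serveroutput on
declare
    l_flag boolean := true;
    l_msg varchar2(100);
begin
    -- l_msg := 'Flag is '||l_flag; -- won't compile : no implicit BOOLEAN to VARCHAR2 conversion
    l_msg := 'Flag is '||case when l_flag then 'TRUE' else 'FALSE' end;
    dbms_output.put_line(l_msg);
end;
/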

The aforementioned OraOpenSource Logger has an elegant approach to this problem and it’s from that that I’ve taken my lead here.

The approach will be to have some overloaded procedures in the logging package which handle the various datatype conversions and then append the resulting string to the VARCHAR2 value passed in as an in-out parameter.
These can then be called from the program unit we’re logging from.
Once all of the parameters have been processed, the final string can be logged in the usual way.

It’s probably easier to see what I mean with a quick demo :

create or replace package convert_pkg
as
    -- Overload for varchar values
    procedure add_param
    (
        i_name in varchar2,
        i_value in varchar2,
        io_list in out varchar2
    );
    
    -- Overload for date values
    procedure add_param
    (
        i_name in varchar2,
        i_value in date,
        io_list in out varchar2
    );
    
    -- Overload for boolean values
    procedure add_param
    (
        i_name in varchar2,
        i_value in boolean,
        io_list in out varchar2
    );
end convert_pkg;
/

create or replace package body convert_pkg
as
    procedure add_param
    (
        i_name in varchar2,
        i_value in varchar2,
        io_list in out varchar2
    )
    is
    begin
        if io_list is not null then
            io_list := io_list||', ';
        end if;
        dbms_output.put_line('Varchar');
        io_list := io_list||' '||i_name||' => '||i_value;
    end add_param;
    
    procedure add_param
    (
        i_name in varchar2,
        i_value in date,
        io_list in out varchar2
    )
    is
    begin
        dbms_output.put_line('Date');
        if io_list is not null then
            io_list := io_list||', ';
        end if;
        io_list := io_list||' '||i_name||' => '
            ||to_char(i_value, sys_context('userenv', 'nls_date_format'));
    end add_param;
    
    procedure add_param
    (
        i_name in varchar2,
        i_value in boolean,
        io_list in out varchar2
    )
    is
    begin
        dbms_output.put_line('Boolean');
        if io_list is not null then
            io_list := io_list||', ';
        end if;
        io_list := io_list||' '||i_name||' => '
        ||case i_value when true then 'TRUE' when false then 'FALSE' end;
    end add_param;
end convert_pkg;
/

If we run the following script to test this :

set serveroutput on size unlimited
declare
    l_paramlist varchar2(4000);
begin
    convert_pkg.add_param( 'i_char_param', 'MIKE', l_paramlist);
    convert_pkg.add_param( 'i_date_param', sysdate, l_paramlist);
    convert_pkg.add_param( 'i_bool_param', true, l_paramlist);
    
    dbms_output.put_line(l_paramlist);
end;
/

Varchar
Date
Boolean
i_char_param => MIKE,  i_date_param => 14-AUG-15,  i_bool_param => TRUE

PL/SQL procedure successfully completed.


What’s that Skippy ? If you pass in a null value for the parameter, you will get an error :

set serveroutput on size unlimited
declare
    l_paramlist varchar2(4000);
begin
    convert_pkg.add_param( 'i_null_param', null, l_paramlist);

    dbms_output.put_line(l_paramlist);
end;
/

    convert_pkg.add_param( 'i_null_param', null, l_paramlist);
    *
ERROR at line 4:
ORA-06550: line 4, column 5:
PLS-00307: too many declarations of 'ADD_PARAM' match this call
ORA-06550: line 4, column 5:
PL/SQL: Statement ignored

As NULL is a valid value for a VARCHAR2, a DATE and a BOOLEAN, Oracle gets confused about which overload to use.

Therefore, you may want to weigh the convenience of this approach against the fact that you need to remember to handle potential null values in your parameters before logging them.
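One workaround, should you need to log a parameter that may be null, is to resolve the ambiguity yourself by passing something that is unambiguously one datatype. For example, a VARCHAR2 variable that happens to hold null leaves the compiler in no doubt about which overload to use :

set serveroutput on
declare
    l_paramlist varchar2(4000);
    l_null_value varchar2(1); -- unambiguously a VARCHAR2, initialised to null
begin
    -- no PLS-00307 this time - the VARCHAR2 overload is the only match
    convert_pkg.add_param( 'i_null_param', l_null_value, l_paramlist);
    dbms_output.put_line(l_paramlist);
end;
/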

In our headlong rush to minimize typing in our application code, Skippy and I have decided to accept this responsibility. As a result we now have…

The new improved logging package
create or replace package logs
as
    --
    -- Set the logging level
    --
    -- E(rror) = just log error messages 
    -- I(nfo) = log Error and Information messages
    -- D(ebug) = log everything
    -- Note that we also have a message_type of P(arameter). 
    -- Messages of this type will be treated as being the same level as
    -- E(rror) - i.e. they will always be logged.
    
    g_log_level varchar2(1) := 'E'; 
    
    -- Parameter conversion procedures - overloaded for VARCHAR2, DATE and BOOLEAN
    procedure add_param
    (
        i_name in varchar2,
        i_value in varchar2,
        io_list in out varchar2
    );
    
    procedure add_param
    (
        i_name in varchar2,
        i_value in date,
        io_list in out varchar2
    );

    procedure add_param
    (
        i_name in varchar2,
        i_value in boolean,
        io_list in out varchar2
    );

    -- Mainly for P(arameter), I(nformation) and D(ebug) messages but also possible
    -- to log E(rror) messages if the message is something other than the error stack
    procedure write
    (
        i_message in application_message_logs.message%type,
        i_message_type in application_message_logs.message_type%type default 'E' 
    );
    
    -- Just log the error stack - i.e.
    -- sqlerrm||chr(10)||dbms_utility.format_error_backtrace
    procedure err;
end logs;
/

create or replace package body logs
as

    function log_level_to_num_fn( i_level in varchar2)
        return pls_integer
    is
    --
    -- Private function to convert the log level or message type
    -- from a character to a number
    --
    begin
        return
        case i_level
            when 'E' then 1
            when 'P' then 1
            when 'I' then 2
            when 'D' then 3
        end;
    end log_level_to_num_fn;

        
    procedure add_param
    (
        i_name in varchar2,
        i_value in varchar2,
        io_list in out varchar2
    )
    is
    -- Overload for VARCHAR2 parameter values
    begin
        if io_list is not null then
            io_list := io_list||' , ';
        end if;
        io_list := io_list||i_name||' => '||i_value;
    end add_param;
    
    procedure add_param
    (
        i_name in varchar2,
        i_value in date,
        io_list in out varchar2
    )
    is
    -- Overload for DATE parameter values
    begin
        if io_list is not null then
            io_list := io_list||' , ';
        end if;
        io_list := io_list||i_name||' => '
            ||to_char( i_value, sys_context( 'userenv', 'nls_date_format'));
    end add_param;
    
    procedure add_param
    (
        i_name in varchar2,
        i_value in boolean,
        io_list in out varchar2
    )
    is
    -- Overload for BOOLEAN parameter values
    begin
        if io_list is not null then
            io_list := io_list||' , ';
        end if;
        io_list := io_list||i_name||' => '||case when i_value then 'TRUE' else 'FALSE' end;
    end add_param;
    
    procedure process_log_pr
    (
        i_owner in varchar2,
        i_name in varchar2,
        i_type in varchar2,
        i_line in number,
        i_message in application_message_logs.message%type,
        i_message_type in application_message_logs.message_type%type
    )
    is
    --
    -- Private procedure to process the log record
    -- Called from the write and err procedures.
    --
    
        l_sub_program application_message_logs.sub_program%type;
        
        pragma autonomous_transaction;
    
    begin
    
        if i_type = 'PACKAGE BODY' then
            -- find the sub-program name
            -- Do this in a nested block just in case the package in question
            -- has not been compiled with PL/SCOPE enabled
            begin
                with sub_progs as
                (
                    select name, line
                    from dba_identifiers
                    where type in ('FUNCTION', 'PROCEDURE')
                    and usage = 'DEFINITION'
                    and usage_context_id = 1
                    and  owner = i_owner
                    and object_name = i_name
                    and object_type = i_type
                )
                select name
                into l_sub_program
                from sub_progs
                where line =
                (
                    select max(line)
                    from sub_progs
                    where line < i_line
                );
            exception 
                when no_data_found then
                    -- Calling package was not compiled with PL/SCOPE enabled
                    l_sub_program := null;
            end;
        end if;

        insert into application_message_logs
        (
            log_id, log_ts, username, session_id,
            program_unit, sub_program, location,
            message_type, message
        )
        values
        (
            aml_log_id_seq.nextval, systimestamp, user, sys_context('userenv', 'sessionid'),
            i_name, l_sub_program, i_line,
            i_message_type, substr( i_message, 1, 4000)
        );
        commit;
    exception 
        when others then
            rollback;
    end process_log_pr;



    procedure write
    (
        i_message in application_message_logs.message%type,
        i_message_type in application_message_logs.message_type%type default 'E'
    )
    is
        --
        -- Main procedure for general messages ( usually non-error).
        -- Check that the message is of a level that needs to be recorded based on
        -- the current g_log_level setting. If so, find out where the call originates
        -- from
        --
        
        l_owner varchar2(30);
        l_name varchar2(30);
        l_line number;
        l_type varchar2(30);
        
        l_sub_program application_message_logs.sub_program%type;
        
    begin

        if log_level_to_num_fn( nvl(i_message_type, 'E')) > log_level_to_num_fn( g_log_level)
        then
            -- Don't need to log this message at the current logging level
            return;
        end if;
        
        -- What's that Skippy ?
        -- This call needs to be in-line. If we move it to a function and call
        -- it from here, then it'll just return details of the current package ?
        owa_util.who_called_me
        (
            owner => l_owner,
            name => l_name,
            lineno => l_line,
            caller_t => l_type
        );
        
        process_log_pr
        (
            i_owner => l_owner,
            i_name => l_name,
            i_type => l_type,
            i_line => l_line,
            i_message_type => i_message_type,
            i_message => i_message
        );
    exception when others then
        -- If you're going to break a taboo, do it properly !
        null;
    end write;
    
    procedure err 
    is
    --
    -- Retrieve the error stack, get the details of the caller and
    -- then pass it for logging.
    --
        l_message application_message_logs.message%type;
        
        l_owner varchar2(30);
        l_name varchar2(30);
        l_line number;
        l_type varchar2(30);
        
        l_sub_program application_message_logs.sub_program%type;

    begin
        l_message := sqlerrm|| chr(10) ||dbms_utility.format_error_backtrace;
        
        -- As per Skippy - this call needs to be in-line (see write procedure above)
        owa_util.who_called_me
        (
            owner => l_owner,
            name => l_name,
            lineno => l_line,
            caller_t => l_type
        );

        process_log_pr
        (
            i_owner => l_owner,
            i_name => l_name,
            i_type => l_type,
            i_line => l_line,
            i_message_type => 'E',
            i_message => l_message
        );
    exception
        when others then
            -- And again, just in case QA aren't annoyed enough by this point
            null;
    end err;   
end logs;
/

Just a minute. That package looks quite a bit bigger than the one we started with.
On the plus side however, we’ve got all that code in one place.
As a result of this, our application code is somewhat less cluttered :

create or replace package body forecast_pkg
as
    procedure tomorrow_pr
    (
        i_forecast_date date,
        i_detail in varchar2,
        i_just_guess in boolean default true,
        o_forecast out varchar2
    )
    is
        l_params application_message_logs.message%type;

        l_forecast varchar2(10);
    begin
        -- Have randomly decided that all parameters need to be not null
        -- or the procedure will error.
        -- This is purely for cosmetic purposes...
        if i_forecast_date is null or i_detail is null or i_just_guess is null
        then
            raise_application_error( -20000, 'Missing mandatory parameters.');
        end if;
        -- record the fact that we're starting...
        logs.write('Starting forecast for tomorrow', 'I');
        
        -- ...and the parameter values we've been passed, which we now know, conveniently are not null
        
        logs.add_param('i_forecast_date', i_forecast_date, l_params);
        logs.add_param('i_detail', i_detail, l_params);
        logs.add_param('i_just_guess', i_just_guess, l_params);
        logs.write(l_params, 'P');
        --
        -- Do some weather forecasting here... and throw in a debug message as well..
        --
        logs.write('Running complicated algorithm to get forecast', 'D');
        if floor( dbms_random.value(1,3)) = 1 then
            o_forecast := 'SUNNY';
        else
            o_forecast := 'SOGGY';
        end if;
        -- then...
        --
        logs.write('Forecast completed outlook is '||o_forecast, 'I');
    exception when others then
        logs.err;
        raise;
    end tomorrow_pr;
end forecast_pkg;
/

If we run this now…

set serveroutput on size unlimited
declare
    l_forecast varchar2(100);
begin
    logs.g_log_level := 'D';
    forecast_pkg.tomorrow_pr
    (
        i_forecast_date => to_date('12052015', 'DDMMYYYY'),
        i_detail => 'Some random value',
        i_just_guess => false,
        o_forecast => l_forecast
    );
    dbms_output.put_line('The forecast is '||l_forecast);
end;
/

The forecast is SOGGY

PL/SQL procedure successfully completed.

SQL>

…we still get the same amount of information in the log table…

select program_unit, sub_program, location,
    message_type, message
from application_message_logs
order by log_id
/

PROGRAM_UNIT	SUB_PROGRAM	LOCATION MESSA MESSAGE
--------------- --------------- -------- ----- --------------------------------------------------------------------------------
FORECAST_PKG	TOMORROW_PR	      23 I     Starting forecast for tomorrow
FORECAST_PKG	TOMORROW_PR	      30 P     i_forecast_date => 12-MAY-15 , i_detail => Some random value , i_just_guess => FALSE
FORECAST_PKG	TOMORROW_PR	      34 D     Running complicated algorithm to get forecast
FORECAST_PKG	TOMORROW_PR	      42 I     Forecast completed outlook is SOGGY

SQL>

We can also record the error stack without having to physically enter it every time we call the logs package…

set serveroutput on size unlimited
declare
    l_forecast varchar2(100);
begin
    logs.g_log_level := 'D';
    forecast_pkg.tomorrow_pr
    (
        i_forecast_date => to_date('12052015', 'DDMMYYYY'),
        i_detail => 'Some random value',
        i_just_guess => null,
        o_forecast => l_forecast
    );
    dbms_output.put_line('The forecast is '||l_forecast);
end;
/

declare
*
ERROR at line 1:
ORA-20000: Missing mandatory parameters.
ORA-06512: at "MIKE.FORECAST_PKG", line 45
ORA-06512: at line 5

Sure enough, in the log table :

select program_unit, sub_program, location,
    message_type, message
from application_message_logs
order by log_id
/

PROGRAM_UNIT	SUB_PROGRAM	LOCATION MESSA MESSAGE
--------------- --------------- -------- ----- --------------------------------------------------------------------------------
FORECAST_PKG	TOMORROW_PR	      44 E     ORA-20000: Missing mandatory parameters.
					       ORA-06512: at "MIKE.FORECAST_PKG", line 20

Whilst you may well be skeptical about the abilities of a kangaroo in the matter of PL/SQL programming, there may be one or two techniques here that you’ll find useful.


Filed under: Oracle, PL/SQL Tagged: dba_identifiers, logging parameter values, overloaded procedures, owa_util.who_called_me, pl/scope, PL/SQL instrumentation, PLS-00307: too many declarations of x match this call, plscope_settings

PL/SQL Error Logging and Quantum Theory

Fri, 2015-07-24 14:17

When I started writing this post, it was going to be about something else.
This happens occasionally, I have an idea in my head and set to work.
Then I do some research – don’t look so surprised, I do look at the docs occasionally – and, as in this case, I find out that there’s rather more to the topic at hand than I first thought.
What follows is a re-visiting of some of the tools available in Oracle to help with error logging.
It includes stuff that either I’d forgotten or had never considered about some fairly common functions.
Before I dive in, I’d just like to say thanks to William Robertson, who first pointed out to me the similarity between PL/SQL error logging and Quantum Theory. If you’re still unclear of the connection between the two then consider, if you will, the Schrodinger’s Cat Thought Experiment.
It involves locking a cat in a box and possibly poisoning it.
Schrodinger postulates that the cat is both alive and dead…until you open the box to check.
The conclusions we can draw from this experiment are :

  • According to Quantum Theory, the act of observation changes the nature of the thing being observed
  • Schrodinger wasn’t a Cat person

Before going any further, I should point out that most of the stuff I know about Physics comes from watching Star Trek.

Moving on, I now invite you to consider…

Mysteriously moving errors

As with cats – according to Schrodinger at least – the act of “observing” – well, handling – a PL/SQL exception changes the error (or the location from which it originated, at any rate).

For example…

declare
    l_cat number;
begin
    l_cat := 'GREEBO';
end;
/

declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 4

No problem with this. It shows us that the error happened at line 4 in the code, which is correct.
However….

declare
    l_cat number;
begin
    l_cat := 'GREEBO';
exception
    when others then
        -- do some logging stuff....
        raise;
end;
/

declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 8

Here, the exception originated on line 4. However, the error is reported at line 8 – inside the exception handler.

This then, is the problem with which we need to wrestle.
Time to take a closer look at the tools that Oracle provides us for the purpose of error handling, starting with the most venerable…

SQLCODE and SQLERRM

SQLCODE and SQLERRM have been around for as long as I can remember.
If you’ve worked on PL/SQL applications for any length of time, you will almost certainly have seen a variation of one of the following in an exception handler :

    sqlerrm(sqlcode);
    
    sqlcode||' : '||sqlerrm;
    
    substr(sqlerrm(sqlcode),1,500));

If any of the above rings a bell, it illustrates the point that these functions (for that is what they are) are not especially well understood.

SQLCODE

SQLCODE returns the number of the last error encountered. Not the message, just the error number :

set serveroutput on
declare
    l_cat_lives number;
begin
    l_cat_lives := 'If the cat belongs to Schrodinger the answer is uncertain. Otherwise 9';
exception 
    when others then
        dbms_output.put_line(sqlcode);
end;
/

-6502

PL/SQL procedure successfully completed.

SQLCODE returns 0 on successful completion…

declare
    l_cat_lives number;
begin
    l_cat_lives := 9;
    dbms_output.put_line(sqlcode);
end;
/
0

PL/SQL procedure successfully completed.

For user-defined errors, it returns 1 by default…

declare
    e_no_cat exception;
begin
    raise e_no_cat;
exception when e_no_cat then
    dbms_output.put_line(sqlcode);
end;
/

1

PL/SQL procedure successfully completed.

…unless you associate the exception with an error number using the EXCEPTION_INIT pragma…

declare
    e_no_cat exception;
    pragma exception_init( e_no_cat, -20000);
begin
    raise e_no_cat;
exception when e_no_cat then
    dbms_output.put_line(sqlcode);
end;
/
-20000

PL/SQL procedure successfully completed.

SQL> 

It will also return the relevant error code if you use RAISE_APPLICATION_ERROR…

begin
    raise_application_error(-20001, 'The cat has run off');
exception when others then
    dbms_output.put_line(sqlcode);
end;
/
-20001

PL/SQL procedure successfully completed.

SQL> 

On its own then, SQLCODE is not much help in terms of working out what went wrong, unless you happen to have memorized all of the Oracle error messages.

Fortunately we also have…

SQLERRM

This function takes in an error number and returns the relevant message :

begin
    dbms_output.put_line(sqlerrm(-6502));
end;
/
ORA-06502: PL/SQL: numeric or value error

PL/SQL procedure successfully completed.

Because of this SQLERRM can be used to create the equivalent of the oerr utility in PL/SQL.
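For example, a quick-and-dirty oerr-style lookup might look like this (the procedure name here is purely illustrative) :

create or replace procedure show_ora_error( i_errcode in pls_integer)
is
begin
    -- sqlerrm expects a negative error code for standard ORA- errors
    dbms_output.put_line( sqlerrm( -abs(i_errcode)));
end;
/

exec show_ora_error(6502)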
Better still, it takes SQLCODE as its default parameter…

declare
    l_cat_lives number;
begin
    l_cat_lives := 'If the cat belongs to Schrodinger the answer is uncertain. Otherwise 9';
exception
    when others then
        dbms_output.put_line(sqlerrm);
end;
/

ORA-06502: PL/SQL: numeric or value error: character to number conversion error

PL/SQL procedure successfully completed.

The maximum length of a varchar returned by SQLERRM is, according to the documentation, “the maximum length of an Oracle Database error message” – 512.

Whilst we’re on the subject, the 11gR2 documentation includes a note recommending that, generally, DBMS_UTILITY.FORMAT_ERROR_STACK be used instead…

DBMS_UTILITY.FORMAT_ERROR_STACK

So, let’s see what this function gives us when used as a drop-in replacement for SQLERRM….

declare
    l_cat_lives number;
begin
    l_cat_lives := 'If the cat belongs to Schrodinger the answer is uncertain. Otherwise 9';
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/
ORA-06502: PL/SQL: numeric or value error: character to number conversion error


PL/SQL procedure successfully completed.

SQL> 

Not much then based on this example. However, there are a couple of differences.
The first is that this function returns up to 2000 characters of the error stack.
The second is that it does not take any arguments. Fair enough I suppose. From the name you’d infer that this function returns the entire error stack rather than the single message that SQLERRM does.
Let’s put that to the test…

create or replace package transporter as
    function find_target return varchar2;
    procedure beam_me_up_scotty;
end transporter;
/

create or replace package body transporter as
    function find_target 
        return varchar2
    is
    begin
        raise_application_error(-20003, 'Location or velocity unknown');
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    end beam_me_up_scotty;
end transporter;
/

This package is an analog of what Star Fleet Engineers would have been working with before they came up with the Heisenberg Compensator.

If we call this without any error handling, we’ll get a “stack” of errors…

begin
    transporter.beam_me_up_scotty;
end;
/

*
ERROR at line 1:
ORA-20003: Location or velocity unknown
ORA-06512: at "MIKE.TRANSPORTER", line 6 
ORA-06512: at "MIKE.TRANSPORTER", line 13
ORA-06512: at line 1

You’d expect something fairly similar if you used the FORMAT_ERROR_STACK function…

set serveroutput on size unlimited
begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/

ORA-20003: Location or velocity unknown



PL/SQL procedure successfully completed.

SQL> 

So, not similar at all then.
NOTE – if you’ve already spotted the deliberate mistake here, bear with me for a bit.
If we change the package body so that errors are raised at multiple levels…

create or replace package body transporter as
    function find_target 
        return varchar2
    is
    begin
        raise_application_error(-20003, 'Location or velocity unknown');
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    exception when others then
        raise_application_error(-20004, 'I canna change the laws o physics!');
    end beam_me_up_scotty;
end transporter;
/

… we simply get the last error passed…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/

ORA-20004: I canna change the laws o physics!



PL/SQL procedure successfully completed.

SQL> 

From all of this, it would appear that DBMS_UTILITY.FORMAT_ERROR_STACK doesn’t really give us much (if anything) over SQLERRM. This is especially true if the documentation is correct and no single Oracle Error Message will exceed 512 bytes.
All of which is rather odd, until you consider…

RAISE_APPLICATION_ERROR

RAISE_APPLICATION_ERROR is usually invoked as in the above example. However, it actually accepts three arguments :

  • NUM – an error code in the range -20000 to -20999
  • MSG – an error message up to 1024 characters ( including the error code)
  • KEEPERRORSTACK – if TRUE then the error code is placed at the top of the error stack. Otherwise it replaces the error stack.
    Default is FALSE

The first point to note here is that, unlike SQLERRM, FORMAT_ERROR_STACK can accommodate the full length of a message from RAISE_APPLICATION_ERROR.
More relevant to the issue at hand, however, is the KEEPERRORSTACK parameter. If we tweak the package once more to set this parameter to true…

create or replace package body transporter as
    function find_target 
        return varchar2
    is
    begin
        raise_application_error(-20003, 'Location or velocity unknown', true);
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    exception when others then
        raise_application_error(-20004, 'I canna change the laws o physics!', true);
    end beam_me_up_scotty;
end transporter;
/

…and re-run our test…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/
ORA-20004: I canna change the laws o physics!
ORA-06512: at "MIKE.TRANSPORTER", line 16
ORA-20003: Location or velocity unknown

PL/SQL procedure successfully completed.

…we now get a stack. However, we’re still stuck without the line number from where the error originated.
Fortunately, they’ve been burning the candle at both ends over at Star Fleet, or possibly at Redwood Shores…

DBMS_UTILITY.FORMAT_ERROR_BACKTRACE

Let’s try re-executing our package, this time using FORMAT_ERROR_BACKTRACE…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/

ORA-06512: at "MIKE.TRANSPORTER", line 7
ORA-06512: at "MIKE.TRANSPORTER", line 14
ORA-06512: at line 2



PL/SQL procedure successfully completed.

SQL> 

Well, that’s different. We get a stack, together with the line number at which the error originated. Unfortunately it doesn’t include the originating error message itself. Let’s try that again, but this time in combination with SQLERRM…

 
begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(sqlerrm);
        dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 7
ORA-06512: at "MIKE.TRANSPORTER", line 14
ORA-06512: at line 2



PL/SQL procedure successfully completed.

SQL> 

Now we have the original error. We also have the line at which it happened. At last, we have our Heisenberg compensator.
Well, in most circumstances. Just before we test it on Admiral Archer’s prize beagle …

create or replace package body transporter as
    function find_target 
        return varchar2
    is
        l_silly number;
    begin
        l_silly :=  'Location or velocity unknown';
        exception when others then
            -- do some logging and...
            raise;
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    end beam_me_up_scotty;
end transporter;
/

Now we’ve added an error handler to the innermost package member…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(sqlerrm);
        dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/

ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 10
ORA-06512: at "MIKE.TRANSPORTER", line 17
ORA-06512: at line 2



PL/SQL procedure successfully completed.

…once again the handled error has “moved” to the exception block of the function.

In terms of retrieving the error stack, it would appear that a combination of SQLERRM and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE offers the most comprehensive and reliable information.
However, in order to further pin down where those pesky errors are originating we may need to turn to a time-honoured technique – albeit with a comparatively modern twist…

Location Markers with $$PLSQL_LINE

The $$PLSQL_LINE inquiry directive simply returns the line number of the stored program unit or anonymous block you’re currently in…

begin
    dbms_output.put_line('At line : '||$$plsql_line);
end;
/

At line : 2

PL/SQL procedure successfully completed.

SQL> 

By sprinkling a few dollars through our code, we should get a better (although still not necessarily exact) idea of where our error is originating.

I’m going to persevere with this transporter code. After all, they managed to get it working in the original Star Trek and that was way back in the 60’s…

create or replace package body transporter as
    function find_target 
        return varchar2
    is
        l_loc pls_integer;
        l_silly number;
    begin
        l_loc := $$plsql_line;
        l_silly :=  'Location or velocity unknown';
        exception when others then
            dbms_output.put_line('Error originating after line '||l_loc);
            raise;
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    end beam_me_up_scotty;
end transporter;
/

Now, we should get a bit more information…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(sqlerrm);
        dbms_output.put_line(dbms_utility.format_error_backtrace); 
end;
/
Error originating after line 8
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 12
ORA-06512: at "MIKE.TRANSPORTER", line 19
ORA-06512: at line 2



PL/SQL procedure successfully completed.

SQL> 

The error actually originates from line 9, so that’s a pretty good approximation.
The downside is the aforementioned sprinkling – you need to assign $$PLSQL_LINE to a variable immediately before each action you perform.
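Incidentally, if you also want your messages to record which program unit they came from without hard-coding the name, the related $$PLSQL_UNIT inquiry directive can be pressed into service alongside $$PLSQL_LINE. A minimal sketch (note that $$PLSQL_UNIT is empty inside an anonymous block, hence the NVL) :

begin
    -- $$plsql_unit holds the name of the enclosing stored unit,
    -- or nothing at all in an anonymous block
    dbms_output.put_line
    (
        'At '||nvl($$plsql_unit, 'anonymous block')||', line '||$$plsql_line
    );
end;
/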

Well, I’ve probably managed to annoy any Physics experts and Star Trek fans who happen to be reading. That’s before you even start thinking about PL/SQL Developers.
On the plus side I can say, hand-on-heart, that no cats were harmed in the writing of this post.


Filed under: Oracle, PL/SQL Tagged: $$plsql_line, dbms_utility.format_error_backtrace, dbms_utility.format_error_stack, pl/sql exceptions, pragma exception_init, raise_application_error, sqlcode, sqlerrm

Keep your Database Tidy – making sure a file exists before DBMS_DATAPUMP makes a mess

Mon, 2015-07-06 13:06

There are times when I wonder whether DBMS_DATAPUMP isn’t modelled on your average teenager’s bedroom floor.
If you’ve ever tried to start an import by specifying a file that doesn’t exist ( or that DBMS_DATAPUMP can’t see) you’ll know what I mean.
The job fails, which is fair enough. However, DBMS_DATAPUMP then goes into a huff and refuses to “clean up its room”.
Deb has suggested that this sort of thing is also applicable to husbands.
Not that I have any idea of whose husband she’s talking about.
Anyway, you may consider it preferable to check that the export file you want to import from actually exists in the appropriate directory before risking the wrath of the temperamental datapump API.
This apparently simple check can get a bit interesting, especially if you’re on a Linux server…

For what follows, I’ll be using the DATA_PUMP_DIR directory object. To check where this is pointing to…

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

DIRECTORY_PATH
--------------------------------------------------------------------------------
/u01/app/oracle/admin/XE/dpdump/

The owner of the functions I’ll be creating will need to have READ privileges granted directly to them on this directory…

select privilege
from user_tab_privs
where table_name = 'DATA_PUMP_DIR'
/

PRIVILEGE
----------------------------------------
READ

SQL> 

If the user does not have this privilege then you can grant it ( connecting as sysdba) with the following :

grant read on directory data_pump_dir to user
/

…where user is the name of the schema in which you are going to create the function.

Now to create a file in this directory so that we can test for its existence…

sudo su oracle
[sudo] password for mike: 

touch /u01/app/oracle/admin/XE/dpdump/test.txt
ls -l  /u01/app/oracle/admin/XE/dpdump/test.txt
-rw-r--r-- 1 oracle dba 0 Jul  3 18:08 /u01/app/oracle/admin/XE/dpdump/test.txt

In order to check for the existence of this file from within PL/SQL, we have a couple of options…

UTL_FILE

The UTL_FILE.FGETATTR procedure retrieves details of a file, including whether or not it exists…

set serveroutput on size unlimited
declare

    l_filename varchar2(4000) := 'test.txt';
    l_exists boolean;
    l_length number;
    l_bsize number;
begin
    utl_file.fgetattr
    (
        location => 'DATA_PUMP_DIR',
        filename => l_filename, 
        fexists => l_exists,
        file_length => l_length,
        block_size => l_bsize
    );
    if l_exists then
        dbms_output.put_line( l_filename ||' exists in DATA_PUMP_DIR : ');
        dbms_output.put_line( 'Length : '||l_length);
        dbms_output.put_line( 'Block Size : '||l_bsize);
    else
        dbms_output.put_line('File does not exist in DATA_PUMP_DIR');
    end if;
end;
/

Run this and we get :

test.txt exists in DATA_PUMP_DIR :
Length : 0
Block Size : 4096

PL/SQL procedure successfully completed.

That’s handy. Let’s put it into a function…

create or replace function file_exists_fn
(
    i_dir in all_directories.directory_name%type,
    i_filename in varchar2
)
    return varchar2
is

    l_exists boolean;
    l_length number;
    l_block_size number;
    
    l_return varchar2(4000);
    
begin
    utl_file.fgetattr
    (
        location => upper(i_dir),
        filename => i_filename,
        fexists => l_exists,
        file_length => l_length,
        block_size => l_block_size
    );
    if l_exists then
        l_return := i_filename||' in '||upper(i_dir)||' - Length : '||l_length||' - Block Size : '||l_block_size;
    else
        l_return := i_filename||' does not exist in '||upper(i_dir);
    end if;
    
    return l_return;
end;
/

Now let’s see what happens with a Symbolic Link…

touch /home/mike/symlink.txt

sudo su oracle
[sudo] password for mike: 

ln -s /home/mike/symlink.txt /u01/app/oracle/admin/XE/dpdump/symlink.txt
ls -l /u01/app/oracle/admin/XE/dpdump/symlink.txt
lrwxrwxrwx 1 oracle dba 22 Jul  3 18:29 /u01/app/oracle/admin/XE/dpdump/symlink.txt -> /home/mike/symlink.txt

If we now call our function to find symlink.txt in DATA_PUMP_DIR…

select file_exists_fn('DATA_PUMP_DIR', 'symlink.txt')
from dual
/

FILE_EXISTS_FN('DATA_PUMP_DIR','SYMLINK.TXT')
--------------------------------------------------------------------------------
symlink.txt does not exist in DATA_PUMP_DIR

SQL> 

It is at this point that I realise that I really should have read the manual, which states that, for UTL_FILE : “neither hard nor symbolic links are supported.”

So, if we’re to handle links, a different approach is required…

The DBMS_LOB approach

The DBMS_LOB has a FILEEXISTS function which looks like it could come in handy here…

set serveroutput on size unlimited
declare

    l_filename varchar2(4000) := 'symlink.txt';
    l_loc bfile;
begin
    l_loc := bfilename('DATA_PUMP_DIR', l_filename);
    if dbms_lob.fileexists(l_loc) = 1 then
        dbms_output.put_line( l_filename||' exists');
    else
        dbms_output.put_line('File not found');
    end if;
end;
/

symlink.txt exists 

PL/SQL procedure successfully completed.

That’s better. After amending the function…

create or replace function file_exists_fn
(
    i_dir in all_directories.directory_name%type,
    i_filename in varchar2
)
    return varchar2
is

    l_loc bfile;
    l_return varchar2(4000);

begin
    l_loc := bfilename(upper(i_dir), i_filename);
    if dbms_lob.fileexists(l_loc) = 1 then
        l_return :=  i_filename||' exists in '||upper(i_dir);
    else
        l_return := 'File '||i_filename||' not found';
    end if;
    return l_return;
end;
/

…we can see that this also works just fine for conventional files…

select file_exists_fn('DATA_PUMP_DIR', 'test.txt')
from dual
/

FILE_EXISTS_FN('DATA_PUMP_DIR','TEST.TXT')
--------------------------------------------------------------------------------
test.txt exists in DATA_PUMP_DIR

SQL> 

Let’s check that it works for hard links as well…

touch /home/mike/hardlink.txt
chmod a+rw /home/mike/hardlink.txt
sudo su oracle
[sudo] password for mike: 

cd /u01/app/oracle/admin/XE/dpdump/
ln /home/mike/hardlink.txt hardlink.txt
ls -l hardlink.txt
-rw-rw-rw- 2 mike mike 0 Jul  3 18:50 hardlink.txt

And the test….

select file_exists_fn('DATA_PUMP_DIR', 'hardlink.txt')
from dual
/

FILE_EXISTS_FN('DATA_PUMP_DIR','HARDLINK.TXT')
--------------------------------------------------------------------------------
hardlink.txt exists in DATA_PUMP_DIR

SQL> 

So, if you want to minimise the prospect of muttering “I’m not your mother, you know!” to your database, then the DBMS_LOB approach would seem to be the way to go.
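By way of illustration, here’s a sketch of how such a check might be used to fail fast before DBMS_DATAPUMP gets the chance to make a mess. The file name my_export.dmp is hypothetical, and the job setup is deliberately incomplete :

declare
    l_handle number;
begin
    -- bail out before any datapump job (and its orphaned master table) is created
    if dbms_lob.fileexists( bfilename('DATA_PUMP_DIR', 'my_export.dmp')) != 1 then
        raise_application_error(-20000, 'my_export.dmp not found in DATA_PUMP_DIR');
    end if;
    l_handle := dbms_datapump.open( operation => 'IMPORT', job_mode => 'SCHEMA');
    dbms_datapump.add_file
    (
        handle    => l_handle,
        filename  => 'my_export.dmp',
        directory => 'DATA_PUMP_DIR'
    );
    -- ... remaining job setup, then dbms_datapump.start_job(l_handle) ...
end;
/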


Filed under: Linux, Oracle, PL/SQL Tagged: dbms_lob.fileexists, hard link, symbolic link, utl_file.fgetattr

What’s in a Name ? USER_TAB_COLS and USER_TAB_COLUMNS are different.

Tue, 2015-06-23 13:22

My son and I are quite similar in some ways ( although he would vehemently dispute this).
Like me, he works in IT, in his case as a Support Engineer.
Like me, he’s called Mike (well, my Mum likes the name…and I can spell it).
Unlike me – as he would be quick to point out – he still has all his own hair.
These similarities have been known to cause confusion – I’m often contacted by recruitment agents with enticing offers to work on…some newfangled stuff I know nothing about, whilst he’s constantly being offered “exciting” Database related opportunities.

Similar confusion can arise when you’re delving into the Oracle Data Dictionary…

Note – the examples that follow apply to 11gR2. Additionally, apart from the COLS synonym, what is true for USER_TAB_COLUMNS and USER_TAB_COLS also applies to their ALL_ and DBA_ equivalents.

When it comes to getting column meta-data out of the Data Dictionary, you’ve got several choices. To illustrate this, connect as HR and ….

select column_name, data_type
from cols
where table_name = 'EMPLOYEES'
order by column_id
/

select column_name, data_type
from user_tab_columns
where table_name = 'EMPLOYEES'
order by column_id
/

select column_name, data_type
from user_tab_cols
where table_name = 'EMPLOYEES'
order by column_id
/

In each case the results are identical :

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
COMMISSION_PCT		       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER

11 rows selected.

So, it would appear that COLS, USER_TAB_COLUMNS and USER_TAB_COLS are all synonyms for the same thing…

select synonym_name, table_owner, table_name
from all_synonyms
where synonym_name in ('COLS', 'USER_TAB_COLS', 'USER_TAB_COLUMNS')
order by table_name
/ 

SYNONYM_NAME		       TABLE_OWNER	    TABLE_NAME
------------------------------ -------------------- --------------------
USER_TAB_COLS		       SYS		    USER_TAB_COLS
USER_TAB_COLUMNS	       SYS		    USER_TAB_COLUMNS
COLS			       SYS		    USER_TAB_COLUMNS

…OK, so COLS is indeed a synonym for USER_TAB_COLUMNS. USER_TAB_COLS and USER_TAB_COLUMNS also appear to be identical…

select table_name, comments
from all_tab_comments
where table_name in ('USER_TAB_COLUMNS', 'USER_TAB_COLS')
/

TABLE_NAME	     COMMENTS
-------------------- --------------------------------------------------
USER_TAB_COLS	     Columns of user's tables, views and clusters
USER_TAB_COLUMNS     Columns of user's tables, views and clusters

There you go then. Must be the case…

Unused Columns

Lets create another table in the HR schema as a copy of EMPLOYEES….

create table non_sales_emps as
    select * 
    from employees
    where commission_pct is null
/

Table created.

As the name suggests, we’re not going to have any Sales Staff in this table, so we don’t really need the COMMISSION_PCT column…

SQL> alter table non_sales_emps
  2      set unused column commission_pct
  3  /

Table altered.

SQL> 

So, the table no longer contains the COMMISSION_PCT column…

select column_name, data_type
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
order by column_id
/

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER
SYS_C00009_15061918:46:18$     NUMBER

11 rows selected.

Yes, the table now has 10 columns…and here are the details of all 11 of them.
Weren’t expecting that ? Well then you probably won’t be expecting this either….

select column_name, data_type
from user_tab_columns
where table_name = 'NON_SALES_EMPS'
order by column_id
/

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER

10 rows selected.

The extra column has magically disappeared again. Just what is going on ?

Delving a bit deeper into this particular rabbit-hole…

select column_name, data_type
from all_tab_columns
where owner = 'SYS'
and table_name = 'USER_TAB_COLS'
minus
select column_name, data_type
from all_tab_columns
where owner = 'SYS'
and table_name = 'USER_TAB_COLUMNS'
/

COLUMN_NAME		       DATA_TYPE
------------------------------ ------------------------------
HIDDEN_COLUMN		       VARCHAR2
INTERNAL_COLUMN_ID	       NUMBER
QUALIFIED_COL_NAME	       VARCHAR2
SEGMENT_COLUMN_ID	       NUMBER
VIRTUAL_COLUMN		       VARCHAR2

SQL> 

From this we can see that USER_TAB_COLS contains five additional columns over those available in USER_TAB_COLUMNS.

select column_name, comments
from all_col_comments
where owner = 'SYS'
and table_name = 'USER_TAB_COLS'
and column_name in ( 'HIDDEN_COLUMN', 'INTERNAL_COLUMN_ID', 
    'QUALIFIED_COL_NAME', 'SEGMENT_COLUMN_ID', 'VIRTUAL_COLUMN')
/

COLUMN_NAME		  COMMENTS
------------------------- --------------------------------------------------
HIDDEN_COLUMN		  Is this a hidden column?
VIRTUAL_COLUMN		  Is this a virtual column?
SEGMENT_COLUMN_ID	  Sequence number of the column in the segment
INTERNAL_COLUMN_ID	  Internal sequence number of the column
QUALIFIED_COL_NAME	  Qualified column name

Furthermore, if we examine the source code for the USER_TAB_COLUMNS view, the reason for its similarity with USER_TAB_COLS becomes apparent :

select TABLE_NAME, COLUMN_NAME, DATA_TYPE, DATA_TYPE_MOD, DATA_TYPE_OWNER,
       DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE, COLUMN_ID,
       DEFAULT_LENGTH, DATA_DEFAULT, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE,
       DENSITY, NUM_NULLS, NUM_BUCKETS, LAST_ANALYZED, SAMPLE_SIZE,
       CHARACTER_SET_NAME, CHAR_COL_DECL_LENGTH,
       GLOBAL_STATS, USER_STATS, AVG_COL_LEN, CHAR_LENGTH, CHAR_USED,
       V80_FMT_IMAGE, DATA_UPGRADED, HISTOGRAM
  from USER_TAB_COLS
 where HIDDEN_COLUMN = 'NO'
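This filter also suggests a quick way of listing exactly what USER_TAB_COLUMNS is hiding from us – in this case, the renamed unused column we created earlier :

select column_name, data_type, hidden_column
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
and hidden_column = 'YES'
/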

So, just when does it become useful to use USER_TAB_COLS rather than USER_TAB_COLUMNS ?

In 11g, you’d set a column to be unused on a large table in order for the “drop” to happen quickly.
Once you have set it to unused, the only thing you can do with it is drop it altogether to reclaim the space it’s using.
To find out which tables have unused columns, you can use…

select table_name, count
from user_unused_col_tabs
/

TABLE_NAME			    COUNT
------------------------------ ----------
NON_SALES_EMPS				1

…and if you want to drop an unused column, you don’t need to specify its name in the command…

alter table non_sales_emps
    drop unused columns
/

Table altered.

Where USER_TAB_COLS does come in handy is when you’re looking for …

Virtual Columns

For example, we could add a column to our NON_SALES_EMPS table to calculate the number of completed years service for each employee.
First, we need a deterministic function to return the number of full years between a given date and today :

create or replace function years_elapsed_fn( i_date in date)
    return number deterministic
as
begin
    return floor( months_between( trunc(sysdate), i_date) / 12);
end;
/

Now we add a virtual column to the table which calls this function :

alter table non_sales_emps
    add years_service generated always as
        (years_elapsed_fn(hire_date))
/

Whilst there’s no way to tell which columns are virtual in USER_TAB_COLUMNS, there is in USER_TAB_COLS :

select column_name, data_type, virtual_column
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
/

COLUMN_NAME		  DATA_TYPE			 VIR
------------------------- ------------------------------ ---
EMPLOYEE_ID		  NUMBER			 NO
FIRST_NAME		  VARCHAR2			 NO
LAST_NAME		  VARCHAR2			 NO
EMAIL			  VARCHAR2			 NO
PHONE_NUMBER		  VARCHAR2			 NO
HIRE_DATE		  DATE				 NO
JOB_ID			  VARCHAR2			 NO
SALARY			  NUMBER			 NO
MANAGER_ID		  NUMBER			 NO
DEPARTMENT_ID		  NUMBER			 NO
YEARS_SERVICE		  NUMBER			 YES

11 rows selected.

SQL> 

The reasons for having two such similar ( and similarly named) dictionary views seem to have been lost in the mists of time.
Whatever the rationale, it’s worth knowing the difference next time you need to go poking around the column meta-data in your database.


Filed under: Oracle, SQL Tagged: all_tab_comments, alter table set unused column, cols, deterministic, hidden columns, user_tab_cols, user_tab_columns, virtual columns

Upgrading to APEX 5 on Oracle XE 11g

Mon, 2015-05-25 07:27

It’s a Bank Holiday weekend here in the UK.
This is usually a time for doing odd-jobs as a distraction from watching the rain come down.
This time around, rather than subject you to another lament about the Great British Summer ( or lack thereof), I’m going to go through the steps needed to install APEX5 on Oracle 11gXE.

Now, I know that the documentation doesn’t mention Express Edition.
I also know that the instructions that Oracle do have for upgrading APEX on XE haven’t yet been updated to account for APEX5.
I know this because I’ve spent a wet Bank Holiday finding this stuff out the hard way so that (hopefully), you don’t have to.
What I’m going to cover here is :

  • Pre-installation checks
  • Getting APEX5
  • Installation
  • Configuration

I would say “let’s get cracking before the sun comes out”, but that would only give us until around the second week in July…

The environment

I’m doing this on Oracle Database Express Edition 11gR2.
As I’m doing this on a Linux machine, you may see the odd indication of this, but the essential steps outlined here will apply to all Operating Systems.
I’m starting with a clean installation of Oracle XE, so the current version of APEX is 4.0. However, these steps should still be valid when upgrading from any APEX 4.x version.

Incidentally, if you really want to hang on to the default XE Database Management Application, you’ll probably want to have a look at the steps required to back it up prior to upgrade.

Pre-Installation checks

The documentation details several checks. However, as we’re running on XE11g, we don’t have to worry too much about some of them.

If you really want to double-check….

Oracle Database Version

The minimum version required is 11.1.0.7. XE runs 11.2.0.2 as we can see with the following query in SQL*Plus :

select banner
from v$version
where banner like 'Oracle Database%';

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production

SQL> 

Oracle XML DB

That’s there as well, by default…

select comp_name, version, status
from dba_registry
where comp_id = 'XDB';

COMP_NAME					   VERSION			  STATUS
-------------------------------------------------- ------------------------------ --------------------------------------------
Oracle XML Database				   11.2.0.2.0			  VALID

Web Listener Requirements

Oracle XE ships with the Embedded PL/SQL Gateway by default.
Provided you can hit the Database Home Page (e.g. by clicking on the Getting Started With Oracle Database 11g Express Edition desktop shortcut), you’re good to go.

There are some pre-requisites that you will need to verify.

Memory Target Setting

Once again, on a fresh XE11g install, you shouldn’t have any problems.
In memory terms, the smallest machine I’ve installed XE11g on had a total of 1GB RAM. Even in an environment as constrained as this, the MEMORY_TARGET should still meet the minimum requirement of 300MB.

To check, login to SQL*Plus and ….

SQL> show parameter memory_target

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
memory_target			     big integer 1G
SQL> 

Space Requirements

The easy bit first. If you’re planning to download the English Language Only version of APEX5, you’ll need 250MB of space on disk.
If you’re going for the full version, this goes up to 630 MB.

At this point, the installation instructions then start talking about the “Oracle Application Express tablespace”.
This is simply the default tablespace for the schema that will be the APEX owner. Once the installation happens, this will be a schema called APEX_050000, created with SYSAUX as its default tablespace. This is exactly the same as for the existing APEX_040000 user that shipped with XE11g. Incidentally, we’ll also need to know the user’s temporary tablespace for the installation, so we may as well verify both of them now…

SQL> select default_tablespace, temporary_tablespace
  2  from dba_users
  3  where username = 'APEX_040000';

DEFAULT_TABLESPACE	       TEMPORARY_TABLESPACE
------------------------------ ------------------------------
SYSAUX			       TEMP

SQL> 

The amount of space required in SYSAUX is 220MB plus 60MB per additional language installed.
There is also a requirement for 100MB in the SYSTEM tablespace.

Working out how much space is available isn’t entirely straightforward. However, we can get most of the way with the following query :

select tablespace_name,
  file_name,
  (maxbytes - bytes) / 1024/1024 as "Available Space MB",
  autoextensible
from dba_data_files
where tablespace_name in ('SYSAUX', 'SYSTEM')
/
TABLESPACE_NAME 	       FILE_NAME					  Available Space MB AUT
------------------------------ -------------------------------------------------- ------------------ ---
SYSAUX			       /u01/app/oracle/oradata/XE/sysaux.dbf			  31677.9844 YES
SYSTEM			       /u01/app/oracle/oradata/XE/system.dbf				 150 YES

SQL> 

If your query returns an Available Space MB figure less than the requirements, don’t worry too much.
Provided the tablespace is Autoextensible and there is enough space on disk, it will automatically grow as it needs more space.

Browser Version

The minimum requirements for Web browsers are :

  • Firefox version 35
  • Chrome version 40
  • Safari version 7
  • IE version 9

Getting APEX5

Right, once you’re happy with the pre-requisite steps, head over to the OTN APEX5 download page and download your required version.
This will be one of :

  • Oracle Application Express 5.0 – All languages
  • Oracle Application Express 5.0 – English language only

I’ve gone for the English language only version.

As mentioned previously, the Download Page does state that :

“Application Express 5.0 can also be used with Oracle Database 11g Express Edition (XE), but is supported only through the OTN discussion forum, and not through Oracle Support Services.”

However, the installation instructions page it links to has yet to be updated for APEX5 at the time of writing.

Anyway, I now have a file called apex_5.0_en.zip in my Downloads directory.

As I’m on Linux, I’m going to unzip and deploy this to the ORACLE_BASE directory (/u01/app/oracle).
To avoid any issues with file permissions, I’ll do this as the oracle OS user.

I should point out that it doesn’t really matter where you deploy the files to. Also, you don’t have to be oracle to do this.
I’ve just done it this way to keep things simple.

cd $HOME/Downloads
sudo su oracle
unzip -d /u01/app/oracle apex_5.0_en.zip

You’ll then see something like …

...
  inflating: /u01/app/oracle/apex/core/template.plb  
  inflating: /u01/app/oracle/apex/core/dev_grants.sql  
  inflating: /u01/app/oracle/apex/apxsqler.sql  
  inflating: /u01/app/oracle/apex/apxprereq.sql  
  inflating: /u01/app/oracle/apex/apxupgrd.sql  
  inflating: /u01/app/oracle/apex/apxconf.sql  
  inflating: /u01/app/oracle/apex/coreins5.sql  
  inflating: /u01/app/oracle/apex/apxdvins.sql  
  inflating: /u01/app/oracle/apex/apxchpwd.sql  
 extracting: /u01/app/oracle/apex/apxexit.sql  
  inflating: /u01/app/oracle/apex/catapx.sql  
  inflating: /u01/app/oracle/apex/apxe102.sql  

After that, you should have a sub-directory called apex where you’ve unzipped the file.
NOTE – you can stop being the oracle user now.

Installation

A bit of housekeeping to start with – we need to make sure that the APEX_PUBLIC_USER database account is unlocked :

select account_status
from dba_users
where username = 'APEX_PUBLIC_USER';

If the account_status is LOCKED then…

alter user apex_public_user account unlock;

User altered.

select account_status from dba_users where username = 'APEX_PUBLIC_USER';

ACCOUNT_STATUS
--------------------------------
OPEN

SQL>       

NOTE – strictly speaking, you should also perform this check for the ANONYMOUS user. However, if Oracle XE is newly installed, or if you’re running the Embedded PL/SQL Gateway, it should be unlocked.
If you want to satisfy yourself that this is, in fact, the case :

select account_status
from dba_users
where username = 'ANONYMOUS';
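
Should it turn out to be LOCKED after all, it can be unlocked in the same way as APEX_PUBLIC_USER :

alter user anonymous account unlock;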
Loading APEX5 into the database

As of APEX5, we now have the option of installing just the APEX runtime…but where’s the fun in that ?
We want the full-blown development environment…

The script we need to run to do the installation – apexins.sql – takes four parameters :

  • the default tablespace of the APEX owner schema
  • the default tablespace of the FLOWS_FILES schema
  • a temporary tablespace in the database
  • a virtual directory for APEX images

We already know that the default tablespace for the APEX owner is SYSAUX.
We also know that the temporary tablespace is called TEMP.
As for the FLOWS_FILES schema…

SQL> select default_tablespace
  2  from dba_users
  3  where username = 'FLOWS_FILES';

DEFAULT_TABLESPACE
------------------------------
SYSAUX

SQL> 

As for the virtual directory – “/i/” always seems to work.

Now, change directory to the apex directory you’ve created as part of the unzip step, and connect to the database as sys as sysdba.

cd /u01/app/oracle/apex 

sqlplus sys as sysdba

…and run the script…

@apexins.sql SYSAUX SYSAUX TEMP /i/

After several minutes worth of messages whizzing up your screen you’ll get :

PL/SQL procedure successfully completed.

-- Now beginning upgrade. This will take several minutes.-------
-- Ensuring template names are unique -------

…finally, you’ll get…

Thank you for installing Oracle Application Express 5.0.0.00.31

Oracle Application Express is installed in the APEX_050000 schema.

The structure of the link to the Application Express administration services is as follows:
http://host:port/pls/apex/apex_admin (Oracle HTTP Server with mod_plsql)
http://host:port/apex/apex_admin     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex/apex_admin     (Oracle REST Data Services)

The structure of the link to the Application Express development interface is as follows:
http://host:port/pls/apex (Oracle HTTP Server with mod_plsql)
http://host:port/apex     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex     (Oracle REST Data Services)


PL/SQL procedure successfully completed.






1 row selected.

Disconnected from Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
mike@mike-Aspire-E3-112 /u01/app/oracle/apex $ 

Configuration

Re-setting the ADMIN password

The next step is to set the APEX ADMIN password. Note that, even if you’ve already done this for the previously installed APEX version, you’ll need to do it again here, using the script that’s shipped with this version of APEX.
Also, despite any configuration changes you may have made to the APEX password complexity rules, the password you set will need to conform to the following :

  • Password must contain at least 6 characters.
  • Password must contain at least one numeric character (0123456789).
  • Password must contain at least one punctuation character (!"#$%&()``*+,-/:;?_).
  • Password must contain at least one upper-case alphabetic character.
  • Password must not contain username.

Bearing this in mind, connect to the database again as SYS AS SYSDBA and you’ll be prompted as follows….

@apxchpwd.sql

================================================================================
This script can be used to change the password of an Application Express
instance administrator. If the user does not yet exist, a user record will be
created.
================================================================================
Enter the administrator's username [ADMIN] 
User "ADMIN" exists.
Enter ADMIN's email [ADMIN] 
Enter ADMIN's password [] 
Changed password of instance administrator ADMIN.

Load Images

One final step – we need to load the images.
As we’re running the Embedded PL/SQL Gateway, we’ll need to use the apex_epg_config.sql script.
This script takes, as a parameter, the name of the directory that you’ve extracted the apex zip into – i.e. without the /apex directory itself. As you can see from the output below, it does take a while (8 minutes in this case) :

SQL> @apex_epg_config.sql /u01/app/oracle

. Loading images directory: /u01/app/oracle/apex/images
timing for: Load Images
Elapsed: 00:08:00.92
SQL> 

Post Installation Steps

Just before we can go and play with our shiny new APEX version, we need to do a little tidying.
First of all, confirm the port that the PL/SQL Gateway is listening on :

select dbms_xdb.gethttpport
from dual;

Then, finally, shutdown and re-start the database.

Once it comes back up, point your browser to :

http://localhost:port/apex/apex_admin

…where port is the port number returned by the query above.

Login as user ADMIN with the password that you set when you ran apxchpwd.sql. If all goes well, you should now see…

apex5_admin

It may be raining outside, but at least you now have a cosy APEX5 installation to play with.


Filed under: APEX, Oracle Tagged: APEX5 on Oracle XE 11g, apexins.sql, apex_epg_config.sql, apxchpwd.sql, Oracle Application Express Tablespace

Migrating the XE Database Management Application to a new version of APEX

Sun, 2015-05-24 08:31

I must confess to a weakness when it comes to throwing stuff away.
This is particularly true of techie stuff.
Whilst I have occasionally cannibalised an old machine for parts, there is a regrettably large part of the garage reserved for “vintage” hardware that I might just need at some point.

I’ve recently added to this hoard. I’ve finally gone and got a replacement for my ageing netbook.
As part of the configuration of the new machine, I’ve installed Oracle XE again.

I’m now poised to attempt an upgrade to a shiny new version of APEX.

First of all though, if you are similarly keen to upgrade from the venerable APEX 4.0, which XE ships with, to something more modern, your hoarding instincts may kick-in when it comes to the default Database Management Application.

Once you upgrade APEX 4 to any subsequent version, this application “disappears”.
The functionality it offers is readily available through SQLDeveloper (or indeed, any of the major Oracle Database IDEs).
Alternatively, it’s a fairly simple matter to come up with your own, improved version.

Not convinced ? Oh well, I suppose we’d better save it for re-deployment into your new APEX environment.

What I’m going to cover here is :

  • Backing up the default XE ADMIN application
  • Tweaking the APEX export file
  • Restoring the XE ADMIN application

I’ve tested this process against both APEX 4.2 and APEX 5.0 running on Oracle XE 11g.
In the steps that follow, I’m assuming that you’re upgrading to APEX 5.0.
The main difference between the two versions is the APEX owning schema : for APEX 4.2 the owner is APEX_040200, whereas for APEX 5.0 it’s APEX_050000.
As the APEX upgrade takes place entirely within the database, the steps that follow are platform independent.

Incidentally, if you’re wondering exactly how you would upgrade XE11g to this APEX version, details will follow in my next post.

NOTE – I’m assuming here that you’re doing this on your own personal playground 11g XE database and have therefore not worried too much about any security implications of the activities detailed below.

Right, let’s get started…

Backing up the XE ADMIN application

The script below uses the APEX 4 PL/SQL API to create an export of the application. This is simply a SQL file which, after some light hacking, we’ll run against the database once the APEX upgrade is complete.
The script (saved as export_xe_app.sql) is :

declare
	-- Must be logged on as SYSTEM - the parsing schema for this application.
	-- SYSTEM will also need execute granted on UTL_FILE (see below).
	l_fh utl_file.file_type;
	l_buffer varchar2(32767);
	l_clob clob;
	l_length pls_integer;
	l_pos pls_integer := 1;
	
begin
	-- Get the source code for the XE Admin APEX application as a CLOB
	l_clob := wwv_flow_utilities.export_application_to_clob ( p_application_id   => 4950);
	
	l_length := dbms_lob.getlength(l_clob);
	
	-- Now write it to a file in 32K chunks.
	-- Note the explicit max_linesize of 32767 - the UTL_FILE default (1024)
	-- is too small for the chunks we're writing and would cause a write error.
	l_fh := utl_file.fopen( 'DATA_PUMP_DIR', 'f4950.sql', 'w', 32767);
	
	loop
		exit when l_pos > l_length;
		l_buffer := substr( l_clob, l_pos, 32767);
		utl_file.put(l_fh, l_buffer);
		utl_file.fflush(l_fh);
		l_pos := l_pos + length(l_buffer);
	end loop;
	utl_file.fclose(l_fh);
end;
/

There are a couple of points to note prior to running this script.
The first is that it must be run as SYSTEM as this is the parsing schema for this application.
The second is that you will need to grant execute on UTL_FILE to SYSTEM.
So connect as SYS as sysdba and..

grant execute on utl_file to system
/

Now that’s done, connect as system and execute the script.

Once this is done, you should have a file called f4950.sql in the DATA_PUMP_DIR.
I’ve used this directory as it’s created by default when XE is installed. If you don’t know where this maps to on disk, then you can find it by running the following query :

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

In my case (running on Linux) , this returns :

DIRECTORY_PATH
--------------------------------------------------------------------------------
/u01/app/oracle/admin/XE/dpdump/

Tweaking the APEX Export

There are a couple of things that we need to change in the export script.
The first is the call to the USER_IS_DBA function.

This function was moved from the WWV_FLOW_LOGIN package to WWV_FLOW_SECURITY in APEX 4.2, where (as of APEX 5.0) it still resides.

Therefore, we need to amend line number 218 from …

  p_scheme=>'return nvl(wwv_flow.g_flow_step_id,-1) in (1,101)'||chr(10)||
'       or wwv_flow_login.user_is_dba(p_username => :APP_USER);',

…to…

  p_scheme=>'return nvl(wwv_flow.g_flow_step_id,-1) in (1,101)'||chr(10)||
'       or wwv_flow_security.user_is_dba(p_username => :APP_USER);',

The other thing to change is the name of the APEX owner.
In my case (migrating to APEX 5), this will change from APEX_040000 to APEX_050000.

So, line number 142 in the file changes from…

  p_owner => nvl(wwv_flow_application_install.get_schema,'APEX_040000'),

…to…

  p_owner => nvl(wwv_flow_application_install.get_schema,'APEX_050000'),
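If you’d rather not make these two edits by hand, they are simple enough to script with sed. To keep the sketch below self-contained, it runs against a two-line sample standing in for the relevant lines of f4950.sql – to tweak the real export, just run the same two sed expressions against f4950.sql itself (a .bak copy of the original is kept either way) :

```shell
# Create a two-line sample standing in for the relevant lines of f4950.sql
cat > sample.sql <<'EOF'
'       or wwv_flow_login.user_is_dba(p_username => :APP_USER);',
  p_owner => nvl(wwv_flow_application_install.get_schema,'APEX_040000'),
EOF

# Make both substitutions in place, keeping a .bak copy of the original.
# Point sed at f4950.sql instead of sample.sql to tweak the real export.
sed -i.bak \
  -e 's/wwv_flow_login\.user_is_dba/wwv_flow_security.user_is_dba/' \
  -e "s/'APEX_040000'/'APEX_050000'/" \
  sample.sql

cat sample.sql
```

Note that this relies on the substituted strings each appearing only where we expect them in the export file, so it’s worth eyeballing the result before running it in.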

Deploying the application to the new APEX version

Before running our application export against the new APEX repository, we need to grant select on the relevant views directly to the APEX owner.
NOTE – I’m indebted to this article by Jeff Eberhard for this particular step:

grant select on dba_temp_files to APEX_050000;
grant select on v_$temp_extent_pool to APEX_050000;
grant select on v_$temp_space_header to APEX_050000;
grant select on v_$system_parameter to APEX_050000;
grant select on v_$session to APEX_050000;

Now that’s done, we can simply connect as SYSTEM and run our import.
The output should look like this (ignore the two SP2-0734 errors at the start) :

SQL> @f4950.sql
SP2-0734: unknown command beginning "Content-ty..." - rest of line ignored.
SP2-0734: unknown command beginning "Content-le..." - rest of line ignored.
APPLICATION 4950 - XE_ADMIN
Set Credentials...
Check Compatibility...
API Last Extended:20130101
Your Current Version:20130101
This import is compatible with version: 20100513
COMPATIBLE (You should be able to run this import without issues.)
Set Application ID...
...authorization schemes
...navigation bar entries
...application processes
...application items
...application level computations
...Application Tabs
...Application Parent Tabs
...Shared Lists of values
...Application Trees
...page groups
...PAGE 0: 0
...PAGE 1: Home
...PAGE 2: Storage
...PAGE 4: Sessions
...PAGE 5: Backups
...PAGE 6: Parameters
...PAGE 7: Application Express
...PAGE 9: Tablespace Storage Details
...PAGE 10: Session Details
...PAGE 101: Login
...lists
...breadcrumbs
...page templates for application: 4950
......Page template 936879405068865354
......Page template 936880509857865357
......Page template 936881728833865360
......Page template 936882328801865361
...button templates
......Button Template 936883817777865362
......Button Template 936884428414865363
...region templates
......region template 936886425092865365
......region template 936888203598865371
......region template 936889721148865373
......region template 936890925366865374
......region template 936891504925865375
......region template 936892120369865376
...List Templates
......list template 8548028083899041
......list template 936898727314865389
...report templates
......report template 936900209975865391
......report template 936902514655865394
...label templates
......label template 936904524832865485
......label template 936904703644865485
...breadcrumb templates
......template 936904805731865485
...popup list of values templates
......template 936905603379865499
...calendar templates
...application themes
......theme 936905710643865499
...build options used by application 4950
...messages used by application: 4950
...dynamic translations used by application: 4950
...Language Maps for Application 4950
...Shortcuts
...web services (9iR2 or better)
...shared queries
...report layouts
...authentication schemes
......scheme 936873424775859940
...plugins
...done
SQL> 
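
Incidentally, those two SP2-0734 warnings appear to come from a couple of CGI-style header lines (Content-type and Content-length) that the export utility writes at the top of the file. They’re harmless, but if you’d prefer a clean run, you can strip them out before the import. As before, this is a sketch against a hypothetical three-line sample rather than the real f4950.sql – run the same sed command against the export file itself :

```shell
# Hypothetical sample standing in for the top of f4950.sql
cat > header_sample.sql <<'EOF'
Content-type: text/html; charset=utf-8
Content-length: 0
prompt  APPLICATION 4950 - XE_ADMIN
EOF

# Delete any Content-type / Content-length header lines, keeping a
# .bak copy. Point sed at f4950.sql to clean the real export file.
sed -i.bak '/^Content-type:/d; /^Content-length:/d' header_sample.sql

cat header_sample.sql
```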

As we have imported the application with the same ID as it had originally (4950), the Getting Started with Oracle Database 11g Express Edition desktop icon should still work in exactly the same way…

xe_admiin_apex5

NOTE – the first time you click on a tab that requires login, there is a bit of an issue.
Instead of prompting for login credentials, the text “Content-type:text/html; charset=utf8” appears in the top-left of the page.
If you click the tab a second time, you will get prompted for login credentials as expected.

Once you’ve connected, the tabs should work pretty much as usual :

storage_tab

sessions_tab

parameters_tab

As a word of caution, I’d be wary of using the Application Express tab for APEX admin on the database.
Instead, I’d use the APEX URL specific to the installed APEX version for this purpose.

Right, off to sort out the clutter in the garage.


Filed under: APEX, Oracle, PL/SQL, SQL Tagged: dba_directories, UTL_FILE, wwv_flow_utilities.export_application_to_clob