
The Anti-Kyte

Oracle - for when it was like that when you got there

Resolving Hardware Issues with a Kernel Upgrade in Linux Mint

Sun, 2016-02-07 11:40

One evening recently, whilst climbing the wooden hills with netbook in hand, I encountered a cat who had decided that halfway up the stairs was a perfect place to catch forty winks.
One startled moggy later, I had become the owner of what I can only describe as…an ex-netbook.

Now, finally, I’ve managed to get a replacement (netbook, not cat).

As usual when I get a new machine, the first thing I did was to replace Windows with Linux Mint…with the immediate result being that the wireless card stopped working.

The solution ? Don’t (kernel) panic, kernel upgrade !

Support for most of the hardware out there is included in the Linux Kernel. The kernel is enhanced and released every few months. However, distributions, such as Mint, tend to stick on one kernel version for a while in order to provide a stable base on which to develop.
This means that, if Linux is not playing nicely with your Wireless card/web-cam/any other aspect of your machine’s hardware, a kernel upgrade may resolve your problem.
Obviously it’s always good to do a bit of checking to see if this might be the case.
It’s also good to have a way of putting things back as they were should the change we’re making not have the desired effect.

What I’m going to cover here is the specific issue I encountered with my new Netbook and the steps I took to figure out what kernel version might fix the problem.
I’ll then detail the kernel upgrade itself.

Machine details

The machine in question is an Acer TravelMate-B116.
It has an 11.6 inch screen, 4GB RAM and a 500GB HDD.
For the purposes of the steps that follow, I was able to connect to the internet via a wired connection to my router. Well, up until I got the wireless working.
The Linux OS I’m using is Linux Mint 17.3 Cinnamon.
Note that I have disabled UEFI and am booting the machine in Legacy mode.

Standard Warning – have a backup handy !

In my particular circumstances, I was trying to configure a new machine. If it all went wrong, I could simply re-install Mint and be back where I started.
If you have stuff on your machine that you don’t want to lose, it’s probably a good idea to back it up onto separate media ( e.g. a USB stick).
Additionally, if you are not presented with a grub menu when you boot your machine, you may consider running the boot-repair tool.
This will ensure that you have the option of which kernel to use if you have more than one to choose from ( which will be the case once you’ve done the kernel upgrade).

It is possible that upgrading the kernel may cause issues with some of the hardware that is working fine with the kernel you currently have installed, so it’s probably wise to be prepared.

Identifying the card

The first step then, is to identify exactly which wireless network card is in the machine.
From a terminal window …

lspci

00:00.0 Host bridge: Intel Corporation Device 2280 (rev 21)
00:02.0 VGA compatible controller: Intel Corporation Device 22b1 (rev 21)
00:0b.0 Signal processing controller: Intel Corporation Device 22dc (rev 21)
00:13.0 SATA controller: Intel Corporation Device 22a3 (rev 21)
00:14.0 USB controller: Intel Corporation Device 22b5 (rev 21)
00:1a.0 Encryption controller: Intel Corporation Device 2298 (rev 21)
00:1b.0 Audio device: Intel Corporation Device 2284 (rev 21)
00:1c.0 PCI bridge: Intel Corporation Device 22c8 (rev 21)
00:1c.2 PCI bridge: Intel Corporation Device 22cc (rev 21)
00:1c.3 PCI bridge: Intel Corporation Device 22ce (rev 21)
00:1f.0 ISA bridge: Intel Corporation Device 229c (rev 21)
00:1f.3 SMBus: Intel Corporation Device 2292 (rev 21)
02:00.0 Network controller: Intel Corporation Device 3165 (rev 81)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)

It looks like the penultimate entry is our wireless card.
It is possible to get details of the card by using “Intel Corporation Device 3165” as a search term. However, we may be able to get the name of the card directly by running…

lspci -vq |grep -i wireless -B 1 -A 4

In my case, this returns :

02:00.0 Network controller: Intel Corporation Wireless 3165 (rev 81)
	Subsystem: Intel Corporation Dual Band Wireless AC 3165
	Flags: bus master, fast devsel, latency 0, IRQ 200
	Memory at 91100000 (64-bit, non-prefetchable) [size=8K]
	Capabilities: <access denied>

Further digging around reveals that, according to Intel, this card is supported in Linux starting at kernel version 4.2.

Now, which version of the Kernel are we actually running ?

Identifying the current kernel version and packages

This is relatively simple. In the Terminal just type :

uname -r

On Mint 17.3, the output is :

3.19.0-32-generic

At this point, we now know that an upgrade to the kernel may well solve our wireless problem. The question now is, which packages do we need to install to effect the upgrade ?

If you look in the repositories, there appear to be at least two distinct versions of kernel packages, the generic and something called low-latency.
In order to be confident of which packages we want to get, it’s probably a good idea to work out what we have now.
This can be achieved by searching the installed packages for the version number of the current kernel.
We can do this in the terminal :

dpkg --list |grep 3.19.0-32 |awk '{print $2}'

In my case, this returned :

linux-headers-3.19.0-32
linux-headers-3.19.0-32-generic
linux-image-3.19.0-32-generic
linux-image-extra-3.19.0-32-generic
linux-kernel-generic

As an alternative, you could use the graphical Synaptic Package Manager.
You can start this from the menu ( Administration/Synaptic Package Manager).

[Screenshot: Synaptic Package Manager]

Now we know what we’ve got, the next step is to find the kernel version that we need…

Getting the new kernel packages

It may well be the case that the kernel version you’re after has already been added to the distro’s repository.
To see if this is the case, use Synaptic Package Manager to search as follows :

Start Synaptic Package Manager from the System Menu.
You will be prompted for your password.

Click the Status button and select Not Installed

[Screenshot: Synaptic – Status filter set to Not Installed]

In the Quick filter bar, enter the text : linux-headers-4.2*-generic

[Screenshot: Synaptic – Quick filter search]

This should give you a list of any kernel 4.2 versions available in the repository.
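
If you’d prefer to check from the terminal instead, apt-cache should give you much the same picture (a quick sketch; the exact package names you see will depend on which repositories you have enabled) :

apt-cache search linux-headers-4.2 | grep generic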

If, as I did, you find the version you’re looking for, you need to select the packages that are equivalent to the ones you already have installed on your system.
Incidentally, there are a number of 4.2 kernel versions available, so I decided to go for the latest.
In my case then, I want to install :

  • linux-headers-4.2.0-25
  • linux-headers-4.2.0-25-generic
  • linux-image-4.2.0-25-generic
  • linux-image-extra-4.2.0-25-generic

NOTE – If you don’t find the kernel version you are looking for, you can always download the packages directly using these instructions.

Assuming we have found the version we want, we need to now search for the relevant packages.
In the Quick filter field in Synaptic, change the search string to : linux-*4.2.0-25

To mark the packages for installation, right-click each one in turn and select Mark for Installation.

[Screenshot: Synaptic – packages marked for installation]

Once you’ve selected them all, hit the Apply button.

Once the installation is completed, you need to re-start your computer.

On re-start, you should find that the Grub menu has an entry for Advanced Options.
If you select this, you’ll see that you have a list of kernels to choose to boot into.
This comes in handy if you want to go back to running the previous kernel version.

For now though, we’ll boot into the kernel we’ve just installed.
We can confirm that the installation has been successful, once the machine starts, by opening a Terminal and running :

uname -r

If all has gone to plan, we should now see…

4.2.0-25-generic

Even better in my case, my wireless card has now been recognised.
Opening the systray icon, I can enable wireless and connect to my router.
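
If you want to double-check that a driver has actually been bound to the card, lspci can tell you which kernel module is in use (02:00.0 is the bus address from the earlier listing; yours may differ) :

lspci -k -s 02:00.0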

Backing out of the Kernel Upgrade

If you find that the effects of the kernel upgrade are undesirable, you can always go back to the kernel you started with.
If at all possible, I’d recommend starting Mint using the old kernel before doing this.

If you’re running on the kernel for which you are deleting the packages, you may get some alarming warnings. However, once you re-start, you should be back to your original kernel version.

The command then, is :

sudo apt-get remove linux-headers-4.2* linux-image-4.2*

…where 4.2 is the version of the kernel you want to remove.
Run this and the output looks like this…

The following packages will be REMOVED
  linux-headers-4.2.0-25 linux-headers-4.2.0-25-generic
  linux-image-4.2.0-25-generic linux-image-extra-4.2.0-25-generic
  linux-signed-image-4.2.0-25-generic
0 to upgrade, 0 to newly install, 5 to remove and 7 not to upgrade.
After this operation, 294 MB disk space will be freed.
Do you want to continue? [Y/n]

Once the packages have been removed, the old kernel will be in use on the next re-boot.
After re-starting, you can check this with :

uname -r

Thankfully, these steps proved unnecessary in my case and the kernel upgrade has saved me from hardware cat-astrophe.


Filed under: Linux, Mint Tagged: Acer TravelMate-B116, apt-get remove, dpkg, Intel Corporation Dual Band Wireless AC 3165, kernel upgrade, lspci, synaptic package manager, uname -r

Making Datapump Import Stat-tastically faster

Wed, 2016-01-20 14:29

I’m determined to adopt a positive mental attitude this year.
When the train company explains delays by saying we have the wrong kind of sunshine, I prefer to marvel at the fact that the sun is shining at all in the depths of an English Winter. Let’s face it, it’s a rare enough phenomenon in the summer.
The slow-running of the train caused by this rare natural phenomenon also gives me more time to write this post.
There’s more “good” news – Datapump Import tends to be rather slow when it comes to applying optimizer statistics.
This is because it insists on doing it one row at a time.
All of which provides us with an opportunity to optimize our import job… by not bothering to import the stats.
“Hang on”, you’re thinking, “won’t that mean you have to re-gather stats after the import, which probably won’t be that quick either ?”

Not necessarily. You just need to think positive…

What I’m going to cover here is :

  • How to save stats to a table
  • Export without the stats
  • Import without stats
  • Applying stats from a table

I’m using 11gR2 Express Edition in the examples that follow.
We’ll start by exporting the HR schema and then import the tables into the HR_DEV schema.

As there are overhead-line problems in the Watford Junction area, we’ve also got time to choose between running the datapump export and import on the command line or via the DBMS_DATAPUMP package.

Saving Stats to a Table

Let’s start by making sure that we have some optimizer stats on the tables in the HR schema :

select table_name, last_analyzed, num_rows
from dba_tab_statistics
where owner = 'HR'
order by table_name
/

TABLE_NAME                     LAST_ANALYZED        NUM_ROWS
------------------------------ ------------------ ----------
COUNTRIES                      13-JAN-16                  25
DEPARTMENTS                    13-JAN-16                  27
EMPLOYEES                      13-JAN-16                 107
JOBS                           13-JAN-16                  19
JOB_HISTORY                    13-JAN-16                  10
LOCATIONS                      13-JAN-16                  23
REGIONS                        13-JAN-16                   4

7 rows selected.

I can see that all of the tables in the schema have stats, which is good enough for my purposes here.
If you find that the LAST_ANALYZED value is null for the tables in your database, or if you just decide that you want to take a less cavalier approach to the relevance of your Optimizer stats, you can update them by running :

begin
    dbms_stats.gather_schema_stats('HR');
end;
/

Now we know we’ve got some stats, we need to save them to a table. This process is made fairly straightforward by DBMS_STATS. To create an appropriately structured table in the HR schema, we simply need to run :

begin
    dbms_stats.create_stat_table( ownname => 'HR', stattab => 'exp_stats');
end;
/

The CREATE_STAT_TABLE procedure creates the table specified in the stattab parameter, in the schema specified in the ownname parameter.

So, we now have a table in HR called EXP_STATS, which looks like this…

desc hr.exp_stats

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 STATID                                             VARCHAR2(30)
 TYPE                                               CHAR(1)
 VERSION                                            NUMBER
 FLAGS                                              NUMBER
 C1                                                 VARCHAR2(30)
 C2                                                 VARCHAR2(30)
 C3                                                 VARCHAR2(30)
 C4                                                 VARCHAR2(30)
 C5                                                 VARCHAR2(30)
 N1                                                 NUMBER
 N2                                                 NUMBER
 N3                                                 NUMBER
 N4                                                 NUMBER
 N5                                                 NUMBER
 N6                                                 NUMBER
 N7                                                 NUMBER
 N8                                                 NUMBER
 N9                                                 NUMBER
 N10                                                NUMBER
 N11                                                NUMBER
 N12                                                NUMBER
 D1                                                 DATE
 R1                                                 RAW(32)
 R2                                                 RAW(32)
 CH1                                                VARCHAR2(1000)
 CL1                                                CLOB


Now we need to populate this table. Once again, we need to use DBMS_STATS…

begin
    dbms_stats.export_schema_stats( ownname => 'HR', stattab => 'exp_stats');
end;
/

…and we can see that we now have some data in the table…

select count(*)
from exp_stats
/

  COUNT(*)
----------
        62

The Export

When it comes to datapump exports, you may reasonably take the view that the best policy is to export everything and then pick and choose what you want from the resultant dump file when importing.

Speaking of the dump file, if you want to find it on the OS, you’ll need to know the location pointed to by the DATA_PUMP_DIR directory object. To find this :

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

If you’re running the datapump utility from the command line…

expdp system/pwd@XE directory=data_pump_dir dumpfile=hr_full_exp.dmp schemas=HR

…where pwd is the password for SYSTEM.
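
Incidentally, expdp also accepts an EXCLUDE parameter, so if you’d rather keep the optimizer stats out of the dump file altogether, something like this should do it :

expdp system/pwd@XE directory=data_pump_dir dumpfile=hr_full_exp.dmp schemas=HR exclude=STATISTICS

For this post though, I’m exporting everything, stats included, and will deal with them at import time.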

Alternatively, you can use the PL/SQL API as implemented through the DBMS_DATAPUMP package :

declare
    l_dph number;
    l_state varchar2(30) := 'NONE';
    l_status ku$_status;
begin
    l_dph := dbms_datapump.open
    (
        operation => 'EXPORT',
        job_mode => 'SCHEMA',
        job_name => 'HR_FULL_EXP'
    );
    
    -- Just the HR schema...
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'SCHEMA_EXPR',
        value => q'[ IN ('HR') ]'
    );
    
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_full_exp.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_dump_file,
        reusefile => 1
    );
    
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_full_exp.log',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_log_file,
        reusefile => 1
    );
    
    dbms_datapump.log_entry
    (
        handle => l_dph,
        message => 'Job starting at '||to_char(sysdate, 'HH24:MI:SS')
    );
    
    dbms_datapump.start_job( handle => l_dph);

    --
    -- Wait for the job to finish...
    --
    while l_state not in ('COMPLETED', 'STOPPED')
    loop
        dbms_datapump.get_status
        (
            handle => l_dph,
            mask => dbms_datapump.ku$_status_job_error +
                dbms_datapump.ku$_status_job_status +
                dbms_datapump.ku$_status_wip,
            timeout => -1,
            job_state => l_state,
            status => l_status
        );
    end loop;
    dbms_datapump.detach( l_dph);
end;
/

After we’ve run this, we can check the log file and see that the EXP_STATS table has been included in the export by checking the export.log file that gets created in the DATA_PUMP_DIR directory…

...
. . exported "HR"."EXP_STATS"                            20.03 KB      62 rows
. . exported "HR"."COUNTRIES"                            6.367 KB      25 rows
. . exported "HR"."DEPARTMENTS"                          7.007 KB      27 rows
. . exported "HR"."EMPLOYEES"                            16.80 KB     107 rows
. . exported "HR"."JOBS"                                 6.992 KB      19 rows
. . exported "HR"."JOB_HISTORY"                          7.054 KB      10 rows
. . exported "HR"."LOCATIONS"                            8.273 KB      23 rows
. . exported "HR"."REGIONS"                              5.476 KB       4 rows
...
Importing without Applying stats

To import the HR tables into the HR_DEV schema, whilst ensuring that datapump doesn’t apply stats…

If you’re using the import command-line utility …

impdp system/pwd@XE directory=data_pump_dir dumpfile=hr_full_exp.dmp remap_schema=HR:HR_DEV exclude=STATISTICS

Alternatively, using DBMS_DATAPUMP…

declare
    l_dph number;
    l_state varchar2(30) := 'NONE';
    l_status ku$_status;
        
begin

    l_dph := dbms_datapump.open
    (
        operation => 'IMPORT',
        job_mode => 'SCHEMA',
        job_name => 'HR_IMP_NO_STATS'
    );

    --
    -- Import HR objects from the export file into the HR_DEV schema
    --    
    dbms_datapump.metadata_remap
    (
        handle => l_dph,
        name => 'REMAP_SCHEMA',
        old_value => 'HR',
        value => 'HR_DEV'
    );
    
    -- Don't import any stats...
    dbms_datapump.metadata_filter
    (
        handle => l_dph,
        name => 'EXCLUDE_PATH_EXPR',
        value => q'[ = 'STATISTICS']'
    );
    
    dbms_datapump.set_parameter
    (
        handle => l_dph,
        name => 'TABLE_EXISTS_ACTION',
        value => 'REPLACE'
    );
    
   dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_full_exp.dmp',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_dump_file,
        reusefile => 1
    );
    
    dbms_datapump.add_file
    (
        handle => l_dph,
        filename => 'hr_full_imp.log',
        directory => 'DATA_PUMP_DIR',
        filetype => dbms_datapump.ku$_file_type_log_file,
        reusefile => 1
    );

    dbms_datapump.log_entry
    (
        handle => l_dph,
        message => 'Job starting at '||to_char(sysdate, 'HH24:MI:SS')
    );
    
    dbms_datapump.start_job( handle => l_dph);
 
    -- Wait for the job to finish...
 
    while l_state not in ('COMPLETED', 'STOPPED')
    loop
        dbms_datapump.get_status
        (
            handle => l_dph,
            mask => dbms_datapump.ku$_status_job_error +
                dbms_datapump.ku$_status_job_status +
                dbms_datapump.ku$_status_wip,
            timeout => -1,
            job_state => l_state,
            status => l_status
        );
    end loop;
    dbms_datapump.detach( l_dph);
end;
/    

If we now check, we can confirm that there are indeed, no stats on the tables we’ve just imported…

select table_name, last_analyzed, num_rows
from dba_tab_statistics
where owner = 'HR_DEV'
order by table_name
/

TABLE_NAME                     LAST_ANALYZED        NUM_ROWS
------------------------------ ------------------ ----------
COUNTRIES
DEPARTMENTS
EMPLOYEES
EXP_STATS
JOBS
JOB_HISTORY
LOCATIONS
REGIONS

8 rows selected.


Now for the final touch, apply the stats that we have in the EXP_STATS table. Should be easy enough…

Applying stats from a table

If we were importing into a schema with the same name as the one we saved stats for, this would be straightforward.
However, in this case, we’re importing into a different schema – HR_DEV.
Therefore, if we want to avoid “leaves-on-the-line”, we need to do a little light hacking.

To make things a bit clearer, let’s have a look at the contents of the C5 column of our EXP_STATS table…

select distinct(c5)
from exp_stats
/

C5
------------------------------
HR


Yes, the table owner (for that is what the C5 column contains) is set to HR. This is reasonable enough as it was the stats for this schema which we saved to the table in the first place. However, this means that the stats will not be applied to the tables in the HR_DEV schema unless we do this…

update exp_stats
set c5 = 'HR_DEV'
where c5 = 'HR'
/

62 rows updated.

commit;

Commit complete.

Now that’s done, we can apply the stats with a call to DBMS_STATS.IMPORT_SCHEMA_STATS…

begin
    dbms_stats.import_schema_stats(ownname => 'HR_DEV', stattab => 'exp_stats');
end;
/

Check again, and the stats are now on the tables :

select table_name, last_analyzed, num_rows
from dba_tab_statistics
where owner = 'HR_DEV'
order by table_name
/

TABLE_NAME                     LAST_ANALYZED        NUM_ROWS
------------------------------ ------------------ ----------
COUNTRIES                      13-JAN-16                  25
DEPARTMENTS                    13-JAN-16                  27
EMPLOYEES                      13-JAN-16                 107
EXP_STATS
JOBS                           13-JAN-16                  19
JOB_HISTORY                    13-JAN-16                  10
LOCATIONS                      13-JAN-16                  23
REGIONS                        13-JAN-16                   4

8 rows selected.


Whilst importing stats separately does entail a few more steps, it does mean that there is rather less hanging around for datapump import to do its impression of a train trying to get through “the wrong kind of snow”.


Filed under: Oracle, PL/SQL Tagged: DataPump, dba_tab_statistics, dbms_datapump, dbms_datapump.metadata_filter, dbms_stats, dbms_stats.create_stat_table, dbms_stats.export_schema_stats, dbms_stats.import_schema_stats, EXCLUDE_PATH_EXPR, expdp, impdp, importing stats into a different schema using dbms_stats, remap_schema

DBMS_METADATA and SELECT_CATALOG_ROLE – Cat Herding in Oracle

Sun, 2016-01-10 11:28

Last year we got a kitten. Little Cleo was full of the joys of…well…being a cat. Then, one day, she just disappeared.
Several months later, having given up hope of ever seeing her again, we adopted Nutmeg.
Then, just before Christmas, Cleo suddenly re-appeared.
It’s a complete mystery as to where she had been for the last year and she has not condescended to comment on the matter.
The end result is that we are now a two cat family.
This brings with it certain complications.
When they aren’t studiously ignoring each other, the cats sit there giving each other hard stares for hours on end.
I think there may be some tension over exactly just who owns that fluffy ball.
To ensure that our sleep is not disturbed by these two fighting like cats in a sack, it’s necessary to ensure that they are in separate rooms before we retire for the evening.
As a result we’ve become rather expert at the art of Cat Herding, which largely consists of bribery with cat-nip, among other things.

Whilst acquiring a reputation as a “dealer” among the feline population of Milton Keynes, I have had cause to reflect on the similarity of our new hobby with the trials and tribulations of persuading DBMS_METADATA.GET_DDL that you do actually have permissions to see the source code you’ve asked for.

This is regularly a fairly tortuous process. In what follows I will be examining just why SELECT_CATALOG_ROLE is DBMS_METADATA cat-nip…and why SELECT ANY DICTIONARY isn’t.
I’ll also look at how you can stop chasing your tail and get this function to behave itself when invoked from within a stored program unit…

SELECT ANY DICTIONARY

According to the Oracle Documentation, the SELECT ANY DICTIONARY privilege allows you to :

“Query any data dictionary object in the SYS schema. This privilege lets you selectively override the default FALSE setting of the O7_DICTIONARY_ACCESSIBILITY initialization parameter.”

Some explanation is required here. Prior to Oracle 8, a user with the SELECT ANY TABLE privilege had access to any table in the database. From 8 onwards, this privilege was restricted to exclude the Data Dictionary. This is where SELECT ANY DICTIONARY came in.

Before we go any further, let’s just make sure that the O7_DICTIONARY_ACCESSIBILITY parameter is not set to TRUE…

select value
from v$parameter
where name = 'O7_DICTIONARY_ACCESSIBILITY'
/

VALUE
--------------------------------------------------------------------------------
FALSE

Now, to see the effect of this privilege on DBMS_METADATA.GET_DDL, let’s create a user :

grant create session, select any dictionary
    to cleo identified by password_you_can_type_with_paws
/

If we now connect to the database as cleo, we can see that she has permissions to query the DBA_SOURCE view, among other things, and therefore to retrieve the source for a procedure in the HR schema :

set heading off
set pages 500
set lines 130
select text
from dba_source
where owner = 'HR'
and name = 'ADD_JOB_HISTORY'
and type = 'PROCEDURE'
order by line
/

Sure enough, we get the expected output :

PROCEDURE add_job_history
  (  p_emp_id          job_history.employee_id%type
   , p_start_date      job_history.start_date%type
   , p_end_date        job_history.end_date%type
   , p_job_id          job_history.job_id%type
   , p_department_id   job_history.department_id%type
   )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

13 rows selected.

SQL>

By default, EXECUTE on the DBMS_METADATA package is granted to PUBLIC. So, invoking the GET_DDL function for the same program unit should return the DDL statement required to re-create it. We already know we have access to the source so this should be no problem, right ?

set long 5000
select dbms_metadata.get_ddl( schema => 'HR', name => 'ADD_JOB_HISTORY', object_type => 'PROCEDURE')
from dual
/
ERROR:
ORA-31603: object "ADD_JOB_HISTORY" of type PROCEDURE not found in schema "HR"
ORA-06512: at "SYS.DBMS_METADATA", line 4018
ORA-06512: at "SYS.DBMS_METADATA", line 5843
ORA-06512: at line 1

no rows selected

SQL>

Has Oracle forgotten about this procedure ? Maybe it’s rolled under the sofa ?
Either way, further investigation would seem to be in order…

No SELECT_CATALOG_ROLE, no comment

Like a cat, DBMS_METADATA.GET_DDL can appear to be, to put it delicately, rather independently minded. It certainly doesn’t always do what it’s told.

To try to determine what’s happening, we could do some tracing and look through the recursive statements to see precisely what is causing the error. However, there is a quicker way.
Let’s start by looking at the comments in the DBMS_METADATA package header :

-- SECURITY
-- This package is owned by SYS with execute access granted to PUBLIC.
-- It runs with invokers rights, i.e., with the security profile of
-- the caller.  It calls DBMS_METADATA_INT to perform privileged
-- functions.
-- The object views defined in catmeta.sql implement the package's security
-- policy via the WHERE clause on the public views which include syntax to
-- control user access to metadata: if the current user is SYS or has
-- SELECT_CATALOG_ROLE, then all objects are visible; otherwise, only
-- objects in the schema of the current user are visible.

This points us in the direction of the file…

$ORACLE_HOME/rdbms/admin/catmeta.sql

Sure enough, when we get to line 10209, things become a bit clearer…

-- base view for procedures, functions, packages and package bodies

create or replace force view ku$_base_proc_view of ku$_proc_t
  with object identifier (obj_num) as
  select '1','1',
         oo.obj#,
         oo.type#,
         value(o),
         sys.dbms_metadata_util.get_source_lines(oo.name,oo.obj#,oo.type#)
  from  sys.ku$_edition_schemaobj_view o, sys.ku$_edition_obj_view oo
  where (oo.type# = 7 or oo.type# = 8 or oo.type# = 9 or oo.type# = 11)
    and oo.obj#  = o.obj_num and oo.linkname is NULL
         AND (SYS_CONTEXT('USERENV','CURRENT_USERID') IN (o.owner_num, 0) OR
              EXISTS ( SELECT * FROM sys.session_roles
                       WHERE role='SELECT_CATALOG_ROLE' ))
/

Unless you are either the owner of the stored program unit you’re trying to retrieve, or connected as USER_ID 0 ( i.e. SYS), the only way that you’re going to get anything back from a query on this view is if you have been granted a role called SELECT_CATALOG_ROLE.
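
A quick way to check whether the role is actually active in your current session is to query SESSION_ROLES :

select role
from session_roles
where role = 'SELECT_CATALOG_ROLE'
/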

To verify this, let’s create another user…

grant create session
    to nutmeg identified by must_have_catnip
/

grant select_catalog_role
    to nutmeg
/

Once again, we have access to the DBA_SOURCE view…

set heading off
set pages 500
set lines 130
select text
from dba_source
where owner = 'HR'
and name = 'ADD_JOB_HISTORY'
and type = 'PROCEDURE'
order by line
/

PROCEDURE add_job_history
  (  p_emp_id          job_history.employee_id%type
   , p_start_date      job_history.start_date%type
   , p_end_date        job_history.end_date%type
   , p_job_id          job_history.job_id%type
   , p_department_id   job_history.department_id%type
   )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

13 rows selected.

SQL>

…however, we can now also use DBMS_METADATA.GET_DDL …

set long 5000
select dbms_metadata.get_ddl( schema => 'HR', name => 'ADD_JOB_HISTORY', object_type => 'PROCEDURE')
from dual
/
  CREATE OR REPLACE PROCEDURE "HR"."ADD_JOB_HISTORY"
  (  p_emp_id          job_history.employee_id%type
   , p_start_date      job_history.start_date%type
   , p_end_date        job_history.end_date%type
   , p_job_id          job_history.job_id%type
   , p_department_id   job_history.department_id%type
   )
IS
BEGIN
  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date,
 p_job_id, p_department_id);
END add_job_history;

SQL>

It seems that DBMS_METADATA has retrieved its fluffy ball from under the sofa.

DBMS_METADATA.GET_DDL in a Stored Program Unit

The fact that SELECT_CATALOG_ROLE is a role rather than a system privilege does tend to make life interesting if you put calls to DBMS_METADATA.GET_DDL into a stored program unit.

To demonstrate, let’s create a function that does just that (in the nutmeg schema – i.e. a schema that does not have the role granted to it) :

alter session set current_schema = nutmeg
/

create or replace function get_ddl( i_owner in varchar2, i_name in varchar2, i_type in varchar2)
    return clob
is
begin
    dbms_output.put_line('Listing active session roles...');
    for r_role in (select role from sys.session_roles) loop
        dbms_output.put_line('Role : '||r_role.role);
    end loop;
    dbms_output.put_line('Roles listed');
    return dbms_metadata.get_ddl( schema => i_owner, name => i_name, object_type => i_type);
end;
/

This should work fine for a user with the role granted, shouldn’t it ?
Let’s test it (once again, connected as cleo)…

set serveroutput on size unlimited
select get_ddl('HR', 'ADD_JOB_HISTORY', 'PROCEDURE')
from dual
/
ERROR:
ORA-31603: object "ADD_JOB_HISTORY" of type PROCEDURE not found in schema "HR"
ORA-06512: at "SYS.DBMS_METADATA", line 4018
ORA-06512: at "SYS.DBMS_METADATA", line 5843
ORA-06512: at "NUTMEG.GET_DDL", line 10

no rows selected

Listing active session roles...
Roles listed
SQL>

Because we’re calling the DBMS_METADATA.GET_DDL function from inside a stored program unit, the role is disabled.
In order for this to work we need to make the stored program unit invoker’s rights…

alter session set current_schema = nutmeg
/

create or replace function get_ddl( i_owner in varchar2, i_name in varchar2, i_type in varchar2)
    return clob
    authid current_user
is
begin
    dbms_output.put_line('Listing active session roles...');
    for r_role in (select role from sys.session_roles) loop
        dbms_output.put_line('Role : '||r_role.role);
    end loop;
    dbms_output.put_line('Roles listed');
    return dbms_metadata.get_ddl( schema => i_owner, name => i_name, object_type => i_type);
end;
/

Now, when we invoke this function (as cleo once again), the role is still applicable…

set serveroutput on size unlimited
set long 5000
set heading off
select get_ddl('HR', 'ADD_JOB_HISTORY', 'PROCEDURE')
from dual
/

  CREATE OR REPLACE PROCEDURE "HR"."ADD_JOB_HISTORY"
  (  p_emp_id          job_history.employee_id%type
   , p_start_date      job_history.start_date%type
   , p_end_date        job_history.end_date%type
   , p_job_id          job_history.job_id%type
   , p_department_id   job_history.department_id%type
   )
IS
BEGIN

  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date, p_job_id, p_department_id);
END add_job_history;

Listing active session roles...
Role : SELECT_CATALOG_ROLE
Role : HS_ADMIN_SELECT_ROLE
Roles listed
SQL>

It would seem then, that only a user granted SELECT_CATALOG_ROLE can usefully invoke DBMS_METADATA.GET_DDL in a stored program unit to retrieve DDL for objects not owned by them.

Something you might want to consider at this point, is that SELECT_CATALOG_ROLE is itself granted the HS_ADMIN_SELECT_ROLE role. Between them, these roles have SELECT access on quite a large number of SYS objects :

select count(*)
from dba_tab_privs
where grantee in ('SELECT_CATALOG_ROLE', 'HS_ADMIN_SELECT_ROLE')
/

  COUNT(*)
----------
      2207

SQL>

At this point, you may well ask if there is any way for users to utilise our function without having this role granted.
After all, a common approach to application security is to bestow execute access to users on a stored program unit without them having any visibility of the underlying tables and views.

Well, there is…

Doing without the SELECT_CATALOG_ROLE

Before we go any further I think I should point out that there are several issues with creating objects in the SYS schema.

The objects in the schema are effectively part of the Oracle software. The schema can be seen as being analogous to root on a *nix system.

There are various admonitions against performing DDL in the SYS schema.

As with most “golden rules” however, there is at least one exception – in this case, the Password Verify Function springs to mind.

I suppose the best reason for avoiding this sort of thing is that it would only take one moment of inattention to cause potentially irreparable damage to your Oracle installation. Even with a backup, re-installing Oracle is no trivial matter. Therefore, I strongly suggest that you consider carefully whether the benefits of the approach I’m about to take here outweigh the issues of granting SELECT_CATALOG_ROLE.

So then, as SYS…

create function get_ddl( i_owner in varchar2, i_name in varchar2, i_type in varchar2)
    return clob
is
begin
    dbms_output.put_line('Listing active session roles...');
    for r_role in (select role from sys.session_roles) loop
        dbms_output.put_line('Role : '||r_role.role);
    end loop;
    dbms_output.put_line('Roles listed');
    dbms_output.put_line('Userid is : '||sys_context('userenv', 'current_userid'));
    return dbms_metadata.get_ddl( schema => i_owner, name => i_name, object_type => i_type);
end;
/

grant execute on get_ddl to cleo
/

Note that we’re using CREATE rather than CREATE OR REPLACE to ensure that we don’t accidentally overwrite anything.

Now, when we call this function as cleo…

set serveroutput on
set heading off
set lines 130
set long 5000
select sys.get_ddl('HR', 'ADD_JOB_HISTORY', 'PROCEDURE')
from dual
/

  CREATE OR REPLACE PROCEDURE "HR"."ADD_JOB_HISTORY"
  (  p_emp_id          job_history.employee_id%type
   , p_start_date      job_history.start_date%type
   , p_end_date        job_history.end_date%type
   , p_job_id          job_history.job_id%type
   , p_department_id   job_history.department_id%type
   )
IS
BEGIN

  INSERT INTO job_history (employee_id, start_date, end_date,
                           job_id, department_id)
    VALUES(p_emp_id, p_start_date, p_end_date,
 p_job_id, p_department_id);
END add_job_history;

Listing active session roles...
Roles listed
Userid is : 0
SQL>

The fact that we don’t have the role becomes irrelevant because the function is running with definer’s rights (i.e. as SYS, which as a CURRENT_USERID of 0).

Conclusions

The security model implemented for DBMS_METADATA is quite unusual for Oracle supplied PL/SQL packages.
As we have seen, this package, and especially its GET_DDL function, can behave in unexpected (and possibly entertaining) ways.


Filed under: Oracle, PL/SQL Tagged: 07_dictionary_accessibility, catmeta.sql, current_userid, DBMS_METADATA, DBMS_METADATA.GET_DDL, definer's rights, ku$_edition_schemaobj_view, ORA-31603, select any dictionary, SELECT_CATALOG_ROLE, sys.session_roles, SYS_CONTEXT

Getting one of your Five-a-Day – connecting Remotely to a Raspberry Pi from Linux Mint

Sat, 2015-12-26 12:42

It’s Christmas. To mark the occasion, my son bought me a top-of-the-range computer…

[Photo: the Raspberry Pi board]

Christmas has come early ! Er, hang on…

Yes, a Raspberry Pi 2 Model B, complete with a 900 MHz quad-core ARM processor and 1GB RAM.

Getting it up and running was a bit more of a challenge than I had anticipated.
The Pi uses HDMI for Video output and my ageing monitor is not equipped for HDMI…

[Photo: the Pi plugged into the TV]

The best program on TV – NOOBS doing its thing.

In the end, I had to “borrow” the TV.
This arrangement was, of necessity, extremely temporary. The TV had to be back in its usual place ready for The Strictly-TOWIE-Dancing-Get-Me-Out-Of-Here Christmas Special, on pain of pain.
Therefore, my first Pi project was to connect to it remotely from another machine, namely, my Linux Mint Laptop.
This will enable me to run the Pi headless (i.e. without a monitor/keyboard/mouse attached to it).

I’m going to cover two different methods of connecting to the Pi.
The first is using ssh to connect to the command line.
The second is to connect remotely to the Raspbian desktop itself.

Just to avoid any confusion, I will be referring to the Raspberry Pi as “the Pi” and the machine I’m connecting from as “Mint”.

About the Environment

The Pi

The Pi I’m using for this is running the Raspbian Jessie OS.
It is set to start the desktop on boot.
The Pi is up and the desktop is running.
The Pi is connected to the router via a network cable.

The Mint Machine

The version of Mint I’m running on is 17.2.

The Network

I’m running on a standard home network with all devices connecting to a router.
The router assigns IP addresses to each connected machine dynamically via DHCP.
Apart from the router itself, no device on the network has a fixed IP address.
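
Incidentally, if you ever need the Pi’s current IP address rather than its name, the simplest way I know of is to run this on the Pi itself :

hostname -I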

SSH – Setup

We need to perform these steps on the Pi.

First of all, we need to know what the machine name of the Pi is.

As the devices connecting to the network are dynamically allocated an IP address, it’s simpler to address a specific machine by name.

So, on the Pi, open a terminal and type :

uname -n

This returns the name of the computer. In my case :

raspberrypi

The next thing we need to do is to make sure that the Pi will accept connections via SSH.

On the Raspbian Desktop, click on the Menu and select Preferences/Raspberry Pi Configuration

[Screenshot: Raspbian menu – Preferences/Raspberry Pi Configuration]

Let’s face it, this is the closest I’m going to get to fresh fruit and veg for the next couple of weeks.

Next click on the Interfaces tab and make sure that ssh is enabled.

[Screenshot: Raspberry Pi Configuration – Interfaces tab]
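
If you’d rather not use the desktop for this, I believe the same setting can be changed from a terminal on the Pi via the text-based configuration tool (the SSH option lives under Advanced or Interfacing Options, depending on your Raspbian version) :

sudo raspi-config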

Once the steps have been completed, we’re ready to test…

SSH – from the remote machine

We want to connect to the Pi from a remote machine as a user that exists on the Pi.
Note that this user does not need to exist on the remote machine.

When we run the ssh command, we need to specify the user we’re connecting as, and the name of the machine we’re connecting to (i.e. the Raspberry Pi itself).

I’m going to connect as the user pi. We’ve already found out that the name of the Raspberry Pi is “raspberrypi”.

So, I just need to open a terminal window on my remote machine and type :

ssh pi@raspberrypi

The first time you ssh to another computer on your network, you’ll get this warning :

The authenticity of host 'raspberrypi (192.168.1.144)' can't be established.
ECDSA key fingerprint is 03:72:d9:84:58:c8:a6:cc:37:bc:c3:47:8f:1c:90:e0.
Are you sure you want to continue connecting (yes/no)? 

Type “yes” and hit Return…

Warning: Permanently added 'raspberrypi,192.168.1.144' (ECDSA) to the list of known hosts.
pi@raspberrypi's password:

Enter the password and…

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.

To confirm that we’re now connected to the pi…

uname -a
Linux raspberrypi 4.1.13-v7+ #826 SMP PREEMPT Fri Nov 13 20:19:03 GMT 2015 armv7l GNU/Linux

Whilst ssh is handy if you just need command line access to the Pi, if you want access to the Desktop, you’ll need to try something a bit different.

Using RDP to run the Pi Desktop Remotely

Now, there is more than one way to do this. I’ve chosen to use RDP as it’s quite simple to setup.

Installing xrdp on the Pi

To start with, we need to install the xrdp package on the Pi. At this point, you can either do this on the machine itself ( by opening a Terminal window), or connect via ssh.
Either way, the command you need to enter is :

sudo apt-get install xrdp

You will be prompted for your password and should then get some output that looks like this :

Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  vnc4server x11-apps x11-session-utils xbase-clients xbitmaps xfonts-base
Suggested packages:
  vnc-java mesa-utils x11-xfs-utils
The following NEW packages will be installed:
  vnc4server x11-apps x11-session-utils xbase-clients xbitmaps xfonts-base xrdp
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 8,468 kB of archives.
After this operation, 17.1 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://mirrordirector.raspbian.org/raspbian/ jessie/main x11-apps armhf 7.7+4 [529 kB]
Get:2 http://mirrordirector.raspbian.org/raspbian/ jessie/main x11-session-utils armhf 7.7+1 [60.1 kB]
Get:3 http://mirrordirector.raspbian.org/raspbian/ jessie/main xbase-clients all 1:7.7+7 [36.7 kB]
Get:4 http://mirrordirector.raspbian.org/raspbian/ jessie/main vnc4server armhf 4.1.1+X4.3.0-37.6 [1,434 kB]
Get:5 http://mirrordirector.raspbian.org/raspbian/ jessie/main xbitmaps all 1.1.1-2 [32.1 kB]
Get:6 http://mirrordirector.raspbian.org/raspbian/ jessie/main xfonts-base all 1:1.0.3 [6,181 kB]
Get:7 http://mirrordirector.raspbian.org/raspbian/ jessie/main xrdp armhf 0.6.1-2 [195 kB]
Fetched 8,468 kB in 35s (236 kB/s)
Selecting previously unselected package x11-apps.
(Reading database ... 123536 files and directories currently installed.)
Preparing to unpack .../x11-apps_7.7+4_armhf.deb ...
Unpacking x11-apps (7.7+4) ...
Selecting previously unselected package x11-session-utils.
Preparing to unpack .../x11-session-utils_7.7+1_armhf.deb ...
Unpacking x11-session-utils (7.7+1) ...
Selecting previously unselected package xbase-clients.
Preparing to unpack .../xbase-clients_1%3a7.7+7_all.deb ...
Unpacking xbase-clients (1:7.7+7) ...
Selecting previously unselected package vnc4server.
Preparing to unpack .../vnc4server_4.1.1+X4.3.0-37.6_armhf.deb ...
Unpacking vnc4server (4.1.1+X4.3.0-37.6) ...
Selecting previously unselected package xbitmaps.
Preparing to unpack .../xbitmaps_1.1.1-2_all.deb ...
Unpacking xbitmaps (1.1.1-2) ...
Selecting previously unselected package xfonts-base.
Preparing to unpack .../xfonts-base_1%3a1.0.3_all.deb ...
Unpacking xfonts-base (1:1.0.3) ...
Selecting previously unselected package xrdp.
Preparing to unpack .../xrdp_0.6.1-2_armhf.deb ...
Unpacking xrdp (0.6.1-2) ...
Processing triggers for man-db (2.7.0.2-5) ...
Processing triggers for fontconfig (2.11.0-6.3) ...
Processing triggers for systemd (215-17+deb8u2) ...
Setting up x11-apps (7.7+4) ...
Setting up x11-session-utils (7.7+1) ...
Setting up xbase-clients (1:7.7+7) ...
Setting up vnc4server (4.1.1+X4.3.0-37.6) ...
update-alternatives: using /usr/bin/vnc4server to provide /usr/bin/vncserver (vncserver) in auto mode
update-alternatives: using /usr/bin/Xvnc4 to provide /usr/bin/Xvnc (Xvnc) in auto mode
update-alternatives: using /usr/bin/x0vnc4server to provide /usr/bin/x0vncserver (x0vncserver) in auto mode
update-alternatives: using /usr/bin/vnc4passwd to provide /usr/bin/vncpasswd (vncpasswd) in auto mode
update-alternatives: using /usr/bin/vnc4config to provide /usr/bin/vncconfig (vncconfig) in auto mode
Setting up xbitmaps (1.1.1-2) ...
Setting up xfonts-base (1:1.0.3) ...
Setting up xrdp (0.6.1-2) ...
Processing triggers for systemd (215-17+deb8u2) ...

Once that little lot has scrolled up your screen, you can exit the session ( just type “exit”).
There are some guides which suggest that you need to re-boot the Pi at this point. I found that this was not necessary. However, if things don’t quite work as described from this point on, it may be worth doing this. After all, “Have you tried turning it off and on again ?” is a cliche for a reason !
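
If you do decide to restart the Pi at this point, you can do so from the ssh session (or a terminal on the Pi) with :

sudo reboot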

It’s probably worth mentioning that, at this point, you should be able to connect from any Windows ( Windows 7 or above) remote machine using the built-in Remote Desktop app.
That’s not we’re after though. Oh no. We want to be able to do this from Mint…

Installing rdesktop on Linux Mint

Back on Mint, open a Terminal window and…

sudo apt-get install rdesktop

Once again you should be prompted for your password ( remember this is for your user on the Mint machine, not on the Pi). You should then see something like …

[sudo] password for mike:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  pcscd
The following NEW packages will be installed
  rdesktop
0 to upgrade, 1 to newly install, 0 to remove and 83 not to upgrade.
Need to get 139 kB of archives.
After this operation, 427 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu/ trusty/universe rdesktop amd64 1.7.1-1ubuntu2 [139 kB]
Fetched 139 kB in 10s (12.6 kB/s)
Selecting previously unselected package rdesktop.
(Reading database ... 192214 files and directories currently installed.)
Preparing to unpack .../rdesktop_1.7.1-1ubuntu2_amd64.deb ...
Unpacking rdesktop (1.7.1-1ubuntu2) ...
Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
Setting up rdesktop (1.7.1-1ubuntu2) ...

Again, I found that no re-boot was required on the Mint machine. Once again, it might be different for you.

Either way, we should now be able to prove that the Pi will run happily without all of those wires plugged into it…

Connecting via rdesktop

…but I’d recommend keeping it all plugged in until you’ve tested first.

To connect from Mint, open a Terminal window and type :

rdesktop raspberrypi -u username_on_pi -p password -g 90%

…where username_on_pi is the name of the user on the Pi and password is the password for that user.

The -g switch tells rdesktop what size to create the window ( in our case, 90% of the screen size on our main machine)

You should now see (after a short interlude whilst the desktop is rendered)….

[Screenshot: the Raspbian desktop running in an rdesktop window]

Alternatively, if you want to run in a full screen, you can use the -f switch for rdesktop instead – i.e. :

rdesktop raspberrypi -u pi -p raspberry -f

Once you’re in full screen mode, you can toggle between full screen and window mode at any point by pressing CTRL+ALT+ENTER.
It’s worth noting if you do this and then minimize the rdesktop window, when you maximize again, desktop can appear to be blank and or the re-drawing might not be complete. I’m not sure why this is, or what the fix is.

One other point to note, it’s more secure to run rdesktop without specifying the password. In this way, you’ll be prompted for it when you connect.
So, if you run…

rdesktop raspberrypi -u pi -g 90%

…you will be presented with

[Screenshot: the xrdp login prompt]

A Python Program to automate connection

To save us typing in the rdesktop command each time we want to connect to the Pi, we could write a simple bash script to automate our rdesktop command. However, in the circumstances, Python seems a more appropriate medium…

#!/usr/bin/env python

import sys, subprocess

def main():
    # Make sure that we've got a single argument passed in
    # NOTE - the first argument is the name of this program.
    if len(sys.argv) != 2 :
        sys.exit('Usage : %s screen_size as percentage between 25 and 100' % sys.argv[0])

    if is_valid_screen_size(sys.argv[1]) :
        # Pass in the screen size to the function to build the rdesktop command
        command = build_rdp_command(sys.argv[1])
    else :
        sys.exit('Usage : %s screen_size as percentage between 25 and 100' % sys.argv[0])

    try :
        # Run the command...
        status = subprocess.call(command, shell=True)
    except OSError as e :
        print >> sys.stderr, 'Error : ', e
        status = 1
    sys.exit( status)

def is_valid_screen_size( input_size) :
    # Validate the screen size.
    # Return True if it's a valid value

    # Make sure requested size is an integer
    try :
        int( input_size)
    except ValueError :
        return False
    # Now make sure it's not ridiculously small...or over 100
    if int( input_size) < 25 or int(input_size) > 100 :
        return False
    else :
        return True

def build_rdp_command(screen_size):
    # Return the appropriate rdesktop command

    # Initialize "constants" to use in the rdesktop command.
    PI_NAME = 'raspberrypi'
    PI_USER = 'pi'

    command_str = "rdesktop " + PI_NAME + " -u " + PI_USER
    if int( screen_size) == 100 :
        # Full screen
        command_str = command_str + " -f "
    else :
        # Specify the percentage
        command_str = command_str + " -g " + screen_size +"%"
    return command_str    

if __name__ == "__main__" :
    main()

The program is saved as rdp_to_pi.py.
At the time of writing, the default Python version on Mint (17.2) is 2.7.6 ( although you can invoke a Python 3.4.3 interpreter by typing python3 at the prompt).
Therefore, this program is written for Python v2.

The first line of the program tells Linux to use the Python interpreter when this program is executed.

The program then does the following :

  • validates that it’s been passed a sensible argument value on the command line for the screen size percentage
  • builds the appropriate rdesktop command line using “constant” values for the machine name of the Pi and the name of the user to connect as
  • executes the command

To run the program, you first need to set the appropriate file permissions…

chmod u+x rdp_to_pi.py

…and then run it, prefixing the program name with “./” so that the shell finds it in the current directory…

./rdp_to_pi.py 90

Keyboard Mapping Issues

There are some other issues as well, most notably, the Pi seems to have forgotten where it is and has somehow adopted settings for a US keyboard.
If you want to test this and happen to have a UK keyboard, try opening a text editor and typing any of the following :

  • ” – will print @
  • @ – will print “
  • £ – will print #
  • | – will print ???
  • ~ – will print |

I’ve tried various things to fix this, but to no avail.
Despite the fact that both the Pi and the remote machine are configured with a UK keyboard, rdesktop seems to ignore this and insist on using US keyboard mappings.
I suspect that this is something to do with an X-Server configuration setting somewhere but I just can’t figure out where.

You may have more luck using this link as a starting point.

If anyone does have a solution to this, please let me know.

For now though, I’ve achieved my project goals :

  • get the telly back in place before there’s trouble
  • allow me to play with my Raspberry Pi whilst the festive extravaganza plays out on said TV

That should keep me quiet for a bit.


Filed under: Linux, Mint Tagged: python, python main function, python subprocess module, python sys module, python ValueError exception, Raspberry Pi, SSH, xrdp

Stat Wars – using the Force (DBMS_STATS) for Incremental Stats Gathering

Tue, 2015-12-22 06:03

We haven’t been queuing for days to see that film.
Darth Debbie was quite insistent that she really did have better things to do than
queue up outside a cinema behind someone dressed as a Wookie.
Even the potential opportunity of uttering the line “will someone get this walking carpet out of my way ?!”, has not moved her on this point.
All things Star Wars are not to be mentioned in our house at the moment. So, any resemblance to a certain Space Opera in what follows is purely coincidental.

Anyway, a Long Time Ago in a Database far, far away….

It wasn’t easy being a DBA on the Jedi Archives Database.
Strong with the Force they may have been, but the users weren’t particularly patient.
On top of that, there was still some muttering going on about that unfortunate data loss incident with Obi Wan Kenobi and the missing planetary system.

All in all then, when complaints began coming in about the nightly batch overrunning, it was with a sense of dread that the Geeki began to investigate….

It was one of those batch jobs in which data was loaded into a table partition on the Oracle 11g R2 Database via a standard ETL process.
The table was partitioned daily so each load was into a new table partition.
That data was then used immediately by a subsequent report.

This required a call to DBMS_STATS to ensure that the Optimizer stats were up to date.
The problem was that it was taking several hours to gather stats, despite the partition name and the degree of parallelism being specified in the call.

It was at this point that our hard-pressed DBA recalled the wise words of his master – “when all else fails, read the manual you must!”

Incremental Stats Gathering – what the Manual says

Whilst it’s fair to say that there were one or two issues with Incremental Stats when they were first introduced in 11g, these kinks are ironed out by version 11.2.03.

To start with then, this is what the Oracle Documentation has to say on when to gather Manual Statistics :

“For tables that are bulk-loaded, run the statistics-gathering procedures on the tables immediately following the load process.
Preferably, run the procedures as part of the same script or job that is running the bulk load.”

This confirms that the stats gathering step in the batch was appropriate and necessary.

There was also something quite interesting about the option of gathering Incremental Stats :

“An alternative to mandatory full table scans is gathering incremental statistics. When the following criteria are met, the database updates global statistics incrementally by scanning only the partitions that have changed:

  • The INCREMENTAL value for the partitioned table is true.
  • The PUBLISH value for the partitioned table is true.
  • The user specifies AUTO_SAMPLE_SIZE for ESTIMATE_PERCENT and AUTO for GRANULARITY when gathering statistics on the table.

“Gathering table statistics incrementally has the following consequences:

  • The SYSAUX tablespace consumes additional space to maintain global statistics for partitioned tables.
  • If a table uses composite partitioning, then the database only gathers statistics for modified subpartitions. The database does not gather statistics at the subpartition level for unmodified subpartitions. In this way, the database reduces work by skipping unmodified partitions.
  • If a table uses incremental statistics, and if this table has a locally partitioned index, then the database gathers index statistics at the global level and for modified (not unmodified) index partitions. The database does not generate global index statistics from the partition-level index statistics. Rather, the database gathers global index statistics by performing a full index scan.”

In summary, if you specify the appropriate parameters, Oracle will work out what stats to gather and not waste time refreshing statistics on data that has not changed.

Making sure that the target table has INCREMENTAL and PUBLISH set to TRUE is fairly straightforward and can be achieved as simply as :

dbms_stats.set_table_prefs(my_table_owner, my_table, 'INCREMENTAL', 'TRUE');
dbms_stats.set_table_prefs(my_table_owner, my_table, 'PUBLISH', 'TRUE');

… where my_table_owner is the owner and my_table is the name of the table you want to gather stats on.
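
If you want to check that the preferences have taken effect, DBMS_STATS.GET_PREFS should show the current values ( MY_SCHEMA and MY_TABLE are placeholders here) :

select dbms_stats.get_prefs('INCREMENTAL', 'MY_SCHEMA', 'MY_TABLE') as incremental,
    dbms_stats.get_prefs('PUBLISH', 'MY_SCHEMA', 'MY_TABLE') as publish
from dual
/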

The rest should be simple enough for our Jedi DBA. After all, the recommended parameter values for DBMS_STATS to perform Incremental Stats Gathering should be the defaults, right ?
I’m sure someone once said something along the lines of “beware the quick and easy path…”

The Parameter values for DBMS_STATS

The fact is that DBMS_STATS.GATHER_TABLE_STATS has a number of parameters, not all of which are mandatory. However, the defaults used for some of them are not necessarily what is required for Incremental Stats Gathering to take place.

You can find the default values that this procedure uses here.

The recommended settings to enable incremental stats gathering are subtly different :

Parameter Name     Recommended Value
------------------ ----------------------------
estimate_percent   DBMS_STATS.AUTO_SAMPLE_SIZE
method_opt         FOR ALL COLUMNS SIZE AUTO
degree             DBMS_STATS.AUTO_DEGREE
granularity        AUTO
cascade            DBMS_STATS.AUTO_CASCADE
no_invalidate      DBMS_STATS.AUTO_INVALIDATE

It may be that most of the default values match those recommended. However, there is at least one parameter whose default value definitely doesn’t match the recommendation.

The DEGREE parameter, which determines the degree of parallelism to be used in the Stats Gathering operation, is defaulted to NULL.
This may seem strange at first, until you realise that Parallel Stats Gathering is only available in the Enterprise Edition of the Oracle 11g R2 database.

If you’re running a Data Warehouse application, you will almost certainly be on Enterprise Edition.
If you have partitioned tables, Enterprise Edition is a pre-requisite, as Partitioning is an Enterprise Edition option.
If you want to make sure, you can always check by running :

select banner
from v$version
where banner like 'Oracle Database %'
/

The output should be something like :

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 – 64bit Production.

Assuming this is the case, you should be able to specify the recommended DBMS_STATS.AUTO_DEGREE as the value for the DEGREE parameter without fear of any licensing repercussions later on.
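
Putting the recommended values together, a one-off gather using them might look something like this sketch ( placeholder schema and table names again) :

begin
    dbms_stats.gather_table_stats
    (
        ownname          => 'MY_SCHEMA',
        tabname          => 'MY_TABLE',
        estimate_percent => dbms_stats.auto_sample_size,
        method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
        degree           => dbms_stats.auto_degree,
        granularity      => 'AUTO',
        cascade          => dbms_stats.auto_cascade,
        no_invalidate    => dbms_stats.auto_invalidate
    );
end;
/

The package later in this post takes a similar approach, setting the table preferences first and letting DBMS_STATS pick up the remaining defaults from them.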

One other point for consideration is whether or not you want to collect Histograms.
It’s not unheard of in Data Warehouses for bind variable peeking to have been disabled. If your database is set up this way then the optimizer will never look at histograms. Therefore, it seems a bit pointless to collect and store them.

If you want to know if this is the case, you can find out easily enough….

select value
from v$parameter
where name = '_optim_peek_user_binds'
/

If this query returns ‘TRUE’, or does not return any rows, then bind variable peeking is enabled and histograms are required. Otherwise, it is disabled and histograms will not be used by the Optimizer.
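
If histograms are not going to be used, one option ( not the approach taken in the package below, which decides at call time) is to set the table’s METHOD_OPT preference so that subsequent gathers default to a single bucket per column ( placeholder names once more) :

begin
    dbms_stats.set_table_prefs('MY_SCHEMA', 'MY_TABLE', 'METHOD_OPT', 'FOR ALL COLUMNS SIZE 1');
end;
/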

Incremental Stats – the code

Now that we know what the optimal parameter values are likely to be, we can knock up something like this to gather stats on our partitioned tables :

create or replace package incremental_stats
as
	procedure gather_table_stats
	(
		i_owner in user_users.username%type default null,
		i_table in user_tables.table_name%type
	);
end incremental_stats;
/

create or replace package body incremental_stats
as
    -- Private 
	function bind_peeking_enabled_fn
		return boolean
	is
		l_value v$parameter.value%type;
	begin
		select value
		into l_value
		from v$parameter
		where name = '_optim_peek_user_binds';

		if l_value = 'TRUE' then
			return true;
		else
			return false;
		end if;
	exception when no_data_found then
	    -- parameter not set...
		return true;
	end bind_peeking_enabled_fn;
	
	procedure gather_table_stats
	(
		i_owner in user_users.username%type default null,
		i_table in user_tables.table_name%type
	)
	is
	--
	-- Gather table stats using Oracle recommended settings (as at 11G R2).
	-- See http://docs.oracle.com/cd/E11882_01/server.112/e41573/stats.htm#i13546
	-- The one exception to this is method_opt - if we're not using bind-variable peeking at the moment
	-- there's no point in collecting histograms. 
	--
			
		l_method_opt varchar2(30);
	begin
		
		-- Check to see if bind_variable peeking is enabled
		if bind_peeking_enabled_fn then
			l_method_opt := 'FOR ALL COLUMNS SIZE AUTO';
		else
			l_method_opt := 'FOR ALL COLUMNS SIZE 1';
		end if;
		
		-- Ensure that table prefs are set to facilitate incremental stats gathering for partitions
		dbms_stats.set_table_prefs(i_owner, i_table, 'INCREMENTAL', 'TRUE');
		dbms_stats.set_table_prefs(i_owner, i_table, 'PUBLISH', 'TRUE');
		
		--
		-- NOTE - apart from the parameters specified here, all other parameters
		-- for DBMS_STATS.GATHER_TABLE_STATS derive their default value from the existing preferences.
		-- These can be set by a call to DBMS_STATS.SET_TABLE_PREFS. 
		--
		dbms_stats.gather_table_stats
		(
			ownname => i_owner,
			tabname => i_table,
			degree => dbms_stats.auto_degree,
			method_opt => l_method_opt
		);
	end gather_table_stats;
	
end incremental_stats;
/

All we need now is a test case to show that table stats are only gathered on the partition(s) that have changed.

A Simple Test

To start with, we need a partitioned table….

create table star_wars_characters
(
    film_name varchar2(100) not null,
    character_name varchar2(100)
)
partition by list(film_name)
(
    partition originals values ( 'A NEW HOPE', 'THE EMPIRE STRIKES BACK', 'RETURN OF THE JEDI'),
    partition prequels values ('THE PHANTOM MENACE', 'ATTACK OF THE CLONES', 'REVENGE OF THE SITH'),
    partition sequels values ('THE FORCE AWAKENS')
)
/

…with some data already in it…

-- Phantom Menace

insert into star_wars_characters( film_name, character_name)
values('THE PHANTOM MENACE', 'ANAKIN SKYWALKER')
/

insert into star_wars_characters( film_name, character_name)
values('THE PHANTOM MENACE', 'DARTH MAUL')
/

insert into star_wars_characters( film_name, character_name)
values('THE PHANTOM MENACE', 'OBI WAN KENOBI')
/

insert into star_wars_characters( film_name, character_name)
values('THE PHANTOM MENACE', 'R2-D2')
/

insert into star_wars_characters( film_name, character_name)
values('THE PHANTOM MENACE', 'C3PO')
/

insert into star_wars_characters( film_name, character_name)
values('THE PHANTOM MENACE', 'PADME AMIDALLA')
/

insert into star_wars_characters( film_name, character_name)
values('THE PHANTOM MENACE', 'SENATOR PLAPATINE')
/

insert into star_wars_characters( film_name, character_name)
values('THE PHANTOM MENACE', 'YODA')
/

insert into star_wars_characters( film_name, character_name)
values('THE PHANTOM MENACE', 'MACE WINDU')
/

-- Attack of the Clones

insert into star_wars_characters( film_name, character_name)
values('ATTACK OF THE CLONES', 'ANAKIN SKYWALKER')
/

insert into star_wars_characters( film_name, character_name)
values('ATTACK OF THE CLONES', 'DARTH MAUL')
/

insert into star_wars_characters( film_name, character_name)
values('ATTACK OF THE CLONES', 'OBI WAN KENOBI')
/

insert into star_wars_characters( film_name, character_name)
values('ATTACK OF THE CLONES', 'R2-D2')
/

insert into star_wars_characters( film_name, character_name)
values('ATTACK OF THE CLONES', 'C3PO')
/

insert into star_wars_characters( film_name, character_name)
values('ATTACK OF THE CLONES', 'PADME AMIDALLA')
/

insert into star_wars_characters( film_name, character_name)
values('ATTACK OF THE CLONES', 'SENATOR PLAPATINE')
/

insert into star_wars_characters( film_name, character_name)
values('ATTACK OF THE CLONES', 'YODA')
/

insert into star_wars_characters( film_name, character_name)
values('ATTACK OF THE CLONES', 'MACE WINDU')
/

insert into star_wars_characters( film_name, character_name)
values('ATTACK OF THE CLONES', 'COUNT DOOKU')
/

insert into star_wars_characters( film_name, character_name)
values('ATTACK OF THE CLONES', 'JANGO FETT')
/

-- Revenge of the Sith

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'ANAKIN SKYWALKER')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'DARTH MAUL')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'OBI WAN KENOBI')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'R2-D2')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'C3PO')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'PADME AMIDALLA')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'SENATOR PLAPATINE')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'YODA')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'MACE WINDU')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'COUNT DOOKU')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'JANGO FETT')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'ANAKIN SKYWALKER')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'DARTH MAUL')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'OBI WAN KENOBI')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'R2-D2')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'C3PO')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'PADME AMIDALLA')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'SENATOR PLAPATINE')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'YODA')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'MACE WINDU')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'COUNT DOOKU')
/

insert into star_wars_characters( film_name, character_name)
values('REVENGE OF THE SITH', 'SENATOR ORGANA')
/

-- A New Hope
insert into star_wars_characters( film_name, character_name)
values('A NEW HOPE', 'LUKE SKYWALKER')
/

insert into star_wars_characters( film_name, character_name)
values('A NEW HOPE', 'DARTH VADER')
/

insert into star_wars_characters( film_name, character_name)
values('A NEW HOPE', 'OBI WAN KENOBI')
/

insert into star_wars_characters( film_name, character_name)
values('A NEW HOPE', 'R2-D2')
/

insert into star_wars_characters( film_name, character_name)
values('A NEW HOPE', 'C3PO')
/

insert into star_wars_characters( film_name, character_name)
values('A NEW HOPE', 'HAN SOLO')
/

insert into star_wars_characters( film_name, character_name)
values('A NEW HOPE', 'PRINCESS LEIA')
/

insert into star_wars_characters( film_name, character_name)
values('A NEW HOPE', 'GRAND MOFF TARKIN')
/

-- Empire Strikes Back
insert into star_wars_characters( film_name, character_name)
values('THE EMPIRE STRIKES BACK', 'LUKE SKYWALKER')
/

insert into star_wars_characters( film_name, character_name)
values('THE EMPIRE STRIKES BACK', 'DARTH VADER')
/

insert into star_wars_characters( film_name, character_name)
values('THE EMPIRE STRIKES BACK', 'OBI WAN KENOBI')
/

insert into star_wars_characters( film_name, character_name)
values('THE EMPIRE STRIKES BACK', 'R2-D2')
/

insert into star_wars_characters( film_name, character_name)
values('THE EMPIRE STRIKES BACK', 'C3PO')
/

insert into star_wars_characters( film_name, character_name)
values('THE EMPIRE STRIKES BACK', 'HAN SOLO')
/

insert into star_wars_characters( film_name, character_name)
values('THE EMPIRE STRIKES BACK', 'PRINCESS LEIA')
/

insert into star_wars_characters( film_name, character_name)
values('THE EMPIRE STRIKES BACK', 'GRAND MOFF TARKIN')
/

insert into star_wars_characters( film_name, character_name)
values('THE EMPIRE STRIKES BACK', 'LANDO CALRISSIAN')
/

-- Return of the Jedi
insert into star_wars_characters( film_name, character_name)
values('RETURN OF THE JEDI', 'LUKE SKYWALKER')
/

insert into star_wars_characters( film_name, character_name)
values('RETURN OF THE JEDI', 'DARTH VADER')
/

insert into star_wars_characters( film_name, character_name)
values('RETURN OF THE JEDI', 'OBI WAN KENOBI')
/

insert into star_wars_characters( film_name, character_name)
values('RETURN OF THE JEDI', 'R2-D2')
/

insert into star_wars_characters( film_name, character_name)
values('RETURN OF THE JEDI', 'C3PO')
/

insert into star_wars_characters( film_name, character_name)
values('RETURN OF THE JEDI', 'HAN SOLO')
/

insert into star_wars_characters( film_name, character_name)
values('RETURN OF THE JEDI', 'PRINCESS LEIA')
/

insert into star_wars_characters( film_name, character_name)
values('RETURN OF THE JEDI', 'GRAND MOFF TARKIN')
/

insert into star_wars_characters( film_name, character_name)
values('RETURN OF THE JEDI', 'LANDO CALRISSIAN')
/

insert into star_wars_characters( film_name, character_name)
values('RETURN OF THE JEDI', 'ADMIRAL ACKBAR')
/ 

-- Force Awakens

insert into star_wars_characters( film_name, character_name)
values('THE FORCE AWAKENS', 'KYLO RENN')
/

insert into star_wars_characters( film_name, character_name)
values('THE FORCE AWAKENS', 'FINN')
/

insert into star_wars_characters( film_name, character_name)
values('THE FORCE AWAKENS', 'REY')
/

insert into star_wars_characters( film_name, character_name)
values('THE FORCE AWAKENS', 'BB8')
/

insert into star_wars_characters( film_name, character_name)
values('THE FORCE AWAKENS', 'HAN SOLO')
/

insert into star_wars_characters( film_name, character_name)
values('THE FORCE AWAKENS', 'PRINCESS LEIA')
/

insert into star_wars_characters( film_name, character_name)
values('THE FORCE AWAKENS', 'LUKE SKYWALKER')
/

insert into star_wars_characters( film_name, character_name)
values('THE FORCE AWAKENS', 'CHEWBACCA')
/

commit;

…and gather stats.

begin
    dbms_stats.gather_table_stats('MIKE', 'STAR_WARS_CHARACTERS');
end;
/

We can see when the stats were last gathered on each partition…

select partition_name, to_char(last_analyzed, 'DD-MON-YYYY HH24:MI')
from user_tab_partitions
where table_name = 'STAR_WARS_CHARACTERS'
/

PARTITION_NAME	     TO_CHAR(LAST_ANALYZED,'DD-
-------------------- --------------------------
ORIGINALS	     19-DEC-2015 21:36
PREQUELS	     19-DEC-2015 21:36
SEQUELS 	     19-DEC-2015 21:36

Now, if we insert data into a single partition…

insert into star_wars_characters( film_name, character_name)
values('THE FORCE AWAKENS', 'CAPTAIN PHASMA')
/

commit;

… and run our stats gathering package…

begin
    incremental_stats.gather_table_stats('MIKE', 'STAR_WARS_CHARACTERS');
end;
/

… we can see that stats have only been gathered on the partitions that have changed…

select partition_name, to_char(last_analyzed, 'DD-MON-YYYY HH24:MI')
from user_tab_partitions
where table_name = 'STAR_WARS_CHARACTERS'
/

PARTITION_NAME	     TO_CHAR(LAST_ANALYZED,'DD-
-------------------- --------------------------
ORIGINALS	     19-DEC-2015 21:36
PREQUELS	     19-DEC-2015 21:36
SEQUELS 	     19-DEC-2015 21:44

So, is this the panacea for all of your Stats Gathering woes ?
The Geeki reckon that they’ve observed some significant improvements in Stats gathering execution times. One example was where a job went from running in over 4 hours to under 18 minutes.
They’ve also noticed that there is less of a tendency for batch jobs to serialize because some stats job is hogging all of the parallel servers on the database.

That said, you need to remember that this is a Galaxy far, far away and there’s no knowing how the behaviour of Midi-Chlorians may affect runtime.
The best thing to do would be to test it on your application with your partitioned tables ( which will probably be a bit bigger than the one in my test case).

Planning our trip to finally see the film has hit a bit of a snag. When I suggested that we might go in fancy dress, Deb told me that she’d rather kiss a Wookie.
Sometimes I suspect that, despite her protestations to the contrary, she’s more of a fan than she lets on.


Filed under: Oracle, PL/SQL, SQL Tagged: dbms_stats, dbms_stats.auto_stats_degree, dbms_stats.gather_table_stats, histograms, incremental stats, _optim_peek_user_binds

A False Sense of Security, or how Database Developers can save £35 million

Sun, 2015-11-29 12:00

Deb’s been recovering from an operation recently. During her convalescence, I have been designated as her nurse and carer.
Careful planning was required prior to the op. She promised not to over do things, and I promised to attend to her every need.
“I should have a little bell so I can ring when I need you”, she opined. “After all, that’s what they do in Downton Abbey”.

We don’t have a bell. Fortunately Deb did have something that would do the job. Last Christmas, a thoughtful relative had given her a toy gun, a replica of the sort that was in the Despicable Me films.

Yes, when my presence was required, Deb simply had to fire the Fart Gun.

In order to attempt to retain a little of the Downton aura that she had been so keen to capture, I did make a point of saying “You rang M’Lady”, in my best Parker-from-Thunderbirds voice whenever I was summoned by the sound of electronic flatulence.

It seems to have worked out OK in the end. Deb is now up and about, having survived a week of my cooking.

When not attending to my nursing duties, I did have the chance to catch up with the unfolding story about the latest TalkTalk cyber attack.
Things that, in retrospect, are quite obvious were rather less clear at the time they were reported.

To begin with, the report was of a Distributed Denial of Service (DDoS) attack which had resulted in a loss of customer data.
Was this some fiendishly clever variation on the theme of a DDoS attack ? As far as I knew, such an exploit was designed solely to take down a site, not to obtain data.
Further “clarification” followed. It was reported that there had also been a “Sequential Attack”. I’ve never heard of one of those.
Just before I ran off to do some research, this was finally translated into techie – it was actually a SQL Injection (SQLi) attack.

Later, it was reported that TalkTalk have estimated the cost of this attack at up to £35 million.

Whilst it’s easy to laugh at the efforts of a CEO struggling with the terminology around this topic, it’s worth bearing in mind that the responsibility, ultimately, lies within the IT Department. But where ? with the Network Admins, Architects, the DBAs ?

SQLi has been around for longer than some of the people who are now making use of it.

As a Database Developer, you may well be feeling confident about security at this point.
After all, your code is sitting on the database server. Access to that server is probably restricted to a White List of machines, which will include the Application Server.
If you’re working with a public facing web application, the Application Server will be sitting behind a firewall.
Additionally, it may well be the case that the Application has been designed to implement the Data Access Layer pattern. Any database calls from the Controller layer are made to your PL/SQL packages which have been written to the Transactional API (XAPI) pattern. So, you don’t even need to wonder whether your Web Developers have used prepared statement calls.
On top of that, the application only connects to the database as a minimally privileged database user. The Application Owning schema itself is actually locked.

What we’re going to cover here is a Database Developer’s eye view of how such an application might look.
We’ll also look at just why these multiple layers of security provide no protection whatsoever against a SQL Injection attack.

Once we’ve compromised the application and caused a bit of havoc, we’ll look at how we, as Database Developers, can ensure that our code is resistant to this kind of attack.

The Application

Our application is running on an Oracle 11g database. The front-end and mid-tier layers can use whatever technology takes your fancy.
For the purposes of the demonstrations that follow, I’ll be using SQL*Plus to simulate calls from the application into the PL/SQL packages that comprise the DAL layer.
I’m not a networking expert so won’t embarrass myself by going into detail about the Network Security mechanisms that might be in place for this application.
For the purposes of the examples that follow, we’ll assume that our Network Admins know what they’re doing and that Network Security is adequate for our application.

In terms of how our application looks from the view of the Database Developers…

DAL and XAPI Patterns

In an n-tier application using the Oracle RDBMS, the Data Access Layer (DAL) usually interacts with the database by means of calls to PL/SQL packages.
These packages are normally written to conform to the Transaction API (XAPI) pattern.

In other words, the mid-tier code contains no SQL statements. Instead, it simply issues calls to the PL/SQL packages, where the SQL is encapsulated.

The PL/SQL packages themselves implement the required transactions with the database.

Functional Requirements

Using the HR schema as an example, let’s see what the XAPI packages to handle Department information might look like.

There are some assumptions that I’ve made in writing this code.

For DML operations on a department, the user will select the relevant department from a drop-down list. The value passed into the package will be the department_id.
The same applies to Locations and Managers.

A user can search for a department using any (or none) of the following criteria :

  • Department Name (translated to Department ID)
  • Manager Name (translated to Manager ID)
  • Location (translated to location_id)
  • Country Name

The MANAGE_DEPARTMENTS package

Based on these assumptions, we may have a package that looks like this :

create or replace package manage_departments
as

    procedure add_department
    ( 
        i_department_name departments.department_name%type,
        i_location_id departments.location_id%type default null,
        i_manager_id departments.manager_id%type default null
    ); 
        
    procedure delete_department( i_department_id departments.department_id%type);

end manage_departments;
/

create or replace package body manage_departments
as

    procedure add_department
    ( 
        i_department_name departments.department_name%type,
        i_location_id departments.location_id%type default null,
        i_manager_id departments.manager_id%type default null
    )
    is
    --
    -- Create a new Department.
    -- The location and manager are optional in the underlying table so the
    -- business rule is that a department may be created without either being specified
    --
    begin
        if i_department_name is null then
            raise_application_error(-20000, 'You must provide a name for the new Department');
        end if;
        
        insert into departments( department_id, department_name, location_id, manager_id)
        values( departments_seq.nextval, i_department_name, i_location_id, i_manager_id);
    end add_department;
      
    procedure delete_department( i_department_id departments.department_id%type)
    is
    --
    -- Remove a department. Once again, the Referential Integrity in the data model
    -- should prevent the removal of a department that still has Employees working in it.
    --
    begin
        delete from departments
        where department_id = i_department_id;
    end delete_department;
end manage_departments;
/

We’ll come onto the routine for implementing the Department search shortly.

First though, it’s worth noting that the way input parameters are used in the package’s procedures ensures that they are bound at runtime…

begin
    manage_departments.add_department('My Test Department');
end;
/

PL/SQL procedure successfully completed.

commit;

Commit complete.

select department_id
from departments
where department_name = 'My Test Department'
/

DEPARTMENT_ID
-------------
290

If we now look at the statement that was actually run, we can see that the variables have been bound into the query :

select sql_text
from v$sqlarea
where lower(sql_text) like 'insert into departments%'
/

SQL_TEXT
--------------------------------------------------------------------------------------------------------------------------------------------
INSERT INTO DEPARTMENTS( DEPARTMENT_ID, DEPARTMENT_NAME, LOCATION_ID, MANAGER_ID) VALUES( DEPARTMENTS_SEQ.NEXTVAL, :B3 , :B2 , :B1 )

The principle of Least Privilege

There are still a fair number of applications which implement these patterns yet which also initiate database connections as the Application Owner schema.
Let’s have a look at exactly what privileges the Application Owner has.

select privilege
from session_privs
/

PRIVILEGE
----------------------------------------
CREATE SESSION
ALTER SESSION
UNLIMITED TABLESPACE
CREATE TABLE
CREATE CLUSTER
CREATE SYNONYM
CREATE VIEW
CREATE SEQUENCE
CREATE DATABASE LINK
CREATE PROCEDURE
CREATE TRIGGER
CREATE TYPE
CREATE OPERATOR
CREATE INDEXTYPE

14 rows selected.


That’s a fair amount of stuff, without even looking at what object privileges may also be granted.
It’s not unheard of for Application Owner accounts to be very highly privileged, even to the point of having CREATE USER.
Just as relevant is the fact that this user automatically has full DML privileges on the tables that it owns – i.e. the tables that comprise our application.

By contrast, having implemented the XAPI pattern, the only privileges required to run the application are :

  • CREATE SESSION
  • Execute permissions on the XAPI packages

So, instead of connecting as HR, we can use a minimally privileged application user account.
Such an account could be created (connecting as a user with CREATE USER privs) using some variation of…

create user hr_app_user identified by some_password
/

User created.

grant create session to hr_app_user
/

grant execute on hr.manage_departments to hr_app_user
/

-- Execute privs for other packages could go here
-- Alternatively, they could all be assigned to a role
-- which is then granted to this user

There are a couple of points worth noting here.
In reality you’d probably create this user to use some external authentication (e.g. Kerberos) to avoid the need to supply the password when connecting to the database.
It’s also worth bearing in mind that, even though it’s only been explicitly granted these privileges, the user will also inherit any privileges granted to PUBLIC.
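
If you’re curious as to exactly what PUBLIC brings to the party then, connected as a user with access to the DBA_ views, something like the following should list its system and object grants ( the results will vary from database to database) :

select privilege
from dba_sys_privs
where grantee = 'PUBLIC'
/

select owner, table_name, privilege
from dba_tab_privs
where grantee = 'PUBLIC'
order by owner, table_name
/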

Now that the application connects as HR_APP_USER, we can even lock the HR account and the application will continue to function…

alter user hr account lock
/

User altered.

select account_status
from dba_users
where username = 'HR'
/

ACCOUNT_STATUS
--------------------------------
LOCKED

Just to prove that everything still works, let’s remove the department we just created. Connected as hr_app_user…

select user
from dual
/

USER
------------------------------
HR_APP_USER

begin
    hr.manage_departments.delete_department(290);
end;
/

PL/SQL procedure successfully completed.

commit;

Commit complete.

We cannot directly query the departments table as the HR_APP_USER, but if we connect as a user with SELECT ANY TABLE, we can see that the department has indeed been deleted…

select department_name
from hr.departments
where department_id = 290
/

no rows selected

At this point it looks like our application is pretty secure. We’re not even using synonyms – another potential attack vector.

Now let’s introduce our Department Search routine…

NOTE – as the HR account is now locked, I’ve created the following packages by connecting as a user with CREATE ANY PROCEDURE and then issuing the command…

alter session set current_schema = hr
/
create or replace package search_queries
as
    function get_departments
    (
        i_department_id departments.department_id%type default null,
        i_manager_id departments.manager_id%type default null,
        i_location_id departments.location_id%type default null,
        i_country_name countries.country_name%type default null
    )
        return sys_refcursor;
end search_queries;
/

create or replace package body search_queries
as
    function get_departments
    (
        i_department_id departments.department_id%type default null,
        i_manager_id departments.manager_id%type default null,
        i_location_id departments.location_id%type default null,
        i_country_name countries.country_name%type default null
    )
        return sys_refcursor
    is
    -- Either get a list of all departments or filter by one or more criteria - i.e. :
    -- Department ( will return details of one department)
    -- Manager ( all departments with this manager)
    -- Location (all departments in this location)
    -- Country (all departments in a given country) 
    --

        l_stmnt varchar2(4000);
        l_where varchar2(4000);
        l_rc sys_refcursor;
    begin
        l_stmnt := 
            'select dept.department_name, coun.country_name, loc.city, ' 
            ||q'[ emp.first_name||' '||emp.last_name as manager_name ]'
            ||' from departments dept '
            ||' inner join locations loc '
            ||'     on dept.location_id = loc.location_id '
            ||' inner join countries coun '
            ||' on loc.country_id = coun.country_id '
            ||' left outer join employees emp '
            ||'     on dept.manager_id = emp.employee_id '
            ||' where 1 = 1 ';
        
        if i_department_id is not null then
            l_where := l_where||' and dept.department_id = '||i_department_id;
        end if;
        
        if i_manager_id is not null then
            l_where := l_where||' and dept.manager_id = '||i_manager_id ;
        end if; 
        
        if i_location_id is not null then
            l_where := l_where||' and dept.location_id = '||i_location_id;
        end if;
        
        if i_country_name is not null then
              l_where := l_where||q'[ and coun.country_name = ']'||i_country_name||q'[']';
        end if;
        
        if nvl(length(l_where), 0) > 0 then
            l_stmnt := l_stmnt||l_where;
        end if;
        open l_rc for l_stmnt;
        return l_rc;       
    end get_departments;
end search_queries;
/

This is an extremely simplified example of a search routine. However, it’s still not obvious exactly what it’s meant to be doing at first glance.
Once you do figure out what’s going on, you can see that the variables are being concatenated into the string that builds the query which is ultimately executed.
That’s OK though, isn’t it ? After all, this is PL/SQL, where binding happens automagically.
Hmmm, let’s put our Black Hat on for a moment and test that assumption…

Using the Department Search to find out what the boss earns

To keep things simple, I’ll be using SQL*Plus to simulate the calls from the Application front-end.
So, connected as hr_app_user…


select hr.search_queries.get_departments(10) from dual;

HR.SEARCH_QUERIES.GE
--------------------
CURSOR STATEMENT : 1

CURSOR STATEMENT : 1

DEPARTMENT_NAME COUNTRY_NAME CITY MANAGER_NAME
------------------------------ ------------------------------ -------------------- --------------------
Administration United States of America Seattle Jennifer Whalen

All seems to work as expected.

The same applies if we specify a country…


select hr.search_queries.get_departments(null, null, null, 'Canada') from dual;

HR.SEARCH_QUERIES.GE
--------------------
CURSOR STATEMENT : 1

CURSOR STATEMENT : 1

DEPARTMENT_NAME COUNTRY_NAME CITY MANAGER_NAME
------------------------------ ------------------------------ -------------------- --------------------
Marketing Canada Toronto Michael Hartstein

However, if we now get a bit creative, the result certainly isn’t what the developer intended…

select hr.search_queries.get_departments(null, null, null,
    'A'||chr(39)||' union select table_name, null, null, null from user_tables --')
from dual;

HR.SEARCH_QUERIES.GE
--------------------
CURSOR STATEMENT : 1

CURSOR STATEMENT : 1

DEPARTMENT_NAME COUNTRY_NAME CITY MANAGER_NAME
------------------------------ ------------------------------ -------------------- --------------------
COUNTRIES
DEPARTMENTS
EMPLOYEES
JOBS
JOB_HISTORY
LOCATIONS
NON_SALES_EMPS
REGIONS

8 rows selected.

Because the function is concatenating user input into the query, we can use CHR(39) (the single-quote character) to terminate the string the function is expecting, which allows us to concatenate our own code after it.
The “--” at the end of our input string simply ensures that anything in the original statement after our input is treated as a comment.
Remember, there are no carriage returns in our statement as it’s built in the package.

The actual query being executed ( formatted to make it more readable) is :

select dept.department_name, coun.country_name, loc.city,
    emp.first_name||' '||emp.last_name as manager_name
from departments dept
inner join locations loc
    on dept.location_id = loc.location_id
inner join countries coun
    on loc.country_id = coun.country_id
left outer join employees emp
    on dept.manager_id = emp.employee_id
where 1 = 1
and coun.country_name = 'A'
union select table_name, null, null, null
from user_tables

Whilst HR_APP_USER itself does not own any tables, the query is being executed from a definer rights package owned by HR. Therefore we get a list of the tables that are owned by HR – the Application Tables.

Now, I wonder if there’s anything of interest in the EMPLOYEES table …

select hr.search_queries.get_departments(null, null, null,
    'A'||CHR(39)||q'[ union select table_name, column_name, null, null from user_tab_cols where table_name = 'EMPLOYEES' --]') 
from dual;

HR.SEARCH_QUERIES.GE
--------------------
CURSOR STATEMENT : 1

CURSOR STATEMENT : 1

DEPARTMENT_NAME COUNTRY_NAME CITY MANAGER_NAME
-------------------- -------------------- -------------------- --------------------
EMPLOYEES COMMISSION_PCT
EMPLOYEES DEPARTMENT_ID
EMPLOYEES EMAIL
EMPLOYEES EMPLOYEE_ID
EMPLOYEES FIRST_NAME
EMPLOYEES HIRE_DATE
EMPLOYEES JOB_ID
EMPLOYEES LAST_NAME
EMPLOYEES MANAGER_ID
EMPLOYEES PHONE_NUMBER
EMPLOYEES SALARY

11 rows selected.

Now to make the Annual Review process in the company a bit more interesting, we can extract the name, salary and commission rate for every employee, together with their e-mail addresses…

select hr.search_queries.get_departments(null, null, null,
    'A'||CHR(39)|| q'[union select first_name||' '||last_name, email, to_char(salary), to_char(commission_pct) from employees --]')
from dual;

HR.SEARCH_QUERIES.GE
--------------------
CURSOR STATEMENT : 1

CURSOR STATEMENT : 1

DEPARTMENT_NAME COUNTRY_NAME CITY MANAGER_NAME
-------------------- -------------------- -------------------- --------------------
Adam Fripp AFRIPP 8200
Alana Walsh AWALSH 3100
Alberto Errazuriz AERRAZUR 12000 .3
Alexander Hunold AHUNOLD 9000
Alexander Khoo AKHOO 3100
Alexis Bull ABULL 4100
Allan McEwen AMCEWEN 9000 .35
Alyssa Hutton AHUTTON 8800 .25
...

107 rows selected.

…and send this information to everyone on the list.

Notwithstanding the considerable security precautions in place around this application, we’ve demonstrated how easy it is to compromise it using SQLi.
The precautions we have taken are no defence because, as far as the Application is concerned, it’s operating normally.

This demonstrates fairly graphically that, whilst security is the responsibility of Network Admins, Architects, and DBAs, it’s also the responsibility of us Database Developers.
So, let’s see how we can protect against this attack…

Binding the Variables in the Function

The immediate problem we’re faced with here is that there are multiple combinations of values that a user may choose to call the function with.
The USING clause of EXECUTE IMMEDIATE (or OPEN…FOR) is not dynamic – we have to know how many variables we’re binding into the query at the time we write it.
Therefore, if we’re going to persist with the Dynamic SQL approach we either need to

  • code for all possible parameter combinations (14 in this case)
  • use the DBMS_SQL package (see the sketch below)
  • cheat
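
For reference, the DBMS_SQL route does allow you to bind only the placeholders that actually appear in the finished statement. This is just a rough sketch, cut down to two of the four search criteria and not part of the original application :

create or replace function get_departments_dbms_sql
(
    i_department_id in departments.department_id%type default null,
    i_country_name in countries.country_name%type default null
)
    return sys_refcursor
is
    l_stmnt varchar2(4000);
    l_cur integer;
    l_rows integer;
    l_rc sys_refcursor;
begin
    l_stmnt := 'select dept.department_name, coun.country_name '
        ||' from departments dept '
        ||' inner join locations loc on dept.location_id = loc.location_id '
        ||' inner join countries coun on loc.country_id = coun.country_id '
        ||' where 1 = 1 ';

    -- only append a predicate for the criteria that were actually supplied...
    if i_department_id is not null then
        l_stmnt := l_stmnt||' and dept.department_id = :dept_id';
    end if;
    if i_country_name is not null then
        l_stmnt := l_stmnt||' and coun.country_name = :country';
    end if;

    l_cur := dbms_sql.open_cursor;
    dbms_sql.parse(l_cur, l_stmnt, dbms_sql.native);

    -- ...and only bind the placeholders that made it into the statement
    if i_department_id is not null then
        dbms_sql.bind_variable(l_cur, ':dept_id', i_department_id);
    end if;
    if i_country_name is not null then
        dbms_sql.bind_variable(l_cur, ':country', i_country_name);
    end if;

    l_rows := dbms_sql.execute(l_cur);
    -- hand the executed cursor back as a ref cursor for the caller to fetch from
    l_rc := dbms_sql.to_refcursor(l_cur);
    return l_rc;
end get_departments_dbms_sql;
/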

Let’s try the last one of those…

The Tom Kyte Solution

Using the technique in this article by Mr Kyte, we can just bind all of the variables into the query, irrespective of whether we’ve been passed a value for them :

create or replace package body search_queries
as
    function get_departments
    (
        i_department_id departments.department_id%type default null,
        i_manager_id departments.manager_id%type default null,
        i_location_id departments.location_id%type default null,
        i_country_name countries.country_name%type default null
    )
        return sys_refcursor
    is
    -- Either get a list of all departments or filter by one or more criteria - i.e. :
    -- Department ( will return details of one department)
    -- Manager ( all departments with this manager)
    -- Location (all departments in this location)
    -- Country (all departments in a given country) 
    --

        l_stmnt varchar2(4000);
        l_where varchar2(4000);
        l_rc sys_refcursor;
    begin
        l_stmnt := 
            'select dept.department_name, coun.country_name, loc.city, ' 
            ||q'[ emp.first_name||' '||emp.last_name as manager_name ]'
            ||' from departments dept '
            ||' inner join locations loc '
            ||'     on dept.location_id = loc.location_id '
            ||' inner join countries coun '
            ||' on loc.country_id = coun.country_id '
            ||' left outer join employees emp '
            ||'     on dept.manager_id = emp.employee_id '
            ||' where 1 = 1 ';
        
        if i_department_id is not null then
            l_where := l_where||' and dept.department_id = :dept_id';
        else
            l_where := l_where||' and ( 1 = 1 or :dept_id is null)';
        end if;
        
        if i_manager_id is not null then
            l_where := l_where||' and dept.manager_id = :mgr_id' ;
        else
            l_where := l_where||' and (1 = 1 or :mgr_id is null)'; 
        end if; 
        if i_location_id is not null then
            l_where := l_where||' and dept.location_id = :loc_id';
        else
            l_where := l_where||' and ( 1 = 1 or :loc_id is null)';
        end if;
        
        if i_country_name is not null then
            l_where := l_where||' and coun.country_name = :country';
        else
            l_where := l_where||' and ( 1 = 1 or :country is null)';
        end if;
        l_stmnt := l_stmnt||l_where;

        open l_rc for l_stmnt using i_department_id, i_manager_id, i_location_id, i_country_name;
        return l_rc;
    end get_departments;            

end search_queries;
/

If we now replace our package body with this, we can see that it works in the same way as before…

select hr.search_queries.get_departments(null, null, null, 'Canada') from dual;

HR.SEARCH_QUERIES.GE
--------------------
CURSOR STATEMENT : 1

CURSOR STATEMENT : 1

DEPARTMENT_NAME COUNTRY_NAME CITY MANAGER_NAME
-------------------- -------------------- -------------------- ------------------------------
Marketing Canada Toronto Michael Hartstein

…except when you try to inject it…

select hr.search_queries.get_departments(null, null, null,
    'A'||chr(39)||' union select table_name, null, null, null from user_tables --')
from dual;

HR.SEARCH_QUERIES.GE
--------------------
CURSOR STATEMENT : 1

CURSOR STATEMENT : 1

no rows selected

The query that it’s executing is ( once again formatted for readability) :

select dept.department_name, coun.country_name, loc.city,
    emp.first_name||' '||emp.last_name as manager_name
from departments dept
inner join locations loc
    on dept.location_id = loc.location_id
inner join countries coun
    on loc.country_id = coun.country_id
left outer join employees emp
    on dept.manager_id = emp.employee_id
where 1 = 1
and ( 1 = 1 or :dept_id is null)
and (1 = 1 or :mgr_id is null)
and ( 1 = 1 or :loc_id is null)
and coun.country_name = :country

Yes, instead of simply concatenating our parameter values into the statement, it binds the parameters into the query.

At this point, a thought occurs. How is the query that is executed so different from …

The Anti-Kyte solution

Sometimes you have to use dynamic SQL. Quite often however, something like this will fit the bill…

create or replace package body search_queries
as
    function get_departments
    (
        i_department_id departments.department_id%type default null,
        i_manager_id departments.manager_id%type default null,
        i_location_id departments.location_id%type default null,
        i_country_name countries.country_name%type default null
    )
        return sys_refcursor
    is
    -- Either get a list of all departments or filter by one or more criteria - i.e. :
    -- Department ( will return details of one department)
    -- Manager ( all departments with this manager)
    -- Location (all departments in this location)
    -- Country (all departments in a given country) 
    --

        l_rc sys_refcursor;
    begin
        open l_rc for
            select dept.department_name, coun.country_name, loc.city, 
                emp.first_name||' '||emp.last_name as manager_name
            from departments dept
            inner join locations loc
                on dept.location_id = loc.location_id
            inner join countries coun
                on loc.country_id = coun.country_id
            left outer join employees emp
            on dept.manager_id = emp.employee_id
            where dept.department_id = nvl(i_department_id, dept.department_id)
            and dept.manager_id = nvl( i_manager_id, dept.manager_id)
            and dept.location_id = nvl( i_location_id, dept.location_id)
            and coun.country_name = nvl( i_country_name, coun.country_name);
        
        return l_rc;
    end get_departments;            
end search_queries;
/

Functionally, this works in exactly the same way. However, it’s much easier to see what’s going on.

OK, I know that this is a fairly simple example and once you start getting into some of the more complex combinations of possible predicates, Dynamic SQL may be the only practical solution.
You may also be wondering about the relative performance of these queries. I’m not going to try to draw any conclusions from the tiny data set contained in this application. However, it’s worth running some tests on your application with your data, before you determine the relative merits of these approaches in terms of performance.

I think the main point here is that, whilst there are times when only Dynamic SQL will do, a static SQL approach is usually preferable.

Changing the Culture – how to stop this happening in the first place

As we’ve seen, SQLi has been around a long time. Despite this, this particular vulnerability remains widespread. Whereas it used to take a Gru-like evil genius, slaving away at the keyboard, to run this kind of attack, nowadays there are kits available to do it automatically. Some of them even have user-friendly GUIs.
All Gru requires now is for a Minion or two to stop playing with their Fart guns and click a few buttons.
We can also conclude that, unless the Database Developers do their bit, any other security precautions are rendered ineffective in protecting data, however thorough they may be.

As Database Developers, we need to accept that our code must fulfill four main criteria. It must :

  • Implement the required functionality
  • Be efficient in terms of performance
  • Be maintainable for ongoing production support
  • …and be secure

As well as functional tests, performance tests and peer reviews, the code should undergo penetration tests as part of the development cycle.
There are resources available out there to help with writing and executing such tests.
Red Database Security have a SQL Injection Cheat Sheet available.
If it comes to it, why not obtain one of the aforementioned SQLi tools that are available ? You can search on Google, there are plenty around, although sqlmap might be a good place to start.

In addition to this, care should be taken when using Dynamic SQL for DML statements.
Part of any Peer Review could be to check for such statements and make sure that binding is always taking place.
I would go further. I would suggest that any use of dynamic SQL needs to be justified to the satisfaction of the team as a whole.
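
As a starting point for such a review, searching the data dictionary will flag up the more obvious candidates. Something like this ( assuming the application owner is HR) picks up Native Dynamic SQL statements; OPEN…FOR and DBMS_SQL usages would need similar searches :

select name, type, line, text
from dba_source
where owner = 'HR'
and upper(text) like '%EXECUTE IMMEDIATE%'
order by name, type, line
/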

Such steps are unlikely to be expensive in terms of either time or extra staff.
Any expense is going to be peanuts (or possibly bananas), compared to the potential cost of finding your customer details for sale somewhere in the Deep Web.


Filed under: Oracle, PL/SQL, SQL Tagged: bind variables, binding an unkown number of variables into a using clause, DAL pattern, Data Access Layer, defence against sql injection, execute immediate, locking a user account, Native Dynamic SQL, SQL Injection, SQLi, Transaction API, XAPI pattern

Gits and Giggles – Getting onto Github from Mint

Sun, 2015-11-08 08:36

“Github ?” said Deb, “sounds like a chat room for grumpy old men. You should fit right in !”
To be fair, neither of us were in a particularly good mood at the time.
Deb had just been made to sit through the Rugby World Cup Final whilst my emergency backup nationality had finally borne fruit.
All I said to her was that it’s nice to be able to support a real country rather than a mere principality, like Wales. Honestly, some people are so touchy.

For my part, I had just discovered that Github, based on the Git source control system written by Linus Torvalds himself, has integrated clients for Windows and Mac, but not for Linux.

No matter. If you want to interact with Github, you’ll need to have Git installed on your client machine anyway and, mine being Linux, there are a number of GUIs available for Git.

Aside from the Git documentation itself, which is extensive, there are a number of excellent guides to both Git and Github available.
Rather than re-hashing these – although I will link some of them – I’m going to look at things from a slightly different perspective.

Throughout my career, I’ve been, first and foremost, a database developer.
Way back when, choices of version control systems were rather limited. In a professional sense, I grew up with PVCS and Visual Source Safe.
Even later on, the fact that Oracle Forms and Reports were binary source code meant that the Edit-Merge paradigm of source control was something that tended not to gain traction in the Oracle Shops that I worked in.

Later on, in larger organisations, Perforce was the tool of choice, although always with the comforting P4Win ( and later P4V) front-end.

So, I’d rather like the comfort of a GUI, if only to see that the files I think I’ve checked in are actually there.

Additionally, Github uses an enhanced version of the Markdown language for text files. It would be nice to be able to preview these files before uploading them to the repository.

First things first then….

Installing and Configuring Git

Installation is simple enough, I’m running on Mint so…

sudo apt-get install git

…does the job.

The first things to configure are all about you…

git config --global user.name "mike"
git config --global user.email "mike@somewhere.com"

Additionally, I’d quite like to use Gedit, rather than the default Vim, as my editor…

git config --global core.editor gedit

Once all that is sorted out, the next step is to create a repository.
There’s a very comprehensive guide to setting up git and getting on Github by Rob Krul here.

Code School also has an excellent, interactive tutorial introduction to Git.

Choosing a GUI

Unsurprisingly, you have a number of choices, some of which are listed here.

Initially, I was tempted by Git-Cola, if only for the opportunity it offered to work some Kool Aid gags into this post.
In the end though, I found that I was most comfortable with Giggle. Well, I’m game for a laugh…

Installing Giggle

Once again, giggle should be in the repositories…

sudo apt-get install giggle

There are also a couple of plugins available for giggle so…

sudo apt-get install giggle-personal-details-plugin giggle-terminal-view-plugin

Having a chuckle with Giggle

At this point, I’m assuming that you’ve created your Git repository and may or may not have set up your Github account.
For me, the repository I’m using is under my home directory in :

/home/mikes/crudo

On Mint 17 (with the Cinnamon desktop), giggle has appeared in the Programming menu.
The first time you fire it up, you should see this :

giggle1

To start with, swallow your disappointment and maximize the Window.

Now from the Project Menu, select Open and then point to the directory where you’ve created your git repository.

You should now see a tree view of the files in your project :

giggle2

Navigating to a file will allow you to see a Preview :

giggle_preview

And if you want to edit a file :

giggle_edit

In this case, I’m editing README.md which is a markdown file. Obviously, amending this file in a text editor is going to be prone to error given the nature of markdown. Fortunately, there is a way of reviewing your markdown changes before committing them to the repository. I’ll come onto that in a bit.
In the meantime, however, I’m going to make a small change to this file

changing_readme

After saving these changes, if we now return to giggle and hit the Refresh button, we can see that the icon for README.md has changed :

changed_files

This is because we now have a change to the file. If we want to check the status of the change in git, we can simply open a terminal window from Giggle.
To do this, select the View menu and then Create Terminal.

The terminal will open in the location that’s currently in context in giggle.
We can now check the status with a standard Git command :

giggle_terminal

After closing the terminal window, we can switch to the history view, and review the change we’ve made. To do this, click the History button (next to the Browse Button on the button bar at the top of the window) :

giggle_history

If we wanted to, we could now commit this change by going back to the file browser, right-clicking the file we’ve changed and selecting Commit Changes :

giggle_commit

A Window will then pop-up for us to enter a comment :

giggle_comment

If we now hit the Commit button, the change will be saved to the repository.

There’s probably a bit more to discover about giggle. For now though, I’ve got enough to be getting on with.

As mentioned previously, Github uses markdown as its preferred language for text files. README.md is normally the first file you’ll create a Github repository with.
It also has its own extensions to this language.

You can find an introduction to markdown here.

If you want to check what your markdown file will look like prior to uploading to Github, help is at hand.

Getting a grip with a Github Markdown Previewer

Hosted on Github itself, Grip is a Github Readme Instant Previewer.

There are a couple of things you need to do before you can get a Grip.

First of all, you need to have Python installed.

This being Linux, there should be no problem there.
If you want to check…

python
Python 2.7.6 (default, Jun 22 2015, 17:58:13) 
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> quit()

Next, you need to have pip….

sudo apt-get install python-pip

…and you’re ready to proceed with grip…

sudo pip install grip

You should get the usual reams of output ending with…

Successfully installed grip docopt Flask itsdangerous Jinja2 Markdown MarkupSafe path-and-address Pygments requests Werkzeug
Cleaning up...

Now to view our file. Easiest way is to open a Terminal from Giggle with our file in context. This ensures we’re in the correct directory.
We can now simply launch grip :

grip
 * Running on http://localhost:6419/ (Press CTRL+C to quit)

If we now open a browser at the specified address, we can see what our markdown file will look like on Github :

preview_md

Any links you have to other files in your repository should still work as Grip supports relative paths.
If you want to make any edits to your files, you can do so and then simply refresh your browser view.
Alternatively, you can stop grip ( with CTRL+C), edit and save your file then run it again.

With the aid of Grip and Giggle, I’ve now managed to upload my first Github project.

Now the Rugby World Cup is over, I suppose I’ll have to go back to supporting England. Mind you, as any Welsh readers will be quick to point out, England isn’t a proper country either.


Filed under: Linux, Mint, Ubuntu Tagged: giggle, giggle-terminal-view-plugin, git, git config --global core.editor, Github, Grip

Reverse Engineering a CRUD Matrix in Oracle (again) – with CRUDO

Tue, 2015-10-20 11:55

Dependencies between a table and database stored program units can be found in the DBA_DEPENDENCIES dictionary view. However, this only records the fact that there is a dependency.
Say you have a long-running report that will benefit from an index on a table. It would be good to know if there’s a packaged procedure that does a large update on that table and may therefore take longer with a new index to populate. How can you figure out if there are any such procedures ?
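
The dictionary view on its own won’t tell you. For example, something like this ( using HR.EMPLOYEES purely as an illustration) lists everything that references the table, but gives no clue as to how each program unit uses it :

select name, type
from dba_dependencies
where referenced_owner = 'HR'
and referenced_name = 'EMPLOYEES'
and referenced_type = 'TABLE'
order by type, name
/
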
Well, you’re in luck.

CRUDO is not, as you may think, a domestic cleaning product. It is, in fact, a PL/SQL application for generating CRUD matrices for Stored Program Units against the tables that they reference.
I’ve been playing around with something like this for a while now, and
this time, I’ve posted the resulting code on GitHub.

CRUDO enables you to :

  • Determine a CRUD Matrix for each database stored program unit against a given table ( or synonym on that table)
  • generate Matrices for all tables in a schema
  • record the results in a table for fast querying
  • update existing records only for program units that have been changed since the last update of their record
  • specify override records for situations where the dependency may not be recorded in DBA_DEPENDENCIES (e.g. when using dynamic SQL statements)
  • view runtime information on what it is doing
  • amend logging levels to provide more detailed information on large runs
  • laugh at my inability to come up with a decent name for it

I developed CRUDO on Oracle 11gR2 and it will run on any Oracle Database Edition ( XE, Standard One, Standard, Enterprise).

Installing CRUDO

First thing to do is to get CRUDO from Github.
Head over to the CRUDO Github repository and click the Download ZIP button.

gh_download

Congratulations, you are now the proud owner of a file called crudo-master.zip.

Extract this into a location of your choosing.

Full installation instructions are included in the installation folder.
You can either install into an existing schema, or – the default option – you can create a new schema called CRUDO.
The examples that follow assume that you have done the latter.

You can create the CRUDO schema by running the script provided.
From the installation directory, connect to the database as a user with CREATE USER privileges and run…

@scripts/crudo_owner.sql

You will be prompted for :

  • The name of the user you want to create ( CRUDO by default)
  • The default tablespace for the new user ( default is USERS)
  • The temporary tablespace for the new user ( default is TEMP)

If you want to use any of the defaults, just hit return at the appropriate prompt.

The output of the script will look something like this :

SQL> @scripts/crudo_owner.sql
Enter the name of the new crudo Application Owner schema [crudo] : 
Default tablespace for the new schema [users] : 
Temporary tablespace for the new schem [temp] : 
Enter a password for the new schema [] : 

User created.


Grant succeeded.


User altered.

SQL> 

If you want to check that the user has been created as expected, well, there’s a script for that as well.
Connect as the newly created schema (in this case, CRUDO) and…

@scripts/pre_install_check.sql
SQL> @scripts/pre_install_check
Pre-requisite checks complete.

PL/SQL procedure successfully completed.

SQL> 

Now to install the database components that comprise the application.
Still in the installation directory, connect to the database as the application owner (crudo in this example) and…

@crudo_deploy.sql

Running this script will generate a log file in the same directory. If all goes well, it should look something like this :


Creating tables
===============
APPLICATION_LOGS

Table created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.

CRUD_MATRICES

Table created.


Table altered.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.


Comment created.

Creating Packages
=================
LOGS

Package created.


Package body created.

SEARCH_CODE

Package created.


Package body created.

GENERATE_MATRICES

Package created.


Package body created.

Deployment completed.
Optional Step – Installing the SQLDeveloper Extensions

CRUDO is a PL/SQL application. This means that all of the functionality is available via its PL/SQL packages.
We’ll look at these in detail later on.
If you are a SQLDeveloper user, then you may want to install the SQLDeveloper Extensions that come with the application to make life just that little bit more, well, GUI.

The Extensions are written in XML ( no Java). They can be added to SQLDeveloper in the usual way, namely…

  1. Open SQLDeveloper, go to the Tools menu and select Preferences
  2. Expand the Database node and select User Defined Extensions. It should look something like this :
  3. ude_dialog

  4. Click the Add Row button.
  5. In the new row that’s created, click in the Type column and select EDITOR from the drop-down
  6. add_tab1

  7. In the new row, click in the Location column and choose the crud_tab.xml file. You can find this file in the sqldev_extensions folder.
  8. add_tab2

    Next, add the Context Menu.

  9. Click the Add Row button.
  10. In the new row that’s created, click in the Type column and select ACTION from the drop-down
  11. add_action.png

  12. In the new row, click in the Location column and choose the crud_table_action.xml file. You can find this file in the sqldev_extensions folder.
  13. Click OK.
  14. Re-start SQLDeveloper.

You should now see the CRUD MATRIX tab on the Tables view :

sqld_tab

If you right-click on a table, you should see the CRUDO Table Menu :

sql_menu

The Database Components

Whether you’ve decided to install the SQLDeveloper extensions or not, you will now have a database schema which contains the following tables :

  • CRUD_MATRICES – the table that holds all of the CRUD matrices
  • APPLICATION_LOGS – table that holds any log messages generated by the application

…and the following packages…

  • GENERATE_MATRICES – containing all of the publicly called procedures and functions for the application ( a quick way to list these follows this list)
  • SEARCH_CODE – called from GENERATE_MATRICES to find DML statements in the source code
  • LOGS – used to manage entries to the APPLICATION_LOGS table
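
If you want a quick reminder of what GENERATE_MATRICES actually exposes, the data dictionary can list its public procedures and functions for you. This assumes you’ve installed into the default CRUDO schema :

select distinct procedure_name
from all_procedures
where owner = 'CRUDO'
and object_name = 'GENERATE_MATRICES'
and procedure_name is not null
order by procedure_name
/
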
Granting Access to other users

A script is provided to grant access to the application to database users.
To run this, from the installation directory, connect as either the application owner (CRUDO in our case), or a user with GRANT ANY privileges and…

@scripts/grants.sql

You will be prompted for the application owner ( defaults to CRUDO) and the name of the user you’re granting access to. For example, if I want to grant access to MIKE it would look like this :

SQL> @scripts/grants.sql
Enter the the Application Owning Schema [CRUDO] :
Enter the name of the user to grant access to : mike
old   1: grant execute on &&app_owner..generate_matrices to &app_user
new   1: grant execute on CRUDO.generate_matrices to mike

Grant succeeded.

old   1: grant select on &&app_owner..crud_matrices to &app_user
new   1: grant select on CRUDO.crud_matrices to mike

Grant succeeded.

old   1: grant select on &&app_owner..application_logs to &app_user
new   1: grant select on CRUDO.application_logs to mike

Grant succeeded.

Grants completed.
SQL> 

Now you’ve got everything set up, it’s time to…

Generating your first CRUD

In the following examples, I’m connected as MIKE. I’m going to be working on the HR schema.

First up, we’re going to generate matrices for the HR.EMPLOYEES table. There are two ways to do this…

Generating from a PL/SQL block
begin
    crudo.generate_matrices.crud_table('HR', 'EMPLOYEES');
    commit;
end;
/
Generating From SQLDeveloper

If you’re using the SQLDeveloper Extensions…

In the Object Tree, navigate to Other Users and find HR.
Expand the HR node, then Tables, and right-click on EMPLOYEES.

In the context menu that pops up, click on CRUDO Table Menu then CRUD Table.

You should now see this :

crud_tab_dialog

You’ll notice that the Owner and Table Name are displayed for information only and cannot be edited.
You’ll also notice that there is a third parameter – Refresh Type.

By default this is set to DELTA.

I’ll cover the REFRESH TYPE option a bit later. For now though, just accept the defaults and click Apply.

NOTE – in SQLDeveloper, the Menu Options run in an Autonomous transaction and commit automatically. This is done to ensure that they do not interfere with any other transactions that may be ongoing in your current SQLDeveloper session.
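
For anyone unfamiliar with the mechanism being described, the PL/SQL pattern in question is a simple autonomous transaction. The wrapper below is purely illustrative – it’s not the extension’s actual code, just a minimal sketch of the technique :

-- hypothetical wrapper - not part of CRUDO itself
create or replace procedure crud_table_autonomous( i_owner in varchar2, i_table_name in varchar2)
is
    pragma autonomous_transaction;
begin
    -- this runs in its own transaction, independent of the caller's
    crudo.generate_matrices.crud_table(i_owner, i_table_name);
    -- an autonomous transaction must commit (or roll back) before returning
    commit;
end;
/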

Viewing the Crud Matrices

Whatever method you’ve used, you should now be able to see the matrices we’ve generated for employees. In SQL*Plus :

select object_owner, object_name, object_type,
    create_flag, read_flag, update_flag, delete_flag,
    last_updated, last_updated_user
from crudo.crud_matrices
where table_owner = 'HR'
and table_name = 'EMPLOYEES'
order by object_owner, object_name, object_type
/

OBJECT_OWNER		       OBJECT_NAME		      OBJECT_TYPE		     C R U D LAST_UPDATED	LAST_UPDATED_USER
------------------------------ ------------------------------ ------------------------------ - - - - ------------------ ------------------------------
HR			       EMP_DETAILS_VIEW 	      VIEW			     N Y N N 16-OCT-15		MIKE
HR			       SECURE_EMPLOYEES 	      TRIGGER			     Y N Y Y 16-OCT-15		MIKE
HR			       UPDATE_JOB_HISTORY	      TRIGGER			     N N Y N 16-OCT-15		MIKE

3 rows selected.

Alternatively, in SQLDeveloper, simply navigate to the table as before.
You’ll notice that there is now some data on the CRUD Matrix tab…

crud_tab_display

However you view the output, you should see entries for all dependent objects that are not wrapped.
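
If you suspect that some dependent code is wrapped ( and therefore won’t be analysed), a rough-and-ready check is to look at the first line of its source, which will contain the keyword wrapped. Just a sketch – adjust the owner filter to suit, and bear in mind that a line 1 comment containing the word would also match :

select owner, name, type
from all_source
where owner = 'HR'
and line = 1
and lower(text) like '%wrapped%'
/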

Generating matrices one table at a time may get a bit tedious. Fortunately, CRUDO allows you to…

Generate Matrices for all tables in a schema

As with the CRUD_TABLE procedure, CRUD_SCHEMA also has an optional REFRESH_TYPE parameter which is also set to DELTA by default.
The length of time this procedure runs for will depend on a number of factors, such as the number of tables in the schema and the number, length and type of program units that are dependent on them.
Therefore, it’s not necessarily suited to interactive running and so there is no SQLDeveloper menu option for this.
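
If you did want to force a full refresh across the whole schema, the call would look something like this. I’m assuming here that the parameter names follow the same convention as CRUD_TABLE, so treat it as a sketch rather than gospel :

begin
    crudo.generate_matrices.crud_schema
    (
        i_owner => 'HR',
        i_refresh_type => 'FULL'
    );
    commit;
end;
/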

Additionally, whilst the procedure will update the ACTION column in V$SESSION, you may want a bit more detail about what it’s doing.
You can get this by setting the Application’s logging level.
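
Incidentally, if you just want to watch progress from another session while the schema crud is running, the ACTION column can be polled with something along these lines ( the username filter is an assumption about how you’re connected, and you’ll need SELECT access on V$SESSION) :

select sid, action
from v$session
where username = 'MIKE'
and action is not null
/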

CRUDO has three logging levels :

  • ERROR – only errors are written to the APPLICATION_LOGS table ( the default)
  • INFO – Errors and Information messages are written to the table
  • DEBUG – All messages are written to the table

To find out what the log setting is for the current session :

select crudo.generate_matrices.get_log_level
from dual
/

If we’re running the schema crud, we may want to set the level to INFO :

begin
    crudo.generate_matrices.set_log_level('INFO');
end;
/

One other setting you may want to play around with is the BULK_COLLECT_LIMIT. This dictates the bulk collect limit used when processing objects dependent on a table. To check the current limit :

select crudo.generate_matrices.get_bulk_collect_limit 
from dual
/

To change the limit for the current session :

begin
    crudo.generate_matrices.set_bulk_collect_limit(500);
end;
/

To generate matrices for all tables in a schema, you can run…

begin
    crudo.generate_matrices.crud_schema('HR');
    commit;
end;
/

Once this has completed, we should see records in the CRUD_MATRICES table for all tables in the HR schema :

select distinct table_name
from crudo.crud_matrices
where table_owner = 'HR'
/

TABLE_NAME
------------------------------
COUNTRIES
DEPARTMENTS
EMPLOYEES
JOBS
JOB_HISTORY
LOCATIONS
REGIONS

Before I go too much further, I should probably explain…

Refresh Types

By default ( a DELTA refresh), CRUDO will only re-examine program units whose LAST_DDL_TIME is later than the time at which their CRUD records were last created.
It will, however, remove records for any program unit that has been dropped since it was last run.
If you want to ensure that all program units are processed, irrespective of when they were last changed, you can specify a Refresh Type of FULL.
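
In effect, a DELTA refresh is making a comparison along these lines ( a sketch of the idea rather than the actual implementation) :

select obj.owner, obj.object_name, obj.object_type
from all_objects obj
join crudo.crud_matrices cm
    on cm.object_owner = obj.owner
    and cm.object_name = obj.object_name
    and cm.object_type = obj.object_type
where cm.table_owner = 'HR'
and cm.table_name = 'COUNTRIES'
and obj.last_ddl_time > cm.last_updated
/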

To demonstrate how this works, let’s start by making a note of the time at which our CRUD_MATRICES records were generated for the COUNTRIES table :

select object_owner, object_name, object_type,
    to_char(last_updated, 'HH24:MI')
from crudo.crud_matrices
where table_owner = 'HR'
and table_name = 'COUNTRIES'
order by object_owner, object_name, object_type
/

OBJECT_OWNER		       OBJECT_NAME		      OBJECT_TYPE		     TO_CH
------------------------------ ------------------------------ ------------------------------ -----
HR			       EMP_DETAILS_VIEW 	      VIEW			     13:11

1 row selected.

Now, let’s create the following function (as the HR user), which reads the COUNTRIES table :

create or replace function get_country_name( i_country_id in countries.country_id%type)
    return countries.country_name%type
is
    l_rtn countries.country_name%type;
begin
    select country_name into l_rtn
    from countries
    where country_id = upper(i_country_id);
    
    return l_rtn;
exception when no_data_found then
    raise_application_error( -20000, 'No record for this country id');
end;
/

If we now run a DELTA refresh ( the default, remember), either via SQLDeveloper or via PL/SQL…

begin
    crudo.generate_matrices.crud_table('HR', 'COUNTRIES');
    commit;
end;
/

…we can see that the new record has been added, but the pre-existing record has not been updated…

select object_owner, object_name, object_type,
    to_char(last_updated, 'HH24:MI')
from crudo.crud_matrices
where table_owner = 'HR'
and table_name = 'COUNTRIES'
order by object_owner, object_name, object_type
/

OBJECT_OWNER		       OBJECT_NAME		      OBJECT_TYPE		     TO_CH
------------------------------ ------------------------------ ------------------------------ -----
HR			       EMP_DETAILS_VIEW 	      VIEW			     13:11
HR			       GET_COUNTRY_NAME 	      FUNCTION			     13:30

2 rows selected.

By contrast, with a FULL refresh, all of the records for the target table are overwritten, irrespective of LAST_DDL_TIME. Using PL/SQL ( this time against EMPLOYEES)…

begin
    crudo.generate_matrices.crud_table
    (
        i_owner => 'HR',
        i_table_name => 'EMPLOYEES',
        i_refresh_type => 'FULL'
    );
    commit;
end;
/

In SQLDeveloper, you simply need to select FULL from the drop-down :

full_tab_crud

If we now re-query the records, we can see that they have all been overwritten :

select object_owner, object_name, object_type,
    to_char(last_updated, 'HH24:MI')
from crudo.crud_matrices
where table_owner = 'HR'
and table_name = 'EMPLOYEES'
order by object_owner, object_name, object_type
/

OBJECT_OWNER		       OBJECT_NAME		      OBJECT_TYPE		     TO_CH
------------------------------ ------------------------------ ------------------------------ -----
HR			       EMP_DETAILS_VIEW 	      VIEW			     18:35
HR			       SECURE_EMPLOYEES 	      TRIGGER			     18:35
HR			       UPDATE_JOB_HISTORY	      TRIGGER			     18:35


Generally then, a FULL refresh will re-calculate all of the matrices for a given table. There is, however, one type of record where this does not apply…

Override CRUD records

Consider the following procedure :

create or replace procedure add_country( i_id in varchar2, i_name in varchar2, i_region_id in varchar2)
is
begin
    execute immediate 'insert into countries(country_id, country_name, region_id) values(:1, :2, :3)'
        using i_id, i_name, i_region_id;
end;
/

Because the only reference to the COUNTRIES table is in a dynamic SQL statement, no dependency between the procedure and the COUNTRIES table is recorded in the data dictionary…

select owner, name, type 
from all_dependencies
where referenced_owner = 'HR'
and referenced_name = 'COUNTRIES'
/

OWNER			       NAME			      TYPE
------------------------------ ------------------------------ ------------------
HR			       EMP_DETAILS_VIEW 	      VIEW
HR			       GET_COUNTRY_NAME 	      FUNCTION

Sure enough, if we run a crud against the table…

begin
    crudo.generate_matrices.crud_table( 'HR', 'COUNTRIES');
    commit;
end;
/

… we won’t have a record for this procedure…

select table_owner, table_name
from crudo.crud_matrices
where object_owner = 'HR'
and object_name = 'ADD_COUNTRY'
and object_type = 'PROCEDURE'
/

no rows selected

In order to make sure that this crud matrix is recorded, you can…

Creating an override record

In PL/SQL, you can do this as follows :

begin
    crudo.generate_matrices.set_override
    (
        i_table_owner => 'HR',
        i_table_name => 'COUNTRIES',
        i_object_owner => 'HR',
        i_object_name => 'ADD_COUNTRY',
        i_object_type => 'PROCEDURE',
        i_create => 'Y',
        i_read => 'N',
        i_update => 'N',
        i_delete => 'N'
    );
    commit;
end;
/

In SQLDeveloper, you can do this from the CRUDO Table Menu using the Add CRUD Override Record option.

When you select this option, you’ll see the following dialog box :

override1

In the dialog :

  • Enter the name of the program unit you want to create the record for ( note, this is not case sensitive)
  • select the program unit’s type from the drop-down
  • set the Create, Read, Update and Delete flags to match the CRUD of this program unit against the table

In our example the dialog box should now look something like this :

override2

Once you’re happy with what you’ve entered, click Apply.

We can now see the new record in the CRUD_MATRICES table :

select object_owner, object_name, object_type,
    create_flag, read_flag, update_flag, delete_flag,
    last_updated, last_updated_user,
    override_flag
from crudo.crud_matrices
where table_owner = 'HR'
and table_name = 'COUNTRIES'
order by object_owner, object_name, object_type
/

OBJECT_OWNER		       OBJECT_NAME		      OBJECT_TYPE		     C R U D LAST_UPDATED	LAST_UPDATED_USER	       O
------------------------------ ------------------------------ ------------------------------ - - - - ------------------ ------------------------------ -
HR			       ADD_COUNTRY		      PROCEDURE 		     Y N N N 16-OCT-15		MIKE			       Y
HR			       EMP_DETAILS_VIEW 	      VIEW			     N Y N N 16-OCT-15		MIKE			       N
HR			       GET_COUNTRY_NAME 	      FUNCTION			     N Y N N 16-OCT-15		MIKE			       N

From this point on, anytime we run a CRUD against the COUNTRIES table, even if we specify a full refresh, the override record will remain untouched.
CRUDO is smart enough to know that it’s not as smart as you :-).
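
If you want to prove this to yourself, run a FULL refresh against COUNTRIES and then re-query – the ADD_COUNTRY row should come back with its override untouched :

begin
    crudo.generate_matrices.crud_table
    (
        i_owner => 'HR',
        i_table_name => 'COUNTRIES',
        i_refresh_type => 'FULL'
    );
    commit;
end;
/

select object_name, create_flag, read_flag, update_flag, delete_flag, override_flag
from crudo.crud_matrices
where table_owner = 'HR'
and table_name = 'COUNTRIES'
and object_name = 'ADD_COUNTRY'
/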

Removing an Override Record

Imagine time has moved on a bit and the developer has come to realise that using dynamic SQL in this procedure is, perhaps, not the best way to approach matters.
Instead, the procedure has been re-written like this :

create or replace procedure add_country( i_id in varchar2, i_name in varchar2, i_region_id in varchar2)
is
begin
    merge into countries 
        using dual
        on ( country_id = i_id)
        when matched then
            update
                set country_name = nvl(i_name,country_name),
                    region_id = nvl(i_region_id, region_id)
        when not matched then
            insert( country_id, country_name, region_id)
            values( i_id, i_name, i_region_id);
            
end;    
/

Notice that the CRUD on COUNTRIES will have changed as the procedure may now UPDATE, as well as INSERT.
Additionally, because we’re no longer using dynamic SQL, Oracle recognises that there is a dependency for this procedure on the countries table :

select referenced_owner, referenced_name
from all_dependencies
where owner = 'HR'
and name = 'ADD_COUNTRY'
and type = 'PROCEDURE'
and referenced_type = 'TABLE'
/

REFERENCED_OWNER		       REFERENCED_NAME
------------------------------ ------------------------------
HR			       COUNTRIES

Whilst CRUDO would now be able to recognise and record this CRUD, it will not contradict the override record.
Therefore, we’ll need to remove it.

Once again, you can either do this in PL/SQL…

begin
    crudo.generate_matrices.remove_override
    (
        i_table_owner => 'HR',
        i_table_name => 'COUNTRIES',
        i_object_owner => 'HR',
        i_object_name => 'ADD_COUNTRY',
        i_object_type => 'PROCEDURE'
    );
    commit;
end;
/

…or use the CRUDO Table Menu in SQLDeveloper…

From the menu, select the Remove Crud Override Record option.
You will see the following dialog :

remove1

Enter the object name and select the object type from the drop-down, then click Apply.

When you re-query the CRUD_MATRICES table, you’ll notice that the override record has not been removed, but it has been updated…

select object_owner, object_name, object_type,
    create_flag, read_flag, update_flag, delete_flag,
    last_updated, last_updated_user,
    override_flag
from crudo.crud_matrices
where table_owner = 'HR'
and table_name = 'COUNTRIES'
order by object_owner, object_name, object_type
/

OBJECT_OWNER		       OBJECT_NAME		      OBJECT_TYPE		     C R U D LAST_UPDATED	LAST_UPDATED_USER	       O
------------------------------ ------------------------------ ------------------------------ - - - - ------------------ ------------------------------ -
HR			       ADD_COUNTRY		      PROCEDURE 		     Y N N N 01-JAN-70		MIKE			       N
HR			       EMP_DETAILS_VIEW 	      VIEW			     N Y N N 16-OCT-15		MIKE			       N
HR			       GET_COUNTRY_NAME 	      FUNCTION			     N Y N N 16-OCT-15		MIKE			       N

So, the last_updated date is now 01-JAN-1970 and the override_flag is set to N.
This means that, when we come to run a CRUD against this table, this record will be overwritten, irrespective of the Refresh Type we choose…

begin
    crudo.generate_matrices.crud_table('HR', 'COUNTRIES');
    commit;
end;
/

If we now check, we can see that the record has indeed been updated…

select object_owner, object_name, object_type,
    create_flag, read_flag, update_flag, delete_flag,
    last_updated, last_updated_user,
    override_flag
from crudo.crud_matrices
where table_owner = 'HR'
and table_name = 'COUNTRIES'
order by object_owner, object_name, object_type
/

OBJECT_OWNER		       OBJECT_NAME		      OBJECT_TYPE		     C R U D LAST_UPDATED	LAST_UPDATED_USER	       O
------------------------------ ------------------------------ ------------------------------ - - - - ------------------ ------------------------------ -
HR			       ADD_COUNTRY		      PROCEDURE 		     Y N Y N 16-OCT-15		MIKE			       N
HR			       EMP_DETAILS_VIEW 	      VIEW			     N Y N N 16-OCT-15		MIKE			       N
HR			       GET_COUNTRY_NAME 	      FUNCTION			     N Y N N 16-OCT-15		MIKE			       N

SQL>

That just about wraps it up for our wander through CRUDO ( I hope those shoes weren’t expensive !)
If you decide to give it a go, I’d be interested to hear what you think.


Filed under: CRUDO, Oracle, PL/SQL, SQL Tagged: CRUD matrix, crudo, reverse engineer a crud matrix in PL/SQL, reverse engineer crud oracle
