The Anti-Kyte

Oracle - for when it was like that when you got there

What’s in a Name ? USER_TAB_COLS and USER_TAB_COLUMNS are different.

Tue, 2015-06-23 13:22

My son and I are quite similar in some ways ( although he would vehemently dispute this).
Like me, he works in IT, in his case as a Support Engineer.
Like me, he’s called Mike (well, my Mum likes the name…and I can spell it).
Unlike me – as he would be quick to point out – he still has all his own hair.
These similarities have been known to cause confusion – I’m often contacted by recruitment agents with enticing offers to work on…some newfangled stuff I know nothing about, whilst he’s constantly being offered “exciting” Database related opportunities.

Similar confusion can arise when you’re delving into the Oracle Data Dictionary…

Note – the examples that follow apply to 11gR2. Additionally, apart from the COLS synonym, what is true for USER_TAB_COLUMNS and USER_TAB_COLS also applies to their ALL_ and DBA_ equivalents.

When it comes to getting column meta-data out of the Data Dictionary, you’ve got several choices. To illustrate this, connect as HR and ….

select column_name, data_type
from cols
where table_name = 'EMPLOYEES'
order by column_id
/

select column_name, data_type
from user_tab_columns
where table_name = 'EMPLOYEES'
order by column_id
/

select column_name, data_type
from user_tab_cols
where table_name = 'EMPLOYEES'
order by column_id
/

In each case the results are identical :

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
COMMISSION_PCT		       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER

11 rows selected.

So, it would appear that COLS, USER_TAB_COLUMNS and USER_TAB_COLS are all synonyms for the same thing…

select synonym_name, table_owner, table_name
from all_synonyms
where synonym_name in ('COLS', 'USER_TAB_COLS', 'USER_TAB_COLUMNS')
order by table_name
/ 

SYNONYM_NAME		       TABLE_OWNER	    TABLE_NAME
------------------------------ -------------------- --------------------
USER_TAB_COLS		       SYS		    USER_TAB_COLS
USER_TAB_COLUMNS	       SYS		    USER_TAB_COLUMNS
COLS			       SYS		    USER_TAB_COLUMNS

…OK, so COLS is indeed a synonym for USER_TAB_COLUMNS. USER_TAB_COLS and USER_TAB_COLUMNS also appear to be identical…

select table_name, comments
from all_tab_comments
where table_name in ('USER_TAB_COLUMNS', 'USER_TAB_COLS')
/

TABLE_NAME	     COMMENTS
-------------------- --------------------------------------------------
USER_TAB_COLS	     Columns of user's tables, views and clusters
USER_TAB_COLUMNS     Columns of user's tables, views and clusters

There you go then. Must be the case…

Unused Columns

Let’s create another table in the HR schema as a copy of EMPLOYEES….

create table non_sales_emps as
    select * 
    from employees
    where commission_pct is null
/

Table created.

As the name suggests, we’re not going to have any Sales Staff in this table, so we don’t really need the COMMISSION_PCT column…

SQL> alter table non_sales_emps
  2      set unused column commission_pct
  3  /

Table altered.

SQL> 

So, the table no longer contains the COMMISSION_PCT column…

select column_name, data_type
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
order by column_id
/

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER
SYS_C00009_15061918:46:18$     NUMBER

11 rows selected.

Yes, the table now has 10 columns…and here are the details of all 11 of them.
Weren’t expecting that ? Well then you probably won’t be expecting this either….

select column_name, data_type
from user_tab_columns
where table_name = 'NON_SALES_EMPS'
order by column_id
/

COLUMN_NAME		       DATA_TYPE
------------------------------ --------------------
EMPLOYEE_ID		       NUMBER
FIRST_NAME		       VARCHAR2
LAST_NAME		       VARCHAR2
EMAIL			       VARCHAR2
PHONE_NUMBER		       VARCHAR2
HIRE_DATE		       DATE
JOB_ID			       VARCHAR2
SALARY			       NUMBER
MANAGER_ID		       NUMBER
DEPARTMENT_ID		       NUMBER

10 rows selected.

The extra column has magically disappeared again. Just what is going on ?

Delving a bit deeper into this particular rabbit-hole…

select column_name, data_type
from all_tab_columns
where owner = 'SYS'
and table_name = 'USER_TAB_COLS'
minus
select column_name, data_type
from all_tab_columns
where owner = 'SYS'
and table_name = 'USER_TAB_COLUMNS'
/

COLUMN_NAME		       DATA_TYPE
------------------------------ ------------------------------
HIDDEN_COLUMN		       VARCHAR2
INTERNAL_COLUMN_ID	       NUMBER
QUALIFIED_COL_NAME	       VARCHAR2
SEGMENT_COLUMN_ID	       NUMBER
VIRTUAL_COLUMN		       VARCHAR2

SQL> 

From this we can see that USER_TAB_COLS contains five additional columns over those available in USER_TAB_COLUMNS.

select column_name, comments
from all_col_comments
where owner = 'SYS'
and table_name = 'USER_TAB_COLS'
and column_name in ( 'HIDDEN_COLUMN', 'INTERNAL_COLUMN_ID', 
    'QUALIFIED_COL_NAME', 'SEGMENT_COLUMN_ID', 'VIRTUAL_COLUMN')
/

COLUMN_NAME		  COMMENTS
------------------------- --------------------------------------------------
HIDDEN_COLUMN		  Is this a hidden column?
VIRTUAL_COLUMN		  Is this a virtual column?
SEGMENT_COLUMN_ID	  Sequence number of the column in the segment
INTERNAL_COLUMN_ID	  Internal sequence number of the column
QUALIFIED_COL_NAME	  Qualified column name

Furthermore, if we examine the source code for the USER_TAB_COLUMNS view, the reason for its similarity with USER_TAB_COLS becomes apparent :

select TABLE_NAME, COLUMN_NAME, DATA_TYPE, DATA_TYPE_MOD, DATA_TYPE_OWNER,
       DATA_LENGTH, DATA_PRECISION, DATA_SCALE, NULLABLE, COLUMN_ID,
       DEFAULT_LENGTH, DATA_DEFAULT, NUM_DISTINCT, LOW_VALUE, HIGH_VALUE,
       DENSITY, NUM_NULLS, NUM_BUCKETS, LAST_ANALYZED, SAMPLE_SIZE,
       CHARACTER_SET_NAME, CHAR_COL_DECL_LENGTH,
       GLOBAL_STATS, USER_STATS, AVG_COL_LEN, CHAR_LENGTH, CHAR_USED,
       V80_FMT_IMAGE, DATA_UPGRADED, HISTOGRAM
  from USER_TAB_COLS
 where HIDDEN_COLUMN = 'NO'
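
Incidentally, that HIDDEN_COLUMN predicate also explains our disappearing column : setting a column unused renames it and marks it as hidden. Assuming NON_SALES_EMPS still has its unused column at this point, here’s a sketch to see the flag directly :

```sql
-- unused columns appear as hidden, system-named columns in USER_TAB_COLS
select column_name, hidden_column, internal_column_id
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
and hidden_column = 'YES'
/
```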

So, just when does it become useful to use USER_TAB_COLS rather than USER_TAB_COLUMNS ?

In 11g, you’d set a column to be unused on a large table in order for the “drop” to happen quickly – marking a column unused is a quick data dictionary update, whereas physically dropping it means re-writing the table’s rows.
Once you have set it to unused, the only thing you can do with it is drop it altogether to reclaim the space it’s using.
To find out which tables have unused columns, you can use…

select table_name, count
from user_unused_col_tabs
/

TABLE_NAME			    COUNT
------------------------------ ----------
NON_SALES_EMPS				1

…and if you want to drop an unused column, you don’t need to specify its name in the command…

alter table non_sales_emps
    drop unused columns
/

Table altered.

Where USER_TAB_COLS does come in handy is when you’re looking for …

Virtual Columns

For example, we could add a column to our NON_SALES_EMPS table to calculate the number of completed years service for each employee.
First, we need a deterministic function to return the number of full years between a given date and today :

create or replace function years_elapsed_fn( i_date in date)
    return number deterministic
as
begin
    return floor( months_between( trunc(sysdate), i_date) / 12);
end;
/

Now we add a virtual column to the table which calls this function :

alter table non_sales_emps
    add years_service generated always as
        (years_elapsed_fn(hire_date))
/

Whilst there’s no way to tell which columns are virtual in USER_TAB_COLUMNS, there is in USER_TAB_COLS :

select column_name, data_type, virtual_column
from user_tab_cols
where table_name = 'NON_SALES_EMPS'
/

COLUMN_NAME		  DATA_TYPE			 VIR
------------------------- ------------------------------ ---
EMPLOYEE_ID		  NUMBER			 NO
FIRST_NAME		  VARCHAR2			 NO
LAST_NAME		  VARCHAR2			 NO
EMAIL			  VARCHAR2			 NO
PHONE_NUMBER		  VARCHAR2			 NO
HIRE_DATE		  DATE				 NO
JOB_ID			  VARCHAR2			 NO
SALARY			  NUMBER			 NO
MANAGER_ID		  NUMBER			 NO
DEPARTMENT_ID		  NUMBER			 NO
YEARS_SERVICE		  NUMBER			 YES

11 rows selected.

SQL> 

The reasons for having two such similar ( and similarly named) dictionary views seem to have been lost in the mists of time.
Whatever the rationale, it’s worth knowing the difference next time you need to go poking around the column meta-data in your database.


Filed under: Oracle, SQL Tagged: all_tab_comments, alter table set unused column, cols, deterministic, hidden columns, user_tab_cols, user_tab_columns, virtual columns

Upgrading to APEX 5 on Oracle XE 11g

Mon, 2015-05-25 07:27

It’s a Bank Holiday weekend here in the UK.
This is usually a time for doing odd-jobs as a distraction from watching the rain come down.
This time around, rather than subject you to another lament about the Great British Summer ( or lack thereof), I’m going to go through the steps needed to install APEX5 on Oracle 11gXE.

Now, I know that the documentation doesn’t mention Express Edition.
I also know that the instructions that Oracle do have for upgrading APEX on XE haven’t yet been updated to account for APEX5.
I know this because I’ve spent a wet Bank Holiday finding this stuff out the hard way so that (hopefully), you don’t have to.
What I’m going to cover here is :

  • Pre-installation checks
  • Getting APEX5
  • Installation
  • Configuration

I would say “let’s get cracking before the sun comes out”, but that would only give us until around the second week in July…

The environment

I’m doing this on Oracle Database Express Edition 11gR2.
As I’m doing this on a Linux machine, you may see the odd indication of this, but the essential steps outlined here will apply to all Operating Systems.
I’m starting with a clean installation of Oracle XE, so the current version of APEX is 4.0. However, these steps should still be valid when upgrading from any APEX 4.x version.

Incidentally, if you really want to hang on to the default XE Database Management Application, you’ll probably want to have a look at the steps required to back it up prior to upgrade.

Pre-Installation checks

The documentation details several checks. However, as we’re running on XE11g, we don’t have to worry too much about some of them.

If you really want to double-check….

Oracle Database Version

The minimum version required is 11.1.0.7. XE runs 11.2.0.2 as we can see with the following query in SQL*Plus :

select banner
from v$version
where banner like 'Oracle Database%';

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production

SQL> 
Oracle XML DB

That’s there as well, by default…

select comp_name, version, status
from dba_registry
where comp_id = 'XDB';

COMP_NAME					   VERSION			  STATUS
-------------------------------------------------- ------------------------------ --------------------------------------------
Oracle XML Database				   11.2.0.2.0			  VALID

Web Listener Requirements

Oracle XE ships with the Embedded PL/SQL Gateway by default.
Provided you can hit the Database Home Page (e.g. by clicking on the Getting Started With Oracle Database 11g Express Edition desktop shortcut), you’re good to go.

There are some pre-requisites that you will need to verify.

Memory Target Setting

Once again, on a fresh XE11g install, you shouldn’t have any problems.
In memory terms, the smallest machine I’ve installed XE11g on had a total of 1GB RAM. Even in an environment as constrained as this, the MEMORY_TARGET should still meet the minimum requirement of 300MB.

To check, login to SQL*Plus and ….

SQL> show parameter memory_target

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
memory_target			     big integer 1G
SQL> 
Space Requirements

The easy bit first. If you’re planning to download the English Language Only version of APEX5, you’ll need 250MB of space on disk.
If you’re going for the full version, this goes up to 630 MB.

At this point, the installation instructions start talking about the “Oracle Application Express tablespace”.
This is simply the default tablespace for the schema that will be the APEX owner. Once the installation happens, this will be a schema called APEX_050000. It will be created with SYSAUX as its default tablespace. This is exactly the same as for the existing APEX_040000 user that shipped with XE11g. Incidentally, we’ll also need to know the user’s temporary tablespace for the installation, so we may as well verify both of them now…

SQL> select default_tablespace, temporary_tablespace
  2  from dba_users
  3  where username = 'APEX_040000';

DEFAULT_TABLESPACE	       TEMPORARY_TABLESPACE
------------------------------ ------------------------------
SYSAUX			       TEMP

SQL> 

The amount of space required in SYSAUX is 220MB plus 60MB per additional language installed.
There is also a requirement for 100MB in the SYSTEM tablespace.

Working out how much space is available isn’t entirely straightforward. However, we can get most of the way with the following query :

select tablespace_name,
  file_name,
  (maxbytes - bytes) / 1024/1024 as "Available Space MB",
  autoextensible
from dba_data_files
where tablespace_name in ('SYSAUX', 'SYSTEM')
/
TABLESPACE_NAME 	       FILE_NAME					  Available Space MB AUT
------------------------------ -------------------------------------------------- ------------------ ---
SYSAUX			       /u01/app/oracle/oradata/XE/sysaux.dbf			  31677.9844 YES
SYSTEM			       /u01/app/oracle/oradata/XE/system.dbf				 150 YES

SQL> 

If your query returns an Available Space MB figure less than the requirements, don’t worry too much.
Provided the tablespace is Autoextensible and there is enough space on disk, it will automatically grow as it needs more space.

Browser Version

The minimum requirements for Web browsers are :

  • Firefox version 35
  • Chrome version 40
  • Safari version 7
  • IE version 9
Getting APEX5

Right, once you’re happy with the pre-requisite steps, head over to the OTN APEX5 download page and download your required version.
This will be one of :

  • Oracle Application Express 5.0 – All languages
  • Oracle Application Express 5.0 – English language only

I’ve gone for the English language only version.

As mentioned previously, the Download Page does state that :

“Application Express 5.0 can also be used with Oracle Database 11g Express Edition (XE), but is supported only through the OTN discussion forum, and not through Oracle Support Services.”

However, the installation instructions page it links to has yet to be updated for APEX5 at the time of writing.

Anyway, I now have a file called apex_5.0_en.zip in my Downloads directory.

As I’m on Linux, I’m going to unzip and deploy this to the ORACLE_BASE directory (/u01/app/oracle).
To avoid any issues with file permissions, I’ll do this as the oracle OS user.

I should point out that it doesn’t really matter where you deploy the files to. Also, you don’t have to be oracle to do this.
I’ve just done it this way to keep things simple.

cd $HOME/Downloads
sudo su oracle
unzip -d /u01/app/oracle apex_5.0_en.zip

You’ll then see something like …

...
  inflating: /u01/app/oracle/apex/core/template.plb  
  inflating: /u01/app/oracle/apex/core/dev_grants.sql  
  inflating: /u01/app/oracle/apex/apxsqler.sql  
  inflating: /u01/app/oracle/apex/apxprereq.sql  
  inflating: /u01/app/oracle/apex/apxupgrd.sql  
  inflating: /u01/app/oracle/apex/apxconf.sql  
  inflating: /u01/app/oracle/apex/coreins5.sql  
  inflating: /u01/app/oracle/apex/apxdvins.sql  
  inflating: /u01/app/oracle/apex/apxchpwd.sql  
 extracting: /u01/app/oracle/apex/apxexit.sql  
  inflating: /u01/app/oracle/apex/catapx.sql  
  inflating: /u01/app/oracle/apex/apxe102.sql  

After that, you should have a sub-directory called apex where you’ve unzipped the file.
NOTE – you can stop being the oracle user now.

Installation

A bit of housekeeping to start with – we need to make sure that the APEX_PUBLIC_USER database account is unlocked :

select account_status
from dba_users
where username = 'APEX_PUBLIC_USER';

If the account_status is LOCKED then…

alter user apex_public_user account unlock;

User altered.

select account_status from dba_users where username = 'APEX_PUBLIC_USER';

ACCOUNT_STATUS
--------------------------------
OPEN

SQL>       

NOTE – strictly speaking, you should also perform this check for the ANONYMOUS user. However, if Oracle XE is newly installed, or if you’re running the Embedded PL/SQL Gateway, it should be unlocked.
If you want to satisfy yourself that this is, in fact, the case :

select account_status
from dba_users
where username = 'ANONYMOUS';
Loading APEX5 into the database

As of APEX5, we now have the option of installing just the APEX runtime…but where’s the fun in that ?
We want the full-blown development environment…

The script we need to run to do the installation – apexins.sql – takes four parameters :

  • the default tablespace of the APEX owner schema
  • the default tablespace of the FLOWS_FILES schema
  • a temporary tablespace in the database
  • a virtual directory for APEX images

We already know that the default tablespace for the APEX owner is SYSAUX.
We also know that the temporary tablespace is called TEMP.
As for the FLOWS_FILES schema…

SQL> select default_tablespace
  2  from dba_users
  3  where username = 'FLOWS_FILES';

DEFAULT_TABLESPACE
------------------------------
SYSAUX

SQL> 

As for the virtual directory – “/i/” always seems to work.

Now, change directory to the apex directory you’ve created as part of the unzip step, and connect to the database as sys as sysdba.

cd /u01/app/oracle/apex 

sqlplus sys as sysdba

…and run the script…

@apexins.sql SYSAUX SYSAUX TEMP /i/

After several minutes worth of messages whizzing up your screen you’ll get :

PL/SQL procedure successfully completed.

-- Now beginning upgrade. This will take several minutes.-------
-- Ensuring template names are unique -------

…finally, you’ll get…

Thank you for installing Oracle Application Express 5.0.0.00.31

Oracle Application Express is installed in the APEX_050000 schema.

The structure of the link to the Application Express administration services is as follows:
http://host:port/pls/apex/apex_admin (Oracle HTTP Server with mod_plsql)
http://host:port/apex/apex_admin     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex/apex_admin     (Oracle REST Data Services)

The structure of the link to the Application Express development interface is as follows:
http://host:port/pls/apex (Oracle HTTP Server with mod_plsql)
http://host:port/apex     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
http://host:port/apex     (Oracle REST Data Services)


PL/SQL procedure successfully completed.






1 row selected.

Disconnected from Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
mike@mike-Aspire-E3-112 /u01/app/oracle/apex $ 

Configuration

Re-setting the ADMIN password

The next step is to set the APEX ADMIN password. Note that, even if you’ve already done this for the previously installed APEX version, you’ll need to do it again here, using the script that’s shipped with this version of APEX.
Also, despite any configuration changes you may have made to the APEX password complexity rules, the password you set will need to conform to the following :

  • Password must contain at least 6 characters.
  • Password must contain at least one numeric character (0123456789).
  • Password must contain at least one punctuation character (!"#$%&()"*+,-/:;?_).
  • Password must contain at least one upper-case alphabetic character.
  • Password must not contain username.

Bearing this in mind, connect to the database again as SYS AS SYSDBA and you’ll be prompted as follows….

@apxchpwd.sql

================================================================================
This script can be used to change the password of an Application Express
instance administrator. If the user does not yet exist, a user record will be
created.
================================================================================
Enter the administrator's username [ADMIN] 
User "ADMIN" exists.
Enter ADMIN's email [ADMIN] 
Enter ADMIN's password [] 
Changed password of instance administrator ADMIN.
Load Images

One final step – we need to load the images.
As we’re running the Embedded PL/SQL Gateway, we’ll need to use the apex_epg_config.sql script.
This script takes, as a parameter, the name of the directory that you’ve extracted the apex zip into – i.e. without the /apex directory itself. As you can see from the output below, it does take a while (8 minutes in this case) :

@apex_epg_config.sql /u01/app/oracle

SQL> @apex_epg_config.sql /u01/app/oracle

. Loading images directory: /u01/app/oracle/apex/images
timing for: Load Images
Elapsed: 00:08:00.92
SQL> 
Post Installation Steps

Just before we can go and play with our shiny new APEX version, we need to do a little tidying.
First of all, confirm the port that the PL/SQL Gateway is listening on :

select dbms_xdb.gethttpport
from dual;
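
Should that query return 0 ( meaning the embedded gateway is disabled), or you simply want a different port, DBMS_XDB also has a setter. A sketch, with 8080 used purely as an example value :

```sql
-- enable the Embedded PL/SQL Gateway on port 8080 (example value)
exec dbms_xdb.sethttpport(8080)
```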

Then, finally, shutdown and re-start the database.

Once it comes back up, point your browser to :

http://localhost:port/apex/apex_admin

…where port is the port number returned by the query above.

Login as user ADMIN with the password that you set for admin when you ran apxchpwd.sql. If all goes well, you should now see…

apex5_admin

It may be raining outside, but at least you now have a cosy APEX5 installation to play with.


Filed under: APEX, Oracle Tagged: APEX5 on Oracle XE 11g, apexins.sql, apex_epg_config.sql, apxchpwd.sql, Oracle Application Express Tablespace

Migrating the XE Database Management Application to a new version of APEX

Sun, 2015-05-24 08:31

I must confess to a weakness when it comes to throwing stuff away.
This is particularly true of techie stuff.
Whilst I have occasionally cannibalised an old machine for parts, there is a regrettably large part of the garage reserved for “vintage” hardware that I might just need at some point.

I’ve recently added to this hoard. I’ve finally gone and got a replacement for my ageing netbook.
As part of the configuration of the new machine, I’ve installed Oracle XE again.

I’m now poised to attempt an upgrade to a shiny new version of APEX.

First of all though, if you are similarly keen to upgrade from the venerable APEX 4.0, which XE ships with, to something more modern, your hoarding instincts may kick-in when it comes to the default Database Management Application.

Once you upgrade APEX 4 to any subsequent version, this application “disappears”.
The functionality it offers is readily available through SQLDeveloper (or indeed, any of the major Oracle Database IDEs).
Alternatively, it’s a fairly simple matter to come up with your own, improved version.
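
For instance, much of what the Storage tab shows can be knocked up from a query along these lines ( a minimal sketch against DBA_SEGMENTS, not the actual application code) :

```sql
-- rough space-used-per-tablespace report, in the spirit of the XE admin Storage tab
select tablespace_name,
    round( sum(bytes) / 1024 / 1024) as used_mb
from dba_segments
group by tablespace_name
order by tablespace_name
/
```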

Not convinced ? Oh well, I suppose we’d better save it for re-deployment into your new APEX environment.

What I’m going to cover here is :

  • Backing up the default XE ADMIN application
  • Tweaking the APEX export file
  • Restoring the XE ADMIN application

I’ve tested this process against both APEX4.2 and APEX5.0 running on Oracle XE11g.
In the steps that follow, I’m assuming that you’re upgrading to APEX5.0.
The main difference here is the APEX owning schema.
For APEX4.2, the owner is APEX_040200, in APEX 5.0 it’s APEX_050000.
As the APEX upgrade takes place entirely within the database, the steps that follow are platform independent.

Incidentally, if you’re wondering exactly how you would upgrade XE11g to this APEX version, details will follow in my next post.

NOTE – I’m assuming here that you’re doing this on your own personal playground 11GXE database and have therefore not
worried too much about any security implications for some of the activities detailed below.

Right, let’s get started…

Backing up the XE ADMIN application

The script below uses the APEX4 PL/SQL API to create an export of the application. This is simply an SQL file on which we’ll need to do some light hacking before running it against the database once the APEX upgrade is completed.
The script ( saved as export_xe_app.sql) is :

declare
	-- Must be logged on as SYSTEM
	-- Need to grant execute on UTL_FILE to SYSTEM
	l_fh utl_file.file_type;
	l_buffer varchar2(32767);
	l_amount pls_integer := 32767;
	l_clob clob;
	l_length pls_integer;
	l_pos pls_integer := 1;
	
begin
	-- Get the source code for the XE Admin APEX application
	l_clob := wwv_flow_utilities.export_application_to_clob ( p_application_id   => 4950);
	
	l_length := dbms_lob.getlength(l_clob);
	
	-- Now write it to a file
	l_fh := utl_file.fopen( 'DATA_PUMP_DIR', 'f4950.sql', 'w');
	
	loop
		exit when l_pos > l_length;
		-- read the next chunk of the CLOB...
		l_buffer := dbms_lob.substr( l_clob, l_amount, l_pos);
		-- ...and write it out to the file
		utl_file.put(l_fh, l_buffer);
		utl_file.fflush(l_fh);
		l_pos := l_pos + length(l_buffer);
	end loop;
	utl_file.fclose(l_fh);
end;
/

There are a couple of points to note prior to running this script.
The first is that it must be run as SYSTEM as this is the parsing schema for this application.
The second is that you will need to grant execute on UTL_FILE to SYSTEM.
So connect as SYS as sysdba and..

grant execute on utl_file to system
/

Now that’s done, connect as system and execute the script.

Once this is done, you should have a file called f4950.sql in the DATA_PUMP_DIR.
I’ve used this directory as it’s created by default when XE is installed. If you don’t know where this maps to on disk, then you can find it by running the following query :

select directory_path
from dba_directories
where directory_name = 'DATA_PUMP_DIR'
/

In my case (running on Linux) , this returns :

DIRECTORY_PATH
--------------------------------------------------------------------------------
/u01/app/oracle/admin/XE/dpdump/

Tweaking the APEX Export

There are a couple of things that we need to change in the export script.
The first is the call to the USER_IS_DBA function.

This function was moved from the WWV_FLOW_LOGIN package to WWV_FLOW_SECURITY in APEX 4.2, where( as at APEX 5.0), it still resides.

Therefore, we need to amend line number 218 from …

  p_scheme=>'return nvl(wwv_flow.g_flow_step_id,-1) in (1,101)'||chr(10)||
'       or wwv_flow_login.user_is_dba(p_username => :APP_USER);',

…to…

  p_scheme=>'return nvl(wwv_flow.g_flow_step_id,-1) in (1,101)'||chr(10)||
'       or wwv_flow_security.user_is_dba(p_username => :APP_USER);',

The other thing to change is the name of the APEX owner.
In my case ( migrating to APEX5), this will change from APEX_040000 to APEX_050000.

So, line number 142 in the file changes from…

  p_owner => nvl(wwv_flow_application_install.get_schema,'APEX_040000'),

…to…

  p_owner => nvl(wwv_flow_application_install.get_schema,'APEX_050000'),
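
Both edits lend themselves to a quick scripted fix. Here’s a sketch using sed – the sample file content below is hypothetical, just the two lines we care about ; in reality you’d run the same substitutions against f4950.sql itself (after taking a copy, as sed -i edits in place) :

```shell
# create a miniature stand-in for the export file with just the two lines to change
cat > f4950_sample.sql <<'EOF'
'       or wwv_flow_login.user_is_dba(p_username => :APP_USER);',
  p_owner => nvl(wwv_flow_application_install.get_schema,'APEX_040000'),
EOF

# swap the package name and the APEX owner in one pass
sed -i -e 's/wwv_flow_login\.user_is_dba/wwv_flow_security.user_is_dba/' \
       -e "s/'APEX_040000'/'APEX_050000'/" f4950_sample.sql

cat f4950_sample.sql
```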
Deploying the application to the new APEX version

Before running our application export against the new APEX repository, we need to grant select on the relevant views directly to the APEX owner.
NOTE – I’m indebted to this article by Jeff Eberhard for this particular step:

grant select on dba_temp_files to APEX_050000;
grant select on v_$temp_extent_pool to APEX_050000;
grant select on v_$temp_space_header to APEX_050000;
grant select on v_$system_parameter to APEX_050000;
grant select on v_$session to APEX_050000;

Now that’s done, we can simply connect as SYSTEM and run our import.
The output should look like this (ignore the two spurious errors at the start) :

SQL> @f4950.sql
SP2-0734: unknown command beginning "Content-ty..." - rest of line ignored.
SP2-0734: unknown command beginning "Content-le..." - rest of line ignored.
APPLICATION 4950 - XE_ADMIN
Set Credentials...
Check Compatibility...
API Last Extended:20130101
Your Current Version:20130101
This import is compatible with version: 20100513
COMPATIBLE (You should be able to run this import without issues.)
Set Application ID...
...authorization schemes
...navigation bar entries
...application processes
...application items
...application level computations
...Application Tabs
...Application Parent Tabs
...Shared Lists of values
...Application Trees
...page groups
...PAGE 0: 0
...PAGE 1: Home
...PAGE 2: Storage
...PAGE 4: Sessions
...PAGE 5: Backups
...PAGE 6: Parameters
...PAGE 7: Application Express
...PAGE 9: Tablespace Storage Details
...PAGE 10: Session Details
...PAGE 101: Login
...lists
...breadcrumbs
...page templates for application: 4950
......Page template 936879405068865354
......Page template 936880509857865357
......Page template 936881728833865360
......Page template 936882328801865361
...button templates
......Button Template 936883817777865362
......Button Template 936884428414865363
...region templates
......region template 936886425092865365
......region template 936888203598865371
......region template 936889721148865373
......region template 936890925366865374
......region template 936891504925865375
......region template 936892120369865376
...List Templates
......list template 8548028083899041
......list template 936898727314865389
...report templates
......report template 936900209975865391
......report template 936902514655865394
...label templates
......label template 936904524832865485
......label template 936904703644865485
...breadcrumb templates
......template 936904805731865485
...popup list of values templates
......template 936905603379865499
...calendar templates
...application themes
......theme 936905710643865499
...build options used by application 4950
...messages used by application: 4950
...dynamic translations used by application: 4950
...Language Maps for Application 4950
...Shortcuts
...web services (9iR2 or better)
...shared queries
...report layouts
...authentication schemes
......scheme 936873424775859940
...plugins
...done
SQL> 

As we have imported the application with the same ID that it had originally ( 4950), the Getting Started with Oracle Database 11g Express Edition desktop icon should still work in exactly the same way…

xe_admiin_apex5

NOTE – the first time you click on a tab that requires login, there is a bit of an issue.
Instead of prompting for login credentials, the text "Content-type:text/html; charset=utf8" appears in the top-left of the page.
If you click the tab a second time, you will get prompted for login credentials as expected.

Once you’ve connected, the tabs should work pretty much as usual :

storage_tab

sessions_tab

parameters_tab

As a word of caution, I’d be wary of using the Application Express tab for APEX admin on the database.
Instead, I’d use the APEX URL specific to the installed APEX version for this purpose.

Right, off to sort out the clutter in the garage.


Filed under: APEX, Oracle, PL/SQL, SQL Tagged: dba_directories, UTL_FILE, wwv_flow_utilities.export_application_to_clob

The Ping of Mild Annoyance Attack and other Linux Adventures

Wed, 2015-05-13 14:46

Sometimes, it’s the simple questions that are the most difficult to answer.
For example, how many votes does it take to get an MP elected to the UK Parliament ?
The answer actually ranges from around 20,000 to several million depending on which party said MP is standing for.
Yes, our singular electoral system has had another outing. As usual, one of the main parties has managed to win a majority of seats despite getting rather less than half of the votes cast ( in this case 37%).

Also, as has become traditional, they have claimed to have “a clear instruction from the British People”.
Whenever I hear this, I can’t help feeling that the “instruction” is something along the lines of “don’t let the door hit you on the way out”.

Offering some respite from the mind-bending mathematics that is a UK General Election, I’ve recently had to ask a couple of – apparently – simple questions with regard to Linux…

How do I list the contents of a zip file on Linux ?

More precisely, how do I do this on the command line ?

Let’s start with a couple of csv files. First, questions.csv :

question number, text
1,How many pings before it gets annoying ?
2,Where am I ?
3,How late is my train ?
4,What's in the zip ?
5,Fancy a game of Patience ?

Now answers.csv :

answer number, answer
1,6
2,Try hostname
3,Somewhere between a bit and very
4,Depends what type of zip
5,No!

Now we add these into a zip archive :

zip wisdom.zip questions.csv answers.csv
  adding: questions.csv (deflated 21%)
  adding: answers.csv (deflated 10%)

If you now want to check the contents of wisdom.zip, rather than finding the appropriate switch for the zip command, you actually need to use unzip….

unzip -l wisdom.zip
Archive:  wisdom.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
      156  04-29-2015 19:21   questions.csv
      109  04-29-2015 19:23   answers.csv
---------                     -------
      265                     2 files

If you want to go further and actually view the contents of one of the files in the zip….

unzip -c wisdom.zip answers.csv
Archive:  wisdom.zip
  inflating: answers.csv             
answer number, answer
1,6
2,Try hostname
3,Somewhere between a bit and very
4,Depends what type of zip
5,No!
The thing about PING

Say you have a script that checks that another server on the network is available, as a prelude to transferring files to it.
On Solaris, it may well do this via the simple expedient of…

ping

Now, whilst ping has been around for decades and is implemented on all major operating systems, the implementations differ in certain subtle ways.
Running it with just a hostname on Solaris will simply issue a single ping to check if the target machine is up.
On Windows, it will attempt to send and receive 4 packets and report the round-trip time for each.
On Linux however….

ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.032 ms
64 bytes from localhost (127.0.0.1): icmp_seq=2 ttl=64 time=0.087 ms
64 bytes from localhost (127.0.0.1): icmp_seq=3 ttl=64 time=0.088 ms
64 bytes from localhost (127.0.0.1): icmp_seq=4 ttl=64 time=0.098 ms
64 bytes from localhost (127.0.0.1): icmp_seq=5 ttl=64 time=0.096 ms
64 bytes from localhost (127.0.0.1): icmp_seq=6 ttl=64 time=0.097 ms
64 bytes from localhost (127.0.0.1): icmp_seq=7 ttl=64 time=0.095 ms
64 bytes from localhost (127.0.0.1): icmp_seq=8 ttl=64 time=0.099 ms
64 bytes from localhost (127.0.0.1): icmp_seq=9 ttl=64 time=0.096 ms
64 bytes from localhost (127.0.0.1): icmp_seq=10 ttl=64 time=0.100 ms
64 bytes from localhost (127.0.0.1): icmp_seq=11 ttl=64 time=0.066 ms
^C
--- localhost ping statistics ---
11 packets transmitted, 11 received, 0% packet loss, time 9997ms
rtt min/avg/max/mdev = 0.032/0.086/0.100/0.022 ms

Yep, it’ll just keep going until you cancel it.

If you want to avoid initiating what could be considered a very half-hearted Denial of Service attack on your own server, then it’s worth remembering that you can specify the number of packets that ping will send.
So…

ping -c1 localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.079 ms

--- localhost ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms

…is probably more what you’re after. This will exit with 0 if the target is up, as can be demonstrated using the script below (called you_up.sh)…

#!/bin/sh
ping -c1 localhost >/dev/null
if [ $? -ne 0 ]; then
    echo 'Something has gone horribly wrong'
else
    echo 'All OK'
fi
exit 0

Run this and we get…

sh you_up.sh
All OK
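Incidentally, since it’s only the exit code we care about, the test can be made a bit more idiomatic by putting the ping command straight into the if, rather than inspecting $? afterwards. A sketch (the default target here is just an example) :

```shell
#!/bin/sh
# Same check as you_up.sh, but testing ping's exit status directly.
# The target defaults to localhost - pass your own host as the first argument.
target="${1:-localhost}"
if ping -c1 "$target" >/dev/null 2>&1; then
    echo 'All OK'
else
    echo 'Something has gone horribly wrong'
fi
```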

The long-suffering British electorate isn’t getting too much of a break. We now have the prospect of a Referendum on the UK’s EU membership to look forward to. On the plus side, it should be a bit easier to work out which side wins.


Filed under: Linux, Shell Scripting Tagged: ping -c, reading contents of a zip archive, specify number of packets to send using ping, unzip -c, unzip -l, zip

Oracle XE 11g – Getting APEX to start when your database does

Sun, 2015-05-03 03:53

They say patience is a virtue. It’s one that I often get to exercise, through no fault of my own.
Usually trains are involved. Well, I say involved, what I mean is…er…late.
I know, I do go on about trains. It’s a peculiarly British trait.
This may be because the highest train fares in Europe somehow don’t quite add up to the finest train service.
We can debate the benefits of British Trains later – let’s face it we’ll have plenty of time whilst we’re waiting for one to turn up. For now, I want to concentrate on avoiding any further drain on my badly tried patience by persuading APEX that it should be available as soon as my Oracle XE database is…

Oracle Express Edition – how it starts

There are three main components to Oracle XE :

  1. The Database
  2. The TNS Listener
  3. APEX

When you fire up Express Edition, it will start these components in this order :

  1. The Database
  2. The TNS Listener

APEX doesn’t get a look in at this point. Instead, when you first invoke it, it has to wait for the XDB database component to be initialized.

As I’ve observed previously, starting up the database before the listener can cause a lag if you’re trying to connect via TNS – i.e. from any machine other than the one the database is running on, or by specifying the database in the connect string.

The other problem is, of course, APEX will often refuse to play when you first call it after startup.

Often, your first attempt to get to the Database Home Page will be met with the rather unhelpful :

leaves on the line, or the wrong kind of snow ? Either way, APEX isn’t talking to you

It’s not until the TNS Listener is up and running that you’ll actually be able to connect to APEX.

In fact, it won’t be until you see the XEXDB service has been started by the Listener that you’ll be able to use APEX.
To check this :

lsnrctl status

The output should look something like this :

LSNRCTL for Linux: Version 11.2.0.2.0 - Production on 02-MAY-2015 19:10:19

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC_FOR_XE)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.2.0 - Production
Start Date                02-MAY-2015 18:25:19
Uptime                    0 days 0 hr. 44 min. 59 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Default Service           XE
Listener Parameter File   /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/xe/log/diag/tnslsnr/mike-Monza-N2/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC_FOR_XE)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=mike-Monza-N2)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=mike-Monza-N2)(PORT=8080))(Presentation=HTTP)(Session=RAW))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully

You can see what happens when you first call APEX by looking in the database alert log. If you want to see it in real-time, open a terminal and type :

tail -f /u01/app/oracle/diag/rdbms/xe/XE/trace/alert_XE.log

With the terminal window open and visible, click on the Getting Started Desktop icon (or simply invoke APEX directly from your browser). You should see this in the alert.log…

XDB installed.
XDB initialized.

So, the solution is :

  1. Start the Listener before starting the Database
  2. Get “APEX” to start directly after starting the Database

I’ve put APEX in quotes here because what we actually want to do is initialize the XDB component within the database.

Step 1 – changing the starting order

To do this, we’ll need to edit the standard startdb.sh script, after first making a backup copy, just in case …

sudo su oracle
cd /u01/app/oracle/product/11.2.0/xe/config/scripts
cp startdb.sh startdb.sh.bak
gedit startdb.sh

… The edited script should look something like this :

#!/bin/bash
#
#       svaggu 09/28/05 -  Creation
#	svaggu 11/09/05 -  dba groupd check is added
#

xsetroot -cursor_name watch
case $PATH in
    "") PATH=/bin:/usr/bin:/sbin:/etc
        export PATH ;;
esac

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export ORACLE_SID=XE
LSNR=$ORACLE_HOME/bin/lsnrctl
SQLPLUS=$ORACLE_HOME/bin/sqlplus
LOG="$ORACLE_HOME_LISTNER/listener.log"
user=`/usr/bin/whoami`
group=`/usr/bin/groups $user | grep -i dba`

if test -z "$group"
then
	if [ -f /usr/bin/zenity ]
	then
		/usr/bin/zenity --error --text="$user must be in the DBA OS group to start the database." 
		exit 1
	elif [ -f /usr/bin/kdialog ]
	then
		/usr/bin/kdialog --error "$user must be in the DBA OS group to start the database."
		exit 1
	elif [ -f /usr/bin/xterm ]
	then
		/usr/bin/xterm -T "Error" -n "Error" -hold -e "echo $user must be in the DBA OS group to start the database."
		exit 1
	fi
else
    # Listener start moved to before database start to avoid lag in db
    # registering with listener after db startup
    # Mike
	if [ ! `ps -ef | grep tns | cut -f1 -d" " | grep -q oracle` ]
	then
		$LSNR start > /dev/null 2>&1
	else
		echo ""
	fi
# now start the database
	$SQLPLUS -s /nolog @$ORACLE_HOME/config/scripts/startdb.sql > /dev/null 2>&1
fi

xsetroot -cursor_name left_ptr

Now, when the database first starts and looks around for the Listener to register with, it’ll find it up and ready to go.

Step 2 – initialise XDB

Exactly how you do this properly has been a bit of a puzzle to me. I’m sure there is a proper way to do this, other than pointing your browser at APEX only for it to tell you to go away.
In lieu of this elusive “proper” XDB startup command, I’m going to use one that tells you what port the PL/SQL Gateway ( the default listener for APEX) is listening on….

select dbms_xdb.gethttpport
from dual;

Something interesting happens when you run this command. The first time you execute it after database startup and when you haven’t invoked APEX, it takes quite a long time to return. If you look in the alert log you’ll see the reason for this…

XDB installed.
XDB initialized.

Yes, the same entries you see when you first try to invoke APEX.

So, we’re going to get this query to run as soon as the database is started. The easiest way to do this is to edit the startdb.sql script that’s called by the shell script we’ve just edited…

sudo su oracle
cd /u01/app/oracle/product/11.2.0/xe/config/scripts
cp startdb.sql startdb.sql.bak
gedit startdb.sql

Here, we’re simply adding this query so that it runs directly after the database is opened…

connect / as sysdba
startup
-- added to start the PL/SQL Gateway so that APEX should be reachable
-- right after startup
select dbms_xdb.gethttpport from dual;
exit

Now, if we check the alert.log on startup of the database we’ll see something like…

QMNC started with pid=28, OS id=2469
Wed Apr 29 12:16:27 2015
Completed: ALTER DATABASE OPEN
Wed Apr 29 12:16:32 2015
db_recovery_file_dest_size of 10240 MB is 57.54% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Apr 29 12:16:32 2015
Starting background process CJQ0
Wed Apr 29 12:16:32 2015
CJQ0 started with pid=29, OS id=2483
Wed Apr 29 12:16:43 2015
XDB installed.
XDB initialized.

As soon as that last message is there, APEX is up and ready to receive requests.
On the one hand, it’s nice to know for definite when APEX will finally deign to answer your call, as opposed to hiding behind the PAGE NOT FOUND error and pretending to be out.
On the other hand, having to tail the alert.log to figure out when this is seems a bit like hard work.

Of course, in Linux land, you can always just prompt the shell script to announce when it’s finished…

Desktop Notification

I’m running this on a Gnome based desktop ( Cinnamon, if you’re interested, but it should work on anything derived from Gnome). KDE adherents will have their own, equally useful methods.
As in my previous attempt at this sort of thing, I’m going to use notify-send.

If you want to test if it’s installed, you can simply invoke it from the command line :

notify-send "Where's that train ?"

If all is OK, you should get this message displayed in a notification on screen…

Is it a bird ? Is it a train…

Now we simply use this utility to add a message at the end of the database startup script.
We can even add an icon if we’re feeling flash….

#!/bin/bash
#
#       svaggu 09/28/05 -  Creation
#	svaggu 11/09/05 -  dba groupd check is added
#

xsetroot -cursor_name watch
case $PATH in
    "") PATH=/bin:/usr/bin:/sbin:/etc
        export PATH ;;
esac

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export ORACLE_SID=XE
LSNR=$ORACLE_HOME/bin/lsnrctl
SQLPLUS=$ORACLE_HOME/bin/sqlplus
LOG="$ORACLE_HOME_LISTNER/listener.log"
user=`/usr/bin/whoami`
group=`/usr/bin/groups $user | grep -i dba`

if test -z "$group"
then
	if [ -f /usr/bin/zenity ]
	then
		/usr/bin/zenity --error --text="$user must be in the DBA OS group to start the database." 
		exit 1
	elif [ -f /usr/bin/kdialog ]
	then
		/usr/bin/kdialog --error "$user must be in the DBA OS group to start the database."
		exit 1
	elif [ -f /usr/bin/xterm ]
	then
		/usr/bin/xterm -T "Error" -n "Error" -hold -e "echo $user must be in the DBA OS group to start the database."
		exit 1
	fi
else
    # Listener start moved to before database start to avoid lag in db
    # registering with listener after db startup
    # Mike
	if [ ! `ps -ef | grep tns | cut -f1 -d" " | grep -q oracle` ]
	then
		$LSNR start > /dev/null 2>&1
	else
		echo ""
	fi
    # now start the database
	$SQLPLUS -s /nolog @$ORACLE_HOME/config/scripts/startdb.sql > /dev/null 2>&1
fi
#
# Publish desktop notification that we're ready to go...
#
notify-send -i /usr/share/pixmaps/oraclexe-startdatabase.png "Database and APEX ready to play" 

xsetroot -cursor_name left_ptr

When the script hits the notify line, we’re rewarded with…

[screenshot: db_ready]

So, even if your train has stopped randomly in-between stations or is simply proving once again, that the timetable is a work of fiction, at least you won’t have to wonder if your database is ready for action.


Filed under: APEX, Oracle, Shell Scripting, SQL Tagged: alert.log, dbms_xdb.gethttpport, lsnrctl, notify-send, XDB

Getting a File Listing from a Directory in PL/SQL

Sat, 2015-04-25 12:32

It’s General Election time here in the UK.
Rather than the traditional two-way fight to form a government, this time around we seem to have a reasonably broad range of choice.
In addition to red and blue, we also have purple and – depending on where you live in the country – multiple shades of yellow and green.
The net effect is to leave the political landscape looking not so much like a rainbow as a nasty bruise.

The message coming across from the politicians is that everything that’s wrong in this country is down to foreigners – Eastern Europeans…or English (once again, depending on your location).
Strangely, the people who’ve been running our economy and public services for the last several years tend not to get much of a mention.
Whatever we end up choosing, our ancient electoral system is not set up to cater for so many parties attracting a significant share of support.

The resulting wrangling to cobble together a Coalition Government will be hampered somewhat by our – equally ancient – constitution.

That’s largely because, since Magna Carta, no-one’s bothered to write it down.

In olden times, if you wanted to find out what files were in a directory from inside the database, your options were pretty undocumented as well.
Fortunately, times have changed…

What I’m going to cover here is how to use an External Table pre-process to retrieve a file listing from a directory from inside the database.
Whilst this technique will work on any platform, I’m going to focus on Linux in the examples that follow…

An Overview of the Solution

First of all we’re going to write a shell script to give us a file listing on Linux.
This will include all of the details supplied by an ls -l listing, in a comma-delimited format.
The directory we’re going to get a list of files for is the DATA_PUMP_DIR.
Then we’ll create an External Table to read the output of the script.
Here goes then….

Getting a useable file listing on Linux

The first thing we need here is a shell script for the external table to execute in its pre-processor directive.
The output needs to be in a format that can be easily loaded by the external table.

Let’s have a look then…

/u01/app/oracle/admin/XE/dpdump $ ls -l
total 322320
-rw-r--r-- 1 oracle dba     87816 Aug 22  2014 export.log
-rw-r--r-- 1 oracle dba         0 Apr 16 13:13 file_list_dummy.txt
-rw-r----- 1 oracle dba 327385088 Aug 22  2014 full220814.dmp
-rw-r--r-- 1 oracle dba      2889 Jan  9 15:29 hr_dev_imp.log
-rw-r----- 1 oracle dba    524288 Jan  9 15:16 hr_export.dmp
-rw-r--r-- 1 oracle dba      2171 Jan  9 15:16 hr_export.log
-rw-r----- 1 oracle dba   1560576 Feb 12 11:46 ossca_pre_050_export.dmp
-rw-r--r-- 1 oracle dba      3331 Feb 12 11:46 ossca_pre_050_export.log
-rw-r----- 1 oracle dba    462848 Mar  5 13:44 test_schemas.dmp
-rw-r--r-- 1 oracle dba      1759 Mar  5 13:44 test_schemas_exp.log

There are a few things to note from here.
Firstly, there are up to eight distinct columns in the output. You’ll notice that files over six months old are reported with a date and a year, whilst newer files have a date and a time.

Secondly, the widths of the fields themselves vary in length between files, most notably the file sizes.
We need to tidy up this output a bit if we’re going to be able to load it easily into the External Table.

After some trial and error, it looks like the following will meet our needs :

ls -l --time-style=long-iso | awk 'BEGIN {OFS = ",";} {print $1, $2, $3, $4, $5, $6" "$7, $8}'
total,322320,,,, ,
-rw-r--r--,1,oracle,dba,87816,2014-08-22 13:30,export.log
-rw-r--r--,1,oracle,dba,0,2015-04-16 13:13,file_list_dummy.txt
-rw-r-----,1,oracle,dba,327385088,2014-08-22 13:30,full220814.dmp
-rw-r--r--,1,oracle,dba,2889,2015-01-09 15:29,hr_dev_imp.log
-rw-r-----,1,oracle,dba,524288,2015-01-09 15:16,hr_export.dmp
-rw-r--r--,1,oracle,dba,2171,2015-01-09 15:16,hr_export.log
-rw-r-----,1,oracle,dba,1560576,2015-02-12 11:46,ossca_pre_050_export.dmp
-rw-r--r--,1,oracle,dba,3331,2015-02-12 11:46,ossca_pre_050_export.log
-rw-r-----,1,oracle,dba,462848,2015-03-05 13:44,test_schemas.dmp
-rw-r--r--,1,oracle,dba,1759,2015-03-05 13:44,test_schemas_exp.log
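Incidentally, the stray “total” line could also be filtered out in awk itself, by only printing from the second record onwards. This is just a variant for illustration – the script that follows keeps the original one-liner and leaves the skipping to the external table :

```shell
# NR > 1 skips the "total" summary line that ls -l prints first,
# so every record in the output is an actual file.
ls -l --time-style=long-iso | awk 'BEGIN {OFS = ",";} NR > 1 {print $1, $2, $3, $4, $5, $6" "$7, $8}'
```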
The Shell Script

There are a few things we need to bear in mind for this script.
As it’s going to be called from the external table, we cannot assume that any environment variables have been set (even $PATH).
Therefore, the commands need to be called using their absolute path, rather than just their name.

Additionally, the script will be passed the fully qualified path and filename of the location of our external table as the first argument. This is going to be useful as we’ll need to direct the shell script to the appropriate directory to list.
We can strip the filename from this argument and recover the directory by using dirname…

If you want to find out what the paths for these commands are ( and they can differ between Linux Distros), then you can do the following :

which ls
/bin/ls
which awk
/usr/bin/awk
which dirname
/usr/bin/dirname
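To illustrate the difference, using the dummy file path as an example – dirname strips the final path component, whilst basename keeps only it :

```shell
# dirname returns everything up to the last '/'; basename returns what follows it.
# (In the script itself the commands are called by absolute path, e.g. /usr/bin/dirname.)
loc=/u01/app/oracle/admin/XE/dpdump/list_files_dummy.txt
dirname "$loc"     # -> /u01/app/oracle/admin/XE/dpdump
basename "$loc"    # -> list_files_dummy.txt
```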

The final script looks like this :

#!/bin/sh
#
# Generate a comma separated ls -l for the directory in which the external table's
# location file resides
#
targetDir=`/usr/bin/dirname $1`
/bin/ls -l --time-style=long-iso $targetDir | /usr/bin/awk 'BEGIN {OFS = ",";} {print $1, $2, $3, $4, $5, $6" "$7, $8}'
exit 0

I’ve saved this file as list_files.sh
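Before wiring it into the external table, it’s worth checking that the script behaves the way the access driver will invoke it : passed the location file’s full path as $1, it should emit a CSV listing of that file’s directory. Here’s a self-contained dry run, using temporary stand-in paths rather than the real directories :

```shell
#!/bin/sh
# Dry run of the preprocessor contract in a throwaway directory :
# $1 is the location file's full path; output is the CSV listing of its directory.
work=$(mktemp -d)
cat > "$work/list_files.sh" <<'EOF'
#!/bin/sh
targetDir=`/usr/bin/dirname $1`
/bin/ls -l --time-style=long-iso $targetDir | /usr/bin/awk 'BEGIN {OFS = ",";} {print $1, $2, $3, $4, $5, $6" "$7, $8}'
exit 0
EOF
touch "$work/list_files_dummy.txt" "$work/example.dmp"
sh "$work/list_files.sh" "$work/list_files_dummy.txt"
```

The output should include a comma-delimited record for example.dmp – exactly what the external table will see.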

Now for…

The External Table

The output we’ll be reading starts with the Total number of blocks in the directory, so we’ll want to skip that.
As for the files themselves, each file record will have the following attributes :

  • File Permissions
  • Number of Hard Links
  • File Owner
  • Group of the file owner
  • File size in bytes
  • Date the file was last modified
  • The file name

As we’re using a pre-processor in our external table, the usual rules will apply.

The External Table needs to have a location file specified at runtime, even though it won’t actually be reading the contents of that file in this instance.

So, our table will look something like this :

create table list_files_xt
(
    permissions varchar2(15),
    hard_links number,
    file_owner varchar2(32),
    group_name varchar2(32),
    size_bytes number,
    last_modified date,
    file_name varchar2(255)
)
    organization external
    (
        type oracle_loader
        default directory data_pump_dir
        access parameters
        (
            records delimited by newline
            nologfile
            preprocessor pre_proc_dir : 'list_files.sh'            
            skip 1
            fields terminated by ','            
            ( 
                permissions,
                hard_links,
                file_owner,
                group_name,
                size_bytes,
                last_modified date 'YYYY-MM-DD HH24:MI',
                file_name
            )
        )
        location('list_files_dummy.txt')
    )
/
Deploying the Components

First, we need to put the script in place. To work out where, check the path of the directory object that we’re referencing in the External Table definition ( in this case, PRE_PROC_DIR).
Then we’ll need to create the external table’s dummy file in the directory specified as the Default Directory ( DATA_PUMP_DIR).

So, we need to know the OS location of these directories…

select directory_name, directory_path
from all_directories
where directory_name in ('PRE_PROC_DIR', 'DATA_PUMP_DIR')
/

DIRECTORY_NAME	DIRECTORY_PATH
--------------- ----------------------------------------
PRE_PROC_DIR	/u01/app/oracle/pre_proc
DATA_PUMP_DIR	/u01/app/oracle/admin/XE/dpdump/

Now to make sure that the shell script is in the right place and has the appropriate permissions :

sudo cp list_files.sh /u01/app/oracle/pre_proc/.
sudo chmod o+x /u01/app/oracle/pre_proc/list_files.sh

If we now check the permissions on the file, we can see that all users have execute :

ls -l /u01/app/oracle/pre_proc/list_files.sh
-rw-r--r-x 1 root root 281 Apr 23 13:02 /u01/app/oracle/pre_proc/list_files.sh

Next we need to create the dummy file for the External table. The oracle os user needs to have read and write permissions on this file.
The simplest way to achieve this is to create the file as the oracle os user :

sudo su oracle
touch /u01/app/oracle/admin/XE/dpdump/list_files_dummy.txt

Now we can see that we’ve got an empty file ….

ls -l /u01/app/oracle/admin/XE/dpdump/list_files_dummy.txt
-rw-r--r-- 1 oracle dba 0 Apr 23 13:05 /u01/app/oracle/admin/XE/dpdump/list_files_dummy.txt

Finally, we just need to login to the database and create the table using the script above.
Now let’s see what export files I’ve got :

select *
from list_files_xt
where file_name like '%.dmp'
order by last_modified
/

PERMISSION HARD_LINKS FILE_OWNER GROUP_NAME  SIZE_BYTES LAST_MODIFIED      FILE_NAME
---------- ---------- ---------- ----------- ---------- ------------------ --------------------------------------------------
-rw-r-----          1 oracle     dba          327385088 22-AUG-14          full220814.dmp
-rw-r-----          1 oracle     dba             524288 09-JAN-15          hr_export.dmp
-rw-r-----          1 oracle     dba            1560576 12-FEB-15          ossca_pre_050_export.dmp
-rw-r-----          1 oracle     dba             462848 05-MAR-15          test_schemas.dmp

NOTE – no minorities were persecuted in the writing of this post.


Filed under: Linux, Oracle, SQL Tagged: awk, basename, chmod, external table, external table preprocessor, ls, ls --time-style = long-iso, OFS, touch, which

SQLCL – The New SQL*Plus

Sun, 2015-04-12 10:33

To borrow a well-known saying, One-Day International Cricket is a game played by two sides for 100 overs…and then the Aussies win the World Cup.
Something else that doesn’t seem to change much over time is SQL*Plus. The command line interface to Oracle Databases has been around, in its current guise, since 1985.
Whilst there have been some changes here and there, its basic functionality has remained largely unchanged over the intervening 30 years.
Now, however, it looks like things are about to change as Oracle lavish some attention on the noble CLI.

You may be wondering how this is in any way relevant in the modern world of GUI development.
Well, there are still some things that you need the command line for.
Interactive connection to a database from a server that’s not running a desktop environment would be one.

More common though, are those jobs that need to run unattended. These will include batch jobs managed by a scheduling tool external to the Oracle RDBMS, such as Autosys, or even good old CRON.
Increasingly, it will also include jobs that are initiated as part of Continuous Integration or Release Management testing.

SQL*Plus for the 21st Century is currently going by the name of SQLCL (SQL Command Line). It has also been known as SDSQL (SQLDeveloper SQL) and even SQL*Plus++ (my personal favourite).

Whilst the currently available versions of SQLCL are very much in the Early Adopter stage, there is enough there to show the direction in which things are moving.
Whilst the decision has been taken to ensure that SQLCL is fully backward-compatible with the current SQL*Plus, some of the new features may well have significant implications in the way that the Oracle CLI is used in the future.

What I’m going to cover here is :

  • How SQLCL differs “structurally” from SQL*Plus
  • Improvements in command-line interaction incorporating Linux-like and IDE features, including glorious technicolour!

All of which leaves the ageing geek in me unashamedly excited.

Before I go on, I should take this opportunity to say thanks to Jeff Smith and the team for answering the questions I had about some of the new features.

Download and Installation

The first thing you’ll notice about SDSQL is that, unlike the Oracle Client of which SQL*Plus is a component, it is a single file.
This is a pretty good start as you don’t have to figure out which files you need to download before you get going.

Instead, you simply need to head over to the SQLDeveloper Download Page and download Command Line SDSQL – All Platforms.
You are rewarded with possession of 11MB worth of :

sqlcl-4.1.0.15.067.0446-no-jre.zip

Once you’ve unzipped the tool, go to the bin directory. In my case :

cd /opt/sqlcl/bin

Here you will find the following files :

  • sql.bat – a Windows batch script
  • sql – a bash script
  • sql.exe – a Windows executable

On a Windows client, you can just run sql.exe and be on your way. The bash script provided ultimately executes this command :

java  -Djava.awt.headless=true -Dapple.awt.UIElement=true -jar /opt/sqlcl/sqlcl/lib/oracle.sqldeveloper.sqlcl.jar

In order for SQLCL to run, whether on Windows or Linux, you need Java installed. To be more precise, you need a Java Runtime Environment (JRE) of version 1.7 or higher.

This in itself is not a major issue. However, it’s worth bearing this dependency in mind if you’re working in a large organization with a separate Server Admin team as you’ll probably need to negotiate some process hurdles to get a suitable JRE onto your servers if there isn’t one already in place.

Database Connections

As SQLCL isn’t part of the client, it does not assume that there is a tnsnames.ora hanging around for it to read (although it will happily read one that’s pointed to by the TNS_ADMIN environment variable).

I think it’s fair to say that the “native” connection method is to use the EZConnect syntax which has the format :

host:port/sid

So, connecting to my local XE database interactively can look something like this :

SQLcl: Release 4.1.0 Beta on Fri Apr 10 15:49:47 2015

Copyright (c) 1982, 2015, Oracle.  All rights reserved.


Username? (''?) mike
Password? (**********?) ********
Database? (''?) localhost:1526/XE
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production 

SQL> 

…where my database is on the current machine, the TNS Listener is on port 1526 and the SID is XE.

Of course, having an Oracle Client installed and a TNS entry for my database specified, I can also still do this :

SQLcl: Release 4.1.0 Beta on Fri Apr 10 15:49:47 2015

Copyright (c) 1982, 2015, Oracle.  All rights reserved.


Username? (''?) mike
Password? (**********?) ********
Database? (''?) XE
Connected to:
Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production 

SQL> 

There are other connection methods available. Barry McGillin has an example of connecting via LDAP.

Of course, using the EZConnect syntax all the time, especially when you open another connection from within your session, could involve a fair amount of typing. Fortunately, the developers have thought of this…

NET

The NET command allows you to “save network details and assign it a shortcut command”.
For example, to create an alias for the database running on a Developer Day VirtualBox image :

net alias dday=0.0.0.0:1521/orcl;

NOTE – you need to terminate the string with a semi-colon at the end of the line.

To test it, we first need to enable NET in our SQLCL session (it’s set to OFF by default) …

SQL> set net on
SQL> show net
net: ON

So, whilst connected to XE as mike, I can use the NET alias I’ve just created to connect to the Developer Day database :

SQL> conn hr@dday
Password? (**********?) ************
Connected

SQL> show user
USER is "HR"

SQL> select name from v$database;

NAME    
---------
CDB1     

SQL> 
The Command Line, but not as we know it

When it comes to command-line editing and scrollback, SQL*Plus is, well, a bit basic.
If you’re on Windows, you can scroll back through previous commands. Unfortunately, even this is not a feature on Linux.
There have been efforts to make SQL*Plus a bit more friendly over the years, notably the RLWRAP project.
If you haven’t discovered the joys of that particular software, you are about to have a new experience at the SQL prompt…

Tab Completion

Type the following…

select * from hr.reg

when you hit the TAB key, the table name is automagically completed for you.
OK, it uppercases the table name to REGIONS but hey, if you’re that worried about code formatting…well, we’ll get to that in a bit.

In the meantime, just marvel at the fact that it’s almost like being at the $ prompt.

Multi-Line Console Editing

If you think that’s good…

select * 
from hr.regions
/

I run this and then realise I really should have added a predicate to avoid having to wade through a massive result set ( OK, it’s only 4 rows, but just bear with me here).
Rather than having to fiddle about with the arcane SQL*Plus editing commands, I can simply use the list command to show me what’s currently in the buffer ( i.e. the query I’ve just run) and then hit the up arrow.

I’m now magically transported to the end of the last line of the statement. I can add a new line, simply by hitting RETURN and then add my predicate (NOTE – it’s probably a good idea to hit back arrow, forward arrow if you’re doing this as otherwise SQLCL thinks you want to cycle through the previous commands you’ve issued).

SQL> select *
  2  from hr.regions
  3* where REGION_ID = 1
  4  /

 REGION_ID REGION_NAME             
---------- -------------------------
         1 Europe                   

SQL> 

The uppercase REGION_ID indicates that I got this column name using tab completion.
Barry McGillin has a demonstration of this feature here.

HISTORY

Rather than simply remembering the last SQL or PL/SQL that was executed, SQLCL retains a history of the last 100 commands executed. Note that this is the last 100 commands for any user connecting on the current client.
This includes SQL*Plus commands, such as describe.

You can scroll back and forward through the previous commands using the arrow keys – nothing new for Windows clients but it is for Linux.

Alternatively, you can get a listing by typing :

history

This will give you a numbered listing of the commands in the history.

Incidentally, if you use the built-in SQLCL help command for history, you’ll get this listing :

SQL>help history
history [<index> | FULL | USAGE | HELP ]

SQL>history full
1  select 1 from dual;
2  select 2
>  from dual;
3  select 3 from dual
>  where 1=1;

SQL>history usage
1  (2) select 1 from dual; 
2  (11) select 2 from dual; 
3  (2) select 3 from dual where 1=1; 

SQL>history 3
1  select 3 from dual
2* where 1=1;

However, if you try the help option of the history command, you get something slightly different …

SQL> history help 

history [<index> | FULL | USAGE | TIME | HELP | CLEAR]

SQL> 

The FULL option appears simply to list all of the commands in the history (the same as issuing the HISTORY command on its own).

When we look at some of the other options, it gets a bit interesting :

SQL> history
  1  select name from v$database
  2  select sysdate from dual
SQL> history usage
  1  (1) select name from v$database
  2  (4) select sysdate from dual
SQL> history time
  1  (00.176) select name from v$database
  2  (05.415) select sysdate from dual
SQL> 

The USAGE option keeps a count of the number of times a particular command has been executed.
The TIME option shows the total execution time for the statement (thanks Jeff).

In some circumstances, this could prove handy, especially when you realise that you need to set timing on…just after you’ve kicked off your script.

If you want to cleardown the history…

SQL> history clear
History Cleared

One point to note – as with its Linux counterpart, HISTORY retains any command executed using that client, not simply commands issued when connected as a particular user. This is something you may need to be aware of on servers where the Oracle “client” is shared by multiple users.

CD

You know how much fun it is when you have a master release script, either for deployment to an environment or, just as common these days, to refresh a Continuous Integration environment.

Such a script may well look something like this :

prompt Deploying HR_DEV...

prompt Sequences
prompt ---------

@sequences/locations_seq.sql

prompt Tables
prompt ------

@tables/countries_tab.sql
@tables/locations_tab.sql

prompt Deployment complete

With SQLCL, the script becomes somewhat simpler :

prompt Deploying HR_DEV...

prompt Sequences
prompt ---------

cd sequences
@locations_seq.sql

prompt Tables
prompt ------

cd ../tables
@countries_tab.sql
@locations_tab.sql

prompt Deployment complete

Yes, you can now change directory from SQLCL, in the same way that you would at the OS level. As well as being rather convenient, this should serve to save a fair bit of typing, especially for scripts such as this one.

If all that isn’t quite enough Linux-like goodness for you then you’d probably be quite interested in…

ALIAS

As you may guess from the name, ALIAS allows you to save and invoke SQL or PL/SQL blocks under a pre-defined name.
There are already some aliases defined. You can find them by simply typing :

SQL> alias
locks
tables
tables2
SQL> 

If you want to see the definition of the tables alias….

SQL> alias list tables
tables
------
 select table_name "TABLES"from user_tables
SQL> 

You can run this simply by typing the alias name at the prompt. For example, if you connect as the HR user …

SQL> tables
Command=tables

TABLES                       
------------------------------
REGIONS                       
LOCATIONS                     
DEPARTMENTS                   
JOBS                          
EMPLOYEES                     
JOB_HISTORY                   
COUNTRY_EXT                   
COUNTRIES                     

 8 rows selected 

SQL> 

Of course, you can add your own alias. If you’re an inveterate clock-watcher, for example….

SQL> alias current_time=select to_char(sysdate, 'HH24:MI') from dual;

The alias is now saved so, whenever I want to know if it’s nearly hometime….

SQL> current_time
Command=current_time

TO_CHAR(SYSDATE,'HH24:MI')
--------------------------
19:53                     

SQL> 

According to the help, you can define aliases for PL/SQL blocks, and even reference bind variables.
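
Based purely on that help text (so treat the exact syntax here as an assumption on my part rather than a tested example), a PL/SQL alias using a bind variable might look something like this :

```sql
-- Hypothetical alias wrapping a PL/SQL block with a bind variable
alias pay_rise=begin
    update hr.employees
    set salary = salary * 1.1
    where employee_id = :emp_id;
end;
```

…with a value for :emp_id being supplied when the alias is invoked.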

Choosing your output format with SQLFORMAT

The new SQLFORMAT parameter deserves particular attention. Before I get into some of its more obvious benefits, take a look at this :

set sqlformat ansiconsole
with ts_details as
(
    select rpad(tablespace_name,30, ' ')||' '||lpad(floor(used_percent), 3, ' ') as ts_line,
        used_percent
    from dba_tablespace_usage_metrics
)
select 
    case 
        when used_percent > 70 then '@|bg_red '||ts_line||'|@'
        when used_percent < 1 then '@|bg_green '||ts_line||'|@'
        else '@|bg_yellow '||ts_line||'|@' 
    end as ts_usage_percentage
from ts_details
/

Now at first glance this script looks rather odd. Have I had an attack of fat-fingers ? Well, let’s run it and see…

[Screenshot: contrived example to show traffic-lighting]

Yep, SQLCL now does colour. OK, so I set the threshold values to make sure I got records to show in all three colours, but you get the idea.

There’s a rather more comprehensive script by Kris Rice here, which details many colour options.

One of the neat features in SQLDeveloper has always been the ability to generate the output from a query in a pre-defined format.
This ability is still there in SQLCL. To use a fashionable example…

SQL> select /*json*/ * from hr.regions;

{"items":[
{"region_id":1,"region_name":"Europe"},{"region_id":2,"region_name":"Americas"},{"region_id":3,"region_name":"Asia"},{"region_id":4,"region_name":"Middle East and Africa"},{}]}

SQL> 

SQLFORMAT allows you to specify the output format separately from the code. For example :

SQL> set sqlformat csv
SQL> select * from hr.regions;

"REGION_ID","REGION_NAME"
1,"Europe"
2,"Americas"
3,"Asia"
4,"Middle East and Africa"

SQL> 

The fact that the output format is separated from the script means that you can use the same script to generate multiple formats.
Ultimately, this means less code, not to mention less complex (or, at least, fiddly) code.

For example, instead of :

select employee_id||','||first_name||','||last_name
    ||'.'||email||','||phone_number
from hr.employees
/

… you can now have…

set sqlformat csv
select employee_id, first_name, last_name,
    email, phone_number
from hr.employees
/

…and if you need to create an additional feed file in a different format, you simply need to type…

set sqlformat xml
...

…and execute exactly the same code again.

There are several SQLFORMAT options. The ones that I’ve tested are :

  • default
  • ansiconsole
  • csv
  • insert – lists results as insert statements
  • loader – pipe-delimited
  • delimited – same as csv
  • xml
  • html
  • fixed – fixed width
  • text
  • json
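
To illustrate how these format options combine with plain old SPOOL, here’s a sketch of a feed script (the file names are just examples) that produces the same extract in two formats :

```sql
-- One query, two feed files, no string concatenation in sight
set sqlformat csv
spool regions.csv
select * from hr.regions;
spool off

set sqlformat xml
spool regions.xml
select * from hr.regions;
spool off
```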
INFORMATION = KNOWLEDGE = POWER = ENERGY = MATTER = MASS

This heading is a slight misquote of Terry Pratchett’s L-Space equation.
The theory of L-space is that a large collection of accumulated knowledge, such as a library, can act like a black hole.
If you want to have a go at bending the fabric of space and time armed with nothing more than a command line…

INFORMATION

The humble DESCRIBE command in SQL*Plus has been around for a while. I’d guess it’s probably as old as the tool itself.
In that time, things have changed a bit.
It’s now possible to have comments on columns. There’s also this new-fangled PL/SQL, together with its associated packages.
If, like me, you’ve hankered after a describe that’s just a bit more able to handle these more modern developments, well, you’re in luck.

First off, let’s issue a good old DESCRIBE :

SQL> desc employees
Try the new information command: info employees


Name           Null     Type         
-------------- -------- ------------ 
EMPLOYEE_ID    NOT NULL NUMBER(6)    
FIRST_NAME              VARCHAR2(20) 
LAST_NAME      NOT NULL VARCHAR2(25) 
EMAIL          NOT NULL VARCHAR2(25) 
PHONE_NUMBER            VARCHAR2(20) 
HIRE_DATE      NOT NULL DATE         
JOB_ID         NOT NULL VARCHAR2(10) 
SALARY                  NUMBER(8,2)  
COMMISSION_PCT          NUMBER(2,2)  
MANAGER_ID              NUMBER(6)    
DEPARTMENT_ID           NUMBER(4)    
SQL> 

Interesting message, let’s give it a go (note, it’s probably a good idea to set the SQLFORMAT to ansiconsole at this point) :

SQL> set sqlformat ansiconsole
SQL> info employees
Columns 
NAME             DATA TYPE           NULL  DEFAULT    COMMENTS
*EMPLOYEE_ID     NUMBER(6,0)         No                   Primary key of employees table.
 FIRST_NAME      VARCHAR2(20 BYTE)   Yes                  First name of the employee. A not null column.
 LAST_NAME       VARCHAR2(25 BYTE)   No                   Last name of the employee. A not null column.
 EMAIL           VARCHAR2(25 BYTE)   No                   Email id of the employee
 PHONE_NUMBER    VARCHAR2(20 BYTE)   Yes                  Phone number of the employee; includes country code and area code
 HIRE_DATE       DATE                No                   Date when the employee started on this job. A not null column.
 JOB_ID          VARCHAR2(10 BYTE)   No                   Current job of the employee; foreign key to job_id column of the
                                                                    jobs table. A not null column.
 SALARY          NUMBER(8,2)         Yes                  Monthly salary of the employee. Must be greater
                                                                    than zero (enforced by constraint emp_salary_min)
 COMMISSION_PCT  NUMBER(2,2)         Yes                  Commission percentage of the employee; Only employees in sales
                                                                    department elgible for commission percentage
 MANAGER_ID      NUMBER(6,0)         Yes                  Manager id of the employee; has same domain as manager_id in
                                                                    departments table. Foreign key to employee_id column of employees table.
                                                                    (useful for reflexive joins and CONNECT BY query)
 DEPARTMENT_ID   NUMBER(4,0)         Yes                  Department id where employee works; foreign key to department_id
                                                                    column of the departments table

Indexes
INDEX_NAME            UNIQUENESS  STATUS  FUNCIDX_STATUS  COLUMNS                COLUMN_EXPRESSION  
HR.EMP_JOB_IX         NONUNIQUE   VALID                   JOB_ID                                    
HR.EMP_NAME_IX        NONUNIQUE   VALID                   LAST_NAME, FIRST_NAME                     
HR.EMP_EMAIL_UK       UNIQUE      VALID                   EMAIL                                     
HR.EMP_EMP_ID_PK      UNIQUE      VALID                   EMPLOYEE_ID                               
HR.EMP_MANAGER_IX     NONUNIQUE   VALID                   MANAGER_ID                                
HR.EMP_DEPARTMENT_IX  NONUNIQUE   VALID                   DEPARTMENT_ID                             


References
TABLE_NAME   CONSTRAINT_NAME  DELETE_RULE  STATUS   DEFERRABLE      VALIDATED  GENERATED  
DEPARTMENTS  DEPT_MGR_FK      NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
EMPLOYEES    EMP_MANAGER_FK   NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
JOB_HISTORY  JHIST_EMP_FK     NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  

SQL> 

INFORMATION can give you even more in-depth information via its INFO+ variant :

SQL> info+ employees
Columns 
NAME             DATA TYPE           NULL  DEFAULT    LOW_VALUE             HIGH_VALUE            NUM_DISTINCT   HISTOGRAM  
*EMPLOYEE_ID     NUMBER(6,0)         No                   100                   206                   107            NONE       
 FIRST_NAME      VARCHAR2(20 BYTE)   Yes                  Adam                  Winston               91             NONE       
 LAST_NAME       VARCHAR2(25 BYTE)   No                   Abel                  Zlotkey               102            NONE       
 EMAIL           VARCHAR2(25 BYTE)   No                   ABANDA                WTAYLOR               107            NONE       
 PHONE_NUMBER    VARCHAR2(20 BYTE)   Yes                  011.44.1343.329268    650.509.4876          107            NONE       
 HIRE_DATE       DATE                No                   2001.01.13.00.00.00   2008.04.21.00.00.00   98             NONE       
 JOB_ID          VARCHAR2(10 BYTE)   No                   AC_ACCOUNT            ST_MAN                19             FREQUENCY  
 SALARY          NUMBER(8,2)         Yes                  2100                  24000                 58             NONE       
 COMMISSION_PCT  NUMBER(2,2)         Yes                  .1                    .4                    7              NONE       
 MANAGER_ID      NUMBER(6,0)         Yes                  100                   205                   18             FREQUENCY  
 DEPARTMENT_ID   NUMBER(4,0)         Yes                  10                    110                   11             FREQUENCY  

Indexes
INDEX_NAME            UNIQUENESS  STATUS  FUNCIDX_STATUS  COLUMNS                COLUMN_EXPRESSION  
HR.EMP_JOB_IX         NONUNIQUE   VALID                   JOB_ID                                    
HR.EMP_NAME_IX        NONUNIQUE   VALID                   LAST_NAME, FIRST_NAME                     
HR.EMP_EMAIL_UK       UNIQUE      VALID                   EMAIL                                     
HR.EMP_EMP_ID_PK      UNIQUE      VALID                   EMPLOYEE_ID                               
HR.EMP_MANAGER_IX     NONUNIQUE   VALID                   MANAGER_ID                                
HR.EMP_DEPARTMENT_IX  NONUNIQUE   VALID                   DEPARTMENT_ID                             


References
TABLE_NAME   CONSTRAINT_NAME  DELETE_RULE  STATUS   DEFERRABLE      VALIDATED  GENERATED  
DEPARTMENTS  DEPT_MGR_FK      NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
EMPLOYEES    EMP_MANAGER_FK   NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  
JOB_HISTORY  JHIST_EMP_FK     NO ACTION    ENABLED  NOT DEFERRABLE  VALIDATED  USER NAME  

SQL> 

Whilst all of this additional functionality is very welcome, the best bit, for me at least, is this…

SQL> info dbms_utility.get_time
Package

/* Package SYS.DBMS_UTILITY */
/*  FUNCTION  SYS.DBMS_UTILITY.GET_TIME  */
    /*   RETURN NUMBER   */
     v_ret := SYS.DBMS_UTILITY.GET_TIME( );


SQL> info dbms_metadata.get_ddl
Package

/* Package SYS.DBMS_METADATA */
/*  FUNCTION  SYS.DBMS_METADATA.GET_DDL  */
    /*   RETURN CLOB   */
     v_ret := SYS.DBMS_METADATA.GET_DDL(   OBJECT_TYPE   =>  p_IN_param0  /*   VARCHAR2   */,
                                           NAME          =>  p_IN_param1  /*   VARCHAR2   */,
                                           SCHEMA        =>  p_IN_param2  /*   VARCHAR2   */,
                                           VERSION       =>  p_IN_param3  /*   VARCHAR2   */,
                                           MODEL         =>  p_IN_param4  /*   VARCHAR2   */,
                                           TRANSFORM     =>  p_IN_param5  /*   VARCHAR2   */);


SQL> 

Yes, no longer will I have to scroll through the 86 members of DBMS_METADATA to find out the signature for GET_DDL.
I can now simply describe (OK, INFO) the package member directly. Just like going through a wormhole directly to the information I need (to stretch the L-space metaphor).

DDL

As the name suggests, DDL outputs the DDL for a given object. For example…

SQL> ddl regions

  CREATE TABLE "HR"."REGIONS" 
   (	"REGION_ID" NUMBER CONSTRAINT "REGION_ID_NN" NOT NULL ENABLE, 
	"REGION_NAME" VARCHAR2(25), 
	 CONSTRAINT "REG_ID_PK" PRIMARY KEY ("REGION_ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS 
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"  ENABLE
   ) SEGMENT CREATION IMMEDIATE 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"

You also have the option of writing the output to a file. This command …

ddl regions regions_tab.sql

…writes the output to a file called regions_tab.sql, as well as the command line.

Whilst we’re on the subject of Time (well, saving some, at least)…

CTAS

CTAS – Create Table as Select – without all of that typing…

CTAS regions new_regions

… generates DDL to create a new table called new_regions…

  CREATE TABLE "HR"."NEW_REGIONS" 
   (	"REGION_ID",
	"REGION_NAME",
	 CONSTRAINT "REG_ID_PK" PRIMARY KEY ("REGION_ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS 
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS"  ENABLE
   ) SEGMENT CREATION IMMEDIATE 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "USERS" 

as 
select * from REGIONS

As CTAS uses DBMS_METADATA, you can amend the output using that package’s SET_TRANSFORM_PARAM procedure. For example :

SQL> exec dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'STORAGE', false)
anonymous block completed

SQL> ctas regions new_regions

  CREATE TABLE "HR"."NEW_REGIONS" 
   (	"REGION_ID",
	"REGION_NAME",
	 CONSTRAINT "REG_ID_PK" PRIMARY KEY ("REGION_ID")
  USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS 
  TABLESPACE "USERS"  ENABLE
   ) SEGMENT CREATION IMMEDIATE 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
  TABLESPACE "USERS" 
as 
select * from REGIONS
SQL> 

UPDATE – Jeff has posted a much better example of this feature here.

Remember, CTAS simply generates the DDL command to create the table and writes it to the buffer (and a file, if specified). It does not actually execute the command.
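
Assuming the buffer then behaves as it would for any other statement (a sketch rather than a verified session on my part), you should be able to review and execute the generated DDL in the usual way :

```sql
SQL> ctas regions new_regions
SQL> list
SQL> /
```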

LOAD

The LOAD command offers an alternative way of populating a table from a CSV file.

Let’s try using a topical example…

create table icc_wc_winners
(
    year number(4),
    country varchar2(30)
)
/

The csv file is in a sub-directory called cricket :

YEAR,COUNTRY
1975,WEST INDIES
1979,WEST INDIES
1983,INDIA
1987,AUSTRALIA
1992,PAKISTAN
1996,SRI LANKA
1999,AUSTRALIA
2003,AUSTRALIA
2007,AUSTRALIA
2011,INDIA
2015,AUSTRALIA

At present, LOAD has a bit of a problem recognising that you’ve changed directory using the CD command. However, the developers have fixed this and it should be available in a future release.
In the meantime though, we can get this to work by specifying the relative path for the csv file :

SQL> load icc_wc_winners cricket/wc_winners.csv
--Number of rows processed: 11
--Number of rows in error: 0
0 - SUCCESS: Load processed without errors
SQL>

NOTE – you need to issue a commit for these records to be saved permanently.

FORMAT

Being a rather particular breed, developers tend to like their code formatted in a certain way. The trouble is, if you ask ten developers what that way should be, you’re likely to get ten different answers.
It is for this reason that Coding Standards documents tend to elicit an inordinate amount of controversy.
As I’ve noted previously, one possible solution to this is to let developers format their code in whatever way they see fit and then just run the code through a formatting tool that converts it to whatever the standard is.
All the main IDEs have their own formatting solutions. However, you can now do this at the command line.

For example, let’s try this query :

select dept.department_name, emp.first_name||' '||emp.last_name as empname
from employees emp
inner join departments dept
  on emp.department_id = dept.department_id
order by 1,2  
/

Using the new FORMAT command, we can now apply the default SQLDeveloper formatting…

SQL> format buffer
  1  SELECT
  2    dept.department_name,
  3    emp.first_name
  4    ||' '
  5    ||emp.last_name AS empname
  6  FROM
  7    employees emp
  8  INNER JOIN departments dept
  9  ON
 10    emp.department_id = dept.department_id
 11  ORDER BY
 12*   1,2
SQL> 

Better still, we can even take the source file for the query, run it through the formatter and save the output to a new file :

SQL> format file hr.sql hr_yuk.sql
SQL> 

The resulting file looks like this :

SELECT
  dept.department_name,
  emp.first_name
  ||' '
  ||emp.last_name AS empname
FROM
  employees emp
INNER JOIN departments dept
ON
  emp.department_id = dept.department_id
ORDER BY
  1,2 /

As you can probably tell by the output filename I’ve chosen, I’m not particularly keen on the default formatting.
I’ll admit, this is largely subjective on my part. However, I would like the code to be formatted according to my preferences rather than the default.
Once again, the developers have a solution for this which should be available in the near future.
With this functionality in place, you could potentially leave the contentious business of formatting code to the standard until just before deployment. Running each program through a formatter set up to meet your coding standards may be an appealing option.

BRIDGE

One of the major advantages IDEs have over the command line is the ability to compare two schemas in different databases.
Up until now, this has not been possible from SQL*Plus without the use of a database link.
The BRIDGE command may well change all that.

At present, its functionality is restricted to creating a table in your current connection based on the results of a query run against a second connection that you specify. For example, to create a table in my current schema with the details of the tables in a remote hr schema….

SQL> bridge hr_tabs as "jdbc:oracle:thin:hr/the_hr_password@0.0.0.0:1521/orcl"(select table_name from user_tables);
Table hr_tabs : insert succeeded 
SQL> select * from hr_tabs;

TABLE_NAME                                                                     
--------------------------------------------------------------------------------
JOB_HISTORY                                                                     
EMPLOYEES                                                                       
JOBS                                                                            
DEPARTMENTS                                                                     
LOCATIONS                                                                       
REGIONS                                                                         
COUNTRIES                                                                       

 7 rows selected 

SQL> 

…where the_hr_password is the password for the HR user on the target database.

One point to note here is that bridge executes the remote connection and the Create Table independently of each other.
Therefore, it’s quite possible for the connection to fail and the table to be created anyway.

NOTE – it should be possible to get around this behaviour by specifying WHENEVER SQLERROR EXIT. I’ve not managed to get this to work as I’d expect, but this could be because I’m doing something silly.

Another thing to be aware of is that the BRIDGE command you’ve issued will be retained in the HISTORY, complete with uid/password.
This is currently on the developers’ to-do list.

APEX

The last of the new commands to cover is APEX.

If you simply issue this command without any arguments, it will list all of the APEX Applications where the user you are currently connected as is defined as the owning schema.
The exception is when you’re connected as SYS as SYSDBA, in which case you’ll get a full list of all APEX applications.

So, connected as user OSSCA :

SQL> apex
WORKSPACE  APPLICATION_ID  APPLICATION_NAME             BUILD_STATUS     LAST_UPDATED_ON        
OSSCA2     100             Sample Database Application  Run and Develop  26-FEB-15              
OSSCA2     101             OSSCA UI                     Run Only         26-FEB-15              

APEX also provides the facility to export an application. By spooling the output, you can create the appropriate file :

spool my_apex_export.sql
apex export 100
spool off

…creates the APEX export file my_apex_export.sql, as well as outputting to the screen.

Conclusion

Whilst it’s still very early days for SQLCL, its potential impact is significant. I make no apologies for being excited about it.

Where to go for more SQLCL stuff

There’s a fair amount of information out there about SQLCL.

A good starting point would be Jeff’s presentation, which is available here.

Both Kris Rice and Barry McGillin are members of the development team and post regularly on the subject of SQLCL.

If you find any glitches, or have any questions, then the place to go is the SQLDeveloper OTN Forum.

Finally, as it’s around Easter Time, try entering the following at the SQLCL command line :

show sqldev2

Filed under: Oracle, SQL Tagged: alias, APEX, bridge, cd, colours in sqlcl, ctas, dbms_metadata.set_transform_param, ddl, format, history, information, load, multi-line editing in sqlcl, net, sdsql, sqlcl, sqlformat, tab completion in sqlcl

SQLDeveloper XML Extensions and auto-navigation

Sun, 2015-03-22 09:42

It’s official, England are now the second best cricket team in the British Isles !
After all, Scotland were dispatched with ease and as for Wales…they didn’t even make it to the Cricket World Cup.
OK, technically they did because they’re part of England for the purposes of cricket…although you’d be hard pressed to get them to admit it.
Ireland are, of course, some way in front, having actually managed to win the odd game against Test Playing Nations.
Whilst it takes quite some effort to find silver lining in the cloud of English Cricket’s latest debacle, the same cannot be said if SQLDeveloper is your Oracle Database IDE of choice …

Why I use SQLDeveloper

Now, I’m well aware that each developer has their own favourite IDE. TOAD and PL/SQL Developer are fine tools in their own right, each with their own strengths. Which of these tools you swear by, or swear at, is always likely to be based on some fairly subjective criteria.

One of the main reasons I have for using SQLDeveloper is that it’s so easy to extend by means of a judicious application of XML, combined with a bit of SQL and/or PL/SQL.

Fun with Foreign Keys

As with all of the mainstream IDEs, SQLDeveloper displays a table’s constraints as standard. However, what it doesn’t show is which tables have Foreign Keys to the table you’re looking at.

If, for example, you want to find out which tables have a Foreign Key to HR.COUNTRIES, you need to run a query like this :

select owner, table_name, constraint_name
from all_constraints
where constraint_type = 'R'
and (r_owner, r_constraint_name) in 
(
    select owner, constraint_name
    from all_constraints
    where constraint_type in ('U', 'P')
    and owner = 'HR'
    and table_name = 'COUNTRIES'
)
order by owner, table_name, constraint_name
/

OWNER                          TABLE_NAME                     CONSTRAINT_NAME
------------------------------ ------------------------------ ------------------------------
HR                             LOCATIONS                      LOC_C_ID_FK

SQL> 

Wouldn’t it be handy if you could somehow add a tab onto the table view in SQLDeveloper and have this information pop up for the table you’re looking at ?
Well, funny you should say that….

<items>
	<item type="editor" node="TableNode" vertical="true">
		<title><![CDATA[Child Tables]]></title>
		<query>
			<sql>
				<![CDATA[
					select owner, table_name, constraint_name
					from all_constraints 
					where constraint_type = 'R' 
					and (r_owner, r_constraint_name) in
					(
						select owner, constraint_name 
						from all_constraints 
						where constraint_type in ('U', 'P') 
						and owner = :OBJECT_OWNER 
						and table_name = :OBJECT_NAME 
					) 
					order by owner, table_name, constraint_name 
				]]>
			</sql>
		</query>
	</item>
</items>

It may not be immediately obvious how this xml code is going to help us.
The answer is that it’s the SQLDeveloper extension that we’ve just written. Yep, just that one xml file.

It’s probably worth taking a closer look at the structure of this file in terms of the XML…

<items>
	<item type="editor" node="TableNode" vertical="true">
		<title><![CDATA[Enter the Tab Name Here]]></title>
		<query>
			<sql>
				<![CDATA[Enter your query here]]>
			</sql>
		</query>
	</item>
</items>

If you want to add a node to a different object type, it seems that you just need to change the node attribute of the item tag as appropriate.
For example, specifying “Viewnode” should enable you to add a Tab to your Views.
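
As a sketch (which I’ve not tested, so the node name here is an assumption based on the pattern above), a tab listing a view’s columns might look like this :

```xml
<items>
	<item type="editor" node="Viewnode" vertical="true">
		<title><![CDATA[View Columns]]></title>
		<query>
			<sql>
				<![CDATA[
					select column_name, data_type
					from all_tab_columns
					where owner = :OBJECT_OWNER
					and table_name = :OBJECT_NAME
					order by column_id
				]]>
			</sql>
		</query>
	</item>
</items>
```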

Anyway, to add this to SQLDeveloper, we just need to do a bit of pointing and clicking…

To start with, save the above code into an xml file. I’ve called mine deps_fk.xml.

In SQLDeveloper, go to the Tools menu and select Preferences

In the tree on the left hand side expand the Database node. Then click on User Defined Extensions.

[Screenshot: sqld_add_ext1]

Now click the Add Row button at the bottom of the Right Hand Pane and you should see a new Row appear under the Type and Location headings like this :

[Screenshot: sqld_add_ext2]

If you now click in the row you’ve just created under the Type heading, you should get a drop-down list of values.
Select Editor from the list :

[Screenshot: sqld_add_ext3]

In the Location field, you just need to tell SQLDeveloper where your xml file is :

[Screenshot: sqld_add_ext4]

Finally, hit the OK button at the bottom of the window to save your changes and then re-start SQLDeveloper.

Now, if we open the HR.COUNTRIES table, we can see that there’s now an additional tab called Child Tables.
When we go to this tab, we are rewarded with :

[Screenshot: sqld_tab1]

This stuff has been done before. As well as this post from Tony Andrews, I also posted something similar several years ago.
I must be getting old, I’m starting to repeat myself !

Whilst all this is quite handy, wouldn’t it be good to be able to link directly to the Child table ?

Linking to other objects

Jeff Smith’s post here might be just what we’re looking for.

Using Jeff’s linking technique, we can improve our extension a bit…

<items>
	<item type="editor" node="TableNode" vertical="true">
		<title><![CDATA[Child Tables]]></title>
		<query>
			<sql>
				<![CDATA[
					select owner,  
						'SQLDEV:LINK:'
						    ||owner||':TABLE:'||table_name
						    ||':oracle.dbtools.raptor.controls.grid.DefaultDrillLink' table_name,
						constraint_name
					from all_constraints 
					where constraint_type = 'R' 
					and (r_owner, r_constraint_name) in
					(
						select owner, constraint_name 
						from all_constraints 
						where constraint_type in ('U', 'P') 
						and owner = :OBJECT_OWNER 
						and table_name = :OBJECT_NAME 
					) 
					order by owner, table_name, constraint_name 
				]]>
			</sql>
		</query>
	</item>
</items>

Now when I run this, the table_name appears in blue.

[Screenshot : sqld_ext_final1]

By clicking it, I can navigate directly to the child table…

[Screenshot : sqld_ext_final2]

In this example, the string generated for the table_name column is :

SQLDEV:LINK:HR:TABLE:LOCATIONS:oracle.dbtools.raptor.controls.grid.DefaultDrillLink

You can link to all sorts of things using this technique. In fact, it would seem that you can link to any object that has a node in the SQLDeveloper Navigator Tree. If you want to play around, just select the string from dual in SQLDeveloper (using F9 to execute).

For example :

select 'SQLDEV:LINK:HR:TRIGGER:UPDATE_JOB_HISTORY:oracle.dbtools.raptor.controls.grid.DefaultDrillLink'
from dual;

…returns a link to the UPDATE_JOB_HISTORY trigger in the results grid.
Note that the link syntax is case sensitive in that the owner, object type and object name all need to be in uppercase for this to work.

Another point worth noting, specifying a type of PACKAGE takes you to the package header.
If you want to link to the body, then specify PACKAGE BODY.
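To illustrate, a link to a package body would look like this. MY_PKG is a hypothetical package name here - substitute one that exists in your schema :

select 'SQLDEV:LINK:HR:PACKAGE BODY:MY_PKG:oracle.dbtools.raptor.controls.grid.DefaultDrillLink'
from dual;

Note that, as with the other examples, the type ( PACKAGE BODY ) needs to be in uppercase.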

Other SQLDeveloper Extension Types

There’s a whole bunch of stuff you can do to customise SQLDeveloper using nothing more than a bit of boilerplate xml.
In addition to Jeff’s blog, there is an Oracle Wiki which contains, among other things, details of the other available extension types.
It’s definitely worth a look, especially if, like me, you need something to take your mind off the cricket.


Filed under: SQLDeveloper Tagged: adding tabs to sqldeveloper, all_constraints, finding child constraints, linking to tabs in sqldeveloper, oracle.dbtools.raptor.controls.grid.DefaultDrillLink, sqldeveloper xml extensions