Skip navigation.

Feed aggregator

EMC World 2015 - Day 1 at Momentum

Yann Neuhaus - Mon, 2015-05-04 08:07

This was the first day of my first EMC World conference, and especially of Momentum, which covers the Enterprise Content Division (ECD) products, solutions, strategies and so on. The start was great: being in Las Vegas, you have the feeling you are on another planet, and I had the same feeling during the General Session and the ECD Keynote; each time, good explanations coupled with good shows.

The information I got was interesting, and some questions came to my mind, questions that I hope will be answered in the next few days.


Before attending the General Session, I went to another session about EMC InfoArchive. Today I work mainly with the Documentum Content Server and products around it such as xPlore, ADTS, D2 and so on.

To be prepared for future customer requests and challenges, I wanted to see what is behind InfoArchive. Here are some points:

- One main goal of using InfoArchive is to reduce storage costs while keeping the assets.

- Once legacy applications are shut down, you can archive their data into InfoArchive. You can also use it to archive data from active applications, where you can build rules to define which data will be moved to InfoArchive. And this can be done for flat and complex records as well as, of course, for document records.

- When the data is saved into InfoArchive, you can use XQuery and XForms to retrieve the data and display it the way the user wants to see it.

So much for the general overview. From a technical point of view, here is some information:

- The Archive Service is built using a Data Service (xDB data server) and/or a Content Server. In case you only have to archive metadata, the xDB service is sufficient.

- The storage to be used is obviously EMC storage, but other vendors' storage can also be used, meaning this solution can be implemented in more types of infrastructure.

- To the question of what is archived, the answer is a SIP (Submission Information Package). You have a SIP descriptor and SIP data (metadata and/or content).

- LWSO (lightweight SysObject) objects are stored to use less storage.

- The search is done first against the AIP (Archive Information Package) and, once the object is found, against the AIU (Archive Information Unit). There is no full-text search available on the InfoArchive layer; the reason is that an archive system generally does not use it.

- RPS (Retention Policy Services) can be used to manage retention.

Open questions

So much for the "facts"; now there are some other open points which could be raised if InfoArchive is used. You can save your data in standard XML formats, but you can also define how the data is saved and how you want to search it. In that case, who will manage this: the Records & Archive team, or do you first need a business analyst? Can the defined model easily be changed for already archived information? There are technical questions too, but I think the organization first has to be defined to have a successful implementation of InfoArchive.

Again, some questions come to my mind. And again, let's see if I can get some answers in ... the next days.

Get the main Administration Information from SQL Server with PowerShell

Yann Neuhaus - Mon, 2015-05-04 07:48

In my previous blog, Automate SQL Server Administration with PowerShell – How to retrieve SQL Server services?, I presented the first step of SQL Server administration through an automated process.

This blog is a follow-up to the previous one, and it will focus on retrieving information about a SQL Server instance with PowerShell.


Disclaimer: I am not a developer but a SQL Server DBA. If you find errors or ways to improve, I will be glad to read your comments!


List all SQL Server instances

To be able to proceed for all the instances, you can easily get all your instance names with this function:
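The original code listing did not survive the feed; a minimal sketch of such a function, run locally on the host, could look like this (the function name `Get-SqlInstanceNames` is my own):

```powershell
# Hypothetical helper: read the installed instance names from the registry
# (the InstalledInstances value exists on any host with SQL Server installed)
function Get-SqlInstanceNames
{
    $key = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server"
    (Get-ItemProperty -Path $key -Name InstalledInstances).InstalledInstances
}

Get-SqlInstanceNames    # e.g. MSSQLSERVER, INST01, ...
```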



Retrieve SQL Server instance information

In my example, I execute my script on the machine hosting my SQL Server instance.

I use SMO objects to access the instance information, but you need the instance full name, as follows:


I only give the instance name as a parameter because I execute my script on a local server; otherwise, I would also need to give the server name as a parameter.


First I initialize my SMO object of my instance like this:
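The code listing is missing here; a sketch of this initialization, with `MYINSTANCE` as a placeholder instance name, might look like this:

```powershell
# Load the SMO assembly (loading style of the SQL Server 2012/2014 era)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null

# The full name is "<server>\<instance>" for a named instance,
# or just "<server>" for a default instance
$instanceFullName = "$env:COMPUTERNAME\MYINSTANCE"

$server = New-Object Microsoft.SqlServer.Management.Smo.Server $instanceFullName
```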



This SMO object contains the main SQL Server instance information. To list all the properties and methods of the object, proceed as follows:
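A sketch of such a listing (the instance full name is a placeholder) could be:

```powershell
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "$env:COMPUTERNAME\MYINSTANCE"

# List all properties, then all methods, of the Server SMO object
$server | Get-Member -MemberType Property
$server | Get-Member -MemberType Method
```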



To list general information about the instance, you can proceed like this:


To list the directory paths related to the instance, here is an example:


To list important instance configuration, here is an example:
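The three listings above were lost in the feed; a sketch using `Select-Object` on the SMO object could look like the following (the property picks are my assumptions about what the original screenshots showed, and the instance name is a placeholder):

```powershell
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "$env:COMPUTERNAME\MYINSTANCE"

# General information about the instance
$server | Select-Object NetName, InstanceName, VersionString, Edition, Collation

# Directory paths related to the instance
$server | Select-Object RootDirectory, BackupDirectory, DefaultFile, DefaultLog, ErrorLogPath

# A few important configuration values (run values)
$server.Configuration.MaxServerMemory.RunValue
$server.Configuration.MaxDegreeOfParallelism.RunValue
$server.Configuration.RemoteAccess.RunValue
```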



By formatting the information you retrieve from the instance SMO object, you can generate reports, audit your environment, and more!

The following capture is an existing dashboard from our Database Management Kit (DMK).



Next steps

The SMO object for the SQL Server instance has a limited number of properties and methods. Sometimes you need information that is not present in the object. In this case, you can use the "sqlcmd" command and retrieve your information with T-SQL.

Here is the way to proceed:
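The listing is missing here; a minimal sketch of shelling out to sqlcmd from PowerShell (instance name is a placeholder, `-E` uses Windows authentication) would be:

```powershell
$instanceFullName = "$env:COMPUTERNAME\MYINSTANCE"
$query = "SELECT SERVERPROPERTY('ProductLevel') AS ProductLevel;"

# Run the T-SQL query against the instance and print the result
sqlcmd -S $instanceFullName -E -Q $query
```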


To retrieve any SQL Server instance information, the "sqlcmd" command will always work. You can also use it to modify the instance configuration.


I hope this blog will help you in your work. In my next blog, I will show you how to access your database information with PowerShell.

APEX 5.0: Custom Favicon for Applications using Universal Theme

Patrick Wolf - Mon, 2015-05-04 06:08
For applications which are using Universal Theme you don’t have to modify the Page Template anymore if you want to replace the default favicon with a custom one. Instead follow these steps: Go to Shared Components Click Application Definition Attributes (in Application … Continue reading →
Categories: Development

I love Live Demos – how about you?

The Oracle Instructor - Mon, 2015-05-04 05:59

Tired of boring slide-shows? Join me for free to see Oracle core technology live in action!

Live demonstrations have always been a key part of my classes, because I consider them one of the best ways to teach.

This is your opportunity to have a glimpse into my classroom and watch a demo just as I have delivered it there.

Apparently, not many speakers are keen to do things live, so the term Demonar (Demonstration + Seminar) was waiting for me to invent it :-)

A positive effect on your attitude towards LVCs and Oracle University Streams with its live webinars is intended, since the setting and platform there are very similar.

Categories: DBA Blogs

Get SQL Server services with PowerShell

Yann Neuhaus - Mon, 2015-05-04 02:13

SQL Server Configuration Manager and SQL Server Management Studio are the main tools to administrate the components of SQL Server. They are very convenient to use and pretty complete.
But as soon as you want to automate a process, these tools reach their limitations. Nevertheless, there is a solution: PowerShell!

This blog introduces a first step towards an automation process of SQL Server administration. I will retrieve all SQL Server services related to a specific instance name.

The process will always be similar, using the SMO WMI server PowerShell object.


Disclaimer: I am not a developer but a SQL Server dba. If you find errors or some ways of improvement, I will be glad to read your comments!


SQL Engine

To retrieve the SQL Engine service for a specific instance name:
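The code listings in this post were stripped by the feed; a hypothetical sketch of such a function, built on the SMO WMI ManagedComputer object mentioned above, could look like this. The functions for the other services below follow the same pattern with a different `Type` filter (e.g. SqlAgent, SqlBrowser, AnalysisServer, ReportServer):

```powershell
# Hypothetical function illustrating the pattern: the SMO WMI ManagedComputer
# object lists every SQL Server related service on the machine
function Get-SqlEngineService
{
    param([string]$instanceName)

    [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement") | Out-Null
    $wmi = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer

    # Engine services are named MSSQLSERVER (default instance)
    # or MSSQL$<instance> (named instance)
    $wmi.Services | Where-Object { $_.Type -eq "SqlServer" -and $_.Name -match $instanceName }
}
```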


  SQL Agent

To retrieve the SQL Agent service for a specific instance name:


  SQL Full-text Filter

To retrieve the SQL Full-text Filter service for a specific instance name:


  SQL Browser

To retrieve the SQL Browser service:


  SQL Analysis

To retrieve the SQL Analysis service for a specific instance name:


  SQL Reporting

To retrieve the SQL Reporting service for a specific instance name:


  SQL Integration

To retrieve the SQL Integration service:


  Service Object

Each function returns an object with the following properties and methods:
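The property listing is missing here; a sketch of working with one of these service objects (the default instance engine service is used as an illustration) might be:

```powershell
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement") | Out-Null
$wmi = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer

# "MSSQLSERVER" is the default instance engine service; adjust the name as needed
$service = $wmi.Services["MSSQLSERVER"]

$service.ServiceAccount   # account the service runs under
$service.StartMode        # Auto, Manual, Disabled
$service.ServiceState     # Running, Stopped, ...

$service.Stop()           # and Start(); a restart is Stop() followed by Start()
```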


You are able to start, restart or stop your service. But you can also retrieve specific information such as the service account or the start mode.

  Next Step

If you do not want to proceed just for a specific instance, but for all instances, you can list all instance names in that way:


Then, with your list of instance names, you loop over it, calling each function. Do not forget to test whether the returned service exists (by checking if it is null).
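A self-contained sketch of such a loop (the filter on the service list is illustrative, not the original code) could look like this:

```powershell
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SqlWmiManagement") | Out-Null
$wmi = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer

# Instance names read from the registry, as described above
$key = "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server"
$instances = (Get-ItemProperty -Path $key -Name InstalledInstances).InstalledInstances

foreach ($instance in $instances)
{
    # look up the engine service for this instance name
    $service = $wmi.Services | Where-Object { $_.Type -eq "SqlServer" -and $_.Name -match $instance }

    # do not forget the null test: the service may not exist
    if ($service -ne $null)
    {
        "{0} : {1}" -f $service.Name, $service.ServiceState
    }
}
```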


To finish this article: all these functions are part of our Database Management Kit (DMK), developed by our team. We use it to access common and standard information faster, but also to automate processes.

For example, the DMK is able (in just one command!) to make a security audit of your SQL Server environment, by following the best practices from Microsoft and from our experts. A report is generated at the end of the audit to list all the security points to review.

theshortenspot on twitter!

Anthony Shorten - Sun, 2015-05-03 19:02

If you want to be kept up to date on when a new article is published, I recommend that you subscribe to the twitter account for theshortenspot. Any time a new article is posted, a tweet is added to announce the article (with links to the article).

The twitter account is @theshortenspot. Use your favorite twitter client (or just the browser) to view the tweets...

Secure By Default in FW

Anthony Shorten - Sun, 2015-05-03 18:53

One of the new features of Oracle Utilities Application Framework V4 for Oracle WebLogic customers is that new installations of the product will use HTTPS rather than HTTP by default. In past releases it was always possible to use HTTPS instead of HTTP, but it was an opt-in decision. In this release, since HTTPS is provided as the default option, the decision is an opt-out one if you do not want to use the HTTPS installation option.

Customers upgrading will not be affected as the configuration decision is retained across upgrades.

If you do use the default HTTPS setup you should be aware of the following:

  • By default, a demonstration development certificate is provided with Oracle WebLogic. This certificate is limited in its scope and is only provided to complete a basic HTTPS configuration within Oracle WebLogic. The certificate will be flagged as not valid by your browser. This is not a bug but intentional behavior, as Oracle cannot issue production-quality certificates in Oracle WebLogic as part of its base installation. If the default certificate is used, developers can accept the certificate according to their browser preferences (Mozilla Firefox will ask you to add an exception, and Internet Explorer will ask you to confirm that it is OK to proceed). If you proceed, the browser will indicate visually on the address bar that you are using a digital certificate (this will vary from browser to browser).
  • It is HIGHLY recommended that customers who want to use the HTTPS functionality obtain a valid digital certificate from a valid certificate issuing authority and implement the certificate as per the Installation Guide or WebLogic documentation.
  • To find out the certificate issuing authorities supported by your Java version, use the following command:
keytool -list -v -keystore $JAVA_HOME/jre/lib/security/cacerts

The bottom line is that if you want to use HTTPS then get a valid certificate for your organization, otherwise you can opt-out and use HTTP if that is valid for your site. Typically, most installations are expected to use HTTP for non-production and HTTPS for production to minimize costs.

Updated Whats New in FW4 whitepaper

Anthony Shorten - Sun, 2015-05-03 18:24

The What's New in FW4 whitepaper summarizes all the major changes from Oracle Utilities Application Framework V2.2 to Oracle Utilities Application Framework V4.

It has been updated for new functionality and changes implemented in Oracle Utilities Application Framework V4.

It is available from My Oracle Support at What's New In Oracle Utilities Application Framework V4 (Doc Id: 1177265.1).

Note: Some features introduced in earlier versions of Oracle Utilities Application Framework V4 have since been replaced with newer features. In these cases, the entries in the What's New whitepaper have been altered to remove the replaced features. Refer to the release notes for your version of Oracle Utilities Application Framework for details of this replaced functionality.

SQLcl - Cloud connections via Secure Shell tunnels

Barry McGillin - Sun, 2015-05-03 17:39
We're always trying to make SQLcl easier to connect to your database, whether it's at your place or in the cloud.  So, one other thing we have added to enable you to drill into your cloud databases is an SSHTUNNEL command.  Let's take a look at the help for it, which you can get as follows.

SQL> help sshtunnel

Creates a tunnel using standard ssh options, such as port forwarding with
option -L: the given port on the local host will be forwarded to the given
remote host and port on the remote side. It also supports identity files,
using the ssh -i option.
If passwords are required, they will be prompted for.

SSHTUNNEL <username>@<hostname> -i <identity_file> [-L localPort:Remotehost:RemotePort]


-L localPort:Remotehost:Remoteport

Specifies that the given port (localhost) on the local (client) host is to be forwarded to
the given remote host (Remotehost) and port (Remoteport) on the remote side. This works by
allocating a socket to listen to port on the local side.
Whenever a connection is made to this port, the connection is forwarded over
the secure channel, and a connection is made to remote host & remoteport from
the remote machine.

-i identity_file
Selects a file from which the identity (private key) for public key authentication is read.


So for this to work, we need to decide which local ports we are going to use and which remote machine and port we want to map our ports to.  We also need an RSA key file for the target host.  In this example, we have created one with the default name of id_rsa.

The format of the flags follows the standard ssh rules and options, so -i for identity files and -L for port forwarding.  Here's an example connecting to a remote host via a tunnel.

(bamcgill@daedalus.local)–(0|ttys000|-bash)–(Mon May 04|12:16:46)
(~/.ssh) $sql /nolog

SQLcl: Release 4.1.0 Release Candidate on Mon May 04 00:16:58 2015

Copyright (c) 1982, 2015, Oracle. All rights reserved.

SQL> sshtunnel -i ./id_rsa -L

Password for ********
ssh tunnel connected

SQL> connect barry/oracle@localhost:8888/DB11GR24

SQL> select 'test me' as BLRK from dual;

test me
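For comparison, the same tunnel could be opened with plain OpenSSH outside SQLcl; the user and host names below are placeholders, not from the original session:

```shell
# Forward local port 8888 to the database listener port 1521 on the remote side
ssh -i ./id_rsa -L 8888:localhost:1521 myuser@dbhost.example.com

# then, from another terminal, connect through the tunnel:
# sql barry/oracle@localhost:8888/DB11GR24
```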


You can download SQLcl from OTN here and give this a try when the next EA is released.

Added a page about my LVC schedule

The Oracle Instructor - Sun, 2015-05-03 06:49

I often get asked by customers about my schedule, so they can book a class with me. This page now shows my scheduled Live Virtual Classes. I deliver most of my public classes in this format, and you can attend from all over the world :-)

Categories: DBA Blogs

Oracle XE 11g – Getting APEX to start when your database does

The Anti-Kyte - Sun, 2015-05-03 03:53

They say patience is a virtue. It’s one that I often get to exercise, through no fault of my own.
Usually trains are involved. Well, I say involved, what I mean is…er…late.
I know, I do go on about trains. It’s a peculiarly British trait.
This may be because the highest train fares in Europe somehow don’t quite add up to the finest train service.
We can debate the benefits of British Trains later – let’s face it we’ll have plenty of time whilst we’re waiting for one to turn up. For now, I want to concentrate on avoiding any further drain on my badly tried patience by persuading APEX that it should be available as soon as my Oracle XE database is…

Oracle Express Edition – how it starts

There are three main components to Oracle XE :

  1. The Database
  2. The TNS Listener
  3. APEX

When you fire up Express Edition, it will start these components in this order :

  1. The Database
  2. The TNS Listener

APEX doesn’t get a look in at this point. Instead, when you first invoke it, it has to wait for the XDB database component to be initialized.

As I’ve observed previously, starting up the database before the listener can cause a lag if you’re trying to connect via TNS – i.e. from any machine other than the one the database is running on, or by specifying the database in the connect string.

The other problem is, of course, APEX will often refuse to play when you first call it after startup.

Often, your first attempt to get to the Database Home Page will be met with the rather unhelpful :

leaves on the line, or the wrong kind of snow ? Either way, APEX isn't talking to you


It’s not until the TNS Listener is up and running that you’ll actually be able to connect to APEX.

In fact, it won’t be until you see the XEXDB service has been started by the Listener that you’ll be able to use APEX.
To check this :

lsnrctl status

The output should look something like this :

LSNRCTL for Linux: Version - Production on 02-MAY-2015 19:10:19

Copyright (c) 1991, 2011, Oracle.  All rights reserved.

Alias                     LISTENER
Version                   TNSLSNR for Linux: Version - Production
Start Date                02-MAY-2015 18:25:19
Uptime                    0 days 0 hr. 44 min. 59 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Default Service           XE
Listener Parameter File   /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora
Listener Log File         /u01/app/oracle/product/11.2.0/xe/log/diag/tnslsnr/mike-Monza-N2/listener/alert/log.xml
Listening Endpoints Summary...
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "XE" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
Service "XEXDB" has 1 instance(s).
  Instance "XE", status READY, has 1 handler(s) for this service...
The command completed successfully

You can see what happens when you first call APEX by looking in the database alert log. If you want to see it in real-time, open a terminal and type :

tail -f /u01/app/oracle/diag/rdbms/xe/XE/trace/alert_XE.log

With the terminal window open and visible, click on the Getting Started Desktop icon (or simply invoke APEX directly from your browser). You should see this in the alert.log…

XDB installed.
XDB initialized.

So, the solution is :

  1. Start the Listener before starting the Database
  2. Get “APEX” to start directly after starting the Database

I’ve put APEX in quotes here because what we actually want to do is initialize the XDB component within the database.

Step 1 – changing the starting order

To do this, we’ll need to edit the standard script, after first making a backup copy, just in case …

sudo su oracle
cd /u01/app/oracle/product/11.2.0/xe/config/scripts

… The edited script should look something like this :

#       svaggu 09/28/05 -  Creation
#	svaggu 11/09/05 -  dba groupd check is added

xsetroot -cursor_name watch
case $PATH in
    "") PATH=/bin:/usr/bin:/sbin:/etc
        export PATH ;;
esac

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
group=`/usr/bin/groups $user | grep -i dba`

if test -z "$group"
then
	if [ -f /usr/bin/zenity ]
	then
		/usr/bin/zenity --error --text="$user must be in the DBA OS group to start the database."
		exit 1
	elif [ -f /usr/bin/kdialog ]
	then
		/usr/bin/kdialog --error "$user must be in the DBA OS group to start the database."
		exit 1
	elif [ -f /usr/bin/xterm ]
	then
		/usr/bin/xterm -T "Error" -n "Error" -hold -e "echo $user must be in the DBA OS group to start the database."
		exit 1
	fi
fi

# Listener start moved to before database start to avoid lag in db
# registering with listener after db startup
# Mike
if [ ! `ps -ef | grep tns | cut -f1 -d" " | grep -q oracle` ]
then
	$LSNR start > /dev/null 2>&1
	echo ""
fi

# now start the database
$SQLPLUS -s /nolog @$ORACLE_HOME/config/scripts/startdb.sql > /dev/null 2>&1

xsetroot -cursor_name left_ptr

Now, when the database first starts and looks around for the Listener to register with, it’ll find it up and ready to go.

Step 2 – initialise XDB

Exactly how you do this properly has been a bit of a puzzle to me. I'm sure there is a proper way to do it, other than pointing your browser at APEX only for it to tell you to go away.
In lieu of this elusive "proper" XDB startup command, I'm going to use one that tells you what port the PL/SQL Gateway (the default listener for APEX) is listening on…

select dbms_xdb.gethttpport
from dual;

Something interesting happens when you run this command. The first time you execute it after database startup and when you haven’t invoked APEX, it takes quite a long time to return. If you look in the alert log you’ll see the reason for this…

XDB installed.
XDB initialized.

Yes, the same entries you see when you first try to invoke APEX.

So, we’re going to get this query to run as soon as the database is started. The easiest way to do this is to edit the startdb.sql script that’s called by the shell script we’ve just edited…

sudo su oracle
cd /u01/app/oracle/product/11.2.0/xe/config/scripts
cp startdb.sql startdb.sql.bak
gedit startdb.sql

Here, we’re simply adding this query directly the database is open…

connect / as sysdba
-- added to start the PL/SQL Gateway so that APEX should be reachable
-- right after startup
select dbms_xdb.gethttpport from dual;

Now, if we check the alert.log on startup of the database we’ll see something like…

QMNC started with pid=28, OS id=2469
Wed Apr 29 12:16:27 2015
Wed Apr 29 12:16:32 2015
db_recovery_file_dest_size of 10240 MB is 57.54% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Wed Apr 29 12:16:32 2015
Starting background process CJQ0
Wed Apr 29 12:16:32 2015
CJQ0 started with pid=29, OS id=2483
Wed Apr 29 12:16:43 2015
XDB installed.
XDB initialized.

As soon as that last message is there, APEX is up and ready to receive requests.
On the one hand, it’s nice to know for definite when APEX will finally deign to answer your call, as opposed to hiding behind the PAGE NOT FOUND error and pretending to be out.
On the other hand, having to tail the alert.log to figure out when this is seems a bit like hard work.

Of course, in Linux land, you can always just prompt the shell script to announce when it’s finished…

Desktop Notification

I’m running this on a Gnome based desktop ( Cinnamon, if you’re interested, but it should work on anything derived from Gnome). KDE adherents will have their own, equally useful methods.
As in my previous attempt at this sort of thing, I’m going to use notify-send.

If you want to test if it’s installed, you can simply invoke it from the command line :

notify-send "Where's that train ?"

If all is OK, you should get this message displayed in a notification on screen…

Is it a bird ? Is it a train...


Now we simply use this utility to add a message at the end of the database startup script.
We can even add an icon if we’re feeling flash….

#       svaggu 09/28/05 -  Creation
#	svaggu 11/09/05 -  dba groupd check is added

xsetroot -cursor_name watch
case $PATH in
    "") PATH=/bin:/usr/bin:/sbin:/etc
        export PATH ;;
esac

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
group=`/usr/bin/groups $user | grep -i dba`

if test -z "$group"
then
	if [ -f /usr/bin/zenity ]
	then
		/usr/bin/zenity --error --text="$user must be in the DBA OS group to start the database."
		exit 1
	elif [ -f /usr/bin/kdialog ]
	then
		/usr/bin/kdialog --error "$user must be in the DBA OS group to start the database."
		exit 1
	elif [ -f /usr/bin/xterm ]
	then
		/usr/bin/xterm -T "Error" -n "Error" -hold -e "echo $user must be in the DBA OS group to start the database."
		exit 1
	fi
fi

# Listener start moved to before database start to avoid lag in db
# registering with listener after db startup
# Mike
if [ ! `ps -ef | grep tns | cut -f1 -d" " | grep -q oracle` ]
then
	$LSNR start > /dev/null 2>&1
	echo ""
fi

# now start the database
$SQLPLUS -s /nolog @$ORACLE_HOME/config/scripts/startdb.sql > /dev/null 2>&1

# Publish desktop notification that we're ready to go...
notify-send -i /usr/share/pixmaps/oraclexe-startdatabase.png "Database and APEX ready to play"

xsetroot -cursor_name left_ptr

When the script hits the notify line, we’re rewarded with…


So, even if your train has stopped randomly in-between stations or is simply proving once again that the timetable is a work of fiction, at least you won't have to wonder if your database is ready for action.

Filed under: APEX, Oracle, Shell Scripting, SQL Tagged: alert.log, dbms_xdb.gethttpport, lsnrctl, notify-send, XDB

APEX You Tube Channel

Denes Kubicek - Sun, 2015-05-03 02:56
Great idea to organize the Oracle APEX YouTube Channel. David Peake has started a Video Series about the new Page Designer in APEX 5.0. You should have a look.

Categories: Development

Notes, links and comments, May 2, 2015

DBMS2 - Sat, 2015-05-02 08:36

I’m going to be out-of-sorts this week, due to a colonoscopy. (Between the prep, the procedure, and the recovery, that’s a multi-day disablement.) In the interim, here’s a collection of links, quick comments and the like.

1. Are you an engineer considering a start-up? This post is for you. It’s based on my long experience in and around such scenarios, and includes a section on “Deadly yet common mistakes”.

2. There seems to be a lot of confusion regarding the business model at my clients Databricks. Indeed, my own understanding of Databricks’ on-premises business has changed recently. There are no changes in my beliefs that:

  • Databricks does not directly license or support on-premises Spark users. Rather …
  • … it helps partner companies to do so, where:
    • Examples of partner companies include usual-suspect Hadoop distribution vendors, and DataStax.
    • “Help” commonly includes higher-level support.

However, I now get the impression that revenue from such relationships is a bigger deal to Databricks than I previously thought.

Databricks, by the way, has grown to >50 people.

3. DJ Patil and Ruslan Belkin apparently had a great session on lessons learned, covering a lot of ground. Many of the points are worth reading, but one in particular echoed something I’m hearing lots of places — “Data is super messy, and data cleanup will always be literally 80% of the work.” Actually, I’d replace the “always” by something like “very often”, and even that mainly for newish warehouses, data marts or datasets. But directionally the comment makes a whole lot of sense.

4. Of course, dirty data is a particular problem when the data is free-text.

5. In 2010 I wrote that the use of textual news information in investment algorithms had become “more common”. It’s become a bigger deal since. For example:

6. Sometimes a post here gets a comment thread so rich it’s worth doubling back to see what other folks added. I think the recent geek-out on indexes is one such case. Good stuff was added by multiple smart people.

7. Finally, I’ve been banging the drum for electronic health records for a long time, arguing that the great difficulties should be solved due to the great benefits of doing so. The Hacker News/New York Times combo offers a good recent discussion of the subject.

Categories: Other

Our new Oracle APEX YouTube Channel is up and running!

Patrick Wolf - Sat, 2015-05-02 07:12
Check out our new Oracle APEX YouTube Channel! Our Product Manager David Peake has started a Video Series about the new Page Designer in Oracle Application Express 5.0. It’s a great start to get familiarized with the new IDE to edit … Continue reading →
Categories: Development

getting started with postgres plus advanced server (3) - setting up a hot standby server

Yann Neuhaus - Sat, 2015-05-02 02:42

So, we have a PPAS 9.4 database up and running and a backup server for backing up and restoring the database. Now it is time to additionally protect the database by setting up a hot standby database. This database could even be used to offload reporting from the primary database, as the standby database will be open in read-only mode. Again, I'll use another system for that, so the system overview looks like this:

server        ip address    purpose
ppas                        ppas database cluster
ppasbart                    backup and recovery server
ppasstandby                 ppas hot standby database

As the standby server will need the PPAS binaries, just follow the first post to set this up again. Once the binaries are installed and the database is up and running, I'll completely destroy the database but keep the data directory:

[root@oel7 tmp]# service ppas-9.4 stop
Stopping Postgres Plus Advanced Server 9.4: 
waiting for server to shut down.... done
server stopped
[root@oel7 tmp]# rm -rf /opt/PostgresPlus/9.4AS/data/*
[root@oel7 tmp]# 

Ready to go. It is amazingly easy to set up a hot standby server with PostgreSQL. In a nutshell, all that needs to be done is: create a replication user in the database, take a base backup of the primary database, copy it to the standby server, create a recovery.conf file and start up the standby database. Let's start by creating the user which will be used for the replication in the primary database:

[root@ppas ~]# su - enterprisedb
-bash-4.2$ . ./ 
-bash-4.2$ psql
psql.bin (
Type "help" for help.

edb=# create role standby LOGIN REPLICATION UNENCRYPTED PASSWORD 'standby';
edb=# commit;

... and adjust the pg_hba.conf file (the second entry is for the base backup later):

-bash-4.2$ tail -1 data/pg_hba.conf
host    replication     standby          md5
local   replication     standby                                              md5

... and adjust the wal-level in postgresql.conf

-bash-4.2$ grep wal_level data/postgresql.conf 
wal_level = hot_standby			# minimal, archive, hot_standby, or logical

For the settings in pg_hba.conf and postgresql.conf to take effect either a reload of the main server process or a complete restart is required:

-bash-4.2$ pg_ctl -D data/ restart
waiting for server to shut down..... done
server stopped
server starting

Now it is a good time to test if we can connect to the primary database from the standby node:

[root@oel7 tmp]# /opt/PostgresPlus/9.4AS/bin/psql -h -U standby edb
Password for user standby: 
psql.bin (
Type "help" for help.


Ready for the basebackup of the primary database?

mkdir /var/tmp/primary_base_backup/
-bash-4.2$ pg_basebackup -D /var/tmp/primary_base_backup/ -U standby -F t -R -x -z -l for_standby -P
56517/56517 kB (100%), 1/1 tablespace

Especially notice the "-R" switch of pg_basebackup, as this creates a minimal recovery.conf for us which we can use as a template for our standby database. Transfer the file to the standby server and extract it there (I again prepared passwordless ssh authentication between the primary and the standby server; check the second post on how to do that).

bash-4.2$ pwd
bash-4.2$ scp* .
base.tar.gz                                                                                                  100% 5864KB   5.7MB/s   00:00    
-bash-4.2$ tar -axf base.tar.gz 
-bash-4.2$ ls
backup_label  dbms_pipe  pg_dynshmem    pg_log        pg_notify    pg_snapshots  pg_subtrans  PG_VERSION            postgresql.conf
base          global     pg_hba.conf    pg_logical    pg_replslot  pg_stat       pg_tblspc    pg_xlog               recovery.conf
base.tar.gz   pg_clog    pg_ident.conf  pg_multixact  pg_serial    pg_stat_tmp   pg_twophase

Almost ready. Now we need to adjust the recovery.conf file:

standby_mode = 'on'
primary_conninfo = 'host= port=5444 user=standby password=standby'
restore_command = 'scp bart@ %p'
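Some values were lost in the listing above. For reference, a filled-in recovery.conf might look like this (the host name primary.example.com is a placeholder; the archive path is taken from the log output shown further down):

```
# hypothetical filled-in recovery.conf; host name is an assumption
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com port=5444 user=standby password=standby'
restore_command = 'scp bart@primary.example.com:/opt/backup/ppas94/archived_wals/%f %p'
```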

... and enable hot standby mode in the postgresql.conf file on the standby server and adjust the listen address:

-bash-4.2$ grep hot postgresql.conf 
wal_level = hot_standby			# minimal, archive, hot_standby, or logical
hot_standby = on			# "on" allows queries during recovery
#hot_standby_feedback = off		# send info from standby to prevent query conflicts
-bash-4.2$ grep listen data/postgresql.conf
listen_addresses = ''		# what IP address(es) to listen on;

Startup the standby database and if everything is fine messages similar to this should be reported in the postgresql log file (/opt/PostgresPlus/9.4AS/data/pg_log/):

2015-04-29 14:03:36 CEST LOG:  entering standby mode
scp: /opt/backup/ppas94/archived_wals/000000010000000000000017: No such file or directory
2015-04-29 14:03:36 CEST LOG:  consistent recovery state reached at 0/17000090
2015-04-29 14:03:36 CEST LOG:  redo starts at 0/17000090
2015-04-29 14:03:36 CEST LOG:  record with zero length at 0/170000C8
2015-04-29 14:03:36 CEST LOG:  database system is ready to accept read only connections
2015-04-29 14:03:36 CEST LOG:  started streaming WAL from primary at 0/17000000 on timeline 1
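Once the standby has started streaming, you can also verify the replication status from the primary side. A quick check (column names are the 9.4 ones, where they are still called *_location rather than the later *_lsn) could look like this:

```
-bash-4.2$ psql -c "select client_addr, state, sent_location, replay_location from pg_stat_replication;"
```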

To further prove the setup, let's create a simple table in the primary database and add some rows to it:

edb=# create table standby_test ( a int ); 
edb=# insert into standby_test values (1);
edb=# insert into standby_test values (2);
edb=# commit;
edb=# \! hostname

Let's see if we can query the table on the standby:

-bash-4.2$ psql
psql.bin (
Type "help" for help.

edb=# select * from standby_test;
(2 rows)

edb=# \! hostname

Cool. Minimal effort for getting a hot standby database up and running. Make yourself familiar with the various settings that influence the behavior of the standby database. I'll write another post on how to do failovers in the near future.

Backup file using copy with date append

Michael Dinh - Fri, 2015-05-01 21:51

How many ways are there to back up a file? Copy it and append .bak, .sav, .orig, etc …

I was looking for a simple, consistent, yet elegant way to do this.

Good thing there is Google (disclaimer: I do own the stock). I came across this blog –


$ ps
  PID TTY          TIME CMD
 4343 pts/1    00:00:00 bash
 6915 pts/1    00:00:00 ps

$ ls -l logdump*
-rw-r--r--. 1 oracle oinstall 2487 Apr 19 11:09 logdump.hst

$ filename=logdump.hst

$ cp $filename{,.`date +%d%b%Y-%H%M`}

$ ls -l logdump*
-rw-r--r--. 1 oracle oinstall 2487 Apr 19 11:09 logdump.hst
-rw-r--r--. 1 oracle oinstall 2487 May  1 20:31 logdump.hst.01May2015-2031
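The brace trick generalizes nicely into a tiny shell function. This is bash-specific (it relies on brace expansion, which plain POSIX sh lacks), and the function name bak is my own invention:

```shell
#!/bin/bash
# bak FILE -- copy FILE to FILE.<ddMonYYYY-HHMM>, preserving timestamps.
# "$1"{,."$(date ...)"} brace-expands into two words: the original name
# and the name with the date suffix appended, exactly like the cp above.
bak() {
  cp -p "$1"{,."$(date +%d%b%Y-%H%M)"}
}

# demo in a throwaway directory
tmpdir=$(mktemp -d)
echo "some log content" > "$tmpdir/logdump.hst"
bak "$tmpdir/logdump.hst"
ls "$tmpdir"
```

The -p flag keeps the original modification time on the copy, which is handy when you later want to know how old the saved version really is.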

db file parallel read faster on Linux than HP-UX?

Bobby Durrett's DBA Blog - Fri, 2015-05-01 16:39

I am still working on comparing performance between an HP-UX blade and a Linux virtual machine and I have a strange result.  I tried to come up with a simple example that would do a lot of single block I/O.  The test runs faster on my Linux system than my HP-UX system and I’m not sure why.  All of the parameters are the same, except the ones that contain the system name and filesystem names.  Both systems are  The dramatic difference in run times corresponds to an equally dramatic difference in db file parallel read wait times.

I created a table called TEST and populated it with data and added an index called TESTI. I ran this query to generate a lot of single block I/O:

select /*+ index(test testi) */ sum(blocks) from test;

Here is the result on HP:


Elapsed: 00:01:28.38

          9  recursive calls
          0  db block gets
    3289143  consistent gets
     125896  physical reads
      86864  redo size
        216  bytes sent via SQL*Net to client
        248  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

  3  WHERE a.SID= :monitored_sid
  4  order by time_waited desc;

------------------------------ ----------- ----------- ------------
db file parallel read                 4096        6760         1.65
db file sequential read              14526         236          .02
events in waitclass Other                1          28        28.49
SQL*Net message from client             19           5          .28
db file scattered read                   5           3          .65
SQL*Net message to client               20           0            0
Disk file operations I/O                 1           0          .01

Here is the same thing on Linux:


Elapsed: 00:00:09.01

          9  recursive calls
          0  db block gets
    3289130  consistent gets
     125872  physical reads
      77244  redo size
        353  bytes sent via SQL*Net to client
        360  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
          1  rows processed

  3  WHERE a.SID= :monitored_sid
  4  order by time_waited desc;

------------------------------ ----------- ----------- ------------
db file parallel read                 4096          55          .01
events in waitclass Other                1          17        16.72
db file sequential read              14498          11            0
SQL*Net message from client             19           6          .31
db file scattered read                  15           0            0
SQL*Net message to client               20           0            0
Disk file operations I/O                 1           0            0

Something doesn’t seem right.  Surely there is some caching somewhere.  Is it really possible that the Linux version runs in 9 seconds while the HP one runs in a minute and a half?  Is it really true that db file parallel read averages 1.65 hundredths of a second on HP and .01 hundredths of a second on Linux?
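One way to reduce the caching doubt is to clear both caches between runs. This is a sketch only: dropping the Linux page cache needs root, HP-UX has no direct equivalent, and flushing the buffer cache is something you would only do on a test system:

```
# Linux only, as root: write out dirty pages, then drop the page cache
sync
echo 3 > /proc/sys/vm/drop_caches

# then flush Oracle's own buffer cache before re-running the query
sqlplus / as sysdba <<'EOF'
alter system flush buffer_cache;
exit
EOF
```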

I’m still working on this but thought I would share the result since it is so strange.

Here is a zip of my scripts and their logs if you want to check them out: zip

– Bobby

p.s. Here are some possibly relevant parameters, same on both system:

cpu_count                    4
db_block_size                8192
db_cache_size                512M
db_writer_processes          2
disk_asynch_io               FALSE
dispatchers                  (PROTOCOL=TCP)(DISPATCHERS=32)
filesystemio_options         ASYNCH
large_pool_size              32M
log_buffer                   2097152
max_shared_servers           12
pga_aggregate_target         5871947670
sga_max_size                 3G
sga_target                   3G
shared_pool_size             1G
shared_servers               12
star_transformation_enabled  FALSE
Categories: DBA Blogs

Variations on 1M rows insert (1): bulk insert

Yann Neuhaus - Fri, 2015-05-01 13:30

This blog is the first one of a series about inserting 1 million rows. Do you have an idea about the time it takes? The idea came from another project, about NoSQL databases, but I'll focus on Oracle in these posts. There are a lot of different ways to insert 1 million rows in Oracle: row-by-row insert or bulk, row-by-row commit or not, different ways to commit, In-Memory Database, Database In-Memory Option, etc.

Test case

Here is the table I create:

create table DEMO ("id" number primary key, "text" varchar2(15), "number" number) ;

Here is the first PL/SQL I'll run:

declare
 type people_array is varray(12) of varchar(15);
 people people_array := people_array( 'Marc', 'Bill', 'George', 'Eliot', 'Matt', 'Trey', 'Tracy','Greg', 'Steve', 'Kristina', 'Katie', 'Jeff' );
 people_count number :=people.COUNT;
 n number;
begin
 for i in 0..1e6 loop
  insert into DEMO values( i , people( dbms_random.value(1,people_count) ) , n );
 end loop;
 commit;
end;
/

You may wonder why I chose this testcase. As I said, it comes from another project, and you can google to find where the test case comes from. The performance will be mainly about inserts, but the idea is also to implement some procedural processing to show that it's possible in different database systems: here in PL/SQL on Oracle, and in a future post in TimesTen.


Test environment

All the tests will be done on a VM with only one CPU activated. This is important because some of the activity will involve background processes.

$ grep "model name" /proc/cpuinfo
model name      : Intel(R) Core(TM) i7-3630QM CPU @ 2.40GHz


Oracle 11g

My first tests are done on the first Oracle Developer Day VM that was available for download. It's an 11g version. We will see 12c later.

SQL> select * from v$version;

Oracle Database 11g Enterprise Edition Release - Production

And it's 32-bit:

SQL> select addr,4*length(addr) from v$process where rownum=1;

-------- --------------
3B3C9F80             32


With and without index

As you have seen above, I've created the table with a primary key, so I have an index on it. How long does it take to run the statement above to insert 1 million rows?

PL/SQL procedure successfully completed.
Elapsed: 00:00:46.80

On one CPU it takes 46 seconds to insert one million rows, row-by-row, from a simple PL/SQL loop. We will take that as the baseline for future comparisons. Of course I've done several runs, and I keep a result only when repeated runs differ by less than 5%.


Just to show that the index has an important cost when inserting, let's run the same testcase without the PRIMARY KEY:

PL/SQL procedure successfully completed.
Elapsed: 00:00:35.10

I'll keep the testcase with the primary key index as the point of comparison. There is no point in inserting one million rows if we can't retrieve one easily. And the table-with-primary-key pattern is the most relevant one when comparing different databases - SQL or NoSQL.


Bulk insert

The previous inserts are already quite optimal: done directly from PL/SQL in the database, so no roundtrips. We commit only at the end, and we know that in Oracle it's better not to commit too often. I will cover the need for row-by-row commits in a future post. But this is still row-by-row insert, and that is not optimal. When we have all the rows to insert from one session, we can insert in bulk (one execution of the insert statement inserts several rows).

There are several ways to do that depending on the client language. Here we are in PL/SQL, so it is easy: put all rows into a collection and call the insert with that collection:

declare
 type people_array is varray(12) of varchar(15);
 people people_array := people_array( 'Marc', 'Bill', 'George', 'Eliot', 'Matt', 'Trey', 'Tracy','Greg', 'Steve', 'Kristina', 'Katie', 'Jeff' );
 people_count number :=people.COUNT;
 -- declare the collection
 type DEMO_record_type is record ("user_id" number , "name" varchar2(15), "number" number) ;
 type DEMO_collection_type is table of DEMO_record_type index by binary_integer;
 DEMO_collection DEMO_collection_type;
begin
 -- fill the collection
 for i in 1..1e6 loop
  DEMO_collection(i)."user_id":=i;
  DEMO_collection(i)."name":=people( dbms_random.value(1,people.count) );
 end loop;
 -- call the insert
 forall i in 1..DEMO_collection.count insert into DEMO values DEMO_collection(i);
 commit;
end;
/

The result is obvious:

PL/SQL procedure successfully completed.
Elapsed: 00:00:06.55

And for the testcase without the primary key index:

PL/SQL procedure successfully completed.
Elapsed: 00:00:03.80

So bulk inserts here are roughly 7x to 9x faster.



The immediate conclusion is that when you have a lot of rows to insert and you don't need to commit at each row, you should do it in bulk. Here I did it from PL/SQL, which runs directly in the database. If you are inserting from an application server or a remote client, the elimination of roundtrips will be even more important.


In future posts we will see what happens when you need to commit for each row. And I will also run it with In-Memory database and with Database In-Memory. Don't forget that if you are in Switzerland in June, our experts on Oracle, Microsoft, and SAP technologies will talk about In-Memory as implemented by SQL Server, Oracle and SAP HANA. All on the same day. It's free and you can register now: Event In-Memory: boost your IT performance!

Logstash and Oracle Database

Kris Rice - Fri, 2015-05-01 13:09
For anyone that doesn't know what Logstash is, head over to and watch the video on the home page.  Mark Rittman has done a great blog post on the full ELK stack with OBIEE.  This blog post is a first cut at an input filter for Logstash to gather metrics, logs, or anything that can be expressed in SQL.  A huge caution that this is a 0.01 attempt and will get better as I