
DBA Blogs

On the Road with Laerte

Pythian Group - Mon, 2014-08-18 09:21

Throughout late September and October, Microsoft PowerShell MVP Laerte Junior will be touring Brazil and Europe for various SQL Server-related speaking engagements.

“Thankfully, I am working at a company that fully supports their employees to speak and participate in community events,” Laerte says. “I can travel to Europe for 5 SQL Server conferences, and then go to the USA to attend the MVP Global Summit and SQL PASS Summit.”

While most European speaking sessions have been confirmed, we’ll be updating the schedule as the topics become available. You can follow Laerte on his personal blog at shellyourexperience.com.

Date | Location | Event | Topic | Speaking Schedule
September 27, 2014 | São Paulo, Brazil | SQL Saturday #325 | Criando suas próprias soluções usando PowerShell (Creating your own solutions using PowerShell) | See speaking schedule
October 1, 2014 | Schelle, Belgium | SQL Server Days | Mastering PowerShell for SQL Server | See speaking schedule
October 2, 2014 | Utrecht, Holland | SQL Saturday #336 | Full-day pre-conference training session: Mastering PowerShell for SQL Server
October 3/4, 2014 | Utrecht, Holland | SQL Saturday #336 | TBD | See speaking schedule
October 11, 2014 | Sofia, Bulgaria | SQL Saturday #311 | Writing Your Solutions Using PowerShell | See speaking schedule
October 18, 2014 | Oporto, Portugal | SQL Saturday #341 | Criando suas próprias soluções usando PowerShell | See speaking schedule
October 24, 2014 | Barcelona, Spain | SQL Saturday #338 | TBD | See speaking schedule

Will you be attending any of these sessions? If so, which ones?

Categories: DBA Blogs

What Is Oracle Elapsed Time And Wall Time With A Parallelism Twist


In this post I'm focusing on Oracle Database SQL elapsed time, adding parallelism into the mix and then revisiting wall time. What initially seems simple can take some very interesting twists!

If you are into tuning Oracle Database systems, you care about time. And if you care about time, then you need to understand the most important time parameters: what they are, their differences, how they relate to each other and how to use them in your performance tuning work.

A couple of weeks ago I wrote about Oracle DB Time, non-idle wait time, and server process CPU consumption (DB CPU) time. If you have not read that posting, HERE is the link. It must be a good read because it quickly became my most viewed post ever! In this posting, the focus is SQL elapsed time, parallelism, and again wall time. Enjoy!

Quick Review
In my previous related post, I covered non-idle wait time, DB CPU, and DB Time. Here is a very quick summary of each.

Non-Idle Wait Time accrues when an Oracle process is not consuming CPU: the session pauses (i.e., waits), and Oracle considers the wait time important for performance tuning. An example of a non-idle wait event is direct path read temp. Examples of idle wait events are SQL*Net message from client and pmon timer.

DB CPU is Oracle server/foreground/shadow process CPU consumption. It does not include Oracle background process CPU consumption.

DB Time is DB CPU plus Non-Idle Wait Time. Remember that DB Time does not include background process CPU consumption and Oracle Corporation determines which wait events are considered idle.
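
If you want to see these numbers on your own system, here is a minimal sketch (a sketch only; the stat names in v$sys_time_model and the 'Idle' wait class in v$event_name are standard Oracle):

-- DB Time and DB CPU from the system time model. Values are microseconds.
select stat_name, value/1000000 as seconds
  from v$sys_time_model
 where stat_name in ('DB time', 'DB CPU');

-- Which wait events Oracle currently classifies as idle.
select name
  from v$event_name
 where wait_class = 'Idle';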

Elapsed Time
Elapsed Time (ET) is all DB Time related to a defined task. A "defined task" could be a SQL statement, a group of SQL statements, a PL/SQL procedure, a batch job, etc. It is whatever makes sense in your tuning situation.

The elapsed time for a SQL_ID can be found in v$sql. But be careful, because this elapsed time covers all executions of that SQL_ID. Thankfully, there is also an "executions" column in v$sql.
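
Here is a hedged sketch of that kind of query (elapsed_time and cpu_time in v$sql are reported in microseconds, and v$sql has one row per child cursor, so treat the numbers as approximate):

-- Elapsed, CPU, derived non-idle wait time and the per-execution average
-- for the top SQL by elapsed time. The /1000000 converts microseconds
-- to seconds.
select sql_id,
       executions,
       elapsed_time/1000000                      as elapsed_secs,
       cpu_time/1000000                          as cpu_secs,
       (elapsed_time - cpu_time)/1000000         as wait_secs,
       elapsed_time/1000000/nullif(executions,0) as avg_elapsed_secs
  from v$sql
 where executions > 0
 order by elapsed_time desc;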


Elapsed time is displayed in a number of areas within an Oracle Database AWR or Statspack report. Looking at the screen shot above, the "top" elapsed-time SQL has an elapsed time of 268561 seconds. This means that over the AWR report's snapshot interval, across all of this SQL's executions, its total DB Time is 268561 seconds. Said another way, if we were to add up all of this SQL's DB CPU and non-idle wait time for all of its executions within the snapshot interval, the value should be 268561 seconds.

There is a lot of great information provided in the AWR and Statspack SQL reports. For example, because the elapsed time and the CPU time (DB CPU) are shown above, we can calculate the non-idle wait time for the "top" elapsed time SQL ID.

non idle wait time = elapsed time - cpu time
268465 = 268561 - 96

For the "top" elapsed SQL, its elapsed time is 268561 seconds and its DB CPU is 96 seconds, therefore its non-idle wait time is 268465 seconds. Wow! This statement has tons of associated wait time compared to CPU consumption time.

But it gets even better! Because the total elapsed time and the total number of executions over the snapshot interval are displayed, we can determine the average elapsed time!

average elapsed time = total elapsed time / executions
746.03 = 268561 / 36

Do not be deceived! The average elapsed time is unlikely to be what the user is experiencing. Two common causes of this deception are skewed elapsed times and parallelism.

For most DBAs this is unexpected, and it causes performance-perception problems. Fortunately, solutions are available to understand what's really going on.

I've spent so much time researching this topic, and seen it increase my consulting value, that I've posted a number of blog entries on this subject. Plus I created an OraPub Online Institute seminar focused specifically on it, called Using Skewed Performance Data To Your Advantage. Check it out. I'm really proud of how it turned out. I also have a couple of OSM scripts dedicated to this topic, sqlelget[11].sql.

Revisiting Wall Time With A Parallelism Twist
Now it's time to put this all together.

DB CPU is the Oracle server process CPU consumption.

Non-Idle Wait Time (NIWT) is the time when an Oracle process cannot consume CPU and must pause, and we care about this time.

DB Time is the Oracle server process CPU consumption and all non-idle wait time.

Elapsed Time (ET) is the sum of all DB Time related to a task, such as a SQL_ID.

Wall Time is what we hope the user experiences. I'll assume there is no time gap between Oracle and the user, so the wall time will equal the user's experience.

Effective Parallelism is the effective number of Oracle parallel slaves or some other form of parallelism, such as designed-in application parallelism. (For simplicity, I'm only going to mention Oracle parallel query.) If Oracle parallel query is not involved, then the effective parallelism is one. If two parallel query slaves are involved, then the effective parallelism will be a little less than two.

Parallelism can reduce wall time because we can simultaneously "burn time" in multiple places. For example, 60 seconds of elapsed time with a process running serially results in a wall time of 60 seconds. But if we have two parallel query slaves, while the elapsed time (i.e., all the DB Time) is still 60 seconds (plus some overhead time), the wall time will be around 30 seconds (plus some overhead time).

The math is really simple...that is until you factor in scalability (i.e., the overhead), which I won't. If you're interested, read the last chapter of my book, Forecasting Oracle Performance.

Let's simplify this by using some mathematical notation.

DB Time = DB CPU + NIWT

Elapsed Time = Sum of DB Time

Wall Time = Elapsed Time / Effective Parallelism
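
Expressed as a query, the estimate might look like this (a sketch, using v$sql's px_servers_executions column as a rough proxy for effective parallelism and ignoring the scalability overhead discussed above):

-- Rough wall-time estimate per SQL_ID: average elapsed time per execution
-- divided by an approximated effective parallelism. Serial statements
-- approximate to 1.
select sql_id,
       elapsed_time/1000000/nullif(executions,0)                 as avg_elapsed_secs,
       greatest(px_servers_executions/nullif(executions,0), 1)   as approx_parallelism,
       elapsed_time/1000000/nullif(executions,0) /
         greatest(px_servers_executions/nullif(executions,0), 1) as approx_wall_secs
  from v$sql
 where executions > 0;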

Pretty straightforward, eh? Below is a short video clip summarizing the key time parameters, taken from the OraPub Online Institute seminar, Tuning Oracle Using an AWR Report - Part 2. If you can't see the video, click HERE to watch it on YouTube.



Test Your Knowledge
True or False? If the total elapsed time is 60 seconds and parallel query is not involved, the total wall time will also be 60 seconds. Answer: True.

True or False? If the elapsed time per execution is 60 seconds and the wall time is 30 seconds, then parallel query is involved. Answer: True.

True or False? Bonus question, and very important to understand: If the elapsed time per execution is 60 seconds and two PQ slaves are involved, then the wall time will be 30 seconds.

The last question is false because there is overhead when parallelizing. Parallelism is not free. Because of this, the wall time will hopefully drop to perhaps 35 seconds rather than a perfect 30. That extra 5 seconds is the parallelization overhead.

Coming Up Next: Video Proof!
While the above may seem correct, I wanted proof, so I ran some SQL statements and captured the relevant time statistics. There is quite a bit of detail and I ran two different tests, so I'll post that in a week or two.

Thanks for reading,

Craig.
https://resources.orapub.com/OraPub_Online_Training_About_Oracle_Database_Tuning_s/100.htm
You can watch the seminar introductions for free on YouTube! If you enjoy my blog, subscribing will ensure you get a short, concise email about each new posting. Look for the form on this page.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub.com.




Categories: DBA Blogs

Partner Webcast - Oracle SOA Suite 12c: Connect 4 Cloud, Mobile, IoT with On-premise

The pace of new business projects continues to grow from increasing customer self-service to seamlessly connecting all your back office and in-the-field applications. At the same time increased...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Closer look at the SOA 12c Feature: Oracle Managed File Transfer

The rapid growth of cloud-based applications in the enterprise, combined with organizations' desire to integrate applications with mobile technologies, is dramatically increasing application...

We share our skills to maximize your revenue!
Categories: DBA Blogs

ASM Commands : 1 -- Adding and Using a new DiskGroup for RAC

Hemant K Chitale - Sat, 2014-08-16 10:22
In 11gR2 Grid Infrastructure and RAC

On node1, I discover and add a disk to ASM. NFS "devices" asmdisk.1 to asmdisk.6 are already present as ASM disks; asmdisk.7 has been added on NFS mount point /data1. (Disks asmdisk.3 to asmdisk.6 are on /data2.)

I start on node1 in my Cluster

[root@node1 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:42:02 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> show parameter asm_diskstring

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
asm_diskstring                       string      /crs/*, /data1/*, /data2/*, /f
                                                 ra/*
SQL> !ls -l /data1/asm*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 16 23:42 /data1/asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 16 23:33 /data1/asmdisk.7

SQL> create diskgroup DATA3 disk '/data1/asmdisk.7';
create diskgroup DATA3 disk '/data1/asmdisk.7'
*
ERROR at line 1:
ORA-15018: diskgroup cannot be created
ORA-15072: command requires at least 2 regular failure groups, discovered only
1


SQL> create diskgroup DATA3 external redundancy disk '/data1/asmdisk.7';

Diskgroup created.

SQL>
SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME                             TOTAL_MB
------------ ------------------------------ ----------
           5 DATA3                                1953

SQL>

I now have a new DiskGroup using External Redundancy with a single disk.  Is it visible at node2 ?

[root@node2 ~]# su - grid
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sat Aug 16 23:47:45 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME                             TOTAL_MB
------------ ------------------------------ ----------
           0 DATA3                                   0

SQL>

Why is the size not visible yet ?  Because, although the CREATE from node1 had also MOUNTed the Disk Group, it hasn't been mounted on node2 yet.

SQL> alter diskgroup DATA3 mount;

Diskgroup altered.

SQL> select group_number, name, total_mb
2 from v$asm_diskgroup
3 where name = 'DATA3'
4 /

GROUP_NUMBER NAME                             TOTAL_MB
------------ ------------------------------ ----------
           5 DATA3                                1953

SQL>

Can I confirm the underlying disk ?

SQL> select group_number, disk_number, header_status, state, total_mb
2 from v$asm_disk
3 where group_number = 5;

GROUP_NUMBER DISK_NUMBER HEADER_STATU STATE      TOTAL_MB
------------ ----------- ------------ -------- ----------
           5           0 MEMBER       NORMAL         1953

SQL>


What happens when I create a tablespace/datafile in this DiskGroup, from the instance on node1 ?

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$ su - oracle
Password:
-sh-3.2$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:08:31 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> create tablespace NEW_TBS datafile '+DATA3';
create tablespace NEW_TBS datafile '+DATA3'
*
ERROR at line 1:
ORA-01119: error in creating database file '+DATA3'
ORA-15045: ASM file name '+DATA3' is not in reference form
ORA-17502: ksfdcre:5 Failed to create file +DATA3
ORA-15081: failed to submit an I/O operation to a disk


SQL>

Why do I get this error ? I could create a DiskGroup on the ASM Disk but I couldn't add a datafile ?  Let me check the permissions.

SQL> !sh
sh-3.2$ cd /data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:11 asmdisk.7
sh-3.2$ su grid
Password:
sh-3.2$ pwd
/data1
sh-3.2$ ls -l asmd*
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.1
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.2
-rw-r--r-- 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$ chmod 775 asmdisk.7
sh-3.2$ ls -l asmdisk.7
-rwxrwxr-x 1 grid oinstall 2048000000 Aug 17 00:12 asmdisk.7
sh-3.2$

The oinstall group that is used by "oracle" did not have write permissions. Let me go back to Oracle now after having granted the permissions.

sh-3.2$ exit
exit
sh-3.2$ exit
exit

SQL> l
1* create tablespace NEW_TBS datafile '+DATA3'
SQL> /

Tablespace created.

SQL>

The CREATE TABLESPACE has succeeded.  I can verify the datafile and the ASM file from node2 now.

-sh-3.2$ id
uid=500(grid) gid=1001(oinstall) groups=1001(oinstall),1011(asmdba)
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:17:19 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysasm

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options

SQL> select group_number, file_number, bytes/1048576, type, redundancy
2 from v$asm_file
3 where group_number=5;

GROUP_NUMBER FILE_NUMBER BYTES/1048576
------------ ----------- -------------
TYPE                                                             REDUND
---------------------------------------------------------------- ------
           5         256    100.007813
DATAFILE                                                         UNPROT


SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Real Application Clusters and Automatic Storage Management options
-sh-3.2$
-sh-3.2$ su - oracle
Password:
-sh-3.2$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Sun Aug 17 00:19:34 2014

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options

SQL> select file_name, bytes/1048576 from dba_data_files
2 where tablespace_name = 'NEW_TBS';

FILE_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
+DATA3/racdb/datafile/new_tbs.256.855792859
          100


SQL>

Now I have the new datafile, on the new DiskGroup, visible in both ASM and the database.
.
.
.

Categories: DBA Blogs

Webcast - Oracle Database In-Memory Option

Following the recent announcement by Larry Ellison on the Future of the Database, we are happy to share this exclusive series of live webcasts from Oracle Database Product Management, where you can...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Log Buffer #384, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-15 07:32

This Log Buffer Edition starts with some great posts from the Oracle arena, then passes through the world of SQL Server, and stops at the MySQL field.

Oracle:

OAG/OES Integration for Web API Security: skin and guts by Andre Correa

Showing Foreign Key Names in your Data Modeler Diagrams

walkmod : A Tool to Apply Coding Conventions

Oracle VM Virtual Appliances for E-Business Suite 12.1.3 Now Available

RMAN Catalog requires Enterprise Edition (EE) since Oracle Database 12.1.0.2

SQL Server:

Restore Gene : Automating SQL Server Database Restores

With a hybrid cloud, can you get the freedom and flexibility of a public cloud with the security and bandwidth of a private cloud?

A clear understanding of SQL Data Types and domains is a fundamental requirement for the Database Developer, but it is not elementary.

Automating SQL Server Agent Notification

Adding Custom Reports to SQL Server Management Studio

MySQL:

The Road to MySQL 5.6 — A DBA Perspective

Virtual servers for MySQL are popular, but are they the answer? Should we be containing our instances instead?

Jeremy Cole recently blogged about the feature SET GLOBAL sql_log_bin.

Which SQL queries take all the time? Using MaxScale to answer that age-old question.

SBR vs RBR when using On Duplicate Key Update for High Availability

Categories: DBA Blogs

OGG-00212, what a frustrating error.

DBASolved - Thu, 2014-08-14 14:50

Normally, I don’t mind errors when I’m working with Oracle GoldenGate (OGG); I actually like getting errors, as they keep me on my toes and give me something to solve. Clients, on the other hand, do not like errors…LOL. Solving errors in OGG is normally pretty straightforward with the help of the documentation, although today I almost have to disagree with the docs.

Today, as I’ve been working on implementing a solution with OGG 11.1.x on the source side and OGG 11.2.x on the target side, this error came up as I was trying to start the OGG 11.1.x Extracts:

OGG-00212  Invalid option for MAP: PMP_GROUP=@GETENV("GGENVIRONMENT".

OGG-00212  Invalid option for MAP:  TOKENS(.

I looked around in the OGG documentation and other resources (online and offline). Some errors are self-explanatory; not so in the case of OGG-00212. Looking up the error in the OGG 11.1.x docs was pointless; it didn't exist there. When I finally found the error in the docs for OGG 11.2.x, they say:

OGG-00212: Invalid option for [0]:{1}
Cause: The parameter could not be parsed because the specified option is invalid.
Action: Fix the syntax

Now that the documentation has stated the obvious, how is the error actually corrected? There is no easy way to correct this error because it is syntax related. In my case, the error was being thrown due to missing spaces in the TABLE mapping. Silly, I know, but true.

Keep in mind, to fix an OGG-00212 error, especially with OGG 11.1.x or older, remember to add spaces where you may not think one is needed.

Example (causes the error):

TABLE <schema>.SETTINGS,TOKENS( #opshb_info() );

Example (fixed the error):

TABLE <schema>.SETTINGS, TOKENS ( #opshb_info() );

Notice the space between the comma (,) and TOKENS, and also between TOKENS and the open parenthesis ((). Those simple changes fixed the OGG-00212 error I was getting.

Hope this helps!

Enjoy!

http://about.me/dbasolved

Filed under: Golden Gate
Categories: DBA Blogs

SQL Tuning Health Check (SQLHC)

DBA Scripts and Articles - Thu, 2014-08-14 09:12

What is SQL Tuning Health Check? The SQL Tuning Health Check is provided by Oracle (Doc ID 1366133.1) in order to check the environment where the problematic SQL query runs. It checks the statistics, the metadata, initialization parameters and other elements that may influence the performance of the SQL being analyzed. The script generates an [...]

The post SQL Tuning Health Check (SQLHC) appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Managing Files with SaltStack

Pythian Group - Thu, 2014-08-14 09:06

Before we begin, take a look at my previous two blog posts, SaltStack for Remote Parallel Execution of Commands and Using SaltStack for Configuration Management.

We manage files using configuration management for much the same reasons that we manage packages and services – we want/need consistency across all of our boxes, and to spend our time on tasks that add more business value than logging into all of our servers to change a line in a config file.

Using Salt, I will show you how to manage the message of the day (MOTD) on the server.

Templates

There are many examples of configuration files which differ only by a few lines between staging and production. A great example of this is a config file which has database host/user/pass information. The drive for consistency tells us that we would like the environment that our release is tested on to match (as closely as possible) the production environment where the code will run.

Using templates allows us to affect the few lines in a configuration file which we would like to change, while leaving the rest alone. This also simplifies the management of our servers and the configuration repository, allowing us to maintain one config file for many servers.

Salt grains

The salt minions know a lot of information about the boxes they run on. Everything from the hostname, to the IP address, to the kernel, and more is available to be queried by the salt master. These pieces of information are called “grains” in the salt world and allow us to insert dynamic variables into our templates.

A great use case for grains would be the expansion of our Apache formula from my last post. On Red Hat-based machines the Apache package is called "httpd", but on Debian-based machines the package is called "apache2". Using the "osfamily" grain we can dynamically redefine the package name for each minion while maintaining a single formula for all servers.

Likewise, any configuration files which need to have the current box IP address can benefit from grains. As each minion installs that configuration file, it will see that the variable needs to be populated with a "grain" and will do so as requested. Rather than maintaining an Apache vhost file for each of your 10 web servers where the only difference is the IP address declaration, you can maintain one dynamic template which ensures that everything in that config file matches on each of the 10 nodes other than the one thing that needs to be dynamic (the IP address).

Putting it all together – the MOTD

In your /srv/salt dir we are going to create a subdir called motd. Inside that directory you will find two files: an init.sls, which is the default top-level formula for the directory, and an motd.template file, which is our config file. The init.sls looks like this:

/etc/motd: 
  file.managed: 
    - user: root 
    - group: root 
    - mode: 0644 
    - source: salt://motd/motd.template 
    - template: jinja

For the file /etc/motd we are telling Salt that we want to manage the file, that its owner and group should be root, and that we want the file to have 0644 permissions. We are letting Salt know that it will find the config file (source) under the motd subdir (salt:// maps to /srv/salt), and that our template is in the Jinja format.

Our template will look like:

------------------------------------------------------------------------------
Welcome to {{ grains['fqdn'] }}

Server Stats at a Glance:
------------------------

OS: {{ grains['osfullname'] }}
Kernel: {{ grains['kernelrelease'] }}
Memory: {{ grains['mem_total'] }} MB

This server is managed using a configuration management system (saltstack.org).
Changes made to this box directly will likely be over-written by SALT. Instead
modify server configuration via the configuration management git repository.
------------------------------------------------------------------------------

As each minion installs this MOTD file, it will see the variables in use; because they are grains, the minion will know it has the information required to populate them and will do so for each server. This will give you a final MOTD that looks like this:

[root@ip-10-0-0-172 ~]# cat /etc/motd

------------------------------------------------------------------------------
Welcome to ip-10-0-0-172.ec2.internal

Server Stats at a Glance:
------------------------

OS: Amazon Linux AMI
Kernel: 3.10.42-52.145.amzn1.x86_64
Memory: 996 MB

This server is managed using a configuration management system (saltstack.org).
Changes made to this box directly will likely be over-written by SALT.  Instead
modify server configuration via the configuration management git repository.
------------------------------------------------------------------------------
[root@ip-10-0-0-172 ~]#

As you can see each variable was populated with the information specific to the node.

If we wanted to add, remove, or change anything in the MOTD, rather than having to box-walk the entire infrastructure (which, depending on your size, could tie up a resource for days), we can edit the single template file on the master and allow the tool to propagate the change out to the boxes for us, reducing the task from a very boring day (or more) to a few minutes!

Categories: DBA Blogs

How to Configure an Azure Point-to-Site VPN – Part 2

Pythian Group - Thu, 2014-08-14 08:47

This blog post is the second in a series of three which will demonstrate how to configure a Point-to-Site VPN step-by-step. In my first blog post, I demonstrated how to configure a virtual network and a dynamic routing gateway. Today’s post will be about creating certificates.

CREATING CERTIFICATES

At this step, we will create and upload a certificate. This certificate will be used to authenticate the VPN clients; this is performed in a few steps:

  • Generate the certificate
  • Upload the root certificate to the Azure Management Portal
  • Generate a client certificate
  • Export and install the client certificate

Let’s start …

  1. We will need to use the MakeCert tool. MakeCert is part of “Microsoft Visual Studio Express” available here.
  2. After successfully downloading the tool, start the setup and follow the installation steps. Note that you can generate this certificate in any computer, not only in the computer where you are configuring the VPN.
    After the installation, you can find MakeCert at:

    • C:\Program Files (x86)\Windows Kits\8.1\bin\x64
    • C:\Program Files (x86)\Windows Kits\8.1\bin\x86
  3. Launch the command prompt as Administrator. Point the path to one of the folders referred to in the previous step and execute the following command (note: keep the command line open):
    makecert -sky exchange -r -n "CN=RootCertificateMurilo" -pe -a sha1 -len 2048 -ss My "RootCertificateMurilo.cer"
    (where "RootCertificateMurilo" is the certificate name).
    This command will create and install a root certificate in the Personal certificate store, and create the RootCertificateMurilo.cer file in the directory where you are executing the command. Note: Store this certificate in a safe location.
  4. Now, go to the Windows Azure Management Portal https://manage.windowsazure.com/ in order to upload the certificate.
  5. In the networks section, select the previously created network and go to the certificate page.
  6. Click Upload a root certificate, select your certificate, and click the check mark.
    • Depending on the time zone of the server where you created the certificate, you might receive an error message: "The certificate is not valid yet, effective date is [date and time]." To work around this, delete the created certificate and create another one, adding the following parameter (change the date): -b "07/30/2014". It will be valid from 00:00:00 on the day you set.
  7. Now we need to create a Client Certificate. We will use the Root Certificate to do this.
    In the same command-line window opened before, execute the following command:
    makecert.exe -n "CN=ClientCertificateMurilo" -pe -sky exchange -m 96 -ss My -in "RootCertificateMurilo" -is my -a sha1
    This certificate will be stored in your personal certificate store.
  8. Now we need to export this certificate, as it should be installed on each computer that needs to connect to the virtual network. To achieve this, enter the command "mmc", still in the opened command line. The following window will be shown:
    • Go to File->Add/Remove Snap-in.
    • Select "Certificates" and click "Add >".
    • Select My user account and click Finish.
    • Click OK in the remaining window.
    • Now you will be able to see your certificates under the "Personal\Certificates" folder:
  9. To export the certificate, right-click the Client certificate and click "All Tasks->Export…", as shown:
  10. A wizard will be presented. Choose Yes, export the private key, and click Next.
  11. Leave the defaults and click Next.
  12. Choose a strong password (try to remember it) and click Next.
  13. Now you need to set the path where your .pfx file will be stored.
  14. Click Next, then Finish.
  15. To finalize the certificates part, we will need to install the certificate on all the servers where we want to set up the VPN. To accomplish this, you just need to:
    • Copy the exported .pfx file (step 13) to all the servers.
    • Double-click the pfx on all the servers.
    • Enter the password.
    • Proceed with the installation, maintaining the default location.

Stay tuned for my next blog post on how to configure the VPN client.

Categories: DBA Blogs

Michael Abbey: Still Presenting After All These Years

Pythian Group - Thu, 2014-08-14 07:50

A cool, wintery day in late 1989. This kid’s working for the Office of the Auditor General of Canada. I’d been working with Oracle and was in my fourth year. I had cut my teeth on 6.0.27.9.4 after first seeing V3 some four years prior. I stumbled across a well-placed ad for a show happening in Anaheim USA in September 1990. I’ve got the bug. I apply to go to the show and was told by my employer, ”Just a sec, David and I were thinking of going to that show – let us get back to you.” Some three weeks later I am told it’s a go.

I am off to sunny California for six wonderful days of International Oracle User Week (IOUW); this was a joint effort put on by Oracle and the International Oracle User Group (IOUG). I had spent the better part of the summer of 1969 in southern Cali so this was shaping up to be a resurrection. I toddle off to Cali and have a wonderful time. It’s magic – such a learning opportunity. I even came away knowing how to place a database in archivelog mode. I was so pleased with myself and got to meet one of my heroes. I had only been working with the software for 4 years, but already knew of Ken Jacobs (a.k.a. Dr. DBA).

I had the bug to present almost from day one. I saw an ad in one of the bazillion pieces of paper I brought home from that IOUW about a show in DC – Sheraton Woodley Park, to be exact. I don’t even think it exists anymore. I figured I’d attend ECO, then present an abstract for IOUW 1991 in Miami. Some of the history is described in a blog post I made in 2013, located here. Enough said about that. It was quite a whirlwind of activity on the presentation circuit in those days. Starting in 1992 I became very active in the IOUG, holding a handful of board positions up to the 2006 or maybe 2007 time frame. I attended a gazillion conferences in those days and the pinnacle was a show in Philly in 1995. I had been on the board for a few years, and the paid attendance count at that show was staggering. Chubby Checker played at the big bash, and arrangements were made for me to sit in on bass guitar for the Twist. That got cancelled at the last minute but it was close. My paper was in one of the biggest rooms in the convention centre. There were over 1,500 people in attendance and it was intoxicating. I was pleased when I got my evals to find out the attendees were as pleased as I was. It was all (or close to all) about the CORE database technology in those days. In 1995, Oracle7 was the hot item, having been on the street for over 3 years.

As guests of Oracle, a handful of us had the pleasure of attending the launch of Oracle7 at the Hudson Theatre in the Hotel Macklowe on 44th St. in beloved NYC. We were thrilled to be very close in those days to Ray Lane, then President of Oracle Corp., and we introduced Ray to a lot of his direct reports at that “party.” A mere four years later we were back for the release launch of Oracle8 at Radio City Music Hall. Again, a pleasant time was had by all. There turned out to be surprisingly little coverage/mention of Oracle8 at that event; it concentrated more on the Oracle Network Computer (NC), designed to bring computing power to every desktop at a low cost. Once during that Oracle8 launch, the operator of the boom mic in the pit swept the stage to get from one side to the other and almost hit LJE in the side of the head. I think I was the only one who heard what Larry said – “Watch out Bill.” Does anyone get the reference but me?

My torrid Oracle technology career was just that. Between 1991 and the date of this post I have probably given over 100 papers at shows from Ottawa to Hyderabad, Brighton to San Diego, and Vienna to Addis Ababa. There is still a voracious hunger out there for the heart of my expertise – anything that starts with an “O” and ends in an “E” and has the word database tagged on the end. After becoming very close to some of the kernel developers at Oracle, we discussed how they were still in the middle of their workday when the Loma Prieta quake hit in October 1989. Me and a few close friends hung out with the guys whose names litter the bottom of the “this change was done when” section of the ?/rdbms/admin directory on Oracle database software installs. We were in David Anderson’s office schmoozing and asked what he happened to be up to that day. He was ftp’ing source code from a VAX to a Sun box in preparation for the base-platform change that happened in the early 1990s. It was a magic carpet ride.

In some ways it still is. To finish off this year I am appearing at:

  • OOW (Oracle Open World) in San Francisco – September 29-October 2
  • ECO (East Coast Oracle) event in Raleigh/Durham – November 3-5
  • MOUS (Michigan Oracle User Summit) in Livonia – November 13
  • UKOUG in Liverpool – December 8-10

My personal top 10 moments (actually top 11 – the exchange rate) in my still developing tech career you say … drum roll:

Rank | Event | Date
11 | First ever tech show | 1990
10 | Longest lasting tech contact – Yossi Amor | 25 years
9 | Number of IOUG yearly events attended | 23
8 | Books published in Oracle Press series (including translations) | 42
7 | Most attendees at a presentation – 1500 (Philadelphia) | 1995
6 | Fewest attendees at a presentation – 1 | 2013
5 | Most exciting event attended – CODA in Burlingame CA | 1993
4 | First PL/SQL code block to compile – Oracle7 | 1993
3 | Favourite version of SQL*Forms – 2.3 | 1993
2 | First got hands wet with this famous technology – 5.1.22 | 1986
1 | Biggest thrill – the rush of speaking to live audiences | 1991-??
Categories: DBA Blogs

keeping my fingers crossed just submitted abstract for RMOUG 2015 Training Days ...

Grumpy old DBA - Tue, 2014-08-12 12:06
The Rocky Mountain Oracle Users Group has been big and organized for a very long time.  I have never been out there (my bad) but am hoping to change that situation in 2015.

Abstracts are being accepted for Training Days 2015. My first one is in there now; I'm thinking about a second submission, but my Hotsos 2014 presentation needs some more work first. OK, let's be honest: I need to shrink it considerably and tighten its focus.

Information on RMOUG 2015 can be found here: RMOUG Training Days 2015

Keeping my fingers crossed!
Categories: DBA Blogs

Watch Oracle DB Session Activity With The Real-Time Session Sampler

Watch Oracle DB Session Activity With My Real-Time Session Sampler
Watching session activity is a great way to diagnose and learn about Oracle Database tuning. There are many approaches to this. I wanted something simple, useful, modifiable, with no Oracle licensing issues, and that I could give away. The result is what I call the Oracle Real-Time Session Sampler (OSM: rss.sql).

The tool is simple to use. Based on a number of filtering command-line inputs, it repeatedly samples active Oracle sessions and writes the output to a file in /tmp. You can do a "tail -f" on the file to watch session activity in real time!

The rss.sql tool is included in the OraPub System Monitor (OSM) toolkit (v13j), which can be downloaded HERE.

If you simply want to watch a video demo, watch below or click HERE.


The Back-Story
Over the past two months I have been creating my next OraPub Online Institute seminar about how to tune Oracle with an AWR/Statspack report using a quantitative, time-based approach. Yeah... I know the title is long. Technically I could have used Oracle's Active Session History view (v$active_session_history) but I didn't want anyone to worry about ASH licensing issues. And ASH is not available with Oracle Standard Edition.

The Real-Time Session Sampler is used in a few places in the online seminar where I teach about Oracle session CPU consumption and wait time. I needed something visual that would obviously convey the point I wanted to make. The Real-Time Session Sampler worked perfectly for this.

What It Does
Based on a number of command-line inputs, rss.sql repeatedly samples active Oracle sessions and writes the output to a file in /tmp. The script contains no DML statements. You can do a "tail -f" on the output file to see session activity in real time. You can look at all sessions, a single session, sessions that are consuming CPU or waiting or both, etc. You can even change the sample rate: for example, once every 5.0 seconds or once every 0.25 seconds! It's very flexible and it's fascinating to watch.

Here is an example of some real output.



How To Use RSS.SQL
The tool is run within SQL*Plus and the output is written to the file /tmp/rss_sql.txt. You need two windows: one to sample the sessions and the other to look at the output file. Here are the script parameter options:

rss.sql  low_sid  high_sid  low_serial  high_serial  session_state  wait_event_partial|%  sample_delay

low_sid is the low Oracle session id.
high_sid is the high Oracle session id.
low_serial is the low Oracle session's serial number.
high_serial is the high Oracle session's serial number.
session_state is the current state of the session at the moment of sampling: "cpu", "wait", or "%" for both.
wait_event_partial is a filter applied when the session is waiting: the session is selected only if its wait event matches this (partial) string. Always set this to "%" unless you want to tighten the filtering.
sample_delay is the delay between samples, in seconds.
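
To make the mechanics concrete, here is a minimal sketch of the kind of query such a sampler runs. This is illustrative only, not the actual rss.sql source; it leans on v$session's STATE column, where 'WAITING' means the session is in a wait and anything else means it is on CPU.

-- One sample of active sessions, classified as CPU or WAIT.
-- &low_sid and &high_sid are SQL*Plus substitution variables.
select sid,
       serial#,
       case when state = 'WAITING' then 'WAIT' else 'CPU' end as session_state,
       case when state = 'WAITING' then event end             as wait_event
  from v$session
 where status = 'ACTIVE'
   and sid between &low_sid and &high_sid;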

Examples You May Want To Try
By looking at the below examples, you'll quickly grasp that this tool can be used in a variety of situations.

Situation: I want to sample a single session (sid:10 serial:50) once every five seconds.

SQL>@rss.sql  10 10 50 50 % % 5.0

Situation: I want to essentially stream a single session's (sid:10 serial:50) activity.

SQL>@rss.sql 10 10 50 50 % % 0.125

Situation: I want to see what sessions are waiting for a row-level lock while sampling once every second.

SQL>@rss.sql 0 99999 0 99999 wait enq%tx%row% 1.0

Situation: I want to see which sessions are consuming CPU, while sampling once every half second.

SQL>@rss.sql 0 99999 0 99999 cpu % 0.50

Be Responsible... It's Not OraPub's Fault!
Have fun and explore...but watch out! Any time you sample repeatedly, you run the risk of impacting the system under observation. You can reduce this risk by sampling less often (perhaps once every 5 seconds), by limiting the sessions you want to sample (not 0 to 99999), and by only selecting sessions in either a "cpu" or "wait" state.

A smart, lower-impact strategy would be to initially keep a broader selection criteria but sample less often; perhaps once every 15 seconds. Once you know what you want to look for, tighten the selection criteria and sample more frequently. If you have identified a specific session of interest, then you can stream its activity (if appropriate) every half second or perhaps every quarter second.

All the best in your Oracle Database tuning work,

Craig.
https://resources.orapub.com/OraPub_Online_Training_About_Oracle_Database_Tuning_s/100.htm
You can watch the seminar introductions for free on YouTube! If you enjoy my blog, subscribing will ensure you get a short, concise email about each new posting. Look for the form on this page.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub.com.






Categories: DBA Blogs

Offline Visualization of Azkaban Workflows

Pythian Group - Mon, 2014-08-11 07:51

As mentioned in my past adventures, I’m often working with the workflow management tool ominously called Azkaban. Its foreboding name is not really deserved; it’s relatively straightforward to use, and offers a fairly decent workflow visualization. For that last part, though, there is a catch: to be able to visualize the workflow, you have to (quite obviously) upload the project bundle to the server. Mind you, it’s not that much of a pain, and could easily be managed by, say, a Gulp-fueled watch job. But still, it would be nice to tighten the feedback loop, and be able to look at the graphs without having to go through the server at all.

Happily enough, all the information we need is available in the Azkaban job files themselves, and in a format that isn’t too hard to deal with. Typically, a job file will be called ‘foo.job’ and look like

type=command
command=echo "some command goes here"
dependencies=bar,baz

So what we need to do to figure out a whole workflow is to begin at its final job, and recursively walk down all its dependencies.

use 5.14.0;   # the s///r flag used below requires Perl 5.14 or later

use Path::Tiny;

sub create_workflow {
  my $job = path(shift);
  my $azkaban_dir = $job->parent;

  my %dependencies;

  my @files = ($job);

  while( my $file = shift @files ) {
    my $job = $file->basename =~ s/\.job//r;

    next if $dependencies{$job}; # already processed

    my @deps = map  { split /\s*,\s*/ }
               grep { s/^dependencies=\s*// }
                    $file->lines( { chomp => 1 } );

    $dependencies{$job} = \@deps;

    push @files, map { $azkaban_dir->child( $_.'.job' ) } @deps;
  }

  return %dependencies;
}

Once we have that dependency graph, it’s just a question of drawing the little boxes and the little lines. Which, funnily enough, is a much harder job than one would expect, and better left to the pros. In this case, I decided to go with Graph::Easy, which outputs text and SVG.

use Graph::Easy;

my $graph = Graph::Easy->new;

while( my( $job, $deps ) = each %dependencies ) {
    $graph->add_edge( $_ => $job ) for @$deps;
}

print $graph->as_ascii;

And there we go. We put those two parts together in a small script, and we have a handy CLI workflow visualizer.

$ azkaban_flow.pl target/azkaban/foo.job

  +------------------------+
  |                        v
+------+     +-----+     +-----+     +-----+
| zero | --> | baz | --> | bar | --> | foo |
+------+     +-----+     +-----+     +-----+
               |                       ^
               +-----------------------+

Or, for the SVG-inclined,

$ azkaban_flow.pl -f=svg target/azkaban/foo.job

which gives us

[SVG rendering of the workflow graph]
Categories: DBA Blogs

Partner Webcast - The Revolution of Oracle Java 8

Java 8, released in March 2014, is a revolutionary release of the world’s #1 development platform. It is the single largest upgrade ever to the programming model, with coordinated core code evolution...

We share our skills to maximize your revenue!
Categories: DBA Blogs

12c: Fun with WITH!

Pythian Group - Fri, 2014-08-08 11:30

Last night I couldn’t sleep and, well, what else are you going to do? I was thinking about Oracle stuff.

In Oracle version 12, Oracle has enhanced the WITH clause – traditionally used for sub-query factoring – to allow the declaration of functions and procedures. This can be (ab)used to create a very interesting scenario, that is not very common in Oracle: Reading data within the same SELECT statement, but from two different points in time. And the points in time are in the future, and not in the past.

Let’s say I want to take a snapshot of the current SCN, and then another one 5 or 10 seconds after that. Traditionally we’d have to store that somewhere. What if I could take two snapshots – at different SCNs – using a single SELECT statement? Without creating any objects?

col value for a50
set lines 200 pages 99

with  
procedure t (secs in number, scn out varchar2)
  is
    pragma autonomous_transaction;
  begin
    dbms_lock.sleep(secs);
    select 'at ' || to_char(sysdate,'HH24:MI:SS') || ' SCN: ' 
                 || dbms_flashback.get_system_change_number 
      into scn 
      from dual;
  end;
function wait_for_it (secs in number) 
 return varchar2 is
    l_ret varchar2(32767);
  begin
    t(secs, l_ret);
    return l_ret;
  end;
select 1 as time, 'at ' || to_char(sysdate,'HH24:MI:SS') || ' SCN: ' 
                || dbms_flashback.get_system_change_number as value 
  from dual
union all
select 5, wait_for_it(5) from dual
union all
select 10, wait_for_it(5) from dual
/

And the result is:

      TIME VALUE
---------- --------------------------------------------------
         1 at 09:55:49 SCN: 3366336
         5 at 09:55:54 SCN: 3366338
        10 at 09:55:59 SCN: 3366339

We can clearly see that the SCN is different and that the times shown match the intervals we chose, 5 seconds apart. I think there could be some very interesting uses for this. What ideas can you folks come up with?

Categories: DBA Blogs

Log Buffer #383, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-08-08 07:34

This Log Buffer Edition picks a few of the informative blog posts from the Oracle, SQL Server, and MySQL fields of the database world.


Oracle:

g1gc logs – Ergonomics -how to print and how to understand

In Solaris 11.2, svcs gained a new option, “-L”.  The -L option allows a user to easily look at the most recent log events for a service.

ADF Thematic Map component from DVT library was updated in ADF 12c with marker zoom option and area layer styling

When cloning pluggable databases Oracle gives you also SNAPSHOT COPY clause to utilize storage system snapshot capabilities to save on storage space.

It is normal for bloggers including myself to post about the great things they have done.

SQL Server:

In six years Microsoft has come from almost zero corporate knowledge about how cloud computing works to it being an integral part of their strategy.

A brief overview of Columnstore index and its usage with an example.

The Road To Hell – new article from the DBA Team

Encryption brings data into a state which cannot be interpreted by anyone who does not have access to the decryption key, password, or certificates.

How to test what a SQL Server application would do in the past or in the future with date and time differences.

MySQL:

MySQL for Visual Studio 1.2.3 GA has been released

An approach to MySQL dynamic cross-reference query.

The MySQL replication and load balancing plugin for PHP, PECL/mysqlnd_ms, aims to make using a cluster of MySQL servers instead of a single server as transparent as possible.

Picking the Right Clustering for MySQL: Cloud-only Services or Flexible Tungsten Clusters? New webinar-on-demand.

Collation options for new MySQL schemas and tables created in MySQL for Excel

Categories: DBA Blogs