
DBA Blogs

Log Buffer #412, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-02-27 10:58

This Log Buffer Edition makes its way through the realms of Oracle, SQL Server and MySQL and brings you a selection of blog posts.

Oracle:

Introducing Oracle Big Data Discovery Part 3: Data Exploration and Visualization

FULL and NO_INDEX Hints

Base64 Encode / Decode with Python (or WebLogic Scripting Tool) by Frank Munz

Why I’m Excited About Oracle Integration Cloud Service – New Video

Reminder: Upgrade Database 12.1.0.1 to 12.1.0.2 by July 2015

SQL Server:

An article about how we underestimate the power of joins and degrade our query performance by not using proper joins

Most large organizations have implemented one or more big data applications. As more data accumulates, internal users and analysts execute more reports and forecasts, which leads to additional queries and analysis, and more reporting.

How do you develop and deploy your database?

A database must be able to maintain and enforce the business rules and relationships in data in order to maintain the data model.

Error handling with try-catch-finally in PowerShell for SQL Server

MySQL:

MySQL Enterprise Monitor 3.0.20 has been released

MySQL Cluster 7.4 is GA!

Connector/Python 2.1.1 Alpha released with C Extension

Worrying about the ‘InnoDB: detected cycle in LRU for buffer pool (…)’ message?

MySQL Cluster 7.4 GA: 200 Million QPS, Active-Active Geographic Replication and more

Categories: DBA Blogs

Parallel Execution -- 1 The PARALLEL Hint and AutoDoP

Hemant K Chitale - Thu, 2015-02-26 09:29
The behaviour of the PARALLEL Hint has changed subtly but significantly in 11.2. From the documentation:
Beginning with Oracle Database 11g Release 2 (11.2.0.1), the PARALLEL and NO_PARALLEL hints are statement-level hints and supersede the earlier object-level hints .......... If you omit integer, then the database computes the degree of parallelism.

Further down, the documentation states :
This hint overrides the value of the PARALLEL_DEGREE_POLICY initialization parameter. 
and 
PARALLEL: The statement always is run parallel, and the database computes the degree of parallelism, which can be 2 or greater.

It is important to note that the statement-level PARALLEL Hint *overrides* PARALLEL_DEGREE_POLICY.
So, even if PARALLEL_DEGREE_POLICY is set to MANUAL, implying that automatic degree of parallelism is disabled, the PARALLEL Hint itself allows Oracle to auto-compute a DoP.
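
For reference, these are the hint forms being contrasted (t is a placeholder table name):

select /*+ PARALLEL */ count(*) from t;        -- statement-level, DoP computed by Oracle
select /*+ PARALLEL(4) */ count(*) from t;     -- statement-level, explicit DoP
select /*+ PARALLEL(t 4) */ count(*) from t;   -- earlier object-level form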

What further complicates understanding of the behaviour is Oracle Support Note 1269321.1, which implies that if I/O calibration statistics are missing, Oracle does not use the automatic degree of parallelism feature.
So, one would assume that if I/O Calibration has not been done, and the "automatic degree of parallelism" feature is therefore not in use, the PARALLEL Hint would not compute any Auto DoP!
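
(As an aside: if you do want AutoDoP proper, I/O calibration statistics can be gathered with DBMS_RESOURCE_MANAGER.CALIBRATE_IO. A minimal sketch follows; the disk count and latency values are illustrative inputs, not recommendations, and the calibration run takes a while on a real system:)

DECLARE
  l_max_iops       PLS_INTEGER;
  l_max_mbps       PLS_INTEGER;
  l_actual_latency PLS_INTEGER;
BEGIN
  -- 1 physical disk and 20 ms maximum latency are illustrative inputs
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO (
    num_physical_disks => 1,
    max_latency        => 20,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops=' || l_max_iops || ', max_mbps=' || l_max_mbps);
END;
/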

Let's run a simple test case :

HEMANT>select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
PL/SQL Release 11.2.0.2.0 - Production
CORE 11.2.0.2.0 Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

HEMANT>
HEMANT>show parameter parallel;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 40
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 16
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
HEMANT>
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 10488 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 10488 (1)| 00:02:06 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 10488 (1)| 00:02:06 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
2

HEMANT>
HEMANT>select * from dba_rsrc_io_calibrate;

no rows selected

HEMANT>

Apparently, I/O Calibration statistics are not present ("are missing"). And yet, Oracle chose to use 2 PX Servers (not 4, not 1) for the query. Isn't this confusing? Is AutoDoP used or is it not?

Let's make a change somewhere (else ?)

HEMANT>show parameter cpu_count

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 1

HEMANT>

Let's try on a server with more CPUs.  I reconfigure the same VM to run with 4 "CPUs" (cores) and restart the VM and database instance.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Thu Feb 26 23:18:47 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>show parameter cpu

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
cpu_count integer 4
parallel_threads_per_cpu integer 2
resource_manager_cpu_allocation integer 4
SYS>show parameter parallel

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_degree_limit string CPU
parallel_degree_policy string MANUAL
parallel_execution_message_size integer 16384
parallel_force_local boolean FALSE
parallel_instance_group string
parallel_io_cap_enabled boolean FALSE
parallel_max_servers integer 135
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_min_time_threshold string AUTO
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_servers_target integer 64
parallel_threads_per_cpu integer 2
recovery_parallelism integer 0
SYS>

We can now see that CPU_COUNT, PARALLEL_MAX_SERVERS and PARALLEL_SERVERS_TARGET have all gone up.

SYS>connect hemant/hemant
Connected.
HEMANT>set serveroutput off
HEMANT>select degree from user_tables where table_name = 'LARGE_TABLE';

DEGREE
----------------------------------------
4

HEMANT>select /*+ PARALLEL */ count(*) from LARGE_TABLE;

COUNT(*)
----------
4802944

HEMANT>select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 47m7qyrj6uzqn, child number 0
-------------------------------------
select /*+ PARALLEL */ count(*) from LARGE_TABLE

Plan hash value: 2085386270

-----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Cost (%CPU)| Time | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | 2622 (100)| | | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | PX COORDINATOR | | | | | | | |
| 3 | PX SEND QC (RANDOM) | :TQ10000 | 1 | | | Q1,00 | P->S | QC (RAND) |
| 4 | SORT AGGREGATE | | 1 | | | Q1,00 | PCWP | |
| 5 | PX BLOCK ITERATOR | | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWC | |
|* 6 | TABLE ACCESS FULL| LARGE_TABLE | 4802K| 2622 (1)| 00:00:32 | Q1,00 | PCWP | |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

6 - access(:Z>=:Z AND :Z<=:Z)

Note
-----
- automatic DOP: skipped because of IO calibrate statistics are missing


27 rows selected.

HEMANT>
HEMANT>select px_servers_executions from v$sqlstats where sql_id='47m7qyrj6uzqn';

PX_SERVERS_EXECUTIONS
---------------------
8

HEMANT>

Aaha! The same SQL statement, the same SQL_ID, the same Execution Plan (Plan Hash Value), and yet Oracle chose to use 8 (not 1, not 4, not 2) PX servers for the query!

So, there IS some way that AutoDoP is being used even when I/O calibration statistics are missing. Is this AutoDoP simply a function of CPU_COUNT x PARALLEL_THREADS_PER_CPU?
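
The numbers observed here are at least consistent with that formula: 1 CPU x 2 threads gave 2 PX servers on the first run, and 4 CPUs x 2 threads gave 8 on the second. A quick way to pull the two inputs (a sketch; the SHOW PARAMETER output above carries the same information):

-- the two inputs to the suspected formula: DoP = cpu_count * parallel_threads_per_cpu
select name, value
from   v$parameter
where  name in ('cpu_count', 'parallel_threads_per_cpu');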


Categories: DBA Blogs

Virtual CPUs with Google Compute Engine

Pythian Group - Wed, 2015-02-25 12:12

Continuing my series on virtual CPU behavior in Amazon Web Services, Amazon Web Service HVM instances, and Microsoft Azure, I'm taking a look at Google Compute Engine (GCE). GCE is a relative newcomer to the public cloud world, having become generally available in December 2013. It does have some interesting features, including transparent maintenance through live migration, and automatic sustained-use discounts without upfront commitments.

Unlike Amazon or Microsoft, Google is very upfront about their vCPU definition.

For the n1 series of machine types, a virtual CPU is implemented as a single hyperthread on a 2.6GHz Intel Sandy Bridge Xeon or Intel Ivy Bridge Xeon (or newer) processor. This means that the n1-standard-2 machine type will see a whole physical core.

I still believe calling such a hyperthread a “virtual CPU” is misleading. When creating a virtual machine in a non-cloud VM platform, 1 virtual CPU = 1 physical core. Plain and simple. But when using a cloud platform, I need 2 virtual CPUs to get that same physical core.

(Image: cpu-comparison)

Anyways, off to run some CPU tests. n1-standard-4 is a close match to the m3.xlarge instances previously tested, so I’ll try that.

Getting set up on Google Compute Engine

I already signed up for Google Compute Engine's free trial and created a project I'm calling marc-cputest. Then I installed the gcloud compute command-line tools.

[marc@quartz ~]$ gcloud auth login --no-launch-browser
Go to the following link in your browser:

https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%(redacted)&access_type=offline

Enter verification code: (redacted)
Saved Application Default Credentials.

You are now logged in as [fielding@pythian.com].
Your current project is [None].  You can change this setting by running:
  $ gcloud config set project PROJECT
[marc@quartz ~]$ gcloud config set project marc-cputest
[marc@quartz ~]$ gcloud config set compute/zone us-central1-a
[marc@quartz ~]$ gcloud compute instances create cpu-test-n4 --image centos-6 --machine-type "n1-standard-4" --zone us-central1-a
Created [https://www.googleapis.com/compute/v1/projects/marc-cputest/zones/us-central1-a/instances/cpu-test-n4].
NAME        ZONE          MACHINE_TYPE  INTERNAL_IP    EXTERNAL_IP   STATUS
cpu-test-n4 us-central1-a n1-standard-4 10.240.222.194 104.154.75.96 RUNNING
[marc@quartz ~]$ gcloud compute ssh cpu-test-n4
WARNING: You do not have an SSH key for Google Compute Engine.
WARNING: [/usr/bin/ssh-keygen] will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/marc/.ssh/google_compute_engine.
Your public key has been saved in /home/marc/.ssh/google_compute_engine.pub.
The key fingerprint is:
(redacted)
Updated [https://www.googleapis.com/compute/v1/projects/marc-cputest].
Warning: Permanently added '104.154.75.96' (RSA) to the list of known hosts.
Warning: Permanently added '104.154.75.96' (RSA) to the list of known hosts.

OK, instance all set and connected. As a CentOS 6 image it doesn’t allow SSH root logins by default, so attempting to set up a gcloud environment as a root user will get you “permission denied” errors on SSH. Serves me right for trying to run these tools as root in the first place :-).

Looking around

Checking what they gave us:

[marc@cpu-test-n4 ~]$ egrep '(processor|model name|cpu MHz|physical id|siblings|core id|cpu cores)' /proc/cpuinfo
processor       : 0
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
processor       : 1
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 2
processor       : 2
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2
processor       : 3
model name      : Intel(R) Xeon(R) CPU @ 2.60GHz
cpu MHz         : 2600.000
physical id     : 0
siblings        : 4
core id         : 1
cpu cores       : 2

Google has redacted the exact CPU model numbers, but has clearly marked this as a 2-core system with core IDs 0 and 1.

The single-CPU case
[marc@cpu-test-n4 ~]$ taskset -pc 0 $$
pid 1558's current affinity list: 0-3
pid 1558's new affinity list: 0
[marc@cpu-test-n4 ~]$  dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null
2170552320 bytes (2.2 GB) copied, 14.3427 s, 151 MB/s
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 29.3081 s, 74.1 MB/s
2170552320 bytes (2.2 GB) copied, 29.3065 s, 74.1 MB/s

We get a nice boost in raw CPU throughput compared with the 120 MB/s I saw in AWS. With two processes sharing this CPU, we see a tiny bit less than half the single-process throughput.

Sharing the cores (or trying to)
[marc@cpu-test-n4 ~]$ taskset -pc 0,1 $$
pid 1558's current affinity list: 0
pid 1558's new affinity list: 0,1
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
[1] 1803
[2] 1805
[marc@cpu-test-n4 ~]$ 2170552320 bytes (2.2 GB) copied, 14.6959 s, 148 MB/s
2170552320 bytes (2.2 GB) copied, 14.7139 s, 148 MB/s

This is interesting; we see almost full-speed throughput even though processors 0 and 1 are said to share a core. With processors 0 and 2 the situation is the same.

[marc@cpu-test-n4 ~]$ taskset -pc 0,2 $$
pid 1558's current affinity list: 0,1
pid 1558's new affinity list: 0,2
[marc@cpu-test-n4 ~]$ for i in {1..2}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
[1] 1830
[2] 1833
[marc@cpu-test-n4 ~]$ 2170552320 bytes (2.2 GB) copied, 14.6683 s, 148 MB/s
2170552320 bytes (2.2 GB) copied, 14.6692 s, 148 MB/s

Is the CPU scheduler ignoring my taskset commands? Running mpstat 2-second samples during the test to see actual CPU usage:

[marc@cpu-test-n4 ~]$ mpstat -P ALL 2
...
06:08:44 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
06:08:46 PM  all   46.31    0.00    3.75    0.00    0.00    0.00    0.00    0.00   49.94
06:08:46 PM    0   93.50    0.00    6.50    0.00    0.00    0.00    0.00    0.00    0.00
06:08:46 PM    1   92.00    0.00    8.00    0.00    0.00    0.00    0.00    0.00    0.00
06:08:46 PM    2    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
06:08:46 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
...
06:08:52 PM  CPU    %usr   %nice    %sys %iowait    %irq   %soft  %steal  %guest   %idle
06:08:54 PM  all   46.75    0.00    3.25    0.00    0.00    0.00    0.00    0.00   50.00
06:08:54 PM    0   93.47    0.00    6.53    0.00    0.00    0.00    0.00    0.00    0.00
06:08:54 PM    1    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00
06:08:54 PM    2   93.50    0.00    6.50    0.00    0.00    0.00    0.00    0.00    0.00
06:08:54 PM    3    0.00    0.00    0.00    0.00    0.00    0.00    0.00    0.00  100.00

So the taskset commands are working: when we ask for CPUs 0 and 1, we are getting them, but throughput shows that cores aren't actually being shared. It means that the CPUs in the virtual machine are not statically bound to hardware threads as seen under AWS. I'd call it a win, as it gives more consistent performance even if the guest operating system is forced to make poor CPU scheduling decisions, as in this case.

[marc@cpu-test-n4 ~]$ taskset -pc 0-3 $$
pid 1558's current affinity list: 0,2
pid 1558's new affinity list: 0-3
[marc@cpu-test-n4 ~]$ for i in {1..4}; do dd if=/dev/zero bs=1M count=2070 2> >(grep bytes >&2 ) | gzip -c > /dev/null & done
2170552320 bytes (2.2 GB) copied, 22.9823 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 22.9914 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 22.9915 s, 94.4 MB/s
2170552320 bytes (2.2 GB) copied, 23.1333 s, 93.8 MB/s

This is more the throughput we would expect with two shared cores.

Lessons learned

Over the course of these tests, I’ve discovered a few things:

  • Like its competitors, Google sells virtual CPUs that are really hyperthreads, but unlike them it is very upfront about this behavior.
  • Actual throughput for a simple gzip workload is excellent.
  • Google Compute Engine has an abstraction layer in front of CPUs that dynamically schedules tasks between CPU threads, in addition to the regular scheduler in the virtual machine. In my testing, it allocates tasks efficiently across CPU cores, even when the OS scheduler is configured suboptimally.
Categories: DBA Blogs

Exadata X5 – A Practical Point of View of the New Hardware and Licensing

Pythian Group - Wed, 2015-02-25 12:10

Oracle recently announced its latest iteration of Exadata – X5-2. It includes a refresh of the hardware to the most recent Xeon® E5-2699 v3 CPUs. These new CPUs boost the total core count in a full rack to 288. This is higher than the current 8-socket "big machine" version X4-8, which has only 240 cores.

But the most exciting part is the all-flash version of Exadata. In the previous generation – X4 – Oracle had to switch from 15K drives to 10K drives in order to boost capacity from 600 GB to 1200 GB per hard drive and keep disk space higher than flash cache size. At the time of the X4 announcement, we were already wondering why Oracle was still offering high-speed disks instead of switching to all flash, and now we know why: that type of high-performance flash wasn't quite ready.

Maintaining high IO rates over long periods of time required changes to the ILOM in order to manage cooling fan speed based on many individual temperature sensors inside the flash cards (details). Removing the SAS controller and using the new NVMe connectivity resulted in much higher bandwidth per drive – 3.2 GBytes/sec vs. the old 1.2 GBytes/sec SAS.

With temperature and bandwidth sorted out, we now have a super-high-performance option (EF – Extreme Flash) for Exadata, which delivers a stunning 263 GB/sec uncompressed scan speed in a full rack. The performance gap between the High Capacity and the Extreme Flash options is now much wider, which finally makes the high-performance option viable: in Exadata X4 it made so little difference that it was pointless.

(Image: x4 vs x5)

The one thing I wonder about with the X5 announcement is why the X5-2 storage server still uses the quite outdated 8-core CPUs. I've seen many cases where a Smart Scan on an HCC table is CPU bound on the storage server, even when reading from spinning disk. I am going to guess that there's some old CPU inventory to clean up. But that may not end up being such a problem (see the "all columnar" flash cache feature).

But above all, the most important change is the incremental licensing option. With 36 cores per server, even the 1/8th rack configuration ran to multiple millions of dollars in licenses, and in many cases was too much for the problem at hand.

The new smallest configuration is:

  • 1/8th rack, with 2 compute nodes
  • 8 cores enabled per compute node (16 total)
  • 256 GB RAM per node (upgradable to 768 GB per node)
  • 3 storage servers with only half the cores, disks and flash enabled

Then you can license additional cores as you need them, 2 cores at a time, similar to how the ODA licensing option works. You cannot reduce the number of licensed cores.

The licensing rule changes go even further. Now you can mix and match compute and storage servers to create even more extreme options. Some non-standard examples:

  • Extreme Memory – more compute nodes with max RAM, reduced licensed cores
  • Extreme Storage – replace compute node with storage nodes, reduced licensed cores

(Image: x5 custom)
Link to video

In conclusion, the Oracle Exadata X5 configuration options and the changes they bring to licensing allow an architect to craft a system that will meet any need and allow for easy, small-step increments in the future, potentially without any hardware changes.

There are many more exciting changes in Oracle 12c, Exadata X5 and the new storage server software which I may cover in the future as I explore them in detail.

Categories: DBA Blogs

Log Buffer #411, A Carnival of the Vanities for DBAs

Pythian Group - Wed, 2015-02-25 12:00

This Log Buffer Edition brings you some blog posts from Oracle, SQL Server and MySQL.

Oracle:

Suppose you have a global zone with multiple zpools that you would like to convert into a native zone.

The digital revolution is creating abundance in almost every industry—turning spare bedrooms into hotel rooms, low-occupancy commuter vehicles into taxi services, and free time into freelance time.

Every time I attend a conference, the Twitter traffic about said conference is obviously higher.  It starts a couple weeks or even months before, builds steadily as the conference approaches, and then hits a crescendo during the conference.

Calling All WebLogic Users: Please Help Us Improve WebLogic Documentation!

Top Two Cloud Security Concerns: Data Breaches and Data Loss

SQL Server:

This article describes a way to identify the user who truncated a table, and how you can recover the data.

When SQL Server 2014 was released, it included Hekaton, Microsoft’s much talked about memory-optimized engine that brings In-Memory OLTP into play.

Learn how you can easily spread your backup across multiple files.

Daniel Calbimonte has written a code comparison for MariaDB vs. SQL Server as it pertains to how to comment, how to create functions and procedures with parameters, how to store query results in a text file, how to show the top n rows in a query, how to use loops, and more.

This article shows a simple way to schedule index rebuilds and reorganizations for a SQL Server instance with 106 databases used by one application, using a scheduled job.

MySQL:

How to setup a PXC cluster with GTIDs (and have async slaves replicating from it!)

vCloud Air and business-critical MySQL

MySQL Dumping and Reloading the InnoDB Buffer Pool

How to benchmark MongoDB

MySQL Server on SUSE 12

Categories: DBA Blogs

SQL Server 2014 Cumulative Update 6

Pythian Group - Wed, 2015-02-25 11:59

Hello everyone,

Just a quick note to let you know that this week, while most of North America was enjoying a break, Microsoft released the 6th cumulative update for SQL Server 2014. This update contains fixes for 64 different issues, distributed as follows:

(Image: SQL 2014 Cumulative Update 6 – distribution of fixes)

As the name implies, this is a cumulative update, which means it is not necessary to install the previous five if you don't have them. Please remember to test any update thoroughly before applying it to production.
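
(A quick way to confirm the build before and after patching is the standard SERVERPROPERTY function; a minimal check:)

-- returns the build number and patch level of the running instance
SELECT SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS product_level;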

The cumulative update and the full release notes can be found here: https://support.microsoft.com/kb/3031047/en-us?wa=wsignin1.0

Categories: DBA Blogs

Partner Webcast – Oracle Business Process Management 12c : The Game Changer for your Business

The Oracle Business Process Management Suite 12c (BPM) is one of the most complete and feature-rich BPM suite offerings in the market. There have been a wide variety of changes...

Categories: DBA Blogs

Speaking at Delphix User Group Webex March 11

Bobby Durrett's DBA Blog - Tue, 2015-02-24 17:42

On March 11 at 10 am California time I will be speaking in a Delphix User Group Webex session.

Here is the sign up url: WebEx sign up.

Adam Leventhal, the Delphix CTO, will also be on the call previewing the new Delphix 4.2 features.

I will describe our experience with Delphix and the lessons we have learned. It is a technical talk, so it should have enough detail to be of value to a technical audience. Hopefully I have put enough effort into the talk to make it useful to other people who have Delphix or are considering getting it.

There will be time for questions and answers in addition to our talks.

I really enjoy doing user group presentations. I had submitted an abstract for this talk to the Collaborate 2015 Oracle user group conference, but it was not accepted, so I won't get a chance to give it there. But this WebEx event gives me a chance to present the same material, so I'm happy to have the opportunity.

If you have an interest in hearing about Delphix, join the call. It is free and there will be some good technical content.

– Bobby

P.S. If this talk interests you I also have some earlier blog posts that relate to some of the material I will be covering:

Delphix first month

Delphix direct i/o and direct path reads

Delphix table recovery

Also, I plan to post the slides after the talk.

Categories: DBA Blogs

Oracle REST data services on Oracle Database Cloud Application Express

For those familiar with Oracle Application Express (aka APEX) – Oracle's web-based application development tool – you probably know that Oracle Application Express Listener is now known...

Categories: DBA Blogs

Final version of RMOUG 2015 presentation and script files are here ...

Grumpy old DBA - Sat, 2015-02-21 16:46
Wow, I had a great time presenting at RMOUG 2015. The room was absolutely packed and I felt the presentation went pretty well (so let's see what the evaluation numbers show).

Here is a link (final version of the presentation and supporting files) to a zip file containing the final version of the presentation, as well as the script and SQL file shown at the end of the presentation. Please let me know if anyone has any problems using this – and sorry, I thought I would get this updated and available by Thursday.

Thanks so much to RMOUG!

John
Categories: DBA Blogs

Data Guard Logical Standby – what does it mean?

The Oracle Instructor - Fri, 2015-02-20 11:34

With Data Guard, you have the choice between Physical and Logical Standby databases. Let's see the differences! My demo starts with a Physical Standby, which is then converted into a Logical Standby (hence the name of the database):

[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
    prima - Primary database
    logst - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

For now, logst is still a Physical Standby. It is called that way, because the datafiles of prima and logst are physically identical. I can even restore them from one side to the other:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 11:43:07 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > alter database datafile 4 offline;

Database altered.

Now I copy the datafile from the standby server uhesse2 to the primary server uhesse1 – there are different ways to do that, but scp is one:

SYS@logst > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/logst/users01.dbf

SYS@logst > exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
[oracle@uhesse2 ~]$ scp /u01/app/oracle/oradata/logst/users01.dbf uhesse1:/u01/app/oracle/oradata/prima/users01.dbf
The authenticity of host 'uhesse1 (192.168.56.10)' can't be established.
RSA key fingerprint is e9:e7:5b:8b:b2:33:42:26:89:03:54:0c:16:0d:98:57.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'uhesse1,192.168.56.10' (RSA) to the list of known hosts.
oracle@uhesse1's password: 
users01.dbf                                                                                               100% 5128KB   5.0MB/s   00:00    
[oracle@uhesse2 ~]$ 

When I try to online the datafile again on prima, it is as if I had restored it from a backup:

SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > recover datafile 4;
Media recovery complete.
SYS@prima > alter database datafile 4 online;

Database altered.

The datafiles and also the archived logfiles are physically identical on both sites here; only the controlfiles are different. v$database (like v$datafile, by the way) derives its content from the controlfile:

SYS@prima > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
PRIMA						   2012613220 PRIMARY

SYS@prima > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
PRIMA						   2012613220 PHYSICAL STANDBY

Now I will convert it into Logical Standby:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 17:29:16 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > exec dbms_logstdby.build

PL/SQL procedure successfully completed.

SYS@prima > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > alter database recover to logical standby logst;

Database altered.

SYS@logst > shutdown immediate
ORA-01507: database not mounted


ORACLE instance shut down.
SYS@logst > startup mount
ORACLE instance started.

Total System Global Area  521936896 bytes
Fixed Size		    2214936 bytes
Variable Size		  314573800 bytes
Database Buffers	  201326592 bytes
Redo Buffers		    3821568 bytes
Database mounted.
SYS@logst > alter database open resetlogs;

Database altered.

SYS@logst > select name,dbid,database_role from v$database;

NAME							 DBID DATABASE_ROLE
-------------------------------------------------- ---------- ----------------
LOGST						   3156487356 LOGICAL STANDBY

SYS@logst > exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.1.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> remove database logst;
Removed database "logst" from the configuration
DGMGRL> add database logst as connect identifier is logst;
Database "logst" added
DGMGRL> enable database logst;
Enabled.
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxPerformance
  Databases:
    prima - Primary database
    logst - Logical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

One significant change is that the DBID and the name are now different from the primary database, as you see above. And the datafiles are no longer physically identical:

DGMGRL> edit database logst set state=apply-off;
Succeeded.
DGMGRL> exit
[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Fri Feb 20 17:38:56 2015

Copyright (c) 1982, 2009, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production

SYS@prima > alter database datafile 4 offline;

Database altered.

SYS@prima > select name from v$datafile where file#=4;

NAME
--------------------------------------------------
/u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > host cp /u01/app/oracle/oradata/prima/users01.dbf /u01/app/oracle/oradata/prima/users01.old

I copy the original file because I know that the restore from logst will not work. It is just to make my point:

[oracle@uhesse2 ~]$ scp /u01/app/oracle/oradata/logst/users01.dbf uhesse1:/u01/app/oracle/oradata/prima/users01.dbf
oracle@uhesse1's password: 
users01.dbf                                                                                   100% 5128KB   5.0MB/s   00:00    
SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01122: database file 4 failed verification check
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'
ORA-01206: file is not part of this database - wrong database id

Exactly. logst is now an autonomous database that just happens to be doing (nearly) the same DML as prima. It is no longer identical to prima block by block, and the rowids from prima have no meaning on logst any more:

DGMGRL> edit database logst set state=apply-on;
Succeeded.
SYS@prima > insert into scott.dept values (50,'TEST','TEST');
insert into scott.dept values (50,'TEST','TEST')
                  *
ERROR at line 1:
ORA-00376: file 4 cannot be read at this time
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > host cp /u01/app/oracle/oradata/prima/users01.old /u01/app/oracle/oradata/prima/users01.dbf

SYS@prima > alter database datafile 4 online;
alter database datafile 4 online
*
ERROR at line 1:
ORA-01113: file 4 needs media recovery
ORA-01110: data file 4: '/u01/app/oracle/oradata/prima/users01.dbf'


SYS@prima > recover datafile 4;
Media recovery complete.
SYS@prima > alter database datafile 4 online;

Database altered.

SYS@prima > insert into scott.dept values (50,'TEST','TEST');

1 row created.

SYS@prima > commit;
Commit complete.

SYS@prima > select rowid,dept.* from scott.dept where deptno=50;

ROWID		       DEPTNO DNAME	     LOC
------------------ ---------- -------------- -------------
AAADS8AAEAAAACNAAE	   50 TEST	     TEST

This rowid is what we normally record in the redo entries and it would be sufficient to retrieve that row on the primary and also on a physical standby where we do “Redo Apply” (another term for “recover database”). But that rowid is different on logst:

SYS@logst > connect sys/oracle@logst as sysdba
Connected.
SYS@logst > select rowid,dept.* from scott.dept where deptno=50;

ROWID		       DEPTNO DNAME	     LOC
------------------ ---------- -------------- -------------
AAADS8AAEAAAACOAAA	   50 TEST	     TEST

That is why we need to put additional information – supplemental log data – into the redo entries on the primary. It will help the SQL Apply mechanism to retrieve the row there:

(Image: Logical Standby architecture)

The supplemental log data additionally contains at least the primary/unique key, as shown in the picture. In the absence of primary/unique keys, every column of a modified row is written into the redo logs, which may impact the performance of the primary database. Another serious drawback of Logical Standby is that not every datatype and not every operation on the primary is supported by the SQL Apply mechanism. The number of unsupported datatypes decreases version by version, though.
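
(As a side note, you can see at the primary what dbms_logstdby.build switched on. A minimal check against v$database, together with the statement you would use to request key-level supplemental logging manually; a sketch of standard 11.2 syntax:)

-- is minimal / primary-key / unique-key supplemental logging enabled?
select supplemental_log_data_min,
       supplemental_log_data_pk,
       supplemental_log_data_ui
from   v$database;

-- the manual way to enable key-level supplemental logging
alter database add supplemental log data (primary key, unique) columns;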

The demo and the sketch above are from my presentation about Transient Logical Standby at the Oracle University Expert Summit 2015 in Dubai – really an amazing location! Hope you find it useful :-)


Tagged: Data Guard
Categories: DBA Blogs

Database Flashback -- 5

Hemant K Chitale - Fri, 2015-02-20 10:15
Continuing my series on Database Flashback.

Here I demonstrate that Flashback Logs alone are not sufficient.  The process of FLASHBACK DATABASE does need *some* redo entries from Archive/Online Redo Logs.



[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Fri Feb 20 23:18:37 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
20-FEB-15

SYS>show parameter db_flashback_retention_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
14573526 17-FEB-15 1440 78561280 415383552

SYS>

My database seems to have adequate Flashback Capability in the Flashback Logs.  Let me run some transactions and generate Redo and Flashback.

SYS>connect hemant/hemant
Connected.
HEMANT>drop table obj_list;
drop table obj_list
*
ERROR at line 1:
ORA-00942: table or view does not exist


HEMANT>create table obj_list tablespace users
2 as select * from dba_objects where 1=2;

Table created.

HEMANT>select tablespace_name
2 from user_tables
3 where table_name = 'OBJ_LIST';

TABLESPACE_NAME
------------------------------
USERS

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects;

225138 rows created.

HEMANT>rollback;

Rollback complete.

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects;

225135 rows created.

HEMANT>delete obj_list;

225135 rows deleted.

HEMANT>
HEMANT>select count(*)
2 from v$archived_log
3 where first_time >
4 (select startup_time
5 from v$instance)
6 /

COUNT(*)
----------
2

HEMANT>
HEMANT>alter system switch logfile;

System altered.

HEMANT>insert into obj_list
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 union all
8 select * from dba_objects;

300180 rows created.

HEMANT>insert into obj_list
2 select * from obj_list;

300180 rows created.

HEMANT>select count(*) from obj_list;

COUNT(*)
----------
600360

HEMANT>commit;

Commit complete.

HEMANT>select count(*) from v$archived_log
2 where first_time >
3 (select startup_time from v$instance)
4 /

COUNT(*)
----------
4

HEMANT>

Now, let's suppose that a scheduled (periodic) archive log backup job kicks in and creates a backup of archivelogs and deletes them.

HEMANT>exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:30:36 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN> backup as compressed backupset archivelog all delete input;

Starting backup at 20-FEB-15
current log archived
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=58 device type=DISK
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=106 STAMP=872033779
channel ORA_DISK_1: starting piece 1 at 20-FEB-15
channel ORA_DISK_1: finished piece 1 at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnshxs_.bkp tag=TAG20150220T233055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:04
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_18/o1_mf_1_3_bg9dcl3v_.arc RECID=106 STAMP=872033779
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=3 RECID=112 STAMP=872205844
input archived log thread=1 sequence=4 RECID=113 STAMP=872205859
input archived log thread=1 sequence=5 RECID=114 STAMP=872205867
input archived log thread=1 sequence=6 RECID=115 STAMP=872206048
input archived log thread=1 sequence=7 RECID=116 STAMP=872206098
input archived log thread=1 sequence=8 RECID=117 STAMP=872206254
channel ORA_DISK_1: starting piece 1 at 20-FEB-15
channel ORA_DISK_1: finished piece 1 at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_3_bggndmz4_.arc RECID=112 STAMP=872205844
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_4_bggnf0qy_.arc RECID=113 STAMP=872205859
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_5_bggnf8ty_.arc RECID=114 STAMP=872205867
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_6_bggnlzvr_.arc RECID=115 STAMP=872206048
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_7_bggnnk6c_.arc RECID=116 STAMP=872206098
archived log file name=/home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_8_bggnsg5k_.arc RECID=117 STAMP=872206254
Finished backup at 20-FEB-15

Starting Control File and SPFILE Autobackup at 20-FEB-15
piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/autobackup/2015_02_20/o1_mf_s_872206274_bggnt34k_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 20-FEB-15

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$

Now, suppose that I need to Flashback the database.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Fri Feb 20 23:32:36 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
14577571 19-FEB-15 1440 135348224 3784998912

SYS>select to_char(oldest_flashback_time,'DD-MON HH24:MI:SS')
2 from V$flashback_database_log;

TO_CHAR(OLDEST_FLASHBACK
------------------------
19-FEB 00:04:02

SYS>

Notice how the OLDEST_FLASHBACK_TIME has changed from 17-Feb to midnight of 18/19-Feb! Apparently, my FRA cannot hold very many Flashback Logs.
As I have mentioned in two earlier posts, here and here, the scope of the actual ability to Flashback may vary.

Can I flashback to SCN 14577572? Let me give it a try.

SYS>shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>flashback database to SCN 14577572;
flashback database to SCN 14577572
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577571 to SCN 14577572
ORA-38761: redo log sequence 1 in thread 1, incarnation 5 could not be accessed


SYS>

Aaha! Apparently, Oracle needs to read some redo from Archive Log(s)! So, having Flashback Logs alone is *not* sufficient. I know that I need the database to be running in ArchiveLog mode. But I should also know that if I want to Flashback to a particular Time or SCN, I will need the corresponding ArchiveLog as well! (Imagine trying to Flashback to 3 days ago, having purged all ArchiveLogs, simply because I do daily Full Backups and have a Retention of only 2 days!)
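
(That purge scenario corresponds to a deletion policy along these lines; illustrative RMAN commands, not a recommendation:)

# keep backups needed to recover to any point within the last 2 days
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 2 DAYS;
# purge archivelogs older than 2 days
DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-2';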

So, I must take the necessary action now.

SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:43:43 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 1 until sequence 2;

Starting restore at 20-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=1
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001619_bg9gpq87_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001619_bg9gpq87_.bkp tag=TAG20150219T001619
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=2
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001930_bg9gwl9w_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_19/o1_mf_annnn_TAG20150219T001930_bg9gwl9w_.bkp tag=TAG20150219T001930
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 20-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14577572
SYS>/

Flashback complete.

SYS>alter database open read only;

Database altered.

SYS>

What if I want to Flashback to another, later time ?

SYS>select sequence#, first_change#, to_char(first_time,'DD-MON HH24:MI:SS')
2 from v$archived_log
3 where first_time > trunc(sysdate)
4 order by 1;

SEQUENCE# FIRST_CHANGE# TO_CHAR(FIRST_TIME,'DD-M
---------- ------------- ------------------------
4 14580736 20-FEB 23:24:03
5 14581651 20-FEB 23:24:16
6 14583536 20-FEB 23:24:23
7 14584203 20-FEB 23:27:27
8 14584351 20-FEB 23:28:17

SYS>
SYS>shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>flashback database to SCN 14584205;
flashback database to SCN 14584205
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577573 to SCN 14584205
ORA-38761: redo log sequence 5 in thread 1, incarnation 5 could not be accessed


SYS>
SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri Feb 20 23:53:48 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 5 until sequence 6;

Starting restore at 20-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=21 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=5
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=6
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 20-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ cd /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20
sh-3.2$ ls -l
total 63344
-rw-rw---- 1 oracle oracle 494592 Feb 20 23:44 o1_mf_1_1_bggol5t8_.arc
-rw-rw---- 1 oracle oracle 18432 Feb 20 23:44 o1_mf_1_2_bggol6wm_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 20 23:54 o1_mf_1_5_bggp4zbp_.arc
-rw-rw---- 1 oracle oracle 13635072 Feb 20 23:54 o1_mf_1_6_bggp4zh3_.arc
sh-3.2$

I have Sequences 1 and 2 that were restored for the first flashback, and Sequences 5 and 6 that have been restored now. Do I need Sequences 3 and 4? Do I need Sequence 7 (which contains the Redo beyond SCN 14584203)?

Let's see.
sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14584205
SYS>/
flashback database to SCN 14584205
*
ERROR at line 1:
ORA-38754: FLASHBACK DATABASE not started; required redo log is not available
ORA-38762: redo logs needed for SCN 14577573 to SCN 14584205
ORA-38761: redo log sequence 3 in thread 1, incarnation 5 could not be accessed


SYS>
Apparently, I also need Sequence 3, at least, and (maybe?) Sequence 4! Why? Because my database is currently at an SCN lower than the one I want to Flashback to, and the corresponding redo is required. (If I had *not* done the first Flashback to the lower SCN, I wouldn't need these ArchiveLogs!)

SYS>!sh
sh-3.2$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Sat Feb 21 00:01:34 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> restore archivelog from sequence 3 until sequence 4;

Starting restore at 21-FEB-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=17 device type=DISK

channel ORA_DISK_1: starting archived log restore to default destination
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=3
channel ORA_DISK_1: restoring archived log
archived log thread=1 sequence=4
channel ORA_DISK_1: reading from backup piece /home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp
channel ORA_DISK_1: piece handle=/home/oracle/app/oracle/flash_recovery_area/ORCL/backupset/2015_02_20/o1_mf_annnn_TAG20150220T233055_bggnsmgc_.bkp tag=TAG20150220T233055
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 21-FEB-15

RMAN> exit


Recovery Manager complete.
sh-3.2$
sh-3.2$ cd /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20
sh-3.2$ ls -l
total 63344
-rw-rw---- 1 oracle oracle 494592 Feb 20 23:44 o1_mf_1_1_bggol5t8_.arc
-rw-rw---- 1 oracle oracle 18432 Feb 20 23:44 o1_mf_1_2_bggol6wm_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 20 23:54 o1_mf_1_5_bggp4zbp_.arc
-rw-rw---- 1 oracle oracle 13635072 Feb 20 23:54 o1_mf_1_6_bggp4zh3_.arc
sh-3.2$ cd ../*21
sh-3.2$ ls -l
total 80272
-rw-rw---- 1 oracle oracle 31472640 Feb 21 00:01 o1_mf_1_3_bggpmf06_.arc
-rw-rw---- 1 oracle oracle 50630144 Feb 21 00:01 o1_mf_1_4_bggpmdxk_.arc
sh-3.2$

Here is something important (nothing to do with Flashback Database): the server clock went past midnight into 21-Feb, so the restored files went into 2015_02_21 and not 2015_02_20!

sh-3.2$ exit
exit

SYS>l
1* flashback database to SCN 14584205
SYS>/

Flashback complete.

SYS>

Remember my question about whether I would need Sequence 7? Let's see what the alert.log shows about the Flashback Database actions.

Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_2_bggol6wm_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_21/o1_mf_1_3_bggpmf06_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_21/o1_mf_1_4_bggpmdxk_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_5_bggp4zbp_.arc
Flashback Media Recovery Log /home/oracle/app/oracle/flash_recovery_area/ORCL/archivelog/2015_02_20/o1_mf_1_6_bggp4zh3_.arc
Recovery of Online Redo Log: Thread 1 Group 1 Seq 7 Reading mem 0
Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo01.log
Incomplete Recovery applied until change 14584206 time 02/20/2015 23:28:03
Flashback Media Recovery Complete
Completed: flashback database to SCN 14584205

Another learning point! Sequence 7 was applied from the Online Redo Log file.

What we have learned:
1. If we Flashback the database to a particular SCN / Time / Restore Point, Oracle does need the Redo from the Archive / Online Redo Log file that was active at that time. It still needs some Redo to make the database consistent (e.g. to apply Undo).

2. If we Flashback the database to SCN 101 and then (without OPEN RESETLOGS) choose to Flashback to a subsequent SCN 201, we again need ArchiveLogs!

3. Flashback from the SQL command line is intelligent enough to use the Online Redo Log, but it does not (unlike, say, RMAN's RECOVER DATABASE) automatically restore ArchiveLogs as they are required!
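
A practical corollary: before issuing FLASHBACK DATABASE, it is worth checking which archived logs cover the span you need and whether they are still on disk. A sketch, using a hypothetical bind variable :target_scn for the SCN you want to flash back to:

-- archived logs covering the range from the oldest flashback SCN up to the target SCN
select sequence#, first_change#, next_change#, name, deleted
from   v$archived_log
where  next_change# >= (select oldest_flashback_scn from v$flashback_database_log)
and    first_change# <= :target_scn
order  by sequence#;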




Categories: DBA Blogs

Oracle MAF - assigning value to a backing bean property from javascript

Communicating between layers in MAF is quite easy with the provided APIs. For calling a Java method from JavaScript we have the invokeMethod JS function, which is quite straightforward: ...

Categories: DBA Blogs

Test MySQL on AWS quickly

Kubilay Çilkara - Thu, 2015-02-19 02:20
Using sysbench to performance test AWS RDS MySQL hardware is an easy three-step operation. Sysbench creates synthetic tests, which run against a 1-million-row 'sbtest' table that sysbench creates in the MySQL database you indicate. The test doesn't intrude on your database schema and it doesn't use your data, so it is quite safe. It is an OLTP test that simulates event operations in the database, running various SELECT, INSERT, UPDATE and DELETE requests on its own 'sbtest' table. The results of the tests are metrics like transactions per second, number of events, elapsed time, etc. See the man pages for a full description; it is a popular testing tool. You can also control how many requests (events) you want it to execute in a given time, or tell it to keep executing requests until you stop it – destruction testing. It is a very flexible testing tool with many options, including throttling concurrency.

You can be up and running with 3 commands on a Unix system as follows.

Download the sysbench tool (doing this on Ubuntu):

sudo apt-get install sysbench

Create a table with 1 mil rows:

sysbench --test=oltp --oltp-table-size=1000000 --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} prepare

Test with different parameters:

sysbench --test=oltp --oltp-table-size=1000000 --mysql-host={your rds host url} --db-driver=mysql --mysql-user={your rds root user} --mysql-password={password} --mysql-db={your mysql database name} --max-time=60 --num-threads=550 run

Warning: Synthetic tests will just show you the capability of the hardware under a given standard set of requests and DML operations. They are in no way an indication of what will happen to your database if the real workload increases because of the applications. Application load testing is something else; applications are complex! Database workload depends on the application-generated workload from real users using the system, and that is very hard to simulate in a test. It is not impossible if you use a database such as Oracle, which can record and replay its production database workload (Database Replay, with workload statistics kept in the Automatic Workload Repository, AWR). In MySQL I couldn't find a way to do this so far. But sysbench synthetic tests gave me the ability to quickly benchmark and baseline a MySQL database's capabilities on different AWS Amazon hardware – something is better than nothing, I suppose.
Categories: DBA Blogs

Detecting Source of MySQL Queries with Comments

Pythian Group - Tue, 2015-02-17 19:22

As a MySQL DBA I already know the data changes that happen on my system. I have logs for that.

However, it’s a common problem that several years into the life of an application, the current developers won’t know where in the codebase queries come from. It’s often hard for them to find the location in the code if queries are formed dynamically; the pattern I show them to optimize doesn’t match anything in the code.

I stumbled on a trick a couple years ago that has been invaluable in tracking down these problematic queries: query comments.

Here’s an example:

When a query shows up in a slow query log, it generally looks something like this:

# Time: 150217 10:26:01
# User@Host: comments[comments] @ localhost []  Id:    13
# Query_time: 0.000231  Lock_time: 0.000108 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424186761;
select * from cars;

That logging shows me who executed the query (the comments user), the server it was executed from (localhost in this case), and what time it was executed (who, where, when).

What this doesn’t tell me is where in my codebase this query is. For example, if I want to change that select * to a more targeted select column1, column2, we may not know where to find it.

This is where comments help.

Comments can be any of three styles:

select * from cars; # comment to end of line here
select * from cars; -- comment to end of line here
select * from /* inline comment here */ cars;

When you add a comment to your query, the slow log captures the comment along with it. Here's an example.

On the command line:

mysql> select * from cars; # valerie - command line - testing comments

In the slow log:

# Time: 150217 11:25:24
# User@Host: comments[comments] @ localhost []  Id:     3
# Query_time: 0.000246  Lock_time: 0.000119 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424190324;
select * from cars # valerie - command line - testing comments;

This is especially useful if you are forming queries dynamically and can add the name of the piece of code, function, user, etc. to the comment. Those comments would look something like this in the slow log:

# Time: 150217 11:32:04
# User@Host: comments[comments] @ localhost []  Id:     3
# Query_time: 0.000225  Lock_time: 0.000104 Rows_sent: 3  Rows_examined: 3
SET timestamp=1424190724;
select * from cars # userid 7695424 - index.php?search=true - display function;
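
If your application layer builds the SQL string, it takes only one concatenation to tag every query this way. Here is a sketch of one possible convention (the app= / file= / func= tag names are my own; any fixed, greppable format will do):

select * from cars /* app=shop file=index.php func=display userid=7695424 */;

The inline /* ... */ style has the advantage that the comment stays valid wherever it lands in the statement.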

 

Categories: DBA Blogs

Pythian’s Annual MySQL Community Dinner at Pedro’s

Pythian Group - Tue, 2015-02-17 08:54

Once again, Pythian is organizing an event that by now may be considered a tradition: The MySQL community dinner at Pedro’s! This dinner is open to all MySQL community members since many of you will be in town for Percona Live that week. Here are the details:

What: The MySQL Community Dinner

When: Monday April 13, 2015 –  7:00 PM at Pedro’s (You are welcome to show up later, too!)

Where: Pedro’s Restaurant and Cantina – 3935 Freedom Circle, Santa Clara, CA 95054

Cost: Tickets are $35 USD, which includes a Mexican buffet, non-alcoholic drinks, taxes, and gratuities (Pedro’s Dinner Menu 2015)

How: RSVP through Eventbrite

Attendees:

Laine Campbell

Derek Downey

Gillian Gunson

Miklos Szel

Marco Tusa

Mark Filipi

Alkin Tezuysal

Brian Cain

Brian Kelly

Joe Muraski

Patrick Pearson

Looking forward to seeing you all at the event!

 

Categories: DBA Blogs

Database Flashback -- 4

Hemant K Chitale - Tue, 2015-02-17 08:44
Continuing my series on Oracle Database Flashback.  This post demonstrates a FLASHBACK DATABASE when a Tablespace is set to  FLASHBACK OFF.

UPDATE 20-Feb-15 : When might I have FLASHBACK OFF for a Tablespace in a FLASHBACK ON Database ?  When the Tablespace contents are "throwaway-able".  I can avoid the overheads of Flashback for DML in the Tablespace, and I am confident that on the rare occasion that I have to Flashback the Database, I can discard and rebuild the tablespace and its contents because they are not important for persistence.


[oracle@localhost Hemant]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Tue Feb 17 22:30:53 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
17-FEB-15

SYS>show parameter db_flashback_retention_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
14573520 15-FEB-15 1440 190971904 0

SYS>

After my last post on 15/16-Feb (midnight), I had generated some transactions. The Flashback Size has grown from 24.576MB to 190.972MB.

Question : Does Database Flashback always apply to the whole database ?  Can we exclude a selected tablespace ?

SYS>create tablespace TEST_NOFB;

Tablespace created.

SYS>alter tablespace TEST_NOFB flashback OFF;

Tablespace altered.

SYS>

So, it is possible to set FLASHBACK OFF for a tablespace !  What are the implications ?  Oracle does NOT save Flashback information for this tablespace.  If I need to Flashback the Database, this tablespace or all the datafiles in this tablespace must be taken OFFLINE or DROPped.
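
As a quick sanity check, the FLASHBACK_ON column of V$TABLESPACE shows which tablespaces are excluded (TEST_NOFB should now report NO while the others report YES):

SYS>select name, flashback_on from v$tablespace;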

UPDATE : 18-FEB-15

Continuing the case.
Note : Subsequent to last night's post I had some more activity on the database and even a RESTORE and RESETLOGS.  The v$flashback_database_log contents are different now.

[oracle@localhost Hemant]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 18 23:27:19 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
18-FEB-15

SYS>show parameter db_flashback_retention_target

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target integer 1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
14573526 17-FEB-15 1440 62783488 0

SYS>

Now, let me create a RESTORE POINT and then generate some activity in the TEST_NOFB tablespace.

SYS>create restore point FEB18_15;

Restore point created.

SYS>connect hemant/hemant
Connected.
HEMANT>create table test_nofb_tbl tablespace test_nofb
2 as select * from dba_objects
3 where 1=2;

Table created.

HEMANT>select tablespace_name
2 from user_tables
3 where table_name = 'TEST_NOFB_TBL'
4 /

TABLESPACE_NAME
------------------------------
TEST_NOFB

HEMANT>insert into test_nofb_tbl
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 /

225138 rows created.

HEMANT>truncate table test_nofb_tbl;

Table truncated.

HEMANT>insert into test_nofb_tbl
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 /

225138 rows created.

HEMANT>commit;

Commit complete.

HEMANT>connect / as sysdba
Connected.
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
14573526 17-FEB-15 1440 62783488 646397952

SYS>

So, I have generated DML activity in the TEST_NOFB Tablespace.

Let me define another Restore Point and generate more DML.

SYS>create restore point FEB18_15_2nd;

Restore point created.

SYS>connect hemant/hemant
Connected.
HEMANT>insert into test_nofb_tbl
2 select * from dba_objects
3 union all
4 select * from dba_objects
5 union all
6 select * from dba_objects
7 /

225138 rows created.

HEMANT>commit;

Commit complete.

HEMANT>select count(*) from test_nofb_tbl;

COUNT(*)
----------
450276

HEMANT>

I now have two Restore Points and DML against the target table in a FLASHBACK OFF Tablespace between and after the Restore Points.

Let me try to Flashback to the 2nd Restore Point.  What should I see ?  225138 rows in the table ? Or no rows in the table ?

HEMANT>connect / as sysdba
Connected.
SYS>shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>
SYS>flashback database to restore point FEB18_15_2nd;
flashback database to restore point FEB18_15_2nd
*
ERROR at line 1:
ORA-38753: Cannot flashback data file 6; no flashback log data.
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


SYS>

AAHA ! It cannot flashback the datafile because no flashback log exists for it.

Let me quote again what I said yesterday :  If I need to Flashback the Database, this tablespace or all the datafiles in this tablespace must be taken OFFLINE or DROPped.

So, I must take the necessary action :

SYS>alter database datafile 6 offline;

Database altered.

SYS>flashback database to restore point FEB18_15_2nd;

Flashback complete.

SYS>alter database open read only;

Database altered.

SYS>
SYS>alter database datafile 6 online;
alter database datafile 6 online
*
ERROR at line 1:
ORA-01113: file 6 needs media recovery
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


SYS>recover datafile 6;
ORA-00283: recovery session canceled due to errors
ORA-38798: Cannot perform partial database recovery
ORA-38797: Full database recovery required after a database has been flashed back


SYS>

I *cannot* selectively recover that datafile.  What options do I have ?
In the meantime, let's look at the alert.log.

Wed Feb 18 23:47:04 2015
flashback database to restore point FEB18_15_2nd
ORA-38753 signalled during: flashback database to restore point FEB18_15_2nd...
Wed Feb 18 23:49:42 2015
alter database datafile 6 offline
Completed: alter database datafile 6 offline
flashback database to restore point FEB18_15_2nd
Flashback Restore Start
Flashback Restore Complete
Flashback Media Recovery Start
Serial Media Recovery started
Warning: Datafile 6 (/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf) is offline during full database recovery and will not be recovered
Recovery of Online Redo Log: Thread 1 Group 3 Seq 3 Reading mem 0
Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo03.log
Recovery of Online Redo Log: Thread 1 Group 1 Seq 4 Reading mem 0
Mem# 0: /home/oracle/app/oracle/oradata/orcl/redo01.log
Incomplete Recovery applied until change 14577570 time 02/18/2015 23:40:29
Flashback Media Recovery Complete
Completed: flashback database to restore point FEB18_15_2nd
Wed Feb 18 23:50:08 2015
alter database open read only
AUDIT_TRAIL initialization parameter is changed to OS, as DB is NOT compatible for database opened with read-only access
ARCH: STARTING ARCH PROCESSES
.....
.....
Completed: alter database open read only
Wed Feb 18 23:50:10 2015
.....
.....
Wed Feb 18 23:50:36 2015
alter database datafile 6 online
ORA-1113 signalled during: alter database datafile 6 online...
Wed Feb 18 23:50:36 2015
Signalling error 1152 for datafile 6!
Checker run found 2 new persistent data failures
ALTER DATABASE RECOVER datafile 6
Media Recovery Start
Serial Media Recovery started
Media Recovery failed with error 38798
ORA-283 signalled during: ALTER DATABASE RECOVER datafile 6 ...

So, the datafile is unusable.
I can only drop the tablespace.

SYS>shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SYS>startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SYS>
SYS>recover datafile 6;
ORA-00283: recovery session canceled due to errors
ORA-38798: Cannot perform partial database recovery
ORA-38797: Full database recovery required after a database has been flashed back


SYS>alter database open;
alter database open
*
ERROR at line 1:
ORA-01589: must use RESETLOGS or NORESETLOGS option for database open


SYS>alter database open resetlogs;

Database altered.

SYS>
SYS>recover datafile 6;
ORA-00283: recovery session canceled due to errors
ORA-19909: datafile 6 belongs to an orphan incarnation
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/orcl/ORCL/datafile/o1_mf_test_nof_bg9cq9bv_.dbf'


SYS>select tablespace_name from dba_data_files
2 where file_id=6;

TABLESPACE_NAME
------------------------------
TEST_NOFB

SYS>drop tablespace test_nofb;
drop tablespace test_nofb
*
ERROR at line 1:
ORA-01549: tablespace not empty, use INCLUDING CONTENTS option


SYS>drop tablespace test_nofb including contents and datafiles;

Tablespace dropped.

SYS>

There ! A Tablespace that was in FLASHBACK OFF mode cannot be accessed after a Flashback Database, not even with respect to data that existed before the Restore Point used for the Flashback.
.
.
.

Categories: DBA Blogs

Partner Webcast – Agile Business Intelligence in the Cloud for Oracle Partner Solutions

As the #1 vendor in business analytics with the industry’s most complete and integrated range of enterprise-class business intelligence (BI) solutions, Oracle leads the way in helping organizations...

We share our skills to maximize your revenue!
Categories: DBA Blogs

A new index on a small table makes a big difference

Bobby Durrett's DBA Blog - Mon, 2015-02-16 16:12

A few weeks back, on the weekend just before I went on call, we got a complaint about slowness on an important set of reports.  I worried that the slowness of these reports would continue during my support week, so I tried to figure out why they were slow.  I reviewed an AWR report for the 24 hours when the reports were running and found a simple query against a tiny table at the top of the “SQL ordered by Elapsed Time” report:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6      307,516.33 3.416388E+09

I edited the AWR report to show just elapsed seconds and number of executions.  3.4 billion executions totaling 307,000 seconds of elapsed time.  This was about 90 microseconds per execution.

The previous weekend the same query looked like this:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6      133,143.65 3.496291E+09

So, about the same number of executions but less than half of the elapsed time.  This was about 38 microseconds per execution.  I never fully explained the change from week to week, but I found a way to improve the query performance by adding a new index.

The plan was the same both weekends so the increase in average execution time was not due to a plan change.  Here is the plan:

SQL_ID 77hcmt4kkr4b6
--------------------
SELECT DIV_NBR FROM DIV_RPT_GEN_CTL WHERE RPT_NM = :B1 AND
GEN_STAT = 1

Plan hash value: 1430621991

------------------------------------------------------------------
| Id  | Operation                   | Name               | Rows  |
------------------------------------------------------------------
|   0 | SELECT STATEMENT            |                    |       |
|   1 |  TABLE ACCESS BY INDEX ROWID| DIV_RPT_GEN_CTL    |     1 |
|   2 |   INDEX RANGE SCAN          | DIV_RPT_GEN_CTL_U1 |     1 |
------------------------------------------------------------------

I found that the table only had 369 rows and 65 blocks so it was tiny.

The table’s only index was on the columns RPT_NM and RPT_ID, but only RPT_NM was in the query.  For the given value of RPT_NM the index lookup would visit every row in the table with that value, filtering for those with GEN_STAT=1.  I suspect that on the weekend of the slowdown the number of rows being scanned for a given RPT_NM value had increased, but I cannot prove it.

I did a count grouping by the column GEN_STAT and found that only 1 of the 300 or so rows had GEN_STAT=1.

SELECT GEN_STAT,count(*)
FROM DIV_RPT_GEN_CTL
group by GEN_STAT;

  GEN_STAT   COUNT(*)
---------- ----------
         1          1
         2        339
         0         29

So, even though this table is tiny, it made sense to add an index which included the selective column GEN_STAT.  Also, since the reports execute the query billions of times per day, it made sense to include the one column in the select clause as well, DIV_NBR.  By including DIV_NBR in the index the query could get DIV_NBR from the index and not touch the table.  The new index was on the columns RPT_NM, GEN_STAT, and DIV_NBR in that order.
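
The DDL would have been something like this (a sketch; the index name comes from the new plan below, and although the _U2 suffix in our naming convention hints at a unique index, a plain CREATE INDEX captures the idea):

create index DIV_RPT_GEN_CTL_U2 on DIV_RPT_GEN_CTL (RPT_NM, GEN_STAT, DIV_NBR);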

Here is the new plan:

SQL_ID 77hcmt4kkr4b6
--------------------
SELECT DIV_NBR FROM DIV_RPT_GEN_CTL WHERE RPT_NM = :B1 AND
GEN_STAT = 1

Plan hash value: 2395994055

-------------------------------------------------------
| Id  | Operation        | Name               | Rows  |
-------------------------------------------------------
|   0 | SELECT STATEMENT |                    |       |
|   1 |  INDEX RANGE SCAN| DIV_RPT_GEN_CTL_U2 |     1 |
-------------------------------------------------------

Note that it uses the new index and does not access the table.  Here is the part of the AWR report for the problem query for last weekend:

   SQL Id         Elapsed (s)        Execs
-------------   ------------- ------------
77hcmt4kkr4b6       84,303.02 4.837909E+09

4.8 billion executions and only 84,000 seconds elapsed.  That is about 17.4 microseconds per execution.  That is less than half of what the average execution time was the weekend before the problem started.

The first Monday after we put the index in we found that one of the slow reports had its run time reduced from 70 minutes to 50 minutes.  It was great that we could improve the run time so much with such a simple fix.

It was a simple query to tune.  Add an index using the columns in the where clause and the one column in the select clause.  It was a tiny table that normally would not even need an index.  But, any query that an application executes billions of times in a day needs to execute in the most efficient way possible so it made sense to add the best possible index.

– Bobby

Categories: DBA Blogs

Log Buffer #409, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2015-02-16 10:29

This Log Buffer Edition sheds light on some of the nifty blog posts of the week from Oracle, SQL Server and MySQL.

Oracle:

Patch Set Update: Hyperion Data Relationship Management 11.1.2.3.504

The Hitchhiker’s Guide to the EXPLAIN PLAN Part 33: The mother of all SQL antipatterns?

MongoDB as a Glassfish Security Realm

E-Business Suite customers must ensure that their database remains on a level that is covered by Error Correction Support (ECS)

EM12c: How to Retrieve Passwords from the Named Credentials

SQL Server:

How does a View work on a Table with a Clustered Columnstore Index ?

How do you develop and deploy your database?

Microsoft Azure Storage Queues Part 3: Security and Performance Tips

Stairway to SQL Server Security Level 6: Execution Context and Code Signing

Centralize Your Database Monitoring Process

MySQL:

New Galera Cluster version is now released! It includes patched MySQL server 5.6.21 and Galera replication provider 3.9

Shinguz: Nagios and Icinga plug-ins for MySQL 1.0.0 have been released

The next release of MongoDB includes the ability to select a storage engine, the goal being that different storage engines will have different capabilities/advantages, and users can select the one most beneficial to their particular use-case. Storage engines are cool.

The MySQL grant syntax allows you to specify dynamic database names using the wildcard characters.

Oracle‘s 10 commitments to MySQL – a 5 year review

Categories: DBA Blogs