
DBA Blogs

How to read the values of bind variables for currently executing statements: real-time SQL monitoring kicks in

Grumpy old DBA - Thu, 2014-01-23 18:19
As usual, Tanel Poder has done an excellent job of writing up this approach.

Caveats: I have tested this in 11.2.0.2.x and 11.2.0.3.x. Earlier releases might have some issues; I'm not quite sure, but Tanel probably has that documented as well.

This came in really handy recently while looking at some SQL chewing up large amounts of LIO.

Here is Tanel's writeup: bind variable sql monitor, leading over to his site ...
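For reference, the core of the approach is that V$SQL_MONITOR exposes captured bind values for monitored statements. A minimal sketch (assuming 11.2+ with the Tuning Pack licensed; the columns are real, but this is not necessarily Tanel's exact query):

select sid, sql_id, binds_xml
from   v$sql_monitor
where  status = 'EXECUTING'
and    binds_xml is not null;

The BINDS_XML column returns the bind names, datatypes, and values as XML for each monitored execution.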


Categories: DBA Blogs

Script to Collect Database Information Quickly

Pythian Group - Thu, 2014-01-23 15:38
As a DBA, there are occasions when we need to collect details from a database for further analysis: for example, the status of the database (file size, free space, database state, and the current owner of the database). This kind of information is useful for auditing purposes, as well as for tracking the size of the database and its files over time. I had a script which did this job for me, exactly the way I want it; however, I had to run it for each database separately.

One fine day, while answering some questions on a forum, I found a script by Dan Guzman which retrieved most of the information I needed, and it did so for all the databases. I have adapted Dan G.'s script for my use and modified it by adding some more details.

Please review the script below. Let me know if you like it or dislike it. I will try to make further improvements to this script.
--==================================================================================================================
-- Script Originally Written By: Dan Guzman | http://www.dbdelta.com/ 
-- Modified by: Hemantgiri S. Goswami 
-- Reference: 
-- http://social.msdn.microsoft.com/Forums/en/transactsql/thread/226bbffc-2cfa-4fa8-8873-48dec6b5f17f
--==================================================================================================================
DECLARE
    @SqlStatement NVARCHAR(MAX)
    ,@DatabaseName SYSNAME;

IF OBJECT_ID(N'tempdb..#DatabaseSpace') IS NOT NULL
    DROP TABLE #DatabaseSpace;

CREATE TABLE #DatabaseSpace
(
    SERVERNAME        SYSNAME,
    DBID              INT,
    DATABASE_NAME     SYSNAME,
    Recovery_Model    VARCHAR(15),
    DBOWNER           SYSNAME,          -- SUSER_SNAME() returns sysname; avoids truncation
    LOGICAL_NAME      SYSNAME,
    FILE_PATH         SYSNAME,
    FILE_SIZE_GB      DECIMAL(12, 2),   -- sizes are in GB, matching the SELECT below
    SPACE_USED_GB     DECIMAL(12, 2),
    FREE_SPACE_GB     DECIMAL(12, 2),
    GROWTH_OPTION     VARCHAR(15),
    MAXIMUM_SIZE      INT,
    AUTOGROWTH        INT,
    DB_STATUS         VARCHAR(100)
);

DECLARE DatabaseList CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE STATE = 0;

OPEN DatabaseList;
WHILE 1 = 1
BEGIN
    FETCH NEXT FROM DatabaseList INTO @DatabaseName;
    IF @@FETCH_STATUS = -1 BREAK;
    SET @SqlStatement = N'USE '
        + QUOTENAME(@DatabaseName)
        + CHAR(13)+ CHAR(10)
        + N'INSERT INTO #DatabaseSpace
                SELECT
                [ServerName]         = @@ServerName
                ,[DBID]             = SD.DBID
                ,[DATABASE_NAME]    = DB_NAME()
                ,[Recovery_Model]    = d.recovery_model_desc
                ,[DBOwner]             = SUSER_SNAME(sd.sid)
                ,[LOGICAL_NAME]     = f.name
                ,[File_Path]         = sf.filename
                ,[FILE_SIZE_GB]     = (CONVERT(decimal(12,2),round(f.size/128.000,2))/1024)
                ,[SPACE_USED_GB]     = (CONVERT(decimal(12,2),round(fileproperty(f.name,''SpaceUsed'')/128.000,2))/1024)
                ,[FREE_SPACE_GB]     = (CONVERT(decimal(12,2),round((f.size-fileproperty(f.name,''SpaceUsed''))/128.000,2))/1024)
                ,[Growth_Option]     = CASE sf.status & 0x100000
                                           WHEN 1048576 THEN ''Percentage''
                                           WHEN 0       THEN ''MB''
                                       END
                ,[Maximum_Size]     = SF.MaxSize
                ,[AutoGrowth(MB)]     = (SF.Growth*8/1024)
                ,[DB_Status]        =
                                    CASE SD.STATUS
                                        WHEN 0 THEN ''Normal''
                                        WHEN 1 THEN ''autoclose'' 
                                        WHEN 2 THEN ''2 not sure'' 
                                        WHEN 4 THEN ''select into/bulkcopy'' 
                                        WHEN 8 THEN ''trunc. log on chkpt'' 
                                        WHEN 16 THEN ''torn page detection'' 
                                        WHEN 20 THEN ''Normal'' 
                                        WHEN 24 THEN ''Normal'' 
                                        WHEN 32 THEN ''loading'' 
                                        WHEN 64 THEN ''pre recovery'' 
                                        WHEN 128 THEN ''recovering'' 
                                        WHEN 256 THEN ''not recovered'' 
                                        WHEN 512 THEN ''offline'' 
                                        WHEN 1024 THEN ''read only'' 
                                        WHEN 2048 THEN ''dbo use only'' 
                                        WHEN 4096 THEN ''single user'' 
                                        WHEN 8192 THEN ''8192 not sure'' 
                                        WHEN 16384 THEN ''16384 not sure'' 
                                        WHEN 32768 THEN ''emergency mode'' 
                                        WHEN 65536 THEN ''online'' 
                                        WHEN 131072 THEN ''131072 not sure'' 
                                        WHEN 262144 THEN ''262144 not sure'' 
                                        WHEN 524288 THEN ''524288 not sure'' 
                                        WHEN 1048576 THEN ''1048576 not sure'' 
                                        WHEN 2097152 THEN ''2097152 not sure'' 
                                        WHEN 4194304 THEN ''autoshrink'' 
                                        WHEN 1073741824 THEN ''cleanly shutdown''
                                    END 
            FROM SYS.DATABASE_FILES F
            JOIN MASTER.DBO.SYSALTFILES SF
                ON F.NAME = SF.NAME
            JOIN MASTER.SYS.SYSDATABASES SD
                ON SD.DBID = SF.DBID
            JOIN MASTER.SYS.DATABASES D
                ON D.DATABASE_ID = SD.DBID
                AND DATABASEPROPERTYEX(SD.NAME,''Updateability'') <> ''OFFLINE''
            ORDER BY [File_Size_GB] DESC';
    EXECUTE(@SqlStatement);

END
CLOSE DatabaseList;
DEALLOCATE DatabaseList;

SELECT * FROM #DatabaseSpace;

DROP TABLE #DatabaseSpace;
GO
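As a quick usage tweak, the final SELECT * inside the script can be swapped for a filtered query, for instance to flag files that are nearly full; this is just a sketch against the columns the script populates:

SELECT DATABASE_NAME, LOGICAL_NAME, FILE_SIZE_GB, FREE_SPACE_GB
FROM #DatabaseSpace
WHERE FREE_SPACE_GB < 1.0    -- files with less than 1 GB free
ORDER BY FREE_SPACE_GB;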

-- Hemantgiri S. Goswami | LinkedIn | Twitter

Categories: DBA Blogs

Log Buffer: Marching from 2013 to 2014

Pythian Group - Thu, 2014-01-23 15:15

Pythian’s cherished Log Buffer carnival is celebrating the year 2013, while looking passionately towards yet another promising year in 2014. So much digital water has flowed under the bridge in 2013, and Log Buffer covered it all through the eyes of bloggers across database technologies like Oracle, SQL Server, MySQL, and big data.

Almost 50 Log Buffer episodes in 2013 shed light on new software releases, cool features, obscure technical corners, conference stories, database news, quirks, nifty tricks, trivia, and much more.

Log Buffer remained focused on every facet of database technology, and while doing so, noticed that there were a few salient technologies which interested bloggers and their readers a lot. Some of these technologies and areas were old and some new.

One evergreen area of interest for database bloggers is performance tuning. This is the one area of databases where interest carried over from the past year, remained hot throughout this one, and will be the center of discussion in the next.

The hot topics this year among database bloggers included big data, cloud computing, engineered systems, and the new version of the Oracle database, 12c. Regardless of the database vendor, bloggers loved to explore, discuss, share, and present these brave new bleeding-edge technologies, which are still evolving.

One prime takeaway from the Log Buffer carnival is the unique ability of the database bloggers to set the tone of the technology. Database vendors take these bloggers very seriously and improve their offerings based on the blog posts and resulting discussions. There is so much to learn, and database blog posts are a quick and efficient way to stay on top of the ever-changing and ever-evolving data oriented technologies.

Log Buffer is all geared up to keep encompassing all things data related in 2014. Stay tuned.

Categories: DBA Blogs

Configure OEM12c to perform checkups on EXADATA (EXACHK)

DBASolved - Wed, 2014-01-22 21:10

Warning: This one may be longer than normal

How many times have you run an Exachk, produced the text file, and then had to either read the file on the compute node or copy it to your local machine?  Well, there is another way to view the Exachk report; it can even be run on a schedule so you have current Exachk information on a regular basis.  What I’m talking about is using Oracle Enterprise Manager 12c to schedule and produce the Exachk reports.

If you look at the Oracle Enterprise Manager 12c (OEM) documentation, you will eventually find information on the plug-in for “Oracle Engineered System Healthchecks”.  I’ll make it easy for you: here it is.  This plug-in allows OEM to process the XML output from the Exachk tool, which is part of OEM by default.  This approach can be used to automate the assessment of Exadata V2, X2-2, and X2-8, SPARC SuperCluster, Exalogic, Exalytics, and Availability machine systems for known configuration problems and best practices.

So why would you want to use OEM to review Exachk items?  The OEM interface presents the output in a form that is easy to read.  It allows you to set up metrics against the report to keep track of the health of your engineered system.  Lastly, the report can be scheduled to run on a predetermined timeframe so you don’t forget to run it.

Prerequisites

As with anything Oracle, there are prerequisites that need to be met.  For the healthcheck plug-in there are four simple ones.

  • Review Oracle Exadata Best Practices and Oracle Database Machine Monitoring Best Practices (757552.1/1110675.1)
  • Verify and enable the Exachk Tool.  The plug-in supports Exachk 2.2.2 and later
  • Operating systems and platforms (docs has the list, here)
  • Oracle Enterprise Manager 12c (12.1.0.3)
Plug-In

Once the prerequisites have been confirmed, you will need to deploy the plug-in.  Before the plug-in can be deployed, in most environments, it needs to be applied to the software library.  Downloading the plug-in and applying it to the software library is done from the Setup –> Extensibility –> Self Update menu.  Figure 1 shows that the plug-in has already been downloaded and applied to the software library.

Figure 1: Plug-in applied to software library


Add Healthcheck Targets

Ever wonder what the “Add Targets Declaratively by Specifying Target Monitoring Properties” option is used for when adding targets manually?  Well, this is one of those times where you get to use it.  When adding Healthcheck targets, you will use the Setup –> Add Targets –> Add Targets Manually menu items.  When you reach the Add Targets Manually page, select the “Add Targets Declaratively by Specifying Target Monitoring Properties” option; then select “Oracle Engineered System Healthchecks” from the drop down menu, and add the monitoring agent for one of the compute nodes in the Exadata machine.  Lastly, click the Add Manually button.  Once all this is completed, you should end up on the “Add Oracle Engineered System Healthchecks” page (Figure 2).

Figure 2: Add Oracle Engineered System Healthchecks page


A few things to notice on this page.  First, the Target Name: this can be any name you want to give the healthcheck target.  It will be used to look the target up in the All Targets page.  Other than All Targets, there is no direct way to get to the healthcheck targets (I’ve looked, still searching).

In the properties dialog area, notice the Max Interval Allowed.  This is the number of days between consecutive Exachk runs.  You can make this longer or shorter, depending on your environment.  31 is the default and is recommended by Oracle.

The Exachk Results Directory is where OEM will read the Exachk information from.  In most environments, I use /home/oracle.  Fill in what you need.

Prerequisites for Exachk Tool

I know, I know, you thought you took care of the prerequisites earlier before starting.  Well, this is another set of prerequisites.  This time, you need to make changes to the .profile (.bash_profile) for the oracle user.  There are two entries that you need to add to the .profile.  Table 1 below provides the details and values that should be set for these parameters.

Table 1: Environment parameters for profiles

Parameter              | Value                     | Why
RAT_COPY_EM_XML_FILES  | 1                         | Setting this environment variable will enable copying of results between all nodes in the cluster
RAT_OUTPUT             | <exachk output directory> | Location where results will be copied to for all nodes in the cluster

Run Exachk Tool

Here comes the fun part; you get to run the Exachk tool.  When I say run, I mean configure it to run in the background on the Exadata machine.  What, there is a way to keep Exachk running in the background? ... Yep!

The way this is done is by using the daemon (-d start) option to keep it running (Listing 1).  Although the Exachk process will start to run as a daemon, you will still be prompted for the required information.  Once you have provided it all (passwords and such), the daemon will remember it until the daemon is stopped.

Listing 1: Start Exachk as daemon

./exachk -d start

Now, with Exachk running in the background as a process, how do you get Exachk to run?  Well, you initially have to run it.  When you do, everything you entered at the prompts previously will be used.

To run Exachk in silent mode, use the -daemon option.  For the purposes of OEM, it is best to gather all the information the report can obtain.  From the command prompt, run Exachk (Listing 2).

Listing 2: Run Exachk silently

./exachk -daemon -a

Metrics Collected

Now that Exachk is done running, you want to see the results in OEM.  In order to do this, you need to tell the agent to upload the results.  Over time, the results will be uploaded automatically; however, on the initial setup, you will want to run the upload process manually.  Listing 3 provides the command used to force the upload.

Listing 3: Force agent to upload Exachk results

./emctl control agent runCollection <target_name>:oracle_exadata_hc ExadataResults

Checking to see what is in OEM

Now that everything has been configured, Exachk has run, and the management agent has uploaded the results, where can you see the outcome of this configuration?  The easiest way is from All Targets –> Engineered Systems –> Oracle Engineered System Healthchecks.  Then you need to click on the target name.

Once you are on the healthchecks page, you will notice a lot of information.  There are three main areas: Summary, Incidents and Problems, and Results Summary.  This is all the information that the Exachk report generated and the management agent uploaded.  Figure 3 is a small snapshot of the overall screen.

Figure 3: partial view of healthchecks page


Summary

Once again, Oracle Enterprise Manager 12c has proven to be a tool that can do more than is expected.  Although those of us command-line junkies who like to run reports and various other items from the command line may look down on this, the Healthcheck plug-in is a cool way to review Exachk information.  It is also a slick way to remember to run it each month ... because it is scheduled.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Database, Exadata, OEM
Categories: DBA Blogs

Java Performance: The Definitive Guide By Scott Oaks

Surachart Opun - Wed, 2014-01-22 12:26
Java is a programming language and computing platform. Lots of applications and websites are written in Java; it is fast, secure, and reliable. How about performance? Java performance is a matter of concern because so much business software has been written in Java.

Let me mention a book titled Java Performance: The Definitive Guide by Scott Oaks. Readers will learn about the world of Java performance, and the book will help them get the best possible performance from a Java application.
In the book, Chapter 2 covers testing Java applications, including the pitfalls of Java benchmarking, while Chapter 3 gives an overview of some of the tools available to monitor Java applications.
If you are someone who is interested in Java or develops applications in Java, performance is very important to you. This book focuses on how best to use the JVM and Java Platform APIs so that programs run faster. If you are interested in improving your Java applications, this book can help.

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Bug 13930580 Workaround Effective

Bobby Durrett's DBA Blog - Mon, 2014-01-20 14:24

We put the workaround for Bug 13930580 in on Friday, and the results have been much better than I expected.  Here is when the workaround went in, as reported in the alert log:

Fri Jan 17 18:38:26 2014
ALTER SYSTEM SET _use_adaptive_log_file_sync='FALSE' SCOPE=BOTH;

Here are the log file sync average times.  Notice how they go down after 7 pm Friday:

END_INTERVAL_TIME          number of waits ave microseconds
-------------------------- --------------- ----------------
17-JAN-14 12.00.49.669 AM            78666       15432.6923
17-JAN-14 01.00.27.862 AM            13380       15509.9778
17-JAN-14 02.00.11.834 AM            15838       17254.2949
17-JAN-14 03.00.56.429 AM            10681       29832.4282
17-JAN-14 04.00.39.502 AM            26127       14880.2097
17-JAN-14 05.00.22.716 AM            21637       10952.5322
17-JAN-14 06.00.01.558 AM            67162       9756.44207
17-JAN-14 07.00.45.358 AM           123705       11755.6535
17-JAN-14 08.00.29.811 AM           223799       11341.2467
17-JAN-14 09.00.19.275 AM           319051       13651.4647
17-JAN-14 10.00.09.089 AM           507335       13991.5543
17-JAN-14 11.00.59.502 AM           583835       11609.8432
17-JAN-14 12.00.44.044 PM           627506       10857.4556
17-JAN-14 01.00.30.133 PM           610232       11233.9348
17-JAN-14 02.00.18.961 PM           664368       10880.3887
17-JAN-14 03.00.05.694 PM           647896       9865.96367
17-JAN-14 04.00.44.694 PM           538270       10425.6479
17-JAN-14 05.00.24.376 PM           343863       9873.98468
17-JAN-14 06.00.11.481 PM           169654       9735.80996
17-JAN-14 07.00.03.087 PM            87590       7046.92633
17-JAN-14 08.00.52.390 PM            69297       2904.62955
17-JAN-14 09.00.29.888 PM            38244       3017.15969
17-JAN-14 10.00.09.436 PM            28166       3876.77469
17-JAN-14 11.00.54.765 PM            23220       11109.3063
18-JAN-14 12.00.33.790 AM            13293       9749.99428
18-JAN-14 01.00.17.853 AM            15332       3797.76839
18-JAN-14 02.00.56.050 AM            16137       6167.15127
18-JAN-14 03.00.33.908 AM            14621       9664.63108
18-JAN-14 04.00.12.383 AM             9708        6024.9829
18-JAN-14 05.00.56.348 AM            14565       3618.76938
18-JAN-14 06.00.39.683 AM            14323       3517.45402
18-JAN-14 07.00.29.535 AM            38243       3753.46422
18-JAN-14 08.00.16.778 AM            44878       2280.22924
18-JAN-14 09.00.01.176 AM            73082       9689.52484
18-JAN-14 10.00.45.168 AM            99302       2094.03293
18-JAN-14 11.00.35.070 AM           148789       1898.40424
18-JAN-14 12.00.23.344 PM           151780       1932.64997
18-JAN-14 01.00.08.631 PM           186040       2183.18563
18-JAN-14 02.00.59.839 PM           199826       2328.87331
18-JAN-14 03.00.45.441 PM           210098        1335.9759
18-JAN-14 04.00.36.453 PM           177331       1448.39219
18-JAN-14 05.00.21.669 PM           150837       1375.07256
18-JAN-14 06.00.59.959 PM           122234       1228.21767
18-JAN-14 07.00.37.851 PM           116396       1334.64569
... skip a few to find some higher load times...
19-JAN-14 10.00.01.434 AM           557020       2131.02737
19-JAN-14 11.00.42.786 AM           700781       1621.16596
19-JAN-14 12.00.31.934 PM           715327       1671.72335
19-JAN-14 01.00.10.699 PM           718417       1553.98083
19-JAN-14 02.00.51.524 PM           730149        2466.6241
19-JAN-14 03.00.38.088 PM           628319       2465.45829

When the system is busy we are seeing less than 3000 microseconds = 3 milliseconds for log file sync, which is good.  We had been seeing 10 milliseconds or more, which isn’t that great.
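For context, the numbers above come from AWR. Here is a hedged sketch of how such a report can be pulled from DBA_HIST_SYSTEM_EVENT (the views and columns are real; this is not necessarily the exact script used):

select sn.end_interval_time,
       e.total_waits
         - lag(e.total_waits) over (order by e.snap_id) "number of waits",
       (e.time_waited_micro
         - lag(e.time_waited_micro) over (order by e.snap_id))
       / nullif(e.total_waits
         - lag(e.total_waits) over (order by e.snap_id), 0) "ave microseconds"
from dba_hist_system_event e
join dba_hist_snapshot sn
  on e.snap_id = sn.snap_id
 and e.instance_number = sn.instance_number
where e.event_name = 'log file sync'
order by e.snap_id;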

Oracle support had been pushing this workaround for a long time, but our own testing wasn’t able to recreate the problem.  Have to hand it to them.  They were right!

Here is a link to my previous post on this issue: url

- Bobby

Categories: DBA Blogs

2013 in Review — SQL Server

Pythian Group - Mon, 2014-01-20 13:24

It’s that time of year. When I take some time away from work, hide out, and reflect on the past year.

2013 was a big year for Microsoft, particularly SQL Server, Windows Azure, and HDInsight. While I’d love to recap every Microsoft moment that excited me, there are too many to count, so I’ll just list the highlights and leave you with some things to look forward to in 2014.

HDInsight

HDInsight went from a selective beta to a general release. Microsoft’s Hadoop partner, Hortonworks, released version 2.0. Although Windows Azure has not upgraded to HDP 2.0 yet, I’m told it will soon. I think there is a lot of potential in using Hadoop in conjunction with SQL Server, and this is something I’ll be experimenting with soon.

SQL Server 2012

Microsoft released several patches for SQL Server 2012. I’ve seen a lot of organizations begin migrating to the platform, which is excellent. There is a lot of interest in AlwaysOn Availability Groups, which is one of my favorite new(er) technologies in SQL Server.

SQL Server 2014

Microsoft announced SQL Server 2014, and then proceeded to release not one but two community preview versions (CTP1 and CTP2) to the public. A full release (RTM) is expected sometime in 2014. I’m very excited about SQL Server 2014 and its full list of features. My favorites are the expanded replicas for AlwaysOn Availability Groups, the change in how SQL Server responds if a replica is offline, and in-memory tables. The SQL Server 2014 CTP releases were placed in the image gallery, making it easy to explore the new product in minutes. There are also some new features slated to be added to the full product that will solve a lot of technical challenges for DBAs.

Windows Azure upgrades

Microsoft continued to enhance Windows Azure, and also announced support for Oracle databases. I’m a regular Windows Azure user and a big fan of the platform. Gartner predicts that cloud adoption will continue to grow and accelerate, so it’s time to accept the shift. If you haven’t checked out Windows Azure, I recommend you spend some time on the platform. It’s a strong player in the market and will only get better in the future.

These are a few of my highlights. What are yours?

Categories: DBA Blogs

OTNYathra 2014

Hemant K Chitale - Mon, 2014-01-20 10:18
The February 2014 OTNYathra in India.

Categories: DBA Blogs

Speed up Import with TRANSFORM=DISABLE_ARCHIVE_LOGGING in #Oracle 12c

The Oracle Instructor - Mon, 2014-01-20 07:47

A very useful 12c new feature is the option to suppress the generation of redo during a Data Pump import. I was talking about it during my recent 12c New Features class in Finland and would like to share that info with the Oracle community here. My usual demo user ADAM owned a table named BIG with one index on it. Both were in LOGGING mode when I exported them. The Data Pump export did not use any 12c new features and is therefore not shown.

SQL> select log_mode,force_logging from v$database;

LOG_MODE     FORCE_LOGGING
------------ ---------------------------------------
ARCHIVELOG   NO

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                             0                         0               0          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

8 rows selected.

The database is not in force logging mode – else the new Data Pump parameter would be ignored. Archive log mode was just turned on, so there are no archived log files yet. First I will show the redo-generating way to import, which is the default; afterwards the new feature for comparison.

SQL> host impdp adam/adam directory=DPDIR tables=big

Import: Release 12.1.0.1.0 - Production on Mon Jan 20 11:50:42 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "ADAM"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "ADAM"."SYS_IMPORT_TABLE_01":  adam/******** directory=DPDIR tables=big
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "ADAM"."BIG"                                660.1 MB 5942016 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "ADAM"."SYS_IMPORT_TABLE_01" successfully completed at Mon Jan 20 11:54:32 2014 elapsed 0 00:03:48

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         13.47                         0              12          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

8 rows selected.

The conventional way with redo generation took almost 4 minutes and generated 12 archive logs – my online logs are 100 megabytes in size. Now let’s see the new feature:

SQL> drop table big purge;

Table dropped.

SQL> host impdp adam/adam directory=DPDIR tables=big transform=disable_archive_logging:y

Import: Release 12.1.0.1.0 - Production on Mon Jan 20 11:57:19 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Master table "ADAM"."SYS_IMPORT_TABLE_01" successfully loaded/unloaded
Starting "ADAM"."SYS_IMPORT_TABLE_01":  adam/******** directory=DPDIR tables=big transform=disable_archive_logging:y
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "ADAM"."BIG"                                660.1 MB 5942016 rows
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "ADAM"."SYS_IMPORT_TABLE_01" successfully completed at Mon Jan 20 11:58:21 2014 elapsed 0 00:01:01

SQL> select * from v$recovery_area_usage;

FILE_TYPE               PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
----------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                             0                         0               0          0
REDO LOG                                 0                         0               0          0
ARCHIVED LOG                         13.47                         0              12          0
BACKUP PIECE                             0                         0               0          0
IMAGE COPY                               0                         0               0          0
FLASHBACK LOG                            0                         0               0          0
FOREIGN ARCHIVED LOG                     0                         0               0          0
AUXILIARY DATAFILE COPY                  0                         0               0          0

8 rows selected.

SQL> select table_name,logging from user_tables;

TABLE_NAME                                                   LOG
------------------------------------------------------------ ---
BIG                                                          YES

SQL> select index_name,logging from user_indexes;

INDEX_NAME                                                   LOG
------------------------------------------------------------ ---
BIG_IDX                                                      YES

Note that the segment attributes are not permanently changed to NOLOGGING by the Data Pump import.
The comparison shows a striking improvement in run time, and because the 2nd run did not generate additional archive logs, we still see the same number as before the 2nd call.

Another option is to suppress redo generation only for the import of indexes, in my example with the command

impdp adam/adam directory=DPDIR tables=big transform=disable_archive_logging:y:index

That is a safer choice because indexes are always reproducible. Keep in mind that any NOLOGGING operation is a risk – that is the price to pay for the speed up.
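Related to that risk, one hedged way to see afterwards which datafiles have been touched by unrecoverable (NOLOGGING) operations is to query V$DATAFILE (real columns):

select file#, unrecoverable_change#, unrecoverable_time
from   v$datafile
where  unrecoverable_time is not null;

Any file listed there should be backed up after the import, since the loaded blocks cannot be rebuilt from archived redo.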
As always: Don’t believe it, test it! :-)


Tagged: 12c New Features
Categories: DBA Blogs

wow this looks interesting and not too far away world Information Architecture day 2014 ( Ann Arbor MI )

Grumpy old DBA - Sun, 2014-01-19 12:37
Just saw this posted ... hmm, road trip time from Cleveland on Saturday, Feb 15?  The only possibly dicey thing is driving/traveling on a Saturday in mid-February in the Midwest; you can never predict what that is going to be like.  But hey, there's four wheel drive in the Honda CRV, so ... time to check the schedule and see who else might want to attend.

See this link for the Ann Arbor event: World Information Architecture Day 2014 Ann Arbor MI. Here is the overall site; maybe there is an event planned near you: WIAD 2014?

Categories: DBA Blogs

Yet another way to monitor Oracle GoldenGate with OEM12c

DBASolved - Sat, 2014-01-18 22:06

In a recent post on monitoring Oracle GoldenGate, I talked about how to configure the JAGENT to use Oracle Enterprise Manager 12c to monitor Oracle GoldenGate.  If you would like to review that post, you can find it here.  For this post, I’ll show you how you can use Metric Extensions within Oracle Enterprise Manager 12c to monitor Oracle GoldenGate. 

Note: Another good post on this topic can be found from my friend Maaz.  Here is his post.

As Maaz and I were talking about the finer aspects of the JAGENT, we started to talk about Metric Extensions to monitor Oracle GoldenGate.  This got me to thinking how this could be accomplished. 

One of the first things needed before you can set up a Metric Extension is deciding how you are going to monitor the processes.  Since most of the Oracle GoldenGate instances that I monitor are on Unix/Linux, I decided to use a Perl script to grab the basic info associated with Oracle GoldenGate.  Basically, I just wanted to see the current status of the processes.  This is achieved from GGSCI using the INFO ALL command (Figure 1).

Figure 1: Info All

As you can tell, all of the Oracle GoldenGate processes are running.  In order to monitor these processes, the output needs to be put into a format that Oracle Enterprise Manager 12c can understand.  To do this, I used a Perl script to turn the output in Figure 1 into pipe-delimited strings.  The Perl script that I used is shown in Listing 1.

Listing 1: Perl Script
#!/usr/bin/perl -w
#
#
use strict;
use warnings;

#Static Variables

my $gghome = "/oracle/app/product/12.1.2/ggate";

#Program

my @buf = `$gghome/ggsci << EOF
info all
EOF`;

foreach (@buf)
{
        if(/EXTRACT/||/REPLICAT/)
        {
                s/\s+/\|/g;
                print $_."\n";
        }
}

The Perl script (Listing 1) reads the output from the INFO ALL command into a buffer.  For each line in the buffer that contains EXTRACT or REPLICAT, it replaces all runs of whitespace with a pipe (|) and prints the result.  When the script is run, you should get output similar to Figure 2.

Figure 2: Output from Perl script

Now that the output is in a pipe (|) format, I can use this in the Metric Extension.

Before we take a look at Metric Extensions: if you have never used them, they are a great way to extend Oracle Enterprise Manager 12c to monitor granular things more efficiently. The documentation associated with Metric Extensions in the Oracle docs is great, and they are also covered in the Expert Oracle Enterprise Manager 12c book out by Apress.

To begin setting up a Metric Extension for monitoring Oracle GoldenGate, go to the Metric Extensions page within Oracle Enterprise Manager 12c (Enterprise –> Monitoring –> Metric Extensions).  Once on the page, you are presented with a Create button about half way down the page (Figure 3).  The Create option can also be accessed from the Actions drop down.

Figure 3: Metric Extensions Page


Once you click on the Create button, you will be taken to a wizard to begin developing the Metric Extension.  On the General Properties page, you need to fill out the required fields.  Since you are going to monitor this with a Perl script, the Target Type will be Host.  Then you need to name the Metric Extension (ME$).  Next, provide the display name; this is the name that will show up in All Metrics later.  Finally, in the Adapter drop down, select OS Command – Multiple Columns.  This way the metric will read the pipe (|) signs and break the string up.  Figure 4 provides a screen shot.

Figure 4: General Properties

On the next screen, you need to provide the command that will run the script and the script that you want run.  The delimiter field is a pipe (|) by default.  Fill in the required information (Figure 5).

Figure 5: Adapter

The Columns screen is next.  This screen is where you set up the names of the columns and map what will go where.  Also, you can set the Alert Threshold for the column that will be monitored.  One key thing to point out here: you will need a PK for the data.  This basically means a unique way of identifying an Oracle GoldenGate process.  For this configuration, the PK is the combination of Program and Object.  Figure 6 illustrates how I have the columns configured for this setup.

Figure 6: Columns

Once you have the columns established, Oracle Enterprise Manager 12c will ask you what Credentials you want to use with this metric.  The default monitoring credentials are fine (Figure 7).

Figure 7: Credentials

Finally, you can test the Metric Extension and see how it is going to look.  This is your chance to make sure that all the values from the script are returned and appear how you expect them to.  Figure 8 shows a successful test run.  If you have everything configured as expected, your output should come up in the Test Results without the pipe (|) signs.  If an error message is returned, investigate why you received it.

In my testing, the errors I received were due to the PK issue with the columns when defined for the metric.

Figure 8: Test

Lastly, when you make it to the review screen, just check everything and click Finish.  Once you click Finish, you will be taken back to the Metric Extension landing page.  From the landing page, you can now deploy the metric to the target the Metric Extension is to be associated with.

To deploy the Metric Extension, use the Actions menu and select Deploy to Targets.  Select the target to which it should be deployed (Figure 9).

Figure 9: Deploy to Targets….

Once your Metric Extension has been deployed to the selected target, you can go to that target and check All Metrics to see how it looks.  For a host, you can do this by going to Targets –> Hosts.  Then select the host you want.  From the Host menu, select Monitoring –> All Metrics.  Once on the All Metrics landing page, look for the name of the metric in the drop down tree and click on it.  You will be presented with the current status of the metric in the right-hand pane (Figure 10).

Figure 10: Metric Extension status in All Metrics

At this point, the Metric Extension for monitoring Oracle GoldenGate processes has been successfully developed, tested, and deployed to a target.  The next step is to ensure that you are notified when something happens to a process.  This is done with Notification Rules (not covered here).  Once the notification rules are established, monitoring Oracle GoldenGate using Metric Extensions is a great way to work around any JAGENT issues you may have.

Enjoy!

twitter: @dbasolved

blog: http://dbasolved.com


Filed under: Golden Gate, OEM
Categories: DBA Blogs

Log Buffer #355, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-01-17 07:55

It’s no secret anymore that social networks like Facebook are fast losing their hyperactive charm. But ask anybody about blogging, and it’s still there, with a future as solid as rock. Innovation, creation, and quality content with public engagement are what make it last. This Log Buffer acknowledges that fact for the database bloggers.
Oracle:

The Critical Patch Update (CPU) for January 2014 was released on January 14, 2014. Oracle strongly recommends applying the patches as soon as possible.

Inject filesystem I/O latency. Oracle Linux 5 and 6

What to expect at the Oracle Value Chain Summit

Get a Closer Look at Oracle Database Appliance

EBS Release 12 Certified with Mac OS X 10.9 with Safari 7 and JRE 7

SQL Server:

Calculating Duration Using DATETIME Start and End Dates (SQL Spackle)

Learn SQL Server in Arabic

There is nothing mysterious about SQL Server backups. They are essential, however you use your databases.

Stairway to Advanced T-SQL Level 1: The TOP Clause

Free eBook: Troubleshooting SQL Server: A Guide for the Accidental DBA

MySQL:

Multi-master, multi-region MySQL deployment in Amazon AWS

In fact, Docker is a wrapper around LXC. It is fun to use. Docker has the philosophy to virtualize single applications using LXC.

Analyzing WordPress MySQL queries with Query Analytics

MySQL Cluster: The Latest Developments in Management, Free webinar

MySQL Proxy lives – 0.8.4 alpha released

Categories: DBA Blogs

Getting started with 12c

Bobby Durrett's DBA Blog - Thu, 2014-01-16 15:12

Back in July I finally got Oracle 12c installed on my laptop as documented in this post: url

But, that was as far as I got.  The last thing I did was get an error message creating a user.  Well, I figured out how to create a new user and a few other things.  I’m working with the ORCL database that comes with the install, with all the default parameters, etc.

Evidently the default install comes with a PDB called PDBORCL.  So, I have two tns entries, one for the parent CDB and one for the child PDB, and they look like this:

ORCL.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.133.128)
(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.mydomain.com)
    )
  )

PDB.WORLD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.133.128)
(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdborcl.mydomain.com)
    )
  )

I guess the service name has the name of the PDB in it.

So, if I connect as SYSTEM/password@orcl I’m connected to the CDB, and if I connect as SYSTEM/password@pdb I’m connected to the PDB.  When I connected to the PDB I could create a new user without getting an error.
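For example, something like this works in the PDB, while in the CDB root the same CREATE USER raises ORA-65096 because only common users (prefixed C##) may be created there (the user name and password below are made up):

-- connected as SYSTEM/password@pdb:
create user testuser identified by testpass;
grant create session to testuser;

-- connected as SYSTEM/password@orcl (the CDB root), this form is required:
create user c##testuser identified by testpass container=all;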

But, when I first tried connecting to the PDB I got this error, even though the database was up:

ORA-01033: ORACLE initialization or shutdown in progress

So, to bring the database up after booting the Linux VM (by the way, I’m on 64-bit Linux), the following steps were required:

lsnrctl start

sqlplus / as sysdba

startup

alter session set container=PDBORCL;

startup

Probably this could all be scripted, but that’s what I did today.
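As a minimal sketch of such a script (the file name is made up; ALTER PLUGGABLE DATABASE ... OPEN is equivalent to the ALTER SESSION plus STARTUP above):

-- startorcl.sql, run as: sqlplus / as sysdba @startorcl (after lsnrctl start)
startup
alter pluggable database PDBORCL open;
exit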

Interestingly there is only one pmon:

$ ps -ef | grep pmon
oracle   29495     1  0 06:52 ?        00:00:00 ora_pmon_orcl

But you get different results when you query dba_data_files depending on whether connected to the CDB or PDB:

CDB

FILE_NAME                                 
------------------------------------------
/u01/app/oracle/oradata/orcl/system01.dbf 
/u01/app/oracle/oradata/orcl/sysaux01.dbf 
/u01/app/oracle/oradata/orcl/undotbs01.dbf
/u01/app/oracle/oradata/orcl/users01.dbf

PDB

FILE_NAME                                                     
--------------------------------------------------------------
/u01/app/oracle/oradata/orcl/pdborcl/system01.dbf             
/u01/app/oracle/oradata/orcl/pdborcl/sysaux01.dbf             
/u01/app/oracle/oradata/orcl/pdborcl/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/orcl/pdborcl/example01.dbf

So, I guess each PDB has its own SYSTEM and SYSAUX tablespaces?

Lastly, when running my scripts to poke around, I edited my sqlplus header script to report which container you are in.  It looks like this now:

set linesize 32000
set pagesize 1000
set long 2000000000
set longchunksize 1000
set head off;
set verify off;
set termout off;

column u new_value us noprint;
column n new_value ns noprint;
column c new_value cs noprint;

select name n from v$database;
select user u from dual;
SELECT SYS_CONTEXT('USERENV', 'CON_NAME') c FROM DUAL;

set sqlprompt &ns:&cs:&us>

set head on
set echo on
set termout on
set trimspool on

spool &ns..&cs..logfilename.log

Replace “logfilename” with whatever you want for your script name.

It puts out a prompt like this:

CDB

ORCL:CDB$ROOT:SYSTEM>

PDB

ORCL:PDBORCL:SYSTEM>

And the log file names:

ORCL.CDB$ROOT.sessions.log

ORCL.PDBORCL.sessions.log

Anyway, this is just a quick post about my first attempts to get around in 12c.

- Bobby

Categories: DBA Blogs

Adaptive Optimization Limitation Example

Bobby Durrett's DBA Blog - Thu, 2014-01-16 11:58

I’ve been reading up on Oracle 12c to get certified and to help advise my company on potential uses for the new version.  I’ve been looking forward to researching the new Adaptive Optimization features because it makes so much sense that the database should change its plans when it finds differences between the expected number of rows each part of a plan sees and the actual number of rows.

I’ve written blog posts in the past about limitations of the optimizer related to its ability to determine the number of rows (cardinality) that steps in a plan would see.  I took the scripts from some of these and ran them on a 12c instance to see if the new features would cause any of the inefficient plans to change to the obvious efficient plans.

Sadly, none of my examples ran differently on 12c.  I don’t doubt that there are examples that run better because of the new features but the ones I constructed earlier didn’t see any improvement.  So, I thought I would blog about one such example.

Here is the original blog post with an example run on 11.2 Oracle: url

Here is the same script run on 12c: zip

Here is the query with the bad plan:

select B.DIVNUM
from DIVISION A,SALES B
where a.DIVNUM=B.DIVNUM and
A.DIVNAME='Mayberry'

Plan hash value: 480645376

--------------------------------------------------------------------
| Id  | Operation          | Name     | Rows  | Bytes | Cost (%CPU)|
--------------------------------------------------------------------
|   0 | SELECT STATEMENT   |          |       |       |   421 (100)|
|*  1 |  HASH JOIN         |          |   500K|  8300K|   421   (2)|
|*  2 |   TABLE ACCESS FULL| DIVISION |     1 |    14 |     2   (0)|
|   3 |   TABLE ACCESS FULL| SALES    |  1000K|  2929K|   417   (1)|
--------------------------------------------------------------------

There is only 1 SALES row that has the DIVNUM associated with DIVNAME=’Mayberry’. There are 1,000,001 SALES rows, and there is an index on SALES.DIVNUM, so an index scan would be the most efficient access method and a nested loops join the most efficient join method. But the 12c optimizer chooses a hash join and full table scan instead.
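
If you want to reproduce it, here is a minimal sketch of the kind of test case behind the plan above (the table contents, and any column names beyond SALES, DIVISION, DIVNUM, and DIVNAME, are my assumptions, not necessarily the original script):

create table division (divnum number, divname varchar2(20));
create table sales (divnum number, amount number);
create index sales_divnum_ix on sales (divnum);

insert into division values (1, 'Metropolis');
insert into division values (2, 'Mayberry');
-- 1,000,000 rows in one division, a single row in Mayberry's division
insert into sales select 1, level from dual connect by level <= 1000000;
insert into sales values (2, 1);
commit;

begin
  dbms_stats.gather_table_stats(user, 'DIVISION');
  dbms_stats.gather_table_stats(user, 'SALES');
end;
/

-- With no histogram the optimizer assumes half the rows match (500K)
-- and picks the hash join with a full scan of SALES
select d.divname, s.amount
from   sales s, division d
where  s.divnum = d.divnum
and    d.divname = 'Mayberry';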

According to the 12c SQL Tuning manual there are two types of Adaptive Optimization that might help in this case: Adaptive Plans and Adaptive Statistics. I tried to tweak my test script to get Adaptive Statistics to kick in by commenting out the dbms_stats calls, but it didn’t help. I also tried running the query several times in a row, but it never changed plan.
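
For anyone else testing this, a couple of checks will show whether the adaptive features engaged (a sketch; the +ADAPTIVE format and the V$SQL columns are standard 12c, but the SQL_TEXT filter below is made up):

-- Show the last executed plan with any adaptive-plan annotations
select * from table(
  dbms_xplan.display_cursor(null, null, 'TYPICAL +ADAPTIVE'));

-- Was the plan adaptive, and was the cursor marked for
-- re-optimization by statistics feedback?
select sql_id, child_number, is_resolved_adaptive_plan, is_reoptimizable
from   v$sql
where  sql_text like 'select d.divname%';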

I can see why Adaptive Plans wouldn’t work here. How long would it let the full scan of SALES run before deciding to switch to a nested loops join with an index scan? If it gets halfway through the table, it is too late. So I’m not sure how Adaptive Plans could change the plan on the fly when it expects a lot of rows and only finds a few.

On Adaptive Statistics, I guess this is just a case it still can’t handle. It is like needing a histogram across a join, which would be pretty complex to solve in general.

Anyway, this all reminds me of when I first learned about histograms. I got all excited that histograms would solve all my query performance problems and then came crashing down to earth when I realized that wasn’t the case. I think the analogy fits. Histograms improved cardinality estimates and can help in certain cases. I think the new adaptive features will help improve plans by using real cardinality figures where they can, but they aren’t a cure-all.

I’m not sure that getting cardinality right is a solvable problem in the general case. The optimizer has to be fast, so there are limits to how much it can do.

I ran all this as the user SYSTEM on the base 12c 64 bit Linux install with all the default parameters unchanged on the standard default database.

- Bobby

Categories: DBA Blogs

WebLogic Server 12c - PerDomain Node Manager Configuration Model

Let's start by giving a brief definition of Node Manager. Server instances in a WebLogic Server production environment are often distributed across multiple domains, machines, and geographic...

We share our skills to maximize your revenue!
Categories: DBA Blogs

oracle security patch notification now out

Grumpy old DBA - Wed, 2014-01-15 20:02
Most people have seen this already but just in case oracle jan 2014 patch notification ...

From what I have seen, there are huge differences from company to company and site to site in how patching and the testing of patches occurs. That's a whole different blog entry though, right?
Categories: DBA Blogs

New Features Guide Highlights

Bobby Durrett's DBA Blog - Wed, 2014-01-15 12:03

I just finished reading the main section of the Oracle 12c New Features Guide.  I read pages 17-107 of the version I have – Section 1 titled “Oracle Database 12c Release 1 (12.1) New Features”.  I underlined and asterisked the things that stuck out as interesting in this pass and thought I would include them in this post:

1.1.4.7 New PL/SQL DBMS_UTILITY.EXPAND_SQL_TEXT Procedure

Expands views into SQL automatically.
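
Here is a quick sketch of how it is called (my own example, not from the guide; the view name is made up):

declare
  l_expanded clob;
begin
  dbms_utility.expand_sql_text(
    input_sql_text  => 'select * from emp_dept_v',
    output_sql_text => l_expanded);
  dbms_output.put_line(l_expanded);  -- the view replaced by its defining query
end;
/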

1.1.6.1 Default Values for Columns Based on Oracle Sequences

Use sequence in column definition for default values.
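
Something like this (a sketch with made-up names):

create sequence order_seq;

create table orders
( order_id number default order_seq.nextval,
  customer varchar2(30)
);

-- order_id is filled from the sequence automatically
insert into orders (customer) values ('SMITH');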

1.1.6.4 Increased Size Limit for VARCHAR2, NVARCHAR2, and RAW Data Types

32K varchar2 columns
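
Worth knowing: this needs the MAX_STRING_SIZE parameter set to EXTENDED first, which is a one-way change done in upgrade mode (a sketch):

-- In upgrade mode: set the parameter, run utl32k.sql, then restart
alter system set max_string_size = extended scope = spfile;

create table notes (body varchar2(32767));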

1.1.8.6 JDBC Support for Database Resident Connection Pool

Possible alternative to shared servers
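
The pool itself is started on the server with DBMS_CONNECTION_POOL; the new part is that JDBC thin clients can now use it (a sketch; host and service names made up):

-- Start the database resident connection pool (as SYSDBA)
exec dbms_connection_pool.start_pool();

-- JDBC clients then connect with a pooled URL such as:
--   jdbc:oracle:thin:@//dbhost:1521/orcl:POOLED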

1.2.3.1 Asynchronous Global Index Maintenance for DROP and TRUNCATE Partition

Global index not made unusable by partition maintenance
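
So a statement like this should leave the global indexes usable, with the orphaned entries cleaned up later by a maintenance job (a sketch, names made up):

alter table sales_hist drop partition p_2013 update indexes;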

1.2.4.1 Adaptive Query Optimization

Plans change when DB sees that its cardinality estimates were wrong.

1.2.4.15 Session-Private Statistics for Global Temporary Tables

Gather stats on global temp tables for your session only - cool.
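
If I have this right, it is controlled by a DBMS_STATS preference, with SESSION as the new default (a sketch, table name made up):

exec dbms_stats.set_table_prefs(user, 'MY_GTT', 'GLOBAL_TEMP_TABLE_STATS', 'SESSION');

-- These stats are now visible only to the gathering session
exec dbms_stats.gather_table_stats(user, 'MY_GTT');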

1.3.4.3 SecureFiles is the Default for LOB Storage

Not sure what that means, but good to know that the default changed.

1.4.1 Database Consolidation

Subsections 2-6 give some clues to the way the multitenant architecture works.

1.4.3.1 Cloning a Database

Sounds similar to Delphix

1.4.4.3 Oracle Data Pump Change Table Compression at Import Time

Imported data can be compressed using HCC on target.
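
I believe this uses the impdp TRANSFORM parameter (a sketch; file and directory names are made up, shell quoting may vary, and the HCC clause needs storage that supports it):

impdp system directory=dp_dir dumpfile=sales.dmp \
  transform=table_compression_clause:\"column store compress for query high\"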

1.5.5.7 Multiple Indexes on Same Set of Columns

Can have different kinds of indexes on same set of columns (same order I assume)
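
The catch, as I understand it, is that only one index on the column set can be visible at a time (a sketch, names made up):

create        index emp_dept_ix on emp (deptno);
create bitmap index emp_dept_bx on emp (deptno) invisible;  -- must be invisible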

1.5.9.1 Active Database Duplication Enhancements

Faster clone of an open database using RMAN
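
I believe the enhancement is that active duplication can now ship backup sets instead of image copies, which allows compression over the network (a sketch, database name made up):

RMAN> duplicate target database to clonedb
        from active database
        using compressed backupset;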

1.6.1.1 Enterprise Manager Database Express

Sounds like a better EM tool - would like to check it out and review the 2-Day DBA manuals which show uses of it.

1.6.3.1 Queryable Patch Inventory

Don't have to run opatch lsinventory to see your patches?
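
The package behind this is DBMS_QOPATCH, which returns the inventory as XML you can query from SQL (a sketch):

select dbms_qopatch.get_opatch_lsinventory from dual;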

1.8.3.1 Multi-Process Multi-Threaded Oracle

Wondering what platforms this is on and what it really means.

1.9.7.1 Unified Context-Based Database Audit Architecture

Sounds like this may replace some third party tools. Improved auditing.

1.12.1.2 Parallel Upgrade

May speed up upgrade by parallelising

I read all 91 pages but there were sections that didn’t really interest me since they are about features we don’t use such as Spatial.  If you are interested in 12c I encourage you to read this as I did.  I printed out about 10 pages at a time.  It’s only 91 pages so it doesn’t take forever to read it.

- Bobby

Categories: DBA Blogs

12c Online Partitioned Table Reorganisation Part II (Move On)

Richard Foote - Tue, 2014-01-14 23:06
In Part I, we looked at some of the issues associated with locking and unusable indexes when Moving both tables and table partitions. The Oracle 12c Database has introduced a number of great new capabilities associated with online DDL activities. One of these really useful features is the capability to now move table partitions online while […]
Categories: DBA Blogs

Deinstalling GoldenGate 12c

VitalSoftTech - Tue, 2014-01-14 20:30

In GoldenGate 12c, the installation and de-installation processes have been standardized to use the Oracle Universal Installer (OUI) interface. In this article, we will look at how we can deinstall Oracle GoldenGate.

The post Deinstalling GoldenGate 12c appeared first on VitalSoftTech.

Categories: DBA Blogs