
Feed aggregator

PFCLScan Version 1.3 Released

Pete Finnigan - Tue, 2014-11-25 18:50

We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2014-11-25 18:50

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, new content and more. We are working to release another service update in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2014-11-25 18:50

It has been a few weeks since my last blog post but don't worry I am still interested to blog about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Pythian at UKOUG 14

Pythian Group - Tue, 2014-11-25 13:39

Will you be joining us at the UKOUG Conference and Exhibition in Liverpool, UK? Over 200 world-class speakers and industry experts will be in attendance, including some of our very own.

Michael Abbey, Oracle ACE and Team Lead at Pythian, notes that Pythian’s attendance is important, not only for the company and its employees, but for the Oracle community as a whole. “Pythian’s presence at UKOUG this year is the next chapter in an ongoing participation in database and EIS technical events around the world,” Michael explains. “We pride ourselves on presence on all seven continents and our appearances in Liverpool are strategic to the worldwide user community, as are all the locations we frequent every year. The user group community is one of the fundamental building blocks for technical resources as they hone their skills to better serve their clients and the masses in general. Since its founding, Pythian has been a strategic and financial support organization feeding talent to many of the world’s largest technical shows.”

“You will see a number of presenters from the Pythian suite of experts including Oracle ACEs, Oracle ACE Directors, and members of the OakTable Network. We look forward to catching up with new acquaintances and rekindling existing relationships.” In the meantime, you can find their speaking sessions below:

 

RMAN: The Necessary Basics
Presented by Michael Abbey | Monday December 8, 2014 — 9:00-9:50 AM

This session, best suited for attendees just getting started with RMAN, covers the basic skills needed to write consistent backups and perform recovery activities from day one. Highlighted in this masterclass will be the following: RMAN architecture, using a catalog, backups to disk, backups to tape, recovery (complete and incomplete), database duplication, and tips and tricks.

About Michael: Michael Abbey is a seasoned and experienced presenter on the Oracle Database CORE technology. He first cut his teeth on V3 in 1986 and it has been a whirlwind of Oracle since then. Michael co-authored the first work in the Oracle Press series in 1994.

 

Why Use OVM for Oracle Database?
Presented by Francisco Munoz Alvarez | Monday December 8, 2014 — 5:00-5:50 PM

A vibrant session about OVM that will explain how, when, and why to use this product for virtualization. It will also give an overview of how Revera is currently using the product in NZ (the biggest OVM farm in the ANZ region) and show benchmark results comparing bare metal, OVM, and ESX, concluding with some tips and showing the scalability and load break point of the virtualization solutions.

Come and discover the answers for the following questions:

  • Does an Oracle Database perform well on a virtualized environment?
  • Which virtualization technology is more stable and allows an Oracle database to perform faster?
  • What is the performance difference between using a bare metal and a virtualized guest?
  • Is it safe to run a production database in a virtualized environment?

About Francisco:  Working out of Pythian’s Australian office in Macquarie Park, Francisco Munoz Alvarez is Vice President and Managing Director of Service Delivery in Asia Pacific, overseeing its regional expansion effort. Francisco previously served as Chief Technology Officer at Database Integrated Solutions Ltd and has more than two decades’ experience in Oracle databases. As President of the Chilean Oracle Users Group and founder of the Oracle Technology Network (OTN) tours in Latin America and Asia Pacific, he is best known for his evangelist work with the Oracle community.

 

Big Data with Exadata
Presented by Christo Kutrovsky | Tuesday December 9, 2014 — 5:30-6:20 PM

In this presentation, Oracle ACE Christo Kutrovsky will discuss common big data use cases and how they can be implemented efficiently with Exadata. Attendees will learn how Exadata actually delivers most of the benefits touted by newer big data technologies, and can often be the right platform for data scalability.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.

 

Measuring Performance in Oracle Solaris and Oracle Linux
Presented by Christo Kutrovsky | Wednesday December 10, 2014 — 9:00-9:50 AM

You can’t improve what you can’t measure. If you want to get the most value from your database, you need to start with the basics: are you using your hardware and operating systems efficiently? Attend this session to learn how to measure system utilization in the Linux and Oracle Solaris operating systems and how to use this information for tuning and capacity planning.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.

 

Lessons Learned in Implementing Oracle Access Manager 11g with Forms, Reports, and Discoverer
Presented by Sudeep Raj and Maris Elsins | Wednesday December 10, 2014 — 12:30-1:20 PM

Support for Oracle Single Sign-On (SSO) ended in December 2011. To take advantage of the latest security enhancements, customers are advised to upgrade to the latest and greatest version of the product, i.e. OAM/OID 11g; staying on supported configurations is very important. This session will give you an opportunity to understand how OAM 11g can be configured with Forms/Reports/Discoverer 11g and integrated with an external directory service such as Microsoft AD, and will discuss upgrade considerations for customers planning to move from 10g SSO/OID to OAM 11g and OID 11g.

About Sudeep: Sudeep Raj is a Team Lead/Oracle Applications Database Consultant at Pythian, managing a group of expert DBAs spread across the globe. With nearly a decade of experience as an Apps DBA, he has been involved in and led multiple Oracle E-Business Suite 11i/R12 implementation, maintenance, migration and upgrade projects. Sudeep Raj is an OCP certified professional and holds a Bachelor of Engineering degree in Computer Science.

About Maris: Recently awarded the Oracle ACE designation, Maris Elsins is a Lead Database Consultant at Pythian. He is a blogger and frequent speaker at many Oracle-related conferences like Collaborate, UKOUG, and LVOUG, where he is a board member. Maris is an exceptional troubleshooter and enjoys learning why things behave the way they do.

 

Optimizing and Simplifying Complex SQL with Advanced Grouping
Presented by Jared Still | Wednesday December 10, 2014 — 3:30-4:20 PM

This presentation will show how advanced grouping features can be used to simplify SQL that was previously quite complex, reducing the amount of code needed, improving readability, and, perhaps most importantly, greatly improving the performance of SQL statements.

About Jared: Jared Still is a Senior Database Consultant at Pythian. His experience includes working with Oracle databases beginning with version 7.0. While Oracle has expanded to encompass many aspects of the application environment, Jared’s focus has been on the database itself and related infrastructure.

 

Oracle RAC — Designing Applications for Scalability
Presented by Christo Kutrovsky | Wednesday December 10, 2014 — 3:30-4:20 PM

Oracle Real Application Clusters (RAC) promises 100% transparent active-active clustering technology – true horizontal scaling, but does it work in all cases? This presentation explores the challenges with Oracle’s active-active solution and how to solve them from both database side and application side. Both conceptual design and highly practical solutions are explored.

About Christo: Christo Kutrovsky is an Oracle ACE in Pythian’s Advanced Technology Consulting Group. With a deep understanding of databases, application memory, and input/output interactions, he is an expert at optimizing the performance of the most complex infrastructures.  A dynamic speaker, Christo has delivered presentations at the IOUG, the UKOUG, the Rocky Mountain Oracle Users Group, Oracle Open World, and other industry conferences.

 

Database as a Service on the Oracle Database Appliance Platform
Presented by Marc Fielding and Maris Elsins | Wednesday December 10, 2014 — 3:30-4:20 PM

Oracle Database Appliance provides a robust, highly-available, cost-effective, and surprisingly scalable platform for database as a service environment. By leveraging Oracle Enterprise Manager’s self-service features, databases can be provisioned on a self-service basis to a cluster of Oracle Database Appliance machines. Discover how multiple ODA devices can be managed together to provide both high availability and incremental, cost-effective scalability. Hear real-world lessons learned from successful database consolidation implementations.

About Marc: Marc Fielding is a passionate and creative problem solver, drawing on deep understanding of the full enterprise application stack to identify the root cause of problems, and to implement effective and sustainable solutions. He has extensive experience implementing Oracle’s engineered system portfolio, including leading one of the first enterprise Oracle Exadata implementations. Marc has a strong background in performance tuning and high availability.

About Maris: Recently awarded the Oracle ACE designation, Maris Elsins is a Lead Database Consultant at Pythian. He is a blogger and frequent speaker at many Oracle-related conferences like Collaborate, UKOUG, and LVOUG, where he is a board member. Maris is an exceptional troubleshooter and enjoys learning why things behave the way they do.

 

Pythian is a global leader in data consulting and managed services. We specialize in optimizing and managing mission-critical data systems, combining the world’s leading data experts with advanced, secure service delivery. Learn more about Pythian’s Oracle expertise or read some of our Oracle-related blog posts.

Categories: DBA Blogs

Holiday Sales by category

Nilesh Jethwa - Tue, 2014-11-25 13:26


Big Data... Is Hadoop the good way to start?

Tugdual Grall - Tue, 2014-11-25 09:27
In the past 2 years, I have met many developers and architects who are working on “big data” projects. This sounds amazing, but quite often the truth is not that amazing. TL;DR: You believe that you have a big data project? Do not start with the installation of a Hadoop cluster -- the "how". Start by talking to business people to understand their problem -- the "why". Understand the data you must…

Why we won’t need a PeopleSoft v9.3

Duncan Davies - Tue, 2014-11-25 09:00

I caught up with Paco Aubrejuan’s “PeopleSoft Townhall” webinar from Quest the other day. Paco is Senior VP of Development for the PeopleSoft product line and it was a really interesting listen. The session can be found here, although you need to sign-up with Quest to view it. It’s an hour long and he discusses the future direction of the PeopleSoft product family plus the new simplified and mobile user experience for PeopleSoft, the new Fluid User Interface (UI) and the delivery model of more frequent, customer-driven product enhancements which is enabled by PeopleSoft Update Manager.

Most interestingly for me though, was the Q&A section at the end. Paco tackled the v9.3 question head on. I’ve transcribed his words, and I think it’s a strong and positive message for those with an interest in the PeopleSoft product line. Here are the ‘best bits’:

On PUM:

We’re calling our model PeopleSoft Selective Adoption … and let me be specific about what it means, we’re going to deliver new capabilities about 2 to 3 times a year (and may deliver some functionality more frequent than that). Once you’re on 9.2 you can get this functionality without upgrading ever.

On PeopleSoft v9.3:

Should I upgrade to PeopleSoft 9.2 or should I wait for 9.3? There is no 9.3. We don’t have a 9.3 codeline, there’s no 9.3 plan, our plan is to never do a 9.3 and we’re going to continuously deliver on 9.2 using the PeopleSoft Selective Adoption and so you should not be waiting for a 9.3. … We’re just going to continue extending the timelines for PeopleSoft 9.2 so the idea is that there is no more upgrade and premier support will just continue.

On why a 9.3 isn’t needed:

The risk we take with saying that there’s no 9.3 is that people read into that and say that PeopleSoft is dead. … That’s not true. The investment level that we’re making in the product does not change with this delivery model at all. … We’re delivering all the Fluid functionality without a new release. We’ve never done that before. The only thing that this is comparable to is the 8.0 version when we moved from client-server to the internet, and that was a major release. We’re now doing something equivalent to that without even a minor release. It’s now just selective features that you can take as long as you’re on 8.54. So PeopleSoft is not dead, and having no PeopleSoft 9.3 does not mean that PeopleSoft is dead.

So, we now have a definitive answer to the v9.3 question. I think it’s a strong and positive message which is backed up with evidence of the investment that Oracle are putting into the product family, and a nod to the fact that PeopleSoft is adapting its model to the changing needs of the customer.


Using DB Adapter to connect to DB2 on AS400

Darwin IT - Tue, 2014-11-25 04:37
In my current project I need to connect to a DB2 database on an AS400. Doing so is no rocket science, but it is not exactly an NNF (Next-Next-Finish) configuration.

First you need to download the IBM JDBC driver for DB2, which is open source. Download JT400.jar from http://jt400.sourceforge.net/ and place it in a folder on your server. Since it is not an Oracle driver, I prefer not to place it in the Oracle Home; put it in a separate lib folder where it is recognisable, a logical location where you keep other shared libs as well.

There are several ways to add the library to your WebLogic classpath. What worked for me was adding it to the 'setDomainEnv.cmd'/'setDomainEnv.sh' file in the domain home.

(The Default Domain of the integrated weblogic of JDeveloper 12.1.3 under Windows can be found in: “c:\Users\%USER%\AppData\Roaming\JDeveloper\system12.1.3.0.41.140521.1008\DefaultDomain”)
Search for the keyword ‘POST_CLASSPATH’ and add the following at the end of the list of POST_CLASSPATH-additions:
set POST_CLASSPATH=c:\Oracle\lib\jtopen_8_3\lib\jt400.jar;%POST_CLASSPATH%
Where 'c:\Oracle\lib\jtopen_8_3' was the folder where I put it under Windows. Then restart your server(s) and create a DataSource. For 'Database Type' as well as for 'Driver', choose 'Other' in the wizard. Then, for the following fields, enter the corresponding values in the given format (see also the doc.):
URL: jdbc:as400://hostname/Schema-Name;translate binary=true
Driver Class: com.ibm.as400.access.AS400JDBCDriver (or com.ibm.as400.access.AS400JDBCXADataSource for an XA data source)
Driver Jar: jt400.jar (or jt400Native.jar)
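On Linux, the corresponding change in setDomainEnv.sh can be sketched as below; note that the jar path is an example location, not a required one, so substitute the folder where you placed jt400.jar:

```shell
# Sketch of the setDomainEnv.sh equivalent of the Windows POST_CLASSPATH addition.
# /u01/app/shared-libs is an example path -- use your own shared-lib folder.
JT400_JAR=/u01/app/shared-libs/jtopen_8_3/lib/jt400.jar
# Prepend the jar, keeping any existing POST_CLASSPATH entries after it.
POST_CLASSPATH="${JT400_JAR}${POST_CLASSPATH:+:${POST_CLASSPATH}}"
export POST_CLASSPATH
```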

Since in our case the database apparently has a time-out (I don't know if this is default behaviour with DB2 on AS400), I put a one-row query in the Test Table Name field. And I checked the Test Connections On Reserve checkbox, because I don't know the time-out frequency.
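On DB2 (including DB2 for i on AS400), the usual one-row query for such a test field is a select from the SYSDUMMY1 catalog table, sketched here:

```sql
-- One-row dummy query commonly used in the Test Table Name field on DB2
SELECT 1 FROM SYSIBM.SYSDUMMY1
```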

A description of configuring the library and connection in JDeveloper and the DBAdapter can be found in section 9.6.2 of this doc.

Having set up the DataSource in WebLogic, you can register it in the Database Adapter. Besides providing the DataSourceName or XADataSourceName, you should adapt the PlatformClassName:

The default is 'org.eclipse.persistence.platform.database.oracle.Oracle10Platform' (it only now strikes me that it contains 'org.eclipse.persistence' in the package name). Leaving it like this could have you running into the exception:
ConnectionFactory property platformClassName was set to org.eclipse.persistence.platform.database.oracle.Oracle10Platform but the database you are connecting to is DB2 UDB for AS/400
For DB2 on AS/400, the value should be: 'oracle.tip.adapter.db.toplinkext.DB2AS400Platform', see the docs here.
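If you maintain the adapter's deployment descriptor by hand rather than through the console, the property ends up in weblogic-ra.xml roughly as in the fragment below (a sketch; the exact element nesting can differ between WebLogic versions):

```xml
<!-- Fragment of the DbAdapter connection-instance properties in weblogic-ra.xml -->
<property>
  <name>platformClassName</name>
  <value>oracle.tip.adapter.db.toplinkext.DB2AS400Platform</value>
</property>
```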

Fabric8 Gateway for the Unified Push Server

Matthias Wessendorf - Tue, 2014-11-25 03:28

If you want to run the Unified Push Server behind a firewall, you still need to expose the RESTful endpoints that are accessed from the mobile apps running on the different devices.

With the help of the Fabric8 Gateway Servlet this is a fairly simple task!

I have created such a gateway that exposes only those endpoints, nothing else. Check out the repository on GitHub!

Have fun!


SQL Server tips: Executing a query with the EXECUTE command

Yann Neuhaus - Mon, 2014-11-24 22:52

This short SQL Server blog post is meant to help people who have experienced the error messages 2812 and 203 with the EXECUTE command.
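For context, a common trigger for error 203 ("The name ... is not a valid identifier") is executing a string variable without parentheses, which makes SQL Server treat the variable's content as a procedure name; a minimal sketch:

```sql
DECLARE @sql nvarchar(200) = N'SELECT 1';
-- EXECUTE @sql;   -- raises error 203: @sql is interpreted as a procedure *name*
EXECUTE (@sql);    -- with parentheses the string is run as dynamic SQL
```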

Parsing blocks stats blocks parsing

Bobby Durrett's DBA Blog - Mon, 2014-11-24 13:33

I had a five-minute conversation with Oracle development Friday that rocked my world.  I found out that parsing blocks stats which blocks parsing.

We have a system with queries that are taking minutes to parse.  These queries include the main tables on our database, one of which is interval partitioned and has 20,000 sub-partitions.  We have seen a situation where Oracle’s delivered statistics gathering job hangs on a library cache lock waiting for a query to finish parsing.  But, much worse than that, we find most queries on our system hanging on library cache lock waits blocked by the statistics job.

We have an SR open on this situation because it seems to be a bug, but on Friday someone from Oracle development explained on the phone that this parse blocks stats blocks parse situation is normal. Later, after I got off the phone, I built a simple test case proving that what he said was true. I took a query that took a long time to parse in production and ran it on our development database, where it took 16 seconds to parse. I chose the smallest table that the query included and gathered stats on it. The stats ran in a fraction of a second when run by themselves, but if I started the long-parsing query in one window and ran the stats in another window, the stats hung on a library cache lock wait for 15 seconds. Then I created a trivial query against the same small table I had gathered stats on. The query ran instantly by itself. But if I ran the long-parsing query first, kicked off the stats (which hung on the lock), and then kicked off the short query against the table whose stats were being gathered, the short query hung on a library cache lock as well. This example convinced me that the parse blocks stats blocks parse chain was real.

This morning I built a standalone test case that others can run to prove this out on their databases: zip of testcase. To run the test case you need three windows where you can run three sqlplus scripts in rapid succession. In one window, first run tables.sql to create the test tables. Then run these three scripts one after the other, one in each window, to create the three-link chain: chain1.sql, chain2.sql, chain3.sql. Chain1.sql has the explain plan of a query that takes a long time to parse. Chain2.sql gathers stats on one table. Chain3.sql runs a simple query against the table whose stats are being gathered. Chain1 spends all of its time on the CPU doing the parse. Chains 2 and 3 spend all of their time on library cache lock waits.

First I created two tables:

create table t1 as select * from dba_tables;
create table t2 as select * from dba_tables;

Next I kicked off the explain plan that takes a long time to run.  It joined 100 tables together:

explain plan into plan_table for 
select 
count(*)
from
     t1,
     t2,
     t2 t3,
...
     t2 t100
where
  t1.owner=t2.owner and
...
  t1.owner=t100.owner
/

This explain plan ran for 26 seconds, almost all of which was CPU:

Elapsed: 00:00:26.90

...

CPU in seconds
--------------
         26.81

Right after I kicked off the explain plan I kicked off this statement which gathered stats on the first table in the from clause:

execute dbms_stats.gather_table_stats(NULL,'T1');

This ran for 25 seconds and almost all of the time was spent on a library cache lock wait:

Elapsed: 00:00:25.77

...

Library cache lock in seconds
-----------------------------
                        25.55

Right after I kicked off the gather table stats command I ran this simple query making sure that it was unique and required a hard parse:

select /* comment to force hard parse */ count(*) from T1;

This ran for 24 seconds and almost all of the time was spent on a library cache lock wait:

Elapsed: 00:00:24.48

...

Library cache lock in seconds
-----------------------------
                        24.48

Evidently when a session parses a query it needs to obtain a shared lock on every table that the query includes.  When you gather statistics on a table you need to obtain an exclusive lock on the table, even if you are gathering statistics on one partition or sub-partition of the table.  While the statistics gathering session waits to acquire an exclusive lock any new parses that include the same table will hang.

Prior to Friday I did not think that there was any non-bug situation where gathering optimizer statistics would lock up sessions.  I thought that the only negative to gathering statistics at the same time as other application processing was that statistics gathering would compete for system resources such as CPU and I/O and possibly slow down application code.  But, now I know that gathering statistics can hang all queries that use the given table if stats gathering gets hung up waiting for a query that takes a long time to parse.
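If you suspect this chain is happening on your own system, one quick check is to look for sessions currently waiting on the lock and who is blocking them; a sketch against v$session (column availability may vary slightly by version):

```sql
-- Sessions waiting on library cache lock, with the session blocking each one
select sid, blocking_session, event, seconds_in_wait
from   v$session
where  event = 'library cache lock'
order  by seconds_in_wait desc;
```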

– Bobby

P.S. After reviewing the SR I wanted to understand what this parse blocks stats blocks parse looked like in a state dump.  The Oracle support analyst explained how the locks looked in a state dump that we uploaded but I didn’t get a chance to look at it closely until today.  I found the most important information in lines with the string “LibraryObjectLock” at the front of the line after some spaces or tabs.  There were three types of lines – the holding share lock, the waiting exclusive lock, and the many waiting share locks:

LibraryObjectLock:  Address=... Handle=0x5196c8908 Mode=S ...
LibraryObjectLock:  Address=... Handle=0x5196c8908 RequestMode=X ...
LibraryObjectLock:  Address=... Handle=0x5196c8908 RequestMode=S ...

The “…” indicates places I edited out other details.  The handle 0x5196c8908 identifies the table being locked.  The “Mode=S” string indicates a successful share lock of that table.  The “RequestMode=X” was from the stats job trying to get exclusive access to the table.  The “RequestMode=S” was all the other sessions trying to get shared access to the table waiting for stats to first get exclusive access.  Anyway, I just wanted to translate what Oracle support told me into something that might be useful to others.  Plus I want to remember it myself!


Categories: DBA Blogs

Feeling trepidatious? Time to lay very low?

FeuerThoughts - Mon, 2014-11-24 12:34
Sure, "trepidatious" might not be a word, per se.

But I am confident it is something that more than one very famous male actor is feeling right now, as they watch Bill Cosby go down in flames.

As in: seriously and deeply apprehensive about what the future might bring.

There are a few things we can be sure of right now, even if Cosby never faces a judge or jury:

1. Bill Cosby is a nasty piece of work, and very likely (was) a pedophile.

The pattern of behavior, finally brought to light after years of self-censorship by victims and callous disregard by the media and judicial system, is overwhelming and seemingly never-ending. Mr. Cosby is a serial rapist, and he did it by drugging young women, some of them less than 18 years old at the time.

2. Bill Cosby is an actor. 

The roles he played were just that: roles. We are easily fooled into thinking of the people behind the roles as sharing characteristics of their characters, but that's just, well, foolish.

The whole point of being a great actor is that you can act really well. You can pretend to be someone else really convincingly. But they are still someone else and not the "real you."

3. Bill Cosby cannot be the only one.

That's where the trepidation comes in. Seriously, what's the chance that Cosby is the only famous, powerful, rich actor who has a long history of taking advantage of and raping women (and/or men, for that matter)?

There have got to be others, and they've got to be terrified that soon their victims will say "Enough!" and then the next deluge will begin.

So my advice to all those A-listers who are also serial rapists:

Lay low, lay really low. Do not provoke your victims. Do not laugh in their faces.

And then maybe you will be able to retire and fade into the sunset, so that your obituary will not be some variation of:

Funny Guy, Sure, But Also a Rapist
Categories: Development

Securing Sensitive Database Data Stores

Chris Foot - Mon, 2014-11-24 10:18

Introduction

Database administrators, since the inception of their job descriptions, have been responsible for the protection of their organization’s most sensitive database assets. They are tasked with ensuring that key data stores are safeguarded against any type of unauthorized data access.

Since I’ve been a database tech for 25 years now, this series of articles will focus on the database system and some of the actions we can take to secure database data. We won’t be spending time on the multitude of perimeter protections that security teams are required to focus on. Once those mechanisms are breached, the last line of defense for the database environments will be the protections the database administrator has put in place.

You will notice that I will often refer to the McAfee database security protection product set when I describe some of the activities that will need to be performed to protect your environments. If you are truly serious about protecting your database data, you’ll quickly find that partnering with a security vendor is an absolute requirement and not “something nice to have.”

I could go into an in-depth discussion on RDX’s vendor evaluation criteria, but the focus of this series of articles will be on database protection, not product selection. After an extensive database security product analysis, we felt that the breadth and depth of McAfee’s database security offering provided RDX with the most complete solution available.

This is serious business, and you are up against some extremely proficient opponents. To put it lightly, “they are one scary bunch.” Hackers can be classified as intelligent, inquisitive, patient, thorough, driven and more often than not, successful. This combination of traits makes database data protection a formidable challenge. If they target your systems, you will need every tool at your disposal to prevent their unwarranted intrusions.

Upcoming articles will focus on the following key processes involved in the protection of sensitive database data stores:

Evaluating the Most Common Threats

In the first article of this series, I’ll provide a high level overview of the most common threat vectors. Some of the threats we will be discussing will include unpatched database software vulnerabilities, unsecured database backups, SQL Injection, data leaks and a lack of segregation of duties. The spectrum of tactics used by hackers could result in an entire series of articles dedicated to database threats. The scope of these articles is on database protection activities and not a detailed threat vector analysis.

Identifying Sensitive Data Stored in Your Environment

You can’t protect what you don’t know about. The larger your environment, the more susceptible you will be to storing data that hasn’t been identified as sensitive to your organization. In this article, I’ll focus on how RDX uses McAfee’s vulnerability scanning software to identify databases that contain sensitive data such as credit card or Social Security numbers stored in clear text. The remainder of the article will focus on identifying other objects that may contain sensitive and unprotected data, such as test systems cloned from production, database backups, load input files, report output, etc.

Initial and Ongoing Vulnerability Analysis

Determining how the databases are currently configured from a security perspective is the next step to be performed. Their release and patch levels will be identified and compared to vendor security patch distributions. An analysis of how closely support teams adhere to industry and internal security best practices is evaluated at this stage. The types of vulnerabilities will range the spectrum, from weak and default passwords to unpatched (and often well known) database software weaknesses.

Ranking the vulnerabilities allows the highest priority issues to be addressed more quickly than their less important counterparts. After the vulnerabilities are addressed, the configuration is used as a template for future database implementations. Subsequent scans, run on a scheduled basis, will ensure that no new security vulnerabilities are introduced into the environment.

Database Data Breach Monitoring

Most traditional database auditing mechanisms are designed to report data access activities after they have occurred. There is no alerting mechanism. Auditing is activated, the data is collected and reports are generated that allow the various activities performed in the database to be analyzed for the collected time period.

Identifying a data breach after the fact is not database protection. It is database reporting. To protect the databases we are tasked with safeguarding, we need a solution that can alert on, or alert on and stop, unwarranted data accesses as they occur.

RDX found that McAfee’s Database Activity Monitoring product provides the real time protection we were looking for. McAfee’s product has the ability to identify, terminate and quarantine a user that violates a predefined set of database security policies.

To be effective, database breach protection must be deployed as a stand-alone, separated architecture. Otherwise, internal support personnel could deactivate the breach protection service, whether by mistake or deliberate intent. This separation of duties is an absolute requirement for most industry compliance regulations, such as HIPAA, PCI DSS and SOX. The database must be protected from both internal and external threat vectors.

In an upcoming article of this series, we’ll learn more about real-time database activity monitoring and the benefits it provides to organizations that require a very high level of protection for their database data stores.

Ongoing Database Security Strategies

Once the database vulnerabilities have been identified and addressed, the challenge is to ensure that the internal support team’s future administrative activities do not introduce any additional security vulnerabilities into the environment.

In this article, I’ll provide recommendations for a set of robust, documented security controls and best practices that will assist you in your quest to safeguard your database data stores.

A documented plan to quickly address new database software vulnerabilities is essential to protecting your databases. The hacker’s “golden window of zero-day opportunity” lasts from the moment a software weakness is identified until the security patch that addresses it is applied.

Separation of duties must also be considered. Are the same support teams that are responsible for your vulnerability scans, auditing and administering your database breach protection systems also accessing your sensitive database data stores?

Reliable controls will need to be implemented, including support role separation and the generation of audit records, to ensure proper segregation of duties so that even privileged users cannot bypass security.

Wrap-up

Significant data breach announcements are publicized on a seemingly daily basis. External hackers and rogue employees continuously search for new ways to steal sensitive information. There is one component that is common to many thefts: the database data store. You need a plan to safeguard yours. If not, your organization may be the next one highlighted on the evening news.

The post Securing Sensitive Database Data Stores appeared first on Remote DBA Experts.

Top 5 wait events from v$active_session_history

DBA Scripts and Articles - Mon, 2014-11-24 09:30

This query returns the top 5 wait events for the last hour from the v$active_session_history view. Be careful: this view is part of the Diagnostic Pack, so you should not query it if you don’t have a license for it. This is obviously an approximation, because v$active_session_history contains only 1-second samples, […]
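The query the post refers to might look something like this sketch (it assumes Diagnostic Pack licensing; the view and columns are standard Oracle, but adjust the time window and filters to taste):

```sql
-- Top 5 wait events over the last hour, approximated from ASH samples.
-- Each row in v$active_session_history represents roughly 1 second of activity.
SELECT *
  FROM (SELECT event,
               COUNT(*) AS sample_count
          FROM v$active_session_history
         WHERE sample_time > SYSDATE - 1/24
           AND session_state = 'WAITING'
         GROUP BY event
         ORDER BY COUNT(*) DESC)
 WHERE ROWNUM <= 5;
```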

The post Top 5 wait events from v$active_session_history appeared first on Oracle DBA Scripts and Articles (Montreal).

Categories: DBA Blogs

Log Buffer #398, A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2014-11-24 09:10

This Log Buffer Edition covers some informative and interesting posts from Oracle, SQL Server and MySQL.

Oracle:

If you’re not using Toad DBA Suite, it’s sometimes hard to find solutions. Somebody wanted to know how to find indexes that aren’t indirect. Indirect indexes are those created for a primary key because a primary key column or set of columns is both not null and uniquely constrained. You can’t drop a unique index for a primary key without dropping the primary key constraint that indirectly created it.

At the NoCOUG fall conference at the eBay town hall in San Jose, we got a first-hand look at the workings of the most complex database environments in the world.

Is there a feature you would like added to Inventory or Inventory Items? Tired of logging Service Requests with Support to request product enhancements? Well those days are over. You can now submit Enhancement Requests (ER’s) for Logistics (Inventory) and/or Inventory Items (APC/PLM/PIM) directly in their respective Communities.

Oracle Database 12c : EXPAND_SQL_TEXT, APPROX_COUNT_DISTINCT, Session Sequences and Temporary Undo.

Integrating Cordova Plugin with Oracle MAF – iOS Calendar Plugin by Chris Muir.

SQL Server:

Learn how to invoke SSRS reports from an SSIS package after the data load is completed.

Questions About Using TSQL to Import Excel Data You Were Too Shy to Ask.

This article shows a step-by-step guide to move the distribution database to a new SQL Server instance.

Monitoring Azure SQL Database.

Stairway to SQL Server Agent – Level 2: Job Steps and Subsystems.

Where in the Application Should Data Validation be Done?

MySQL:

Creating JSON documents with MariaDB.

Geographic replication with MySQL and Galera.

Sys Schema for MySQL 5.6 and MySQL 5.7.

Logging with MySQL: Error-Logging to Syslog & EventLog.

Multi-source Replication with Galera Cluster for MySQL.

Categories: DBA Blogs

Watch: Hadoop vs. Oracle Exadata

Pythian Group - Mon, 2014-11-24 09:05

Every data platform has its value, and deciding which one will work best for your big data objectives can be tricky. Alex Gorbachev, Oracle ACE Director, Cloudera Champion of Big Data, and Chief Technology Officer at Pythian, has recorded a series of videos comparing the various big data platforms, presenting use cases to help you identify which ones will best suit your needs.

“Obviously there’s a big difference in cost,” Alex explains. “However, you would require significantly more engineering effort invested in Hadoop—so it’s a matter of scale when a solution like Hadoop becomes cost efficient.” Learn more key differentiators by watching Alex’s video Hadoop vs. Oracle Exadata.

Note: You may recognize this series, which was originally filmed back in 2013. After receiving feedback from our viewers that the content was great, but the video and sound quality were poor, we listened and re-shot the series.

Categories: DBA Blogs

Innovation in Managing the Chaos of Everyday Project Management

WebCenter Team - Mon, 2014-11-24 08:50
Oracle and Fishbowl Solutions Present:
A Breakthrough in Enterprise-Wide Project Management

Controlled chaos - this phrase sums up most enterprise-wide projects as workers, documents, and materials move from task to task. To effectively manage this chaos, project-centric organizations need to consider a new set of tools to allow for speedy access to all project assets and to ensure accurate and up-to-date information is provided to the entire project team.

Live Webcast: 4th December, 2014, 02:00 PM EST. Duration: 60 minutes. Register Now

Fishbowl Solutions and Oracle would like to invite you to a webinar on an exciting new solution for enterprise project management.

This solution transforms how project-based tools like Oracle Primavera, and project assets, such as documents and diagrams, are accessed and shared.


With this solution:

  • Project teams will have access to the most accurate and up-to-date project assets based on their role within a specific project;
  • Through a single dashboard, project managers will gain new real-time insight into the overall status of even the most complex projects;
  • The new mobile workforce will have direct access to the same insight and project assets through an intuitive mobile application.

With this real-time insight and enhanced information sharing and access, this solution can help project teams improve their ability to deliver on time and on budget.

Fishbowl's Cole Orndorff, who has 10+ years in the engineering and construction industry, will keynote and share how a mobile-ready portal can integrate project information from Oracle Primavera and other sources and serve it up to users in a personalized, intuitive user experience.

Please join us on Thursday, December 4th at 2:00 pm EST to better understand this new and exciting solution.



What Is Oracle 12 Unified Auditing? The View UNIFIED_AUDIT_TRAIL with 94 Columns

What is Oracle 12c Unified Auditing? The short answer is the view UNIFIED_AUDIT_TRAIL. This view consolidates all logging and auditing information into a single source. Regardless of whether you use Mixed Mode or Pure Unified Auditing, SYS.UNIFIED_AUDIT_TRAIL can be used.

The key column in SYS.UNIFIED_AUDIT_TRAIL is AUDIT_TYPE. This column shows which Oracle component the log data originated from:

SYS.UNIFIED_AUDIT_TRAIL Component Sources

AUDIT_TYPE Value    Description                                        Number of Columns
------------------  -------------------------------------------------  -----------------
Standard            Standard auditing including SYS audit records                     44
XS                  Real Application Security (RAS) and RAS auditing                  17
Label Security      Oracle Label Security                                             14
Datapump            Oracle Data Pump                                                   2
FineGrainedAudit    Fine grained audit (FGA)                                           1
Database Vault      Database Vault (DV)                                               10
RMAN_AUDIT          Oracle RMAN                                                        5
Direct path API     SQL*Loader Direct Load                                             1
------------------  -------------------------------------------------  -----------------
Total                                                                                 94
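As a first look at the trail, a count of audit records by the key column shows which components are actually writing audit data (a sketch; it assumes you hold the AUDIT_VIEWER role or equivalent privileges):

```sql
-- Breakdown of unified audit records by originating component.
SELECT audit_type, COUNT(*) AS records
  FROM unified_audit_trail
 GROUP BY audit_type
 ORDER BY records DESC;
```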

If you have questions, please contact us at info@integrigy.com

Tags: Auditing, Oracle Database
Categories: APPS Blogs, Security Blogs

Reminder to myself: turn off felix service urlhandlers in combined BPM & OSB12c installation

Darwin IT - Mon, 2014-11-24 05:12
Last week I started creating a few OSB services for my current project, which is in fact a BPM12c project that needs to be integrated with database services on an AS400, thus DB2. First I found that when I tried to deploy to a standalone WLS domain (created with the qs_config script), it lacked an OSB installation, whereas the integrated WebLogic default domain has one.

But when I tried to deploy a pretty simple project, I ran into the fault 'The WSDL is not semantically valid: Failed to read wsdl file from url due to -- java.net.MalformedURLException: Unknown protocol: servicebus.'

I even tried to do an import of a configuration.jar into the sbconsole, but same error here.

Frustration all over the place: how hard can it be, being quite an experienced OSB developer on 11g?

Luckily I wasn't the only frustrated chap in the field: Lucas Jellema had already run into it and found a solution, crediting Daniel Dias from Middleware by Link Consulting.