
Feed aggregator

A few words about deployment of ADF production domain on Oracle Linux

Recently I configured an ADF production domain on Oracle Linux 6. One of the stages of this process is the installation and configuration of Application Development Runtime (ADR), which is...

Categories: DBA Blogs

PuppyCoin - New currency

WebCenter Team - Tue, 2014-04-01 03:15

I'm firmly convinced today that, as far as content goes, puppies have become the newest currency. Here at Oracle we received a notice today about a new feature on our benefits plan that rivals other company benefit plans. We just started this year with an option for TeleMedicine, which is great, but NONE of my friends have this PuppyBenefit where they work. So... maybe you should consider working for Oracle: great products, great people, awesome benefits and PUPPIES!

Introducing PuppyConnect

American Well is delighted to announce a brand new service: PuppyConnect. Although only in beta, PuppyConnect enables you to visit with a puppy 24/7/365 through secure video chat.

Connecting with a puppy is just as easy as speaking with a doctor or nutritionist on American Well. The only difference is that with PuppyConnect, your visit is with an adorable puppy rather than a clinician. You simply choose the puppy you would like to see, click connect, and the puppy will appear within moments – groomed and ready to go.

For more information on the PuppyConnect feature or to register for our beta experience, please visit

Here is their promotional video on the new service below. Check out the links above - you can even select your puppy! I'm already signed up!


Enterprises learn how to capitalize on big data

Chris Foot - Tue, 2014-04-01 01:42

Due to the limited capabilities of a 24-person IT department faced with data analytics programs, many organizations have turned to database administration experts to monitor and operate them. Though they may not deploy the systems themselves, an outsourced service well acquainted with specific client operations can provide valuable insight for business executives looking to gain actionable digital information. 

An unlikely friend 
Organizations providing data analytics systems often push their products as "one size fits all" programs that may or may not be applicable to businesses engaged in different industries. Database administration services acknowledge the specific needs of each of their clients and how they intend to use digital information processing software. Some may collect real-time data points on individual shopping habits while others may be using predictive tools to anticipate product backorders during an impending snow storm. 

According to CIO Magazine, rental-car company Hertz supplements its in-house analytics resources and data center with an outsourced IT service provider. Barbara Wixom, a business intelligence expert at the Massachusetts Institute of Technology, claimed that the nationwide organization relies on the database experts to purge unnecessary information, host and manage data and provide insights. One of the programs the company utilizes examines comments from Web surveys, emails and text messages so that store managers can get a better view of customer satisfaction. 

Connecting with the rest of the company
As database administrator services encounter hundreds, if not thousands of different data analytics programs in a typical work week, their personnel have obtained the invaluable ability to communicate the results of the programs to the people utilizing them. Predictive analytics tools provide actionable results, but learning how they work can be a daunting task for marketing professionals just trying to get market insight on particular individuals or populations. 

Ron Bodkin, a contributor to InformationWeek, noted that acclimating individual departments to specific data processing actions is essential to the survival of a company. The writer cited Hitachi Global Storage Technologies, which created a data processing platform capable of hosting each team's separate needs and desires while still providing executives with a holistic view of all operations. 

"Access to internal data often requires IT to move from limiting access for security to encouraging sharing while still governing access to data sets," claimed Bodkin. 

The writer also acknowledged the importance of a general willingness to learn. Who better than database experts to educate unknowledgeable executives in how analytics programs operate? 

New Doclet: Administering Search for Content Management Business Attribute Security in PeopleSoft Interaction Hub

PeopleSoft Technology Blog - Mon, 2014-03-31 17:17

PeopleSoft Interaction Hub now implements content search based on content management business attribute security when using the Global Search bar.  It also supports this search within the content management feature. For this purpose, PeopleSoft Interaction Hub delivers new search definitions, search categories, and indexes, which are described in this document.

PeopleSoft Interaction Hub delivers these new search definitions and search categories in addition to the existing delivered ones. This document is a supplement to the existing search definitions and search categories and the steps for deploying them. For information on the existing search definitions and search categories, refer to PeopleSoft Interaction Hub 9.1: Portal and Site Administration, “Understanding Search in PeopleSoft Interaction Hub” and “Administering Search Indexes.” This document is applicable to customers who have implemented the content management business attribute security (CM002) feature and are using Oracle's Secure Enterprise Search (SES) with PeopleSoft, or customers who are planning to implement the CM002 feature and are using SES.

Clarifications on UF Online Payments to Pearson Embanet

Michael Feldstein - Mon, 2014-03-31 15:35

I wrote a post over the weekend that included information from the Gainesville Sun about the University of Florida Online (UF Online) program and its expected payments to Pearson Embanet. Chris Newfield from Remaking the University also wrote on the subject today. Chris raises some very important issues in his post, including his point:

Universities may have a cost disease, but they now have a privatization disease that is even worse.

In the article, however, there seems to be a misunderstanding of how the revenue sharing agreement works. Given the importance of the questions that Chris raises, I think it is important to understand the payment model used by most Online Service Providers (OSP), such as the one in place at UF Online.

The part of the blog post that is mistaken, in my understanding, is this [emphasis added]:

UF did admit that it had to pay Pearson cash up front: it just wouldn’t say how much. A week later, Mr. Schweers reported that through various documents he’d been able to show that UF would pay Pearson Embanet $186 million over the 11 year life of the contract. The business plan sounds much like the Udacity-Georgia Tech deal. It involved very large growth projections to 13,000 students paying full in-state or non-resident tuition for an all-online program by 2018, with Pearson getting, in addition to its fee, $19 million of $43 million in projected revenues. 13,000 is the size of UF’s first year class.

The revenue estimates are worth pondering. Even if Pearson fails, it will effectively pocket all of the state funding that was given to UF for online, and some internal UF money besides. Pearson is owed $186 million over time for getting involved, and the state provided $35 million. Pearson will contractually absorb all of the state money and then be entitled to another $151 million of UF’s internal funds. (UF Associate Provost Andy McDonough says that Pearson will get $9.5 million in the first five years, but it is not clear whether or how this reflects the still partially redacted contract.)

If somehow the Pearson dragnet finds thousands of students to pay full tuition for an all-online program with the University of Florida name, UF is slated to gross $24 million in 2019, which is projected to rise to $48 million five years later. In this best possible scenario, UF will get back its initial $151 million around ten years from now. The University will thus be ready to earn its first net dollar in 2025.

The basic idea is that the OSP provides up-front investment, spending far more money in the initial years of an online program than it makes from the school. This is why 2U is growing quickly ($83.1 million revenue on 49% growth) but still is losing big ($27.9 million last year, with unclear prospects on breaking even). Most of 2U’s programs are in the early stages, when they are investing more in the online program than they are making in revenue.

In the UF Online case, they appear to be paying Pearson Embanet $9.5 million for the first five years as partial compensation for these up-front expenses. I believe that Pearson will internally spend far in excess of $9.5 million over the next five years, running a loss. During that same startup period, however, the Florida legislature will fund UF Online with $35 million. Pearson will only make 27% of this money if the Gainesville Sun is correct in its analysis of the contract.

After 2019, all payments shift to revenue from tuition and fees paid by students, as described by the Sun:

After 2018, UF will also stop paying Pearson directly and Pearson’s income will come entirely from its share of tuition revenue and any fees it charges. UF projects it will have over 13,000 students in UF Online generating $43 million in tuition revenue by 2019 — of which Pearson will get close to $19 million.

By 2024, with 24,000 students anticipated, revenues generated will be about $76 million, with $28 million going to Pearson, McCullough said.

OSPs typically take a percentage of the tuition revenue based on enrollment targets. What is important here is that the revenue for the OSP depends on enrollment – if UF Online does not hit the enrollment targets, Pearson Embanet will not get $186 million in revenue. They make a percentage of the revenue without guaranteed payments.

In the best possible scenario for UF Online and for Pearson Embanet, the school will start making money from students on day 1. In 2019, if UF Online hits enrollment targets, UF Online will net $24 million ($43 million of revenue, paying $19 million to Pearson Embanet). As enrollment grows (again, assuming that it does), UF Online will make more and more over time, estimated to net $48 million in 2024 ($76 million of revenue, paying $28 million to Pearson Embanet). If UF Online does not hit targets, both UF Online and Pearson Embanet will make far less than the projections in the article.

As mentioned before, Chris raises some important questions, but this is not a matter of a school paying all revenue to an OSP without seeing a dime of net revenue until 2025 and beyond.

Update (3/31): I found the spreadsheets in the business plan, and these contract numbers are directly derived from the plan. The key is that they label Pearson Embanet (OSP) as “P3” for Public / Private Partnership (see page 87 for the explanation).

As for the mechanism to pay Embanet, they use a moving scale, with different percentages of revenue split per year and per in-state or out-of-state tuition. In 2015 Pearson Embanet makes 40% of the in-state tuition and 60% of the out-of-state tuition, and then in 2022 they make 30% and 42%. This also shows the “Additional Fixed Fee” of $9.5 million broken up over the first five years. See here on page 84:

Revenue mechanism

On page 82 these numbers are applied to the estimated enrollment, with the resultant fee to Pearson Embanet labeled as “P3 Services”. This combines the tuition sharing along with the additional fixed fee. For example in 2016, ($3.2 m * 0.4) + ($7.2 m * 0.6) + $2.0 m = $7.6 million. If you add up the row labeled “P3 Services” you get the total of $186 million.
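As a sanity check, the page-82 arithmetic can be reproduced directly. A quick sketch (figures in millions of dollars are taken from the business plan; the variable names are mine):

```python
# 2016 figures (in $ millions) from the UF Online business plan, pages 82 and 84.
in_state_tuition = 3.2       # projected in-state tuition revenue
out_of_state_tuition = 7.2   # projected out-of-state tuition revenue
in_state_share = 0.40        # Pearson's share of in-state tuition
out_of_state_share = 0.60    # Pearson's share of out-of-state tuition
fixed_fee = 2.0              # 2016 installment of the $9.5M additional fixed fee

# "P3 Services" fee = tuition sharing plus the fixed fee installment
p3_services = (in_state_tuition * in_state_share
               + out_of_state_tuition * out_of_state_share
               + fixed_fee)
print(round(p3_services, 1))  # prints 7.6
```

Summing the "P3 Services" row for every year of the contract in the same way yields the $186 million total.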

Revenue and costs

What is unknown from this analysis is the internal costs from Pearson Embanet. The document on page 87 includes the following language, which seems to quantify the Embanet “investment” as “direct cost savings realized from these transfers” at $14 million per year.

There are some recognizable cost transfers in the service purchase, “partnership” plan. It is admittedly difficult to capture all of the services that are part of an external package in an internal matrix subject to per unit, per student, or per activity pricing. However, there are recognizable cost transfers in the market assessment, marketing services, recruitment, contact call center, production (on demand), program coordinators (retention), digital content and tutoring. The direct cost savings realized from these transfers is estimated at about $14 million per year. The present value of the P3 services annualized is approximately $15 million. The University believes the summation of the immediacy of the expertise, the on-request availability, the joint research opportunities, and the expanding innovative digital content represent greater value added than the differential.

Update (2): Note that these are projections that seem to be best-case scenarios.

Full disclosure: Pearson is a client of MindWires Consulting but not for OSP. All information here is from public sources.

The post Clarifications on UF Online Payments to Pearson Embanet appeared first on e-Literate.

To_char, Infinity and NaN

XTended Oracle SQL - Mon, 2014-03-31 15:23

Funny that Oracle can easily cast 'nan', 'inf', 'infinity', '-inf' and '-infinity' to the corresponding binary_float_infinity/binary_double_nan values, but there is no format model for to_char(binary_float_infinity, format) or to_binary_***(text_expr, format) that can output the same as to_char(binary_float_infinity)/to_binary_float('inf') without a format parameter:

If a BINARY_FLOAT or BINARY_DOUBLE value is converted to CHAR or NCHAR, and the input is either infinity or NaN (not a number), then Oracle always returns the pound signs to replace the value.

Little example:

SQL> select to_binary_float('inf') from dual;


SQL> select to_binary_float('inf','9999') from dual;
select to_binary_float('inf','9999') from dual
ERROR at line 1:
ORA-01722: invalid number

SQL> select
  2     to_char(binary_float_infinity)         without_format
  3    ,to_char(binary_float_infinity,'99999') with_format
  4    ,to_char(1e6d,'99999')                  too_large
  5  from dual;

--------- ------------------ ------------------
Inf       ######             ######

SQL> select to_char(0/0f) without_format, to_char(0/0f,'tme') with_format from dual;

--------- --------------------------------------------------------------------------
Nan       ################################################################
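One way around the pound signs is to special-case the non-finite values before applying a format model. This is only a workaround sketch (a CASE expression using Oracle's floating-point conditions, not a real format model):

```sql
-- Workaround sketch: handle Inf/NaN explicitly, format only finite values
select case
         when f is nan                then 'Nan'
         when f is infinite and f > 0 then 'Inf'
         when f is infinite           then '-Inf'
         else to_char(f, '99999')
       end as with_format
from (select binary_float_infinity as f from dual);
```

With f = binary_float_infinity this returns 'Inf' instead of '######'; for finite values it falls back to the normal format model.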

PS: this is just a crosspost from my old blog.

Categories: Development

The Art of Exploiting Injection Flaws

Slavik Markovich - Mon, 2014-03-31 15:14
Sid is doing his popular course, The Art of Exploiting Injection Flaws, at this year’s Black Hat. You can find more details here. Definitely highly recommended.

Customized pages with Detached Credential Collector (DCC)

Frank van Bortel - Mon, 2014-03-31 12:29
One of the worst documented areas in OAM; customizing pages with DCC. One revelation: you must use when you want to work, as seems to build the "Callback URL" list, that uses to destroy the session cookies. Frank

Oracle Midlands : Event #3 – Registration Open

Tim Hall - Mon, 2014-03-31 11:04

Registration is now open for Oracle Midlands Event #3 on Tuesday 20th May.

As I mentioned in a previous post, Christian Antognini will be the speaker for both sessions this time. He’ll be covering “12c Adaptive Query Optimization” and “Row Chaining and Row Migration Internals”.

Red Gate Software have kindly offered to sponsor the event again, so registration is free!

I’ve already registered. :) Please make the effort to come along and support the event!



Oracle Midlands : Event #3 – Registration Open was first posted on March 31, 2014 at 6:04 pm.

MapR Sandbox for Hadoop Learning

Surachart Opun - Mon, 2014-03-31 10:49
I received an email about the MapR Sandbox, a fully functional Hadoop cluster running in a virtual machine (CentOS 6.5) that provides an intuitive web interface for both developers and administrators getting started with Hadoop. I believe it's a good way to learn about Hadoop and its ecosystem. Users can download it for VMware or VirtualBox. I downloaded the VirtualBox image, imported it, and changed the network setting to "Bridged Adapter". After it started, I connected to http://ip-address:8443
Then I selected "Launch HUE" and "Launch MCS", got some errors, and fixed them.
Finally, I could use HUE and MCS.

Hue is an interface for interacting with web applications that access the MapR File System (MapR-FS). Use the applications in HUE to access MapR-FS, work with tables, run Hive queries, MapReduce jobs, and Oozie workflows.

The MapR Control System (MCS) is a graphical, programmatic control panel for cluster administration that provides complete cluster monitoring functionality and most of the functionality of the command line.

After reviewing the MapR Sandbox for VirtualBox: the "maprdev" account is a development account that can sudo to root.
login as: maprdev
Server refused our key
Using keyboard-interactive authentication.
Welcome to your Mapr Demo virtual machine.
[maprdev@maprdemo ~]$ sudo -l
Matching Defaults entries for maprdev on this host:
    !visiblepw, always_set_home, env_reset, env_keep="COLORS DISPLAY HOSTNAME HISTSIZE INPUTRC KDEDIR LS_COLORS", env_keep+="MAIL PS1 PS2 QTDIR USERNAME LANG
    env_keep+="LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY", secure_path=/sbin\:/bin\:/usr/sbin\:/usr/bin
User maprdev may run the following commands on this host:
[maprdev@maprdemo ~]$
[maprdev@maprdemo ~]$ sudo showmount -e localhost
Export list for localhost:
/mapr                *
/mapr/ *
[maprdev@maprdemo ~]$
Written by: Surachart Opun
Categories: DBA Blogs

Innovation Springs to Life Again - Award Nominations Now Open

WebCenter Team - Mon, 2014-03-31 07:00



2014 Oracle Excellence Awards: Oracle Fusion Middleware Innovation

Oracle is pleased to announce the call for nominations for the 2014 Oracle Excellence Awards: Oracle Fusion Middleware Innovation.  The Oracle Excellence Awards for Oracle Fusion Middleware Innovation honor organizations using Oracle Fusion Middleware to deliver unique business value.  This year, the awards will recognize customers across nine distinct categories:

Customers may submit separate nomination forms for multiple categories. The 2014 Fusion Middleware categories are as follows (subject to change):

If you consider yourself a pioneer using these solutions in innovative ways to achieve significant business value, submit your nomination for the 2014 Oracle Excellence Awards for Oracle Fusion Middleware Innovation by Friday, June 20, 2014, for a chance to win a FREE registration to Oracle OpenWorld 2014 (September 28-October 2) in San Francisco, California. Top customers will be showcased at Oracle OpenWorld 2014, get a chance to mingle with Oracle executives, network with their peers, and be featured in Oracle publications.

NOTE: All nominations are also considered for Oracle OpenWorld Session presentations and speakers. Speakers chosen receive FREE registration to Oracle OpenWorld. 

For additional details, See: Call for Nominations: Oracle Fusion Middleware Innovation

Last year’s winners can be seen here: 2013 FMW Innovation Winners

Come See Integrigy at Collaborate 2014

Come see Integrigy’s sessions at Collaborate 2014 in Las Vegas. Integrigy is presenting the following papers:

IOUG - #526 Oracle Security Vulnerabilities Dissected, Wednesday, April 9, 11:00am

OAUG – #14365 New Security Features in Oracle E-Business Suite 12.2, Friday April 11, 9:45am

OAUG – #14366 OBIEE Security Examined, Friday, April 11, 12:15pm

If you are going to Collaborate 2014, we would also be more than happy to talk with you about your Oracle security projects or questions. If you would like to talk with us while at Collaborate please contact us at

Tags: ConferencePresentation
Categories: APPS Blogs, Security Blogs

APEX World 2014

Rob van Wijk - Mon, 2014-03-31 03:23
The fifth edition of OGh APEX World took place last Tuesday at Hotel Figi, Zeist in the Netherlands. Again it was a beautiful day full of great APEX sessions. Every year I think we've reached the maximum number of people interested in APEX and we'll never attract more participants. But, after welcoming 300 attendees last year, 347 people showed up this year. Michel van Zoest and Denes Kubicek Rob van Wijk

Health care executives find challenges in new IT adoption

Chris Foot - Mon, 2014-03-31 01:59

As the United States Centers for Medicare and Medicaid Services push health care providers toward electronic health record adoption, many industry leaders are finding the process to be much more difficult than the federal government anticipated. Many physicians are claiming that their in-house IT departments are struggling with implementation, while others are relying on database administration services to successfully deploy EHR programs. 

Forcing deployment 
As outlined by CMS, Stage 2 Meaningful Use requires all health care companies to utilize EHR systems by the end of this year. While some organizations have had better luck than others, the general consensus among professionals is that the industry was taken off guard by the mandate. Creed Wait, a family-practice doctor living in Texas, spoke with The Atlantic contributor James Fallows on a few of the issues hospital IT departments are facing. 

In general, Wait noted that if the health care industry was ready to deploy EHR systems, participants would have done so of their own accord. By forcing hospitals and treatment centers to acclimate to software that – in a number of respects – is poorly designed, Wait claimed the current approach is counterproductive to achieving better care delivery. 

"Our IT departments are swimming upstream trying to implement and maintain software that they do not understand while mandated changes to this software are being released before we can get the last update debugged and working," said Wait, as quoted by Fallows. 

Let someone else handle it 
In an effort to abide by stringent government regulations, some health care CIOs are turning to database support services capable of implementing and managing EHR programs better than in-house IT teams. According to Healthcare IT News, Central Maine Healthcare CIO Denis Tanguay noted that his workload nearly quadrupled once CMS' regulations came into effect. With just a staff of 70 employees to manage IT operations for three hospitals and 85 physician practices, Tanguay claimed that his department was buckling under the pressure. 

"My CEO has a line: We're not in the IT business, we're in the health care business," said Tanguay, as quoted by Healthcare IT News. "This allows me to focus more on making sure that we're focused on the hospital."

In order to resolve the issue, Tanguay advised his fellow executives that investing in a third-party database administration firm would be the most efficient way to streamline the EHR adoption process. The source reported that an outsourced entity specializing in network maintenance would be able to dedicate more resources and personnel to abiding by stringent CMS standards. 

SQL Server: transparent data encryption, key management, backup strategies

Yann Neuhaus - Mon, 2014-03-31 01:55

Transparent Data Encryption requires the creation of a database encryption key. This key is part of the SQL Server encryption hierarchy, with the DPAPI at the top of the tree. Going through the tree from top to bottom, we find the service master key, the database master key, the server certificate or asymmetric key, and finally the database encryption key (AKA the DEK). In this hierarchy, each encryption key is protected by its parent. Encryption key management is one of the toughest tasks in cryptography: improperly managed encryption keys can compromise the entire security strategy.

Here is some basic advice on encryption keys:

  • Limit encryption key access to those who really need it
  • Back up encryption keys and secure them. This is important to be able to restore them in case of corruption or in disaster recovery scenarios
  • Rotate the encryption keys on a regular basis. Key rotation based on a regular schedule should be part of the IT policy. Leaving the same encryption key in place for lengthy periods of time gives hackers and other malicious persons time to attack it. By rotating your keys regularly, they become a moving target, much harder to hit.

SQL Server uses the ANSI X9.17 hierarchical model for key management, which has certain advantages over a flat single-key model, particularly in terms of key rotation. With SQL Server, rotating the encryption key that protects the database encryption key requires decrypting and re-encrypting only an insignificantly small amount of symmetric key data, not the entire database.

However, managing the rotation of the encryption keys properly is very important. Imagine a scenario with a rotation schedule of every day (yes, we are paranoid!) and a backup strategy with a full backup every Sunday and a transaction log backup every night.

Here is an interesting question I had to answer: if I have a database page corruption on Tuesday morning that requires a restore of the concerned page from the full backup and the transaction log backups from Monday to Tuesday, does it work with only the third encryption key? In short: do I need all three certificates TDE_Cert, TDE_Cert_2 and TDE_Cert_3 in this case?

To answer this, let’s try with the AdventureWorks2012 database and the table Person.Person.

First, we can see the current server certificate used to protect the DEK of the AdventureWorks2012 database (we can correlate this with the certificate thumbprint):

SELECT name AS certificate_name,
       pvt_key_encryption_type_desc AS pvt_key_encryption,
       thumbprint
FROM master.sys.certificates
WHERE name LIKE 'TDE_Cert%';
GO




SELECT DB_NAME(database_id) AS database_name,
       key_algorithm,
       key_length,
       encryptor_type,
       encryptor_thumbprint
FROM sys.dm_database_encryption_keys
WHERE database_id = DB_ID('AdventureWorks2012');




Now, we perform a full backup of the AdventureWorks2012 database followed by the database log backup:





Then, according to our rotation strategy, we replace the old server certificate TDE_Cert with the new one, TDE_Cert_2, to protect the DEK:

-- Create a new server certificate
USE [master];
GO

CREATE CERTIFICATE TDE_Cert_2 WITH SUBJECT = 'TDE Certificate 2';

-- Encrypt the DEK with the new server certificate TDE_Cert_2
USE AdventureWorks2012;
GO

ALTER DATABASE ENCRYPTION KEY
ENCRYPTION BY SERVER CERTIFICATE TDE_Cert_2;
GO

-- Drop the old server certificate TDE_Cert
USE [master];
GO

DROP CERTIFICATE TDE_Cert;
GO

SELECT name AS certificate_name,
       pvt_key_encryption_type_desc AS pvt_key_encryption,
       thumbprint
FROM master.sys.certificates
WHERE name LIKE 'TDE_Cert%';
GO




SELECT DB_NAME(database_id) AS database_name,
       key_algorithm,
       key_length,
       encryptor_type,
       encryptor_thumbprint
FROM sys.dm_database_encryption_keys
WHERE database_id = DB_ID('AdventureWorks2012');




We perform a new log backup again:





Finally, we repeat the same steps as above one last time (rotate the server certificate and perform a new log backup):

-- Create a new server certificate
USE [master];
GO

CREATE CERTIFICATE TDE_Cert_3 WITH SUBJECT = 'TDE Certificate 3';

-- Encrypt the DEK with the new server certificate TDE_Cert_3
USE AdventureWorks2012;
GO

ALTER DATABASE ENCRYPTION KEY
ENCRYPTION BY SERVER CERTIFICATE TDE_Cert_3;
GO

-- Drop the old server certificate TDE_Cert_2
USE [master];
GO

DROP CERTIFICATE TDE_Cert_2;
GO

SELECT name AS certificate_name,
       pvt_key_encryption_type_desc AS pvt_key_encryption,
       thumbprint
FROM master.sys.certificates
WHERE name LIKE 'TDE_Cert%';
GO




SELECT DB_NAME(database_id) AS database_name,
       key_algorithm,
       key_length,
       encryptor_type,
       encryptor_thumbprint
FROM sys.dm_database_encryption_keys
WHERE database_id = DB_ID('AdventureWorks2012');








So we have achieved our backup strategy: a full backup and a sequence of three transaction log backups, before introducing a database corruption next. At the same time, we have performed three rotations of the server certificate used as the encryption key. Now it's time to corrupt a data page that belongs to the table Person.Person in the AdventureWorks2012 database:

-- First, check the IAM page to get a page ID that belongs to the Person.Person table
DBCC IND(AdventureWorks2012, 'Person.Person', 1);
GO




We randomly take a page from the result, with ID 2840. To quickly corrupt the page, we use the undocumented DBCC WRITEPAGE as follows (/!\ don't use DBCC WRITEPAGE in a production environment /!\):


ALTER DATABASE AdventureWorks2012 SET SINGLE_USER;
GO

DBCC WRITEPAGE(AdventureWorks2012, 1, 2840, 0, 2, 0x1111, 1);
GO

ALTER DATABASE AdventureWorks2012 SET MULTI_USER;
GO

We corrupt the page with ID 2840 by writing two bytes with the value 0x1111 at offset 0. The last parameter, directOrBufferpool, allows page checksum failures to be simulated by bypassing the buffer pool and flushing the concerned page directly to disk. We have to switch the AdventureWorks2012 database to single-user mode in order to use this option.

Now let’s try to get data from the Person.Person table:

USE AdventureWorks2012;
GO

SELECT * FROM Person.Person;
GO


As expected, a logical consistency I/O error with an incorrect checksum occurs during the reading of the Person.Person table, with the following message:




At this point, we have two options:

  • Run DBCC CHECKDB with the REPAIR option, but we will likely lose data in this case
  • Restore the page ID 2840 from a consistent full backup and the necessary sequence of transaction log backups, after taking a tail-log backup

We are reasonable and decide to restore page 2840 from the necessary backups. But first, we have to perform a tail-log backup:




Now we begin our restore process by trying to restore the concerned page from the full backup, but we encounter the first problem:


-- Restore the page ID 2840 from the full backup
RESTORE DATABASE AdventureWorks2012 PAGE = '1:2840'
FROM DISK = 'E:\SQLSERVER\ENCRYPTEDBACKUP\ADVENTUREWORKS2012_DB.BAK'
WITH NORECOVERY, STATS = 10;
GO


According to the above error message, we can't restore the page from this full backup media because it is protected by a server certificate. The displayed thumbprint corresponds to the TDE_Cert certificate, which was deleted during the rotation. At this point, we understand why it is important to have a backup of the server certificate stored somewhere. This is where we remember the basics of encryption key management.

Of course, we were on the safe side and had backed up each server certificate after its creation, so we can restore the server certificate TDE_Cert:


USE [master]; GO
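Restoring TDE_Cert from its backup files would look something like this (the file paths and private-key password are illustrative):

```sql
-- Recreate TDE_Cert from its backup files (paths and password illustrative)
CREATE CERTIFICATE TDE_Cert
FROM FILE = 'E:\CERTS\TDE_Cert.cer'
WITH PRIVATE KEY (
    FILE = 'E:\CERTS\TDE_Cert.pvk',
    DECRYPTION BY PASSWORD = 'StrongPassword1!'
);
GO
```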



Then, if we try to restore the page from the full database backup, it now works:



To continue with the restore process, we now have to restore the transaction log backup sequence, beginning with the ADVENTUREWORKS2012_DB.TRN media:
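Restoring the first log backup follows the same pattern as the page restore from the full backup (the path is illustrative):

```sql
-- Restore the first transaction log backup (path illustrative)
RESTORE LOG AdventureWorks2012
FROM DISK = 'E:\SQLSERVER\ADVENTUREWORKS2012_DB.TRN'
WITH NORECOVERY, STATS = 10;
GO
```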




Then we try to restore the second transaction log backup, ADVENTUREWORKS2012_DB_2.TRN, and we face the same problem as with the full backup earlier. To open the backup media, we first have to restore the certificate with the thumbprint displayed below:




Ok, we have to restore the TDE_Cert_2 certificate …



… and we retry restoring the second transaction log. As expected, it works:



At this point, we have only two transaction log backups left to restore: ADVENTUREWORKS2012_DB_3.TRN and the tail log backup ADVENTUREWORKS2012_DB_TAILLO.TRN. Fortunately, these last two backup media are encrypted with TDE_Cert_3, the current server certificate that protects the DEK.
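The closing sequence looks roughly like this (paths illustrative), finishing with RECOVERY to bring the database online:

```sql
-- Restore the remaining log backups, then recover (paths illustrative)
RESTORE LOG AdventureWorks2012
FROM DISK = 'E:\SQLSERVER\ADVENTUREWORKS2012_DB_3.TRN'
WITH NORECOVERY;
GO
RESTORE LOG AdventureWorks2012
FROM DISK = 'E:\SQLSERVER\ADVENTUREWORKS2012_DB_TAILLOG.TRN'
WITH NORECOVERY;
GO
RESTORE DATABASE AdventureWorks2012 WITH RECOVERY;
GO
```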








The restore process is now finished and we can read the data from the Person.Person table without any problem:


USE AdventureWorks2012;
GO
SELECT * FROM Person.Person;
GO



To summarize, this post has shown the importance of good key management as part of a backup/restore strategy. Of course, we chose a paranoid scenario to highlight the problem quickly, but you can easily transpose it to a normal context with a regular rotation schedule for the encryption keys, whether they are protected by a server certificate, an asymmetric key, or a third-party tool.

So what about you, how do you manage your backup strategy with the rotation of the encryption keys?

IT outsourcing a popular trend in business practices

Chris Foot - Mon, 2014-03-31 01:45

Due to the complexity of contemporary IT infrastructures, many enterprises are turning to database administration services to efficiently manage and secure their digital assets. Between the sophistication of modern hackers and the number of endpoint devices connecting to corporate networks, executives are concluding that on-premise IT departments cannot ensure operability as well as outsourced services can.

Common problems 
As long as businesses continue to store critical information in their databases, hackers are sure to attempt to exploit them. Charlie Osborne, a contributor to ZDNet, claimed that these malevolent individuals and groups don't discriminate based on what kind of data enterprises harbor. Whether to obtain financial information, intellectual property or confidential information, cybercriminals look for the following vulnerabilities in an organization's network:

  • Companies without the assistance of third-party database administration often only test for what the system should be doing as opposed to what it should not. If unauthorized activity isn't identified, it can compromise an infrastructure.
  • Unnecessary database features employees neglect to use are often exploited by hackers capable of accessing the hub through legitimate credentials and then forcing the service to run malicious code. 
  • Database experts know that encryption keys shouldn't be stored on disk alongside the data they protect, but in-house IT teams may not be aware of this, effectively giving infiltrators the ability to quickly decrypt vital information. 

Moving ahead 
While some corporations choose to stick to management techniques, others are looking for ways to solidify the operability of their systems. Minneapolis/St.Paul Business Journal reported that Cargill, a company specializing in the food procurement process, recently announced that it intends to hire the expertise of a database administration service to manage and oversee all IT operations for the worldwide organization. Although the move could potentially take away 300 jobs from the Twin Cities, some of Cargill's personnel will be hired by the outsourced company. 

As Cargill operates in 67 countries and employs more than 142,000 people, its data collection needs are vast. Overall, the enterprise currently has 900 workers supporting IT operations, meaning that a mere 0.63 percent of staff is responsible for maintaining database functionality for the entire company. Furthermore, it's likely that many of these professionals don't have the industry-specific knowledge required to adequately manage its systems. 

In Cargill's case, consulting with a company to undertake all database administration duties seems like the more secure option. Having a team of professionals well versed in the environment focusing all their energy toward one task can provide the food logistics expert with the protection necessary to conduct global operations. 

Bash infinite loop script

Vattekkat Babu - Sun, 2014-03-30 23:42

There are times when cron won't do for automated jobs. The classic example is a script that needs some data entered at startup before settling into a cron-like loop. In most cases this can be handled with environment variables or config files. However, what if someone needs to enter a secret value? You don't want that stored anywhere on the filesystem. For situations like these, you can take inspiration from the following script.
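A minimal sketch of the idea: prompt once for the secret, keep it only in memory, and loop. The names run_job, APP_SECRET, INTERVAL and MAX_RUNS are illustrative, not from the original post:

```shell
#!/usr/bin/env bash
# Sketch only: a "poor man's cron" that asks for a secret once at startup,
# keeps it in memory (never on disk), then runs a job on a fixed interval.

run_job() {
    # Stand-in for the real work that needs the secret.
    echo "running job (secret length: ${#APP_SECRET})"
}

main() {
    local interval="${INTERVAL:-300}"   # seconds between runs
    local max_runs="${MAX_RUNS:-0}"     # 0 = loop forever
    # Prompt only when attached to a terminal and no secret was supplied,
    # so the secret never has to live in a file.
    if [ -z "${APP_SECRET:-}" ] && [ -t 0 ]; then
        read -r -s -p "Enter secret: " APP_SECRET; echo
    fi
    count=0
    while :; do
        run_job
        count=$((count + 1))
        if [ "$max_runs" -gt 0 ] && [ "$count" -ge "$max_runs" ]; then
            break
        fi
        sleep "$interval"
    done
}

# Bounded demo so the example terminates; in real use leave MAX_RUNS at 0.
MAX_RUNS=2 INTERVAL=0 APP_SECRET=demo main
```

In real use you would leave MAX_RUNS at 0 and start the script under screen, tmux or nohup so the loop survives the login session.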

Deterministic functions, result_cache and operators

XTended Oracle SQL - Sun, 2014-03-30 16:51

In previous posts about the caching mechanism of deterministic functions I wrote that cached results are kept only between fetch calls, but there is one exception to this rule: if all function parameters are literals, the cached result is not flushed on every fetch call.
A little example showing the difference:

SQL> create or replace function f_deterministic(p varchar2)
  2     return varchar2
  3     deterministic
  4  as
  5  begin
  6     dbms_output.put_line(p);
  7     return p;
  8  end;
  9  /
SQL> set arraysize 2 feed on;
SQL> set serverout on;
SQL> select
  2     f_deterministic(x) a
  3    ,f_deterministic('literal') b
  4  from (select 'not literal' x
  5        from dual
  6        connect by level<=10
  7       );

A                              B
------------------------------ ------------------------------
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal
not literal                    literal

10 rows selected.

not literal
not literal
not literal
not literal
not literal
not literal

As you can see, ‘literal’ was printed once, but ‘not literal’ was printed 6 times, so it was returned from cache 4 times.

Also I want to show the differences in consistency between:
1. Calling a function with deterministic and result_cache;
2. Calling an operator for a function with result_cache;
3. Calling an operator for a function with deterministic and result_cache.

In this example I will do updates in autonomous transactions to emulate updates from another session during query execution:
Spoiler:: Tables and procedures with updates

drop table t1 purge;
drop table t2 purge;
drop table t3 purge;

create table t1 as select 1 id from dual;
create table t2 as select 1 id from dual;
create table t3 as select 1 id from dual;

create or replace procedure p1_update as
  pragma autonomous_transaction;
begin update t1 set id=id+1; commit; end;
/
create or replace procedure p2_update as
  pragma autonomous_transaction;
begin update t2 set id=id+1; commit; end;
/
create or replace procedure p3_update as
  pragma autonomous_transaction;
begin update t3 set id=id+1; commit; end;
/

Spoiler:: Variant 1

create or replace function f1(x varchar2) return number result_cache deterministic
as
  r number;
begin
  select id into r from t1;
  return r;
end;
/

Spoiler:: Variant 2

create or replace function f2(x varchar2) return number result_cache
as
  r number;
begin
  select id into r from t2;
  return r;
end;
/
create or replace operator o2
binding(varchar2) return number
using f2;

Spoiler:: Variant 3

create or replace function f3(x varchar2) return number result_cache deterministic
as
  r number;
begin
  select id into r from t3;
  return r;
end;
/
create or replace operator o3
binding(varchar2) return number
using f3;


SQL> set arraysize 2;
SQL> select
  2     f1(dummy) variant1
  3    ,o2(dummy) variant2
  4    ,o3(dummy) variant3
  5  from dual
  6  connect by level<=10;

  VARIANT1   VARIANT2   VARIANT3
---------- ---------- ----------
         1          1          1
         2          1          1
         2          1          1
         3          1          1
         3          1          1
         4          1          1
         4          1          1
         5          1          1
         5          1          1
         6          1          1

10 rows selected.

SQL> /

  VARIANT1   VARIANT2   VARIANT3
---------- ---------- ----------
         7         11         11
         8         11         11
         8         11         11
         9         11         11
         9         11         11
        10         11         11
        10         11         11
        11         11         11
        11         11         11
        12         11         11

10 rows selected.

We can see that function F1 returns the same result for every 2 rows, which matches the fetch size ("set arraysize 2"),
while operators O2 and O3 return the same result for all rows of the first query execution; on the second execution their values have increased by 10, which is the number of rows.
What we can learn from this:
1. Calling function F1 with result_cache and deterministic reduces the number of function executions, but the results are inconsistent within the query;
2. Operator O2 returns consistent results, but the function is executed for every row because the result cache is invalidated on each execution;
3. Operator O3 behaves exactly like operator O2, regardless of the function being declared deterministic.

All test scripts:

Categories: Development

Partner Webcast – Oracle ADF Mobile - Implementing Data Caching and Syncing for Working Off Line

Mobile access to enterprise applications is fast becoming a standard part of corporate life. Such applications increase organizational efficiency because mobile devices are more readily at hand...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Visualising OBIEE DMS metrics with Graphite

Rittman Mead Consulting - Sun, 2014-03-30 13:50

Assuming you have set up obi-metrics-agent and collectl as described in my previous post, you have a wealth of data at your disposal for graphing and exploring in Graphite, including:

  • OS (CPU, disk, network, memory)
  • OBIEE’s metrics
  • Metrics about DMS itself
  • Carbon (Graphite’s data collector agent) metrics

In this post I’ll show you some of the techniques we can use to put together a simple dashboard.

Building graphs

First off, let’s see how Graphite actually builds graphs. When you select a data series from the Metrics pane it is added to the Graphite composer where you can have multiple metrics. They’re listed in a legend, and if you click on Graph Data you can see the list of them.

Data held in Graphite (or technically, held in whisper) can be manipulated and pre-processed in many ways before Graphite renders it. This can be a mathematical transform of the data (e.g. a moving average), but also a change to how the data and its label are shown. Here I’ll take the example of several of the CPU metrics (via collectl) to see how we can manipulate them.

To start with, I’ve just added idle, wait and user from the cputotals folder, giving me a nice graph thus:

We can do some obvious things like add in a title, from the Graph Options menu

Graphite functions

Looking at the legend there’s a lot of repeated text (the full qualification of the metric name) which makes the graph more cluttered and less easy to read. We can use a Graphite function to fix this. Click on Graph Data, and use ctrl-click to select all three metrics:

Now click on Apply Function -> Set Legend Name By Metric. The aliasByMetric function is wrapped around the metrics, and the legend on the graph now shows just the metric names which is much smarter:

You can read more about Graphite functions here.

Another useful technique is being able to graph out metrics using a wildcard. Consider the ProcessInfo group of metrics that DMS provides about some of the OBIEE processes:

Let’s say we want a graph that shows cpuTime for each of the processes (not all are available). We could add each metric individually:

But that’s time consuming, and assumes there are only two processes. What if DMS gives us data for other processes? Instead we can use a wildcard in place of the process name:


You can do this by selecting a metric and then amending it in the Graph Data view, or from the Graph Data view itself click on Add and use the auto-complete to manually enter it.

But now the legend is pretty unintelligible, and this time the aliasByMetric function won’t help because the metric name is constant (cpuTime). Instead, use the Set Legend Name By Node function. In this example we want the third node (the name of the process). Combined with a graph title, this gives us:

This aliasByNode method works well for Connection Pool data too. However, it can be sensitive to certain characters (including brackets) in the metric name, throwing an IndexError: list index out of range error. The latest version of obi-metrics-agent should work around this by modifying the metric names before sending them to carbon.

The above graph shows a further opportunity for using Graphite functions. The metric is a cumulative one: the total amount of CPU time that the process has used. What would be more useful is the delta between successive samples, and for this the derivative function is appropriate:

Sometimes you’ll get graphs with gaps in; maybe the server was busy and the collector couldn’t keep up.


To “gloss over” these, use the Keep Last Value function:
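Behind each of these UI actions Graphite simply wraps the series in a function; the equivalent render-API target expressions look like this (the metric paths are illustrative, your hostnames and prefixes will differ):

```
aliasByMetric(collectl.myhost.cputotals.{idle,wait,user})
aliasByNode(obi.myhost.ProcessInfo.*.cpuTime, 3)
derivative(obi.myhost.ProcessInfo.*.cpuTime)
keepLastValue(obi.myhost.ProcessInfo.*.cpuTime)
```

These same expressions can be pasted directly into the Graph Data view or used in the render API URL.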


Saving graphs

You don’t have to login to Graphite by default, but to save and return to graphs and dashboards between sessions you’ll want to. If you used the obi-metrics-agent installation script then Graphite will have a user oracle with password Password01. Click the Login button in the top right of the Graphite screen and enter the credentials.

Once logged in, you should see a Save icon (for you young kids out there, that’s a 3.5″ floppy disk…).

You can return to saved graphs from the Tree pane on the left:


As well as the standard Graphite graphing described above, you also have the option of using flot, which is available from the link in the top-right options, or the icon on an existing graph:


Graphlot/Flot is good for things like examining data values at specific times:


Creating a dashboard

So far we’ve seen individual graphs in isolation, which is fine for ad-hoc experimentation but doesn’t give us an overall view of a system. Click on Dashboard in the top-right of the Graphite page to go to the dashboards area, ignoring the error about the default theme.

You can either build Graphite dashboards from scratch, or you can bring in graphs that you have prepared already in the Graphite Composer and saved.

At the top of the Graphite Dashboard screen are the metrics available to you. Clicking on them drills down the metric tree, as does typing in the box underneath.

Selecting a metric adds it in a graph to the dashboard, and selecting a second adds it into a second graph:

You can merge graphs by dragging and dropping one onto the other:

Metrics within a graph can be modified with functions in exactly the same way as in the Graphite Composer discussed above:

To add in a graph that you saved from Graphite Composer, use the Graphs menu

You can resize the graphs shown on the dashboard, again using the Graphs menu:

To save your dashboard, use the Dashboard -> Save option.

Example Graphite dashboards

Here are some examples of obi-metrics-agent/Graphite being used in anger. Click on an image to see the full version.

  • OS stats (via collectl)
    OS stats from collectl
  • Presentation Services sessions, cache and charting
    Presentation Services sessions, cache and charting
  • BI Server (nqserver) Connection and Thread Pools
    BI Server (nqserver) Connection and Thread Pools
  • Response times vs active users (via JMeter)
    Response times vs active users (via JMeter)
Categories: BI & Warehousing