
Feed aggregator

HBR and Junk Charts

Abhinav Agarwal - Thu, 2015-01-15 04:39
Even the best can get it wrong. Only sometimes though, one hopes.
The venerable Harvard Business Review gets data visualizations horribly wrong. They have a post on Facebook where they contrast the costs of cancer treatment in the US and India.

The cost in the US is $22,000 on average, while in India it is shown as $2,900 (I would dispute this figure, as it looks very low).
The ratio is 7.58:1 (22,000 ÷ 2,900).



If you measure the heights of the two circles shown to represent these costs, the ratio comes out to approximately 7.2:1. That is not perfectly accurate, but it is close enough that we can ignore the difference.

However, when using a bubble or circle to compare two values, we are implicitly basing our comparison on the areas of the circles.
In case you have forgotten your geometry, the area of a circle is π × r² (i.e., pi times the square of the radius). Using this, the ratio of the areas of the two circles is 55:1! Yikes.
To put it another way, HBR is saying that if cancer treatment in the US costs $22,000, then in India it should cost $400. Or, if it costs $2,900 in India, it should cost $159,500 in the US. Either way, it is wrong.
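For anyone who wants to check the arithmetic, here is a minimal sketch (in Python, using the post's own cost figures; the exact area ratio depends on how precisely you measure the graphic) of why scaling circle heights by the value ratio distorts the comparison:

import math

us_cost, india_cost = 22_000, 2_900
cost_ratio = us_cost / india_cost              # ~7.59 - the ratio the chart should convey

# If the circles' heights (diameters) are drawn in the cost ratio, the areas -
# which are what the eye actually compares - differ by the square of that ratio.
implied_area_ratio = cost_ratio ** 2           # ~57.5, in the region of the ~55:1 measured above

# To make the areas carry the comparison, the radii should scale with the square root.
correct_radius_ratio = math.sqrt(cost_ratio)   # ~2.75

print(f"{cost_ratio:.2f} {implied_area_ratio:.1f} {correct_radius_ratio:.2f}")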


Vishal Sikka's Appointment as Infosys CEO

Abhinav Agarwal - Thu, 2015-01-15 04:38


My article on Vishal Sikka's appointment as CEO of Infosys was published in DNA on June 25, 2014.

This is the full text of the article:


Vishal Sikka's appointment as CEO of Infosys was by far the biggest news event for the Indian technology sector in some time. Sikka was most recently the Chief Technology Officer at the German software giant SAP, where he led the development of HANA - an in-memory analytics appliance that has proven, since its launch in 2010, to be the biggest challenger to Oracle's venerable flagship product, the Oracle Database. With the launch of Oracle Exalytics in 2012 and Oracle Database In-Memory this month, the final chapter and word on that battle between SAP and Oracle remains to be written. Vishal will watch that battle from the sidelines.



By all accounts, Vishal Sikka is an extraordinary person, and in appointing him Infosys has made what could well be a turning-point decision for the iconic Indian software services company. If well executed, five years from now people will refer to this event as the one that catapulted Infosys into a different league altogether. However, there are several open questions, challenges and opportunities confronting Infosys the company, Infoscions and shareholders, which Sikka will need to resolve.

First off, is Sikka a "trophy CEO"? There will be more than one voice whispering that Sikka's appointment is more of a publicity gimmick meant to save face for its iconic co-founder, Narayan Murthy, who has been unable to right the floundering ship of the software services giant. Infosys has seen a steady stream of top-level attrition for some time, and it only accelerated after Murthy's return. The presence of his son Rohan Murthy was seen to grate on several senior executives, and also did not go down too well with corporate governance experts. Infosys has also lagged behind its peers in earnings growth. The hiring of a high-profile executive like Sikka has certainly restored much of the lost sheen for Infosys. To sustain that lustre, however, he will need to get some quick wins under his belt.

The single biggest question on most people's minds is how well the new CEO will adapt to the challenge of running a services organisation. This assumes that he sees Infosys' long-term future in services at all. Other key issues include reconciling the "people versus products" dilemma. Infosys lives and grows on the back of its ability to hire more people, place them on billable offshored projects, and keep its salary expenses low - i.e. a volume business with wafer-thin margins that are constantly under pressure. This is different from the hiring philosophy adopted by leading software companies and startups around the world, which is to hire the best, from the best colleges, and provide them with a challenging and yet flexible work environment. It should be clear that a single company cannot have two diametrically opposite work cultures for any extended length of time. This, of course, assumes that Sikka sees a future for Infosys beyond labor cost-arbitraged services. Infosys' CEO, in an interview with the New York Times in 2005, had stated that he did not see the company aspiring beyond that narrow focus. Whether Sikka subscribes to that view or not is a different question.

In diversifying, it can be argued that IBM could serve as a model: it has developed excellence in the three areas of hardware, software, and services. But Infosys has neither a presence in hardware - and it is hard to imagine it getting into the hardware business, for several reasons - nor a particularly strong software products line of business. There is Finacle, but that too has not been performing particularly well. Sikka may see himself as the ideal person to incubate several successful products within Infosys. But there are several challenges here.

Firstly, there is no company, with the arguable exception of IBM, that has achieved excellence in both services and products. Not Microsoft, not Oracle, not SAP. Sikka will have to decide where to focus: stabilize the services business and develop niche but world-class products that are augmented by services, or build a small but strong products portfolio as a separate business that is hived off from the parent company - de facto if not in reality. One cannot hunt with the hounds and run with the hare. If he decides to focus on nurturing a products line of business, he leaves the company vulnerable to cut-throat competition on the one hand and the exit of talented people looking for greener pastures on the other.

Secondly, if Infosys under Sikka does get into products, then it will need to decide what products it builds. He cannot expect to build yet another database, or yet another operating system, or even yet another enterprise application, and hope for stellar results. To use a much-used phrase, he will need to creatively disrupt the market. Here again, Sikka's pedigree points to one area - information and analytics. This is a hot area of innovation which finds itself at the intersection of multiple technology trends - cloud, in-memory computing, predictive analytics and data mining, unstructured data, social media, data visualizations, spatial analytics and location intelligence, and of course, the mother of all buzzwords - big data. A huge opportunity awaits at the intersection of analytics, the cloud, and specialized solutions. Should Infosys choose to walk down this path, the probability of success is more than fair given Sikka's background. His name alone will attract the best talent from across the technology world. Also remember that the adoption of technology in India, despite its close to one billion mobile subscribers, is still abysmally low. There is a crying need for innovative technology solutions that can be adopted widely and replicated across the country. The several new cities planned by the government present Sikka and Infosys, and of course many other companies, with a staggering opportunity.

Thirdly, the new CEO will have the benefit of an indulgent investor community, but not for long. Given the high hopes that everyone has pinned on him, Sikka's honeymoon period with Dalal Street may last a couple of quarters, or perhaps even a year, but not much more. The clock is ticking. The world of technology, the world over, is waiting and watching.

(The opinions expressed in this article are the author's own, and do not necessarily reflect the views of dna)

Flipkart's Billion Dollar Sale, And A Few Questions

Abhinav Agarwal - Thu, 2015-01-15 04:35
My article on Flipkart's Billion Dollar Sale - and on a report that appeared in a business daily about the preparations that went into it - was published in DNA on December 29, 2014.

This is the full text of the article:
A Billion Dollar Sale, And A Few Questions, by Abhinav Agarwal, published in DNA, Dec 29 2014

An article published on an online news portal (reproduced from a business daily) claimed that "Flipkart's 'Big Billion Day' was planned over more than 700,000 man hours (six months of work put in by 280 people over 14 hours every day) to get the back-end systems ready." This is a stupendous achievement by any yardstick, and all the more creditable given that Flipkart's infrastructure is nothing to scoff at to begin with, and is rarely known to keel over during traffic surges. Despite all this preparation, however, Flipkart did experience issues during its big sale, which led to its founders issuing a public apology - an act of entrepreneurial humility that was well appreciated by many.

The article states that Flipkart "clocked a gross merchandise value (GMV) of $100 million". But what is "GMV"? According to Investopedia, Gross Merchandise Value, abbreviated as GMV, is "The total value of merchandise sold over a given period of time through a customer to customer exchange site. It is a measure of the growth of the business, or use of the site to sell merchandise owned by others." But there is some confusion as to what GMV actually means. This arises from the fact that GMV is not a standard accounting term. For instance, a search for "GMV" or for "Gross Merchandise Value" on the web site of The Institute of Chartered Accountants of India throws up zero results. GMV's definition differs based on each e-commerce vendor's assumptions. Therefore, if an item's price is marked at Rs 100, and Flipkart sells ten such items for Rs 70 each, is the GMV Rs 700 or Rs 1,000? Let us be generous and assume that GMV refers to the total sale value before discounts - that would make it easier for Flipkart to claim they clocked a hundred million dollars in GMV. Plus it is the logical thing to do - from a marketing perspective.
Next, the average discount offered at leading e-commerce sites like Flipkart can be 20%, 30%, or even touch 40% in some cases. It is generally understood that, at least for items like books, wholesalers get a discount of 40% off the list price from publishers. This can be lower for other categories of goods like electronics (retailers do not get the Apple iPhone at 40% off the list price - at least one hopes so!), but can be higher for categories like clothes. Therefore, assuming a discount of 40% (on the higher end), it means that for the hundred million dollars worth of GMV, Flipkart's cost for those goods was 60 million dollars. Assuming they passed on 20% of the GMV as a discount to the end customer - and discounts were generally higher than 20% during that sale period - it leaves them with a gross profit of twenty million dollars, out of which they have to cover all their other costs if they are to be profitable. Hold that number in your mind for a minute.

Let us now take a closer look at the other statement in the article's sub-heading, which says, "Flipkart's 'Big Billion Day' was planned over more than 700,000 man hours (six months of work put in by 280 people over 14 hours every day) to get the back-end systems ready." If Flipkart, as the article claims, took 700,000 man hours (the politically correct phrase would be "person-hours", but we will grant the author of the article some leeway here), that translates to a little over $20 million that the company spent on getting its network infrastructure scaled up for this sale. How? My assumptions - and there are many, many assumptions here - are, first, that the base salary for the Flipkart employees who worked on this initiative is Rs 26 lakh per annum (yes, yes, more on this later). Second, I have taken a year as having 2,080 work-hours (52 weeks times 40 hours per week). Third, I assume that the loaded cost adds 50% on top of a person's actual salary - or Annual Guaranteed Pay (AGP), as it is sometimes referred to. Fourth, I take Rs 65 to a dollar as my exchange rate (the exchange rate was lower a couple of months back; the numbers would look worse - for Flipkart - if I were to use those). Using these assumptions, at an aggregate level, this works out to about a hundred and thirty crore rupees, or US$20 million. You can see the calculations, and the assumptions, in the figure at the end. Hold this second figure of $20 million in your head.
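For readers who would like to check the arithmetic themselves, here is a small sketch (in Python) that reproduces both figures being compared; every input is one of the assumptions stated above, not a number of my own:

# Figure 1: gross profit from the sale
gmv_usd = 100e6                      # claimed GMV, taken as the pre-discount sale value
cost_of_goods = gmv_usd * 0.60       # goods bought at a 40% wholesale discount off list
customer_discount = gmv_usd * 0.20   # 20% of GMV passed on to customers as discounts
gross_profit = gmv_usd - cost_of_goods - customer_discount      # = $20 million

# Figure 2: cost of the 700,000 person-hours of preparation
annual_salary_inr = 2_600_000        # assumed base salary of Rs 26 lakh per annum
work_hours_per_year = 52 * 40        # 2,080 work-hours in a year
loaded_factor = 1.5                  # loaded cost adds 50% on top of salary
inr_per_usd = 65                     # assumed exchange rate
person_hours = 700_000

loaded_rate_inr = annual_salary_inr / work_hours_per_year * loaded_factor   # ~Rs 1,875/hour
prep_cost_usd = loaded_rate_inr * person_hours / inr_per_usd                # ~$20.2 million

print(f"gross profit ~${gross_profit/1e6:.0f}M, preparation cost ~${prep_cost_usd/1e6:.1f}M")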



Compare the two figures now. For a maximum gross profit of 20 million dollars from their big billion sale, Flipkart had to spend roughly the same sum on upgrading their hardware and network infrastructure to deal with the traffic to their web site. Ignoring other fixed costs, did Flipkart make money - any money at all - from their sale?

One could argue that there are many assumptions in my calculations, and I do agree that there are several. You can plug in your own numbers and the mileage will surely vary - but not by much, I expect. I have used three different sets of numbers (see below).



Second, the argument could be made that these are not contractors who are paid by the hour, and that many worked for "over fourteen hours every day" - which is not uncommon for young, passionate employees working at hot startups (I regularly clock in 12-hour workdays, and I am no longer young, nor do I work for a start-up!). On the other hand, I could argue that Rs 26 lakh is not that wildly off the mark as a figure, given that Flipkart recently made offers to fresh graduates at IIT-Kharagpur where the average salary offered was Rs 20 lakh - for engineers with zero work experience. Not everyone working on the preparation for the Big Billion Sale would have been a fresh-out-of-college greenhorn. The loaded cost factor could be more than 50%, given that Flipkart would be handing out stock options to its employees and offering other on-premise perks (the role model for all young startups seems to be Google).

But to fixate on the specifics of any one number here would be to miss the wood for the trees, in a manner of speaking. It is not the main point of my post either. The article itself reveals two hidden points. The first is that this article in the business daily is a successful example of a company's PR or marketing department getting its point of view across to a news organisation, not in the form of a press release, but as a news article. It lends an air of neutrality and credibility even as it presents the company's point of view, unquestioned. There is no mention of the article being a paid advertorial. Second, it also serves as an eloquent advertisement for Flipkart's formidable computing prowess and infrastructure. To that end, it is a signal of intent to its competitors, especially Amazon, that Flipkart won't be found wanting when it comes to competing with the big bad daddies of the global e-commerce world. In the end, however, the inescapable question remains - to drive its top-line, is Flipkart caught in a situation where the more it sells, the more it loses? Is it finding it difficult to achieve economies of scale? Or is it still in the growth stage, where margins and profits take a back seat to market share and growth? With more than a thousand crore rupees in annual revenues, if one of the most successful e-tailers in the country finds itself in this bind, the other, smaller players will find the going much tougher. Is the online e-commerce space headed for a brutal shakeout in 2015?
Views expressed are the author's own

President Obama takes stand against hackers

Chris Foot - Thu, 2015-01-15 02:46

Legislation pertaining to cybersecurity is a topic of discussion that isn't going away. Cyberattacks are only less shocking nowadays because they've grown more commonplace.

Therefore, it's not surprising that President Barack Obama is taking a stance on the matter, especially given that the United States has "more to lose than any other nation on Earth" as far as cyber warfare is concerned, according to former National Security Agency employee Edward Snowden. Snowden recently gave an interview to PBS as part of a documentary about cyber attacks, discussing the implications of what a major infiltration could do to a country.

"I think the public still isn't aware of the frequency with which these cyberattacks, as they're being called in the press, are being used by governments around the world, not just the U.S.," said Snowden, as quoted by the news source.

Obama's response 
While the sanctity of the U.S. government's IT assets is obviously a priority, the president is advocating for the protection of private industry as well. According to InfoWorld, President Obama recently announced, in a speech to the Federal Trade Commission, a proposed law that would obligate companies to notify customers of a data breach within 30 days of the attack occurring.

Obama acknowledged the various state-based laws regarding business transparency, but asserted that these mandates are not consistent, making a case for a federal law that would apply to all organizations based in the U.S.

"It's confusing for consumers and it's confusing for companies – and it's costly, too, to have to comply to this patchwork of laws," said Obama, as quoted by the source. "Sometimes, folks don't even find out their credit card information has been stolen until they see charges on their bill, and then it's too late."

What are the chances of the bill being passed? 
Whether the bill becomes law depends on the sentiments of those in Congress. John Pescatore, the SANS Institute's director of emerging trends, spoke with InfoWorld about the proposed legislation, commenting that several iterations of a similar bill have entered both houses but have not been approved.

What makes this particular iteration so different? For one thing, both Senate and House of Representatives majorities reside with the Republicans, so the president arguably doesn't have as much clout with the institutions as he would have otherwise. 

Regardless of whether the bill passes, organizations should not neglect to develop a recovery plan in the event they suffer a major data breach.


Announcement: DBaaS Workshop

Jean-Philippe Pinte - Thu, 2015-01-15 02:43
A workshop on DBaaS, aimed at Oracle partners, will take place on 4 and 5 March in Colombes (France).
More details: http://eventreg.oracle.com/profile/web/index.cfm?PKWebId=0x1713052088

NLS_LANG on Windows in Europe: WE8MSWIN1252 or WE8PC850?

Yann Neuhaus - Wed, 2015-01-14 23:57

In Europe we have accents and non-US7ASCII characters, so we need a special characterset. I'm not talking about Unicode here, which solves all of these problems. If you have a Java application, you have no problem: it's Unicode, and you can store all characters in one multi-byte characterset. But for other applications on Windows you have two possible charactersets for Western Europe: WE8MSWIN1252 and WE8PC850. WE8MSWIN1252 is the one that is set by default in the registry, but is it the right one?

Oracle E-Business Suite 12.0 - CPU Support Ends This Quarter

Oracle E-Business Suite 12.0 Extended Support ends on January 31, 2015.  Sustaining Support does not include security fixes in the form of Critical Patch Updates (CPU).  The final 12.0 CPU will be the January 2015 CPU released on January 20th.

Oracle E-Business Suite 12.0 customers should be looking to upgrade to 12.1 or 12.2 in the near future.

For those customers unable to upgrade from 12.0 in the near future, Integrigy will be including in our web application firewall product, AppDefend, virtual patching rules for web security vulnerabilities in Oracle E-Business Suite 12.0 which are patched in other versions (i.e., 11i, 12.1, and 12.2).  This will provide at least some protection from known web security vulnerabilities in unpatched 12.0 environments.

This support timeline is different from that of Oracle E-Business Suite 11i, which is covered by an Exception to Sustaining Support (ESS) until December 31, 2015 and includes security patches for this period.  Oracle E-Business Suite 11i customers should be planning to upgrade to 12.1 or 12.2 by the end of this year in order to stay supported with security patches and to get off the ridiculously old version of the Oracle Application Server.  Some components in the 11i installation of the Oracle Application Server on the application tier are 1999 versions.

 

Tags: Oracle E-Business Suite, Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Cary Millsap

Bobby Durrett's DBA Blog - Wed, 2015-01-14 14:31

This is my third of four posts about people who have made a major impact on my Oracle database performance tuning journey.  This post is about Cary Millsap.  The previous two were about Craig Shallahamer and Don Burleson.

I am working through these four people in chronological order.  The biggest impact Cary Millsap had on me was through the book Optimizing Oracle Performance which he co-authored with Jeff Holt.  I have also heard Cary speak at conferences and we had him in for a product demo one time where I work.

I have delayed writing this post because I struggle to put into words why Cary's book was so useful to me without repeating a long explanation of the book's contents.  Just before reading the book I had worked on a system with high CPU usage and queuing for the CPU.  I had just read the paper "Microstate Response-time Performance Profiling" by Danisment Gazi Unal, which talked about why CPU measurements in Oracle do not include time spent queued for the CPU.  Then I read Cary Millsap's book and it was very enlightening.  For one thing, the book was well written and convincing.  But the key concept was Cary Millsap's idea of looking at the waits and CPU time that Oracle reports at a session level and comparing that to the real elapsed time.  This performance profile with waits, CPU, and elapsed time formed the basis of my first conference talk, which I gave at Collaborate 06: PowerPoint, Word, Zip

Here is an example of a session profile from my presentation:

TIMESOURCE                  ELAPSED_SECONDS
--------------------------- ---------------
REALELAPSED                             141
CPU                                   44.81
SQL*Net message from client            9.27
db file sequential read                 .16

This is a profile of a session that spent roughly two-thirds of its time queued for the CPU.
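As a tiny illustration of the idea - this is not Bobby's actual script, which works against the V$ views, just a sketch in Python using the numbers from the profile above - the interesting quantity is the time Oracle does not account for:

elapsed = 141.0                              # REALELAPSED: wall-clock seconds
reported = {                                 # what Oracle reports for the session
    "CPU": 44.81,
    "SQL*Net message from client": 9.27,
    "db file sequential read": 0.16,
}
unaccounted = elapsed - sum(reported.values())
print(f"unaccounted: {unaccounted:.2f}s ({unaccounted / elapsed:.0%} of elapsed)")
# unaccounted: 86.76s (62% of elapsed) - the time spent queued for the CPU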

Since reading Optimizing Oracle Performance I have resolved many performance problems by creatively applying the concepts in the book.  The book focuses on using traces to build profiles.  I have made my own scripts against V$ views and I have also used Precise.  I have used traces as the book suggests but only with TKPROF.  I have not had a chance to use the tool that the book describes, the Method R Profiler.

However I do it, the focus is on waits, CPU as reported by Oracle, and real elapsed time, all for a single session.  It is a powerful way to approach performance tuning and the main thing I learned from Cary Millsap.  I highly recommend Cary Millsap and Jeff Holt's book to anyone who wants to learn more about Oracle database performance tuning because it made such a profound impact on my career.

– Bobby



Categories: DBA Blogs

cancannible role-based access control gets an update for Rails 4

Paul Gallagher - Wed, 2015-01-14 08:48
Can You Keep a Secret? / 宇多田ヒカル

cancannible is a gem that has been kicking around in a few large-scale production deployments for years. It still gets loving attention - most recently an official update for Rails 4 (thanks to the push from @zwippie).

And now also some demo sites - one for Rails 3.2.x and another for Rails 4.3.x so that anyone can see it in action.


So what exactly does cancannible do? In a nutshell, it is a gem that extends CanCan with a range of capabilities:

  • permissions inheritance (so that, for example, a User can inherit permissions from Roles and/or Groups)
  • general-purpose access refinements (to automatically enforce multi-tenant or other security restrictions)
  • automatic storage and loading of permissions from a database
  • optional caching of abilities (so that they don't need to be recalculated on each web request)
  • export of CanCan methods to the model layer (so that permissions can be applied in model methods, and easily set in a test case)

Bind Effects

Jonathan Lewis - Wed, 2015-01-14 07:24

A couple of days ago I highlighted an optimizer anomaly caused by the presence of an index with a descending column. This was a minor (unrelated) detail that appeared in a problem on OTN where the optimizer was using an index FULL scan when someone was expecting to see an index RANGE scan. My earlier posting supplies the SQL to create the table and indexes I used to model the problem – and in this posting I’ll explain the problem and answer the central question.

Here’s the query and execution plan (from 11.2.0.x) as supplied by the OP – the odd appearance of the sys_op_descend() function calls is the minor detail that I explained in the previous post, but that’s not really relevant to the question of why Oracle is using an index full scan rather than an index range scan. The /*+ first_rows */ hint isn’t something you should be using but it was in the OP’s query, so I’ve included it in my model:


select /*+ FIRST_ROWS gather_plan_statistics scanned */ count(1)      FROM  XXX
where  (((((COL1 = '003' and COL2 >= '20150120') and COL3 >= '00000000') and COL4>= '000000000000' )
or ((COL1 = '003' and COL2 >= '20150120') and COL3> '00000000' )) or (COL1= '003' and COL2> '20150120'))
order by COL1,COL2,COL3,COL4  

Plan hash value: 919851669  

---------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |      1 |        |  18533 |00:01:47.04 |    156K |  70286 |
|   1 |  TABLE ACCESS BY INDEX ROWID| XXX    |      1 |  7886K |  18533 |00:01:47.04 |    156K |  70286 |
|*  2 |   INDEX FULL SCAN           | XXXXPK |      1 |  7886K |  18533 |00:01:30.36 |    131K |  61153 |
---------------------------------------------------------------------------------------------------------  

Predicate Information (identified by operation id):
---------------------------------------------------
  2 - filter((("COL2">:B2 AND "COL1"=:B1 AND
              SYS_OP_DESCEND("COL2")<SYS_OP_DESCEND(:B2)) OR ("COL1"=:B1 AND "COL2">=:B2
              AND "COL3">:B3 AND SYS_OP_DESCEND("COL2")<=SYS_OP_DESCEND(:B2)) OR
              ("COL1"=:B1 AND "COL2">=:B2 AND "COL3">=:B3 AND "COL4">=:B4 AND
              SYS_OP_DESCEND("COL2")<=SYS_OP_DESCEND(:B2))))  

If you look closely you’ll see that the OP has NOT supplied the output from a call to dbms_xplan.display_cursor() – the column and table names are highly suspect (but that’s an allowable cosmetic change for confidentiality reasons); the giveaway is that the SQL statement uses literals but the execution plan is using bind variables (which are of the form B{number}, suggesting that the real SQL is embedded in PL/SQL with PL/SQL variables being used to supply values): the bind variables make a difference.

Let’s go back to my model to demonstrate the problem. Here’s a query with the same predicate structure as the problem query (with several pairs of brackets eliminated to improve readability) showing the actual run-time plan (from 11.2.0.4) when using literals:


select
        /*+ first_rows */
        *
from t1
where
        (C1 = 'DE' and C2 >  'AB')
or      (C1 = 'DE' and C2 >= 'AB' and C3 > 'AA' )
or      (C1 = 'DE' and C2 >= 'AB' and C3 >= 'AA' and C4 >= 'BB')
order by
        C1, C2, C3, C4
;

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |    21 |  2478 |     4  (25)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |    21 |       |     3  (34)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C1"='DE')
       filter(((SYS_OP_DESCEND("C2")<SYS_OP_DESCEND('AB') AND "C2">'AB') OR
              (SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C3">'AA' AND "C2">='AB') OR
              (SYS_OP_DESCEND("C2")<=SYS_OP_DESCEND('AB') AND "C4">='BB' AND "C2">='AB' AND
              "C3">='AA')))

As you can see, the optimizer has managed to “factor out” the predicate C1 = ‘DE’ from the three disjuncts and has then used it as an access() predicate for an index range scan. Now let’s see what the code and plan look like if we replace the four values by four bind variables:


variable B1 char(2)
variable B2 char(2)
variable B3 char(2)
variable B4 char(2)

begin
        :b1 := 'DE';
        :b2 := 'AB';
        :b3 := 'AA';
        :b4 := 'BB';
end;
/

select
        /*+ first_rows */
        *
from t1
where
        (C1 = :B1  and C2 >  :B2 )
or      (C1 = :B1  and C2 >= :B2 and C3 >  :B3 )
or      (C1 = :B1  and C2 >= :B2 and C3 >= :B3 and C4 >= :B4)
order by C1, C2, C3, C4
;

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |    31 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   437 | 51566 |    31   (4)| 00:00:01 |
|*  2 |   INDEX FULL SCAN           | T1_IASC |   437 |       |    27   (4)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter((("C1"=:B1 AND "C2">:B2) OR ("C1"=:B1 AND "C3">:B3 AND
              "C2">=:B2) OR ("C1"=:B1 AND "C4">=:B4 AND "C2">=:B2 AND "C3">=:B3)))

The optimizer hasn’t factored out the common expression C1 = :B1. The reason for this, I think, is that though WE know that :B1 is supposed to be the same thing in all three occurrences, the optimizer isn’t able to assume that that’s the case; in principle :B1 could be the placeholder for 3 different values – so the optimizer plays safe and optimizes for that case. This leaves it with three options: full tablescan with filter predicates, index full scan with filter predicates, or three-part concatenation with index range scans in all three parts. The combination of the /*+ first_rows */ hint and the “order by” clause which matches the t1_iasc index has left the optimizer choosing the index full scan path – presumably to avoid the need to collect all the rows and sort them before returning the first row.

Given our understanding of the cause of the problem we now have a clue about how we might make the query more efficient – we have to eliminate the repetition of (at least) the :B1 bind variable. In fact we can get some extra mileage by modifying the repetition of the :B2 bind variable. Here’s a rewrite that may help:


select
        /*+ first_rows */
        *
from t1
where
        (C1 = :B1 and C2 >= :B2)
and     (
             C2 > :B2
         or  C3 > :B3
         or (C3 >= :B3 and C4 > :B4)
        )
order by C1, C2, C3, C4
;

---------------------------------------------------------------------------------------
| Id  | Operation                   | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |         |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1      |   148 | 17464 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_IASC |   148 |       |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("C1"=:B1 AND "C2">=:B2 AND "C2" IS NOT NULL)
       filter(("C2">:B2 OR "C3">:B3 OR ("C4">:B4 AND "C3">=:B3)))

I’ve factored out as much of the C1 and C2 predicates as I can – and the optimizer has used the resulting conditions as the access() predicate on the index (adding in a “not null” predicate on C2 that looks redundant to me – in fact the index was on the primary key in the original, but I hadn’t included that constraint in my model). You’ll notice, by the way, that the cardinality is now 148; compare this with the previous cardinality of 437 and you might (without bothering to look closely at the 10053 trace) do some hand-waving around the fact that 437 = (approximately) 148 * 3, which fits the idea that the optimizer was treating the three :B1 appearances as if they were three different possible values accessing three sets of data.

Miscellaneous.

This isn’t the end of the story; there are always more complications and surprises in store as you look further into the detail. For example, on the upgrade to 12c the execution plan for the query with bind variables was the same (ignoring the sys_op_descend() functions) as the query using literals – the optimizer managed to factor out the C1 predicate: does this mean SQL*Plus got smarter about telling the optimizer about the bind variables, or does it mean the optimizer got smarter about something that SQL*Plus has been doing all along?

This change might make you think that the optimizer is supposed to assume that bind variables of the same name represent the same thing – but that’s not correct, and it’s easy to show; here’s a trivial example (accessing the same table with a query that, for my data, identifies the first row):


declare
        m_id number := 1;
        m_c1 char(2) := 'BV';
        m_c2 char(2) := 'GF';
        m_n number := 0;
begin
        execute immediate
                'SELECT /*+ FIND THIS */ COUNT(*) FROM T1 WHERE ID = :B1 AND C1 = :B1 AND C2 = :B1'
                into m_n
                using m_id, m_c1, m_c2
        ;
end;
/

select sql_id, sql_text from V$sql where sql_text like 'SELECT%FIND THIS%';

SQL_ID        SQL_TEXT
------------- ----------------------------------------------------------------------------------
9px3nuv54maxp SELECT /*+ FIND THIS */ COUNT(*) FROM T1 WHERE ID = :B1 AND C1 = :B1 AND C2 = :B1

If you were looking at the contents of v$sql, or a trace file, or an AWR report, you might easily be fooled into thinking that this was a query where the same value had been used three times – when we know that it wasn’t.

So, as we upgrade from 11g to 12c my model of the original problem suggests that the problem is going to go away – but, actually, I don’t really know why that’s the case (yet). On the other hand, I have at least recognised a pattern that the 11g optimizer currently has a problem with, and I have a method for helping the optimizer to be a little more efficient.

 


Concurrent RPD Development in OBIEE

Rittman Mead Consulting - Wed, 2015-01-14 07:06

OBIEE is a well established product, having been around in various incarnations for well over a decade. The latest version, OBIEE 11g, was released 3.5 years ago, and there are mutterings of OBIEE 12c already. In all of this time however, one thing it has never quite nailed is the ability for multiple developers to work with the core metadata model – the repository, known as the RPD – concurrently and in isolation. Without this, development is doomed to be serialised – with the associated bottlenecks and inability to scale in line with the number of developers available.

My former colleague Stewart Bryson wrote a series of posts back in 2013 in which he outlined the criteria for a successful OBIEE SDLC (Software Development LifeCycle) method. The key points were:

  • There should be a source control tool (a.k.a. version control system, VCS) that enables us to store all artefacts of the BI environment, including the RPD, Presentation Catalog, and so on. From here we can tag snapshots of the environment at a given point as being ready for release, and as markers for rollback if we take a wrong turn during development.
  • Developers should be able to do concurrent development in isolation.
    • To do this, source control is mandatory in order to enable branch-based development, also known as feature-driven development, which is a central tenet of an Agile method.

Oracle’s only answer to the SDLC question for OBIEE has always been MUDE. But MUDE falls short in several respects:

  • It only manages the RPD – there is no handling of the Presentation Catalog etc
  • It does not natively integrate with any source control
  • It puts the onus of conflict resolution on the developer rather than the “source master” who is better placed to decide the outcome.

Whilst it wasn’t great, it wasn’t bad, and MUDE was all we had. Either that, or manual integration into source control (1, 2) tools, which was clunky to say the least. The RPD remained a single object that could not be merged or managed except through the Administration Tool itself, so any kind of automatic merge strategies that the rest of the software world were adopting with source control tools were inapplicable to OBIEE. The merge would always require the manual launching of the Administration Tool, figuring out the merge candidates, before slowly dying in despair at having to repeat such a tortuous and error-prone process on a regular basis…

Then back in early 2012 Oracle introduced a new storage format for the RPD. Instead of storing it as a single binary file, closed to prying eyes, it was instead burst into a set of individual files in MDS XML format.

For example, one Logical Table was now one XML file on disk, made up of entities such as LogicalColumn, ExprText, LogicalKey and so on:

It even came with a set of configuration screens for integration with source control. It looked like the answer to all our SDLC prayers – now we OBIEE developers could truly join in with the big boys at their game. The reasoning went something like:

  1. An RPD stored in MDS XML is no longer binary
  2. git can merge code that is plain text from multiple branches
  3. Let’s merge MDS XML with git!

But how viable is MDS XML as a storage format for the RPD used in conjunction with a source control tool such as git? As we will see, it comes down to the Good, the Bad, and the Ugly…

The Good

As described here, concurrent and unrelated developments on an RPD in MDS XML format can be merged successfully by a source control tool such as git. Each logical object is a file, so git just munges (that’s the technical term) the files modified in each branch together to come up with a resulting MDS XML structure with the changes from each development in it.

The Bad

This is where the wheels start to come off. See, our automagic merging fairy dust is based on the idea that individually changed files can be spliced together, and that since MDS XML is not binary, we can trust a source control tool such as git to also work well with changes within the files themselves too.

Unfortunately this is a fallacy, and by using MDS XML we expose ourselves to greater complications than we would if we just stuck to a simple binary RPD merged through the OBIEE toolset. The problem is that whilst MDS XML is not binary, it is not unstructured either. It is structured, and it has application logic within it (the mdsid, of which more below).

Within the MDS XML structure, individual first-class objects such as Logical Tables are individual files, and structured within them in the XML are child-objects such as Logical Columns:

Source control tools such as git cannot parse it, and therefore do not understand what is a real conflict versus an unrelated change within the same object. If you stop and think for a moment (or longer) about quite what would be involved in accurately parsing XML (let alone MDS XML), you’ll realise that you basically need to reverse-engineer the Administration Tool to come up with an accurate engine.

We kind of get away with merging when the file differences are within an element in the XML itself. For example, the expression for a logical column is changed in two branches, causing clashing values within ExprText and ExprTextDesc. When this happens git will throw a conflict and we can easily resolve it, because the difference is within the element(s) themselves:

Easy enough, right?

But take a similarly “simple” merge conflict where two independent developers add or modify different columns within the same Logical Table, and we see what a problem there is when we try to merge it back together relying on source control alone.

It is obvious to a human, and obvious to the Administration Tool, that these two new columns are unrelated and can be merged into a single Logical Table without problem. In a paraphrased version of the MDS XML the two versions of the file look something like this, and the merge resolution is obvious:

But a source control tool such as git looks at the MDS XML as a plaintext file, not understanding the concept of an XML tree and sibling nodes, and throws its toys out of the pram with a big scary merge conflict:

Now the developer has to roll up his or her sleeves and try to reconcile two XML files – with no GUI to support or validate the change made except loading it back into the Administration Tool each time.

So if we want to use MDS XML as the basis for merging, we need to restrict our concurrent developments to completely independent objects. But, that kind of hampers the ideal of more rapid delivery through an Agile method if we’re imposing rules and restrictions like this.

The Ugly

This is where it gets a bit grim. Above we saw that MDS XML can cause unnecessary (and painful) merge conflicts. But what about if two developers inadvertently create the same object concurrently? The behaviour we’d expect to see is a single resulting object. But what we actually get is both versions of the object, and a dodgy RPD. Uh oh.

Here are the two concurrently developed RPDs, produced in separate branches isolated from each other:

And here’s what happens when you leave it to git to merge the MDS XML:

The duplicated objects now cannot be edited in the Administration Tool in the resulting merged RPD – any attempt to save them throws the above error.

Why does it do this? Because the MDS XML files are named after a globally unique identifier known as the mdsid, and not their corresponding RPD qualified name. And because the mdsid is unique across developments, two concurrent creations of the same object end up with different mdsid values, and thus different filenames.

Two files from separate branches with different names are going to be seen by source control as being unrelated, and so both are brought through in the resulting merge.

As with the unnecessary merge conflict above, we could define a process around same-object creation, or add in a manual equalise step. The real issue here is that the duplicates can arise without us being aware, because there is no conflict seen by the source control tool. It’s not like merging an un-equalised repository in the Administration Tool, where we’d get #1 suffixes on the duplicate objects so that at least (a) we spot the duplication and (b) the repository remains valid and the duplicate objects available to edit.

MDS XML Repository opening times

Whether a development strategy based on MDS XML is for you or not, another issue to be aware of is that for anything beyond a medium-sized RPD the opening times of an MDS XML repository are considerable. As in, a minute from a binary RPD, and 20 minutes from MDS XML. And to be fair, after 20 minutes I gave up, on the basis that no sane developer would write off that amount of their day simply waiting for the repository to open before they can even do any work on it. This rules out working in MDS XML format with any big repositories such as that from BI Apps.

So is MDS XML viable as a Repository storage format?

MDS XML does have two redeeming features:

  1. It reduces the size of your source control repository, because on each commit you will be storing just a delta of the overall repository change, rather than the whole binary RPD each time.
  2. For tracking granular development progress and changes you can identify what was modified through the source control tool alone – because the new & modified objects will be shown as changes:

But the above screenshots both give a hint of the trouble in store. The mdsid unique identifier is used not only in filenames – causing object duplication and strange RPD behaviour – but also within the MDS XML itself, referencing other files and objects. This means that as an RPD developer, or RPD source control overseer, you need to be confident that each time you perform a merge of branches you are correctly putting Humpty Dumpty back together in a valid manner.

If you want to use MDS XML with source control you need to view it as part of a larger solution, involving clear process and almost certainly a hybrid approach with the binary RPD still playing a part – and, whatever you do, with the Administration Tool within short reach. You need to be aware of the issues detailed above, decide on a process that will avoid them, and make sure you have dedicated resources who understand how it all fits together.

If not MDS XML, then what?…

Source control (e.g. git) is mandatory for any kind of SDLC, concurrent development included. But instead of storing the RPD in MDS XML, we store it as a binary RPD.

Wait wait wait, don’t go yet! … it gets better

By following the git-flow method, which dictates how feature-driven development is done in source control (git), we can write a simple script that determines, when merging branches, what the candidates are for an OBIEE three-way RPD merge.

In this simple example we have two concurrent developments – coded “RM–1” and “RM–2”. First off, we create two branches which take the code from our “mainline”. Development is done on the two separate features in each branch independently, and committed frequently per good source control practice. The circles represent commit points:

The first feature to be completed is “RM–1”, so it is merged back into “develop”, the mainline. Because nothing has changed in develop since RM–1 was created from it, the binary RPD file and all other artefacts can simply ‘overwrite’ what is there in develop:

Now at this point we could take “develop” and start its deployment into System Test etc, but the second feature we were working on, RM–2, is also tested and ready to go. Here comes the fancy bit! Git recognises that both RM–1 and RM–2 have made changes to the binary RPD, and being a binary file the RPD is not something git can merge itself. But instead of just collapsing in a heap and leaving it for the user to figure out, our script makes use of git and the git-flow method we have followed to work out the merge candidates for the OBIEE Administration Tool:

Even better, the script invokes the Administration Tool (which can be run from the command line; alternatively, the command-line tools comparerpd/patchrpd can be used) to automatically perform the merge. If the merge is successful, it goes ahead with the commit in git of the merge into the “develop” branch. The developer has not had to do any kind of interaction to complete the merge and commit.
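To give a flavour of how the merge candidates can be worked out - this is just a minimal sketch, not Rittman Mead's actual script, and the branch name, repository path and output filenames are assumptions for illustration - git itself can supply all three RPD versions:

import subprocess

def git_bytes(*args: str) -> bytes:
    """Run a git command and return its raw output."""
    return subprocess.check_output(["git", *args])

def rpd_merge_candidates(feature: str, mainline: str = "develop",
                         rpd_path: str = "repository/live.rpd") -> None:
    """Extract original/current/modified RPDs for an OBIEE three-way merge."""
    base = git_bytes("merge-base", mainline, feature).decode().strip()  # common ancestor commit
    for label, ref in (("original", base),       # the RPD both branches started from
                       ("current", mainline),    # mainline, already containing the RM-1 work
                       ("modified", feature)):   # the feature branch being merged in
        with open(f"{label}.rpd", "wb") as out:
            out.write(git_bytes("show", f"{ref}:{rpd_path}"))
    # original.rpd / current.rpd / modified.rpd can then be handed to the
    # Administration Tool (or comparerpd/patchrpd) to perform the three-way merge.

# e.g. rpd_merge_candidates("feature/RM-2")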

If the merge is not a slam-dunk, then we can launch the Administration Tool and graphically figure out the correct resolution – but using the already-identified merge candidates in order to shorten the process.

This is not perfect, but there is no perfect solution. It is the closest thing there is to perfection though, because it will handle merges of:

  • Unique objects
  • Same objects, different modifications (cf. the example above of two new columns on the same table)
  • Duplicate objects – by equalisation
Conclusion

There is no single right answer here, nor are any of the options overly appealing.

If you want to work with OBIEE in an Agile method, using feature-driven development, you will have to adopt and learn specific processes for working with OBIEE. The decision you have to make is on how you store the RPD (binary or multiple MDS XML files, or maybe both) and how you handle merging it (git vs Administration Tool).

My personal view is that taking advantage of git-flow logic, combined with the OBIEE toolset to perform three-way merges, is sufficiently practical to warrant leaving the RPD in binary format. The MDS XML format is a lovely idea but there are too few safeguards against dodgy/corrupt RPD (and too many unnecessary merge conflicts) for me to see it as a viable option.

Whatever option you go for, make sure you are using regression testing to test the RPD after you merge changes together, and ideally automate the testing too. Here at Rittman Mead we’ve written our own suite of tools that do just this – get in touch to find out more.

Categories: BI & Warehousing

Announcing the Next Generation of Oracle Engineered Systems

Live Launch Event: What happens when extreme performance meets extreme savings? Find out on January 21, 2015, as Larry Ellison, Executive Chairman of the Board and Chief Technology Officer, unveils...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Using Database In-Memory Column Store with Complex Datatypes

Marco Gralike - Wed, 2015-01-14 03:19
For those who are interested, here is the slide deck I used during UKOUG Tech14, regarding…

The last thing cybersecurity experts want: A ‘Skeleton Key’ for hackers

Chris Foot - Wed, 2015-01-14 01:06

Imagine giving a skeleton key to your databases to a cybercriminal – obviously a situation everybody would like to avoid.

While the so-named type of malware doesn't work exactly like a skeleton key, it still poses a grievous threat to financial institutions, government agencies, retailers and other organizations across a range of industries. 

What Skeleton Key can do for hackers 
According to Dark Reading, Dell SecureWorks' Counter Threat Unit discovered "Skeleton Key", which is capable of circumventing Active Directory systems that use single-factor user authentication. The way the malware is presented to victims is what makes it so dangerous. Dell's report found that Skeleton Key is implemented as an in-memory patch on a machine's AD domain controllers, enabling the hacker who initiated the attack to grant any user authorization. 

Essentially, using Skeleton Key eliminates the need for a cybercriminal to steal a user's login credentials or change his or her password. Don Smith, CTU's director of technology, informed the source that Skeleton Key also prevents behavioral analysis software from distinguishing an illegitimate administrator from a legitimate one.

"The Skeleton Key malware allows the adversary to trivially authenticate as any user using their injected password," explained Smith, as quoted by Dark Reading. "This can happen remotely for Webmail or VPN. This activity looks like, and is, normal end user activity, so the chances of the threat actor raising any suspicion is extremely low and this is what makes this malware particularly stealthy." 

The malware isn't perfect 
Although Skeleton Key may seem like the perfect tool for any cybercriminal, it's not without its own flaws. For one thing, Dark Reading noted that in order for a hacker to deploy the malware, he or she needs to have already obtained admin-level access to an organization's network. 

In addition, Forbes contributor Thomas Fox-Brewster noted that Skeleton Key isn't "persistent", meaning it is removed once an infected Active Directory system is rebooted. Once this step is taken, perpetrators will not be able to sign into systems as employees. However, this particular limitation can be worked around by using a Remote Access Trojan, which would allow Skeleton Key to get back up and running. 

It's malware such as Skeleton Key that necessitates a comprehensive database security monitoring strategy. Ensuring all data is secure involves more than simply establishing "more robust access permissions" – rather, it consists of consulting a team of experts who know how to defend databases against malware and other intrusion techniques. 


Keep ready to test the final EA of APEX 5.0

Dimitri Gielis - Tue, 2015-01-13 13:04
Oracle is gearing up to release APEX 5.0... the final early adopter release (EA3) will be released soon. Over 6000 people participated in APEX EA2...


Here's the email from Joel Kallman:



As EA3 will be very close to the final release of APEX 5.0, many more people will probably join, so get ready for it! I look forward to seeing what color schemes people will create with Theme Roller and what they make the Universal Theme look like - fun guaranteed :)

Categories: Development

What Is Wrong With Thanet?

Pete Scott - Tue, 2015-01-13 12:25
Well, that title can be taken many ways. It could be a plaintive "get your act together, Thanet!" or perhaps an appraisal of the issues that make Thanet a bit of a mess. I suppose it's up to you to decide which. Historically, Thanet is a real place; it was an island sitting on the […]

Dear Julia: SmartWatch Habits and Preferences

Oracle AppsLab - Tue, 2015-01-13 11:40

Julia’s recent post about her experiences with the Samsung Gear watches triggered a lively conversation here at the AppsLab. I’m going to share my response here and sprinkle in some of Julia’s replies.  I’ll also make a separate post about the interesting paper she referenced.

Dear Julia,

You embraced the idea of the smart watch as a fully functional replacement for the smart phone (nicely captured by your Fred Flintstone image). I am on the other end of the spectrum. I like my Pebble precisely because it is so simple and limited.

I wonder if gender-typical fashion and habit is a partial factor here. One reason I prefer my phone to my watch is that I always keep my phone in my hip pocket and can reliably pull it out in less than two seconds. My attitude might change if I had to fish around for it in a purse which may or may not be close at hand.

Julia’s response:

I don’t do much on the watch either. I use it on the go to:

  • read and send SMS
  • make and receive a call
  • read email headlines
  • receive alerts when meetings start
  • take small notes

and with Gear Live:

  • get driving directions
  • ask for factoids

I have two modes to my typical day. One is when I am moving around with hands busy. Second is when I have 5+ minutes of still time with my hands free. In the first mode I would prefer to use a watch instead of a phone. In the second mode I would prefer to use a tablet or a desktop instead of a phone. I understand that some people find it useful to have just one device – the phone – for both modes. From Raymond’s description of Gear S, it sounds like reading on a watch is also okay.

Another possible differentiator, correlated with gender, is finger size. For delicate tasks I sometimes ask my wife for help. Her small, nimble fingers can do some things more easily than my big man paws. Thus I am wary of depending too heavily on interactions with the small screen of a watch. Pinch-zooming a map is delightful on a phone but almost impossible on a watch. Even pushing a virtual button is awkward because my finger obscures almost the entire surface of the watch. I am comfortable swiping the surface of the watch, and tapping one or two button targets on it, but not much more. For this reason I actually prefer the analog side buttons of the Pebble.

Julia’s response:

Gear has a very usable interface. It is controlled by tap, swipe, a single analog button, and voice. Pinch-zoom of images was enabled on the old Gear, but there were no interactions that depended on pinch-zoom.

How comfortable are you talking to your watch in public? I have become a big fan of dictation, and do ask Siri questions from time to time, but generally only when I am alone (in my car, on a walk, or after everyone else has gone to bed). I am a bit self-conscious about talking to gadgets in public spaces. When other people do it near me I sometimes wonder if they are talking to me or are crazy, which is distracting or alarming, so I don’t want to commit the same offense.

I can still remember watching Noel talking to his Google Glass at a meeting we were in. He stood in a corner of the room, facing the wall, so that other people wouldn’t be distracted or think he was talking to them. An interesting adaption to this problem, but I’m not sure I want a world in which people are literally driven into corners.

Julia’s Response:

I am not at all comfortable talking to my watch. We should teach lipreading to our devices (wouldn't that be a good Kickstarter project?). But I would speak to the watch out of safety or convenience. Speaking to a watch is not as bad as speaking to glasses. I am holding the watch to my mouth, looking at it, and, in the case of Gear Live, first saying "Okay, Google." I don't think many people think I am talking to them. I must say most look at me with curiosity and, yes, admiration.

What acrobatics did you have to go through to use your watch as a camera? Did you take it off your wrist? Or were you able to simultaneously point your watch at your subject while watching the image on the watch? Did tapping the watch to take the photo jiggle the camera? Using the watch to take pictures of wine bottles and books and what-not is a compelling use case but often means that you have to use your non-watch hand to hold the object. If you ever expand your evaluation, I would love it if you could have someone else video you (with their smart watch?) as you take photos of wine bottles and children with your watch.

Julia’s Response:

No acrobatics at all. The camera was positioned in the right place. As a piece of industrial design it looked awful. My husband called it the "carbuncle" (I suspect it might be the true reason for the camera's disappearance in Gear Live). But it worked great. See my reflection in the mirror as I was taking the picture below? No acrobatics. The screen of the watch worked well as a viewfinder. I didn't have to hold these "objects" in my hands. Tapping didn't jiggle the screen.


Thanks again for a thought-provoking post, Julia.  I am also not sure how typical I am. But clearly there is a spectrum of how much smart watch interaction people are comfortable with.

John

Securing Big Data Part 6 - Classifying risk

Steve Jones - Tue, 2015-01-13 09:00
So now that your Information Governance groups consider Information Security to be important, you then have to think about how they should be classifying the risk. There are documents out there on some of these topics which talk about frameworks. British Columbia's government has one, for instance, that talks about High, Medium and Low risk, but for me that really misses the point and over-simplifies the
Categories: Fusion Middleware

The RedstoneXperience

WebCenter Team - Tue, 2015-01-13 07:29


Redstone Content Solutions Guest Blog Post

At Redstone Content Solutions, our #1 priority is enabling your business with WebCenter.   

We continually strive to earn your trust and believe WebCenter initiatives are most successful when a strong working relationship exists.
Redstone has developed a hybrid project methodology that incorporates Agile principles, years of WebCenter experience, and client feedback relating to their own best practices, governance guidelines and mandates.  We call this the RedstoneXperience.
Understanding that no two engagements are identical, Agile & Scrum aspects of the RedstoneXperience enable our team to proactively identify changing requirements and analyze remaining tasks. 
Our team utilizes Scrum, a pathway centered on the Empirical Process Control Model, to provide and exercise control through frequent inspection and adaptation. 
RedstoneXperience provides working solutions and visible progress on a frequent basis so that stakeholders are better equipped to make informed decisions.   
We are committed to your success, and have found that employing the RedstoneXperience improves project outcome, accelerates solution understanding, and shortens the learning curve to self-sufficiency.

To learn more about the RedstoneXperience, please follow the links below. 

Our Process. Project Phases. Team Members. Management Tools.

Redstone Content Solutions…We Deliver

Two Changes in PeopleTools Requirements

Duncan Davies - Tue, 2015-01-13 07:00

Oracle have just announced two changes to what they require customers to be running on.

PeopleTools 8.53 Patch 10 or above for PUM Patches

If you’re on PeopleSoft v9.2 and using the Update Images to select the patches to apply, then Oracle ‘strongly advises’ you to be on the .10 patch of PeopleTools 8.53 or higher.

From Oracle:

FSCM Update Image 9.2.010 and higher, HCM Update Image 9.2.009 and higher, and ELM Update 9.2.006 and higher all need PeopleTools 8.53.10 for many of the updates and fixes to be applied. Failure to update your PeopleTools patch level to PeopleTools 8.53.10 or higher will result in the inability to take these updates and fixes. It may also inhibit you from applying critical maintenance in the future.

New PeopleTools Requirements for PeopleSoft Interaction Hub

Oracle also announced that they’re changing the support policy for Interaction Hub and PeopleTools. Basically, if you use Interaction Hub you must upgrade to a PeopleTools release no later than 24 months after that PeopleTools release becomes generally available.

It was originally a little confusingly worded, but there’s now an example that made it clearer for me: “For example, PeopleTools 8.53 was released in February 2013. Therefore, customers who use Interaction Hub will be required to upgrade to PeopleTools 8.53 (or newer, such as PeopleTools 8.54) no later than February 2015 (24 months after the General Availability date of PeopleTools 8.53). As of February 2015, product maintenance and new features may require PeopleTools 8.53.” I suspect that this is going to impact quite a few customers. Full details here: https://blogs.oracle.com/peopletools/entry/important_peopletools_requirements_for_peoplesoft