Feed aggregator

Throw it away - Why you shouldn't keep your POC

Rob Baillie - Sat, 2014-12-13 04:26

"Proof of Concepts" are a vital part of many projects, particularly towards the beginning of the project lifecycle, or even in the pre-business case stages.

They are crucial for ensuring that facts are gathered before some particularly risky decisions are made.  Technical or functional, they can address many different concerns and each one can be different, but they all have one thing in common.  They serve to answer questions.

It can be tempting, whilst answering these questions, to become attached to the code that you generate.

I would strongly argue that you should almost never keep the code that you build during a POC.  Certainly not to put into a production system.

I'd go so far as to say that planning to keep the code is often damaging to the proof of concept; planning to throw the code away is liberating, more efficient, and makes proofs of concept more effective by focussing minds on the questions that require answers.

Why do we set out on a proof of concept?

The purpose of a proof of concept is to (by definition):

  * Prove:  Demonstrate the truth or existence of something by evidence or argument.
  * Concept: An idea, a plan or intention.

In most cases, the concept being proven is a technical one.  For example:
  * Will this language be suitable for building x?
  * Can I embed x inside y and get them talking to each other?
  * If I put product x on infrastructure y will it basically stand up?

They can also be functional, but the principles remain the same for both.

It's hard to imagine a proof of concept that cannot be phrased as one or more questions.  In a lot of cases I'd suggest that there's only really one important question with a number of ancillary questions that are used to build a body of evidence.

The implication of embarking on a proof of concept is that when you start you don't know the answer to the questions you're asking.  If you *do* already know the answers, then the POC is of no value to you.

By extension, there is the implication that the questions posed need to be answered as soon as possible in order to support a decision.  If that's not the case then, again, the POC is probably not of value to you.

As such, the only thing that the POC should aim to achieve is to answer the question posed and to do so as quickly as possible.

This is quite different to what we set out to do in our normal software development process. 

We normally know the answer to the main question we're asking (how do we functionally provide a solution to this problem / take advantage of this opportunity), and most of the time is spent focussed on building something that is solid, performs well and is generally good enough to live in a production environment - in essence, not answering the question, but producing software.

What process do we follow when embarking on a proof of concept?

Since the aim of a POC is distinct from what we normally set out to achieve, the process for a POC is intrinsically different to that for the development of a production system.

With the main question in mind, you often follow an almost scientific process.  You put forward a hypothesis, you set yourself tasks that are aimed at collecting evidence that will support or deny that hypothesis, you analyse the data, put forward a revised hypothesis and you start again.

You keep going round in this routine until you feel you have an answer to the question and enough evidence to back that answer up.  It is an entirely exploratory process.

Often, you will find that you spend days following avenues that don't lead anywhere, backtracking and reassessing, following a zig-zag path through a minefield of wrong answers until you reach the end point.  In this kind of situation, the code you have produced is probably one of the most barnacle-riddled messes you have ever produced.

But that's OK.  The reason for the POC wasn't to build a codebase, it was to provide an answer to a question and a body of evidence that supports that answer.

To illustrate:

Will this language be suitable for building x?

You may need to check things like whether you can build the right type of user interfaces, whether APIs can be created, and whether there are ways of organising code that make sense for the long-term maintenance of the system.

You probably don't need to build a completely functional UI, create a fully functioning API with solid error handling or define the full set of standards for implementing a production quality system in the given language.

That said, if you were building a production system in the language you wouldn't dream of having an incomplete UI, an API that doesn't handle errors completely, or of just knocking stuff together in an ad-hoc manner.

Can I embed x inside y and get them talking to each other?

You will probably need to define a communication method and prove that it basically works.  Get something up and running that is at least reasonably functional in the "through the middle" test case.

You probably don't need to develop an architecture that is clean, with separation of concerns that means the systems are properly independent and backwards compatible with existing integrations.  Or that all interactions are properly captured and that exceptional circumstances are dealt with correctly.

That said, if you were building a production system, you'd need to ensure that you define the full layered architecture, understand the implications of lost messages, prove the level of chat that will occur between the systems.  On top of that you need to know that you don't impact pre-existing behaviour or APIs.

If I put product x on infrastructure y will it basically stand up?

You probably need to just get the software on there and run your automated tests.  Maybe you need to prove the performance and so you'll put together some ad-hoc performance scripts.

You probably don't need to prove that your release mechanism is solid and repeatable, or ensure that your automated tests cover some of the peculiarities of the new infrastructure, or that you have a good set of long term performance test scripts that drop into your standard development and deployment process.

That said, if you were building a production system, you'd need to know exactly how the deployments worked, fit it into your existing continuous delivery suite, performance test and analyse on an automated schedule.

Production development and Proof of Concept development are not the same

The point is, when you are building a production system you have to do a lot of leg-work; you know you can validate all the input being submitted in a form, or coming through an API - you just have to do it.

You need to ensure that the functionality you're providing works in the majority of use-cases, and if you're working in a TDD environment then you will prove that by writing automated tests before you've even started creating that functionality.

When you're building a proof of concept, not only should these tests be a lower priority, I would argue that they should be *no priority whatsoever*, unless they serve to test the concept that you're trying to prove.

That is,  you're not usually trying to ensure that this piece of code works in all use-cases, but rather that this concept works in the general case with a degree of certainty that you can *extend* it to all cases.

Ultimately, the important deliverable of a POC is proof that the concept works, or doesn't work; the exploration of ideas and the conclusion you come to; the journey of discovery and the destination of the answer to the question originally posed.

That is intellectual currency, not software.  The important delivery of a production build is the software that is built.

That is the fundamental difference, and why you should throw your code away.

Paginated HTML is here and has been for some time ... I think!

Tim Dexter - Fri, 2014-12-12 18:03

We have a demo environment in my team and of course things get a little beaten up in there. Our go-to 'here's Publisher' report was looking really bad. Data was not returning or being rendered correctly on the five templates we have for it.
So, I spent about a half hour cleaning up the report; getting things working again; clearing out the rubbish. I noticed that one of the layouts when rendered in HTML was repeatedly showing a header down the screen. Oh, I know where to get rid of that and off I click to the report properties to fix it. But what is this I see? Is it? Can it be? Are my tired old eyes deceiving me?

Yes, Dexter, you see that right, 'View Paginated'! I nervously changed the value to 'true' and went back to the HTML output.
Holy Amaze Balls Batman, paginated HTML, the holy grail of HTML rendered reports, the Mount Everest of ... no, that's too easy, the K2 of HTML output ... it's fan-bloody-tastic! Can you tell I'm excited? I was immediately on messenger to Leslie (doc writer extraordinaire).

Obviously not quite as big a deal in the sane, real world outside of my head. 'Oh yeah, we have that now ...' Leslie is so calm and collected, however, she does like Maroon 5 but, we overlook that :)

I command you'ers to go find the property and turn it on right now and bask in the glory that is 'paginated HTML'!
I cannot stop clicking back and forth and then to the end and then all the way back to the beginning. It's fantastic!

Just look at those icons, just click em, you know you want to!

Categories: BI & Warehousing

AZORA – Arizona Oracle User Group meeting January 20th

Bobby Durrett's DBA Blog - Fri, 2014-12-12 16:25

AZORA is planning a meeting January 20th.  Here is the link to RSVP: url

Hope to see you there. :)

– Bobby

Categories: DBA Blogs

Aliases with sdsql and simpler CTAS

Kris Rice - Fri, 2014-12-12 14:29
First, we just put up a new build of sdsql.  Go get it or the last thing here will not work.   SQL is a great and verbose language so there are many ways to shorten what we have to type.  As simple as a view, or saving a script to call later with @/path/to/sessions.sql.  SDSQL is taking it a step further and we added aliases to the tool.  Almost as if right on cue, John asked if we could add them.

What is SDSQL ?

Kris Rice - Fri, 2014-12-12 14:29
  SQL Developer is now up to version 4.1 and has had many additions over the years to beef up the sqlplus compatibility.  This is used today by millions of users adding up to millions if not billions of hours in the tool doing their work.  That means our support of core sqlplus has to be very full featured.  One idea we kicked around for a while but never had the time to do was to make our

SDSQL's flux capacitor

Kris Rice - Fri, 2014-12-12 14:29
  Writing SQL or any code is an iterative process.  Most of the time, seeing what you did, say, 5 minutes ago depends on how big your undo buffer is; better, if you are in SQL Developer, there's a full-blown history.  If you are in sqlplus, you are basically out of luck.

History

  SDSQL has built-in history that persists between sessions.  We are still jiggling where it stores the history so

Getting DDL for objects with sdsql

Kris Rice - Fri, 2014-12-12 14:29
Getting DDL out for any object is quite simple.  You can just call dbms_metadata with something nice and easy like select dbms_metadata.get_ddl('TABLE','EMP') from dual;

SQL> select dbms_metadata.get_ddl('TABLE','EMP') from dual;

DBMS_METADATA.GET_DDL('TABLE','EMP')
--------------------------------------------------------------------------------
  CREATE TABLE "KLRICE"."EMP"
   ( "EMPNO" NUMBER

SQL Developer 4.1 EA1 is out

Kris Rice - Fri, 2014-12-12 14:29
SQL Developer 4.1 is out for tire kicking.  There are a lot of new things in there as well as some great enhancements.  There's a nice shiny new Instance Monitor to get an overview of what's going on.   Keep in mind this is in flux from a UI stance and the final may be quite different from what you see today.  There's tons of data here and mostly all things have drill downs. Also, we took the

Notable updates of SUSE Linux Enterprise 12

Chris Foot - Fri, 2014-12-12 13:03


Hi, welcome to RDX! Using SUSE Linux Enterprise Server to manage your workstations, servers and mainframes? SUSE recently released a few updates to the solution, dubbed Linux Enterprise Server 12, that professionals should take note of.

For one thing, SUSE addressed the problem with Unix's GNU Bourne Again Shell, also known as the "Shellshock" bug. This is a key fix, as it prevents hackers from placing malicious code on servers through remote computers.

As far as disaster recovery capabilities are concerned, Linux Enterprise Server 12 is equipped with snapshot and full-system rollback features. These two functions enable users to revert a system to its original configuration if it happens to fail.

Want a team of professionals that can help you capitalize on these updates? Look no further than RDX’s Linux team – thanks for watching!

The post Notable updates of SUSE Linux Enterprise 12 appeared first on Remote DBA Experts.

SDSQL - Editing Anyone?

Barry McGillin - Fri, 2014-12-12 12:05
Since we dropped our beta out of SQLDeveloper 4.1 and announced SDSQL, we've been busy getting some of the new things out to users.  We support SQL*Plus editing straight out of the box, but one thing that was always annoying was that when you make a mistake, you can't fix it until you have finished typing, and then have to go back and add a line like this.

This was always the way, as console editors didn't let you move around; the best you could hope for on the command line was a decent line editor, and anything above was printed to the screen and not accessible except through commands like you see here in the images above.

Well, not any more.  In SDSQL we've taken a look at several things like history, aliases and colors, and we've now added a separate multiline console editor which allows you to walk up and down your buffer and make all the changes you want before executing.  Sounds normal, right? So, that's what we did.  Have a look and tell us what you think.

Log Buffer #401, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-12-12 09:00

This Log Buffer Edition goes right through the fields of salient database blog posts and comes out with something worth reading.


Oracle:

Extract SQL full text from SQL Monitor html.

Disruption: Are Hot Brands Breaking the Rules?

Understanding Flash: Unpredictable Write Performance.

The caveats of running .sql scripts with GUI tools.

File Encoding in the Next Generation Outline Extractor.

SQL Server:

Arshad Ali discusses how to use CTE and the ranking function to access or query data from previous or subsequent rows.
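
As a rough sketch of that technique (the daily_sales table and its columns are invented for illustration, not taken from the article): number the rows in a CTE with ROW_NUMBER(), then self-join with an offset of one to line each row up with its predecessor.

```sql
-- Hypothetical example: compare each day's sales with the previous day's.
WITH numbered AS (
    SELECT sale_date,
           amount,
           ROW_NUMBER() OVER (ORDER BY sale_date) AS rn
    FROM   daily_sales
)
SELECT cur.sale_date,
       cur.amount,
       prev.amount              AS previous_amount,
       cur.amount - prev.amount AS change
FROM   numbered AS cur
LEFT JOIN numbered AS prev
       ON prev.rn = cur.rn - 1;   -- rn - 1 reaches the previous row
```

On SQL Server 2012 and later the same result can be had more directly with LAG(amount) OVER (ORDER BY sale_date).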

SSRS – Report for Stored Procedure with Multiple Values Passed.

Continuous Delivery for Databases: Microservices, Team Structures, and Conway's Law.

Scripting SQL Server databases with SMO using EnforceScriptingOptions.

How to troubleshoot SSL encryption issues in SQL Server.


MySQL:

MySQL 5.7: only_full_group_by Improved, Recognizing Functional Dependencies, Enabled by Default!

MaxScale, manual control, external monitors and notification methods.

Recover MySQL root password without restarting MySQL (no downtime!)

Oracle DBAs have had the luxury of their V$ variables for a long time while we MySQL DBAs pretended we were not envious.

Categories: DBA Blogs

push_pred – evolution

Jonathan Lewis - Fri, 2014-12-12 08:22

Here’s a query (with a few hints to control how I want Oracle to run it) that demonstrates the difficulty of trying to solve problems by hinting (and the need to make sure you know where all your hinted code is):

	select
		/*+
			qb_name(main)
			leading (@main t1@main v1@main t4@main)
		*/
		t1.*, v1.*, t4.*
	from
		t1,
		(
		select	/*+ qb_name(inline) no_merge */
			t2.n1, t3.n2, count(*)
		from	t2, t3
		where exists (
			select	/*+ qb_name(subq) no_unnest push_subq */
				null
			from	t5
			where	t5.object_id = t2.n1
			)
		and	t3.n1 = t2.n2
		group by t2.n1, t3.n2
		)	v1,
		t4
	where
		v1.n1 = t1.n1
	and	t4.n1(+) = v1.n1

Nominally it’s a three-table join, except the second table is an in-line view which joins two tables and includes an existence subquery. Temporarily I have made the join to t4 an outer join – but that’s just to allow me to make a point, I don’t want an outer join in the final query. I’ve had to include the no_merge() hint in the inline view to stop Oracle using complex view merging to “join then aggregate” when I want it to “aggregate then join”; I’ve included the no_unnest and push_subq hints to make sure that the subquery is operated as a subquery, but operates at the earliest possible moment in the inline view. Ignoring the outer join (which would make operation 1 a nested loop outer), this is the execution plan I want to see:

| Id  | Operation                         | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                  |       |    50 | 12850 |  4060   (1)| 00:00:21 |
|   1 |  NESTED LOOPS                     |       |    50 | 12850 |  4060   (1)| 00:00:21 |
|   2 |   NESTED LOOPS                    |       |    50 | 12850 |  4060   (1)| 00:00:21 |
|   3 |    NESTED LOOPS                   |       |    50 |  7400 |  4010   (1)| 00:00:21 |
|   4 |     TABLE ACCESS FULL             | T1    |  1000 |   106K|     3   (0)| 00:00:01 |
|   5 |     VIEW PUSHED PREDICATE         |       |     1 |    39 |     4   (0)| 00:00:01 |
|   6 |      SORT GROUP BY                |       |     1 |    16 |     4   (0)| 00:00:01 |
|   7 |       NESTED LOOPS                |       |     1 |    16 |     3   (0)| 00:00:01 |
|   8 |        TABLE ACCESS BY INDEX ROWID| T2    |     1 |     8 |     2   (0)| 00:00:01 |
|*  9 |         INDEX UNIQUE SCAN         | T2_PK |     1 |       |     1   (0)| 00:00:01 |
|* 10 |          INDEX RANGE SCAN         | T5_I1 |     1 |     4 |     1   (0)| 00:00:01 |
|  11 |        TABLE ACCESS BY INDEX ROWID| T3    |     1 |     8 |     1   (0)| 00:00:01 |
|* 12 |         INDEX UNIQUE SCAN         | T3_PK |     1 |       |     0   (0)| 00:00:01 |
|* 13 |    INDEX UNIQUE SCAN              | T4_PK |     1 |       |     0   (0)| 00:00:01 |
|  14 |   TABLE ACCESS BY INDEX ROWID     | T4    |     1 |   109 |     1   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   9 - access("T2"."N1"="T1"."N1")
              "T5" "T5" WHERE "T5"."OBJECT_ID"=:B1))
  10 - access("T5"."OBJECT_ID"=:B1)
  12 - access("T3"."N1"="T2"."N2")
  13 - access("T4"."N1"="V1"."N1")

Note, particularly, operation 5: VIEW PUSHED PREDICATE, and the associated access predicate at line 9 "t2.n1 = t1.n1" where the predicate based on t1 has been pushed inside the inline view: so Oracle will evaluate a subset view for each selected row of t1, which is what I wanted. Then you can see operation 10 is an index range scan of t5_i1, acting as a child to the index unique scan of t2_pk of operation 9 – that's Oracle keeping the subquery as a subquery and executing it as early as possible.

So what happens when I try to get this execution plan using the SQL and hints I’ve got so far ?

Here’s the plan I got from

| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT             |       |    50 | 12750 |    62   (4)| 00:00:01 |
|   1 |  NESTED LOOPS                |       |    50 | 12750 |    62   (4)| 00:00:01 |
|*  2 |   HASH JOIN                  |       |    50 |  7350 |    12  (17)| 00:00:01 |
|   3 |    TABLE ACCESS FULL         | T1    |  1000 |   105K|     3   (0)| 00:00:01 |
|   4 |    VIEW                      |       |    50 |  1950 |     9  (23)| 00:00:01 |
|   5 |     HASH GROUP BY            |       |    50 |   800 |     9  (23)| 00:00:01 |
|*  6 |      HASH JOIN               |       |    50 |   800 |     7  (15)| 00:00:01 |
|*  7 |       TABLE ACCESS FULL      | T2    |    50 |   400 |     3   (0)| 00:00:01 |
|*  8 |        INDEX RANGE SCAN      | T5_I1 |     1 |     4 |     1   (0)| 00:00:01 |
|   9 |       TABLE ACCESS FULL      | T3    |  1000 |  8000 |     3   (0)| 00:00:01 |
|  10 |   TABLE ACCESS BY INDEX ROWID| T4    |     1 |   108 |     1   (0)| 00:00:01 |
|* 11 |    INDEX UNIQUE SCAN         | T4_PK |     1 |       |     0   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("V1"."N1"="T1"."N1")
   6 - access("T3"."N1"="T2"."N2")
              FROM "T5" "T5" WHERE "T5"."OBJECT_ID"=:B1))
   8 - access("T5"."OBJECT_ID"=:B1)
  11 - access("T4"."N1"="V1"."N1")

In 10g the optimizer has not pushed the join predicate down into the view (the t1 join predicate appears in the hash join at line 2); I think this is because the view has been declared non-mergeable through a hint. So let's upgrade to a newer version:

| Id  | Operation                        | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT                 |       |    50 | 12950 |  4008K  (1)| 05:34:04 |
|   1 |  NESTED LOOPS                    |       |    50 | 12950 |  4008K  (1)| 05:34:04 |
|   2 |   MERGE JOIN CARTESIAN           |       |  1000K|   205M|  2065   (3)| 00:00:11 |
|   3 |    TABLE ACCESS FULL             | T1    |  1000 |   105K|     3   (0)| 00:00:01 |
|   4 |    BUFFER SORT                   |       |  1000 |   105K|  2062   (3)| 00:00:11 |
|   5 |     TABLE ACCESS FULL            | T4    |  1000 |   105K|     2   (0)| 00:00:01 |
|   6 |   VIEW PUSHED PREDICATE          |       |     1 |    43 |     4   (0)| 00:00:01 |
|   7 |    SORT GROUP BY                 |       |     1 |    16 |     4   (0)| 00:00:01 |
|*  8 |     FILTER                       |       |       |       |            |          |
|   9 |      NESTED LOOPS                |       |     1 |    16 |     3   (0)| 00:00:01 |
|  10 |       TABLE ACCESS BY INDEX ROWID| T2    |     1 |     8 |     2   (0)| 00:00:01 |
|* 11 |        INDEX UNIQUE SCAN         | T2_PK |     1 |       |     1   (0)| 00:00:01 |
|* 12 |         INDEX RANGE SCAN         | T5_I1 |     1 |     4 |     1   (0)| 00:00:01 |
|  13 |       TABLE ACCESS BY INDEX ROWID| T3    |  1000 |  8000 |     1   (0)| 00:00:01 |
|* 14 |        INDEX UNIQUE SCAN         | T3_PK |     1 |       |     0   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   8 - filter("T4"."N1"="T1"."N1")
  11 - access("T2"."N1"="T4"."N1")
              "T5" "T5" WHERE "T5"."OBJECT_ID"=:B1))
  12 - access("T5"."OBJECT_ID"=:B1)
  14 - access("T3"."N1"="T2"."N2")

Excellent – at operation 6 we see VIEW PUSHED PREDICATE, and at operation 11 we can see the join predicate "t2.n1 = t4.n1".

Less excellent – we have a Cartesian Merge Join between t1 and t4 before pushing predicates. Of course, we told the optimizer to push join predicates into the view, and there are two join predicates, one from t1 and one from t4 – and we didn’t tell the optimizer that we only wanted to push the t1 join predicate into the view. Clearly we need a way of specifying where predicates should be pushed FROM as well as a way of specifying where they should be pushed TO.

If we take a look at the outline information from the execution plan there's a clue in one of the outline hints: PUSH_PRED(@"MAIN" "V1"@"MAIN" 3 2) – the hint has a couple of extra parameters to it – perhaps the 2 and 3 refer in some way to the 2nd and 3rd tables in the query. If I test with an outer join to t4 (which means the optimizer won't be able to use my t4 predicate as a join INTO the view) I get the plan I want (except it's an outer join, of course), and the hint changes to: PUSH_PRED(@"MAIN" "V1"@"MAIN" 2) – so maybe the 2 refers to t1 and the 3 referred to t4, so let's try the following hints:

push_pred(v1@main 2)
no_push_pred(v1@main 3)

Unfortunately this gives us the following plan:

| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | SELECT STATEMENT             |       |    50 | 12300 |    62   (4)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER          |       |    50 | 12300 |    62   (4)| 00:00:01 |
|*  2 |   HASH JOIN                  |       |    50 |  6900 |    12  (17)| 00:00:01 |
|   3 |    TABLE ACCESS FULL         | T1    |  1000 |   105K|     3   (0)| 00:00:01 |
|   4 |    VIEW                      |       |    50 |  1500 |     9  (23)| 00:00:01 |
|   5 |     HASH GROUP BY            |       |    50 |   800 |     9  (23)| 00:00:01 |
|*  6 |      HASH JOIN               |       |    50 |   800 |     7  (15)| 00:00:01 |
|*  7 |       TABLE ACCESS FULL      | T2    |    50 |   400 |     3   (0)| 00:00:01 |
|*  8 |        INDEX RANGE SCAN      | T5_I1 |     1 |     4 |     1   (0)| 00:00:01 |
|   9 |       TABLE ACCESS FULL      | T3    |  1000 |  8000 |     3   (0)| 00:00:01 |
|  10 |   TABLE ACCESS BY INDEX ROWID| T4    |     1 |   108 |     1   (0)| 00:00:01 |
|* 11 |    INDEX UNIQUE SCAN         | T4_PK |     1 |       |     0   (0)| 00:00:01 |

Predicate Information (identified by operation id):
   2 - access("V1"."N1"="T1"."N1")
   6 - access("T3"."N1"="T2"."N2")
              FROM "T5" "T5" WHERE "T5"."OBJECT_ID"=:B1))
   8 - access("T5"."OBJECT_ID"=:B1)
  11 - access("T4"."N1"(+)="V1"."N1")

We don’t have join predicate pushdown; on the other hand we’ve got the join order we specified with our leading() hint – and that didn’t appear previously when we got the Cartesian Merge Join with predicate pushdown (our hints were incompatible, so something had to fail). So maybe the numbering has changed because the join order has changed and I should push_pred(v1 1) and no_push_pred(v1 3). Alas, trying all combinations of 2 values from 1,2, and 3 I can’t get the plan I want.

So let’s upgrade to As hinted we get the pushed predicate with Cartesian merge join, but this time the push_pred() hint that appears in the outline looks like this: PUSH_PRED(@”MAIN” “V1″@”MAIN” 2 1) – note how the numbers have changed between and So let’s see what happens when I try two separate hints again, fiddling with the third parameter, e.g.:

push_pred(v1@main 1)
no_push_pred(v1@main 2)

With the values set as above I got the plan I want – it's just a pity that I'm not 100% certain how the numbering in the push_pred() and no_push_pred() hints is supposed to work. In this case, though, it no longer matters as all I have to do now is create an SQL Baseline for my query, transferring the hinted plan into the SMB with the unhinted SQL.
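
For anyone wanting to try the same transfer, it can be sketched along these lines (a sketch only: the sql_id, plan_hash_value and sql_handle values are placeholders you would first look up in v$sql and dba_sql_plan_baselines):

```sql
declare
	n	pls_integer;
begin
	-- 1) Capture a baseline from the unhinted statement's cursor.
	n := dbms_spm.load_plans_from_cursor_cache(
		sql_id          => '&unhinted_sql_id'
	);
	-- 2) Attach the hinted plan to the unhinted statement's handle.
	n := dbms_spm.load_plans_from_cursor_cache(
		sql_id          => '&hinted_sql_id',
		plan_hash_value => &hinted_plan_hash_value,
		sql_handle      => '&unhinted_sql_handle'
	);
end;
/
```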

In passing, I did manage to get the plan I wanted by adding the hint /*+ outline_leaf(@main) */ to the original SQL. I'm even less keen on doing that than I am on adding undocumented parameters to the push_pred() and no_push_pred() hints, of course; but having done it I did wonder if there are any SQL Plan Baselines in production systems that include the push_pred() hint that are going to change plan on upgrade, because the numbering inside the hint is supposed to change with version.


Loosely speaking, this blog note is the answer to a question posted about five years ago.

The Hidden Benefit of PeopleSoft Selective Adoption

Duncan Davies - Fri, 2014-12-12 07:00

There has been a lot of talk over the last couple of weeks about PeopleSoft Selective Adoption, the recently-coined term for the PeopleSoft Update Manager delivery model. Much of this has been on the direct benefits to the customer, which is how it should be. Greg Parikh has linked to some of the posts on LinkedIn.

While discussing this with a colleague at the recent Apps14 conference we noticed that there is another implication that I’ve not seen anyone else call out yet. Although at first glance it seems an immediate advantage to Oracle it’s not difficult to see how the customer is also going to reap significant rewards.

Getting everyone onto 9.2, and then delivering innovation on that version means that PeopleSoft development can operate on a single codeline. Currently, a legislative update will have to be coded and applied for all supported releases (and each version might require the update to be different, depending upon the underlying pages), meaning a lot of extra complication and repeat work. A Global Payroll update might need to be created for v9.2, v9.1 and v9.0, for instance, which is a significant burden.

Once updates are only being created on the v9.2 codeline then they only have to be done once, saving development staff time (and support staff a lot of troubleshooting time also) and thereby freeing them up to concentrate much more time on delivering extra value to the customers in the way of faster updates and more innovative new functionality. This can only be a big plus in the long-run.

What can the Oracle Audit Vault Protect?

For Oracle database customers the Oracle Audit Vault can protect the following:

  • SQL statements logs – Data Manipulation Language (DML) statement auditing such as when users are attempting to query the database or modify data, using SELECT, INSERT, UPDATE, or DELETE.
  • Database Schema Objects changes – Data Definition Language (DDL) statement auditing such as when users create or modify database structures such as tables or views.
  • Database Privileges and Changes – Auditing can be defined for the granting of system privileges, such as SELECT ANY TABLE.  With this kind of auditing, Oracle Audit Vault records SQL statements that require the audited privilege to succeed.
  • Fine-grained audit logs – Fine Grained Auditing activities stored in SYS.FGA_LOG$ such as whether an IP address from outside the corporate network is being used or if specific table columns are being modified.  For example, when the HR.SALARY table is SELECTED using direct database connection (not from the application), a condition could be to log the details of result sets where the PROPOSED_SALARY column is greater than $500,000 USD.
  • Redo log data – Database redo log file data.  The redo log files store all changes that occur in the database.  Every instance of an Oracle database has an associated redo log to protect the database in case of an instance failure.  In Oracle Audit Vault, the capture rule specifies DML and DDL changes that should be checked when Oracle Database scans the database redo log.
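
To make the fine-grained auditing bullet concrete, a policy along these lines (the policy name is invented for illustration; the table and condition come from the example above) could be created with the standard dbms_fga.add_policy procedure, so that qualifying SELECTs are recorded in SYS.FGA_LOG$:

```sql
begin
  dbms_fga.add_policy(
    object_schema   => 'HR',
    object_name     => 'SALARY',
    policy_name     => 'HIGH_SALARY_SELECTS',   -- hypothetical policy name
    audit_condition => 'PROPOSED_SALARY > 500000',
    audit_column    => 'PROPOSED_SALARY',
    statement_types => 'SELECT'
  );
end;
/
```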

The Audit Vault also supports –

  • Database Vault – Database Vault settings stored in DVSYS.AUDIT_TRAIL$ such as Realm audit, factor audit and Rule Audit. 
  • System and SYS – Core changes to the database by privileged users such as DBAs as recorded by AUDIT_SYS_OPERATIONS.
  • Stored Procedure Auditing – Monitor any changes made to PL/SQL and stored procedures.  Standard reports are provided for stored procedure operations, deleted and created procedures, as well as modification history.

If you have questions, please contact us at

Reference Tags: Auditing, Oracle Audit Vault, Oracle Database
Categories: APPS Blogs, Security Blogs

Notes and links, December 12, 2014

DBMS2 - Fri, 2014-12-12 05:05

1. A couple years ago I wrote skeptically about integrating predictive modeling and business intelligence. I’m less skeptical now.

For starters:

  • The predictive experimentation I wrote about over Thanksgiving calls naturally for some BI/dashboarding to monitor how it’s going.
  • If you think about Nutonian’s pitch, it can be approximated as “Root-cause analysis so easy a business analyst can do it.” That could be interesting to jump to after BI has turned up anomalies. And it should be pretty easy to whip up a UI for choosing a data set and objective function to model on, since those are both things that the BI tool would know how to get to anyway.

I’ve also heard a couple of ideas about how predictive modeling can support BI. One is via my client Omer Trajman, whose startup ScalingData is still semi-stealthy, but says they’re “working at the intersection of big data and IT operations”. The idea goes something like this:

  • Suppose we have lots of logs about lots of things.* Machine learning can help:
    • Notice what’s an anomaly.
    • Group* together things that seem to be experiencing similar anomalies.
  • That can inform a BI-plus interface for a human to figure out what is happening.

Makes sense to me.

* The word “cluster” could have been used here in a couple of different ways, so I decided to avoid it altogether.
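The two-step idea above (flag anomalies, then group things that go anomalous together) can be sketched in a few lines. This is my own toy illustration, not anything ScalingData has described:

```python
from collections import defaultdict

def zscore_anomalies(series, threshold=2.5):
    """Flag indices whose value is more than `threshold` standard
    deviations from the mean of the series."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    std = var ** 0.5
    if std == 0:
        return set()  # a flat series has no anomalies
    return {i for i, x in enumerate(series) if abs(x - mean) / std > threshold}

def group_by_anomaly_pattern(metrics_by_host):
    """Group hosts whose metrics go anomalous at the same time steps."""
    groups = defaultdict(list)
    for host, series in metrics_by_host.items():
        pattern = frozenset(zscore_anomalies(series))
        groups[pattern].append(host)
    return groups

# Hosts "web1" and "web2" spike together at step 5; "db1" stays flat,
# so the grouping step puts the two web hosts in the same bucket.
metrics = {
    "web1": [10, 11, 10, 9, 10, 95, 10, 11, 10, 10],
    "web2": [20, 19, 21, 20, 20, 180, 20, 19, 21, 20],
    "db1":  [5, 5, 5, 5, 5, 5, 5, 5, 5, 5],
}
groups = group_by_anomaly_pattern(metrics)
```

A real system would of course use richer features and a better detector than a z-score, but the shape of the pipeline (detect, then cluster by shared pattern, then hand to a human via BI) is the same.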

Finally, I’m hearing a variety of “smart ETL/data preparation” and “we recommend what columns you should join” stories. I don’t know how much machine learning there’s been in those to date, but it’s usually at least on the roadmap to make the systems (yet) smarter in the future. The end benefit is usually to facilitate BI.
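One naive way such a join-column recommender could work (purely my own sketch, not a description of any vendor's product) is to rank pairs of columns by how much their value sets overlap, on the theory that columns sharing most of their values are plausible join keys:

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of column values."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_join_columns(left, right, min_overlap=0.5):
    """Rank (left_col, right_col) pairs by value-set overlap.
    `left` and `right` map column names to lists of values."""
    candidates = []
    for lcol, lvals in left.items():
        for rcol, rvals in right.items():
            score = jaccard(lvals, rvals)
            if score >= min_overlap:
                candidates.append((score, lcol, rcol))
    return sorted(candidates, reverse=True)

# Two hypothetical tables: the customer_id/id pair overlaps heavily,
# so it surfaces as the suggested join.
orders = {"customer_id": [1, 2, 3, 4], "amount": [9.99, 5.25, 12.50, 3.75]}
customers = {"id": [1, 2, 3, 4, 5], "region": ["EU", "US", "US", "EU", "APAC"]}
suggestions = suggest_join_columns(orders, customers)
```

Shipping products presumably layer actual machine learning on top (learned weights over name similarity, type compatibility, value distributions), which is where the "yet smarter in the future" roadmap talk comes in.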

2. Discussion of graph DBMS can get confusing. For example:

  • Use cases run the gamut from short-request to highly analytic; no graph DBMS is well-suited for all graph use cases.
  • Graph DBMS have huge problems scaling, because graphs are very hard to partition usefully; hence some of the more analytic use cases may not benefit from a graph DBMS at all.
  • The term “graph” has meanings in computer science that have little to do with the problems graph DBMS try to solve, notably directed acyclic graphs for program execution, which famously are at the heart of both Spark and Tez.
  • My clients at Neo Technology/Neo4j call one of their major use cases MDM (Master Data Management), without getting much acknowledgement of that from the mainstream MDM community.

I mention this in part because that “MDM” use case actually has some merit. The idea is that hierarchies such as organization charts, product hierarchies and so on often aren’t actually strict hierarchies. And even when they are, they’re usually strict only at specific points in time; if you care about their past state as well as their present one, a hierarchical model might have trouble describing them. Thus, LDAP (Lightweight Directory Access Protocol) engines may not be an ideal way to manage and reference such “hierarchies”; a graph DBMS might do better.
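To make the "not actually a strict hierarchy" point concrete, here is a small sketch (the names, fields and dates are all invented) of an org chart stored as graph edges. A person can have more than one parent, and each edge carries a validity date, which are exactly the cases a strict tree struggles with:

```python
# Edges are (child, parent, relationship, valid_from) tuples. "alice"
# has two parents (a solid-line and a dotted-line manager), which a
# strict tree cannot represent without losing information.
edges = [
    ("alice", "carol", "reports_to", "2013-01-01"),
    ("alice", "dave",  "dotted_line_to", "2014-06-01"),  # second parent
    ("bob",   "carol", "reports_to", "2012-03-01"),
    ("carol", "erin",  "reports_to", "2010-01-01"),
]

def parents_of(node, as_of=None):
    """All parents of `node`, optionally only the edges already valid
    on the ISO date string `as_of` (the temporal query a snapshot-only
    hierarchy cannot answer)."""
    return [
        (parent, rel)
        for child, parent, rel, since in edges
        if child == node and (as_of is None or since <= as_of)
    ]
```

A graph DBMS generalizes this pattern: edges are first-class, typed, and can carry properties, so both the multiple-parent case and the point-in-time case fall out naturally.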

3. There is a surprising degree of controversy among predictive modelers as to whether more data yields better results. Besides, the most common predictive modeling stacks have difficulty scaling. And so it is common to model against samples of a data set rather than the whole thing.*

*Strictly speaking, almost the whole thing — you’ll often want to hold at least a sample of the data back for model testing.
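The sample-then-hold-out workflow just described can be sketched in a few lines. This is a generic illustration of the practice, not any particular vendor's pipeline:

```python
import random

def sample_and_holdout(rows, sample_frac=0.1, test_frac=0.2, seed=42):
    """First hold back a test set for model evaluation, then draw a
    training sample from the remainder, since common modeling stacks
    struggle to train on the full data set."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    test, pool = shuffled[:n_test], shuffled[n_test:]
    n_train = int(len(pool) * sample_frac)
    train = pool[:n_train]
    return train, test

# 10,000 rows: 2,000 held back for testing, 800 sampled for training.
rows = list(range(10_000))
train, test = sample_and_holdout(rows)
```

The WibiData claim is then easy to state in these terms: training on the whole `pool` rather than the 10% `train` sample gave their customers measurably better models.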

Well, WibiData’s couple of Very Famous Department Store customers have tested WibiData’s ability to model against an entire database vs. their alternative predictive modeling stacks’ need to sample data. WibiData says that both report significantly better results from training over the whole data set than from using just samples.

4. ScalingData is on the bandwagon for Spark Streaming and Kafka.

5. Derrick Harris and Pivotal turn out to have been earlier than me in posting about Tachyon bullishness.

6. With the Hortonworks deal now officially priced, Derrick was also free to post more about/from Hortonworks’ pitch. Of course, Hortonworks is saying Hadoop will be Big Big Big, and suggesting we should thus not be dismayed by Hortonworks’ financial performance so far. However, Derrick did not cite Hortonworks actually giving any reasons why its competitive position among Hadoop distribution vendors should improve.

Beyond that, Hortonworks says YARN is a big deal, but doesn’t seem to like Spark Streaming.

Categories: Other

How Do I Know I Have The Latest SLOB Kit?

Kevin Closson - Thu, 2014-12-11 17:25

This is a quick blog post to show SLOB users how to determine whether they are using the latest SLOB kit. If you visit you’ll see the webpage I captured in the following screenshot.

Once on the SLOB Resources page you can simply hover over the “SLOB 2.2 (Click here)” hyperlink and the bottom of your browser will show the full name of the tar archive. Alternatively you can use md5sum(1) on Linux (or md5 on Mac) to get the checksum of the tar archive you have and compare it to the md5sum I put on the web page (see the arrow).
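For the checksum route, a small Python equivalent of md5sum(1) looks like this. The filename and expected checksum below are placeholders, not the real values from the SLOB Resources page:

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute the MD5 checksum of a file, reading in chunks so a
    large tar archive does not need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the checksum published on the SLOB Resources page
# (both the filename and the expected value here are placeholders).
published = "d41d8cd98f00b204e9800998ecf8427e"
# print(md5_of("slob-kit.tar.gz") == published)
```

If the computed digest matches the one published on the page, you have the latest kit; any mismatch means a stale or corrupted download.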



Filed under: oracle

Oracle Priority Support Infogram for 11-DEC-2014

Oracle Infogram - Thu, 2014-12-11 14:25

UTL_CALL_STACK : Get Detailed Information About the Currently Running Subprogram in Oracle Database 12c Release 1 (12.1), from ORACLE-BASE.
Oracle on Linux
Physical IO on Linux, from the Frits Hoogland Weblog.
SQL Developer
From that JEFF SMITH: Oracle SQL Developer version 4.1 Early Adopter Now Available.
Cloud Computing
Embedded Oracle Documents Cloud Service Web User Interfaces for folder contents and search results, from the Oracle A-Team Chronicles blog.
From the same source: Application Composer Application Containers.
From the dbi services Blog: Oracle AVDF - Database Firewall Policies
Ops Center
Two items from the Ops Center blog:
Ops Center 12.2.2 now available
Command Line Interface
Oracle presents: Avatar 2.0 – where to next?, from jaxenter.
SEO for ATG, Endeca and Oracle Ecommerce, from builtvisible.
Covering Your Loose Ends
The seven deadly sins that lead to an Oracle audit, from SearchOracle.
From AMIS Technology Blog: Bulk authorizing Oracle Unified Directory (OUD) users by adding them to OUD groups from the Linux/Unix Command Line.
Using OAM Pre Authentication Advanced Rules in OIF IdP, from Damien Carru's Blog: It's a Federated World.
How To Be A Better Public Speaker Based On Your Personality Type, from Business Insider.
From Oracle E-Business Suite Technology:
Top 9 Frequently-Asked Questions About Online Patching
Top Three EBS 12.2 Online Patching Resources
Database 12c Certified With 12.1 On Oracle Solaris on x86-64
Last Stop For Oracle Applications Tablespace Migration
Oracle Functional Testing Suite Advanced Pack for EBS 12.2.4 Now Available
From Oracle E-Business Suite Support Blog:
New White Paper on How To Use Meters in EAM
Are you taking advantage of the E-Business Suite Analyzer Scripts?
Webcast: eAM Mobile App Overview and Product Tour
Webcast: Order Management Corrupt Data and Data Fixes
EU You Need Movement Statistics!
Webcast: AutoLockbox Validation: Case Studies For Customer Identification & Receipt Application
…and Finally

This could be huge, if the hypothetical becomes the observed: The Fastest Stars in the Universe May Approach Light Speed, from Wired.

Impressions from #ukoug_tech14

The Oracle Instructor - Thu, 2014-12-11 10:15

ACC Liverpool

The Oracle circus went to Liverpool this year for the annual conference of the UK Oracle User Group and it was a fantastic event there! Top speakers and a very knowledgeable audience too; I was really impressed by the quality we experienced. Together with my friends and colleagues Iloon and Joel, I was waving the flag for Oracle University again – and it was really fun to do so :-)

The 3 of us


One little obstacle was that I actually did many presentations and roundtables. So less time for me to listen to the high quality talks of the other speakers…

Joel and I hosted three roundtables:

About Exadata, where we had amongst others Dan Norris (Member of the Platform Integration MAA Team, Oracle) and Jason Arneil (Solutions Architect, e-DBA) contributing

Exadata Roundtable, Jason and Dan on my left side, Joel and Iloon on my right

About Grid Infrastructure & RAC, where Ian Cookson (Product Manager Clusterware, Oracle) took many questions from the audience. We could have had Markus Michalewicz too, if only I had told him the day before during the party night – I’m still embarrassed about that.

About Data Guard, where Larry Carpenter (Master Product Manager Data Guard and Maximum Availability Architecture, Oracle) took all the questions as usual. AND he hit me for the article about the Active Data Guard underscore parameter, so I think I will remove it…

Iloon delivered her presentation about Apex for DBA Audience, which was very much appreciated and attracted a big crowd again, same as in Nürnberg before.

Joel had two talks on Sunday already: Managing Sequences in a RAC Environment (This is actually a more complex topic than you may think!) and Oracle Automatic Parallel Execution (Obviously complex stuff)

I did two presentations as well: The Data Guard Broker – Why it is recommended and Data Guard 12c New Features in Action

Both times, the UKOUG was so kind to give me very large rooms, and I can say that they haven’t looked empty although I faced tough competition by other interesting talks. This is from the first presentation:

Uwe Hesse

A big THANK YOU goes out to all the friendly people of UKOUG who made this possible and maintained the great event Tech14 was! And also to the bright bunch of Oracle colleagues and Oracle techies (speakers and attendees included) that gave me good company there: You guys are the best! Looking forward to see you at the next conference :-)

Tagged: #ukoug_tech14
Categories: DBA Blogs

Oracle Day Roundtable Recap

WebCenter Team - Thu, 2014-12-11 10:09

Primitive Logic was one of the keynote speakers at the December 4th Oracle Day WebCenter Executive Roundtable. Andy Lin, Primitive Logic's VP of Interactive, presented on Digital Engagement and the importance of connecting experiences to outcomes.

Customers demand a quality digital experience. 

  • Mobile matters. 70% of all mobile searches result in action within one hour; 70% of online searches result in action within one month.
  • Getting personal has its rewards. Businesses that personalize their web experience are seeing an average of 19% increase in sales.
  • Don't forget to be social. 40% of consumers factor in social media recommendations when making purchasing decisions.
  • Say the same thing across all channels. 35% of consumers shop multiple channels and expect a consistent sales and marketing experience. 

A good experience leads to engaged customers. (This is not a new concept.) Companies such as Zappos and Nordstrom have always understood that experience matters and that a happy customer is an engaged customer. As brick-and-mortar storefronts migrated to the World Wide Web, digital became the frontier for the ultimate customer experience. The rise of social media, personalization, video and user-generated content has expanded the social aspect of digital. The proliferation of devices has delivered a content-on-demand atmosphere and an always-connected society.

What's Next? 

TIME is the next dimension to deepen Digital Engagement.
  • Capitalizing on those moments in time of “Discovery and Inspiration” creates an experience that deepens Engagement
  • Reducing friction to shorten the cycle time of the buy/transaction
  • Enterprises that have the best time to market with new engagement channels will lead and win

Interested in a Complete Solution Approach? Click here to learn more.

The Battle for Open and MOOC Completion Rates

Michael Feldstein - Wed, 2014-12-10 23:21

Yesterday I wrote a post on the 20 Million Minds blog about Martin Weller’s new book The Battle for Open: How openness won and why it doesn’t feel like victory. Exploring different aspects of open in higher education – open access, MOOCs, open education resources and open scholarship – Weller shows how far the concept of openness has come, to the point where “openness is now such a part of everyday life that it seems unworthy of comment”. If you’re interested in OER, open courses, open journals, or open research in higher education – get the book (it’s free and available in a variety of formats).

Building on the 20MM post about the ability to reuse or repurpose the book itself, I would like to expand on a story from early 2013 where I happen to play a role. I’ll mix in Weller’s description (MW) from the book with Katy Jordan’s data (KJ) and my own description (PH) from this blog post.

(MW) I will end with one small example, which pulls together many of the strands of openness. Katy Jordan is a PhD student at the OU focusing on academic networks on sites such as Academia.edu. She has studied a number of MOOCs on her own initiative to supplement the formal research training offered at the University. One of these was an infographics MOOC offered by the University of Texas. For her final visualisation project on this open course she decided to plot MOOC completion rates on an interactive graph, and blogged her results (Jordan 2013).


(MW) This was picked up by a prominent blogger, who wrote about it being the first real attempt to collect and compile completion data for MOOCs (Hill 2013), and he also tweeted it.

(PH) How many times have you heard the statement that ‘MOOCs have a completion rate of 10%’ or ‘MOOCs have a completion rate of less than 10%’? The meme seems to have developed a life of its own, but try to research the original claim and you might find a bunch of circular references or anecdotes of one or two courses. Will the 10% meme hold up once we get more data?

While researching this question for an upcoming post, I found an excellent resource put together by Katy Jordan, a graduate student at The Open University of the UK. In a blog post from Feb 13, 2013, Katy described a new effort of hers to synthesize MOOC completion rate data – from xMOOCs in particular and mostly from Coursera.

(MW) MOOC completion rates are a subject of much interest, and so Katy’s post went viral and became the de facto piece to link to on completion rates, which almost every MOOC piece references. It led to further funding through the MOOC Research Initiative and publications. All on the back of a blog post.

This small example illustrates how openness in different forms spreads out and has unexpected impact. The course needed to be open for Katy to take it; she was at liberty to share her results and did so as part of her general, open practice. The infographic and blog relies on open software and draws on openly available data that people have shared about MOOC completions, and the format of her work means others can interrogate that data and suggest new data points. The open network then spreads the message because it is open access and can be linked to and read by all.

(PH) Once I had found and shared Katy’s blog post, it seemed the natural move was to build on this data. What was interesting to me was that there seemed to be different student patterns of behavior within MOOCs, leading to this initial post and culminating (for now) in a graphical view of MOOC student patterns.


(PH) With a bit of luck or serendipity, this graphical view of patterns nicely fit together with research data from Stanford.

(MW) It’s hard to predict or trigger these events, but a closed approach anywhere along the chain would have prevented it. It is in the replication of small examples like this across higher education that the real value of openness lies.

Weller has a great point on the value of openness, and I appreciate the mention in the book.

Source: Weller, M. 2014. The Battle for Open: How openness won and why it doesn’t feel like victory. London: Ubiquity Press. DOI:

The post The Battle for Open and MOOC Completion Rates appeared first on e-Literate.