
Feed aggregator

Update to "Getting Started with Impala" in the Pipeline

Tahiti Views - Sun, 2016-04-10 01:38
Sometime soon, there will be an update to the first edition of the "Getting Started with Impala" book from O'Reilly. It'll have about 30 new pages covering new features such as analytic functions, subqueries, incremental statistics, and complex types. This is where the e-version from O'Reilly proves its worth, because existing owners will get the update for free.

John Russell

Video : SQL/XML (SQLX) : Generating XML using SQL in Oracle

Tim Hall - Sat, 2016-04-09 09:16

Another video fresh off the press.

If videos aren’t your thing, you can always read the article the video is based on.
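
For a quick taste of what the video covers, here is a minimal SQL/XML sketch of the kind of query involved (the table, column, and element names are illustrative, borrowed from the classic SCOTT schema):

```sql
-- Build one XML fragment per row using the standard SQL/XML functions
-- XMLELEMENT, XMLATTRIBUTES and XMLFOREST.
SELECT XMLELEMENT("employee",
         XMLATTRIBUTES(e.empno AS "id"),
         XMLFOREST(e.ename AS "name", e.job AS "job")
       ) AS emp_xml
FROM   emp e
WHERE  e.deptno = 10;
```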

The star of this video is Kevin Closson. Kevin’s a really nice guy and has a brain the size of a planet, but you know somewhere in the back of his mind he’s wondering what it would be like to hunt you down, kill you and mount your head above his fireplace.

Create GoldenGate 12.2 Database User

Michael Dinh - Sat, 2016-04-09 08:33

Oracle GoldenGate for Windows and UNIX 12c

First, I am disappointed that Oracle does not go above and beyond to provide SQL scripts to create GoldenGate users for the database.

There are different sets of privileges depending on the version of the database: one set for earlier releases and another for later releases.

A PDB is not being used here, and the setup is different for a PDB.

Depending on whether you want to practice the principle of least privilege, the ggadmin user can be created with privileges for both extract (capture) and replicat (apply).

Please don’t forget to change the password from the script, since it is hard coded to be the same as the username :=)

-- Oracle or Later Database Privileges
set echo on lines 200 pages 1000 trimspool on tab off
define _username='GGADMIN'
-- grant privileges for capture
create user &_username identified by &_username default tablespace ggdata;
select DEFAULT_TABLESPACE,TEMPORARY_TABLESPACE from dba_users where username='&_username';
grant create session, connect, resource, alter any table, alter system, dba, select any transaction to &_username;
-- grant privileges for replicat
grant create table, lock any table to &_username;
-- grant both capture and apply
exec dbms_goldengate_auth.grant_admin_privilege('&_username')
-- grant capture
-- exec dbms_goldengate_auth.grant_admin_privilege('&_username','capture');
-- grant apply
-- exec dbms_goldengate_auth.grant_admin_privilege('&_username','apply');
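
The script above is for a non-CDB. For a multitenant database (which, as noted above, is different), GoldenGate typically uses a common user created in the root container; the sketch below is a hedged outline, with the username and grants as assumptions rather than an Oracle-supplied script:

```sql
-- Run in CDB$ROOT; common user names need the C## prefix.
create user c##ggadmin identified by c##ggadmin container=all;
grant create session, connect, resource, alter any table to c##ggadmin container=all;
-- grant capture and apply privileges across all containers
exec dbms_goldengate_auth.grant_admin_privilege('C##GGADMIN', container=>'all')
```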


$ sysdba @cr_ggadmin_12c.sql

SQL*Plus: Release Production on Sat Apr 9 07:06:41 2016

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

ARROW:(SYS@tiger):PRIMARY> define _username='GGADMIN'
ARROW:(SYS@tiger):PRIMARY> -- grant privileges for capture
ARROW:(SYS@tiger):PRIMARY> create user &_username identified by &_username default tablespace ggdata;

User created.

ARROW:(SYS@tiger):PRIMARY> select DEFAULT_TABLESPACE,TEMPORARY_TABLESPACE from dba_users where username='&_username';

DEFAULT_TABLESPACE             TEMPORARY_TABLESPACE
------------------------------ ------------------------------
GGDATA                         TEMP

ARROW:(SYS@tiger):PRIMARY> grant create session, connect, resource, alter any table, alter system, dba, select any transaction to &_username;

Grant succeeded.

ARROW:(SYS@tiger):PRIMARY> -- grant privileges for replicat
ARROW:(SYS@tiger):PRIMARY> grant create table, lock any table to &_username;

Grant succeeded.

ARROW:(SYS@tiger):PRIMARY> -- grant both capture and apply
ARROW:(SYS@tiger):PRIMARY> exec dbms_goldengate_auth.grant_admin_privilege('&_username')

PL/SQL procedure successfully completed.

ARROW:(SYS@tiger):PRIMARY> -- grant capture
ARROW:(SYS@tiger):PRIMARY> -- exec dbms_goldengate_auth.grant_admin_privilege('&_username','capture');
ARROW:(SYS@tiger):PRIMARY> -- grant apply
ARROW:(SYS@tiger):PRIMARY> -- exec dbms_goldengate_auth.grant_admin_privilege('&_username','apply');
ARROW:(SYS@tiger):PRIMARY> exit
Disconnected from Oracle Database 11g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Financial Apps Collaborate 16 Sessions

David Haimes - Fri, 2016-04-08 22:59

We’re just two days away from the start of Collaborate, and there are so many sessions I want to get to. My focus is on Financial Applications, both Cloud and E-Business Suite. I already listed the sessions where you can find me presenting, but here are the ones I think will be interesting; I will attend as many as I can.

First, two cloud customers (Alex Lee and Westmont Hotels) talking about their experiences implementing cloud financials.

Fusion Financials Implementation Customer Success Experience

Monday April 11th 12:45 PM–1:45 PM – South Seas J

Derrick Walters, Corporate Applications Manager at Alex Lee

How Westmont Hospitality Benefited by Leveraging Cloud ERP
3:15 PM–4:15 PM Apr 11, 2016 – South Seas I
Sacha Agostini, Oracle Functional Consultant at Vigilant Technologies, LLC.

Next, some AGIS, Legal Entity, and related topics on E-Business Suite. In these areas, which have been around for some time, I generally learn something about innovative uses of the products. Our partners and customers are very smart.

Intracompany, Intercompany, AGIS – Unraveling the Mysteries!
2:15 PM–3:15 PM Apr 10, 2016 – South Seas A
Bharati Manjeshwar at Highstreet IT Solutions, LLC

Master Data Series – Legal Entities From 11i thru Cloud
9:15 AM–10:15 AM Apr 12, 2016 – Breakers F
Thomas Simkiss, Vice-President of Consulting at Denovo Ventures, LLC

It's Not too Late! How to Replace Your eBTax Solution After You Have Upgraded
10:30 AM–11:30 AM Apr 11, 2016 – South Seas I
Andrew Bohnet, Director at eBiz Answers Ltd

General Ledger and Financial Accounting Hub – A look at a Multi-National Structure
3:30 PM–4:30 PM Apr 10, 2016 – Jasmine H
Bharati Manjeshwar at Highstreet IT Solutions, LLC

Finally, there are sessions called Power Hours, which are strangely two hours long, but I really liked the experience last year, and based on the fact that they are back, I assume others did too. They are not a traditional lecture format; they are more interactive and allow people to discuss their experiences and learn from each other. If you have not tried one, I highly recommend them. Here are a couple that jumped out at me:

Power Hour – Coexistence – On Premise and Cloud Together and In Harmony
3:15 PM–5:30 PM Apr 11, 2016 – Mandalay Bay C
Mohan Iyer, Practice Director at Jade Global, Inc.

Power Hour – eBTax Hacks – Your Questions Answered
9:15 AM–11:45 AM Apr 12, 2016 – Mandalay Bay C
Andrew Bohnet, Director at eBiz Answers Ltd; Alexander Fiteni, President at Fiteni Enterprises Inc; Dev Singh, Manager at KPMG LLP Canada

Power Hour – Master Data Structures in EBS and Cloud
12:45 PM–3:00 PM Apr 11, 2016 – Mandalay Bay C
Mohan Iyer, Practice Director at Jade Global, Inc.
Categories: APPS Blogs

Cloud Integration Challenges and Opportunities

The rapid shift from on-premise applications to a hybrid mix of Software-as-a-Service (SaaS) and on-premise applications has introduced big challenges for companies attempting to simplify enterprise...

Categories: DBA Blogs

Playing around with JSON using the APEX_JSON package

Tim Hall - Fri, 2016-04-08 12:26

We publish a number of XML web services from the database using the PL/SQL web toolkit, as described here. In more recent times we’ve had a number of requirements for JSON web services, so we did what most people probably do and Googled for “json pl/sql” and got a link to PL/JSON.

I know about the support for JSON in 12c, but we are not on 12c for these projects and that’s more about consuming JSON, rather than publishing it.

People seemed reasonably happy with PL/JSON, so I thought no more about it. At the weekend, kind-of by accident, I came across the APEX_JSON package that comes as part of APEX 5 and thought, how could I have missed that?

This is not a slight against PL/JSON, but given the choice of using something built and supported by Oracle, that is already in the database (we have APEX 5 in most databases already) or loading something else, I tend to pick the Oracle method. Since then I’ve been having a play with APEX_JSON and I quite like it. Here’s what I wrote while I was playing with it.

If you have done anything with XML in PL/SQL, you should find it pretty simple.

I’m guessing this post will result in a few people saying, “What about ORDS?” Yes I know. Because of history we are still mostly using mod_plsql and OHS, but ORDS is on the horizon. Even so, we will probably continue to use APEX_JSON to do the donkey-work, and just use ORDS to front it.
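
To give a flavour of the package, here is a minimal sketch of generating a JSON document with APEX_JSON and serving it via the PL/SQL web toolkit (the payload is made up):

```sql
DECLARE
  l_json CLOB;
BEGIN
  APEX_JSON.initialize_clob_output;
  APEX_JSON.open_object;              -- {
  APEX_JSON.write('name', 'banana');  --   "name":"banana"
  APEX_JSON.write('quantity', 10);    --   "quantity":10
  APEX_JSON.close_object;             -- }
  l_json := APEX_JSON.get_clob_output;
  APEX_JSON.free_output;
  HTP.p(l_json);                      -- emit it from a mod_plsql/OHS endpoint
END;
/
```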




Installing Sample Databases To Get Started In Microsoft SQL Server

Pythian Group - Fri, 2016-04-08 08:20

Anyone interested in getting started in SQL Server will need some databases to work with. This article aims to help the new and future DBA/developer get started with a few databases.

There are several places to get sample databases, but anyone starting out in SQL Server should go to the Microsoft sample databases. The reason for this is that there are thousands of blogs/tutorials on the internet that use these databases as the basis for their examples.

The steps below detail how to get the sample databases and how to attach them to SQL Server to start working with them.

This blog assumes you have a version of SQL Server installed; if not, you can click here for a great tutorial.

These 2 Databases are a great start to learning SQL Server from both a Transactional and Data Warehousing point of view.

  • Now that we have downloaded these 2 files, we will need to attach them one at a time. First, open SQL Server Management Studio and connect to your instance.
  • Expand the object explorer tree until you can right click on the folder called databases and then left click on Attach…


  • Click the Add Button and navigate to and select the .mdf file (This is the database file you downloaded).


  • There is one step that a lot of people getting started in SQL Server often miss. As we have just attached a data file, in order for SQL Server to bring the database online it needs a log file, which we don’t have. The trick is that if we remove the log file from the Attach Database window, SQL Server will automatically create a new log file for us. To do this, simply select the log file and click Remove.


  • Finally, when your window looks like the one below, simply click OK to attach the database.


  • Repeat steps 3 to 6 for the second database file and any others you wish to attach.
  • The Databases are now online in SQL server and ready to be used.


And that’s it! You now have an OLTP database and a DW database for BI.
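
The same attach-and-rebuild-log trick can also be done in T-SQL instead of through the GUI; here is a sketch, with the database name and file path as placeholders for wherever you saved the downloaded .mdf:

```sql
-- Attaches the downloaded .mdf and rebuilds the missing log file,
-- mirroring the remove-the-log-file step in the Attach Database window.
CREATE DATABASE AdventureWorks
ON (FILENAME = 'C:\Data\AdventureWorks_Data.mdf')
FOR ATTACH_REBUILD_LOG;
```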

Below are links to some good starting tutorials and some additional databases.



Stack Overflow




Categories: DBA Blogs

Trello is my new knowledge base

Tony Andrews - Fri, 2016-04-08 07:53
How often do you hit an issue in development and think, "I know I've had this problem before, but what's the solution?" Most days, if you've been around a long time like me. It could be "how do you create a transparent icon?", or "what causes this Javascript error in an APEX page?". So you can spend a while Googling and sifting through potential solutions that you vaguely remember having seen.
Tony Andrews

security diversion before going up the stack

Pat Shuff - Fri, 2016-04-08 01:07
This entry is going to be more of a Linux tutorial than a cloud discussion, but it is relevant. One of the issues admins face is the creation and deletion of accounts. With cloud access being something relatively new, the last thing you want is to generate a password with telnet access to a server in the cloud. Telnet is inherently insecure, and any script kiddie with a desire to break into accounts can run ettercap and look for clear-text passwords flying across an open wifi or wired internet connection. What you really want to do is log in via secure ssh, or PuTTY if you are on Windows. This is done with a public/private key exchange.
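
Generating the key pair itself is a one-liner with ssh-keygen; a minimal sketch follows (the file name and comment are illustrative, and in practice you would set a passphrase instead of leaving it empty):

```shell
# Create a scratch directory and generate a 4096-bit RSA key pair with no passphrase.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -b 4096 -N '' -C 'backupadmin@example.com' -f "$tmpdir/backupadmin_key" -q
# The private key stays with the user; the .pub file is what goes into authorized_keys.
cat "$tmpdir/backupadmin_key.pub"
```

On Windows, PuTTYgen produces the equivalent .ppk private key and a public key you can paste into authorized_keys.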

There are many good explanations of ssh key exchange, generating ssh keys, and using ssh keys; my favorite is this writeup. The net-net of it is that you generate a public and private key using ssh-keygen (or PuTTYgen) and upload the public key to ~user/.ssh/authorized_keys for that user. The following scripts should work on an Azure, Amazon, or Oracle Linux instance created in the compute shapes. The idea is that we initially created a virtual machine with the cloud vendor, and the account that we created with the VM is not our end user but our cloud administrator. The next level of security is to create a new user and give them permission to execute what they need to execute on this machine. For example, in the Oracle Database as a Service images there are two users created by default: oracle and opc. The oracle user has the rights to execute everything related to sqlplus, access the file systems where the database and backups are located, and everything related to the oracle account. The opc user has sudo rights so that it can execute root scripts, add software packages, apply patches, and do other administrative things. The two users have different access rights and administration privileges. In this blog we are going to look at creating a third user so that someone like a backup administrator can log in and copy backups to tape or to a disk at another data center. To do this you need to execute the following instructions.

sudo useradd backupadmin -g dba
sudo mkdir ~backupadmin/.ssh
sudo cp ~oracle/.ssh/authorized_keys ~backupadmin/.ssh
sudo chown -R backupadmin:dba ~backupadmin
sudo chmod 700 ~backupadmin/.ssh

Let's walk through what we did. First we create a new user called backupadmin. We add this user to the dba group so that they can perform dba functions that are given to the dba group. If the oracle user is part of a different group then they need to be added to that group instead. Next we create a hidden directory in the backupadmin home directory called .ssh. The dot in front of the name denotes that we don't want it listed by the typical ls command. The sshd program will by default look in this directory for authorized keys and known hosts. Next we copy a known authorized_keys file into the new backupadmin .ssh directory so that we can present a private key to the operating system to log in as backupadmin. The last two commands set the ownership and permissions on the new .ssh directory and all files under it so that backupadmin can read and write this directory and no one else can. The chown sets ownership to backupadmin, and the -R applies it from that directory down; while we are doing this we also set the group ownership on all files to the group dba. The final command sets permissions on the .ssh directory to read, write, and execute for the owner of the directory only. The zeros remove all permissions for the group and the world.

In our example we are going to show how to access a Linux server from Azure and modify the permissions. First we go to the site and login. We then look at the virtual machines that we have created and access the Linux VM that we want to change permissions for. When we created the initial virtual machine we selected ssh access and uploaded a public key. In this example we created the account pshuff as the initial login. This account is created automatically for us and is given sudo rights. This would be our cloud admin account. We present the same ssh keys for all virtual machines that we create and can copy these keys or upload other keys for other users. Best practice would be to upload new keys and not replicate the cloud admin keys to new users as we showed above.

From the portal we get the IP address of the Linux server. We open up PuTTY from Windows and load the 2016.ppk key that corresponds to the key that we initialized the pshuff account with. When asked for a user to authenticate as, we log in as pshuff. If this were an Oracle Compute Service instance we would log in as opc, since this is the default account created and we want sudo access. To log in as backupadmin we open PuTTY and load the ppk associated with that account.

When asked what account to log in as, we type backupadmin and can connect to the Linux system using the public/private key pair that we initialized.

If we examine the public key, it is a series of randomly generated text values; this is what the .pub file looks like if we open it in WordPad on Windows, and it is the file that we uploaded when we created the virtual machine. To revoke a user's access to the system, we change the authorized_keys file to a different key.

To deny access to backupadmin (in the case of someone leaving the organization or moving to another group), all we have to do is edit the authorized_keys file as root and delete this public key. We can insert a different key with a copy-and-paste operation, allowing us to rotate keys. Commercial key vaults and key management systems allow you to do this from a central control point and update/rotate keys on a regular basis.

In summary, best practice is to upload a key per user and rotate keys on a regular basis. Accounts should be created with ssh keys, not password access. Rather than copying the keys from an existing account, upload new keys and edit authorized_keys. Access can be revoked by the root user by removing the keys, or from an automated key management system.
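
The revocation edit can be scripted rather than hand-edited; here is a sketch using a scratch file with made-up entries (in real life you would operate on ~backupadmin/.ssh/authorized_keys and match the departing user's key by its comment field):

```shell
# Build a scratch authorized_keys with two illustrative entries.
auth=$(mktemp)
printf '%s\n' \
  'ssh-rsa AAAAB3NzaFAKEKEY1 backupadmin@example.com' \
  'ssh-rsa AAAAB3NzaFAKEKEY2 clouddba@example.com' > "$auth"
# Revoke one user: keep every line that does not match that key's comment.
grep -v 'backupadmin@example.com' "$auth" > "$auth.new" && mv "$auth.new" "$auth"
cat "$auth"
```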


Five Reasons to Attend Oracle’s Modern Service Experience 2016

Linda Fishman Hoyle - Thu, 2016-04-07 12:38

A Guest Post by Christine Skalkotos, Oracle Program Management

Oracle’s Modern Service Experience 2016 is again lighting up fabulous Las Vegas April 26-28, and we’re betting this will be our best event yet. From the speaker lineup and session catalog to the networking experiences and Customer Appreciation Event, we’re going “all in,” and we hope you’ll join us. Here are five reasons you should head to Las Vegas this April for the Modern Service Experience:

1. In-Depth Service Content

The Modern Service Experience features more than 40 sessions led by customer service experts, analysts, and top brands. Through the keynotes, general sessions and breakouts, you’ll hear about current and future trends in customer service and will walk away inspired and ready to turn your insights into actions. Take a look at the just-launched conference program to see the impressive speaker lineup.

The conference program features content for everyone regardless of your role. Attend sessions in the following tracks:

  • Cross-Channel Contact Center
  • Executive
  • Field Service Management
  • Industry
  • Knowledge
  • Oracle Policy Management
  • Platform
  • Web Customer Service
  • Customer Experience

In addition, you’ll hear about Oracle Service Cloud’s vision and product roadmap. Within the breakouts, you’ll learn about new product functionality and how to get the most out of your implementation. In the expo hall, you’ll have the opportunity to participate in interactive demos.

2. One-of-a-Kind Networking

In addition to hearing best practices and soaking up insights from session and keynote speakers, some of the best information you’ll gather at the Modern Service Experience will come from your peers. Customer service leaders from some of the world’s top brands are attending the Modern Service Experience. The conference provides many opportunities to network with peers, as well as with Oracle product experts, sales, executives, and partners.

Before you head to Las Vegas, see who else is attending and start broadening your network through social media. Use the hashtag #ServiceX16, and join the conversation.

3. Thought Leaders & Inspiring Speakers

Attend the Modern Service Experience to hear from some of the leading minds in modern customer service. The featured speaker lineup includes:

  • Mark Hurd, CEO, Oracle
  • Jean-Claude Porretti, Customer Care Worldwide Manager, Peugeot Citroën
  • Scott McBain, Manager, Application Development, Overhead Door Corporation
  • Sara Knetzger, Applications Administrator, Corporate Applications, WageWorks
  • Ian Jacobs, Senior Analyst Serving Application Development & Delivery Professionals, Forrester Research
  • Kate Leggett, VP, Principal Analyst Serving Application Development & Delivery Professionals, Forrester Research
  • Ray Wang, Principal Analyst, Founder, and Chairman, Constellation Research, Inc.
  • Denis Pombriant, founder, managing principal, Beagle Research

4. More Opportunities for Increasing Your Knowledge

First, take advantage of our pre-conference workshops. You’ll probably have to roll the dice to decide which of the three you’ll attend: Get Prepared for the Knowledge-Centered Support (KCS) Practices v5 Certification, Head off to the Races with Agent Desktop Automation, and Step off the Beaten Path with Oracle Service Cloud Reporting.

Next, schedule time with an Oracle Service Cloud mastermind and get answers to your burning questions as part of the Ask the Experts program (sponsored by Oracle Gold Partner Helix).

Last, connect with your peers during lunch and participate in our birds of a feather program around popular topics.

5. Celebrate with Your Fellow Customers

To show our appreciation for our customers, we’re hosting a night of food, drinks, and amazing entertainment. Goo Goo Dolls will play a private concert for attendees at the MGM Grand Arena on Wednesday evening. The Oracle Customer Appreciation Event rarely disappoints—don’t miss it. 

Finally, at 1 p.m. on Thursday, April 28, during our annual awards ceremony, we’ll recognize leading organizations and individuals in the customer service space, highlighting their impressive stories about innovation and differentiation. Guaranteed, you’ll leave motivated and energized.

What did last year’s customers have to say?

"Oracle Modern Service Experience 2015 was a top-notch event that provided me with the opportunity to learn about new Oracle Service Cloud capabilities and connected me with federal and private sector peers who have since influenced my direction as the Air Force Reserve's Chief Digital Officer, enabling me to drive the organization to a new level of innovation and efficiency this past year." – Lt Col Michael Ortiz, HQ Air Reserve Personnel Center

"The Modern Service Experience is a must for customers looking to maximize their effectiveness with Oracle Service Cloud." – Michael Morris,

See you in Las Vegas!

Oracle Live SQL: Explain Plan

Pythian Group - Thu, 2016-04-07 12:14

We’ve all encountered a situation where you want to check a simple query or some SQL syntax and don’t have a database around. Of course, most of us have at least a virtual machine for that, but it takes time to fire it up, and if you work on battery, it can drain your power pretty quickly. Some time ago, Oracle began to offer a new service called “Oracle Live SQL”. It provides the ability to test a SQL query, procedure, or function, and it has a code library containing a lot of examples and scripts. Additionally, you can store your own private scripts to re-execute later. It’s a really great online tool, but it lacks some features. I tried to check the execution plan for my query but, unfortunately, it didn’t work:

explain plan for 
select * from test_tab_1 where pk_id<10;

ORA-02402: PLAN_TABLE not found

So, what can we do to make it work? The workaround is not perfect, but it works and can be used in some cases. We need to create our own plan table using the script from an installed Oracle database home: $ORACLE_HOME/rdbms/admin/utlxplan.sql. We can open the file and copy the CREATE TABLE statement into the SQL worksheet in Live SQL. You can save the script in the Live SQL code library and make it private to reuse later, because you will need to recreate the table every time you log in to the environment again. So far so good. Is it enough? Let’s check.

explain plan for 
select * from test_tab_1 where pk_id<10;

Statement processed.

select * from table(dbms_xplan.display);

ERROR: an uncaught error in function display has happened; please contact Oracle support
       Please provide also a DMP file of the used plan table PLAN_TABLE
       ORA-00904: DBMS_XPLAN_TYPE_TABLE: invalid identifier

Ok, the package doesn’t work. I tried to create the types in my schema, but that didn’t work either. So for now dbms_xplan is not going to work for us, and we have to request the information directly from our plan table. It is maybe not as convenient, but it gives us enough and, don’t forget, you can save your script and just reuse it later. You don’t need to memorize the queries. Here is a simple example of how to get information about your last executed query from the plan table:

SELECT parent_id,id, operation,plan_id,operation,options,object_name,object_type,cardinality,cost from plan_table where plan_id in (select max(plan_id) from plan_table) order by 2;

 - 	0	SELECT STATEMENT	268	SELECT STATEMENT	 - 	 - 	 - 	9	49

I tried a hierarchical query but didn't find it too useful in the Live SQL environment. Also, you may want to set a unique identifier for your query to find it more easily in the plan_table.

explain plan set statement_id='123qwerty' into plan_table for
select * from test_tab_1 where pk_id<10;

SELECT parent_id,id, operation,plan_id,operation,options,object_name,object_type,cardinality,cost from plan_table where statement_id='123qwerty' order by id;


Now I have my plan_table script and query saved in Live SQL, and I reuse them when I want to check the plan for a query. I posted feedback about the ability to use dbms_xplan, and an Oracle representative replied promptly and assured me they are already working on implementing the dbms_xplan feature and many others, including the ability to run only a selected SQL statement in the SQL worksheet (like we do in SQL Developer). It sounds really good and promising, and it is going to make the service even better. Stay tuned.

Categories: DBA Blogs

Which Cassandra version should you use for production?

Pythian Group - Thu, 2016-04-07 11:47
What version for a Production Cassandra Cluster?

tl;dr; Latest Cassandra 2.1.x

Long version:

A while ago, Eventbrite wrote:
“You should not deploy a Cassandra version X.Y.Z to production where Z <= 5.” (Full post).

And, in general, it is still valid today! Why “in general”? That post is old, and Cassandra has moved a lot since then. So we can get a different set of sentences:

Just for the ones that don’t want to follow the links, and still pick 3.x for production use, read this:

“Under normal conditions, we will NOT release 3.x.y stability releases for x > 0.  That is, we will have a traditional 3.0.y stability series, but the odd-numbered bugfix-only releases will fill that role for the tick-tock series — recognizing that occasionally we will need to be flexible enough to release an emergency fix in the case of a critical bug or security vulnerability.

We do recognize that it will take some time for tick-tock releases to deliver production-level stability, which is why we will continue to deliver 2.2.y and 3.0.y bugfix releases.  (But if we do demonstrate that tick-tock can deliver the stability we want, there will be no need for a 4.0.y bugfix series, only 4.x tick-tock.)”

What about end of life?

Well, it is about stability; there are still a lot of clusters out there running 1.x and 2.0.x. And since it is open source software, you can always search in the community or even contribute.

If you still have doubts about which version, you can always contact us!

Categories: DBA Blogs

Oracle's Modern Finance Experience Blows into Chicago This Week

Linda Fishman Hoyle - Thu, 2016-04-07 11:17

An all-star cast will be speaking at the Modern Finance Experience in Chicago this week (April 6-7), including journalist and best-selling author Michael Lewis and Oracle CEOs Safra Catz and Mark Hurd. The theme of the conference is Creating Value in the Digital Age.

In this OracleVoice article leading up to the event, Oracle VP Karen dela Torre explains why 10- or 20-year-old systems are ill suited for the digital economy. She then lists 15 reasons why now is the time for finance to move to the cloud. Here are just a few:

  • New business models require new capabilities (e.g., KPIs, data models, sentiment analysis)
  • Subscription billing and revenue recognition standards require new functionality
  • Rapid growth requires systems that can quickly scale
  • Consolidation, standardization, and rationalization are easier in the cloud

Even to risk-averse finance executives, the call for change will be hard to ignore.

Oracle writer Margaret Harrist also writes about the digital age in a Forbes article that focuses on the not-so-well-known role of finance in the customer experience. Matt Stirrup, Oracle VP of Finance, states that leading finance organizations are looking at the business from the customer’s perspective and recommending changes to the business model or performance measures. Finance may just be the secret sauce to winning in the digital economy.

ADF BC View Criteria Query Execution Mode = Both

Andrejus Baranovski - Thu, 2016-04-07 09:55
View Criteria is set to execute in Database mode by default. There is an option to change the execution mode to Both, which executes the query and fetches results both from the database and from memory. Such query execution is useful when we want to include a newly created (but not yet committed) row in the View Criteria result.

Download the sample application. JobsView in the sample application is set with View Criteria query execution mode Both:

I'm using JobsView in EmployeesView through a View Accessor. If data from another VO is required, you can fetch it through a View Accessor. The View Accessor is configured with a View Criteria, which means it will be filtered automatically (we only need to set the Bind Variable value):

Employees VO contains a custom method where the View Accessor is referenced. I'm creating a new row and executing a query with a bind variable (the primary key of the newly created row). The View Criteria is set to execution mode Both, which allows it to retrieve the newly created (not yet committed) row after the search:

View Criteria execution mode Both is useful when we want to search without losing newly created rows.

Another Email Subscription Update

Michael Feldstein - Thu, 2016-04-07 09:15

By Michael Feldstein

Those of you subscribed to the site by email may have noticed that you didn’t get anything in the last 24 hours. (Or maybe you didn’t notice, since the email never came.) We are aware of the problem. The new system sends a message once a day and is next scheduled to send an email digest every morning at 4 AM Eastern Time. For whatever reason, it did not engage this morning, but we believe that it is working correctly now. You should get something in your inbox that includes this post tomorrow morning. Obviously, if you are reading this by email, then all is well.

Thanks for your patience as we work out the kinks in the new system.

The post Another Email Subscription Update appeared first on e-Literate.

Two Amazing Men Discovered Evolution by Natural Selection!

FeuerThoughts - Thu, 2016-04-07 09:09
Most everyone knows about Darwin, and what they think they know is that Charles Darwin is the discoverer of Evolution through Natural Selection. And for sure, he did discover this. But the amazing thing is....he wasn't the only one. And whereas Darwin came to this theory pretty much as a Big Data Scientist over a long period of time (mostly via "armchair" collection of data from scientists and naturalists around the world), The Other Guy developed his theory of Natural Selection very much in the field - more specifically, in the jungle, surrounded by the living evidence. 

His name is Alfred Russel Wallace, he is one of my heroes, and I offer below the "real story" for your reading pleasure. 

One of the things I really love about this story is the way Darwin and Wallace respected each other, and did right by each other. We all have a lot to learn from their integrity and compassion.

Alfred Russel Wallace and Natural Selection: the Real Story 
By Dr George Beccaloni, Director of the Wallace Correspondence Project, March 2013

Alfred Russel Wallace OM, LLD, DCL, FRS, FLS was born near Usk, Monmouthshire, England (now part of Wales) on January 8th, 1823. Serious family financial problems forced him to leave school aged only fourteen and a few months later he took a job as a trainee land surveyor with his elder brother William. This work involved extensive trekking through the English and Welsh countryside and it was then that his interest in natural history developed.

Whilst living in Neath, Wales, in 1845 Wallace read Robert Chambers' extremely popular and anonymously published book Vestiges of the Natural History of Creation and became fascinated by the controversial idea that living things had evolved from earlier forms. So interested in the subject did he become that he suggested to his close friend Henry Walter Bates that they travel to the Amazon to collect and study animals and plants, with the goal of understanding how evolutionary change takes place. They left for Brazil in April 1848, but although Wallace made many important discoveries during his four years in the Amazon Basin, he did not manage to solve the great ‘mystery of mysteries’ of how evolution works.

Wallace returned to England in October 1852, after surviving a disastrous shipwreck which destroyed all the thousands of natural history specimens he had painstakingly collected during the last two and most interesting years of his trip. Undaunted, in 1854 he set off on another expedition, this time to the Malay Archipelago (Singapore, Malaysia and Indonesia), where he would spend eight years travelling, collecting, writing, and thinking about evolution. He visited every important island in the archipelago and sent back 110,000 insects, 7,500 shells, 8,050 bird skins, and 410 mammal and reptile specimens, including probably more than five thousand species new to science.

In Sarawak, Borneo, in February 1855, Wallace produced one of the most important papers written about evolution up until that time [1]. In it he proposed a ‘law’ which stated that "Every species has come into existence coincident both in time and space with a pre-existing closely allied species". He described the affinities (relationships) between species as being “...as intricate as the twigs of a gnarled oak or the vascular system of the human body” with “...the stem and main branches being represented by extinct species...” and the “...vast mass of limbs and boughs and minute twigs and scattered leaves...” living species. The eminent geologist and creationist Charles Lyell was so struck by Wallace’s paper that in November 1855, soon after reading it, he began a ‘species notebook’ in which he started to contemplate the possibility of evolution for the first time.

In April 1856 Lyell visited Charles Darwin at Down House in Kent, and Darwin confided that for the past twenty years he had been secretly working on a theory (natural selection) which neatly explained how evolutionary change takes place. Not long afterwards, Lyell sent Darwin a letter urging him to publish before someone beat him to it (he probably had Wallace in mind), so in May 1856, Darwin, heeding this advice, began to write a ‘sketch’ of his ideas for publication. Finding this unsatisfactory, Darwin abandoned it in about October 1856 and instead began working on an extensive book on the subject.

The idea of natural selection came to Wallace during an attack of fever whilst he was on a remote Indonesian island in February 1858 (it is unclear whether this epiphany happened on Ternate or on neighbouring Gilolo (Halmahera)). As soon as he had sufficient strength, he wrote a detailed essay explaining his theory and sent it, together with a covering letter, to Darwin, who, he knew from earlier correspondence, was deeply interested in the subject of species transmutation (as evolution was then called).

Wallace asked Darwin to pass the essay on to Lyell (who Wallace did not know), if Darwin thought it sufficiently novel and interesting. Darwin had mentioned in an earlier letter to Wallace that Lyell had found his 1855 paper noteworthy and Wallace must have thought that Lyell would be interested to learn about his new theory, since it neatly explained the ‘law’ which Wallace had proposed in that paper.

Darwin, having formulated natural selection years earlier, was horrified when he received Wallace’s essay and immediately wrote an anguished letter to Lyell asking for advice on what he should do. "I never saw a more striking coincidence. If Wallace had my M.S. sketch written out in 1842 he could not have made a better short abstract! ... So all my originality, whatever it may amount to, will be smashed," he exclaimed [2]. Lyell teamed up with another of Darwin's close friends, Joseph Hooker, and rather than attempting to seek Wallace's permission, they decided instead to present his essay plus two excerpts from Darwin’s writings on the subject (which had never been intended for publication [3]) to a meeting of the Linnean Society of London on July 1st 1858. The public presentation of Wallace's essay took place a mere 14 days after its arrival in England.

Darwin and Wallace's musings on natural selection were published in the Society’s journal in August that year under the title “On the Tendency of Species to Form Varieties; And On the Perpetuation of Varieties and Species by Natural Means of Selection”. Darwin's contributions were placed before Wallace's essay, thus emphasising his priority to the idea [4]. Hooker had sent Darwin the proofs to correct and had told him to make any alterations he wanted [5], and although Darwin made a large number of changes to the text he had written, he chose not to alter Lyell and Hooker’s arrangement of his and Wallace’s contributions.

Lyell and Hooker stated in their introduction to the Darwin-Wallace paper that “...both authors...[have]...unreservedly placed their papers in our hands...”, but this is patently untrue since Wallace had said nothing about publication in the covering letter he had sent to Darwin [6]. Wallace later grumbled that his essay “...was printed without my knowledge, and of course without any correction of proofs...” [7]

As a result of this ethically questionable episode [8], Darwin stopped work on his big book on evolution and instead rushed to produce an ‘abstract’ of what he had written so far. This was published fifteen months later in November 1859 as On the Origin of Species: a book which Wallace later magnanimously remarked would “...live as long as the "Principia" of Newton.” [9]

In spite of the theory’s traumatic birth, Darwin and Wallace developed a genuine admiration and respect for one another. Wallace frequently stressed that Darwin had a stronger claim to the idea of natural selection, and he even named one of his most important books on the subject Darwinism! Wallace spent the rest of his long life explaining, developing and defending natural selection, as well as working on a very wide variety of other (sometimes controversial) subjects. He wrote more than 1000 articles and 22 books, including The Malay Archipelago and The Geographical Distribution of Animals. By the time of his death in 1913, he was one of the world's most famous people.

During Wallace’s lifetime the theory of natural selection was often referred to as the Darwin–Wallace theory and the highest possible honours were bestowed on him for his role as its co-discoverer. These include the Darwin–Wallace and Linnean Gold Medals of the Linnean Society of London; the Copley, Darwin and Royal Medals of the Royal Society (Britain's premier scientific body); and the Order of Merit (awarded by the ruling Monarch as the highest civilian honour of Great Britain). It was only in the 20th Century that Wallace’s star dimmed while Darwin’s burned ever more brightly.

So why then did this happen?

The reason may be as follows: in the late 19th and early 20th centuries, natural selection as an explanation for evolutionary change became unpopular, with most biologists adopting alternative theories such as neo-Lamarckism, orthogenesis, or the mutation theory. It was only with the modern evolutionary synthesis of the 1930s and ’40s that it became widely accepted that natural selection is indeed the primary driving force of evolution. By then, however, the history of its discovery had largely been forgotten and many wrongly assumed that the idea had first been published in Darwin’s On the Origin of Species. Thanks to the so-called ‘Darwin Industry’ of recent decades, Darwin’s fame has increased exponentially, eclipsing the important contributions of his contemporaries, like Wallace. A more balanced, accurate and detailed history of the discovery of what has been referred to as “...arguably the most momentous idea ever to occur to a human mind” is long overdue.


1. Wallace, A. R. 1855. On the law which has regulated the introduction of new species. Annals and Magazine of Natural History, 16 (2nd series): 184-196.

2. Letter from Darwin to Charles Lyell dated 18th [June 1858] (Darwin Correspondence Database, accessed 20/01/2013).

3. These were an extract from Darwin’s unpublished essay on evolution of 1844, plus the enclosure from a letter dated 5th September 1857, which Darwin had written to the American botanist Asa Gray.

4. Publishing another person’s work without their agreement was as unacceptable then as it is today. Publishing someone’s novel theory without their consent, prefixed by material designed to give priority of the idea to someone else is ethically highly questionable: Wallace should have been consulted first! Fortunately for Darwin and his supporters, Wallace appeared to be pleased by what has been called the ‘delicate arrangement’.

5. In a letter from Joseph Hooker to Darwin dated 13th and 15th July 1858 (Darwin Correspondence Database, accessed 20/01/2013), Hooker stated " I send the proofs from Linnæan Socy— Make any alterations you please..."

6. In a letter from Darwin to Charles Lyell dated 18th [June 1858] (Darwin Correspondence Database, accessed 20/01/2013), Darwin, who was referring to Wallace's essay, says "Please return me the M.S. [manuscript] which he does not say he wishes me to publish..." and in a letter from Darwin to Charles Lyell dated [25th June 1858] (Darwin Correspondence Database, accessed 20/01/2013), Darwin states that "Wallace says nothing about publication..."

7. Letter from Wallace to A. B. Meyer dated 22nd November 1869 cited in Meyer, A. B. 1895. How was Wallace led to the discovery of natural selection? Nature, 52(1348): 415.

8. See Rachels, J. 1986. Darwin's moral lapse. National Forum: 22-24 (pdf available at

9. Letter from Wallace to George Silk dated 1st September 1860 (WCP373 in Beccaloni, G. W. (Ed.). 2012. Wallace Letters Online [accessed 20/01/2013])


Please cite this article as: Beccaloni, G. W. 2013. Alfred Russel Wallace and Natural Selection: the Real Story. <>
This article is a slightly modified version of the introduction by George Beccaloni to the following privately published book: Preston, T. (Ed.). 2013. The Letter from Ternate. UK: TimPress. 96 pp.
Categories: Development

Gluent New World #02: SQL-on-Hadoop with Mark Rittman

Tanel Poder - Thu, 2016-04-07 09:02

Update: The video recording of this session is here:

Slides are here.

Other videos are available at Gluent video collection.

It’s time to announce the 2nd episode of the Gluent New World webinar series!

The Gluent New World webinar series is about modern data management: architectural trends in enterprise IT and technical fundamentals behind them.

GNW02: SQL-on-Hadoop: A bit of History, Current State-of-the-Art, and Looking towards the Future


  • This GNW episode is presented by none other than Mark Rittman, the co-founder & CTO of Rittman Mead and an all-around guru of enterprise BI!


  • Tue, Apr 19, 2016 12:00 PM – 1:15 PM CDT


Hadoop and NoSQL platforms initially focused on Java developers and slow but massively-scalable MapReduce jobs as an alternative to high-end but limited-scale analytics RDBMS engines. Apache Hive opened up Hadoop to non-programmers by adding a SQL query engine and relational-style metadata layered over raw HDFS storage, and since then open-source initiatives such as Hive Stinger, Cloudera Impala and Apache Drill along with proprietary solutions from closed-source vendors have extended SQL-on-Hadoop’s capabilities into areas such as low-latency ad-hoc queries, ACID-compliant transactions and schema-less data discovery – at massive scale and with compelling economics.

In this session we’ll focus on the technical foundations of SQL-on-Hadoop. We'll first review the basic platform Apache Hive provides, then look in more detail at how ad-hoc querying, ACID-compliant transactions and data discovery engines work, along with the more specialised underlying storage each now works best with. Finally, we’ll look to the future to see how SQL querying, data integration and analytics are likely to come together in the next five years to make Hadoop the default platform for running mixed old-world/new-world analytics workloads.



If you missed the last GNW01: In-Memory Processing for Databases session, here are the video recordings and slides!

See you soon!



NB! If you want to move to the "New World" - offload your data and workloads to Hadoop, without having to re-write your existing applications - check out Gluent. We are making history! ;-)


Oracle Management Cloud – IT Analytics

Marco Gralike - Thu, 2016-04-07 05:54
In this post I will give you a first glance of a demo environment of…