Feed aggregator

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - Wed, 2015-10-07 06:05

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Partner Webcast – Rapid Digital Transformation with Oracle Process Cloud

Today, IT is heavily optimized to develop and manage longer-running, durable applications with evolutionary change; current demand calls for the creation of disposable applications and fast frequency...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Cracks In The Foundation Of Disruptive Innovation

Michael Feldstein - Tue, 2015-10-06 19:14

By Phil Hill

The overuse of Clayton Christensen’s disruptive innovation theory has rightly been criticized in education circles for years. I say rightly in that judging a non-commodity public good with the same theory as disk drives is a silly notion without some extensive analysis to back up that extrapolation. As Audrey Watters wrote in 2013:

Rather, my assigning “myth” to “disruptive innovation” is meant to highlight the ways in which this narrative has been widely accepted as unassailably true. No doubt (as a Harvard professor) Christensen has faced very little skepticism or criticism about his theory about the transformation of industries— why, it’s as if The Innovator’s Dilemma were some sort of sacred text.
Helping to enhance its mythic status, the storytelling around “disruptive innovation” has taken on another, broader and looser dimension as well, as the term is now frequently invoked in many quarters to mean things quite different from Christensen’s original arguments in The Innovator’s Dilemma.

Referring back to Audrey’s posts, Jim Groom and Brian Lamb made efforts to “reclaim innovation” in 2014:

To understand much of the disconnect between higher education and innovation, we should take a look at innovation’s unruly cousin: disruption. Certainly, when surveying the rapid pace of change in digital and networked technologies and assessing the wreckage of organizations and industries that have, in one way or another, been swept aside, disruption as a descriptive term is not without merit. But unless we are prepared to rebuild from the wreckage and create something that represents a meaningful advance, it’s difficult to see the value in disruption for its own sake.

Audrey Watters has noted the essentially apocalyptic flavor of what she describes as “the myth and the millennialism of disruptive innovation” – mythic in the sense that it prophesies “the destruction of the old and the ascension of the new” and constitutes a narrative that “has been widely accepted as unassailably true.” When applied to education, disruptive innovation promises nothing less than “the end of school as we know it.”

Martin Weller this week explored the assumptions behind disruptive innovation to see what that would mean in practice in education, based on a “disruption in education” event, assuming people meant what they said when using this terminology:

I didn’t watch any of the event, maybe there were some very interesting presentations. But by labelling it Disruptors, the intention is made clear. Disruption, as set out by Christensen, is in fact very rare. I think it only actually applies about 1% of the time that term is claimed. But Christensen wouldn’t get rich by talking about a rare occurrence so has pushed the idea that it happens everywhere. The replacement of analogue photography by digital is the classic example. That really was disruption and swept away a whole industry. When it does happen, the thing about disruption is that it is absolutely brutal. A whole industry is replaced by a new one. This is not making improvements (that is the sustaining technology), it is completely destroying a sector and replacing it with a new one. It is an extinction event.

These are all valid and important critiques, and the underlying theme of disruptive innovation being too easily accepted is an important one. Last year Jill Lepore wrote the most widely-known critique in the New Yorker (with the subtitle “What the gospel of innovation gets wrong”):

Disruptive innovation as a theory of change is meant to serve both as a chronicle of the past (this has happened) and as a model for the future (it will keep happening). The strength of a prediction made from a model depends on the quality of the historical evidence and on the reliability of the methods used to gather and interpret it. Historical analysis proceeds from certain conditions regarding proof. None of these conditions have been met.

“The Innovator’s Dilemma” consists of a set of handpicked case studies, beginning with the disk-drive industry, which was the subject of Christensen’s doctoral thesis, in 1992.

Now there is even more evidence that the theory of disruptive innovation has some serious cracks in the foundation. In September’s MIT Sloan Management Review (you know, the other side of Cambridge), Dartmouth Professor Andrew King and UBC graduate student Baljir Baatartogtokh published the results of a study they led on the original claims in Christensen’s books. The article is titled “How Useful Is the Theory of Disruptive Innovation?”, and the short answer is: not very, even for the case studies claimed in the books.

The method of the study was to look at the 77 case studies used in The Innovator’s Dilemma and The Innovator’s Solution that underpin disruption theory, and then to interview experts in each field to see whether the case studies met the conditions of the theory itself. Of the 79 “experts”:

Fifty-eight percent of the surveyed respondents were academics; 18% were nonacademic authors of book-length historical analyses; 10% were financial analysts of the industries involved; and 14% were participants in the industries.

The four key elements of the theory are:

  1. Incumbents are improving along a trajectory of innovation
  2. The pace of sustaining innovation overshoots customer needs
  3. Incumbents have the capability to respond but fail to exploit it
  4. Incumbents flounder as a result of the disruption

[Figure: the four key elements of disruptive innovation theory]

One interesting note is that the interviewers had to avoid using the term “disruptive innovation” until the end of their interviews to avoid tainting the discussion. The terminology is so overused that it even distorts the discussion of the theory itself.

Even Christensen’s underlying assumption that the “objective function of management should be to maximize shareholder value” came into question when applied to education:

This clarified objective is problematic for the 9% of cases we examined that represent nonprofit organizations or publicly regulated utilities. For example, an author and expert on higher education noted that the “access mission of community colleges often runs counter to what presidents or other leaders might do to cut costs or improve completion outcomes.” This expert adds: “That makes it not such a great example for the theory [because] as a mission driven institution, they are responsible to the public and a higher calling.”

The article then describes how each of the 77 case studies fits within the four key elements of the theory, and the end result was that only 9% of the provided case studies fit all four. It seems that the case studies were cherry-picked to highlight one element at a time, and the cohesive theory of disruptive innovation might not be backed up even by its own data.

[Figure: analysis of the 77 case studies against the four elements]

The whole article is worth reading. You can access it for free either by joining the MIT Sloan Management Review for a free 14-day period or by forking over $6.50.

In the meantime, the conclusion of the article should serve as a lesson whenever someone in education uses “disruption” as a justification or description of what they do.

In summary, stories about disruptive innovation can provide warnings of what may happen, but they are no substitute for critical thinking. High-level theories can give managers encouragement, but they are no replacement for careful analysis and difficult choices.

If there is any one field where this conclusion applies more than others, it is education, with its mission-driven institutions, non-commodity outcomes, and poorly understood basics of how to even define learning.

The post Cracks In The Foundation Of Disruptive Innovation appeared first on e-Literate.

Partner Webcast – Oracle Mobile Cloud Service: Gates to Enterprise Mobility for Your Business

Nowadays Mobility has definitely disrupted business models. Mobile first companies that are using the context of mobile to create unique applications are creating new business models disrupting and...

Categories: DBA Blogs

About My Son, Chris Silva, Amazing Artist, Father and All-Around Human Being

FeuerThoughts - Tue, 2015-10-06 13:59
"For the record...."

Chris is the 2015 recipient of a 3arts grant, which makes me incredibly proud and also gives me the opportunity to share his professional art bio (I mostly experience him these days as Papa to my two wonderful granddaughters).
Born in Puerto Rico, Chris Silva has been a prominent figure in Chicago’s graffiti and skateboarding scenes since the 1980s, as well as an enthusiastic fan of a wide range of music genres which have resulted from the influence of metropolitan life. Building on his solid graffiti art foundation, Silva proceeded to play a significant role in the development of what is now commonly referred to as "street art." He now splits his time between working on large-scale commissions, producing gallery oriented work, and leading youth-involved public art projects. As a self-taught sound artist with roots in DJ culture, Silva also anchors a collaborative recording project known as This Mother Falcon, and has recently started integrating his audio compositions into his installation work.
In the early 90s, Silva worked on a mural with the Chicago Public Art Group and was eventually brought on board to help lead community art projects with other urban youth. As a result, the act of facilitating art experiences for young people has become an important part of his art practice, and he regularly includes students as collaborators on large-scale artwork that often leans heavily on improvisation. Over the years, Silva has helped orchestrate youth art projects both independently and in partnership with Chicago Public Art Group, Young Chicago Authors, Gallery 37, Yollocalli Arts Reach, After School Matters, and the School of The Art Institute of Chicago.
Silva was awarded a major public art commission by the Chicago Transit Authority to create a mosaic for the Pink Line California Station (2004); created block-long murals in Chicago's Loop “You Are Beautiful” (2006); created a sculpture for the Seattle Sound Transit System (2008); won the Juried Award for Best 3D Piece at Artprize (2012); and created large commissions for 1871 Chicago (2013), the City of Chicago, LinkedIn, CBRE (2014), OFS Brands, and The Prudential Building (2015). He has exhibited in Chicago, San Francisco, Los Angeles, New York City, Philadelphia, London, Melbourne, Copenhagen, and The International Space Station. In 2007 Silva received an Artist Fellowship Award from The Illinois Arts Council.
Categories: Development

SQLcl - Oct 5th Edition

Kris Rice - Tue, 2015-10-06 09:29
It was time for another push of the new SQL command line, SQLcl.  Here are the changes in the latest build.  There are a number of bugs fixed relating to connecting to idle databases, alias feedback, and picking up the $ORACLE_HOME's JDBC driver when available.  There are only a couple of functional changes.  First, the JDBC connect string itself can now be specified.  This opens up a number

SQL Server- CMDEXEC Subsystem Failed to Load

Chris Foot - Tue, 2015-10-06 08:43

During your life as a DBA, you will probably have to restore the master database and all the user databases on a new operating system to bring an already existing SQL Server instance online. One of my clients recently had an issue with hardware failure. Our only option was to install SQL Server on a new OS and use the backup files to restore all the databases from the old SQL Server instance. Once the new system was completely built and we restored msdb, we noticed that some of the SQL Server Agent jobs began to fail. Below is the error output from the job history. It states that the CMDEXEC subsystem failed to load.

This occurs because the previous SQL Server installation was done on a different drive than the newly-built SQL Server installation. There are two different ways to fix the issue.

Option 1

If you run the command below in a SQL Server query window on the new instance, the output will show that the old installation, in this example, was done on the C:\ drive. The newly-built SQL Server installation was done on the E:\ drive, so we have to update the table to correct the paths.

SELECT *
FROM msdb..syssubsystems


In order to update the table, you will have to run the commands below, replacing the subsystem_dll path for each subsystem with the drive that the new SQL Server installation is on.

sp_configure 'allow updates', 1
RECONFIGURE WITH OVERRIDE

UPDATE msdb..syssubsystems
SET subsystem_dll = REPLACE(subsystem_dll, 'C:\Program Files', 'E:\Program Files')
WHERE subsystem_dll LIKE 'C:\Program Files%'

sp_configure 'allow updates', 0
RECONFIGURE WITH OVERRIDE

--Restart SQL Server Agent

After you are finished updating the table to the new paths of the SQL Server installation, you will have to restart the SQL Server Agent for the changes to take place.
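As a side illustration, the effect of that REPLACE on each row can be sketched in Python (the DLL paths below are hypothetical examples, not taken from a real instance):

```python
# Mimic the T-SQL REPLACE used in Option 1 to repoint subsystem_dll values
# from the old installation drive (C:) to the new one (E:).
old_root = r"C:\Program Files"
new_root = r"E:\Program Files"

subsystem_dlls = [
    r"C:\Program Files\Microsoft SQL Server\MSSQL\binn\SQLCMDSS.DLL",  # needs fixing
    r"E:\Program Files\Microsoft SQL Server\MSSQL\binn\SQLREPSS.DLL",  # already correct
]

# Equivalent of: SET subsystem_dll = REPLACE(subsystem_dll, 'C:\...', 'E:\...')
fixed = [p.replace(old_root, new_root) for p in subsystem_dlls]

for p in fixed:
    print(p)  # both paths now start with E:\Program Files
```

Rows already pointing at the new drive are left untouched, which is why the WHERE clause in the UPDATE is safe to run repeatedly.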

Option 2

You would run the same SELECT statement to verify that the paths do not match the actual installation paths. Then you would run the DELETE statement to clear the table. Step 3 is to execute the sp_verify_subsystems stored procedure, which repopulates the subsystem table with the correct information. After a quick restart of the SQL Server Agent, your jobs will run successfully again.

--Step 1
--Look for the existence of the subsystem values
SELECT *
FROM msdb..syssubsystems

--Step 2
--DELETE the existing subsystem values
DELETE FROM msdb..syssubsystems

--Step 3
--Reload the subsystem values
EXEC msdb.dbo.sp_verify_subsystems 1

--Restart SQL Server Agent.

Thanks for reading. Enjoy!

The post SQL Server- CMDEXEC Subsystem Failed to Load appeared first on Remote DBA Experts.

Oracle E-Business Suite: Virtual Host Names

Pythian Group - Tue, 2015-10-06 07:56

The ability to use virtual host names with Oracle E-Business Suite is a feature that I have been waiting on for a long time. When I finally saw a post on Steven Chan’s blog about it, I was very excited. But when I finally got to review the MOS note “Configuring Oracle E-Business Suite Release 12.x Using Logical Host Names”, I was left disappointed.

In my opinion, the main advantage of using virtual host names comes in a DR failover scenario. By using virtual hosts we can set up the servers in both the primary and secondary datacenters to use the same virtual hostname, even though their physical hostnames are different. This setup helps when we fail over services and databases to the secondary datacenter, as we don’t have to reconfigure the application to use new physical hostnames. Currently, when we install E-Business Suite to use a virtual hostname, the Concurrent Managers don’t work, as they internally use the physical hostname to communicate.
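The idea can be sketched with a pair of hosts-file entries (the hostnames and addresses below are invented for illustration); both datacenters resolve the same virtual hostname, each to its local server:

```
# /etc/hosts on the primary datacenter server (physical host: prdapp01)
10.1.0.15    ebsapp.example.com    ebsapp

# /etc/hosts on the DR datacenter server (physical host: drapp01)
10.2.0.15    ebsapp.example.com    ebsapp
```

On failover, nothing in the E-Business Suite configuration changes; only which physical machine answers to ebsapp.example.com.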

The new MOS note describes this very feature of using virtual hostnames with Oracle E-Business Suite. But why am I disappointed? Because it leaves a very important use case out. In most cases when virtual hostnames are used, the servers keep their different physical hostnames; if you run the hostname or uname command you will see the actual physical hostname, and the virtual hostname is present only in DNS and the hosts file. This scenario is not covered by the MOS note. Instead, the note asks us to reconfigure the server with the virtual hostname, so that the hostname or uname command shows the virtual hostname instead of the physical hostname.

I believe the need to reconfigure the server to use a virtual hostname defeats the main purpose of setting up virtual hostnames, making this MOS note useless :(

Thus, I will keep waiting for this to work out of the box. I currently have a custom in-house method for using virtual hostnames with E-Business Suite, which I will blog about in the future.


Discover more about our expertise with Oracle.

Categories: DBA Blogs

The Oracle EPM Cloud Forecast for OpenWorld 2015 is Clear and Bright

Linda Fishman Hoyle - Tue, 2015-10-06 06:28

A Guest Post by Jennifer Toomey, Senior Principal Product Marketing Director, Business Analytics Product Group (pictured left)

Oracle OpenWorld San Francisco (October 25–29, 2015) offers more Oracle EPM content and expert experience than any other conference in the world. It offers nearly 50 EPM conference sessions, seven demo stations, and two hands-on labs, plus 35 customers, 15 partners, and 26 Oracle staff speaking. Whether you already have, or are considering, an Oracle EPM solution, Oracle OpenWorld is the place to be.

Cloud: EPM Cloud is in the spotlight this year with sessions featuring existing Oracle EPM Cloud customers, products, and strategy, as well as roadmap sessions that reveal plans for new offerings coming in the next 12–18 months. Attendees will get the opportunity for a first look at some of these new cloud solutions. In addition, a number of customers will share their experiences and results from using Oracle EPM Cloud solutions.

On Premises: There will be sessions covering on premises Oracle EPM products, with more than 10 sessions dedicated to customer case studies.

Meet the Experts: We will have experts ready to demonstrate the complete range of both cloud and on premises products. Be sure to book yourself for a test drive so you can try out Oracle EPM Cloud solutions first hand.

Recommended Sessions:

Oracle EPM General Session with KPMG: Executive Briefing on Oracle’s EPM Strategy and Roadmap [GEN7014]   Monday, Oct 26, 4:00 p.m. | Moscone West—2008

Customers Present:  Oracle Planning and Budgeting Cloud Service [CON9540] Monday, Oct 26, 1:30 p.m. | Moscone West—3018

Oracle Fusion Middleware: Meet This Year’s Most Impressive Innovators [CON10374] Tuesday, Oct 27, 4:00 p.m. | YBCA Theater

New: Oracle Planning and Budgeting Cloud Service Enterprise Edition [CON9529] Wednesday, Oct 28, 11:00 a.m. | Moscone West—3018

What’s New and What’s Coming: Financial Close in the Cloud [CON7031] Wednesday, Oct 28, 11:00 a.m. | Moscone West—3020

Product Development Panel Q&A: Oracle Hyperion EPM Applications [CON9524] Wednesday, Oct 28, 12:15 p.m. | Moscone West—3009

Customers: Watch for these customers who are speaking: Kraft Heinz, Serta Simmons Bedding, Wilsonart International, Baxters Food Group, Ambarella Corp, Invesco, WestRock Co, Cognizant Technology Solutions Inc, Vodafone, EA, Suntrust Banks, Inc. and many more.

URLs: Use this link to find out everything about Oracle EPM and OpenWorld 2015. And, don’t forget the Customer Appreciation Event at Treasure Island on Thursday evening, October 29.

Have fun and learn at Oracle OpenWorld 2015. We look forward to seeing you there!

A ShipIt at Oracle’s Mexico Development Center

Oracle AppsLab - Tue, 2015-10-06 02:37

Editor’s note: Our team participated in many of the Summer of Innovation events held by Laurie (@lsptahoe) and her team. Here’s a report from Luis (@lsgaleana) on the Mexico Development Center (MDC) ShipIt held in mid-August. Enjoy.

Oracle Mexico Development Center (MDC) has grown to bring together some 700 employees, and it continues to grow weekly. However, there is only a single point of entry for every non-employee: the receptionist.

Every visitor that comes to MDC goes roughly through the same process. The person talks to the receptionist. The receptionist goes to the corporate directory to find the employee being visited. The employee meets the person at the lobby. This is a pretty straightforward job and it is similar for regular visitors, delivery personnel and interview candidates, which are the user roles we picked for our ShipIt project.


In the days before the ShipIt event, Rafael Belloni (Rafa) gathered the Mariachis team: Osvaldo Villagrana (Os), Oscar Vargas, Juan Pablo Martinez (Juampi) and myself. He talked to us about his idea of a virtual kiosk that would serve as an entry point for every visitor. It would consist of a screen with a simple but elegant user interface that, in the background, would take care of the tedious and repetitive job of finding people in the corporate directory, contacting them, printing badges, saving the information to a log and even entertaining visitors with a video about Oracle and all its wonders.


Technically, we wanted to have an Android app loaded onto a Nexus Player, which Oscar owned, connected to a touch-screen monitor, that Rafa had recently bought (with the ShipIt in mind, of course). This 3-part device would represent the kiosk. In the background, we would have web services to scrape the corporate directory, notify employees via e-mail, IM and text, print a badge for the visitor, and save all of the information into a log. One final part was a web panel, where HR personnel could schedule interviews and assign interviewers.

The day of the event, we started putting things together. However, as is common in these kinds of events, issues started arising quickly.

To enable touch on the touch-screen monitor, a USB cable was needed between the monitor and the device controlling it. But the Nexus Player has no USB input. We debated our options and concluded that we would try to make it work (somehow). In the meantime, we would try to get Android running on a Raspberry Pi connected to the monitor. If all else failed, we would make a web app running on a laptop. In the end, Oscar came up with BlueStacks, an Android emulator for the PC, on which we could easily install an Android app. We went with that.

Rafa worked on the web panel for the interviews; Os worked on the web services, the PDF conversion, and the IM and e-mail communication; Juampi did the design; and Oscar and I built the Android app. We each had our share of problems, but we discussed them as a group, and although we all proposed solutions, it always came down to the easiest and quickest.

The day went by and we called it for the night around 3am.


The next day, after breakfast, the kiosk was coming together. The design was ready, we had screens to show, the web panel was working and most of the web services were in place. We just needed to put everything together and polish some features.

We rehearsed the presentation about an hour before the scheduled time. Rafa did all the talking, I controlled the demo, and Os showed the e-mails, IM messages and PDF. We were number 3 of 5 to present. All teams cast their votes, and then the time came to announce the winners: 3rd, UAE Tacklers; 2nd, Team Roller; 1st, Mariachis.


In the end, we trimmed the project down. We decided that it was a bit too much to print the badge – generating a PDF was cool enough. We also discarded the texting, the logging, and the Oracle video. However, these features can easily be added in production.

This is how it looked:


Cloud Control : Some minor issues (fixed)

Tim Hall - Tue, 2015-10-06 02:19

Since the upgrade to Cloud Control, we’ve been having a couple of issues, mostly around EMCLI.

Some of our databases use Service Guard, so you don’t know which node they are running on. Rather than having an agent per package, we have one on each node. One of my colleagues wrote a little script to check which node the instance is running on and relocate the target if it has moved. This is done using EMCLI and was working fine before the move. Since the upgrade it’s been rather erratic. It would work for a while, then fail. After watching for a while I noticed a couple of things.

EMCLI calls to the OMS that sent SQL were intermittently killing the Agent on the Cloud Control server itself, and seemed to cause Out Of Memory errors. The heap size for the agent on the CC server was set to something like 1860M. Changing it to 2048M seemed to fix that issue. The setting is in the “agent_inst/sysman/config/” file. We now have this.

agentJavaDefines=-Xmx2048M -XX:MaxPermSize=128M

Note. The agents on the monitored servers have a tiny heap size. Nothing like this bad-boy. :)

After running OK for a few days, we started to get the following type of errors from EMCLI.

$ emcli sync
Error: Session expired. Run emcli login to establish a session.

We fixed this by running the “setup” command again.

emcli setup -username=sysman -password="MyPassword" -url=""

Since then, EMCLI has seemed to behave itself.

Note. As part of the upgrade, we downloaded the latest EMCLI jar file from the server and did the setup, so this part was working fine for a while. Not sure why it started to screw up…

Anyway, all seems fine now. I’m guessing if we weren’t using EMCLI, we would never have had a problem in the first place.



Cloud Control : Some minor issues (fixed) was first posted on October 6, 2015 at 9:19 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

El Crap-itan

Tim Hall - Tue, 2015-10-06 01:18

In the comments on yesterday’s post, Jonathan Roden mentioned the release of El Capitan. At that point, I hadn’t even heard about it. :) Being the naive idiot I am, I jumped headlong into it.

The download was about 3G, which didn’t take too long. The upgrade itself took quite a while. That included one failure. During the installation, the system rebooted, as it said it would, and I thought it was over, but it was still running Yosemite. I manually started the upgrade again by running the installer, which was sitting in the Applications folder. The second time it completed.

The first snag was my external monitor didn’t work. A bit of Googling and it seems this is not uncommon. Some people said they couldn’t get the HDMI connection to work, so I switched to a display port connection. No luck there. The “Detect Displays” button is hidden these days, but it shows up if you hold the “Options” key (see here). Apparently, this happened in a previous release, but I’ve obviously not needed it up until now. :) Anyway, that didn’t help. I just kept switching between cables, each time with a hard reboot. Eventually, it noticed the monitor on the HDMI cable and all was working fine. This does of course make me worry what is going to happen when I plug this laptop into a projector. Am I going to need several hard reboots each time before it notices the new display? :(

During the repeated reboots, I noticed how long it takes to do a hard reboot under El Capitan. I know some fanboys would have you believe you never have to reboot an Apple device, but that is clearly not true. This is running on an 8-month-old i7, with 16G RAM and a 512G flash card. It was crazy fast to reboot under Yosemite. Not so much under El Capitan. Once it’s started, I can’t tell a performance difference (at the moment), but bootup time is shocking. Much worse than Windows 7 on my crappy i5 PC at work. Some of the folks on the interwebs are claiming general performance sucks since the upgrade, even on new gear. We shall see.

Since the upgrade, the laptop doesn’t seem to turn the screen off when it’s been inactive for a while. I woke up this morning to find it had been on all night. My first thought was I had left Caffeine running, but I hadn’t. As far as I could see, there was nothing running that would cause this. I didn’t have time to figure out why. I’m marking this as a fail, because I’m forced to investigate something that was working fine before.

During my Googling for solutions to my issues, it seems lots of people are complaining about poor battery life since switching to El Capitan. My laptop is permanently plugged in when I am at home. It will be interesting to see how it copes when I travel. I won’t give this a fail yet, as I don’t have any personal experience of it, but you might want to think twice if you are a battery user. :)

Visually, I’ve not been able to tell the last few releases apart. There are allegedly new features in this release, but I’m not sure I will ever notice them. I don’t care about new eye candy that much, but I don’t see the point of giving this a new name and all that, when it feels like a minor patch.

Overall, I’m giving El Capitan a resounding fail at this point. Hopefully, Apple will take note of the complaints on the net and fix this shit soon. If you are trying to decide whether to switch, I would say wait a while and see what Apple does in the coming weeks. Maybe you will have a different experience. Maybe not. :)




El Crap-itan was first posted on October 6, 2015 at 8:18 am.

Fundamentals of SQL Writeback in Dodeca

Tim Tow - Mon, 2015-10-05 21:00
One of the features of Dodeca is read-write functionality to SQL databases.  We often get questions as to how to write data back to a relational database, so I thought I would post a quick blog entry for our customers to reference.

This example will use a simple table structure in SQL Server though the concepts are the same when using Oracle, DB2, and most other relational databases.  The example will use a simple Dodeca connection to a JDBC database.  Here is the Dodeca SQL Connection object used for the connection.

The table I will use for this example was created with the following CREATE TABLE  statement.

CREATE TABLE [dbo].[Test](
  [TestID] [int] IDENTITY(1,1) NOT NULL,
  [TestCode] [nvarchar](50) NULL,
  [TestName] [nvarchar](50) NULL,
  PRIMARY KEY CLUSTERED ([TestID] ASC)
)

First, I used the Dodeca SQL Excel View Wizard to create a simple view in Dodeca to retrieve the data into a spreadsheet.  The view, before setting up writeback capabilities, looks like this.

To make this view writeable, follow these steps:
  1. Add the appropriate SQL insert, update, or delete statements to the Dodeca SQL Passthrough Dataset object.  The values to be replaced in the SQL statement must be specified using the notation @ColumnName where ColumnName is the column name, or column alias, of the column containing the data.
  2. Add the column names of the primary key for the table to the PrimaryKey property of the SQL Passthrough DataSet object.
  3. Depending on the database used, define the column names and their respective JDBC datatypes in the Columns property of the SQL Passthrough Dataset.  This mapping is optional for SQL Server, because Dodeca can obtain the required information from the Microsoft JDBC driver; however, the Oracle and DB2 JDBC drivers do not provide this information, and it must be entered by the developer.
For insert, update, and delete operations, Dodeca parses the SQL statement to read the parameters that use the @ indicator and creates a JDBC prepared statement to execute the statements.  The prepared statement format is very efficient as it compiles the SQL statement once and then executes it multiple times.  Each inserted row is also passed to the server during the transaction.  The values from each row are then used in conjunction with the prepared statement to perform the operation.
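The compile-once, execute-many pattern described above can be sketched outside Dodeca. This is a hypothetical stand-alone example (not Dodeca code) that uses Python's built-in sqlite3 module rather than a JDBC driver; the ? placeholders play the role of Dodeca's @ColumnName parameters:

```python
import sqlite3

# In-memory database standing in for the relational target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Test (TestID INTEGER PRIMARY KEY, TestCode TEXT, TestName TEXT)")

# One parameterized INSERT, prepared once and executed once per row of data.
rows = [("A1", "First test"), ("A2", "Second test")]
conn.executemany("INSERT INTO Test (TestCode, TestName) VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM Test").fetchone()[0])  # prints 2
```

The efficiency win is the same as in the JDBC case: the statement is parsed and compiled a single time, then reused with fresh values for each inserted row.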

Here is the completed Query definition.

Next, modify the DataSetRanges property of the Dodeca View object and, to enable insert operations, set the AllowAddRow property to True.  Note that if you added update and/or delete SQL to your SQL Passthrough Dataset object, be sure to enable those operations on the worksheet via the AllowDeleteRow and AllowModifyRow properties.

Once this step is complete, you can run the Dodeca View, add a row, and press the Save button to save the record to the relational database.

Insert, update, and delete functionality using plain SQL statements is limited to operations on a single table.  If you need to update multiple tables, you must use a stored procedure.  You can call a stored procedure in Dodeca using syntax similar to the following example:

{call sp_InsertTest(@TestCode, @TestName)}

Dodeca customers can contact support for further information at
Categories: BI & Warehousing

IBM Bluemix - Specify only Liberty buildpack features you require

Pas Apicella - Mon, 2015-10-05 20:22
I am more often than not using Spring Boot applications on IBM Bluemix, and most of what I need (JPA, JDBC drivers, REST support, etc.) is packaged with the application. Of course, with IBM Bluemix we can specify which buildpack we wish to use, but by default Liberty is used for Java applications.

When a stand-alone application is deployed, a default Liberty configuration is provided for the application. The default configuration enables the following Liberty features:
  • beanValidation-1.1
  • cdi-1.2
  • ejbLite-3.2
  • el-3.0
  • jaxrs-2.0
  • jdbc-4.1
  • jndi-1.0
  • jpa-2.1
  • jsf-2.2
  • jsonp-1.0
  • jsp-2.3
  • managedBeans-1.0
  • servlet-3.1
  • websocket-1.1
  • icap:managementConnector-1.0
  • appstate-1.0
Here is how I strip out what isn't required, trimming my Liberty runtime container down to the bare minimum of features I need.


applications:
 - name: pas-speedtest
   memory: 512M
   instances: 1
   path: ./demo-0.0.1-SNAPSHOT.jar
   host: pas-speedtest
   env:
     JBP_CONFIG_LIBERTY: "app_archive: {features: [jsp-2.3, websocket-1.1, servlet-3.1]}"
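As a quick sanity check (plain Python, with the feature names taken from the default list above), this sketch shows which of the default features the trimmed app_archive configuration drops:

```python
# Default Liberty features enabled by Bluemix (from the list above)
default_features = [
    "beanValidation-1.1", "cdi-1.2", "ejbLite-3.2", "el-3.0",
    "jaxrs-2.0", "jdbc-4.1", "jndi-1.0", "jpa-2.1", "jsf-2.2",
    "jsonp-1.0", "jsp-2.3", "managedBeans-1.0", "servlet-3.1",
    "websocket-1.1", "icap:managementConnector-1.0", "appstate-1.0",
]
# Features kept by the trimmed JBP_CONFIG_LIBERTY setting
kept = ["jsp-2.3", "websocket-1.1", "servlet-3.1"]

dropped = [f for f in default_features if f not in kept]
print(len(dropped), "features no longer loaded:", dropped)
```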

 More Information
Categories: Fusion Middleware

Uploading 26M StackOverflow Questions into Oracle 12c

Marcelo Ochoa - Mon, 2015-10-05 16:42
Just for fun, and to test the in-memory capabilities of Oracle 12c.
Following the post Import 10M Stack Overflow Questions into Neo4j In Just 3 Minutes, I modified the Python script to include the foreign key columns that are not part of the graph database design but are required in a relational model.
The Python files can be downloaded from my drive; basically, the script adds these two lines:

                el.get('owneruserid'),

when generating the output file csvs/posts.csv. The idea is to convert the StackOverflow export files:

-rw-r--r-- 1 root root   37286997 ago 18 12:50
-rw-r--r-- 1 root root 7816218683 ago 18 13:52
-rw-r--r-- 1 root root     586861 ago 18 13:52
-rw-r--r-- 1 root root  160468734 ago 18 13:54
-rw-r--r-- 1 root root  524354790 ago 18 13:58
-rw-r--r-- 1 root root 2379415989 sep  2 14:28
-rw-r--r-- 1 root root  112105812 sep  2 14:29

into a list of CSV files for quick importing into Oracle 12c RDBMS using external tables. Here is the list of converted files and their sizes:

3.8G posts.csv
287M posts_rel.csv
524K tags.csv
517M tags_posts_rel.csv
355M users.csv
427M users_posts_rel.csv

With the above files, and with Oracle 12c running in a Docker container as described in my previous post On docker, Ubuntu and Oracle RDBMS, I executed these steps.

Logged in as SYS:

alter system set sga_max_size=4G scope=spfile;
alter system set sga_target=4G scope=spfile;
alter system set inmemory_size=2G scope=spfile;
create user sh identified by sh
   default tablespace ts_data
   temporary tablespace temp
   quota unlimited on ts_data;
grant connect,resource,luceneuser to sh;
create directory data_dir1 as '/mnt';
create directory tmp_dir as '/tmp';
grant all on directory data_dir1 to sh;
grant all on directory tmp_dir to sh;

This basically creates a new user and the directories to be used by the external tables. Note that the CSV files are available inside the Docker machine as the /mnt directory; I am running my Docker image with:

docker run --privileged=true --ipc=host --volume=/var/lib/docker/dockerfiles/ --volume=/mnt/backup/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102

Then, logged in as the SH user:

- Importing users

create table users_external
( user_id            NUMBER(10),
  display_name VARCHAR2(4000),
  reputation       NUMBER(10),
  aboutme         VARCHAR2(4000),
  website_url    VARCHAR2(4000),
  location          VARCHAR2(4000),
  profileimage_url VARCHAR2(4000),
  views             NUMBER(10),
  upvotes          NUMBER(10),
  downvotes     NUMBER(10)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields terminated BY ','
    optionally enclosed BY '"'
    missing field VALUES are NULL
  )
  location (data_dir1:'users.csv')
)
reject limit unlimited;

CREATE TABLE so_users
   TABLESPACE ts_data
      as (select * from users_external);
-- Elapsed: 00:00:22.76

ALTER TABLE so_users ADD PRIMARY KEY (user_id);
-- Elapsed: 00:00:13.08

create index so_users_display_name_idx on so_users(display_name);
-- Elapsed: 00:00:08.01

- Importing posts

create table posts_external
( post_id      NUMBER(10),
  parent_id   NUMBER(10),
  user_id      NUMBER(10),
  title            VARCHAR2(4000),
  body          CLOB,
  score         NUMBER(10),
  views        NUMBER(10),
  comments NUMBER(10)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields terminated BY ','
    optionally enclosed BY '"'
    missing field VALUES are NULL
  )
  location (data_dir1:'posts.csv')
)
reject limit unlimited;

CREATE TABLE so_posts
   TABLESPACE ts_data
      as (select * from posts_external);
-- Elapsed: 00:14:20.89

ALTER TABLE so_posts ADD PRIMARY KEY (post_id);
-- Elapsed: 00:02:35.86

-- purge posts associated with no imported users
delete from so_posts where user_id not in (select user_id from so_users);
-- Elapsed: 00:02:41.64

create index so_posts_user_id_idx on so_posts(user_id);
-- Elapsed: 00:01:34.87

ALTER TABLE so_posts ADD CONSTRAINT fk_so_user FOREIGN KEY (user_id) REFERENCES so_users(user_id);
-- Elapsed: 00:00:09.28

Note that 26 million posts were imported in about 14 minutes, not bad considering that the CSV source was on an external USB 2.0 drive while the Oracle 12c tablespaces were on a USB 3.0 drive. Here is a screenshot showing the IO bandwidth consumed on both drives: only 4.8 MB/s reading from sdb (CSV) and 9.7 MB/s writing to sdc1 (ts_data).

- Importing tags

create table tags_external
( tag_id      VARCHAR2(4000)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields terminated BY ','
    optionally enclosed BY '"'
    missing field VALUES are NULL
  )
  location (data_dir1:'tags.csv')
)
reject limit unlimited;

CREATE TABLE so_tags
   TABLESPACE ts_data
      as (select * from tags_external);
-- Elapsed: 00:00:00.55

create table tags_posts_external
( post_id      NUMBER(10),
  tag_id       VARCHAR2(4000)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields terminated BY ','
    optionally enclosed BY '"'
    missing field VALUES are NULL
  )
  location (data_dir1:'tags_posts_rel.csv')
)
reject limit unlimited;

CREATE TABLE so_tags_posts
   TABLESPACE ts_data
      as (select * from tags_posts_external);
-- Elapsed: 00:00:43.75

-- purge tags associated with no imported posts
delete from so_tags_posts where post_id not in (select post_id from so_posts);
-- Elapsed: 00:02:42.00

create index so_tags_posts_post_id_idx on so_tags_posts(post_id);
-- Elapsed: 00:00:43.29

ALTER TABLE so_tags_posts ADD CONSTRAINT fk_so_posts FOREIGN KEY (post_id) REFERENCES so_posts(post_id);
-- Elapsed: 00:01:16.65

Note that, as with the posts<->users one-to-many relation, tags<->posts is also a one-to-many relation, and some posts referenced by a few tags were not imported due to character-encoding errors.

As a summary of the above steps: 26 million posts from 4.5 million registered users were imported; 41K distinct tags are used, with an average of 1.11 tags per post (29M tag/post rows).

The next blog post will be about using Oracle 12c in-memory features to query this corpus.

OTN at Oracle OpenWorld Group - Join today!

OTN TechBlog - Mon, 2015-10-05 11:27

Join the OTN at Oracle OpenWorld group on the OTN Community Platform!  This group is designed to keep you in the know about all the GREAT activities and events that the Team OTN is planning/organizing for Oracle OpenWorld in San Francisco this October (24th to 28th).

Some of the events/activities to look forward to -

Community Events - RAC Attack and Blogger Meetup.

Networking Opportunities - Sunday Kick off Party, Cloud Hour

NEW activities! Graffiti Wall and giant games plus Make Your Own T-Shirt is back with NEW art!


We hope to see you there!


Submit an abstract for Georgia Oracle User Group (GaOUG) Tech Day 2016

DBASolved - Mon, 2015-10-05 10:15

In 2014, I, along with Danny Bryant and Stewart Bryson from the Atlanta area, was added to the board of directors for the Georgia Oracle User Group (GaOUG). With us on the board, we initiated a rebranding of the user group from GOUSER to GaOUG in February 2015, with much success. We followed that up with two quarterly events in April and July 2015, which proved to be an even bigger success, and we continue to build on each event thanks to the people we have been able to attract. After the success of our quarterly events, the board of directors set a goal of bringing the best speakers, locally, nationally, and internationally, to the Atlanta area for a one-day conference in 2016. So I'm here to promote the conference and, hopefully, to convince you to submit an abstract to this new event on the conference circuit.

Little Background:

When I first moved to the Atlanta area (2001), I didn't know anything about Atlanta, much less its Oracle community. I joined GOUSER around 2006, when little was going on in the community (a story for another time). It wasn't until 2012 that I first ventured onto the conference stage, at Oracle OpenWorld 2012. My first conference was nerve-racking to say the least; however, I was introduced to many great people that year, and my career, my personal and professional networks, and my friendships have all benefited from that experience. These are just some of the reasons why I keep submitting annually to a wide range of conferences and why I'm helping to bring this conference to Atlanta.


The aim of the GaOUG Tech Day 2016 conference is to start small and grow into the best regional Oracle user group conference in the Southeast! That can only be achieved with the help of great speakers (new and returning) from the Oracle community. With that said, our call for papers is currently open (submit here).

GaOUG Tech Day 2016 will have three categories for abstract submissions: Database & Development, Middleware, and Applications. These categories cover a wide range of technologies from the Oracle stack, plus many others.

Submitting an Abstract:

Before you submit your abstract, here are a few reminders that make for a great one:

  1. Take the time to make a great abstract title and fill out the abstract and summary completely!
  2. Run it through a spelling and grammar check.  If you submit a "sloppy" abstract with misspellings and grammatical errors, how can we know that you'll take the time to ensure your presentation is delivered professionally and is technically accurate?
  3. List a few take-aways the attendee will leave with.  What is the value that will be gained by attending your session?
  4. Fill out your speaker biography.  We like to know a bit about you and why you are important to have presenting at the conference.
  5. No marketing!  Keep your session technical.  Ours is a technical conference, and nothing irks our attendees like marketing!  If they like what you are teaching, they'll seek out your company and/or product; trust me!

(Tips were provided by DBAKevlar, thanks Kellyn!)

First-Time/Local Speakers:

Finally, if you are submitting an abstract and you are a first-time or local speaker, do not hesitate to reach out to the board and ask for guidance and/or mentoring. The board of directors has the experience to help you with your submissions and selections. We look forward to hearing from new speakers in the community and to helping you build your confidence in the speaking arena.

Submit your abstract today! Deadline for submissions is November 2nd, 2015!

Filed under: GaOUG, General
Categories: DBA Blogs

Do we really need semantic layer from OBIEE?

Dylan's BI Notes - Mon, 2015-10-05 09:22
Not all BI tools have a semantic layer.  For example, Oracle Discoverer does not seem to have a strong one. This page summarizes what the OBIEE semantic layer can do for you… BI Platform Semantic Layer I think that if these features can be accomplished in other ways and can be proven that they are not necessary, […]
Categories: BI & Warehousing