
Feed aggregator

New Presentation - Building Practical Oracle Audit Trails

Pete Finnigan - 3 hours 15 min ago

I wrote a presentation on designing and building practical audit trails back in 2012, presented it once, and then never again. By chance I did not post the PDFs of these slides at that time. I did though some....[Read More]

Posted by Pete On 01/10/15 At 05:16 PM

Categories: Security Blogs

Protect Your APEX Application PL/SQL Source Code

Pete Finnigan - 3 hours 15 min ago

Oracle Application Express is a great rapid application development tool where you can write your application's functionality in PL/SQL and easily create the interface in the APEX UI using all of the tools available to create forms and reports and....[Read More]

Posted by Pete On 21/07/15 At 04:27 PM

Categories: Security Blogs

Oracle Security and Electronics

Pete Finnigan - 3 hours 15 min ago

How do Oracle Security and Electronics mix together? - Well, I started my working life in 1979 as an apprentice electrician in a factory here in York, England, where I live. The factory designed and built trains for the national....[Read More]

Posted by Pete On 09/07/15 At 11:24 AM

Categories: Security Blogs

New Conference Speaking Dates Added

Pete Finnigan - 3 hours 15 min ago

In the last few years I have not done as many conference speaking dates as I used to. This is simply because when offered they usually clashed with pre-booked work. I spoke for the UKOUG in Dublin last year and....[Read More]

Posted by Pete On 06/07/15 At 09:40 AM

Categories: Security Blogs

Happy 10th Belated Birthday to My Oracle Security Blog

Pete Finnigan - 3 hours 15 min ago

Make a Sad Face..:-( I seem to have missed my blog's tenth birthday, which happened on the 20th September 2014. My last post last year, and until very recently, was on July 23rd 2014; so actually it's been a big gap....[Read More]

Posted by Pete On 03/07/15 At 11:28 AM

Categories: Security Blogs

Oracle Database Vault 12c Paper by Pete Finnigan

Pete Finnigan - 3 hours 15 min ago

I wrote a paper about Oracle Database Vault in 12c for SANS last year and this was published in January 2015 by SANS on their website. I also prepared and did a webinar about this paper with SANS. The Paper....[Read More]

Posted by Pete On 30/06/15 At 05:38 PM

Categories: Security Blogs

Unique Oracle Security Trainings In York, England, September 2015

Pete Finnigan - 3 hours 15 min ago

I have just updated all of our Oracle Security training offerings on our company website. I have revamped all class pages and added two-page pdf flyers for each of our four training classes. I have also updated the list....[Read More]

Posted by Pete On 25/06/15 At 04:36 PM

Categories: Security Blogs

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - 3 hours 15 min ago

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

About My Son, Chris Silva, Amazing Artist, Father and All-Around Human Being

FeuerThoughts - Tue, 2015-10-06 13:59
"For the record...."

Chris is the 2015 recipient of a 3arts grant, which makes me incredibly proud and also gives me the opportunity to share his professional art bio (I mostly experience him these days as Papa to my two wonderful granddaughters).
Born in Puerto Rico, Chris Silva has been a prominent figure in Chicago’s graffiti and skateboarding scenes since the 1980s, as well as an enthusiastic fan of a wide range of music genres which have resulted from the influence of metropolitan life. Building on his solid graffiti art foundation, Silva proceeded to play a significant role in the development of what is now commonly referred to as "street art." He now splits his time between working on large-scale commissions, producing gallery oriented work, and leading youth-involved public art projects. As a self-taught sound artist with roots in DJ culture, Silva also anchors a collaborative recording project known as This Mother Falcon, and has recently started integrating his audio compositions into his installation work.
In the early 90s, Silva worked on a mural with the Chicago Public Art Group and was eventually brought on board to help lead community art projects with other urban youth. As a result, the act of facilitating art experiences for young people has become an important part of his art practice, and he regularly includes students as collaborators on large-scale artwork that often leans heavily on improvisation. Over the years, Silva has helped orchestrate youth art projects both independently and in partnership with Chicago Public Art Group, Young Chicago Authors, Gallery 37, Yollocalli Arts Reach, After School Matters, and the School of The Art Institute of Chicago.
Silva was awarded a major public art commission by the Chicago Transit Authority to create a mosaic for the Pink Line California Station (2004); created block-long murals in Chicago's Loop “You Are Beautiful” (2006); created a sculpture for the Seattle Sound Transit System (2008); won the Juried Award for Best 3D Piece at Artprize (2012); and created large commissions for 1871 Chicago (2013), the City of Chicago, LinkedIn, CBRE (2014), OFS Brands, and The Prudential Building (2015). He has exhibited in Chicago, San Francisco, Los Angeles, New York City, Philadelphia, London, Melbourne, Copenhagen, and The International Space Station. In 2007 Silva received an Artist Fellowship Award from The Illinois Arts Council.
Categories: Development

SQLcl - Oct 5th Edition

Kris Rice - Tue, 2015-10-06 09:29
It was time for another push of the new SQL command line, SQLcl. Here are the changes in the latest build. A number of bugs have been fixed relating to connecting to idle databases, alias feedback, and picking up the $ORACLE_HOME's JDBC driver when available. There are only a couple of functional changes. The first is that the JDBC connect string itself can now be specified. This opens up a number

SQL Server- CMDEXEC Subsystem Failed to Load

Chris Foot - Tue, 2015-10-06 08:43

During your life as a DBA, you will probably have to restore the master database and all the user databases on a new operating system to bring an already existing SQL Server instance online. One of my clients recently had an issue with hardware failure. Our only option was to install SQL Server on a new OS and use the backup files to restore all the databases from the old SQL Server instance. Once the new system was completely built and we restored msdb, we noticed that some of the SQL Server Agent jobs began to fail. Below is the error output from the job history. It states that the CMDEXEC subsystem failed to load.

This occurs because the previous SQL Server installation was done on a different drive than the newly-built SQL Server installation. There are two ways to fix the issue.

Option 1

If you run the command below in a SQL Server query window on the new instance, the output will show that the old installation, in this example, was done on the C:\ drive. The newly-built SQL Server installation was done on the E:\ drive, so we have to update the table to correct the paths.

SELECT * FROM msdb..syssubsystems


In order to update the table, you will have to run the below command and replace the subsystem_dll path for each subsystem to the drive that the new SQL Server installation is on.

sp_configure 'allow updates', 1
RECONFIGURE WITH OVERRIDE


UPDATE syssubsystems
SET subsystem_dll = REPLACE(subsystem_dll, 'C:\Program Files', 'E:\Program Files')
FROM syssubsystems
WHERE subsystem_dll LIKE 'C:\Program Files%' 

sp_configure 'allow updates', 0
RECONFIGURE

--Restart SQL Server Agent

After you have finished updating the table with the new paths of the SQL Server installation, you will have to restart the SQL Server Agent for the changes to take effect.

Option 2

You would run the same SELECT statement to verify that the paths do not match the actual installation. Then you would run the DELETE statement to clear the table. Step 3 is to execute the sp_verify_subsystems stored procedure, which repopulates the subsystem table with the correct information. After a quick restart of the SQL Server Agent, your jobs will run successfully again.

--Step 1
--Look for the existence of the subsystem values
SELECT * FROM msdb..syssubsystems
--Step 2
--DELETE the existing subsystem values
DELETE FROM msdb..syssubsystems
--Step 3
--Reload the subsystem values
EXEC msdb.dbo.sp_verify_subsystems 1

--Restart SQL Server Agent.

Thanks for reading. Enjoy!

The post SQL Server- CMDEXEC Subsystem Failed to Load appeared first on Remote DBA Experts.

Oracle E-Business Suite: Virtual Host Names

Pythian Group - Tue, 2015-10-06 07:56

The ability to use virtual host names with Oracle E-Business Suite is a feature I have been waiting on for a long time. When I finally saw a post on Steven Chan's blog about it, I was very excited. But when I finally got to review the MOS note "Configuring Oracle E-Business Suite Release 12.x Using Logical Host Names", I was left disappointed.

In my opinion, the main advantage of using virtual host names is in a DR failover scenario. By using virtual hostnames we can set up the servers in both the primary and secondary datacenters to use the same virtual hostname, even though their physical hostnames are different. This setup helps when we fail over services and databases to the secondary datacenter, as we don't have to reconfigure the application to use new physical hostnames. Currently, when we install E-Business Suite to use a virtual hostname, the Concurrent Managers don't work, as they internally use the physical hostname to communicate.

The new MOS note describes this very feature of using virtual hostnames with Oracle E-Business Suite. So why am I disappointed? Because it leaves a very important use case out. In most cases where virtual hostnames are used, the servers keep their different physical hostnames; i.e., if you run the hostname or uname commands you will see the actual physical hostname, and the virtual hostname is only present in DNS and the hosts file. This scenario is not covered by the MOS note. Instead, the note asks us to reconfigure the server with the virtual hostname, such that typing the hostname or uname command shows the virtual hostname instead of the physical hostname.
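To illustrate the missing use case: the same virtual name resolves to a different physical server in each datacenter, while the servers themselves keep their physical hostnames. The names and addresses below are invented for illustration only:

```
# /etc/hosts fragment on the primary datacenter server (physical name: prdapp01)
10.1.0.21   ebsapp-vip.example.com   ebsapp-vip

# /etc/hosts fragment on the DR server (physical name: drapp01)
10.2.0.21   ebsapp-vip.example.com   ebsapp-vip
```

After a failover, the application tier keeps pointing at ebsapp-vip and nothing needs to be reconfigured.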

I believe the need to reconfigure the server to use a virtual hostname defeats the main purpose of setting up virtual hostnames, making this MOS note useless :(

Thus, I will keep on waiting for this out-of-the-box feature. I currently have a custom in-house method to use virtual hostnames with E-Business Suite that I will blog about in the future.


Discover more about our expertise with Oracle.

Categories: DBA Blogs

The Oracle EPM Cloud Forecast for OpenWorld 2015 is Clear and Bright

Linda Fishman Hoyle - Tue, 2015-10-06 06:28

A Guest Post by Jennifer Toomey, Senior Principal Product Marketing Director, Business Analytics Product Group (pictured left)

Oracle OpenWorld San Francisco (October 25–29, 2015) offers more Oracle EPM content and expert experience than any other conference in the world. It offers nearly 50 EPM conference sessions, seven demo stations, and two hands-on labs, plus 35 customers, 15 partners, and 26 Oracle staff speaking. Whether you already have, or are considering, an Oracle EPM solution, Oracle OpenWorld is the place to be.

Cloud: EPM Cloud is in the spotlight this year with sessions featuring existing Oracle EPM Cloud customers, products, and strategy, as well as roadmap sessions that reveal plans for new offerings coming in the next 12–18 months. Attendees will get the opportunity for a first look at some of these new cloud solutions. In addition, a number of customers will share their experiences and results from using Oracle EPM Cloud solutions.

On Premises: There will be sessions covering on premises Oracle EPM products, with more than 10 sessions dedicated to customer case studies.

Meet the Experts: We will have experts ready to demonstrate the complete range of both cloud and on premises products. Be sure to book yourself for a test drive so you can try out Oracle EPM Cloud solutions first hand.

Recommended Sessions:

Oracle EPM General Session with KPMG: Executive Briefing on Oracle’s EPM Strategy and Roadmap [GEN7014]   Monday, Oct 26, 4:00 p.m. | Moscone West—2008

Customers Present:  Oracle Planning and Budgeting Cloud Service [CON9540] Monday, Oct 26, 1:30 p.m. | Moscone West—3018

Oracle Fusion Middleware: Meet This Year’s Most Impressive Innovators [CON10374] Tuesday, Oct 27, 4:00 p.m. | YBCA Theater

New: Oracle Planning and Budgeting Cloud Service Enterprise Edition [CON9529] Wednesday, Oct 28, 11:00 a.m. | Moscone West—3018

What’s New and What’s Coming: Financial Close in the Cloud [CON7031] Wednesday, Oct 28, 11:00 a.m. | Moscone West—3020

Product Development Panel Q&A: Oracle Hyperion EPM Applications [CON9524] Wednesday, Oct 28, 12:15 p.m. | Moscone West—3009

Customers: Watch for these customers who are speaking: Kraft Heinz, Serta Simmons Bedding, Wilsonart International, Baxters Food Group, Ambarella Corp, Invesco, WestRock Co, Cognizant Technology Solutions Inc, Vodafone, EA, Suntrust Banks, Inc. and many more.

URLs: Use this link to find out everything about Oracle EPM and OpenWorld 2015. And, don’t forget the Customer Appreciation Event at Treasure Island on Thursday evening, October 29.

Have fun and learn at Oracle OpenWorld 2015. We look forward to seeing you there!

A ShipIt at Oracle’s Mexico Development Center

Oracle AppsLab - Tue, 2015-10-06 02:37

Editor’s note: Our team participated in many of the Summer of Innovation events held by Laurie (@lsptahoe) and her team. Here’s a report from Luis (@lsgaleana) on the Mexico Development Center (MDC) ShipIt held in mid-August. Enjoy.

Oracle Mexico Development Center (MDC) has grown to some 700 employees, and it continues to grow weekly. However, there is only a single point of entry for every non-employee: the receptionist.

Every visitor that comes to MDC goes roughly through the same process. The person talks to the receptionist. The receptionist goes to the corporate directory to find the employee being visited. The employee meets the person at the lobby. This is a pretty straightforward job and it is similar for regular visitors, delivery personnel and interview candidates, which are the user roles we picked for our ShipIt project.


In the days before the ShipIt event, Rafael Belloni (Rafa) gathered the Mariachis team, Osvaldo Villagrana (Os), Oscar Vargas, Juan Pablo Martinez (Juampi) and myself, and talked to us about his idea of a virtual kiosk that would serve as an entry point for every visitor. It would consist of a screen with a simple but elegant user interface that, in the background, would take care of the tedious and repetitive job of finding people in the corporate directory, contacting them, printing badges, saving the information to a log, and even entertaining visitors with a video about Oracle and all its wonders.


Technically, we wanted to have an Android app loaded onto a Nexus Player, which Oscar owned, connected to a touch-screen monitor, that Rafa had recently bought (with the ShipIt in mind, of course). This 3-part device would represent the kiosk. In the background, we would have web services to scrape the corporate directory, notify employees via e-mail, IM and text, print a badge for the visitor, and save all of the information into a log. One final part was a web panel, where HR personnel could schedule interviews and assign interviewers.

The day of the event, we started putting things together. However, as is common in these kinds of events, issues started arising quickly.

To enable touch on the touch-screen monitor, a USB cable was needed between the monitor and the device controlling it. But the Nexus Player has no USB input. We debated our options and concluded that we would try to make it work (somehow). In the meantime, we would try to get Android running on a Raspberry Pi connected to the monitor. If all failed, we would make a web app running on a laptop. In the end, Oscar came up with BlueStacks, an Android emulator for the PC, onto which we could easily install an Android app. We went with that.

Rafa worked on the web panel for the interviews; Os worked on the web services, the pdf conversion, and IM and e-mail communication; Juampi did the design; and Oscar and I made the Android app. We each had our share of problems, but we discussed them as a group. We all proposed solutions, but it always came down to the easiest and quickest.

The day went by and we called it a night around 3am.


The next day, after breakfast, the kiosk was coming together. The design was ready, we had screens to show, the web panel was working and most of the web services were in place. We just needed to put everything together and polish some features.

We rehearsed the presentation about an hour before the scheduled time. Rafa did all the talking, I controlled the demo, and Os showed the e-mails, IM messages and pdf. We were number 3 of 5 to present. All teams cast their votes. The time came to announce the winners: 3rd, UAE Tacklers; 2nd, Team Roller; 1st, Mariachis.


In the end, we trimmed the project down. We decided that printing the badge was a bit too much; generating a pdf was cool enough. We also discarded the texting, the saving of information to a log, and the Oracle video. However, these features can easily be added in production.

This is how it looked:


Cloud Control : Some minor issues (fixed)

Tim Hall - Tue, 2015-10-06 02:19

Since the upgrade to Cloud Control, we've been having a couple of issues, mostly around EMCLI.

Some of our databases use Service Guard, so you don't know which node they are running on. Rather than having an agent per package, we have one on each node. One of my colleagues wrote a little script to check which node the instance is running on and relocate it if it has moved. This is done using EMCLI and was working fine before the move. Since the upgrade it's been rather erratic. It would work for a while, then fail. After watching for a while I noticed a couple of things.
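The colleague's script isn't shown, but the idea can be sketched. The emcli verb relocate_targets is real, though the flag values, helper functions, instance and host names below are invented assumptions for illustration, not the actual script; check your release's EMCLI reference before relying on the exact flags:

```python
import subprocess

# Illustrative names only; replace with your own instance and nodes.
INSTANCE = "SGDB"
NODES = ["nodea.example.com", "nodeb.example.com"]

def running_node(instance, nodes):
    """Find the node where the instance's PMON process is alive (one
    simple way to detect where a Service Guard package has moved to)."""
    for node in nodes:
        probe = ["ssh", node, "pgrep", "-f", f"ora_pmon_{instance}"]
        if subprocess.run(probe, capture_output=True).returncode == 0:
            return node
    return None

def relocate_cmd(instance, src_node, dest_node):
    """Build the EMCLI call that moves the target's monitoring from the
    agent on src_node to the agent on dest_node."""
    return [
        "emcli", "relocate_targets",
        f"-src_agent={src_node}:3872",
        f"-dest_agent={dest_node}:3872",
        f"-target_name={instance}",
        "-target_type=oracle_database",
        "-copy_from_src",
    ]

# Build (but do not run) the relocation command.
cmd = relocate_cmd(INSTANCE, "nodea.example.com", "nodeb.example.com")
```

A real script would compare the node Cloud Control thinks the target is on against `running_node()` and only relocate when they differ.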

EMCLI calls to the OMS that sent SQL were intermittently killing the Agent on the Cloud Control server itself. It seemed to cause Out Of Memory errors. The heap size for the agent on the CC server was set to something like 1860M. Changing it to 2048M seemed to fix that issue. The setting is in the "agent_inst/sysman/config/" file. We now have this.

agentJavaDefines=-Xmx2048M -XX:MaxPermSize=128M

Note. The agents on the monitored servers have a tiny heap size. Nothing like this bad-boy. :)

After running OK for a few days, we started to get the following type of errors from EMCLI.

$ emcli sync
Error: Session expired. Run emcli login to establish a session.

We fixed this by running the “setup” command again.

emcli setup -username=sysman -password="MyPassword" -url=""

Since then, EMCLI has seemed to behave itself.

Note. As part of the upgrade, we downloaded the latest EMCLI jar file from the server and did the setup, so this part was working fine for a while. Not sure why it started to screw up…

Anyway, all seems fine now. I’m guessing if we weren’t using EMCLI, we would never have had a problem in the first place.



Cloud Control : Some minor issues (fixed) was first posted on October 6, 2015 at 9:19 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

El Crap-itan

Tim Hall - Tue, 2015-10-06 01:18

In the comments from yesterday's post, Jonathan Roden mentioned the release of El Capitan. At that point, I hadn't even heard about it. :) Being the naive idiot I am, I jumped headlong into it.

The download was about 3G, which didn’t take too long. The upgrade itself took quite a while. That included one failure. During the installation, the system rebooted, as it said it would, and I thought it was over, but it was still running Yosemite. I manually started the upgrade again by running the installer, which was sitting in the Applications folder. The second time it completed.

The first snag was my external monitor didn’t work. A bit of Googling and it seems this is not uncommon. Some people said they couldn’t get the HDMI connection to work, so I switched to a display port connection. No luck there. The “Detect Displays” button is hidden these days, but it shows up if you hold the “Options” key (see here). Apparently, this happened in a previous release, but I’ve obviously not needed it up until now. :) Anyway, that didn’t help. I just kept switching between cables, each time with a hard reboot. Eventually, it noticed the monitor on the HDMI cable and all was working fine. This does of course make me worry what is going to happen when I plug this laptop into a projector. Am I going to need several hard reboots each time before it notices the new display? :(

During the repeated reboots, I noticed how long it takes to do a hard reboot under El Capitan. I know some fanboys would have you believe you never have to reboot an Apple device, but that is clearly not true. This is running on an 8-month-old i7, with 16G RAM and a 512G flash card. It was crazy fast to reboot under Yosemite. Not so much under El Capitan. Once it's started, I can't tell a performance difference (at the moment), but bootup time is shocking. Much worse than Windows 7 on my crappy i5 PC at work. Some of the folks on the interwebs are claiming general performance sucks since the upgrade, even on new gear. We shall see.

Since the upgrade, the laptop doesn’t seem to turn the screen off when it’s been inactive for a while. I woke up this morning to find it had been on all night. My first thought was I had left Caffeine running, but I hadn’t. As far as I could see, there was nothing running that would cause this. I didn’t have time to figure out why. I’m marking this as a fail, because I’m forced to investigate something that was working fine before.

During my Googling for solutions to my issues, it seems lots of people are complaining about poor battery life since switching to El Capitan. My laptop is permanently plugged in when I am at home. It will be interesting to see how it copes when I travel. I won’t give this a fail yet, as I don’t have any personal experience of it, but you might want to think twice if you are a battery user. :)

Visually, I’ve not been able to tell the last few releases apart. There are allegedly new features in this release, but I’m not sure I will ever notice them. I don’t care about new eye candy that much, but I don’t see the point of giving this a new name and all that, when it feels like a minor patch.

Overall, I’m giving El Capitan a resounding fail at this point. Hopefully, Apple will take note of the complaints on the net and fix this shit soon. If you are trying to decide to switch, or not, I would say wait a while and see what Apple do in the coming weeks. Maybe you will have a different experience. Maybe not. :)




El Crap-itan was first posted on October 6, 2015 at 8:18 am.

Fundamentals of SQL Writeback in Dodeca

Tim Tow - Mon, 2015-10-05 21:00
One of the features of Dodeca is read-write functionality to SQL databases.  We often get questions as to how to write data back to a relational database, so I thought I would post a quick blog entry for our customers to reference.

This example will use a simple table structure in SQL Server though the concepts are the same when using Oracle, DB2, and most other relational databases.  The example will use a simple Dodeca connection to a JDBC database.  Here is the Dodeca SQL Connection object used for the connection.

The table I will use for this example was created with the following CREATE TABLE  statement.

CREATE TABLE [dbo].[Test](
  [TestID] [int] IDENTITY(1,1) NOT NULL,
  [TestCode] [nvarchar](50) NULL,
  [TestName] [nvarchar](50) NULL,
  PRIMARY KEY CLUSTERED ([TestID] ASC)
)

First, I used the Dodeca SQL Excel View Wizard to create a simple view in Dodeca to retrieve the data into a spreadsheet.  The view, before setting up writeback capabilities, looks like this.
To make this view writeable, follow these steps.
  1. Add the appropriate SQL insert, update, or delete statements to the Dodeca SQL Passthrough Dataset object.  The values to be replaced in the SQL statement must be specified using the notation @ColumnName where ColumnName is the column name, or column alias, of the column containing the data.
  2. Add the column names of the primary key for the table to the PrimaryKey property of the SQL Passthrough DataSet object.
  3. Depending on the database used, define the column names and their respective JDBC datatypes in the Columns property of the SQL Passthrough Dataset.  This mapping is optional for SQL Server because Dodeca can obtain the required information from the Microsoft JDBC driver, however, the Oracle and DB2 JDBC drivers do not provide this information and it must be entered by the developer.
For insert, update, and delete operations, Dodeca parses the SQL statement to read the parameters that use the @ indicator and creates a JDBC prepared statement to execute the statements.  The prepared statement format is very efficient as it compiles the SQL statement once and then executes it multiple times.  Each inserted row is also passed to the server during the transaction.  The values from each row are then used in conjunction with the prepared statement to perform the operation.
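Dodeca's parser is internal to the product, but the @-parameter rewrite described above can be sketched in a few lines (illustrative only, not Dodeca code; JDBC prepared statements use ? placeholders):

```python
import re

def to_prepared(sql):
    """Rewrite @ColumnName markers as JDBC '?' placeholders and record
    the column order used to bind each row's values."""
    order = []
    def swap(match):
        order.append(match.group(1))
        return "?"
    return re.sub(r"@(\w+)", swap, sql), order

stmt, cols = to_prepared(
    "INSERT INTO Test (TestCode, TestName) VALUES (@TestCode, @TestName)")
# stmt == "INSERT INTO Test (TestCode, TestName) VALUES (?, ?)"
# cols == ["TestCode", "TestName"]
```

Each inserted row's values are then bound positionally in the recorded column order, so the statement is compiled once and executed once per row.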

Here is the completed Query definition.

Next, modify the DataSetRanges property of the Dodeca View object and, to enable insert operations, set the AllowAddRow property to True.  Note that if you added update and/or delete SQL to your SQL Passthrough Dataset object, be sure to enable those operations on the worksheet via the AllowDeleteRow and AllowModifyRow properties.

Once this step is complete, you can run the Dodeca View, add a row, and press the Save button to save the record to the relational database.

Insert, update, and delete operations using plain SQL statements are limited to a single table. If you need to update multiple tables, you must use stored procedures to accomplish the functionality. You can call a stored procedure in Dodeca using syntax similar to the following example:

{call sp_InsertTest(@TestCode, @TestName)}

Dodeca customers can contact support for further information at
Categories: BI & Warehousing

IBM Bluemix - Specify only Liberty buildpack features you require

Pas Apicella - Mon, 2015-10-05 20:22
More often than not I am using Spring Boot applications on IBM Bluemix, and most of what I need is packaged with the application: JPA or JDBC drivers, REST, etc. Of course, with IBM Bluemix we can specify which buildpack we wish to use, but by default Liberty is used for Java applications.

When a stand-alone application is deployed, a default Liberty configuration is provided for the application. The default configuration enables the following Liberty features:
  • beanValidation-1.1
  • cdi-1.2
  • ejbLite-3.2
  • el-3.0
  • jaxrs-2.0
  • jdbc-4.1
  • jndi-1.0
  • jpa-2.1
  • jsf-2.2
  • jsonp-1.0
  • jsp-2.3
  • managedBeans-1.0
  • servlet-3.1
  • websocket-1.1
  • icap:managementConnector-1.0
  • appstate-1.0
Here is how I strip out what isn't required, taking my Liberty runtime container down to the bare minimum of what I need.


applications:
 - name: pas-speedtest
   memory: 512M
   instances: 1
   path: ./demo-0.0.1-SNAPSHOT.jar
   host: pas-speedtest
   env:
     JBP_CONFIG_LIBERTY: "app_archive: {features: [jsp-2.3, websocket-1.1, servlet-3.1]}"

 More Information
Categories: Fusion Middleware

Uploading 26M StackOverflow Questions into Oracle 12c

Marcelo Ochoa - Mon, 2015-10-05 16:42
Just for fun, or to test the in-memory capabilities of Oracle 12c.
Following the post Import 10M Stack Overflow Questions into Neo4j In Just 3 Minutes, I modified the Python script to include the foreign key columns that were not part of the graph database design but are required in a relational model.

The Python files can be downloaded from my drive; basically they add these two lines:

                el.get('owneruserid'),

when generating the output file csvs/posts.csv. The idea is to convert the StackOverflow export files:

-rw-r--r-- 1 root root   37286997 ago 18 12:50
-rw-r--r-- 1 root root 7816218683 ago 18 13:52
-rw-r--r-- 1 root root     586861 ago 18 13:52
-rw-r--r-- 1 root root  160468734 ago 18 13:54
-rw-r--r-- 1 root root  524354790 ago 18 13:58
-rw-r--r-- 1 root root 2379415989 sep  2 14:28
-rw-r--r-- 1 root root  112105812 sep  2 14:29

into a list of CSV files for quick importing into Oracle 12c RDBMS using external tables. Here is the list of converted files and their sizes:

3,8G  posts.csv
287M  posts_rel.csv
524K  tags.csv
517M  tags_posts_rel.csv
355M  users.csv
427M  users_posts_rel.csv

With the above files, and an Oracle 12c instance running in a Docker container as described in my previous post On docker, Ubuntu and Oracle RDBMS, I executed the following steps.

Logged as SYS:

alter system set sga_max_size=4G scope=spfile;
alter system set sga_target=4G scope=spfile;
alter system set inmemory_size=2G scope=spfile;
create user sh identified by sh
   default tablespace ts_data
   temporary tablespace temp
   quota unlimited on ts_data;
grant connect,resource,luceneuser to sh;
create directory data_dir1 as '/mnt';
create directory tmp_dir as '/tmp';
grant all on directory data_dir1 to sh;
grant all on directory tmp_dir to sh;

This basically creates a new user and the directories to be used by the external tables. Note that the CSV files are available inside the Docker machine as the /mnt directory; I am running my Docker image with:

docker run --privileged=true --ipc=host --volume=/var/lib/docker/dockerfiles/ --volume=/mnt/backup/db/ols:/u01/app/oracle/data --name ols --hostname ols --detach=true --publish=1521:1521 --publish=9099:9099 oracle-12102

Then, logged as the SH user:

-- Importing users
create table users_external
( user_id            NUMBER(10),
  display_name VARCHAR2(4000),
  reputation       NUMBER(10),
  aboutme         VARCHAR2(4000),
  website_url    VARCHAR2(4000),
  location          VARCHAR2(4000),
  profileimage_url VARCHAR2(4000),
  views             NUMBER(10),
  upvotes          NUMBER(10),
  downvotes     NUMBER(10)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields terminated BY ','
            optionally enclosed BY '"'
            missing field VALUES are NULL
  )
  location (data_dir1:'users.csv')
)
reject limit unlimited;

CREATE TABLE so_users
   TABLESPACE ts_data
      as (select * from users_external);
-- Elapsed: 00:00:22.76

ALTER TABLE so_users ADD PRIMARY KEY (user_id);
-- Elapsed: 00:00:13.08
create index so_users_display_name_idx on so_users(display_name);
-- Elapsed: 00:00:08.01

-- Importing posts
create table posts_external
( post_id      NUMBER(10),
  parent_id   NUMBER(10),
  user_id      NUMBER(10),
  title            VARCHAR2(4000),
  body          CLOB,
  score         NUMBER(10),
  views        NUMBER(10),
  comments NUMBER(10)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields terminated BY ','
            optionally enclosed BY '"'
            missing field VALUES are NULL
  )
  location (data_dir1:'posts.csv')
)
reject limit unlimited;

CREATE TABLE so_posts
   TABLESPACE ts_data
      as (select * from posts_external);
-- Elapsed: 00:14:20.89
ALTER TABLE so_posts ADD PRIMARY KEY (post_id);
-- Elapsed: 00:02:35.86
-- purge posts associated with users that were not imported
delete from so_posts where user_id not in (select user_id from so_users);
-- Elapsed: 00:02:41.64
create index so_posts_user_id_idx on so_posts(user_id);
-- Elapsed: 00:01:34.87
ALTER TABLE so_posts ADD CONSTRAINT fk_so_user FOREIGN KEY (user_id) REFERENCES so_users(user_id);
-- Elapsed: 00:00:09.28

Note that 26 million posts were imported in 14 minutes, which is not bad considering that the CSV source was on an external USB 2.0 drive and the Oracle 12c tablespaces were placed on a USB 3.0 drive. Here is a screenshot showing the IO bandwidth consumed on both drives:
Only 4.8 Mb/s for reading from sdb (CSV) and 9.7 Mb/s for writing at sdc1 (ts_data).

-- Importing tags
create table tags_external
( tag_id      VARCHAR2(4000)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields terminated BY ','
            optionally enclosed BY '"'
            missing field VALUES are NULL
  )
  location (data_dir1:'tags.csv')
)
reject limit unlimited;

CREATE TABLE so_tags
   TABLESPACE ts_data
      as (select * from tags_external);
-- Elapsed: 00:00:00.55

create table tags_posts_external
( post_id      NUMBER(10),
  tag_id      VARCHAR2(4000)
)
organization external
( type  oracle_loader
  default directory data_dir1
  access parameters
  ( records delimited BY newline
    badfile tmp_dir: 'sh%a_%p.bad'
    logfile tmp_dir: 'sh%a_%p.log'
    fields terminated BY ','
            optionally enclosed BY '"'
            missing field VALUES are NULL
  )
  location (data_dir1:'tags_posts_rel.csv')
)
reject limit unlimited;

CREATE TABLE so_tags_posts
   TABLESPACE ts_data
      as (select * from tags_posts_external);
-- Elapsed: 00:00:43.75

-- purge tags associated with posts that were not imported
delete from so_tags_posts where post_id not in (select post_id from so_posts);
-- Elapsed: 00:02:42.00
create index so_tags_posts_post_id_idx on so_tags_posts(post_id);
-- Elapsed: 00:00:43.29
ALTER TABLE so_tags_posts ADD CONSTRAINT fk_so_posts FOREIGN KEY (post_id) REFERENCES so_posts(post_id);
-- Elapsed: 00:01:16.65

Note that, as with posts<->users, tags<->posts is also a one-to-many relation, and some posts referenced by a few tags were not imported due to character-encoding errors.

As a summary of the above steps: 26 million posts from 4.5 million registered users were imported, and 41K distinct tags are used, with an average of 1.11 tags per post (29M tags/posts rows).

The next blog post will be about using Oracle 12c in-memory features to query this corpus data.
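As an appendix to the conversion step at the top: the per-row work the modified script performs (pulling attributes such as the owner user id out of each XML row and writing CSV) can be sketched like this. It is a toy illustration with invented function and column names, not the actual script:

```python
import csv
import io
import xml.etree.ElementTree as ET

def posts_to_csv(xml_text, out):
    """Stream <row> elements from a Posts.xml-style dump and emit the
    columns needed for the relational model (column choice illustrative)."""
    writer = csv.writer(out)
    for _, el in ET.iterparse(io.StringIO(xml_text)):
        if el.tag != "row":
            continue
        writer.writerow([
            el.get("Id"),
            el.get("ParentId"),     # FK to the parent post (empty if none)
            el.get("OwnerUserId"),  # FK to the posting user
            el.get("Title"),
            el.get("Score"),
        ])
        el.clear()  # keep memory flat on multi-GB dumps

sample = '<posts><row Id="1" OwnerUserId="7" Title="Hi" Score="3"/></posts>'
buf = io.StringIO()
posts_to_csv(sample, buf)
# buf now holds: 1,,7,Hi,3
```

Streaming with iterparse, rather than loading the whole document, is what makes a 7 GB Posts.xml practical to convert.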

OTN at Oracle OpenWorld Group - Join today!

OTN TechBlog - Mon, 2015-10-05 11:27

Join the OTN at Oracle OpenWorld group on the OTN Community Platform!  This group is designed to keep you in the know about all the GREAT activities and events that the Team OTN is planning/organizing for Oracle OpenWorld in San Francisco this October (24th to 28th).

Some of the events/activities to look forward to -

Community Events - RAC Attack and Blogger Meetup.

Networking Opportunities - Sunday Kick off Party, Cloud Hour

NEW activities! Graffiti Wall and giant games plus Make Your Own T-Shirt is back with NEW art!


We hope to see you there!