
Feed aggregator

Selling Books on Amazon vs. Intelivideo

Bradley Brown - Thu, 2015-09-17 20:44
For the last couple of years I've heard that publishers don't like selling their books through Amazon.  The way I heard it described, it sounded like Amazon was forcing (or pushing) you to sell books for under $10.  Their pricing is actually pretty complicated, so I've attempted to simplify it for you here.  If you want the details, you can read more here:

Basically you have to pick which plan you’re on: the 35% or the 70% royalty plan.  At first glance, you would ask - why would anyone pick the 35% royalty plan?  Let's see: you do all of the research, write the book, take it through 5 edits, get it to the point of publishing, and you're only going to keep 35% or 70% of the revenue generated?  Logically, who would say they only want to keep 35%?  It's more complicated than that - i.e. strings are attached to each choice.

If you pick the 70% royalty plan, you keep as much as 70% (minus delivery costs, and with about 100 other rules) of whatever they sell it for.  But according to the small print, on a number of your sales you’ll actually keep only 35% of whatever they sell it for.  Here's the real kicker - if you want to keep 70% (minus delivery costs, VAT, etc.), they force you to set the list price between $2.99 and $9.99, AND by the way they will keep 65% if they sell it in certain other countries, etc.  If you choose the 35% royalty plan (keep in mind, they are keeping 65%), you can set the price between $2.99 and $200.  You can sell it for less than $2.99 (i.e. down to $.99) if you have a small book (i.e. less than a 10 MB footprint).  They also say that the digital list price must be at least 20% below the lowest list price for the physical book.  Wow - SO many rules!

So Amazon charges (keeps) anywhere from 30% (plus delivery costs) to 65% (and it’s usually this amount), has minimum and maximum prices you can charge and a lot of rules, AND it’s Amazon’s customer (not yours).
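The rules above can be boiled down to a little calculator. This is a rough sketch of the two plans as described in this post - the price windows and the flat per-sale delivery cost are simplified illustrations, not Amazon's exact terms:

```python
# A rough sketch of the two royalty plans as described above.
# The price windows and the delivery-cost handling are simplified
# illustrations of the rules, not Amazon's exact terms.

def royalty(list_price, plan, delivery_cost=0.0):
    """Author's per-sale royalty under the 70% or 35% plan."""
    if plan == 70:
        # 70% plan: list price must sit in the $2.99-$9.99 window,
        # and delivery costs come off the top before the split.
        if not 2.99 <= list_price <= 9.99:
            raise ValueError("70% plan requires a $2.99-$9.99 list price")
        return round(0.70 * (list_price - delivery_cost), 2)
    if plan == 35:
        # 35% plan: wider $0.99-$200 range, but you keep only 35%.
        if not 0.99 <= list_price <= 200.00:
            raise ValueError("35% plan requires a $0.99-$200 list price")
        return round(0.35 * list_price, 2)
    raise ValueError("plan must be 70 or 35")
```

At a $9.99 list price the 70% plan pays roughly double the 35% plan, but a $19.99 price simply isn't allowed on the 70% plan - which is the squeeze publishers complain about.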

The 2 pricing options are explained (and tough to understand) here:

And their FAQ is here:

We'll soon release (secure) eBook functionality at Intelivideo.  So how does it work?  If you pick the Pro Plan, you keep 85% of the revenue and you can set whatever price you want.  We have some other fine print, but overall I can assure you that our pricing is WAY better than Amazon's offer - and the customer is yours.  You can sell them more products.  You can run promotions for them.  You can upsell them.  I'm shocked by Amazon's model and now understand the frustration others have!

Partner Webcast – Developers Continuous Delivery Using Oracle PaaS Cloud Services

Cloud computing is now broadly accepted as an economical way to share a pool of configurable computing resources. Several companies plan to move key parts of their development workloads to public...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Data Warehouse on Cloud – Amazon Redshift

Dylan's BI Notes - Thu, 2015-09-17 15:50
Here is a brief summary of what I learned by reading these materials. 1. The data warehouse is stored in clusters It can support scale out, not scale up. “Extend the existing data warehouse rather than adding hardware” 2. Use SQL to access the data warehouse 3. Load data from Amazon S3 (Storage Service) using […]
Categories: BI & Warehousing

Oracle Priority Service Infogram for 17-SEP-2015

Oracle Infogram - Thu, 2015-09-17 15:41

OpenWorld is coming up soon, and articles on how to get the most out of it are starting to fill the blogosphere. Here’s one from The Data Warehouse Insider: OpenWorld 2015 on your smartphone and tablet.
And one from that JEFF SMITH: All Things SQL Developer at Oracle Open World 2015
SE2 - Some questions, some answers ..., from Upgrade your Database – NOW!
From the same source: Script: Is your database ready for Oracle GoldenGate?
A PL/SQL Inlining Primer, from Oracle Database PL/SQL and EBR.
Exadata Software is Released, from Emre Baransel, Support Engineer's Blog.
From the same source: DBMCLI for Exadata Database Servers
All-Flash Oracle FS1 Storage System, from Oracle PartnerNetwork News.
BPEL-> Mediator -> BPEL: Passing Business Errors back thru Mediator , from the SOA & BPM Partner Community Blog.
From the same source: Searching Service Bus Pipeline Alert.
Concurrency on the JVM, from The Java Source.
From the same source: Java 8 Do and Don'ts and Microservices Architecture
Virtualization Monitoring in Solaris Zones, from Openomics.
From the Oracle E-Business Suite Support blog:
Webcast: Service Parts Planning 12.2.5 Features, Part 2
Don't Miss This! Oracle Exchange Certificate Renew August 19, 2015
Webcast: Use of a Custom Source to Derive Item Based Batch Close Variance Postings in OPM
New and Improved OTM Analyzer - Version 200.1 is now available!
From the Oracle E-Business Suite Technology blog:
Database Migration using 12cR1 Transportable Database Certified for EBS 12.1

Using EBS 12.2 Data Source Connection Pool Diagnostics

We are hiring!

Tanel Poder - Thu, 2015-09-17 15:32

Gluent – where I’m a cofounder & CEO – is hiring awesome developers and (big data) infrastructure specialists in the US and UK!

We are still in stealth mode, so won’t be detailing publicly what exactly we are doing ;-)

However, it is evident that the modern data platforms (for example Hadoop) with their scalability, affordability-at-scale and freedom to use many different processing engines on open data formats are turning enterprise IT upside down.

This shift has already been going on for years in large internet & e-commerce companies and small startups, but now the shockwave is reaching all traditional enterprises too. And every single one of them must accept it in order to stay afloat and win in the new world.

Do you want to be part of the new world?



NB! After a 1.5 year break, this year’s only Advanced Oracle Troubleshooting training class (updated with Oracle 12c content) takes place on 16-20 November & 14-18 December 2015, so sign up now if you plan to attend this year!

Celebrating 5 Years in Oracle’s Mexico Development Center

Oracle AppsLab - Thu, 2015-09-17 14:51

Editor’s note: If you read here, you might recall that we have dos hermanos en Guadalajara, Luis (@lsgaleana) y Osvaldo (@vaini11a). Last month, the Mexico Development Center (MDC) celebrated its fifth anniversary. Here’s to many more. Reposted from VoX.

By Sarahi Mireles, (@sarahimireles), Oracle Applications User Experience (@usableapps)

As you may know, Oracle has a couple of Development Centers around the globe, and one of them is in Guadalajara, México. The Oracle Mexico Development Center, aka Oracle MDC (where I work), turned 5 years old on Aug. 18, and the celebration was just as tech-y and fun as it can be for a development center.


Oracle staff hang out at the event before lunch.

Staff from the 9th floor of Oracle MDC have fun and celebrate 5 years of Oracle in Mexico (hurray!)


The celebration was split into two events: an open event called “Plug in” and a private event (just Oracle staff). Topics were related to what we love: Database, Cloud and, of course, User Experience. Some of the guest speakers were Hector García-Molina, who was chairman of the Computer Science Department at Stanford University; Javier Cordero, Managing Director of Oracle México; Jeremy Ashley (@jrwashley), Group Vice President, Applications User Experience; and Erik Peterson, General Manager of Oracle MDC.

Hector García Molina starts with his talk, "Thoughts on the Future Recommendation Systems," with students and Oracle staff.


Andrew Mendelsohn, Executive Vice President, Database Server Technologies, gives a talk at the event.


Cheers at the conference. It was a really fun event. Geeks know how to have fun!


Late in the afternoon, the real celebration started! We got to celebrate with all of our friends, colleagues, mates and the whole staff of Oracle MDC, and we all got to be in the anniversary picture of this awesome team, team Oracle!

Members of different teams (UX, UAE) hang out at the celebration.


This year, we all received this fun, handmade airplane as a gift to remember the 5th anniversary of Oracle MDC.


All of the crew of Oracle MDC pose in the annual photo taken at the event.


If you want to know more about life at Oracle MDC, check out our Facebook page! And if you’re a student, don’t miss our post about student visits on the Usable Apps blog.

The Fundamental Challenge of Computer System Performance

Cary Millsap - Thu, 2015-09-17 10:46
The fundamental challenge of computer system performance is for your system to have enough power to handle the work you ask it to do. It sounds really simple, but helping people meet this challenge has been the point of my whole career. It has kept me busy for 26 years, and there’s no end in sight.
Capacity and Workload

Our challenge is the relationship between a computer’s capacity and its workload. I think of capacity as an empty box representing a machine’s ability to do work over time. Workload is the work your computer does, in the form of programs that it runs for you, executed over time. Workload is the content that can fill the capacity box.

Capacity Is the One You Can Control, Right?

When the workload gets too close to filling the box, what do you do? Most people’s instinctive reaction is that, well, we need a bigger box. Slow system? Just add power. It sounds so simple, especially since—as “everyone knows”—computers get faster and cheaper every year. We call that the KIWI response: kill it with iron.
KIWI... Why Not?

As welcome as KIWI may feel, KIWI is expensive, and it doesn’t always work. Maybe you don’t have the budget right now to upgrade to a new machine. Upgrades cost more than just the hardware itself: there’s the time and money it takes to set it up, test it, and migrate your applications to it. Your software may cost more to run on faster hardware. What if your system is already the biggest and fastest one they make?

And as weird as it may sound, upgrading to a more powerful computer doesn’t always make your programs run faster. There are classes of performance problems that adding capacity never solves. (Yes, it is possible to predict when that will happen.) KIWI is not always a viable answer.
So, What Can You Do?

Performance is not just about capacity. Though many people overlook them, there are solutions on the workload side of the ledger, too. What if you could make the workload smaller without compromising the value of your system?

It is usually possible to make a computer produce all of the useful results that you need without having to do as much work.

You might be able to make a system run faster by making its capacity box bigger. But you might also make it run faster by trimming down that big red workload inside your existing box. If you trim off only the wasteful stuff, then nobody gets hurt, and you’ll have a win all around.

So, how might one go about doing that?
Workload

“Workload” is a compound of two words. It is useful to think about those two words separately.

The amount of work your system does for a given program execution is determined mostly by how that program is written. A lot of programs make their systems do more work than they should. Your load, on the other hand—the number of program executions people request—is determined mostly by your users. Users can waste system capacity, too; for example, by running reports that nobody ever reads.

Both work and load are variables that, with skill, you can manipulate to your benefit. You do it by improving the code in your programs (reducing work), or by improving your business processes (reducing load). I like workload optimizations because they usually save money and work better than capacity increases. Workload optimization can seem like magic.
The Anatomy of Performance

This simple equation explains why a program consumes the time it does:

r = cl        or        response time = call count × call latency

Think of a call as a computer instruction. Call count, then, is the number of instructions that your system executes when you run a program, and call latency is how long each instruction takes. How long you wait for your answer, then—your response time—is the product of your call count and your call latency.

Some fine print: It’s really a little more complicated than this, but actually not that much. Most response times are composed of many different types of calls, all of which have different latencies (we see these in program execution profiles), so the real equation looks like r = c₁l₁ + c₂l₂ + ... + cₙlₙ. But we’ll be fine with r = cl for this article.
Call count depends on two things: how the code is written, and how often people run that code.
  • How the code is written (work) — If you were programming a robot to shop for you at the grocery store, you could program it to make one trip from home for each item you purchase. Go get bacon. Come home. Go get milk... It would probably be dumb if you did it that way, because the duration of your shopping experience would be dominated by the execution of clearly unnecessary travel instructions, but you’d be surprised at how often people write programs that act like this.
  • How often people run that code (load) — If you wanted your grocery store robot to buy 42 things for you, it would have to execute more instructions than if you wanted to buy only 7. If you found yourself repeatedly discarding spoiled, unused food, you might be able to reduce the number of things you shop for without compromising anything you really need.
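The grocery-robot example can be put in numbers. Here is a tiny sketch - the per-trip and per-item call costs are invented units, not figures from this article - showing how per-item round trips (work) and list size (load) each drive the call count:

```python
# Call counts for the grocery robot: one round trip per item vs. one
# batched trip. The unit costs are invented for illustration.

TRIP_CALLS = 2   # drive to the store and drive back home
PICK_CALLS = 1   # pick one item off the shelf

def calls_one_trip_per_item(items):
    # Work: a full round trip is repeated for every single item.
    return items * (TRIP_CALLS + PICK_CALLS)

def calls_batched(items):
    # Work: one round trip total; only picking scales with the list.
    return TRIP_CALLS + items * PICK_CALLS

# Load: buying 42 things instead of 7 multiplies both models' counts.
print(calls_one_trip_per_item(42))  # 126
print(calls_batched(42))            # 44
```

Rewriting the robot (reducing work) cuts the count roughly threefold here; trimming the shopping list (reducing load) shrinks it further still.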
Call latency is influenced by two types of delays: queueing delays and coherency delays.
  • Queueing delays — Whenever you request a resource that is already busy servicing other requests, you wait in line. That’s a queueing delay. It’s what happens when your robot tries to drive to the grocery store, but all the roads are clogged with robots that are going to the store to buy one item at a time. Driving to the store takes only 7 minutes, but waiting in traffic costs you another 13 minutes. The more work your robot does, the greater its chances of being delayed by queueing, and the more such delays your robot will inflict upon others as well.
  • Coherency delays — You endure a coherency delay whenever a resource you are using needs to communicate or coordinate with another resource. For example, if your robot’s cashier at the store has to talk with a specific manager or other cashier (who might already be busy with a customer), the checkout process will take longer. The more times your robot goes to the store, the worse your wait will be, and everyone else’s, too.
The Secret

This r = cl thing sure looks like the equation for a line, but because of queueing and coherency delays, the value of l increases when c increases. This causes response time to act not like a line, but instead like a hyperbola.

Because our brains tend to conceive of our world as linear, nobody expects everyone’s response times to get seven times worse when you’ve added only some little bit of new workload, but that’s the kind of thing that routinely happens with performance. ...And not just computer performance. Banks, highways, restaurants, amusement parks, and grocery-shopping robots all work the same way.
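The hyperbola is easy to see in a toy model. The sketch below - hypothetical numbers and a simple M/M/1-style latency formula, not anything measured - inflates per-call latency as utilization rises, which is exactly the queueing effect described above:

```python
# Toy model: response time r = c * l, where latency l grows as the
# system nears saturation (simple M/M/1-style queueing). All numbers
# are made up for illustration.

def response_time(call_count, service_time, capacity):
    """Response time when queueing inflates per-call latency."""
    utilization = call_count * service_time / capacity
    if utilization >= 1.0:
        return float("inf")  # saturated: the queue grows without bound
    latency = service_time / (1.0 - utilization)  # queueing inflates l
    return call_count * latency  # r = c * l

# Doubling the call count far more than doubles the response time:
r1 = response_time(call_count=400, service_time=0.001, capacity=1.0)  # ~0.67
r2 = response_time(call_count=800, service_time=0.001, capacity=1.0)  # 4.0
```

Doubling c here makes r six times worse - the nonlinear surprise the article describes, and the reason small workload trims can pay off so dramatically.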

Response times are tremendously sensitive to your call counts, so the secret to great performance is to keep your call counts small. This principle is the basis for perhaps the best and most famous performance optimization advice ever rendered:
The First Rule of Program Optimization: Don’t do it.

The Second Rule of Program Optimization (for experts only!): Don’t do it yet.

Michael A. Jackson

The Problem

Keeping call counts small is really, really important. This makes being a vendor of information services difficult, because it is so easy for application users to make call counts grow. They can do it by running more programs, by adding more users, by adding new features or reports, or even just by the routine process of adding more data every day.

Running your application with other applications on the same computer complicates the problem. What happens when all these applications’ peak workloads overlap? It is a problem that Application Service Providers (ASPs), Software as a Service (SaaS) providers, and cloud computing providers must solve.
The Solution

The solution is a process:
  1. Call counts are sacred. They can be difficult to forecast, so you have to measure them continually. Understand that. Hire people who understand it. Hire people who know how to measure and improve the efficiency of your application programs and the systems they reside on.
  2. Give your people time to fix inefficiencies in your code. An inexpensive code fix might return many times the benefit of an expensive hardware upgrade. If you have bought your software from a software vendor, work with them to make sure they are streamlining the code they ship you.
  3. Learn when to say no. Don’t add new features (especially new long-running programs like reports) that are inefficient, that make more calls than necessary. If your users are already creating as much workload as the system can handle, then start prioritizing which workload you will and won’t allow on your system during peak hours.
  4. If you are an information service provider, charge your customers for the amount of work your systems do for them. The economic incentive to build and buy more efficient programs works wonders.

Drop It Like It's Not

Scott Spendolini - Thu, 2015-09-17 08:50
I just ran the following script:

BEGIN
  FOR x IN (SELECT table_name FROM user_tables) LOOP
    EXECUTE IMMEDIATE ('DROP TABLE ' || x.table_name || ' CASCADE CONSTRAINTS');
  END LOOP;

  FOR x IN (SELECT sequence_name FROM user_sequences) LOOP
    EXECUTE IMMEDIATE ('DROP SEQUENCE ' || x.sequence_name);
  END LOOP;

  FOR x IN (SELECT view_name FROM user_views) LOOP
    EXECUTE IMMEDIATE ('DROP VIEW ' || x.view_name);
  END LOOP;
END;
/

Basically: drop all tables, views and sequences.  It worked great, cleaning out those objects in my schema without touching any packages, procedures or functions.  There was just one problem:  I ran it in the wrong schema.

Maybe I didn't have enough coffee, or maybe I just wasn't paying attention, but I essentially wiped out a schema that I really would rather not have touched.  But I didn't even flinch, and here's why.

All tables & views were safely stored in my data model.  All sequences and triggers (and packages, procedures and functions) were safely stored in scripts.  And both the data model and associated scripts were safely checked in to version control.  So re-instantiating this project was a mere inconvenience that took no more than the time it takes to drink a cup of coffee - something I clearly should have done more of earlier this morning.

The point here is simple: take the extra time to create a data model and a version control repository for your projects - and then make sure to use them!  I religiously check in code and then make sure that at least my TRUNK is backed up elsewhere.  Worst case for me, I'd lose a couple of hours of work, perhaps even less, which is far better than the alternative.

Oracle Partners ♥ UX Innovation Events

Usable Apps - Thu, 2015-09-17 08:35

I have just returned from a great Apps UX Innovation Events Internet of Things (IoT) hackathon held at Oracle Nederland in Utrecht (I was acting in a judicial capacity). This was the first such event organized in cooperation with an Oracle partner, in this case eProseed.

eProseed Managing Partner Lonneke Dikmans

Design patterns maven: eProseed managing partner, SOA, BPM and UX champ, Lonneke Dikmans (@lonnekedikans) at the hackathon. Always ready to fashion a business solution in a smart, reusable way.

You can read more about what went on at the event on other blogs, but from an Oracle partner enablement perspective (my main role), this kind of participation means a partner can:  

  • Learn hands-on about the latest Oracle technology from Oracle experts in person. This event provided opportunities to dive deep into Oracle Mobile Cloud Service, Oracle IoT Cloud, Oracle Mobile Application Framework, Oracle SOA Suite, and more, to explore building awesome contextual and connected solutions across a range of devices and tech.
  • Bring a team together in one place to work on business problems, to exchange ideas, and to build relationships with the "go-to" people in Oracle's technology and user experience teams.  
  • Demonstrate their design and development expertise and show real Oracle technology leadership to potential customers, to the Oracle PartnerNetwork, and to the educational, development, and innovation ecosystem.

That an eProseed team was declared the winner of the hackathon and that eProseed scored high on all three benefits above is just sweet!

eProseed NL team demo parking solution

The eProseed NL team shows off its winning "painless parking" IoT solution.

Many thanks to eProseed for bringing a team from across Europe and for working with Apps UX Innovation Events to make this event such a success for everyone there!

Stay tuned for more events on the Apps UX Innovation Events blog and watch out for news of the FY16 PaaS4SaaS UX enablement for Oracle partners on this blog.

Pictures from the IoT hackathon are on the Usable Apps Instagram account

Rocana’s world

DBMS2 - Thu, 2015-09-17 05:49

For starters:

  • My client Rocana is the renamed ScalingData, where Rocana is meant to signify ROot Cause ANAlysis.
  • Rocana was founded by Omer Trajman, who I’ve referenced numerous times in the past, and who I gather is a former boss of …
  • … cofounder Eric Sammer.
  • Rocana recently told me it had 35 people.
  • Rocana has a very small number of quite large customers.

Rocana portrays itself as offering next-generation IT operations monitoring software. As you might expect, this has two main use cases:

  • Actual operations — figuring out exactly what isn’t working, ASAP.
  • Security.

Rocana’s differentiation claims boil down to fast and accurate anomaly detection on large amounts of log data, including but not limited to:

  • The sort of network data you’d generally think of — “everything” except packet-inspection stuff.
  • Firewall output.
  • Database server logs.
  • Point-of-sale data (at a retailer).
  • “Application data”, whatever that means. (Edit: See Tom Yates’ clarifying comment below.)

In line with segment leader Splunk’s pricing, data volumes in this area tend to be described in terms of new data/day. Rocana seems to start around 3 TB/day, which not coincidentally is a range that would generally be thought of as:

  • Challenging for Splunk, and for the budgets of Splunk customers.
  • Not a big problem for well-implemented Hadoop.

And so part of Rocana’s pitch, familiar to followers of analytic RDBMS and Hadoop alike, is “We keep and use all your data, unlike the legacy guys who make you throw some of it away up front.”

Since Rocana wants you to keep all your data, 3 TB/day is about 1 PB/year.
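The arithmetic behind that claim, in decimal units (1 PB = 1,000 TB):

```python
# Sanity-check the "3 TB/day is about 1 PB/year" figure.
tb_per_day = 3
tb_per_year = tb_per_day * 365      # 1095 TB of new data per year
pb_per_year = tb_per_year / 1000    # just over 1 PB, in decimal units
print(pb_per_year)
```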

But really, that’s just saying that Rocana is an analytic stack built on Hadoop, using Hadoop for what people correctly think it’s well-suited for, done by guys who know a lot about Hadoop.

The cooler side of Rocana, to my tastes, is the actual analytics. Truth be told, I find almost any well thought out event-series analytics story cool. It’s an area much less mature than relational business intelligence, and accordingly with much more scope for innovation. On the visualization side, crucial aspects start:

  • Charting over time (duh).
  • Comparing widely disparate time intervals (e.g., current vs. historical/baseline).
  • Whichever good features from relational BI apply to your use case as well.

Other important elements may be more data- or application-specific — and the fact that I don’t have a long list of particulars illustrates just how immature the area really is.

Even cooler is Rocana’s integration of predictive modeling and BI, about which I previously remarked:

The idea goes something like this:

  • Suppose we have lots of logs about lots of things. Machine learning can help:
    • Notice what’s an anomaly.
    • Group together things that seem to be experiencing similar anomalies.
  • That can inform a BI-plus interface for a human to figure out what is happening.

Makes sense to me.

So far as I can tell, predictive modeling is used to notice aberrant data (raw or derived). This is quickly used to define a subset of data to drill down to (e.g., certain kinds of information from certain machines in a certain period of time). Event-series BI/visualization then lets you see the flows that led to the aberrant result, which with any luck will allow you to find the exact place where the data first goes wrong. And that, one hopes, is something that the ops guys can quickly fix.

I think similar approaches could make sense in numerous application segments.

Related links

Categories: Other

Use Case of Auto Re-Execute Functionality in ADF BC

Andrejus Baranovski - Thu, 2015-09-17 05:44
There are use cases where data in the DB is changed by background processes and we would like to display the latest available data to the user. A very common implementation for this use case is to re-execute the ADF iterator and VO each time the Task Flow or UI screen is accessed. Obviously this works, but performance suffers - usually there is no need to re-fetch data each time; it must be re-fetched only when changes are detected in the DB. ADF BC provides such functionality out of the box - it can detect changes in the DB and re-execute the VO through Database Change Notification. Make sure to grant the CHANGE NOTIFICATION system privilege to the data source user.

Auto refresh functionality for an ADF BC VO works only when DB pooling is disabled in AM tuning. This means you should use auto refresh carefully and plan dedicated AMs.
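Outside of ADF, the same refresh-on-notification idea can be sketched in a few lines: cache the query result and re-execute only after a change notification marks it stale. This is a generic illustration of the pattern, not ADF's or Oracle's API:

```python
# Generic refresh-on-notification cache: re-run the expensive query
# only after the database has signaled a change, never on every access.

class RefreshOnNotifyCache:
    def __init__(self, query_fn):
        self.query_fn = query_fn  # the expensive query to execute
        self.stale = True         # force the very first fetch
        self.result = None

    def on_change_notification(self):
        # Hook for the DB change-notification listener to call.
        self.stale = True

    def get(self):
        # Serve from cache unless a notification marked us stale.
        if self.stale:
            self.result = self.query_fn()
            self.stale = False
        return self.result

fetches = []
def query_departments():            # stand-in for the VO's SQL query
    fetches.append(1)
    return ["Administration", "Marketing"]

cache = RefreshOnNotifyCache(query_departments)
cache.get()                         # first access: executes the query
cache.get()                         # repeat access: served from cache
cache.on_change_notification()      # DB reports a change...
cache.get()                         # ...so the next access re-executes
```

Two fetches total for four operations - the query runs once up front and once after the notification, which is the performance win described above.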

I'm going to describe the use case and how you could benefit from VO auto refresh. Here is an example of a typical ADF Task Flow with an initial Method Call to re-execute the VO and fetch fresh data from the DB. In the next step the UI fragment is rendered, where recent data is displayed:

This is how it works on the UI. The Departments TF is accessed by pressing the Departments button on the Employees table screen. This triggers an Execute action call in the Departments TF and forces the VO to reload:

List of Departments will display recently fetched data from DB:

Each time the Departments TF is opened, the VO executes its SQL query and re-fetches the data. This can be observed in the log:

While this works fine, it is not great from a performance perspective. We can improve it by using VO auto refresh functionality. Change the default activity in the TF to the UI fragment; we don't need to invoke VO re-execute:

Set the AutoRefresh=true property in the VO settings; this will enable the DB listener event for this VO and force re-execution only when it is required:

We are ready to test it. Change Department Name column value:

Navigate to Departments from Employees table:

New data will be displayed automatically in the Departments list, even without invoking the initial Execute operation. If you open the same TF when there were no changes in the DB, it will load data from cache, without re-executing the VO and re-fetching the same data rows:

Download sample application -

Index Advanced Compression: Multi-Column Index Part I (There There)

Richard Foote - Thu, 2015-09-17 00:57
I’ve discussed Index Advanced Compression here a number of times previously. It’s the really cool additional capability introduced to the Advanced Compression Option that not only makes compressing indexes a much easier exercise but also enables indexes to be compressed more effectively than previously possible. Thought I might look at a multi-column index to highlight just […]
Categories: DBA Blogs

My Nomination for the Oracle Database Developer Choice Awards

Dietmar Aust - Wed, 2015-09-16 23:30
Actually this came as a wonderful surprise ... I have been nominated for the Oracle Database Developer Choice Awards. I have basically devoted my entire work life to building solutions based on Oracle technology ... and you can build some pretty cool stuff with it. I have always enjoyed building software that makes a difference ... and even more so sharing what I have learned and supporting and inspiring others to do the same. The people in the Oracle community are simply amazing and I have made a lot of friends there. If you have an account for the Oracle Technology Network (OTN), I would appreciate your vote! And if you don't feel like voting for me ... vote anyway in all the different categories ... because the Oracle community deserves the attention. Thanks, ~Dietmar.

US Consumer Law Attorney Rates

Nilesh Jethwa - Wed, 2015-09-16 21:27

The hourly rate in any consulting business or practice increases by the years of experience in the field.

Read more at:

If You're In Latvia, Estonia, Romania, Slovenia or Croatia, Oracle APEX is Coming to You!

Joel Kallman - Wed, 2015-09-16 20:04
In the first part of October, my colleague Vlad Uvarov and I are taking the Oracle APEX & Oracle Database Cloud message to a number of user groups who are graciously hosting us.  These are countries where there is growing interest in Oracle Application Express, and we wish to support these groups and help foster their growing APEX communities.

The dates and locations are:

  1. Latvian Oracle User Group, October 5, 2015
  2. Oracle User Group Estonia, Oracle Innovation Day in Tallinn, October 7, 2015
  3. Romanian Oracle User Group, October 8, 2015
  4. Oracle Romania (for Oracle employees, at the Floreasca Park office), October 8-9, 2015
  5. Slovenian Oracle User Group, SIOUG 2015, October 12-13, 2015
  6. Croatian Oracle User Group, 20th HrOUG Conference, October 13-16, 2015

You should consider attending one of these user group meetings/conferences if:

  • You're a CIO or manager, and you wish to understand what Oracle Application Express is and if it can help you and your business.
  • You're a PL/SQL developer, and you want to learn how easy or difficult it is to exploit your skills on the Web and in the Cloud.
  • You come from a client/server background and you want to understand what you can do with your skills but in Web development and Cloud development.
  • You're an Oracle DBA, and you want to understand if you can use Oracle Application Express in your daily responsibilities.
  • You know nothing about Oracle Application Express and you want to learn a bit more.

The User Group meetings in Latvia, Estonia and Romania all include 2-hour instructor-led hands-on labs.  All you need to bring is a laptop, and we'll supply the rest.  But you won't be merely watching an instructor drive their mouse.  You will be the ones building something real.  I guarantee that people completely new to APEX, as well as seasoned APEX developers, will learn a number of relevant skills and techniques in these labs.

If you have any interest or questions or concerns (or complaints!) about Oracle Application Express, and you are nearby, we would be very honored to meet you in person and assist in any way we can.  We hope you can make it!

College Scorecard Problem Gets Worse: One in three associate’s degree institutions are not included

Michael Feldstein - Wed, 2015-09-16 16:54

By Phil Hill

Late yesterday I posted about the Education Department's (ED) new College Scorecard and how it omits a large number of community colleges based on an arbitrary metric.

In particular, ED is using a questionable method of determining whether an institution is degree-granting rather than relying on the IPEDS data source. In a nutshell, if an institution awarded more certificates than degrees, then it is not labeled as “predominantly awarded 2-year or 4-year degrees” and is therefore excluded.

I am now quite confident that this finding explains the vast majority of the missing schools. In short, if an institution awards more certificates than degrees, ED removes it from the public-facing website even if it is technically a degree-granting institution.

Originally it appeared this situation encompassed 17% of all community colleges, but further analysis shows it to be more significant. Rather than using the official IPEDS definition of sector (public 4-year, public 2-year, etc.), the College Scorecard looks at the largest number of degrees awarded and uses an Associate’s or Bachelor’s classification. This primarily affects community colleges that offer bachelor’s programs but where the majority of graduating students earn two-year Associate’s degrees. Think of the recent legislation in California allowing certain community colleges to offer new bachelor’s programs.

To account for this definition, I took the IPEDS data (2012-13 school year) and created new ‘sectors’ called Public 4-year AD, Private 4-year AD, and For-profit 4-year AD, with the AD standing for Associate’s Degrees. I then combined these AD schools with the corresponding Public 2-year, Private 2-year, and For-profit 2-year schools into a Combined Public Associate’s, Combined Private Associate’s, and Combined For-profit Associate’s category. The result: I can now come close to matching the results of the College Scorecard, meaning that this definition (called the Brian Criteria after the commenter who described it on Stack Exchange) does indeed account for a large majority of the missing institutions.

The numbers are even bigger than I thought.


  • IPEDS Listing = number of US, Title IV schools, with degree-seeking students and degrees awarded in 2012-13
  • Brian Criteria = number of schools passing Brian Criteria on IPEDS data
  • Scorecard = number of schools returned on College Scorecard using Advanced Search

Fully one in four Public Associate’s Degree institutions (mostly community colleges) are not listed on College Scorecard.

More than four in ten For-profit Associate’s Degree institutions are not listed.

Overall, almost one in three Associate’s Degree institutions are not listed.

These categories account for almost half of all US postsecondary students.

Cali Morrison from WCET described a big problem with this approach by email.

In my opinion, where this hurts the most is in promoting to potential students the idea of stackable credentials. Many of these certificates are awarded as an interim step to an associate degree. They are high quality certificates that lead to job potential and contribute to a student’s eventual degree. These institutions not appearing on this touted ‘good data’ site, produced by the government, may make some students shy away from what could be a really useful and employable credential.

Consider some of the worst examples, in terms of degree-granting institutions not included in Scorecard:

[Table omitted: Top 30 by “Silly Factor”]

The post College Scorecard Problem Gets Worse: One in three associate’s degree institutions are not included appeared first on e-Literate.

Oracle SOA/BPM: What are Business Faults Really?

Jan Kettenis - Wed, 2015-09-16 13:12
You may have read that it is a best practice to let a service return a "business fault" as a fault. In this article I point out some pitfalls with this "best practice", and will argue that you should have a clear understanding of what "business fault" means before you start applying it. The examples are based upon the Oracle SOA Suite 11g, but apply as well to 12c.

To allow the consumer to recognize a specific fault, you add it as a fault to the WSDL. This looks similar to the following:
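As an illustration only (the element and message names below are invented for this sketch, not taken from the original service), a minimal WSDL operation declaring two specific faults could look like this:

<wsdl:message name="CustomerNotFoundFault">
  <wsdl:part name="payload" element="flt:customerNotFound"/>
</wsdl:message>
<wsdl:message name="InvalidCustomerIdFault">
  <wsdl:part name="payload" element="flt:invalidCustomerId"/>
</wsdl:message>

<wsdl:operation name="find">
  <wsdl:input message="tns:findRequest"/>
  <wsdl:output message="tns:findResponse"/>
  <!-- each declared fault becomes a distinct, typed fault the consumer can catch -->
  <wsdl:fault name="CustomerNotFoundFault" message="tns:CustomerNotFoundFault"/>
  <wsdl:fault name="InvalidCustomerIdFault" message="tns:InvalidCustomerIdFault"/>
</wsdl:operation>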

Sometimes you see that for every individual error situation a specific fault is defined, in some other cases you might find that all errors are wrapped in some generic fault with a code to differentiate between them. In the example above two different, specific faults are defined.

What you should realize is how a business fault manifests itself in Enterprise Manager. When you throw a fault from a service, it will be represented as a BusinessFault in the flow trace of the consumer (but not in the Recent Faults and Rejected Messages section):

Any instance of the consumer that threw a fault will have an instance state that is flagged as faulted.

Now, if the fault really concerns an error, meaning some system exception or an invalid invocation of the service by the consumer (e.g. wrong values for some of the arguments), then that probably is exactly how you would like it to respond. Such errors should stand out in EM because you probably either have some issue in the infrastructure (e.g. some service not being available), or some coding error. However, what I also see in some of the examples you can find on the internet as well as in practice, is that faults are thrown in situations that do not really concern an error. For example, for some CustomerService.find() operation a fault is returned when no customer could be found.

The problem with such a practice is that this type of fault generally is of no interest to the systems administrator. In the Oracle SOA/BPM Suite 11g there is an option to search on Business Faults Only or System Faults Only, but that does not work. So when thrown often enough, these "pseudo errors" start cluttering the administrator's view. The log files are equally cluttered:

This cluttering in EM and the logs introduces the risk that systems administrators can no longer tell the real errors from these faults, and may no longer take them very seriously. Exactly the opposite of what you want.

But system administrators are not the only ones suffering from this. BPM developers, too, are often confronted with tough challenges when integrating such services in their process model. For example, look at the following model:

In this example the service throws 2 faults that are not really errors, but just some result that you may expect from the service. Each fault has to be handled, but in a different way. At the top of the service calls are two Boundary Error events, one for each type of error. In case of BusinessFaults you either have to catch each one individually, or have one to catch all business faults:

Unlike with system exceptions, there is no way to do both at the same time.

A BusinessFault manifests itself as a fault in the flow trace of the business process, suggesting that something went wrong while that is not the case at all.

Given this issue of making the process model less clear and cluttering the flow trace, I prefer handling such "errors" as a normal response instead, as is done on the right-hand side of the service call in the process model. I used an exclusive gateway to filter them out of the normal flow, making it easier to follow how the process responds to them.

By the way, the "faultCode" and "faultString" elements are available because I defined them as elements of the fault thrown from the BPEL process I use in the example. When you define a Business Exception object in BPM, then by default you only have one single "errorInfo" element at your disposal:

As I explain in this article, you can customize a Business Exception object by manually modifying its XSD.

I included handling of system exceptions (remoteFault and other system exceptions at the bottom) in the process model only for the sake of example. Rather than handling system faults in the process, you should use the Fault Management Framework. However, using this framework is not an option for the 2 BusinessFaults in the example.

In case of a system exception you have a couple of out-of-the-box elements at your disposal, but unfortunately this is not the same for a specific exception compared to a catch all:

Conclusion: the (real) best practice is to only throw a fault when it really concerns an error that is of interest to a systems administrator. Any other type of error should be returned as a normal response.

For example, for the CustomerService.find() operation you could choose to return an element that only contains a child node when a customer is found and that has an extra element "noOfCustomersFound" returning 0 when none exists, or use some choice element that either returns the customers found, or some other element with a text like "customer not found".
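As a sketch of the first option, such a response element could be declared in the XSD along these lines (all names are invented for illustration):

<xsd:element name="findCustomerResponse">
  <xsd:complexType>
    <xsd:sequence>
      <!-- always present; 0 when no customer matched the search criteria -->
      <xsd:element name="noOfCustomersFound" type="xsd:int"/>
      <!-- present only when one or more customers were found -->
      <xsd:element name="customer" type="tns:customerType"
                   minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>

The consumer can then branch on "noOfCustomersFound" in a gateway rather than having to catch a fault.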

More information:
"Fault Handling and Prevention, Part 1", Guido Schmutz & Ronald van Luttikhuizen
"SOAP faults or results object", discussion on The Server Side

Part 3: Comparing Oracle Cloud Database Backups Options

Pythian Group - Wed, 2015-09-16 10:43
Comparing Oracle Database Backup Service (ODBS) and Oracle Secure Backups (OSB) to Amazon Web Services (AWS)

This is part 3 of a 3 part series on “Getting Started with Oracle Cloud Backups”.

  • Part 1 covers setting up RMAN to backup directly to the new Oracle Cloud Database Backup Service (ODBS).
  • Part 2 covers setting up RMAN to backup directly to the cloud using Amazon Web Services (AWS) Simple Storage Service (S3).
  • Part 3 compares and contrasts the two services.



Oracle recently announced their Oracle Database Backup Service (ODBS) as part of their big push to the Cloud. However, while the name is new, the technology really isn’t. It’s effectively just a re-brand of their Oracle Secure Backup Cloud Module, which was introduced years ago, initially with the ability to back up to the Amazon Simple Storage Service (S3). The functional and non-functional differences are minor and are summarized in this article.


Use Case

Both services probably appeal mostly to small or medium-sized businesses looking for off-site backups for whatever reason (such as DR or regulatory requirements).

Keep in mind that a service like this probably isn’t a full replacement for your onsite primary backup storage device. But it very well could replace your old-style off-site backup or tape vaulting vendor, which usually involves a physical pickup of backup tapes and transportation to a storage location on a daily, weekly, or in some cases monthly basis.

And while the restore times are certainly going to be considerably slower than restoring from on-premise disk based devices, it’s undoubtedly faster than bringing back tapes from an offsite storage location through a vendor service (time of which is usually measured in days with ad-hoc recall requests often being at an additional expense).

The specifics of how to get started with implementing either service are discussed in the previous articles of this series.


Decision Criteria Checklist

Many factors come into consideration when deciding whether to allow backups of a business-critical database to travel off-site and when selecting the appropriate vendor or service to do so. The following generic checklist is simply a guide of suggested criteria that one may consider:

  • Storage costs (metered or flat rate; progressive or flat incremental rates)?
  • Ease of installation (outages required or not)?
  • Effects on internal processes (i.e. does the module need to be reinstalled when home cloning; changes to RDBMS software installation processes)?
  • Ease of use?
  • Changes required to existing scripts/processes (i.e. complicated changes to RMAN commands or scripts; changes or integration with 3rd party backup tools required)?
  • Is backed-up data secured at rest (at vendor data center)?
  • Is backed-up data secured in flight (transfer to or from vendor data center through the public internet)?
  • Upload and download performance (is there an effect on Recovery Time Objectives)?
  • Is transferring the additional data going to affect the organization’s internet performance or its costs from the ISP?
  • Additional licensing costs?


Pros and Cons

The answers to some of the above criteria are going to be site- and/or database-specific. Others have been discussed in more detail in the other articles in this series.

However, the pros and cons of each service can be summarized as follows:

ODBS Pros:

  • No upfront costs (no additional licenses required)
  • Reasonable and competitive metered rates
  • Security at-rest and in-flight through mandatory encryption and HTTPS transfer
  • Advanced RMAN compression option included (without requiring the Advanced Compression Option)
  • Data is triple mirrored in the Oracle data center

ODBS Cons:

  • No security through keys/credentials – instead must use “users” correlated to actual named users and email addresses
  • Navigating between services, accounts, and domains not as simple as it should be
  • WebUI doesn’t show details beyond overall “space used” (i.e. doesn’t show files or per-database space usage)
  • Can’t specify the Oracle data center used, just the key geographical region (i.e. North America)
  • No ability to replicate data between Oracle data centers


OSB & AWS Pros:

  • Ability to create unique IDs and keys for each DB being backed up (credentials independent of named users)
  • Billing calculator for easy cost estimation
  • Additional options with S3, such as more specific data center selection and cross-region replication

OSB & AWS Cons:

  • Requires licenses for the “Oracle Secure Backup Cloud Module”, which is licensed on a per-RMAN-channel basis
  • By default data is neither secure at rest nor in flight (though both can be enabled)

It should be noted that while the Oracle Secure Backup Cloud Module is licensed on a per “stream” or per RMAN channel basis, those RMAN channels are not dedicated to one database. Rather, they are concurrently in-use channels. So if you had licenses for 10 “streams” (channels), those could be used concurrently by 10 different databases each using one RMAN channel, by one database using 10 RMAN channels, or by any combination thereof.

And while both provide use of backup encryption and advanced compression options as part of “special-use licensing”, it should be noted that these options are available only for the cloud-based (or in the case of OSB, SBT-based) backups. Regular disk-based backups of the same database(s) would still require the Advanced Security Option for RMAN backup encryption and the Advanced Compression Option for anything other than “BASIC” RMAN backup compression.

The AWS solution also provides (by default) the option of not securing the data at rest or in flight. Not encrypting RMAN backups is beneficial when trying to take advantage of storage-based deduplication, which is not relevant here. Hence I struggle to think of a strong business case for not encrypting backups, all else being equal. Similarly, why would one choose HTTP over HTTPS for critical business data?



One possible requirement may be the need to use both services concurrently for test/evaluation purposes. Fortunately, since the module (library, configuration, and wallet) files are all uniquely named, it’s absolutely possible to use both services concurrently, even from within the same RMAN session. For example:

RMAN> run {
2> allocate channel odbs type sbt
3> PARMS=',SBT_PARMS=(OPC_PFILE=/u01/app/oracle/product/12.1.0/dbhome_2/dbs/opcCDB121.ora)';
4> backup tablespace users;
5> }

allocated channel: odbs
channel odbs: SID=270 device type=SBT_TAPE
channel odbs: Oracle Database Backup Service Library VER=

Starting backup at 10-SEP-15
channel odbs: starting full datafile backup set
channel odbs: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/CDB121/users01.dbf
channel odbs: starting piece 1 at 10-SEP-15
channel odbs: finished piece 1 at 10-SEP-15
piece handle=2tqgq3t5_1_1 tag=TAG20150910T114021 comment=API Version 2.0,MMS Version
channel odbs: backup set complete, elapsed time: 00:00:15
Finished backup at 10-SEP-15
released channel: odbs

RMAN> run {
2> allocate channel aws_s3 type sbt
3> PARMS=',SBT_PARMS=(OSB_WS_PFILE=/u01/app/oracle/product/12.1.0/dbhome_2/dbs/osbwsCDB121.ora)';
4> backup tablespace users;
5> }

allocated channel: aws_s3
channel aws_s3: SID=270 device type=SBT_TAPE
channel aws_s3: Oracle Secure Backup Web Services Library VER=

Starting backup at 10-SEP-15
channel aws_s3: starting full datafile backup set
channel aws_s3: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/CDB121/users01.dbf
channel aws_s3: starting piece 1 at 10-SEP-15
channel aws_s3: finished piece 1 at 10-SEP-15
piece handle=2uqgq3un_1_1 tag=TAG20150910T114111 comment=API Version 2.0,MMS Version
channel aws_s3: backup set complete, elapsed time: 00:00:15
Finished backup at 10-SEP-15
released channel: aws_s3



However, if the wrong SBT library is used by RMAN commands trying to access the backup pieces, warnings like the following will be returned:

RMAN-06207: WARNING: 2 objects could not be deleted for SBT_TAPE channel(s) due
RMAN-06208:          to mismatched status.  Use CROSSCHECK command to fix status
RMAN-06210: List of Mismatched objects
RMAN-06211: ==========================
RMAN-06212:   Object Type   Filename/Handle
RMAN-06213: --------------- ---------------------------------------------------
RMAN-06214: Backup Piece    2qqgq3ik_1_1
RMAN-06214: Backup Piece    c-3847224663-20150910-01


The issue with the above example DELETE BACKUP command is resolved by simply allocating a channel using the proper library for the proper vendor.
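For example, a maintenance channel for the ODBS backups could be allocated as follows (the SBT_LIBRARY path is an assumption here; it depends on where the module was installed on your system):

RMAN> allocate channel for maintenance device type sbt
2> PARMS='SBT_LIBRARY=/u01/app/oracle/product/12.1.0/dbhome_2/lib/libopc.so,SBT_PARMS=(OPC_PFILE=/u01/app/oracle/product/12.1.0/dbhome_2/dbs/opcCDB121.ora)';

RMAN> crosscheck backup;
RMAN> delete noprompt expired backup;

With the matching library loaded, the CROSSCHECK can reach the backup pieces in the correct cloud store and the DELETE no longer reports mismatched objects.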

Running backups similar to the above commands, but on an entire CDB (using the RMAN command “BACKUP AS COMPRESSED BACKUPSET DATABASE;”) rather than just one tablespace in order to have a larger amount of data to process, shows some interesting results:

SQL> SELECT status, input_type,
  2         backed_by_osb, compression_ratio,
  3         input_bytes_display, output_bytes_display,
  4         input_bytes_per_sec_display, output_bytes_per_sec_display,
  5         time_taken_display
  6    FROM v$rman_backup_job_details
  7   ORDER BY start_time;

STATUS     INPUT_TYPE    BAC COMPRESSION_RATIO INPUT_BYTES_DI OUTPUT_BYTES_D INPUT_BYTES_PE OUTPUT_BYTES_P TIME_TAKEN
---------- ------------- --- ----------------- -------------- -------------- -------------- -------------- ----------
COMPLETED  DB FULL       YES        6.60411404     4.47G        692.75M          1.59M        245.88K      00:48:05
COMPLETED  DB FULL       YES        6.60794224     4.47G        692.50M          1.96M        303.43K      00:38:57


The interesting points are:

  1. The backups to ODBS (the first line) consistently took longer on my system than the backups to AWS (the second line). Longer-term testing would be needed to show whether this is an anomaly or a pattern. Backup performance would need to be evaluated in detail when selecting a service.
  2. Both are recorded in the catalog as using OSB (the “backed_by_osb” column). This is no surprise, as the ODBS module is simply a copy of the OSBWS module, as is apparent from the API version numbers.



From an RMAN functional perspective the two are almost identical, as would be expected since the ODBS module and library are essentially just a clone of the OSB module and library, re-branded and slightly modified to differentiate the service and to conveniently exempt Oracle from its own licensing requirements. That is not uncommon for Oracle – after all, they do like to grant themselves exceptions, allowing them to promote their products and services over the competition.

From a data management perspective, Amazon S3 is far ahead with additional features such as regional replication and more granular storage location options – something Oracle may well catch up on but at the time of writing does not yet provide.

Hence, I think the choice really comes down to priorities: additional storage/data management options vs. additional licensing costs. For many smaller customers price is a key concern, and the Oracle solution is therefore likely preferable, as it is essentially the same as the OSB & AWS solution but without the license requirement.


Discover more about our expertise in Oracle and Cloud.

Categories: DBA Blogs

Is Moodle “Bigger than Martin”?

Michael Feldstein - Wed, 2015-09-16 10:27

By Michael Feldstein

In his recent post on why Moodle matters, Phil wrote,

For a large portion of our readers who deal mostly with US higher education, it could be easy to dismiss Moodle as an LMS and an idea past its prime.[…]And yet no other academic LMS solution comes close to Moodle in terms of worldwide deployments and learners enrolled.

Likewise, if you’re not embedded in the Moodle community, you may not know how central Moodle creator Martin Dougiamas is to that project, or even know who he is. And yet, he is huge. I can name maybe a handful of people who are relatively widely known and respected in educational technology. I can name only a few who are admired and even beloved. I can name very few indeed who are not working for universities (although that’s changing a little, now that Jim Groom and David Wiley have both joined commercial ventures). Within the circle that knows him, Martin’s many admirers have been fiercely loyal to and protective of him, trusting him absolutely to steer Moodle the product, Moodle the community, and Moodle the brand. When you add to that the size of Moodle’s adoption footprint, one can make the case that Martin Dougiamas is one of the most consequential figures in the history of educational technology.

Which is why it is so remarkable that Phil and I are hearing, for the first time ever, from a number of different, independent sources, the phrase, “Moodle is bigger than Martin now.” It is another indicator that Moodle is reaching an inflection point.

For starters, it is important to understand just how central Martin is to the Moodle ecosystem. Although Moodle is open source, all substantial development of Moodle core (as opposed to optional plugins) flows through Martin’s privately owned company, Moodle Pty (more commonly referred to in the Moodle community as “Moodle HQ” or just “HQ”). Here is a slide from Martin’s recent Moodle moot keynote:


(Sorry for the poor image quality; it’s a screen grab from the video.)

Moodle Partners sell support, services, and proprietary add-ons around the core open source platform. They then tithe about 10% of their Moodle-related gross revenues to Moodle Pty, which uses the money to hire developers, under Martin’s direction, to work on the next versions of Moodle.  In Phil’s recent interview with him, Martin shied away from using the term “benevolent dictator,” but if you read his comments closely, he is more objecting to the connotations of the word “dictator” than he is to the characterization of him as the guy in charge. (In the past, Martin has used the term to describe his role.)

For people inside the Moodle community, this has been considered a feature, not a bug, because they trust Martin. Former Moodlerooms CEO Lou Pugliese recently told us,

Working with Martin Dougiamas has been one of the highlights of my career… he’s a brilliant, honest broker in global eLearning whose values remain steadfastly uncompromised in continuing to advance open source alternative in the market.[1]

In my experience, that is not an unusual statement. With minimal effort, I could string together a dozen similar quotes from both major and minor figures in the Moodle community. Phil and I heard a number of equally glowing statements in our recent spade work for this post series. In the eyes of many folks in the community, even now, Martin is Moodle. Moodle is Martin.

In fact, the personal loyalty to Martin has been so fierce that it has made reporting on Moodle difficult at times. For example, in the past I have tried to write about Moodle’s financial sustainability model. It is unprecedented in ed tech and has, up until now, been pretty wildly successful. But I couldn’t get much real information about it. It wasn’t a secret, exactly. Martin gave an overview of the way the ecosystem works in my 2010 interview with him. It’s just that the financial side of it wasn’t talked about, and certainly not in the level of detail that would enable me to do meaningful analysis. Martin himself doesn’t like to talk about finances, as he admits in the keynote from which the slide above comes. He tells us this, tells us he is going to talk about it anyway now, and then launches into a 45-minute keynote in which he…mostly doesn’t talk about finances. There’s a little more than that slide, but really, not that much. It wasn’t any different back when I was trying to write about the Moodle sustainability model the first time. When I went to the Moodle Partners to ask them about the details, none of them would say much. What I really wanted to know was exactly how much money was flowing through the system and where it was going. I couldn’t come close to getting those numbers. They all deferred to Martin. I would know whatever he was comfortable with me knowing. And yet when I asked whether I should approach Martin directly, I received more than one (friendly) suggestion that it would probably be a bad idea. Martin doesn’t like to talk about money, I was told.

I thought about writing a piece highlighting how the financial information wasn’t available. I shared this thought with somebody who was not a Moodle Partner but who I knew was a mutual friend of Martin’s and mine. I raised the point that here is a substantial amount of money flowing through an organizational structure that we don’t fully understand, supporting the world’s most widely adopted LMS, into a private company owned by one guy, and we have no visibility into how it works, how much money we are talking about, or where it’s going. He replied, “I can understand why that would be concerning from an American point of view.” [Subtext: I don’t share your concern.] “But I worry that if you write that piece, Martin will just clam up even more. You won’t get anywhere. He doesn’t like to talk about money.”

In the end, I never wrote that story. I had no reason to believe that anything shady was going on, no information that would shed new insights, and no desire to take a swing at a project and a person that both seemed to be doing net good in ed tech. In effect, the unity of the Moodle community behind Martin became the more important story for me, and although the lack of transparency bothered me, it didn’t bother me enough to feel like I needed to insert myself into that story.

Which is why, when members of the Moodle community, including but not limited to current and past Moodle Partners, are beginning to talk to us about their concerns regarding the direction of Moodle, and the phrase “Moodle is bigger than Martin now” is popping up in different, independent conversations, I interpret these as indicators that something serious is afoot. They reinforce the sense of change we were already getting from seeing Totara fork from Moodle, Remote Learner leave the Partner Program and sell off its UK subsidiary as well, and regional Partners such as Nivel Siete sell themselves to Blackboard. The point of my focus on Martin in this post is not whether people think he’s a swell guy or not. (For the record, they still do. Even those folks who are saying that “Moodle is bigger than Martin” are careful to quickly follow that statement with a declaration that they know Martin has the best interests of the community at heart.) Rather, it speaks to Moodle’s sustainability model going forward. Right now, as the graphic above shows, the substantial majority of Moodle development resources flow through Moodle HQ. And Martin is Moodle HQ. Moodle HQ is Martin. When somebody says, “Moodle is bigger than Martin,” what they really mean is that Moodle is bigger than Moodle HQ. They are effectively questioning whether Moodle development resources should flow through Moodle HQ.

There is an interesting bifurcation in the Moodle sustainability model, in the sense that it depends on both a large grassroots community to drive interest and energy and a relatively small circle of commercial partners to generate revenue for development. In light of the above, that raises a few questions. Will commercial partners begin to move away from Moodle HQ in sufficient numbers to substantially impact the development resources available for Moodle core? And what are the forces that would drive this divergence? Also, if the commercial partners move away from Martin and Moodle HQ, will the Moodle-adopting schools be both willing and able to vote with their feet and leave their commercial partners in sufficient numbers to impact those companies? And finally, if these tensions play out as actions, what will happen to Moodle? There are some hints and possibilities of alternative sustainability models and alternative futures that Phil will play out in a future post.


  1. You can see my 2011 Skype interview with Lou, six months after he had taken the helm at Moodlerooms, here.

The post Is Moodle “Bigger than Martin”? appeared first on e-Literate.

SoapUI: change the location of your user home

Darwin IT - Wed, 2015-09-16 09:45
At my current customer I use a company supplied laptop. In the office, when I log on I get connected to a Home Folder on a network drive. SoapUI stores its default workspace in the root of that folder, based on Windows settings.

At home I connect using VPN and somehow this Home Folder is unreachable. Very inconvenient, because with every restart I need to open/import my projects again.

As with increasing the heap settings, which I showed a few days ago, the location of the user home is controlled by a property that can be set as a Java -D argument.

So you can add the -Duser.home property to "SoapUI-5.1.3.vmoptions" as follows:
-Dsoapui.home=C:\Program Files\SmartBear\SoapUI-5.1.3/bin
-Dsoapui.ext.libraries=C:\Program Files\SmartBear\SoapUI-5.1.3/bin/ext
-Dsoapui.ext.listeners=C:\Program Files\SmartBear\SoapUI-5.1.3/bin/listeners
-Dsoapui.ext.actions=C:\Program Files\SmartBear\SoapUI-5.1.3/bin/actions
-Dwsi.dir=C:\Program Files\SmartBear\SoapUI-5.1.3/wsi-test-tools
-Djava.library.path=C:\Program Files\SmartBear\SoapUI-5.1.3/bin
-Duser.home=c:\dev\SoapUI
As easy as that.

The folder needs to exist of course. You can also add it to the JAVA_OPTS variable in soapui.bat/.sh:
rem JVM parameters, modify as appropriate
set JAVA_OPTS=-Xms128m -Xmx1024m "-Dsoapui.home=%SOAPUI_HOME%\" -splash:soapui-splash.png -Duser.home=c:\dev\SoapUI

On a restart and a subsequent close of SoapUI you'll notice that it writes the workspace in the denoted folder.

By the way, I found this tip here.