
Feed aggregator

Using Streams with ODI12c for Oracle-to-Oracle Change Data Capture

Rittman Mead Consulting - Sun, 2015-07-26 05:10

Although Oracle GoldenGate replaced Oracle Streams a couple of years ago as the recommended data replication and change data capture technology for Oracle databases, many customers on Oracle Database 11gR2 or earlier still use Streams for Oracle-to-Oracle change data capture because it works and, unlike GoldenGate, doesn’t require any additional licensing. Oracle’s GoldenGate Statement of Direction paper from 2014 states that Streams in Oracle 11gR2 will continue to be supported but that no future version of the Oracle Database will ship with Streams included; even so, if you’re on 11gR2 and you just want trickle-feed capture between two Oracle databases it’s an interesting option.

I’ve covered Oracle-to-Oracle data replication using Streams a few times in the past, including this OTN article on ODI and Change Data Capture from before 2007 or so, this article on OWB11gR2 and Change Data Capture from 2010, and one from back in 2006 that went into the details of setting up asynchronous hotlog change data capture with the then-new “Paris” OWB10gR2 release. We’re now on the 12c release of Oracle Data Integrator and I’m teaching our ODI12c Bootcamp course to a client next week who’s particularly interested in using Streams with ODI12c, so I thought it’d be worth taking a look at this feature in more detail to see whether much has changed since those earlier articles.

Let’s start then with an ODI12c 12.1.3 install with a regular mapping set-up to copy and join the DEPT and EMP tables from one Oracle database into a denormalized table in another Oracle Database. Both are Oracle Database 11gR2 (11.2.0.3) and the initial mapping looks like this:

[Screenshot: the initial logical mapping]
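For reference, the denormalized target might look something like the table below – a purely hypothetical sketch, as the post doesn’t show the actual DDL; the columns are just the familiar SCOTT EMP and DEPT columns merged into one reporting table.

-- hypothetical denormalized reporting target
create table emp_dept_report (
    empno     number(4),
    ename     varchar2(10),
    job       varchar2(9),
    hiredate  date,
    sal       number(7,2),
    deptno    number(2),
    dname     varchar2(14),
    loc       varchar2(13)
);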

One thing that many ODI developers don’t know about the 12c release is that it comes with a set of “component-style” knowledge modules built into the tool, which you can use straightaway to get a mapping running without having to select and import IKMs, LKMs and other KMs from the ODI Studio filesystem. In my case the Physical mapping looks like the screenshot below with two execution units (one for each Oracle Database server) and a number of built-in component-style KMs available for selection. I choose the LKM SQL to SQL (Built-in) load knowledge module, which uses a generic JDBC connection to load source records into the staging table on the target server, and then the IKM Oracle Insert integration knowledge module to take that staging data and integrate it into the target table.

[Screenshot: the Physical mapping with component-style KMs selected]

I then run this mapping and see that ODI extracts data from the source database using a Java routine and batch-transfers it into the Oracle staging table, and then integrates the contents of that staging table into the target Oracle table. I could of course improve this by using the LKM Oracle to Oracle (DBLink) knowledge module and thereby avoid loading in two steps, but what I’d instead like to do is use Oracle Streams to trickle-feed new and changed data from my source tables over to the target database server, as shown in the diagram below.

 

[Diagram: Streams-based trickle-feed from the source to the target database]

In the OWB and Asynchronous Change Data Capture article I linked to earlier in the post, setting up change data capture involved quite a few steps; the database had to be put into archivelog mode, the GLOBAL_NAMES parameter had to be set and a whole bunch of PL/SQL procedures had to be called to set up the source-to-target connection. Once it’s running, Streams takes transactions off the redo log files on the source database and sends them across the network to the target database server, in a similar way to how GoldenGate sends transactions in the trail file across to target database servers – except it’s Oracle-to-Oracle only and, in my experience, a lot more fragile than GoldenGate, which is why we and most other customers switched to GoldenGate when it came out.
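To give a flavour of what that manual setup looked like, the sketch below shows the kind of DBMS_STREAMS_ADM calls involved on the source database; the queue, capture and table names here are made up for illustration, so don’t treat this as the exact script from the old article.

-- illustrative manual Streams setup (names are hypothetical, not the JKM-generated ones)
begin
    -- create the queue that the capture process will publish change records to
    dbms_streams_adm.set_up_queue(
        queue_table => 'streams_queue_table',
        queue_name  => 'streams_queue');

    -- create capture rules for a table we want to trickle-feed
    dbms_streams_adm.add_table_rules(
        table_name   => 'scott.emp',
        streams_type => 'capture',
        streams_name => 'capture_emp',
        queue_name   => 'streams_queue',
        include_dml  => true,
        include_ddl  => false);
end;
/

The point of the ODI JKM, as we’ll see next, is that it runs this sort of setup for you behind the scenes.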

ODI12c comes with a number of change data capture or “journalizing” knowledge modules that use either database triggers, UPDATE_DATE fields, Oracle Streams or GoldenGate to replicate data from the source system to the target data warehouse. The journalizing knowledge module we’ll use, JKM Oracle 11g Consistent (Streams), is a template-style KM that needs to be imported first from the filesystem where ODI Studio was installed, as shown in the screenshot below – note also, when you do this yourself, that there’s a big “deprecated” notice next to it, saying that it could be removed at any time (presumably in favour of the GoldenGate-based ones).

[Screenshot: importing the JKM Oracle 11g Consistent (Streams)]

ODI and the JKM Oracle 11g Consistent (Streams) KM take a more “f*ck it, let’s just do it” approach to database configuration than my OWB10gR2 version did, automatically configuring the source database for Streams by running all of the setup PL/SQL routines, leaving you just to put the database in archivelog mode if it isn’t already and to grant the connecting user (in my case, SYSTEM) Streams administrator privileges. Moving over to SQL*Plus on the source database, I therefore run the setup commands listed in the KM notes like this:

SQL*Plus: Release 11.2.0.3.0 Production on Sun Jul 26 06:34:44 2015

 

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

 

 

Connected to:

Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production

With the Partitioning, OLAP, Data Mining and Real Application Testing options

 

SQL> grant dba to system;

 

Grant succeeded.

 

SQL> begin

  2  dbms_streams_auth.grant_admin_privilege(

  3       grantee   => 'system',

  4       grant_privileges => true);

  5  end;

  6  /

 

PL/SQL procedure successfully completed.

 

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> startup mount;

ORACLE instance started.

 

Total System Global Area 2471931904 bytes

Fixed Size    2230872 bytes

Variable Size  570426792 bytes

Database Buffers    1895825408 bytes

Redo Buffers    3448832 bytes

Database mounted.

SQL> alter database archivelog;

 

Database altered.

 

SQL> alter database open;

 

Database altered.

Next I’ll go back to ODI Studio and enable the source Model for journalising by double-clicking on the model in ODI Studio and then selecting the JKM from the Journalizing tab, like this:

[Screenshot: selecting the JKM on the model’s Journalizing tab]

I then right-click on the EMP and DEPT tables within the source model and select Changed Data Capture > Add to CDC, where I can fine-tune the replication order so that the department (DEPTNO) a new employee links to will always have been created before the employee record arrives.

[Screenshot: Changed Data Capture > Add to CDC]

It’s at this next stage, when I enable journalizing, that Streams is set up on the source and target database servers and all the supporting tables and views are created. To enable journalizing I click on the model, not the individual tables, and select Changed Data Capture > Start Journal, like this:

[Screenshot: Changed Data Capture > Start Journal]

If you’ve read any of our previous posts on ODI and changed data capture you’ll realise that this setup process is the same regardless of the underlying replication technology, which makes it easy to start with database-centric CDC technologies such as this and then move to GoldenGate later on without lots of rework or re-training. For now though, let’s run this setup process using the local agent and then check the Operator navigator to see what it did (and whether it worked…).

[Screenshot: Operator navigator showing the journalizing setup steps]

And it did work. Enabling journalising with the Oracle Streams JKM involves quite a few setup steps, including checking that all the database settings and parameters are enabled correctly, then running the various DBMS_STREAMS and other packages to set up the capture and transmission process. Then, as with all of the ODI JKMs, a set of J$ tables is created to hold the primary keys of new and changed records coming from the source system, along with JV$ views that join those primary keys to the full incoming replicated rows – this blog post by the Oracle ODI team explains the background to JKMs very well if you want to understand them in more detail. Looking at the source SCOTT schema in SQL*Developer I can see the CDC and J$/JV$ tables and views created in the schema; if I didn’t want these tables created in the actual data schema I could have specified a different schema as the WORK schema when I created the database connection in ODI Studio prior to this exercise.

[Screenshot: SQL*Developer showing the CDC and J$/JV$ objects in the SCOTT schema]
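As a rough idea of what gets generated (the exact DDL comes from the JKM, so treat this as an approximation rather than the real thing), the J$ journal table for DEPT holds little more than the subscriber name, a flag, a timestamp and the source table’s primary key:

-- hypothetical approximation of a JKM-generated journal table
create table j$dept (
    jrn_subscriber varchar2(30) not null,   -- subscriber the change is queued for
    jrn_flag       varchar2(1)  not null,   -- I = insert/update, D = delete
    jrn_date       date         not null,   -- when the change was captured
    deptno         number(2)    not null    -- primary key of the changed DEPT row
);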

Next I have to define one or more “subscribers” to the journals; for the more advanced “Consistent Set” JKMs, of which the Oracle Streams one is an example, you can define multiple consumers or “subscribers” to the changed data so that one can be further down the queue than another (Simple JKMs only allow a single subscriber). I call my subscriber “SUNOPSIS” as this is the default subscriber name ODI adds to the mappings downstream.

[Screenshot: adding the SUNOPSIS subscriber]

Pressing OK after adding the subscriber again brings up the prompt to select an agent, and going over to the Operator navigator I can see that another set of steps have run, again doing some streams setup but also adding details of the journal subscribers to the tables created on the source database.

[Screenshot: Operator navigator showing the subscriber setup steps]

I can check that journalising is now working by using the View Data… feature in the Designer navigator to insert new rows into the EMP and DEPT tables, and then checking the J$ tables in the source database schema to see if the rows’ primary keys have been added – which they have been.

[Screenshot: J$ tables showing the captured primary keys]
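You can run the same check directly from SQL*Plus or SQL*Developer; the statements below are a sketch, since the journal table and column names depend on what the JKM generated in your work schema:

-- insert a test row into the source table...
insert into scott.dept (deptno, dname, loc)
values (50, 'TESTING', 'LONDON');
commit;

-- ...then check that its primary key has been captured in the journal table
select jrn_subscriber, jrn_flag, jrn_date, deptno
from   scott.j$dept;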

To now just read journalised new and changed data into my mapping, all I do then is go to the Physical mapping diagram, select the first source table and check the Use Journalized Data Only checkbox, then do the same for the other table (note it is the table source you select, not the access point for that table into the target execution block).

[Screenshot: the Use Journalized Data Only checkbox in the Physical mapping]

So now I’ll run the mapping and check the results in the Operator navigator … but instead, I get an error:

[Screenshot: mapping execution error in the Operator navigator]

This is because of the way that ODI handles CDC and journalising: it can’t allow two journalised tables to be directly joined in a mapping – we think because ODI can’t guarantee both tables are in the same update “state” – and it therefore makes you copy their data into staging or intermediary tables before you can do the join. I therefore amend the mapping to load the journalised tables into staging tables on the target database server, amend the joins and filter to reference the staging tables, and then join their contents before filtering and loading into the target reporting table, like this:

[Screenshot: amended mapping loading the journalised tables into staging tables]

In the Physical mapping details, the two incoming Oracle staging tables are loaded by the same LKM SQL to SQL component-style knowledge module as we used before to extract data from the two journalised tables, and the Journalized Data Only flag is still set for the source tables, as you can see below.

[Screenshot: Physical mapping details for the amended mapping]

What this also highlights is a key difference between the way I trickle-fed transactions across from my source database back in the OWB and Changed Data Capture article linked at the start of this post, and the way ODI’s JKMs do it; in the OWB example I set up an actual trickle-feed process outside of OWB which transferred changed data across the network to my target data warehouse, whereupon I then read those change tables and used them to update the target tables in my data warehouse.

And this is actually how the GoldenGate KMs work with ODI – a GoldenGate replication process copies new and changed data from the source database to the target and ODI then reads from those change tables – whereas the Streams (and other non-GoldenGate) JKMs create the change capture tables back on the source database (the various J$ and JV$ tables I reviewed using SQL*Developer earlier on), with ODI then reading from those remote change tables and bringing the new and changed data across to the target database server. This, I guess, makes things easier to set up – you don’t have to worry about configuring the target database for Streams replication – but it does mean that you still incur the network traffic every time you micro-batch the changes across the network, rather than spreading that traffic transaction-by-transaction.

Anyway, back to ODI and this time, when I run the mapping it works, though looking at the Operator navigator again I can see that no new data came across, and the journal data is still waiting to be consumed afterwards. Why is this?

[Screenshot: Operator navigator showing no new data processed]

If you’ve only used the Simple CDC JKMs from Oracle before, this is normally all you need to do, but with Consistent Set ones such as this one, or the GoldenGate JKMs, you need to lock the subscriber view of the journalised data and extend the CDC window before you can access the journal records. For the Simple JKMs the IKM (Integration Knowledge Module) takes care of the unlock, extend, purge and lock operations for you automatically in the background, whereas with Consistent Set ones you typically do this as part of a wider ODI package, as shown in the screenshot below, with the first and last tasks created by dragging and dropping the journalized model onto the package canvas and selecting Journalizing Model as the type (the subscriber name, “SUNOPSIS” in this case, is typed in below those settings and is off-screen in the screenshot).

[Screenshot: ODI package wrapping the mapping with journalizing steps]

Now when I run this package, as opposed to the mapping on its own, I get the row of new data I was expecting and the journal table is now empty, as I was the only subscriber.

[Screenshot: package execution results]

Finally, if I was looking for real-time continuous loading into this target table, I could wrap the package in an event-detection loop that waits, in this case for up to ten seconds, for three journal rows to be written, then either processes them as soon as the third arrives or loops around again every ten seconds (obviously in reality you’d want to put in a mechanism to halt the loop if needed, but in my case I’ll just kill the job from the Operator navigator when I want it to stop).

[Screenshot: package with the event-detection loop]

So that’s the basics of using ODI with Oracle Streams for Oracle-to-Oracle changed data capture; if you’re just copying data between two Oracle databases and you’re on 11gR2 or earlier this might be an option, but long-term you’ll need to think about GoldenGate as Oracle aren’t developing Streams beyond the 11gR2 release. Note also that all the Streams activity happens over on the source database server, so you still need an additional step to copy the journalised data across to the target data warehouse, but it’s still a fairly non-invasive way to capture changes on the source Oracle database and it does have the considerable advantage (compared to GoldenGate) of being free to use.

Categories: BI & Warehousing

Check out 6.00.1x computer science class on edX!

Bobby Durrett's DBA Blog - Sat, 2015-07-25 11:15

I just finished the last program for a computer science class on edX and I urge you to try it.

I took this class:

MITx: 6.00.1x Introduction to Computer Science and Programming Using Python

I was more interested in how MIT taught the class than in the material itself because I already know the subjects covered.

The class taught the basics of programming – expressions, variables, loops, if statements, and functions.

It also had a large focus on bisection or binary search and the performance benefits of this type of search over sequentially reading through a list.

It also covered lists, hash tables, trees, stacks, and queues.

It discussed object-oriented programming.

The class concluded with the professor stating that the programming and computer science skills taught in this class are key to advancing your career, even if you do not work in a computer related job.

I interacted with a number of students in the class and found some who were in other fields and were having success taking the class.  Others were in business computing or IT but did not have a computer science background, so they were good programmers learning new concepts.  Many struggled with the class, but it is free and is given often. The class starts up again August 26th.  Nothing stops you from taking it multiple times.

I tried to think about whether I should recommend this class to the people I work with as a method of helping develop my coworkers that do not have experience in these areas.  At first I thought that the subject is too academic and has no connection to their jobs. But, after thinking about it for a while, I now believe that just the opposite is true.

Searching for practical applications of the class, I first remembered the programs that we wrote that compared searching sequentially through a list to using binary search.  In one test case the sequential method took 15 seconds but the binary search took less than one second.  This reminded me so much of tuning Oracle SQL queries.  The sequential scan of the list was like a full table scan in Oracle.  The binary search was like looking up a single row using an index scan.  As I tune Oracle queries my computer science knowledge of binary search and binary trees makes it easy to understand index and full table scans.

In another example, we recently had slowness on a Weblogic portal server.  CPU was getting maxed out and the CPU spent most of its time in a Java ConcurrentHashMap object.  I don’t know the internals of Weblogic and I have never used a ConcurrentHashMap but I know how hashing works.  I know that hashing is very fast until your hash table fills up or if the hash function distributes the items in an unequal way. My knowledge of hashing helped me grasp why our portal server was using a lot of CPU despite my lack of access to its internals.

So, contrary to my original fear that the edX class was too academic and not practical, I believe that the concepts covered are very practical.  If you do not know how binary search works or what a binary tree is, you will benefit from 6.00.1x on edX.  If you cannot explain how a hash table works and what causes hashing to slow down, you can learn from 6.00.1x. And, if you have never written a computer program, although you may find the class difficult and have to take it more than once, you will benefit from 6.00.1x on edX.

– Bobby

 

Categories: DBA Blogs

Blackboard Ultra and Other Product and Company Updates

Michael Feldstein - Sat, 2015-07-25 08:58

By Michael FeldsteinMore Posts (1037)

Phil and I spent much of this past week at BbWorld trying to understand what is going on there. The fact that their next-generation Ultra user experience is a year behind is deservedly getting a lot of attention, so one of our goals going into the conference was to understand why this happened, where the development is now, and how confident we could be in the company’s development promises going forward. Blackboard, to their credit, gave us tons of access to their top executives and technical folks. Despite the impression that a casual observer might have, there is actually a ton going on at the company. I’m going to try to break down much of the major news at a high level in this post.

The News

Ultra is a year late: Let’s start with the obvious. The company showed off some cool demos at last year’s BbWorld, promising that the new experience would be Coming Soon to a Campus Near You. Since then, we haven’t really heard anything. So it wasn’t surprising to get confirmation that it is indeed behind schedule. What was more surprising was to see CEO Jay Bhatt state bluntly in the keynote that yes, Ultra is behind schedule because it was harder than they thought it would be. We don’t see that kind of no-spin honesty from ed tech vendors all that often.

Ultra isn’t finished yet: The product has been in use by a couple of dozen early adopter schools. (Phil and I haven’t spoken with any of the early adopters yet, but we intend to.) It will be available to all customers this summer. But Blackboard is calling it a “technical preview,” largely because there are large swathes of important functionality that have not yet been added to the Ultra experience–things like tests and groups. It’s probably fine to use it for simple (and fairly common) on-campus use cases, but there are still some open manholes here.

[Screenshot]

Ultra is only available in SaaS at the moment and will not be available for on-premise installations any time soon: This was a surprise both to us and to a number of Blackboard customers we spoke to. It’s available now for SaaS customers and will be available for managed hosting customers, but the company is making no promises about self-hosted. The main reason is that they have added some pretty bleeding edge new components to the architecture that are hard to wrap up into an easily installable and maintainable bundle. The technical team believes this situation may change over time as the technologies that they are using mature—to be clear, we’re talking about third-party technologies like server containers rather than homegrown Blackboard technologies—they think it may become practical for schools to self-host Ultra if they still want to by that time. But don’t expect to see this happen in the next two years.

Ultra is much more than a usability makeover and much more ambitious than is commonly understood: There is a sense in the market that Ultra is Blackboard’s attempt to catch up with Instructure’s ease of use. While there is some truth to that, it would be a mistake to think of Ultra as just that. In fact, it is a very ambitious re-architecture that, for example, has the ability to capture a rich array of real-time learning analytics data. These substantial and ambitious under-the-hood changes, which Phil and I were briefed on extensively and which were also shared publicly at Blackboard’s Devcon, are the reason why Ultra is late and the reason why it can’t be locally installed at the moment. I’m not going to have room to go into the details here, but I may write more about it in a future post.

Blackboard “Classic” 9.x is continuing under active development: If you’re self-hosted, you will not be left behind. Blackboard claims that the 9.x code line will continue to be under active development for some time to come, and Phil and I found their claims to be fairly convincing. To begin with, Jay got burned at Autodesk when he tried to push customers onto a next-generation platform and they didn’t want to go. So he has a personal conviction that it’s a bad idea to try that again. But also, Blackboard gets close to a quarter of its revenue and most of its growth from international markets now, and for a variety of reasons, Ultra is not yet a good fit for those markets and probably won’t be any time soon. So self-hosted customers on Learn 9.x will likely get some love. This doesn’t mean development will be as fast as they would like; the company is pushing hard in a number of directions, and we get the definite sense that there is a strain on developer resources. But 9.x will not be abandoned or put into maintenance mode in the near future.

[Slide]

If you want to get a sense of what Ultra feels like, try out the Blackboard Student mobile app: The way Blackboard uses the term “Ultra” is confusing, because sometimes it means the user experience and sometimes it means the next-generation architecture for Learn. If you want to try Ultra the user experience, then play with the Student mobile app, which is in production today and which will work with Learn 9.x as well as Learn Ultra. Personally, I think it represents some really solid thinking about designing for students.

[Slides: Blackboard Student mobile app]

Moodle may make a comeback: One of the reasons that Moodle adoption has suffered in the United States the past few years is that it has lacked an advocate with a loud voice. Moodlerooms used to be the biggest promoter of the platform, and when Blackboard acquired them, they went quiet in the US. But, as I already mentioned, the international market is hugely important for Blackboard now, and Moodle is the cornerstone of the company’s international strategy. They have been quietly investing in the platform, making significant code contributions and acquisitions. There are signs that Blackboard may unleash Moodlerooms to compete robustly in the US market again. This would entail taking the risk that Moodle, a cheaper and lower-margin product, would cannibalize their Learn business, so file this under “we’ll believe it when we see it,” but Apple has killed the taboo of self-cannibalization when the circumstances are right, and they seem like they may be right in this situation.

Collaborate Ultra is more mature than Learn Ultra but still not mature: This is another case where thinking about Ultra as a usability facelift would be hugely underestimating the ambition of what Blackboard is trying to do. The new version of Collaborate is built on a new standard called WebRTC, which enables webconferencing over naked HTML rather than through Flash or Java. This is extremely hard stuff that big companies like Google, Microsoft, and Apple are still in the process of working out right now. It is just this side of crazy for a company the size of Blackboard to try to release a collaboration product based heavily on this technology. (And the only reason it’s not on the other side of crazy is because Blackboard acquired a company that has one of the world’s foremost experts on WebRTC.) Phil and I have used Collaborate Ultra a little bit. It’s very cool but a little buggy. And, like Learn Ultra, it’s still missing some features. At the moment, the sweet spot for the app appears to be online office hours.

[Slide: Collaborate Ultra]

 

My Quick Take

I’m trying to restrain myself from writing a 10,000-word epic; there is just a ton to say here. I’ll give a high-level framework here and come back to some aspects in later posts. Bottom line: If you think that Ultra is all about playing catch-up with Instructure on usability, then the company’s late delivery, functionality gaps, and weird restrictions on where the product can and cannot be run look pretty terrible. But that’s probably not the right way to think about Ultra.

The best analogy I can come up with is Apple’s Mac OS X. In both cases, we have a company that is trying to bring a large installed base of customers onto a substantially new architecture and new user experience without sending them running for the hills (or the competitors). This is a really hard challenge. Hardcore OS X early adopters will remember that 10.0 was essentially an unusable technology preview, 10.1 was usable but painful, 10.2 was starting to feel pretty good, and 10.3 was when we really began to see why the new world was going to be so much better than the old one. If I am right, Ultra will go through the same sort of evolution. I don’t know that these stages will each be a year long; I suspect that they may be shorter than that. But right now we are probably partway through the 10.0 era for Ultra.

As I mentioned earlier in the post, Phil and I still need to talk to some Ultra customers to get a sense of real usage and, of course, since it will be generally available to SaaS customers for use in the fall semester, we’ll have more folks to talk to soon. We will be watching closely to see how big the gaps are and how quickly they are filled. For example, how long will it take Blackboard to get to the items labeled as “In Development” on their slides? Does that mean in a few months? More? And what about the “Research” column? Based on these slides and our conversations, I think the best case scenario is that we reach the 10.2 era—where the platform is reasonably feature-complete, usable, and feeling pretty good overall—by BbWorld 2016, with some 10.3-type new and strongly differentiating features starting to creep into the picture. Or they could fall flat and utterly fail to deliver. Or something in between.

I’m pretty excited by the scope of the company’s ambition and am willing to cut them some slack, partly because they persuaded me that what they are trying to do is pretty big and partly because they persuaded me that they probably know what they are doing. But they have had their Mulligan. As the saying goes (when properly remembered), the proof of the pudding is in the eating. We’ll see what they deliver to customers in the next 6-12 months.

Watch this space.

The post Blackboard Ultra and Other Product and Company Updates appeared first on e-Literate.

PL/SQL Error Logging and Quantum Theory

The Anti-Kyte - Fri, 2015-07-24 14:17

When I started writing this post, it was going to be about something else.
This happens occasionally, I have an idea in my head and set to work.
Then I do some research – don’t look so surprised, I do look at the docs occasionally – and, as in this case, I find out that there’s rather more to the topic at hand than I first thought.
What follows is a re-visiting of some of the tools available in Oracle to help with error logging.
It includes stuff that either I’d forgotten or had never considered about some fairly common functions.
Before I dive in, I’d just like to say thanks to William Robertson, who first pointed out to me the similarity between PL/SQL error logging and Quantum Theory. If you’re still unclear of the connection between the two then consider, if you will, the Schrodinger’s Cat Thought Experiment.
It involves locking a cat in a box and possibly poisoning it.
Schrodinger postulates that the cat is both alive and dead…until you open the box to check.
The conclusions we can draw from this experiment are :

  • According to Quantum Theory, the act of observation changes the nature of the thing being observed
  • Schrodinger wasn’t a Cat person

Before going any further, I should point out that most of the stuff I know about Physics comes from watching Star Trek.

Moving on, I now invite you to consider…

Mysteriously moving errors

As with cats – according to Schrodinger at least – the act of “observing” (well, handling) a PL/SQL exception changes the error, or at any rate the location from which it appears to originate.

For example…

declare
    l_cat number;
begin
    l_cat := 'GREEBO';
end;
/

declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 4

No problem with this. It shows us that the error happened at line 4 in the code, which is correct.
However….

declare
    l_cat number;
begin
    l_cat := 'GREEBO';
exception
    when others then
        -- do some logging stuff....
        raise;
end;
/

declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at line 8

Here, the exception originated on line 4. However, the error is reported at line 8 – inside the exception handler.

This then, is the problem with which we need to wrestle.
Time to take a closer look at the tools that Oracle provides us for the purpose of error handling, starting with the most venerable…

SQLCODE and SQLERRM

SQLCODE and SQLERRM have been around for as long as I can remember.
If you’ve worked on PL/SQL applications for any length of time, you will almost certainly have seen a variation of one of the following in an exception handler:

    sqlerrm(sqlcode);
    
    sqlcode||' : '||sqlerrm;
    
    substr(sqlerrm(sqlcode),1,500);

If any of the above rings a bell then it illustrates the point that these functions (for that is what they are) are not especially well understood.

SQLCODE

SQLCODE returns the number of the last error encountered. Not the message, just the error number :

set serveroutput on
declare
    l_cat_lives number;
begin
    l_cat_lives := 'If the cat belongs to Shrodinger the answer is uncertain. Otherwise 9';
exception 
    when others then
        dbms_output.put_line(sqlcode);
end;
/

-6502

PL/SQL procedure successfully completed.

SQLCODE returns 0 on successful completion…

declare
    l_cat_lives number;
begin
    l_cat_lives := 9;
    dbms_output.put_line(sqlcode);
end;
/
0

PL/SQL procedure successfully completed.

For user-defined errors, it returns 1 by default…

declare
    e_no_cat exception;
begin
    raise e_no_cat;
exception when e_no_cat then
    dbms_output.put_line(sqlcode);
end;
/

1

PL/SQL procedure successfully completed.

…unless you associate the exception with an error number using the EXCEPTION_INIT pragma…

declare
    e_no_cat exception;
    pragma exception_init( e_no_cat, -20000);
begin
    raise e_no_cat;
exception when e_no_cat then
    dbms_output.put_line(sqlcode);
end;
/
-20000

PL/SQL procedure successfully completed.

SQL> 

It will also return the relevant error code if you use RAISE_APPLICATION_ERROR…

begin
    raise_application_error(-20001, 'The cat has run off');
exception when others then
    dbms_output.put_line(sqlcode);
end;
/
-20001

PL/SQL procedure successfully completed.

SQL> 

On its own then, SQLCODE is not much help in terms of working out what went wrong, unless you happen to have memorized all of the Oracle error messages.

Fortunately we also have…

SQLERRM

This function takes in an error number and returns the relevant message :

begin
    dbms_output.put_line(sqlerrm(-6502));
end;
/
ORA-06502: PL/SQL: numeric or value error

PL/SQL procedure successfully completed.

Because of this SQLERRM can be used to create the equivalent of the oerr utility in PL/SQL.
Better still, it takes SQLCODE as its default parameter…

declare
    l_cat_lives number;
begin
    l_cat_lives := 'If the cat belongs to Shrodinger the answer is uncertain. Otherwise 9';
exception
    when others then
        dbms_output.put_line(sqlerrm);
end;
/

ORA-06502: PL/SQL: numeric or value error: character to number conversion error

PL/SQL procedure successfully completed.

The maximum length of a varchar returned by SQLERRM is, according to the documentation, “the maximum length of an Oracle Database error message” – 512.

Whilst we’re on the subject, the 11gR2 documentation includes a note recommending that, generally, DBMS_UTILITY.FORMAT_ERROR_STACK be used instead…

DBMS_UTILITY.FORMAT_ERROR_STACK

So, let’s see what this function gives us when used as a drop-in replacement for SQLERRM….

declare
    l_cat_lives number;
begin
    l_cat_lives := 'If the cat belongs to Shrodinger the answer is uncertain. Otherwise 9';
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/
ORA-06502: PL/SQL: numeric or value error: character to number conversion error


PL/SQL procedure successfully completed.

SQL> 

Not much then based on this example. However, there are a couple of differences.
The first is that this function returns up to 2000 characters of the error stack.
The second is that it does not take any arguments. Fair enough I suppose. From the name you’d infer that this function returns the entire error stack rather than the single message that SQLERRM does.
Let’s put that to the test…

create or replace package transporter as
    function find_target return varchar2;
    procedure beam_me_up_scotty;
end transporter;
/

create or replace package body transporter as
    function find_target 
        return varchar2
    is
    begin
        raise_application_error(-20003, 'Location or velocity unknown');
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    end beam_me_up_scotty;
end transporter;
/

This package is an analog of what Star Fleet Engineers would have been working with before they came up with the Heisenburg Compensator.

If we call this without any error handling, we’ll get a “stack” of errors…

begin
    transporter.beam_me_up_scotty;
end;
/

*
ERROR at line 1:
ORA-20003: Location or velocity unknown
ORA-06512: at "MIKE.TRANSPORTER", line 6 
ORA-06512: at "MIKE.TRANSPORTER", line 13
ORA-06512: at line 1

You’d expect something fairly similar if you used the FORMAT_ERROR_STACK function…

set serveroutput on size unlimited
begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/

ORA-20003: Location or velocity unknown



PL/SQL procedure successfully completed.

SQL> 

So, not similar at all then.
NOTE – if you’ve already spotted the deliberate mistake here, bear with me for a bit.
If we change the package body so that errors are raised at multiple levels…

create or replace package body transporter as
    function find_target 
        return varchar2
    is
    begin
        raise_application_error(-20003, 'Location or velocity unknown');
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    exception when others then
        raise_application_error(-20004, 'I canna change the laws o physics!');
    end beam_me_up_scotty;
end transporter;
/

… we simply get the last error passed…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/

ORA-20004: I canna change the laws o physics!



PL/SQL procedure successfully completed.

SQL> 

From all of this, it would appear that DBMS_UTILITY.FORMAT_ERROR_STACK doesn’t really give us much (if anything) over SQLERRM. This is especially true if the documentation is correct and no single Oracle Error Message will exceed 512 bytes.
All of which is rather odd, until you consider…

RAISE_APPLICATION_ERROR

RAISE_APPLICATION_ERROR is usually invoked as in the above example. However, it actually accepts three arguments :

  • NUM – an error code in the range -20000 to -20999
  • MSG – an error message up to 1024 characters ( including the error code)
  • KEEPERRORSTACK – if TRUE then the error code is placed at the top of the error stack. Otherwise it replaces the error stack.
    Default is FALSE

The first point to note here is that, unlike SQLERRM, FORMAT_ERROR_STACK can accommodate the full length of a message from RAISE_APPLICATION_ERROR.
More relevant to the issue at hand, however, is the KEEPERRORSTACK parameter. If we tweak the package once more to set this parameter to true…

create or replace package body transporter as
    function find_target 
        return varchar2
    is
    begin
        raise_application_error(-20003, 'Location or velocity unknown', true);
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    exception when others then
        raise_application_error(-20004, 'I canna change the laws o physics!', true);
    end beam_me_up_scotty;
end transporter;
/

…and re-run our test…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_stack);
end;
/
ORA-20004: I canna change the laws o physics!
ORA-06512: at "MIKE.TRANSPORTER", line 16
ORA-20003: Location or velocity unknown

PL/SQL procedure successfully completed.

…we now get a stack. However, we’re still stuck without the line number from where the error originated.
Fortunately, they’ve been burning the candle at both ends over at Star Fleet, or possibly at Redwood Shores…

DBMS_UTILITY.FORMAT_ERROR_BACKTRACE

Let’s try re-executing our package, this time using FORMAT_ERROR_BACKTRACE…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/

ORA-06512: at "MIKE.TRANSPORTER", line 7
ORA-06512: at "MIKE.TRANSPORTER", line 14
ORA-06512: at line 2



PL/SQL procedure successfully completed.

SQL> 

Well, that’s different. We get a stack, together with the line number at which the error originated. Unfortunately it doesn’t include the originating error message itself. Let’s try that again, but this time in combination with SQLERRM…

 
begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(sqlerrm);
        dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 7
ORA-06512: at "MIKE.TRANSPORTER", line 14
ORA-06512: at line 2



PL/SQL procedure successfully completed.

SQL> 

Now we have the original error. We also have the line at which it happened. At last, we have our Heisenburg compensator.
Well, in most circumstances. Just before we test it on Admiral Archer’s prize beagle …

create or replace package body transporter as
    function find_target 
        return varchar2
    is
        l_silly number;
    begin
        l_silly :=  'Location or velocity unknown';
        exception when others then
            -- do some logging and...
            raise;
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    end beam_me_up_scotty;
end transporter;
/

Now we’ve added an error handler to the innermost package member…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(sqlerrm);
        dbms_output.put_line(dbms_utility.format_error_backtrace);
end;
/

ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 10
ORA-06512: at "MIKE.TRANSPORTER", line 17
ORA-06512: at line 2



PL/SQL procedure successfully completed.

…once again the handled error has “moved” to the exception block of the function.

In terms of retrieving the error stack, it would appear that a combination of SQLERRM and DBMS_UTILITY.FORMAT_ERROR_BACKTRACE offer the most comprehensive and reliable information.
However, in order to further pin down where those pesky errors are originating we may need to turn to a time-honoured technique – albeit with a comparatively modern twist…

Location Markers with $$PLSQL_LINE

The $$PLSQL_LINE variable simply returns the line number of the stored program unit or anonymous block you’re currently in…

begin
    dbms_output.put_line('At line : '||$$plsql_line);
end;
/

At line : 2

PL/SQL procedure successfully completed.

SQL> 

By sprinkling a few dollars through our code, we should get a better (although still not necessarily exact) idea of where our error is originating.

I’m going to persevere with this transporter code. After all, they managed to get it working in the original Star Trek and that was way back in the 60’s…

create or replace package body transporter as
    function find_target 
        return varchar2
    is
        l_loc pls_integer;
        l_silly number;
    begin
        l_loc := $$plsql_line;
        l_silly :=  'Location or velocity unknown';
        exception when others then
            dbms_output.put_line('Error originating after line '||l_loc);
            raise;
    end find_target;

    procedure beam_me_up_scotty is
        l_target varchar2(30);
    begin
        -- engage the heisenburg compensator...
        l_target := find_target;
        dbms_output.put_line('Energize !');
    end beam_me_up_scotty;
end transporter;
/

Now, we should get a bit more information…

begin
    transporter.beam_me_up_scotty;
exception
    when others then
        dbms_output.put_line(sqlerrm);
        dbms_output.put_line(dbms_utility.format_error_backtrace); 
end;
/
Error originating after line 8
ORA-06502: PL/SQL: numeric or value error: character to number conversion error
ORA-06512: at "MIKE.TRANSPORTER", line 12
ORA-06512: at "MIKE.TRANSPORTER", line 19
ORA-06512: at line 2



PL/SQL procedure successfully completed.

SQL> 

The error actually originates from line 9 so that’s a pretty good approximation.
The downside is the aforementioned need to sprinkle assignments of $$PLSQL_LINE to a variable immediately before each action you perform.

Well, I’ve probably managed to annoy any Physics experts and Star Trek fans that happen to be reading. That’s before you even start thinking about PL/SQL Developers.
On the plus side I can say, hand-on-heart, that no cats were harmed in the writing of this post.


Filed under: Oracle, PL/SQL Tagged: $$plsql_line, dbms_utility.format_error_backtrace, dbms_utility.format_error_stack, pl/sql exceptions, pragma exception_init, raise_application_error, sqlcode, sqlerrm

Log Buffer #433: A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2015-07-24 10:05

This Log Buffer Edition covers Oracle, SQL Server and MySQL blogs of the running week.

Oracle:

  • While checking the sources of the Cassandra/NetBeans integration into GitHub yesterday, something went very badly wrong and ALL the source files in my Maven project disappeared!
  • AWR Reports, Performance Hub, historisches SQL Monitoring in 12c
  • Oracle Database Mobile Server 12c: Advanced data synchronization engine
  • ORA-39001, ORA-39000 and ORA-39142
  • ORA-15410: Disks in disk group do not have equal size

SQL Server:

  • SAN and NAS protocols and how they impact SQL Server
  • SQL Style Habits: Attack of the Skeuomorphs
  • Is It Worth Writing Unit Tests?
  • Large SQL Server Database Backup on an Azure VM and Archiving
  • Reporting Services: Drawing a Buffer on a Map

MySQL:

  • MySQL Tcpdump system : use percona-toolkit to analyze network packages
  • Replication in real-time from Oracle and MySQL into data warehouses and analytics
  • Altering tablespace of table – new in MySQL 5.7
  • MySQL QA Episode 8: Reducing Testcases for Engineers: tuning reducer.sh
  • MySQL upgrade 5.6 with innodb_fast_checksum=1

 

Learn more about Pythian’s expertise in Oracle, SQL Server and MySQL, as well as the author Fahd Mirza.

 

The post Log Buffer #433: A Carnival of the Vanities for DBAs appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

APEX 5.0.1 : We’re all patched up!

Tim Hall - Fri, 2015-07-24 09:32

APEX 5.0.1 was released about a week ago. I started to patch some stuff straight away. We were already on APEX 5.0 across the board, so we didn’t need to do any full installations, just patches.

During the patching I noticed we were getting some issues with supposed misconfiguration of static files. After clearing my browser cache, the message went away, so I tweeted this.

“Regarding APEX 5.0.1 patch. Clear your browser cache, or you may get that static files message, even though config is correct. :) #orclapex”

Practically before I hit return, Patrick Wolf came back with this.

“@oraclebase Thanks for letting us know. I have filed bug# 21463521 and fixed it for 5.0.2”

We’re a week on now and all our APEX installations are happily running 5.0.1. We had no issues with the upgrades and no problems with the apps.

We are small fry where APEX is concerned, so don’t take this as the green light to upgrade everything yourself. I’m just saying we had no problems with it, which is pretty cool. If APEX is a strategic environment for you, you will probably need to do a lot more testing than us. :)

Cheers

Tim…


MobaXterm 8.0

Tim Hall - Fri, 2015-07-24 08:19

I just noticed MobaXterm 8.0 was released a few days ago.

Downloads and changelog available in the usual places.

Happy unzipping!

Cheers

Tim…


AUDIT_SYS_OPERATIONS defaults to TRUE in #Oracle 12c

The Oracle Instructor - Fri, 2015-07-24 08:09

A small but remarkable change in Oracle Database 12c is that the default value of AUDIT_SYS_OPERATIONS is now TRUE. In other words, all actions done by the superuser SYS are now audited by default!

[oracle@uhesse ~]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Jul 24 15:23:10 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select name,value from v$spparameter where isspecified='TRUE';

NAME                                     VALUE
---------------------------------------- --------------------------------------------------
memory_target                            1073741824
control_files                            /u01/app/oracle/oradata/prima/control01.ctl
db_block_size                            8192
compatible                               12.1.0.2
db_recovery_file_dest                    /u02/fra
db_recovery_file_dest_size               2147483648
undo_management                          auto
undo_tablespace                          undotbs1
remote_login_passwordfile                exclusive
db_name                                  prima
diagnostic_dest                          /u01/app/oracle

11 rows selected.


SQL> show parameter sys_oper

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_sys_operations                 boolean     TRUE
SQL> select count(*) from scott.dept;

  COUNT(*)
----------
         4

SQL> show parameter audit_file

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      /u01/app/oracle/product/12.1.0
                                                 /dbhome_1/rdbms/audit
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
[oracle@uhesse ~]$ cd /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/audit

[oracle@uhesse audit]$ cat prima_ora_6204_20150724152310753136143795.aud
Audit file /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/audit/prima_ora_6204_20150724152310753136143795.aud
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1
System name:    Linux
Node name:      uhesse
Release:        3.8.13-68.2.2.el7uek.x86_64
Version:        #2 SMP Tue May 12 14:38:58 PDT 2015
Machine:        x86_64
Instance name: prima
Redo thread mounted by this instance: 1
Oracle process number: 41
Unix process pid: 6204, image: oracle@uhesse (TNS V1-V3)

Fri Jul 24 15:23:10 2015 +02:00
LENGTH : '160'
ACTION :[7] 'CONNECT'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[6] 'oracle'
CLIENT TERMINAL:[5] 'pts/1'
STATUS:[1] '0'
DBID:[10] '2113606181'
[Output shortened...]
Fri Jul 24 15:23:56 2015 +02:00
LENGTH : '185'
ACTION :[31] 'select count(*) from scott.dept'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[6] 'oracle'
CLIENT TERMINAL:[5] 'pts/1'
STATUS:[1] '0'
DBID:[10] '2113606181'

Something you might need to know as a DBA, don’t you think? :-)
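If you’d rather have the old behaviour back (or just want to confirm the setting), something along these lines should do it – AUDIT_SYS_OPERATIONS is a static parameter, so it needs SPFILE scope and an instance restart to take effect:

-- check the current setting
show parameter audit_sys_operations

-- revert to the pre-12c default; static parameter, so spfile scope plus a restart is required
alter system set audit_sys_operations=false scope=spfile;
shutdown immediate
startup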


Tagged: 12c New Features, security
Categories: DBA Blogs

Subquery Factoring (9)

Jonathan Lewis - Fri, 2015-07-24 05:34

Several years ago (eight to be precise) I wrote a note suggesting that Oracle will not materialize a factored subquery unless it is used at least twice in the main query. I based this conclusion on a logical argument about the cost of creating and using a factored subquery and, at the time, I left it at that. A couple of years ago I came across an example where, even with two uses of a factored subquery, Oracle still didn’t materialize it even though doing so would reduce the cost of the query – but I never got around to writing up the example, so here it is:


create table t1
as
select
        object_id, data_object_id, created, object_name, rpad('x',1000) padding
from
        all_objects
where
        rownum <= 10000
;

exec dbms_stats.gather_table_stats(user,'T1')

explain plan for
with gen as (
        select /*+ materialize */ object_id, object_name from t1
)
select
        g1.object_name,
        g2.object_name
from
        gen g1,
        gen g2
where
        g2.object_id = g1.object_id
;

select * from table(dbms_xplan.display);

You’ll notice that my original table has very wide rows, but my factored subquery selects a “narrow” subset of those rows. My target is to have an example where doing a tablescan is very expensive but the temporary table holding the extracted data is much smaller and cheaper to scan.

I’ve included a materialize hint in the SQL above, but you need to run the code twice, once with, and once without the hint. Here are the two plans – unhinted first:


============================
Unhinted - won't materialize
============================

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      | 10000 |   468K|   428   (2)| 00:00:03 |
|*  1 |  HASH JOIN         |      | 10000 |   468K|   428   (2)| 00:00:03 |
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   234K|   214   (2)| 00:00:02 |
|   3 |   TABLE ACCESS FULL| T1   | 10000 |   234K|   214   (2)| 00:00:02 |
---------------------------------------------------------------------------

==================================
Hinted to materialize - lower cost
==================================

--------------------------------------------------------------------------------------------------------- 
| Id  | Operation                  | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------- 
|   0 | SELECT STATEMENT           |                            | 10000 |   585K|   227   (2)| 00:00:02 |
|   1 |  TEMP TABLE TRANSFORMATION |                            |       |       |            |          |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D6664_9DAAEB7 |       |       |            |          | 
|   3 |    TABLE ACCESS FULL       | T1                         | 10000 |   234K|   214   (2)| 00:00:02 | 
|*  4 |   HASH JOIN                |                            | 10000 |   585K|    13   (8)| 00:00:01 | 
|   5 |    VIEW                    |                            | 10000 |   292K|     6   (0)| 00:00:01 | 
|   6 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D6664_9DAAEB7 | 10000 |   234K|     6   (0)| 00:00:01 | 
|   7 |    VIEW                    |                            | 10000 |   292K|     6   (0)| 00:00:01 | 
|   8 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D6664_9DAAEB7 | 10000 |   234K|     6   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------

Clearly the optimizer isn’t considering the costs involved. If I add the predicate “where object_id > 0” (which identifies ALL the rows in the table), materialization occurs unhinted (with the same costs reported as for the hinted plan above). My tentative conclusion is that the transformation is a heuristic one that follows the rule “two or more appearances of the subquery and some indication of row selection in the subquery rowsource”. (In fact if the rowsource is “select * from pipeline_function” the requirement for subsetting doesn’t seem to apply.)
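For reference, the unhinted version that does materialize is simply the original query with that predicate added to the factored subquery – something like this:

explain plan for
with gen as (
        select object_id, object_name from t1
        where  object_id > 0    -- identifies every row, but is enough to trigger materialization
)
select
        g1.object_name,
        g2.object_name
from
        gen g1,
        gen g2
where
        g2.object_id = g1.object_id
;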

The plans above came from 11.2.0.4 but I got the same result, with a slight difference in costs, in 12.1.0.2. It’s worth pointing out that despite Oracle apparently ignoring the costs when deciding whether or not to materialize, it still seems to report self-consistent values after materialization: the 227 for the plan above is the 214 for creating the temporary table plus the 13 for deriving the hash join of the two copies of the temporary table.


Giving D2L Credit Where Credit Is Due

Michael Feldstein - Thu, 2015-07-23 21:20

By Phil HillMore Posts (345)

Michael and I have made several specific criticisms of D2L’s marketing claims lately culminating in this blog post about examples based on work at the University of Wisconsin-Milwaukee (UWM) and California State University at Long Beach (CSULB).

I understand that other ed tech vendors make marketing claims that cannot always be tied to reality, but these examples cross a line. They misuse and misrepresent academic outcomes data – whether based on public research or internal research – and essentially take credit for their technology “delivering results”.

This week brought welcome updates from D2L that go a long way towards addressing the issues we raised. As of Monday, I noticed that the ‘Why Brightspace? Results’ page now has links to supporting material for each claim, and the UWM claim has been reworded. Today, D2L released a blog post explaining these changes and admitting the mistakes. D2L even changed the web page to allow text selection for copy / paste. From the blog post:

Everyone wants more from education and training programs—so it’s critical that our customers are part of the process of measurement and constant improvement.

At Fusion, our customers came together to share new ideas and practices to push education forward. They like to hear about the amazing results, like U-Pace, which we post on our website. In our excitement to share the great results our customers are seeing through their programs, we didn’t always provide the details around the results. When we make mistakes, it’s our job to fix it—as we are doing now.

U-Pace is the specific program at UWM (course redesign from large lecture to self-paced / mastery approach), and D2L now links to a documented case study and quotes this case study in the blog post.

We have a Customer Success Program in place where approvals from our clients are acquired before we post anything about them. Stories are revisited every six months to make sure that they’re still valid and accurate. However, a recent customer success story was mistakenly posted on our website without their permission or knowledge. We will be doubling down on our efforts to help ensure that this doesn’t happen again, and we will work harder to provide citations for all the facts.

This “without their permission or knowledge” paragraph refers to a claim about CSULB.

Make no mistake, we’re extremely proud of what our clients are accomplishing. Our customers’ innovation, dedication, and just plain awesomeness is making a huge difference—and we’re proud to be a part of it. We will continue to measure and improve our offerings, listen to our community for suggestions, and when warranted, share their results. Here’s to them!

Kudos to D2L for these admissions and changes. Well done.

Notes and Caveats

While the overall change is very positive, I do have a few additional notes and caveats to consider.

  • The blog post today should have come from Renny Monaghan (Chief Marketing Officer) or John Baker (CEO). The blog post was written by Barry Dahl[1], and unless I misunderstand, he is their lead for community engagement – building a user community that is mostly behind-login and not public-facing. The “mistakes” were made in official marketing and company communications. The leader of the department in charge of official messaging (Renny) or the company leader (John) should have taken ownership of what happened in the past and the corrections they are making.
  • In the blog post section describing the U-Pace program at the UWM, I would have included the description of moving from large lecture to self-paced / mastery approach. That change should not be embedded as one of “many factors that came together for UWM to achieve the results that they did, and that the increases in student success are not all attributed to their use of Brightspace.” That change to self-paced / mastery was the intervention, and all other factors are secondary. The case study describes the program quite well, but such an omission in the blog post is misleading.
  • The blog post only references UWM and CSULB examples, yet the ‘Why Brightspace? Results’ page added links to all claims. Changing them all was the right move.
  • Apparently, specific criticisms do not carry a CC-BY license.

These are welcome changes.

  1. For what it’s worth, Barry does great work for the company

The post Giving D2L Credit Where Credit Is Due appeared first on e-Literate.

Oracle Priority Support Infogram for 23-JUL-2015

Oracle Infogram - Thu, 2015-07-23 12:08

RDBMS
ORA-54033 and the Hidden Virtual Column Mystery, from All Things SQL.
Oracle Database In-Memory Bundle Patch 10 Released, from Oracle Database In-Memory.
Oracle 12 and latches, part 2, from Frits Hoogland Weblog.
OVM
Oracle VM 3.3.3 Officially Released!!!!, from the Simon Coter Blog.
SQL Developer
Search and Replace in Oracle SQL Developer Data Modeler, from that JEFF SMITH.
WebLogic
WebLogic Partner Community Newsletter July 2015, from WebLogic Partner Community EMEA.
SOA
Performance Study – REST vs SOAP for Mobile Applications, from SOA & BPM Partner Community Blog.
And from the same source: 
Purging and partitioned schemas
ADF
Auto Refresh for ADF BC Cached LOV, from Andrejus Baranovskis Blog.
Hadoop
How to Build a Hadoop Cluster Using OpenStack, from The art of virtualization.
Hyperion
Patch Set Update: Hyperion Profitability and Cost Management 11.1.2.4.100, from SOA & BPM Partner Community Blog.
And another update announcement:
Patch Set Update: Hyperion Profitability and Cost Management 11.1.2.4.100, from Business Analytics - Proactive Support.
OBIEE
New Knowledge Article: OBIEE 11g: Problem invoking WLST - Traceback' when Installing OBIEE 11.1.1.9.0 on Windows 2012 R2, from Business Analytics - Proactive Support.
Ops Center
Kernel Zones support in 12.3, from the Ops Center blog.
Certification
What's New With Oracle Certification This Week?, from Oracle Certification.
EBS
From the Oracle E-Business Suite Support blog:
Oracle E-Business Suite Releases 11i & 12 Critical Patch Update
Have you Discovered the Power of the iProcurement Change Request Analyzer?
Keep Those Sales Orders Flowing
How to run the new XML version of the APList Invoice Data Collection Diagnostic
July 2015 WEBCAST – BI Publisher Overview
From the Oracle E-Business Suite Technology blog:
Can You Use IE 11 Enterprise Mode or Compatibility View With EBS?
Oracle Internet Directory 11gR1 11.1.1.9 Certified with E-Business Suite
Oracle Access Manager 11.1.2.3 Certified with E-Business Suite

…And Finally
Two really interesting items this week. For those of you who always wanted to have a tricorder, the first iteration is on its way to your phone. It starts out at $249, but I’m betting this will be a standard item on smartphones before too long. Then you can analyze your pizza for alien life forms: Pocket Spectrometer is the Star Trek Tricorder you Always Wanted, from Epic Times.
And another article fraught with Star Trek implications (data crystals here we come): Scientists have finally discovered massless particles, and they could revolutionise electronics, from ScienceAlert.

Best Practices for Using My Oracle Support to Find the Patches You Need

Joshua Solomin - Thu, 2015-07-23 11:46

Benefits Overview:

  • Help save time and improve your effectiveness in finding patches that you need.
  • Learn about product specific best practices for patching and upgrading.
  • Avoid issues by staying current on your Oracle products and applying critical fixes.


1. Review Oracle Support Lifecycle Advisors for patching and upgrading best practices
View Oracle Support Lifecycle Advisors (Document ID 250.2) to learn about best practices for patching and upgrading the Oracle products you own.

2. Use the Patch Simple Search when you know the Patch ID

3. Use the Patch Advanced Search to search based on a product and release
Include additional filters to find Oracle recommended patches and to filter by platform or other attributes.

4. Use the Patch Related Activity region to find recently viewed or downloaded patches

5. Download the latest Critical Patch Updates and Security Alerts

  • Oracle strongly recommends staying current with the latest Critical Patch Updates and Security Alerts.
  • Critical Patch Updates are collections of security fixes for Oracle products and are released quarterly.
  • Security Alerts are issued for vulnerability fixes deemed too critical to wait for distribution in the next Critical Patch Update.
  • The latest information is always available from the "Critical Patch Updates and Security Alerts" link on the Patching Quick Links region.

6. For more information on searching for patches

Planet Hard Drive? Muddled nonsense from Scientific American

FeuerThoughts - Thu, 2015-07-23 07:59
In the August 2015 issue of Scientific American, I came across an article titled "Planet Hard Drive", a "thought experiment" arguing that we can think of Earth as a kind of "hard drive" and "although Earth has an enormous capacity to store information, order is still rare....but the growth of order on Earth also stems from the production of cultural information."

The article is behind a paywall, so I cannot reproduce it here, but if you are a subscriber, here you go.

I find it generally hard to read SciAm these days, as well as many other scientific sources, because of the pervasive species-ism (humans unique, more important than all others) found, sadly, among scientists.

But this article was, I thought, a real disappointment, coming from SciAm. I sent this letter to the author:

Professor Hidalgo, 
I read your SciAm article with the above title, and I found it scientifically sloppy and offensively tone deaf, given the state of our planet today (specifically the threat of climate change and human-caused extinctions and species degradation).
You might not read past that initial paragraph but if you do:
Scientifically Sloppy
I am all for interesting “thought experiments”, but such an experiment should have a reasonable amount of logical consistency. I think your experiment fails in this regard.
Specifically, you talk about the growth of order on Earth from the production of cultural information.
This implies a clear net positive change in order due to our intensely “ordered” products. 
Yet previously, you recognized that there is order (lots of it) in living things. 
And that's where I see a very deep (speciesist-driven) fallacy: to create our products, humans destroy a vast number of living things and therefore wipe out correspondingly enormous amounts of order.
Vast parts of the rainforest, extinction of entire species, degradation of the ocean, etc., etc., etc. - do you really think that if you even attempted to conceptualize the volume of order sacrificed to build iPhones, you could come out with a net positive growth in order?
I suppose it might be remotely possible - but you don’t even address this trade-off, making your argument incomplete and sloppy. I am very surprised that SciAm did not insist on a more rigorous treatment.
Sadly, you seem to blithely accept the destruction of life on our planet in order to manifest our culture-as-thought as products.
Which brings me to…
Offensively Tone Deaf
Your insistence on seeing the entire world through a human filter and imposing human paradigms onto the rest of the natural world is shocking, given the growing awareness (especially among the most rational of us, like many scientists).
“A tree, for example, is a computer”
“Objects of this kind [manufactured products] are particularly special.”
“Biological cells are finite computers”
“People are also limited, and we transcend our finite computational capacities by forming social and professional networks.”
“Special” “Transcend”
You use words that impute relentlessly positive values to human activity. 
Yet if you do not place humans “above” all others, you could at least say (my changes in bold):
“People are also limited, and we augment our finite computational capacities by forming social and professional networks. A necessary consequence of this augmentation is the destruction of the computational capacities of billions of other living creatures.”
At the very end of your muddled thought experiment, you finally hint at a bigger picture:
“The resulting hyperconnected society will present our species with some of the most challenging ethical problems in human history.”
Ah, ethics! Finally! Professor Hidalgo will now point out the grave price paid by our planet and co-inhabitants for humanity’s desire for comfort and convenience, but....
No, no. For you, like way too many other humans, all that matters is the human species.
“We could lose aspects of our humanity that some of us consider essential: for example, we might cheat death.”
Now that would be a real ethical disaster (cheating death) - precisely because it means accelerated devastation of our planet and non-humans.
But that doesn’t seem to even register in your thinking.

Mobilizing Siebel CRM Documents with Documents Cloud Service

WebCenter Team - Thu, 2015-07-23 07:24

As you know, Oracle Documents Cloud Service is not just an Enterprise File Sync and Share (EFSS) solution in the Cloud but more than that, it offers a cloud platform for documents management, storage and collaboration to drive business processes hooked to business applications.

Today we are going to look at how Siebel CRM users can extend their documents, embed attachments and drive collaboration in the Cloud using Oracle Documents Cloud Service.

Integration between Documents Cloud and Siebel CRM allows Siebel users to store file attachments in the cloud, dynamically allowing you to share this content with your team, both internally and externally, by defining granular permission privileges. Storing it in the Cloud also means the content is available to you anytime, anywhere and on any device - on the web, desktop or even your mobile devices - giving mobility to your content like never before. With inherent granular security controls, leveraging Oracle Documents Cloud for document storage, access and sharing also means security assurance, with your information protected at rest, in motion and at access points. Your documents are available to you 24 X 7, in context, and are always up to date. Once you have set up Documents Cloud Service with Siebel, you can simply access your relevant files seamlessly from within your application, like so:


What's more, Documents Cloud gives you the ability to share these documents with other Oracle Cloud solutions such as Sales Cloud and, in the future, Service Cloud, Oracle Social Network and others, so that you have a seamless work experience throughout the business process and contextual documents are available for your use anytime from within your application of choice. Version control ensures there aren't multiple versions floating around that could lead to erroneous decision making.

A simple drag and drop interface makes document collaboration easy and intuitive. You can store attachments for any Siebel entity using Documents Cloud. With the scalable, trusted Oracle Cloud architecture, you significantly reduce your CapEx as well as maintenance and upgrade costs down the line. The service is available as soon as you "switch it on", giving your documents the mobility you need and even offline access.

Implementing the solution in a Siebel CRM deployment is easy, with step-by-step instructions listed in this whitepaper. Leveraging Oracle Documents Cloud with Oracle Siebel means lower TCO, a proven integration model, the same foundational architecture and one solution partner.

Here's a brief video explaining it all. Take a look and make your Siebel documents mobile with Documents Cloud Service today.

Have some questions? Our colleagues, John Bedford on the Siebel side along with the Documents Cloud product management team, have compiled this FAQ for you. For additional questions and follow up, please do leave us a comment here or contact your Oracle account manager.

NetBeans 8.1 Remote Debugging with WebLogic

Steve Button - Thu, 2015-07-23 06:39
Need to debug your application?  With NetBeans 8.1 (dev) and WebLogic it's very easy to do.

First start your WebLogic server in debug mode.  The startup scripts generated for a domain provide an option to start the server in debug mode. 

To run in debug mode, set the environment variable "debugFlag" to a value of "true" and  start the server.

$ export debugFlag="true"
$ ./startWebLogic.sh

 This launches the server with a command line such as that shown below, which sets the standard Java debug properties:
Starting WLS with line:
/Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/bin/java -server -Xdebug -Xnoagent -Xrunjdwp:transport=dt_socket,address=8453,server=y,suspend=n -Djava.compiler=NONE  -Xmx512m -Dweblogic.Name=myserver -Djava.security.policy=/tmp/dev_update2/wls12130/wlserver/server/lib/weblogic.policy  -Xverify:none -Djava.endorsed.dirs=/Library/Java/JavaVirtualMachines/jdk1.8.0_40.jdk/Contents/Home/jre/lib/endorsed:/tmp/dev_update2/wls12130/wlserver/../oracle_common/modules/endorsed  -ea -da:com.bea... -da:javelin... -da:weblogic... -ea:com.bea.wli... -ea:com.bea.broker... -ea:com.bea.sbconsole... -Dwls.home=/tmp/dev_update2/wls12130/wlserver/server -Dweblogic.home=/tmp/dev_update2/wls12130/wlserver/server -Dweblogic.utils.cmm.lowertier.ServiceDisabled=true weblogic.Server
The console will display a message confirming the debug Java VM is using:
Listening for transport dt_socket at address: 8453

Now on the NetBeans side, create a new Server entry for your WebLogic instance using the new Remote Domain option:
 

Check the Server debug mode enabled option and specify the port (8453 by default):

 
Now specify the new server as the destination to use to run the application:

Now to debug your application, simply set a break point in your code and select the Debug option for the project:


 
This will build and deploy the application to the WebLogic server remotely, then open the Debugger in the IDE and connect it automatically to the WebLogic debug port.

From here you can use all the standard debug facilities to step through your code, view stacks, instance data and so forth.

Debug away!




WordPress 4.2.3 : Hurry up and … let it fix itself…

Tim Hall - Thu, 2015-07-23 06:28

WordPress 4.2.3 has been released.

It contains fixes for some pretty nasty stuff. Usually, the updates have to be manually triggered for a day or so before the auto-update feature picks them up. I was on the blog this morning and there were no “manually triggered auto-updates” available, so it looks like this one has been pushed straight out, which probably makes sense.

By the time you’ve read this you are probably up to date already, but if not, get on your blog and give it a nudge. :)

You can see the changelog here.

Cheers

Tim…


Emergency Monitoring, Real-Time ADDM, Compare Period ADDM, and Active Session History (ASH) Analytics

Tim Hall - Thu, 2015-07-23 05:00

My recent dalliance with YouTube (141 subscribers and growing! :) ) has left the blog feeling a little bit unloved of late, but then why write when you can waffle in the car? :)

Anyway, the 12c learning train keeps on rolling. I’ve recently put the following articles live.

These are all listed as 12c new features in the 1Z0-060 “Upgrade to Oracle Database 12c” OCP upgrade exam, which I find a bit odd. Two of them are EM12c features, not database features. The other two are existing EM12c features that are now available directly from the database, but I can’t see myself ever using them on the server when it is much easier to do it from Cloud Control. Whatever! :)

Getting close to the end of the OCP syllabus now… I’ll soon have to consider sitting the exam…

Cheers

Tim…


Unizin Updates on Florida State System and Acquisition of Courseload

Michael Feldstein - Wed, 2015-07-22 19:29

By Phil Hill

I’m not sure when e-Literate was awarded the exclusive rights for non-PR Unizin coverage, but there were two announcements this week to cover.

State University System of Florida Joins

The first announcement is an update and confirmation of my recent post about the new associate membership option. If a member institution (one of the 11 members paying $1.050 million) sponsors its statewide system, that system can join Unizin as an “associate member” for $100 thousand per year, but without a board seat or a vote on product direction. This week the State University System of Florida (SUSFL) announced it is joining Unizin.

Building on its growing record of collaboration, the State University System of Florida, comprised of Florida’s 12 public universities, has joined Unizin, a group with a mission to have more control and influence over the digital learning ecosystem.

The decision helps secure Florida’s leadership in the realm of digital learning and gives access to tools under development, including a Digital Objects Repository and Learning Analytics. Florida is the first State University System to join the collaborative organization, which is a consortium of major research universities. The University of Florida is a founding member, alongside other top universities such as Pennsylvania State University, Ohio State University and the University of Michigan. The organization is a not-for-profit service operation and its membership is by invitation only.

It is not clear which of the 12 public universities beyond the University of Florida are actually planning to participate in Unizin. If you want details on the SUSFL plans and what associate membership means, go read the earlier post.

Courseload Acquisition And Content Relay

The second update is that Unizin acquired the IP, trademark, and remains of Courseload, a provider of an e-reader platform for digital textbooks. From the announcement:

Unizin announced the acquisition of the Courseload software today. Courseload includes an eText reader platform and collaborative learning tools for the delivery of digital learning materials including Open Educational Resources, faculty-authored course packs, and publisher content. The addition of Courseload is a vital component for connecting content to learners in Unizin’s digital learning ecosystem.

This move now determines the second component of Unizin, as the plan is for the acquired Courseload employees to modify and develop a portion of their software to become the basis for the Content Relay. Previously Unizin had been planning to license or contract another organization to provide the Content Relay.

This acquisition means that Unizin will now be in the software development business, and not just integrating various products. This approach changes what had previously been the plan not to develop product, as Unizin co-founder and co-chairman of the board Brad Wheeler shared with me last year.

Unizin is not a Community Source effort in the way that I understand Community Source as we started applying the label 10+ years ago. Unizin is better understood, as you have reported, as a cloud-scale service operator somewhat like I2 [Internet2]. It does not plan to do lots of software development other than as needed for integrations. No biggie, just a nuanced observation from the end of the story.

When I asked Brad if this means that Unizin is ruling out product development, he replied:

Unizin is working on its roadmap for each area. If we do need to head down some development approach that is more than integration, we’ll give thought to the full range of options for best achieving that, but there is no plan to begin an open/community source effort at this time.

Courseload is based in Indianapolis, IN while Unizin is based in Austin, TX. This creates an interesting situation where a new organization will be managing a remote development team that likely outnumbers the pre-existing Unizin employees.

Common Origins

The Chronicle described the origins of Courseload in 2010.

Courseload, the e-book broker, started in 2000, when a co-founder, Mickey Levitan, a former Apple employee inspired by the company’s transformative role in the music industry, devised the idea and teamed up with a professor at Indiana University at Bloomington to try it. But the company failed to find enough takers, and it all but shut down after a brief run.

Then last year an official at Indiana, Bradley C. Wheeler, called Mr. Levitan and talked him into trying again.

Update (7/23): The following paragraph has been revised based on private communication from a source, which pointed out that the Crunchbase data is wrong in this case.

Based on that company revival, in 2012 Courseload raised $1.6 million from IU’s Innovate Indiana fund according to Crunchbase. In 2012 the Innovate Indiana Fund, an organization that represents Indiana University’s push for economic development, joined other investment groups in helping to fund the new Courseload. The IIF investment was in the lower single digit % of the total raised. The tight relationship with IU was further described in the Innovate Indiana end-of-year 2012 report.

In 2000, Mickey Levitan and IU Professor Alan Dennis had an idea that was ahead of its time. Through Courseload, the start-up learning platform company they cofounded, the two endeavored to make college course materials accessible online.

A decade later, Indiana University became the first customer, implementing the Courseload platform across all its campuses. Now with 50 clients and 32 employees, Courseload is leading the online course text revolution—lowering costs for students and providing capabilities that can improve educational outcomes, while offering professors the discretion to use the platform on a course-by-course basis. [snip]

Levitan is grateful for the company’s broad-reaching partnership with IU. Early support from [VP of IT Brad] Wheeler was critical to the company’s success, Levitan says. “He’s a wonderful partner and an extraordinary leader—a visionary who is ready to go out and shape the
world rather than be shaped by it.”

Levitan is also grateful for the company’s early and ongoing relationship with the IU Research and Technology Corporation (IURTC). Tony Armstrong, president and CEO of the IURTC, identified an early funding opportunity for Courseload through the Innovate Indiana Fund. Kenneth Green, manager of the Innovate Indiana Fund, sits on Courseload’s board of directors.

This Inside Higher Ed article from 2012 highlights the common origins of both Unizin and Courseload – both in terms of founder, Internet2, and common justification. As a reminder, Unizin technically operates as part of Internet2.

In a session at the 2011 Educause conference in October, Bradley Wheeler, the chief information officer at Indiana University, issued a challenge to his colleagues. Unless universities assert their power as customers, the vendors that sell them products and services will continue squeezing those institutions for cash while dictating the terms under which they go digital.

That conversation revolved around expensive, institution-level investments such as learning-management platforms and enterprise resource planning software. Now Wheeler and his colleagues are looking to apply the same principles of “aggregated demand” to help students save money on electronic textbooks.

Internet2, a consortium of 221 colleges and universities, which last year brokered landmark deals with Box.com and Hewlett-Packard that gave its members discounts on cloud computing services, announced today that it had entered into a contract with McGraw-Hill, a major textbook publisher, aimed at creating similar discounts for students on digital course materials.

Moving Ahead

Unizin is now up to 11 full member institutions and 1 state-wide system associate member. Despite or because of the tangled paths of Unizin and Courseload, we finally have some clarity on the second component (the Content Relay) of the consortium’s services. It’s not what I would have guessed ahead of time, but I have to admit there seems to be a willing list of schools ready to join.

The post Unizin Updates on Florida State System and Acquisition of Courseload appeared first on e-Literate.

MORE Readiness Content for Oracle Applications Cloud Release 10

Linda Fishman Hoyle - Wed, 2015-07-22 13:08

A Guest Post by Senior Director Louvaine Thomson (pictured left), Product Management, Oracle Cloud Applications

Last month we announced these Release 10 preview materials:

  • Spotlight Videos: Hosted by senior development staff, these webcast-delivered presentations highlight top level messages and product themes, and are reinforced with a product demo
  • Release Content Documents (RCDs): This content includes a summary level description of each new feature and product

We are now pleased to announce the next significant wave of readiness content. Specifically, the following content types are now available on the Release 10 Readiness page:

  • What's New: Learn about what's new in the upcoming release by reviewing expanded discussions of each new feature and product, including capability overviews, business benefits, setup considerations, usage tips, and more
  • Release Training: Created by product management, these self-paced, interactive training sessions are deep dives into key new enhancements and products. Also referred to as Transfers of Information (TOIs)
  • Product Documentation: Oracle's online documentation includes detailed product guides and training tutorials to ensure your successful implementation and use of the Oracle Applications Cloud




Access is simple:
1. From the Cloud Site: Click on Resources > Release Readiness to access the Release Readiness page, or