
Feed aggregator

August 12: Atradius Collections Oracle Sales Cloud Customer Forum

Linda Fishman Hoyle - Wed, 2015-07-29 11:06

Join us for another Oracle Customer Reference Forum on August 12th, 2015 at 8:00 a.m. PT / 11:00 a.m. ET / 5:00 p.m. CEST.

Sonja van Haasteren, Global Customer Experience Manager of Atradius Collections, will talk about the company’s journey with Oracle CX products, focused on Oracle Sales Cloud with Oracle Marketing Cloud, and its path to expand with Oracle Data Cloud.

Atradius Collections is a global leader in trade-invoice-collection services. It provides solutions to recover domestic and international trade invoices. Atradius Collections handles more than 100,000 cases a year for more than 14,500 customers, covering over 200 countries.

Register now to confirm your attendance for this informative event on August 12.

TekStream Reduces Project Admin Costs by 30% with Oracle Documents Cloud

WebCenter Team - Wed, 2015-07-29 07:54

Read this latest announcement from Oracle to find out how TekStream Solutions, a solution services company in North America, streamlined project management and administration and improved client project delivery with Oracle Documents Cloud Service, an enterprise-grade cloud collaboration and file sync and share solution. Learn how, within the first month of use, TekStream cut project administration costs by 30% and reduced complexity, not only driving client results faster but also providing a superior project experience to both its consultants and its clients.

And here's a brief video with Judd Robins, executive vice president of Consulting Services at TekStream Solutions, as he discusses the specific areas where they were looking to make improvements and how Oracle Documents Cloud enabled easy yet secure cloud collaboration, not only among its consultants who are always on the go, but also with its clients.

To learn more about Oracle Documents Cloud Service and how it can help your enterprise, visit us at cloud.oracle.com/documents.

Existence

Jonathan Lewis - Wed, 2015-07-29 06:05

A recent question on the OTN Database Forum asked:

I need to check if at least one record present in table before processing rest of the statements in my PL/SQL procedure. Is there an efficient way to achieve that considering that the table is having huge number of records like 10K.

I don’t think many readers of the forum would consider 10K to be a huge number of records; nevertheless it is a question that could reasonably be asked, and it should prompt a little discussion.

First question to ask, of course, is: how often do you do this, and how important is it to be as efficient as possible? We don’t want to waste a couple of days of coding and testing to save five seconds every 24 hours. Some context is needed before charging into high-tech geek solution mode.

Next question is: what’s wrong with writing code that just does the job? If it finds that the job is complete after zero rows then you haven’t wasted any effort. This seems reasonable in (say) a PL/SQL environment, where we might discuss the following pair of strategies:


Option 1:
=========
-- execute a select statement to see if any rows exist

if (flag is set to show rows) then
    for r in (select all the rows) loop
        do something for each row
    end loop;
end if;

Option 2:
=========
for r in (select all the rows) loop
    do something for each row;
end loop;

If this is the type of activity you have to do then it does seem reasonable to question the sense of putting in an extra statement to see if there are any rows to process before processing them. But there is a possible justification for doing this. The query to find just one row may produce a very efficient execution plan, while the query to find all the rows may have to do something much less efficient even when (eventually) it finds that there is no data. Think of the differences you often see between a first_rows_1 plan and an all_rows plan; think about how Oracle can use index-only access paths and table elimination – if you’re only checking for existence you may be able to produce a MUCH faster plan than you can for selecting the whole of the first row.
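
If you want to see that difference for yourself, here's a quick sketch (mine, not part of the original article) that simply hints the same query both ways and compares the plans; it uses the t1 demo table that is built further down this post:

-- Sketch only: ask for the "first row quickly" plan and the "all rows" plan
-- against the same predicate, then compare the two plans that come back.
select /*+ first_rows(1) */ * from t1 where n1 = 100;
select * from table(dbms_xplan.display_cursor(null,null,'basic'));

select /*+ all_rows */ * from t1 where n1 = 100;
select * from table(dbms_xplan.display_cursor(null,null,'basic'));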

Next question, if you think that there is a performance benefit from the two-stage approach: is the performance gain worth the cost (and risk) of adding a near-duplicate statement to the code – that’s two statements that have to be maintained every time you make a change. Maybe it’s worth “wasting” a few seconds on every execution to avoid getting the wrong results (or an odd extra hour of programmer time) once every few months. Bear in mind, also, that the optimizer now has to optimize two statements instead of one – you may not notice the extra CPU usage in testing but perhaps in the live environment the execution benefit will be eroded by the optimization cost.

Next question, if you still think that the two-stage process is a good idea: will it result in an inconsistent database state? If you select and find a row, then run the main query and find that there are no rows to process because something modified and “hid” the row you found on the first pass – what are you going to do? Will this make the program crash? Will it produce an erroneous result on this run, or will a silent side effect be that the next run produces the wrong results? (See Billy Verreynne’s comment on the original post.) Should you set the session to “serializable” before you start the program, or maybe lock a critical table to make sure it can’t change?
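
For reference, here's a sketch (my illustration, not from the original post) of what those two heavyweight safeguards look like in SQL – both carry an obvious concurrency cost, which is part of the point of the question:

-- Sketch only: run the whole job as a serializable transaction ...
set transaction isolation level serializable;

-- ... or lock the critical table for the duration of the job
-- (this blocks concurrent DML on t1 until you commit or roll back).
lock table t1 in share mode;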

So, assuming you’ve decided that some form of “check for existence then do the job” is both desirable and safe, what’s the most efficient strategy? Here’s one of the smarter solutions that minimises risk and effort (in this case using a PL/SQL environment).


select  count(*)
into    m_counter
from    dual
where   exists ({your original driving select statement})
;

if m_counter = 0 then
    null;
else
    for c1 in {your original driving select statement} loop
        -- do whatever
    end loop;
end if;

The reason I describe this solution as smarter, with minimum risk and effort, is that (a) you use EXACTLY the same SQL statement in both locations so there should be no need to worry about making the same effective changes twice to two slightly different bits of SQL and (b) the optimizer will recognise the significance of the existence test and run in first_rows_1 mode with maximum join elimination and avoidance of redundant table visits. Here’s a little data set I can use to demonstrate the principle:


create table t1
as
select
        mod(rownum,200)         n1,     -- scattered data
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from
        dual
connect by
        level <= 10000
;

delete from t1 where n1 = 100;
commit;

create index t1_i1 on t1(n1);

begin
        dbms_stats.gather_table_stats(
                user,
                't1',
                cascade => true,
                method_opt => 'for all columns size 1'
        );
end;
/

It’s just a simple table with an index, but the index isn’t very good for finding the data – it’s repetitive data widely scattered through the table: 10,000 rows with only 200 distinct values. But check what happens when you do the dual existence test – first we run our “driving” query to show the plan that the optimizer would choose for it, then we run with the existence test to show the different strategy the optimizer takes when the driving query is embedded:


alter session set statistics_level = all;

select  *
from    t1
where   n1 = 100
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

select  count(*)
from    dual
where   exists (
                select * from t1 where n1 = 100
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

Notice how I’ve enabled rowsource execution statistics and pulled the execution plans from memory with their execution statistics. Here they are:


select * from t1 where n1 = 100

-------------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |    38 (100)|      0 |00:00:00.01 |     274 |
|*  1 |  TABLE ACCESS FULL| T1   |      1 |     50 |    38   (3)|      0 |00:00:00.01 |     274 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N1"=100)

select count(*) from dual where exists (   select * from t1 where n1 = 100  )

---------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |     3 (100)|      1 |00:00:00.01 |       2 |
|   1 |  SORT AGGREGATE    |       |      1 |      1 |            |      1 |00:00:00.01 |       2 |
|*  2 |   FILTER           |       |      1 |        |            |      0 |00:00:00.01 |       2 |
|   3 |    FAST DUAL       |       |      0 |      1 |     2   (0)|      0 |00:00:00.01 |       0 |
|*  4 |    INDEX RANGE SCAN| T1_I1 |      1 |      2 |     1   (0)|      0 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( IS NOT NULL)
   4 - access("N1"=100)

For the original query the optimizer did a full tablescan – that was the most efficient path. For the existence test the optimizer decided it didn’t need to visit the table for “*” and it would be quicker to use an index range scan to access the data and stop after one row. Note, in particular, that the scan of the dual table didn’t even start – in effect we’ve got all the benefits of a “select {minimum set of columns} where rownum = 1” query, without having to work out what that minimum set of columns was.
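
For comparison, here's a sketch (my illustration, not from the original post) of the hand-crafted alternative that the dual/exists pattern saves you from writing – you have to pick a minimal (ideally indexed) column yourself and handle the no-data case explicitly; m_value is a hypothetical variable:

declare
        m_value t1.n1%type;
begin
        select  n1
        into    m_value
        from    t1
        where   n1 = 100
        and     rownum = 1;
        -- a row exists: go on and run the full driving query
exception
        when no_data_found then
                null;   -- nothing to process
end;
/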

But there’s an even more cunning option – remember that we didn’t scan dual when there were no matching rows:


for c1 in (

        with driving as (
                select  /*+ inline */
                        *
                from    t1
        )
        select  /*+ track this */
                *
        from
                driving d1
        where
                n1 = 100
        and     exists (
                        select
                                *
                        from    driving d2
                        where   n1 = 100
                )
) loop

    -- do your thing

end loop;

In this specific case the subquery would automatically go inline, so the hint here is actually redundant; in general you’re likely to find the optimizer materializing your subquery and bypassing the cunning strategy if you don’t use the hint. (One of the cases where subquery factoring doesn’t automatically materialize is when you have no WHERE clause in the subquery.)
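
To show what "bypassing the cunning strategy" would look like, here's a sketch (mine, not from the original post) of the same query hinted the other way – with a materialized subquery the factored result set is built once in full, so the optimizer can no longer use a cheap index probe for the existence test:

with driving as (
        select  /*+ materialize */      -- the behaviour you want to avoid here
                *
        from    t1
)
select  *
from    driving d1
where   n1 = 100
and     exists (
                select  *
                from    driving d2
                where   n1 = 100
        );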

Here’s the execution plan pulled from memory (after running this SQL through an anonymous PL/SQL block):


SQL_ID  7cvfcv3zarbyg, child number 0
-------------------------------------
WITH DRIVING AS ( SELECT /*+ inline */ * FROM T1 ) SELECT /*+ track
this */ * FROM DRIVING D1 WHERE N1 = 100 AND EXISTS ( SELECT * FROM
DRIVING D2 WHERE N1 = 100 )

---------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |    39 (100)|      0 |00:00:00.01 |       2 |
|*  1 |  FILTER            |       |      1 |        |            |      0 |00:00:00.01 |       2 |
|*  2 |   TABLE ACCESS FULL| T1    |      0 |     50 |    38   (3)|      0 |00:00:00.01 |       0 |
|*  3 |   INDEX RANGE SCAN | T1_I1 |      1 |      2 |     1   (0)|      0 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( IS NOT NULL)
   2 - filter("T1"."N1"=100)
   3 - access("T1"."N1"=100)

You’ve got just one statement – and you’ve only got one version of the complicated text because you put it into a factored subquery; but the optimizer manages to use one access path for one instantiation of the text and a different one for the other. You get an efficient test for existence and only run the main query if some suitable data exists, and the whole thing is entirely read-consistent.

I have to say, though, I can’t quite make myself 100% enthusiastic about this code strategy – there’s just a nagging little doubt that the optimizer might come up with some insanely clever trick to try and transform the existence test into something that’s supposed to be faster but does a lot more work; but maybe that’s only likely to happen on an upgrade, which is when you’d be testing everything very carefully anyway (wouldn’t you) and you’ve got the “dual/exists” fallback position if necessary.

Footnote:

Does anyone remember the thing about reading execution plans “first child first”? This existence test is one of the interesting cases where it’s not the first child of a parent operation that runs first: it’s the case I call the “constant subquery”.


Protect Your APEX Application PL/SQL Source Code

Pete Finnigan - Wed, 2015-07-29 03:35

Oracle Application Express is a great rapid application development tool where you can write your applications functionality in PL/SQL and create the interface easily in the APEX UI using all of the tools available to create forms and reports and....[Read More]

Posted by Pete On 21/07/15 At 04:27 PM

Categories: Security Blogs

Oracle Security and Electronics

Pete Finnigan - Wed, 2015-07-29 03:35

How do Oracle Security and Electronics mix together? - Well I started my working life in 1979 as an apprentice electrician in a factory here in York, England where I live. The factory designed and built trains for the national....[Read More]

Posted by Pete On 09/07/15 At 11:24 AM

Categories: Security Blogs

New Conference Speaking Dates Added

Pete Finnigan - Wed, 2015-07-29 03:35

In the last few years I have not done as many conference speaking dates as I used to. This is simply because when offered they usually clashed with pre-booked work. I spoke for the UKOUG in Dublin last year and....[Read More]

Posted by Pete On 06/07/15 At 09:40 AM

Categories: Security Blogs

Happy 10th Belated Birthday to My Oracle Security Blog

Pete Finnigan - Wed, 2015-07-29 03:35

Make a Sad Face.. :-( I seem to have missed my blog's tenth birthday, which happened on the 20th September 2014. My last post last year, and until very recently, was on July 23rd 2014; so actually it's been a big gap....[Read More]

Posted by Pete On 03/07/15 At 11:28 AM

Categories: Security Blogs

Oracle Database Vault 12c Paper by Pete Finnigan

Pete Finnigan - Wed, 2015-07-29 03:35

I wrote a paper about Oracle Database Vault in 12c for SANS last year and this was published in January 2015 by SANS on their website. I also prepared and did a webinar about this paper with SANS. The Paper....[Read More]

Posted by Pete On 30/06/15 At 05:38 PM

Categories: Security Blogs

Unique Oracle Security Trainings In York, England, September 2015

Pete Finnigan - Wed, 2015-07-29 03:35

I have just updated all of our Oracle Security training offerings on our company website. I have revamped all class pages and added two-page PDF flyers for each of our four training classes. I have also updated the list....[Read More]

Posted by Pete On 25/06/15 At 04:36 PM

Categories: Security Blogs

Coding in PL/SQL in C style, UKOUG, OUG Ireland and more

Pete Finnigan - Wed, 2015-07-29 03:35

My favourite language is hard to pinpoint; is it C or is it PL/SQL? My first language was C and I love the elegance and expression of C. Our product PFCLScan has its main functionality written in C. The....[Read More]

Posted by Pete On 23/07/14 At 08:44 PM

Categories: Security Blogs

Integrating PFCLScan and Creating SQL Reports

Pete Finnigan - Wed, 2015-07-29 03:35

We were asked by a customer whether PFCLScan can generate SQL reports instead of the normal HTML, PDF, MS Word reports so that they could potentially scan all of the databases in their estate and then insert either high level....[Read More]

Posted by Pete On 25/06/14 At 09:41 AM

Categories: Security Blogs

Better Ways to Play and Try

Oracle AppsLab - Tue, 2015-07-28 20:50
  • Fact 1: Dazzling animated displays (sprites, shaders, parallax, 3D) are more plentiful and easier to make than ever before.
  • Fact 2: More natural and expressive forms of input (swiping, pinching, gesturing, talking) are being implemented and enhanced every day.
  • Fact 3: Put these two together and the possible new forms of human computer interaction are endless. The only limit is our imagination.

That’s the problem: our imagination. We can’t build new interactions until A) someone imagines them, and B) the idea is conveyed to other people. As a designer in the Emerging Interactions subgroup of the AppsLab, this is my job – and I’m finding that both parts of it are getting harder to do.

If designers can’t find better ways of imagining – and by imagining I mean the whole design process from blank slate to prototype – progress will slow and our customers will be unable to unleash their full potential.

So what does it mean to imagine and how can we do it better?

Imagination starts with a daydream or an idle thought. “Those animations of colliding galaxies are cool. I wonder if we could show a corporate acquisition as colliding org charts. What would that look like?”

What separates a mere daydreamer from an actual designer is the next step: playing. To really imagine a new thing in any meaningful way you have to roll up your sleeves and actually start playing with it in detail. At first you can do this entirely in your mind – what Einstein called a “thought experiment.”  This can take weeks of staring into space while your loved ones look on with increasing concern.

Playing is best done in your mind because your mind is so fluid. You can suspend the laws of physics whenever they get in the way. You can turn structures inside out in the blink of an eye, changing the rules of the game as you go. This fluidity, this fuzziness, is the mind’s greatest strength, but also its greatest weakness.

So sooner or later you have to move from playing to trying. Trying means translating the idea into a visible, tangible form and manipulating it with the laws of physics (or at least the laws of computing) re-enabled. This is where things get interesting. What was vaguely described must now be spelled out. The inflexible properties of time and space will expose inconvenient details that your mind overlooked; dealing with even the smallest of these details can derail your entire scheme – or take it in wild, new directions. Trying is a collaboration with Reality.

Until recently, trying was fairly easy to do. If the thing you were inventing was a screen layout or a process flow, you could sketch it on paper or use a drawing program to make sure all the pieces fit together. But what if the thing you are inventing is moving? What if it has hundreds of parts each sliding and changing in a precise way? How do you sketch that?

My first step in the journey to a better way was to move from drawing tools like Photoshop and OmniGraffle to animation tools like Hype and Edge – or to Keynote (which can do simple animations). Some years ago I even proposed a standard “animation spec” so that developers could get precise frame-by-frame descriptions.

The problem with these tools is that you have to place everything by hand, one element at a time. I often begin by doing just that, but when your interface is composed of hundreds of shifting, spinning, morphing shapes, this soon becomes untenable. And when even the simplest user input can alter the course and speed of everything on the screen, and when that interaction is the very thing you need to explore, hand drawn animation becomes impossible.

To try out new designs involving this kind of interaction, you need data-driven animation – which means writing code. This is a significant barrier for many designers. Design is about form, color, balance, layout, typography, movement, sound, rhythm, harmony. Coding requires an entirely different skill set: installing development environments, converting file formats, constructing database queries, parsing syntaxes, debugging code, forking githubs.

A software designer needs a partial grasp of these things in order to work with developers. But most designers are not themselves coders, and do not want to become one. I was a coder in a past life, and even enjoy coding up to a point. But code-wrangling, and in particular debugging, distracts from the design process. It breaks my concentration, disrupts my flow; I get so caught up in tracking down a bug that I forget what I was trying to design in the first place.

The next stage of my journey, then, was to find relatively easy high-level programming languages that would let me keep my eye on the ball. I did several projects using Processing (actually Processing.js), a language developed specifically for artists. I did another project using Python – with all coding done on the iPad so that I could directly experience interactions on the tablet with every iteration of the code.

These projects were successful but time-consuming and painful to create. Traditional coding is like solving a Rubik’s Cube: twist and twist and twist until order suddenly emerges from chaos. This is not the way I want to play or try. I want a more organic process, something more like throwing a pot: I want to grab a clump of clay and just continuously shape it with my hands until I am satisfied.

I am not the only one looking for better ways to code. We are in the midst of an open source renaissance, an explosion of literally thousands of new languages, libraries, and tools. In my last blog post I wrote about people creating radically new and different languages as an art form, pushing the boundaries in all directions.

In “The Future of Programming,” Eric Elliott argues for reactive programming, visual IDEs, even genetic and AI-assisted coding. In “Are Prototyping Tools Becoming Essential?,” Mark Wilcox argues that exploring ideas in the Animation Era requires a whole range of new tools. But if you are only going to follow one of these links, see Bret Victor’s “Learnable Programming.”

After months of web surfing I stumbled upon an interesting open source tool originally designed for generative artists that I’ve gotten somewhat hooked on. It combines reactive programming and a visual IDE with some of Bret Victor’s elegant scrubbing interactions.

More about that tool in my next blog post…

Top 5 Ways to Personalize My Oracle Support

Joshua Solomin - Tue, 2015-07-28 17:47

It doesn't take long using My Oracle Support (MOS) to realize just how massive the pool of data underlying it all is—knowledge articles, patches and updates, advisories and security alerts, for every version of every Oracle product line.

My first week using MOS my jaw dropped at the sheer scale of available info. Not surprisingly, one of the first tips we get as Oracle employees is how to personalize My Oracle Support to target just the areas we're interested in.

Take a minute and follow our Top 5 Ways to personalize My Oracle Support to better suit your workflow. These easy-to-follow tips can help you get the most from the application, and avoid drowning in the MOS "Sea of Information."

1. Customizing the Screen Panels

One of the easiest personalization features is to adjust the panels displayed on a given page or tab. Nearly every activity tab allows you to reorganize, move, or even hide displayed panels on the screen using the Customize Page.... link in the top right area of the screen.

When you click the link, the page will update and display a series of widgets on each panel, allowing you to customize the content.

The wrench icon lets you customize the panel name, while the circular gear icon lets you move the panel within the column. The Add Content action displays a context-sensitive panel of new content areas that can be added to the column.

2. Enable PowerView

The PowerView applet is one of the fastest ways to limit the information displayed in MOS. PowerView filters the information presented to you based on products, support identifiers, or other custom filters you select. Once you've set up a PowerView filter set, any activities going forward—searches, patches and bugs, service requests (SRs)—will only appear if they are tied to your selected filters.

To build a PowerView, click the PowerView icon in the upper-left area of the screen.

To create the view, first select the primary filter criteria. "Support Identifier", "Product", and "Product Line" are common primary filters.

Remember, the goal is to use PowerView to filter everything you see in MOS against the relevant contexts you establish.

3. Set Up SR Profiles

This one's a bit trickier than the first two, but can be an enormous time-saver if you regularly enter service requests into MOS.

Go to the Settings tab in MOS, and look for Service Request Profiles link on your left. In some cases you may need to click the More... dropdown to find the Settings tab.

In the profiles view you'll see any existing profiles and an action button to create a new profile. The goal for an SR profile is to streamline the process of creating an SR for a specific hardware or software product that you're responsible for managing. When creating an SR you'll select the pre-generated profile you created earlier, and MOS fills in the relevant details you input.

4. Enable Hot Topics Email

Hot Topics Email is a second option available in the main MOS Settings tab.

Hot Topics is an automated notification system that will alert you any time specified SRs, Knowledge Documents, or security notices are published or updated.

There are dozens of options to choose from in setting up your alerts, based on product, Support Identifier (SI), content you've marked as "Favorite", and more. See the video training "How to Use Hot Topics Email Notifications" (Document 793436.2) to get a better understanding of how to use this feature.

5. Enable Service Request Email Updates

Back in the main MOS Settings tab, click the link for My Account on the left. This will take you to a general profile view of your MOS account. What we're looking for is a table cell in the Support Identifiers table at the top that reads SR Details.

By checking this box, you are indicating that you want to be automatically notified via email any time a service request tied to the support identifier gets updated.

The goal behind this is to stay abreast of any changes to SRs for the chosen support identifier. You don't have to keep "checking in" or wait for an Oracle Support engineer to reach out to you when progress is made on SRs. If a Support engineer requests additional information on a particular configuration, for example, that would be conveyed in the SR Email Update sent to you.

The trick is to be judicious in using this setting. My Oracle Support could quickly inundate you with SR Details notices if there are lots of active SRs tied to the support identifier(s), so this may not be desirable in some cases.

Conclusion

With these five options enabled, you've started tailoring your My Oracle Support experience to better streamline your workflow, and keep the most relevant, up-to-date information in front of you.

Give them a whirl, and let us know how it goes!

Reuters: Blackboard up for sale, seeking up to $3 billion in auction

Michael Feldstein - Tue, 2015-07-28 13:48

By Phil Hill

As I was writing a post about Blackboard’s key challenges, I got notice from Reuters (anonymous sources, so interpret accordingly) that the company is on the market, seeking up to $3 billion. From Reuters:

Blackboard Inc, a U.S. software company that provides learning tools for high school and university classrooms, is exploring a sale that it hopes could value it at as much as $3 billion, including debt, according to people familiar with the matter.

Blackboard’s majority owner, private equity firm Providence Equity Partners LLC, has hired Deutsche Bank AG and Bank of America Corp to run an auction for the company, the people said this week. [snip]

Providence took Blackboard private in 2011 for $1.64 billion and also assumed $130 million in net debt.

A pioneer in education management software, Blackboard has seen its growth slow in recent years as cheaper and faster software upstarts such as Instructure Inc have tried to encroach on its turf. Since its launch in 2011, Instructure has signed up 1,200 colleges and school districts, according to its website.

This news makes the messaging from BbWorld, as well as the company’s ability to execute on strategy – particularly delivering the new Ultra user experience across all product lines, including the core LMS – much more important. I’ll get to that subject in the next post.

This news should not be all that unexpected, as one common private equity strategy is to reorganize and clean up a company (cut headcount, rationalize management structures, reorient the strategy) and then sell within 3–7 years. As we have covered here at e-Literate, Blackboard has gone through several rounds of layoffs, and many key employees have already left the company due to new management and restructuring plans. CEO Jay Bhatt has been consistent in his message about moving from a conglomeration of silo’d mini-companies based on past M&A to a unified company. We have also described the significant changes in strategy – both adopting open source solutions and planning to rework the entire user experience.

Also keep in mind that there has been massive investment in ed tech lately, not only from venture capital but also from M&A.

Update 1: I should point out that the part of this news that is somewhat surprising is the potential sale while the Ultra strategy is incomplete. As Michael pointed out over the weekend:

Ultra is a year late: Let’s start with the obvious. The company showed off some cool demos at last year’s BbWorld, promising that the new experience would be Coming Soon to a Campus Near You. Since then, we haven’t really heard anything. So it wasn’t surprising to get confirmation that it is indeed behind schedule. What was more surprising was to see CEO Jay Bhatt state bluntly in the keynote that yes, Ultra is behind schedule because it was harder than they thought it would be. We don’t see that kind of no-spin honesty from ed tech vendors all that often.

Ultra isn’t finished yet: The product has been in use by a couple of dozen early adopter schools. (Phil and I haven’t spoken with any of the early adopters yet, but we intend to.) It will be available to all customers this summer. But Blackboard is calling it a “technical preview,” largely because there are large swathes of important functionality that have not yet been added to the Ultra experience–things like tests and groups. It’s probably fine to use it for simple (and fairly common) on-campus use cases, but there are still some open manholes here.

Update 2: I want to highlight (again) the nature of this news story. It’s from Reuters using multiple anonymous sources. While Reuters should be trustworthy, please note that the story has not yet been confirmed.

Update 3: In contact with Blackboard, I received the following statement (which does not answer any questions, but I am sharing nonetheless).

Blackboard, like many successful players in the technology industry, has become subject of sale rumors. Although we are transparent in our communications about the Blackboard business and direction when appropriate, it is our policy not to comment on rumors or speculation.

Blackboard is in an exciting industry that is generating substantial investor interest. Coming off a very successful BbWorld 2015 and a significant amount of positive customer and market momentum, potential investor interest in our company is not surprising.

We’ll update as we learn more, including if someone confirms the news outside of Reuters and their sources.

The post Reuters: Blackboard up for sale, seeking up to $3 billion in auction appeared first on e-Literate.

Power BI General Availability

Pythian Group - Tue, 2015-07-28 09:18

In the last few months I’ve been keeping an eye on the Power BI Preview, the new version of the Power BI cloud solution that was completely revamped compared to the original Power BI v1.
This last Friday, July 24th, Power BI was finally released to the public and it is now available globally. The best part? Now they have a FREE version, so you can start playing with it right now and get insights from your data.

Learn more about Power BI and how to use it. And if you have no idea what I’m talking about or what Power BI is, please take one minute of your time and watch a quick YouTube video to see it in action.

In the coming weeks I will start writing a series of posts explaining more about the BI suite and what we can use it for, so keep an eye on this blog. Also, if you are already using it, please comment below on what you think of it so far and whether you are facing any difficulties.

 

Discover more about our expertise in SQL Server and Cloud

The post Power BI General Availability appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Breakthrough Solution for A/R and A/P Automation

WebCenter Team - Tue, 2015-07-28 08:12


Breakthrough Solution for A/R and A/P Automation

Webcast Scheduled July 30th @ 2:00 PM EST | Register

  • Do you need to expedite your A/R Cash Receipt processing and reduce Days Sales Outstanding?
  • Are un-applied payments affecting closure of your accounting period and impacting your cash flow?
  • Are you paying through the nose to process check information through your bank?

===============================================================

Join this complimentary session from Oracle and KPIT to learn about an exciting new solution that is providing Complete Automation and Real Time Visibility to your A/R Cash Receipt process.

KPIT offers an integrated imaging and workflow solution for your Oracle ERP for A/R Cash Receipt Processing, called the Accounts Receivable Solution Accelerator.

You will Learn How to:

  • Improve your A/R Cash Receipt Processing through automated capture & entry of bank remittance information
  • Automate exception handling (e.g. short payments) through automated workflows
  • Maximize user productivity by minimizing manual data entry
  • Speed payment application through configurable payment application rules
  • Improve visibility into your un-applied cash and process bottlenecks
  • Use business rules to generate appropriate notifications to customers
  • Reduce steep bank charges by avoiding lockbox processing services
  • Avoid heartburn by achieving timely closure of your accounting period
  • Reduce Days Sales Outstanding (DSO) and improve Cash Flow

Do not miss this opportunity. This is the first time a solution like this has been offered on Oracle Technology! Register now!

PeopleSoft-Taleo Out-Of-The-Box Integration Deprecated

Duncan Davies - Tue, 2015-07-28 06:47

Oracle have made the interesting announcement that the delivered integration between PeopleSoft (HCM 9.1 or 9.2) and Taleo is being deprecated. This is a positive move from Oracle and simplifies things greatly from a client and partner perspective.

Previously Oracle delivered two different integration methods: prebuilt integration using web services and PeopleSoft Integration Broker, or the Taleo Connect Client (TCC) tool – a highly flexible tool for any kind of Taleo integration. It’s only the former that is being deprecated.

TCC remains the tool to use for integrating Taleo and PeopleSoft (or indeed, Taleo and any system) and it’s mature, robust, widely used and much liked.

Integration between Taleo and PeopleSoft rarely fits the ‘standard’ requirements as each client will have different processes around whether someone leaving in PeopleSoft automatically creates a requisition in Taleo or not, and what the approval chains are. Likewise, the new hires coming back in from Taleo can go to more than one place in PeopleSoft, and the amount and types of data brought over can vary wildly – especially if Taleo Onboarding is included. It’s preferable to have a TCC-built integration that fits the requirements than a bare-bones delivered process that can’t be improved.

The removal of the Integration Broker/Web Services integration as an option is a positive move as clients would regularly spend precious effort evaluating it, before concluding that it was too restrictive for their specific requirements and then using TCC.

The official announcement is here:
http://www.oracle.com/partners/secure/campaign/eblasts/peoplesoft-taleo-out-of-the-box-2613641.html


Multiple invisible indexes on the same column in #Oracle 12c

The Oracle Instructor - Tue, 2015-07-28 00:44

After invisible indexes were introduced in 11g, they have now been enhanced in 12c: you can have multiple indexes on the same set of columns with that feature. Why would you want to use that? Actually, this is always the first question I ask when I see a new feature – sometimes it’s really hard to answer :-)

Here, a plausible use case could be that you expect a new index on the same column to be an improvement over the existing old index, but you are not 100% sure. So instead of just dropping the old index, you make it invisible first to see the outcome:

 

[oracle@uhesse ~]$ sqlplus adam/adam

SQL*Plus: Release 12.1.0.2.0 Production on Tue Jul 28 08:11:16 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Tue Jul 28 2015 08:00:34 +02:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> col index_name for a10
SQL> select index_name,index_type,visibility from user_indexes;

INDEX_NAME INDEX_TYPE		       VISIBILIT
---------- --------------------------- ---------
BSTAR	   NORMAL		       VISIBLE

SQL> col segment_name for a10
SQL> select segment_name,bytes/1024/1024 from user_segments;

SEGMENT_NA BYTES/1024/1024
---------- ---------------
BSTAR		       160
SALES		       600

SQL> set timing on
SQL> select count(*) from sales where channel_id=3;

  COUNT(*)
----------
   2000000

Elapsed: 00:00:00.18
SQL> set timing off
SQL> @lastplan

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
SQL_ID	b7cvb9nu10qdb, child number 0
-------------------------------------
select count(*) from sales where channel_id=3

Plan hash value: 2525234362

---------------------------------------------------------------------------
| Id  | Operation	  | Name  | Rows  | Bytes | Cost (%CPU)| Time	  |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |	  |	  |	  |  3872 (100)|	  |
|   1 |  SORT AGGREGATE   |	  |	1 |	3 |	       |	  |

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------
|*  2 |   INDEX RANGE SCAN| BSTAR |  2000K|  5859K|  3872   (1)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("CHANNEL_ID"=3)


19 rows selected.

So I have an ordinary B* index here that supports my query, but I suspect that it would work better with a bitmap index. In older versions, this is what you would get if you tried to create it while the old index still existed:

SQL> create bitmap index bmap on sales(channel_id) nologging;
create bitmap index bmap on sales(channel_id) nologging
                                  *
ERROR at line 1:
ORA-01408: such column list already indexed

Enter the 12c New Feature:

SQL> alter index bstar invisible;

Index altered.

SQL> create bitmap index bmap on sales(channel_id) nologging;

Index created.

Now I can check if the new index is really an improvement while the old index remains in place and is still being maintained by the system. So in case the new index turns out to be a bad idea – no problem to fall back on the old one!
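
For completeness, the fallback itself would just be the reverse switch – a two-line sketch, not executed as part of this demo:

alter index bstar visible;
drop index bmap;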

SQL> select index_name,index_type,visibility from user_indexes;

INDEX_NAME INDEX_TYPE		       VISIBILIT
---------- --------------------------- ---------
BMAP	   BITMAP		       VISIBLE
BSTAR	   NORMAL		       INVISIBLE

SQL> select segment_name,bytes/1024/1024 from user_segments;

SEGMENT_NA BYTES/1024/1024
---------- ---------------
BMAP			 9
BSTAR		       160
SALES		       600

SQL> set timing on
SQL> select count(*) from sales where channel_id=3;

  COUNT(*)
----------
   2000000

Elapsed: 00:00:00.01
SQL> @lastplan

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
SQL_ID	b7cvb9nu10qdb, child number 0
------------------------------------------------------------------------------------
select count(*) from sales where channel_id=3

Plan hash value: 3722975061

------------------------------------------------------------------------------------
| Id  | Operation		    | Name | Rows  | Bytes | Cost (%CPU)| Time	   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	    |	   |	   |	   |   216 (100)|	   |
|   1 |  SORT AGGREGATE 	    |	   |	 1 |	 3 |		|	   |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
|   2 |   BITMAP CONVERSION COUNT   |	   |  2000K|  5859K|   216   (0)| 00:00:01 |
|*  3 |    BITMAP INDEX SINGLE VALUE| BMAP |	   |	   |		|	   |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CHANNEL_ID"=3)


20 rows selected.

Looks like everything is better with the new index, right? Let’s see what the optimizer thinks about it:

SQL> alter index bmap invisible;

Index altered.

SQL> select index_name,index_type,visibility from user_indexes;

INDEX_NAME INDEX_TYPE		       VISIBILIT
---------- --------------------------- ---------
BMAP	   BITMAP		       INVISIBLE
BSTAR	   NORMAL		       INVISIBLE

SQL> alter session set optimizer_use_invisible_indexes=true;

Session altered.

SQL> select count(*) from sales where channel_id=3;

  COUNT(*)
----------
   2000000

SQL> @lastplan

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------
SQL_ID	b7cvb9nu10qdb, child number 0
-------------------------------------------------------------------------------------
select count(*) from sales where channel_id=3

Plan hash value: 3722975061

------------------------------------------------------------------------------------
| Id  | Operation		    | Name | Rows  | Bytes | Cost (%CPU)| Time	   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT	    |	   |	   |	   |   216 (100)|	   |
|   1 |  SORT AGGREGATE 	    |	   |	 1 |	 3 |		|	   |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
|   2 |   BITMAP CONVERSION COUNT   |	   |  2000K|  5859K|   216   (0)| 00:00:01 |
|*  3 |    BITMAP INDEX SINGLE VALUE| BMAP |	   |	   |		|	   |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("CHANNEL_ID"=3)


20 rows selected.

The optimizer agrees that the new index is better. I could keep both indexes here in place, but remember that the old index still consumes space and requires internal maintenance. Therefore, I decide to drop the old index:

SQL> drop index bstar;

Index dropped.

SQL> alter index bmap visible;

Index altered.

Hope that helped to answer the question why you would want to use that 12c New Feature. As always: Don’t believe it, test it! :-)


Tagged: 12c New Features, Performance Tuning
Categories: DBA Blogs

Secure By Default Clarifications

Anthony Shorten - Mon, 2015-07-27 22:50

In the Oracle Utilities Application Framework, the software is now installed in a mode called "Secure By Default". This has a number of implications for new and existing installations:

  • HTTPS is now the default protocol for accessing the product. The installer supplies a demonstration trust store and a demonstration identity store that can be used for default installations.
  • The permissions on the files and directories are now using common Oracle standards.

Now there are a few clarifications about these features:

  • Customers that are upgrading from older versions will keep the same regime for file permissions and access protocols that was used in past releases, for backward compatibility.
  • Customers on past releases can convert to the new file and directory permissions using the "setpermissions" utility shipped with the product. The Administration and Security guides outline the new permissions.
  • Customers on past releases can convert to the new HTTPS protocol like they did in the past releases. The new keystore is provided as a way of adopting it quickly.
  • We supply a basic certificate to be used for HTTPS. This demonstration certificate is limited in strength and scope (much the same scope and strength as the demonstration one supplied with Oracle WebLogic). It is not supported for use in production systems. It is recommended that customers who want to use HTTPS should use a valid certificate from a valid certificate issuing authority or build a self-signed certificate. Note, if you use a self-signed certificate some browsers may issue a warning upon login. Additionally, customers using native mode installations can use the Oracle WebLogic demonstration certificates as well.
  • HTTPS was always supported in Oracle Utilities Application Framework. In past releases it was what is termed an opt-in decision (you are opting in to use HTTPS). This meant that we installed using HTTP by default and then you configured HTTPS separately with additional configuration on your domain. In this new release, we have shifted the decision to an opt-out decision. We install HTTPS with a demonstration certificate as the default and you must disable it using additional steps (basically you do not specify an HTTPS port and only supply an HTTP port to reverse the decision). This is an opt-out decision as you are deciding to opt out of the secure setup. The decision whether to use HTTPS or HTTP is an implementation one (we just have a default of HTTPS).
  • Customers using native mode (or IBM WebSphere) can manage certificates from the console or command lines supplied by that product.

Secure by default now ensures that Oracle Utilities Application Framework products are consistent with installation standards employed by other Oracle products.

UC Davis: A look inside attempts to make large lecture classes active and personal

Michael Feldstein - Mon, 2015-07-27 13:16

By Phil Hill

In my recent keynote for the Online Teaching Conference, the core argument was as follows:

While there will be (significant) unbundling around the edges, the bigger potential impact [of ed innovation] is how existing colleges and universities allow technology-enabled change to enter the mainstream of the academic mission.

Let’s look at one example. Back in December the New York Times published an article highlighting work done at the University of California at Davis to transform large lecture classes into active learning formats.

Hundreds of students fill the seats, but the lecture hall stays quiet enough for everyone to hear each cough and crumpling piece of paper. The instructor speaks from a podium for nearly the entire 80 minutes. Most students take notes. Some scan the Internet. A few doze.

In a nearby hall, an instructor, Catherine Uvarov, peppers students with questions and presses them to explain and expand on their answers. Every few minutes, she has them solve problems in small groups. Running up and down the aisles, she sticks a microphone in front of a startled face, looking for an answer. Students dare not nod off or show up without doing the reading.

Both are introductory chemistry classes at the University of California campus here in Davis, but they present a sharp contrast — the traditional and orderly but dull versus the experimental and engaging but noisy. Breaking from practices that many educators say have proved ineffectual, Dr. Uvarov’s class is part of an effort at a small but growing number of colleges to transform the way science is taught.

This article follows the same argument laid out in the Washington Post nearly three years earlier.

Science, math and engineering departments at many universities are abandoning or retooling the lecture as a style of teaching, worried that it’s driving students away. [snip]

Lecture classrooms are the big-box retailers of academia, paragons of efficiency. One professor can teach hundreds of students in a single room, trailed by a retinue of teaching assistants.

But higher-education leaders increasingly blame the format for high attrition in science and math classes. They say the lecture is a turn-off, higher education at its most passive, leading to frustration and bad grades in highly challenging disciplines.

What do these large lecture transformations look like? In our recent e-Literate TV case study on UC Davis (episode 1, episode 2, episode 3), we got an inside look at the work being done there, including first-person accounts from faculty members and students.

The organizing idea is to apply active learning principles such as the flipped classroom to large introductory science classes.

Phil Hill: It sounds to me like you have common learning design principles that are being implemented, but they get implemented in different ways. So, you have common things of making students accountable, having the classes much more interactive where students have to react and try to apply what they’re learning.

Chris Pagliarulo: Yeah, the main general principle here is we’re trying to get—if you want to learn something complex, which is what we try to [do] at an R1 university, that takes a lot of practice and feedback. Until recently, much of that was supposed to be going on at home with homework or whatnot, but it’s difficult to get feedback at home when the smart people aren’t there that would help you—either your peers or your professor.

So, that’s the whole idea of the flipped classroom where you come prepared with some basic understand[ing] and take that time where you’re all together to do the high-quality practice and get the feedback while we’re all together. Everything that we’re doing is focused on that sort of principle—getting that principle into the classroom.

Professor Mitch Singer then describes his background in the redesign.

Phil Hill: Several years ago, the iAMSTEM group started working with the biology and chemistry departments to apply some of these learning concepts in an iterative fashion.

Mitch Singer: My (hopefully) permanent assignment now, at least for the next five years, will be what we call “BIS 2A,” which is the first introductory course of biology here at UC Davis. It’s part of a series, and its primary goal is to teach fundamentals of cellular and molecular biology going from origins up to the formation of a cell. We teach all the fundamentals in this class: the stuff that’s used for future ones.

About three to four years ago, I got involved in this class to sort of help redesign it, come up with a stronger curriculum, and primarily bring in sort of hands-on, interactive learning techniques, and we’ve done a bunch of experiments and changed the course in a variety of ways. It’s still evolving over the last several years. The biggest thing that we did was add a discussion section, which is two hours long where we’ve done a lot of our piloting for this interactive, online, personalized learning (as the new way of saying things, I guess). This year (last quarter in the fall) was the first time we really tried to quote, flip part of the classroom.

That is make the students take a little bit more responsibility for their own reading and learning, and then the classic lecture is more asking questions trying to get them to put a and b together to come up with c. It’s sort of that process that we’d like to emphasize and get them to actually learn, and that’s what we want to test them on not so much the facts, and that’s the biggest challenge.

If you want to see the potential transformation of this core, it is crucial to look at the large lecture classes and how to make them more effective. The UC Davis case study highlights what is actually happening in the field, with input from real educators and students.

The post UC Davis: A look inside attempts to make large lecture classes active and personal appeared first on e-Literate.