
Feed aggregator

Discoverer and Windows 10

Michael Armstrong-Smith - Thu, 2015-07-30 22:33
Hi everyone
Has anyone had the courage to upgrade to Windows 10 and see if Discoverer Plus still works?

How about the Discoverer server? Anyone tried that?

If you have, drop me a reply.

Michael

August 6, 2015: Oracle ERP Cloud Customer Forum―The Rancon Group

Linda Fishman Hoyle - Thu, 2015-07-30 17:57

Join us for another Oracle Customer Reference Forum on August 6, 2015, at 9:00 a.m. PT to hear Steven Van Houten, CFO at The Rancon Group. The company is a leader in Southern California community development, commercial building, and land use.

During this Customer Forum call, Van Houten will share with you The Rancon Group’s lessons learned during its implementation and the benefits it is receiving by using Oracle ERP Cloud. He will explain how Oracle ERP Cloud helps The Rancon Group make intelligent decisions, get information out to its mobile workforce, and meet its needs now and in the future.

Register now to attend the live Forum on Thursday, August 6, 2015, at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time.

CVSS Version 3.0 Announced

Oracle Security Team - Thu, 2015-07-30 16:04

Hello, this is Darius Wiles.

Version 3.0 of the Common Vulnerability Scoring System (CVSS) has been announced by the Forum of Incident Response and Security Teams (FIRST). Although there have been no high-level changes to the standard since the Preview 2 release which I discussed in a previous blog post, there have been a lot of improvements to the documentation.

Soon, Oracle will be using CVSS v3.0 to report CVSS Base scores in its security advisories. To facilitate this transition, the first Critical Patch Update (Oracle's security advisories) to provide CVSS v3.0 Base scores will include two sets of risk matrices, one scored with CVSS v2 and one with v3.0. Subsequent Critical Patch Updates will list only CVSS v3.0 scores.

While Oracle expects most vulnerabilities to have similar v2 and v3.0 Base Scores, certain types of vulnerabilities will experience a greater scoring difference. The CVSS v3.0 documentation includes a list of examples of public vulnerabilities scored using both v2 and v3.0, and this gives an insight into these scoring differences. Let’s now look at a couple of reasons for these differences.

The v3.0 standard provides a more precise assessment of risk because it considers more factors than the v2 standard. For example, the main impact of most cross-site scripting (XSS) vulnerabilities is that a victim's browser runs malicious code. v2 does not have a way to capture the change in impact from the vulnerable web server to the impacted browser; basically, v2 just considers the impact to the former. In v3.0, the Scope metric allows us to score the impact to the browser, which in v3.0 terminology is the impacted component. v2 scores XSS as "no impact to confidentiality or availability, and partial impact to integrity", but in v3.0 we are free to score impacts to better fit each vulnerability. For example, a typical XSS vulnerability, CVE-2013-1937, is scored with a v2 Base Score of 4.3 and a v3.0 Base Score of 6.1. Most XSS vulnerabilities will experience a similar CVSS Base Score increase.
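
To make the contrast concrete, here is a small illustrative sketch in Python comparing typical v2 and v3.0 vector strings for a reflected XSS like the one above. The vector strings are assumptions chosen to illustrate the Scope metric, not values taken from this post; the Base scores 4.3 and 6.1 are the figures quoted above for CVE-2013-1937.

# Illustrative only: the vector strings below are assumed "typical XSS"
# vectors; the Base scores (4.3 and 6.1) are the values quoted above.
xss_scores = {
    "v2": {
        # v2 has no Scope concept: impact is scored against the web server only.
        "vector": "AV:N/AC:M/Au:N/C:N/I:P/A:N",
        "base": 4.3,
    },
    "v3.0": {
        # S:C (Scope: Changed) records that the impacted component (the
        # victim's browser) differs from the vulnerable component (the server).
        "vector": "CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N",
        "base": 6.1,
    },
}

for version in ("v2", "v3.0"):
    d = xss_scores[version]
    print("CVE-2013-1937 %s: %s -> Base %s" % (version, d["vector"], d["base"]))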

Until now, Oracle has used a proprietary Partial+ metric value for v2 impacts when a vulnerability "affects a wide range of resources, e.g., all database tables, or compromises an entire application or subsystem". We felt this extra information was useful because v2 always scores vulnerabilities relative to the "target host", but in cases where a host's main purpose is to run a single application, Oracle felt that a total compromise of that application warrants more than Partial. In v3.0, impacts are scored relative to the vulnerable component (assuming no scope change), so a total compromise of an application now leads to High impacts. Therefore, most Oracle vulnerabilities scored with Partial+ impacts under v2 are likely to be rated with High impacts under v3.0, and therefore receive more precise v3.0 Base scores. For example, CVE-2015-1098 has a v2 Base score of 6.8 and a v3.0 Base score of 7.8. This is a good indication of the differences we are likely to see. Refer to the CVSS v3.0 list of examples for more details on how this vulnerability is scored.

Overall, Oracle expects v3.0 Base scores to be higher than v2, but bear in mind that v2 scores are always relative to the "target host", whereas v3.0 scores are relative to the vulnerable component, or the impacted component if there is a scope change. In other words, CVSS v3.0 will provide a better indication of the relative severity of vulnerabilities because it better reflects the true impact of the vulnerability being rated in software components such as database servers or middleware.


For More Information

The CVSS v3.0 documents are located on FIRST's web site at http://www.first.org/cvss/

Oracle's use of CVSS [version 2], including a fuller explanation of Partial+ is located at http://www.oracle.com/technetwork/topics/security/cvssscoringsystem-091884.html

My previous blog post on CVSS v3.0 preview is located at https://blogs.oracle.com/security/entry/cvss_version_3_0_preview

Eric Maurice's blog post on Oracle's use of CVSS v2 is located at https://blogs.oracle.com/security/entry/understanding_the_common_vulne_2

Oracle Priority Support Infogram for 30-JUL-2015

Oracle Infogram - Thu, 2015-07-30 13:09

Open World
Oracle OpenWorld 2015 - Registrations Open, from Business Analytics - Proactive Support.
Oracle Support
Top 5 Ways to Personalize My Oracle Support, from the My Oracle Support blog.
RDBMS
A set of three updates from Upgrade your Database - NOW! in this issue:
ORAchk - How to log SRs and ERs for ORAchk
Things to consider BEFORE upgrading to Oracle 12.1.0.2 to AVOID poor performance and wrong results
Optimizer Issue in Oracle 12.1.0.2: "Reduce Group By"
PeopleSoft/SES
Upgrade your SES Database From 11.2.0.3 to 11.2.0.4 for the PeopleSoft Search Framework, from the PeopleSoft Technology Blog.
Java
JShell and REPL in Java 9, from The Java Source.
Modifying the run configuration for the JUnit test runner, from Andreas Fester's Blog.
MySQL
Learn About Queries, Stored Routines, and More MySQL Developer Skills, from Oracle's MySQL Blog.
Fusion Applications
Careful Use of Aggregate Functions, from the Fusion Applications Developer Relations blog.
ADF
ADF 11.1.1.9 Goodies – Conveyor Belt Component and Alta UI, from WebLogic Partner Community EMEA.
And from the same source:
Create and set clientAttribute to ADF Faces component programmatically to pass value on client side JavaScript
Solaris
Docker coming to Oracle Solaris, from the Oracle Solaris blog.
Live storage migration for kernel zones, from The Zones Zone blog.
Ops Center
Recovering LDoms From a Failed Server, from the Ops Center blog.
EBS
From the Oracle E-Business Suite Support blog:
Webcast: Setup & Troubleshooting Dunning Plans in Oracle Advanced Collections
Troubleshooting the Closing of Work Orders in EAM and WIP
From the Oracle E-Business Suite Technology blog:
Database 12.1.0.2 Certified with EBS 11i on Additional Platforms
Transportable Database 12c Certified for EBS 12.2 Database Migration
Quarterly EBS Upgrade Recommendations: July 2015 Edition


Why Move to Cassandra?

Pythian Group - Thu, 2015-07-30 12:05

Nowadays Cassandra is getting a lot of attention, and we’re seeing more and more examples of companies moving to Cassandra. Why is this happening? Why are companies with solid IT structures and internal knowledge shifting, not only to a different paradigm (Read: NoSQL vs SQL), but also to completely different software? Companies don’t simply move to Cassandra because they feel like it. A drive or need must exist. In this post, I’m going to review a few use cases and highlight some of the interesting parts to explain why these particular companies adopted Cassandra. I will also try to address concerns about Cassandra in enterprise environments that have critical SLAs and other requirements. And at the end of this post, I will go over our experience with Cassandra.

Cassandra Use Cases

Instagram

Cutting costs. How? Instagram was using an in-memory database before moving to Cassandra. Memory is expensive compared to disk. So if you do not need the advanced performance of an in-memory datastore, Cassandra can deliver the performance you need and help you save money on storage costs. Plus, as mentioned in the use case, Cassandra allows Instagram to continually add data to the cluster.  They also loved Cassandra’s reliability and availability features.

Ebay

Cassandra proved to be the best technology, among the ones they tested, for their scaling needs. With Cassandra, Ebay can look up historical behavioral data quickly and update their recommendation models with low latency. Ebay has deployed Cassandra across multiple data centers.

Spotify

Spotify moved to Cassandra because it’s a highly reliable and easily scalable datastore. Their old datastore was not able to keep up with the volume of writes and reads they had. Cassandra’s scalability with its multi-datacenter replication, plus its reliability, proved to be a hit for them.

Comcast

They were looking for 3 things: scale, availability, and active-active. Only Cassandra provided all of them. Their transition to Cassandra went smoothly, and they enjoy the ease of development Cassandra offers.

Cassandra brings something new to the game

NoSQL existed before Cassandra. There were also other mature technologies when Cassandra was released. So why didn’t companies move to those technologies?

Like the subtitle says, Cassandra brings something new to the game. In my experience, and as discussed in some of the use cases above, one of the strongest points is Cassandra’s ease of use. Once you know how to configure Cassandra, it’s almost “fire-and-forget”! It just works. In an era like ours, where you see new technologies appear every day, on different stacks, with different dependencies, Cassandra’s installation and basic configuration are refreshingly simple, which leads us to…

Scalability!! Yes it scales linearly. This level of scalability, combined with its ease of deployment, takes your infrastructure to another level.

Last but not least, Cassandra is highly flexible. You can tweak your consistency settings per transaction. You need more speed? Pick less consistency. You want data integrity? Push those consistency settings up. It is up to you, your project, and your requirements. And you can easily change it.
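
As a minimal sketch of what per-request consistency tuning looks like in practice, here is a Python example using the DataStax cassandra-driver. The contact points, keyspace, and table names below are made up for illustration.

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Hypothetical contact points and keyspace, for illustration only.
cluster = Cluster(["10.0.0.1", "10.0.0.2"])
session = cluster.connect("demo_keyspace")

# Favour speed: one replica acknowledgement is enough for this read.
fast_read = SimpleStatement(
    "SELECT * FROM events WHERE user_id = %s",
    consistency_level=ConsistencyLevel.ONE,
)
rows = session.execute(fast_read, ("user-42",))

# Favour integrity: require a quorum of replicas for this write.
safe_write = SimpleStatement(
    "INSERT INTO events (user_id, ts, payload) VALUES (%s, %s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(safe_write, ("user-42", "2015-07-30 12:05:00", "click"))

cluster.shutdown()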

Also don’t forget its other benefits: open source, free, geo-replication, low latency etc…

Pythian’s Experience with Cassandra

Cassandra is not without its challenges. Like I said earlier, it is a new technology that makes you think differently about databases. And because it’s easy to deploy and work with, it can lead to mistakes that could seriously impact scalability and application/service performance once they start to scale.

And that is where we come in. We ensure that companies just starting out with Cassandra have well built and well designed deployments, so they don’t run into these problems. Starting with a solid architecture plan for a Cassandra deployment and the correct data model can make a whole lot of difference!

We’ve seen some deployments that started out well, but without proper maintenance, fell into some of the pitfalls or edge cases mentioned above. We help out by fixing the problem and/or recommending changes to the original deployment, so it will keep performing well without issues! And because Cassandra delivers high resilience, many of these problems can be solved without having to deal with downtime.

Thinking about moving to Cassandra? Not sure if open source or enterprise is right for you? Need project support? Schedule a free assessment so we can help you with next steps!

The post Why Move to Cassandra? appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Always be publishing

Sean Hull - Thu, 2015-07-30 11:23
Join 28,000 others and follow Sean Hull on twitter @hullsean. As an advisor to New York area startups & a long-time entrepreneur, I’ve found writing & publishing to be an extremely valuable use of time. I follow the motto “Always be publishing.” Here’s why. 1. Form your voice According to Fred Wilson, blogging has been … Continue reading Always be publishing →

Advantages of using REST-based Integrations in PeopleSoft

Javier Delgado - Thu, 2015-07-30 08:49
Support for REST-based services was introduced in PeopleTools 8.52, although you could also build your own REST services using IScripts in earlier releases (*). With PeopleTools 8.52, Integration Broker includes support for REST services, enabling PeopleSoft to act as both a consumer and a provider.

What is REST?
There is plenty of documentation on the Web about REST, its characteristics, and its benefits. I personally find the tutorial published by Dr. Elkstein (http://rest.elkstein.org) particularly illustrative.

In a nutshell, REST can be seen as a lightweight alternative to other traditional Web Services mechanisms such as RPC or SOAP. A REST integration has considerably less overhead than the two previously mentioned methods, and as a result is more efficient for many types of integrations.

Today, REST is the dominant standard for mobile applications (many of which use REST integrations to interact with the backend) and Rich Internet Applications using AJAX.

PeopleSoft Support
As I mentioned before, REST support in PeopleSoft was included in PeopleTools 8.52. This included the ability to use the Provide Web Service Wizard for REST services on top of the already supported SOAP services. Also, the Send Master and Handler Tester utilities were updated so they could be used with REST.

PeopleTools 8.53 delivered support for one of the most interesting features of REST GET integrations: caching. Using this feature, PeopleSoft can, as a service provider, indicate that the response should be cached (using the SetRESTCache method of the Message object). In this way, the next time a consumer asks for the service, the response will be retrieved from the cache instead of executing the service again. This is particularly useful when the returned information does not change very often (e.g., a list of countries, languages, etc.), and it can lead to performance gains over a similar SOAP integration.
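
As a rough consumer-side illustration of why cacheable GET responses pay off, here is a Python sketch of a client that honours a Cache-Control max-age header. It shows generic HTTP caching behaviour, not PeopleSoft-specific code, and the URL is a placeholder.

import time
import requests

# Placeholder endpoint returning a rarely-changing list (e.g. countries).
URL = "https://example.com/rest/countries"

_cache = {}  # url -> (expiry_epoch, parsed_json)

def get_countries():
    """Return the country list, reusing a cached copy until it expires."""
    cached = _cache.get(URL)
    if cached and cached[0] > time.time():
        return cached[1]                      # served from the local cache

    resp = requests.get(URL, timeout=10)
    resp.raise_for_status()

    # Honour a "Cache-Control: max-age=N" header if the provider sent one.
    max_age = 0
    for part in resp.headers.get("Cache-Control", "").split(","):
        part = part.strip()
        if part.startswith("max-age="):
            max_age = int(part.split("=", 1)[1])

    data = resp.json()
    if max_age:
        _cache[URL] = (time.time() + max_age, data)
    return data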

PeopleTools 8.54 brought, as in many other areas, significant improvements to REST support. First, the security of inbound services (in which PeopleSoft acts as the provider) was enhanced so that a service can be required to be consumed using SSL, basic HTTP authentication, basic HTTP authentication and SSL, or none of these.

On top of that, Query Access Services (QAS) were also made accessible through REST, so the creation of new provider services can be as easy as creating a new query and exposing it to REST.
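
For example, consuming such a query-backed service from a client can be as simple as an authenticated GET. This is only a sketch: the gateway URL pattern, query name, and credentials below are placeholders, not the documented QAS format.

import requests

# Placeholder values: substitute your own gateway host, query name and credentials.
BASE_URL = "https://peoplesoft.example.com/PSIGW/RESTListeningConnector"
QUERY_URL = BASE_URL + "/ExecuteQuery/MY_PUBLIC_QUERY/JSON"

resp = requests.get(
    QUERY_URL,
    auth=("PS_USER", "PS_PASSWORD"),  # basic HTTP authentication, over SSL
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # the query result set, as returned by the service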

Finally, the new Mobile Application Platform (an alternative to FLUID for mobilising PeopleSoft content) also uses REST as a cornerstone.

Conclusions
Although REST support is relatively new compared to SOAP web services, it has been available in PeopleSoft for a while now. Its efficiency and performance (remember GET service caching) make it an ideal choice for multiple integration scenarios. I'm currently building a mobile platform that interacts with PeopleSoft using REST services. This is keeping me busy, and you may have noticed that I'm not posting as regularly on this blog, but hopefully in some time from now I will be able to share with you some lessons learned from a large-scale REST implementation.


(*) Although it's possible to build REST services using IScripts, the Integration Broker solution introduced in PeopleTools 8.52 is considerably easier to implement and maintain. So, if you are on PeopleTools 8.52 or a higher release, Integration Broker would be the preferred approach. If you are on an earlier release, a PeopleTools upgrade would actually be the preferred approach, but I understand there might be other constraints. :)

Using Shared AM to Cache and Display Table Data

Andrejus Baranovski - Wed, 2015-07-29 23:12
This post is based on Steve Muench sample Nr. 156. In my personal opinion, ADF samples implemented by Steve Muench still remain one of the best sources of examples and solutions for various ADF use cases. Sample Nr. 156 describes how to use a Shared AM to display cached data in a UI table. Typically Shared AMs are used to implement cached LOVs (session or application scope), but this can go beyond LOVs: based on the use case, we could display cached data in a table or form. I have tested this approach with 12c and it works fine.

Download the sample application - ADFBCSharedSessionLOVApp.zip. The AM is defined with an application-scope cache level - this means cached data will be available to multiple users:


In order to display cached data on UI and pass it through ADF bindings layer, we need to use Shared AM configuration in bindings:


You should create the new Data Control reference entry manually in the DataBindings.cpx file; JDeveloper doesn't provide an option to select a Shared AM configuration. Simply change the configuration property to the shared one (HrModuleShared as per my example):


Make sure to use correct Data Control entry for iterator in the Page Definition. Cached table iterator binding should point to shared Data Control configuration:


This is how it looks on the UI - read-only table data is fetched once and cached in the application scope cache. Other users will reuse the cached data, without re-fetching it from the DB:


The Jobs VO is set with the AutoRefresh = true property. This turns on the DB change notification listener mechanism and keeps VO data in sync when changes happen in the DB. This helps to auto-refresh the cached VO (read more about it in Auto Refresh for ADF BC Cached LOV):


Here is the test. Let's change Job Title attribute value in DB:


Click on any row in the Jobs table, or use any of the buttons (to make a new request). The cached VO will be re-executed and new data will be fetched from the DB, including the latest changes:


You should see in the log that a DB change notification was received, the VO was re-executed, and the VO data was re-fetched:

The role of Coherence in Batch

Anthony Shorten - Wed, 2015-07-29 19:48

Lately I have been talking to partners and customers on older versions of the Oracle Utilities Application Framework and they are considering upgrading to the latest version. One of the major questions they ask is about the role of Oracle Coherence in our architecture. Here are some clarifications:

  • We supply a subset of the runtime Oracle Coherence libraries we use in the batch architecture with the installation. It does not require a separate Oracle Coherence license (unless you intend to use Coherence for customizations, which does require a license).
  • We only use a subset of the Oracle Coherence API, around cluster management and load balancing of the batch architecture. If you are a customer who uses the Oracle Coherence Pack within Oracle Enterprise Manager to monitor the batch component, that is not recommended at the present time: the Coherence Pack will report that components are missing and therefore give erroneous availability information. We have developed our own monitoring API within the framework that is exposed via the Oracle Application Management Pack for Oracle Utilities.
  • The idea behind the use of Oracle Coherence is as follows:
    • The Batch Architecture uses a Coherence based Cluster. This can be configured to use uni-cast or multi-cast to communicate across the cluster.
    • A cluster has a number of members (also known as nodes to some people). In our case members are threadpools and job threads.
    • A threadpool is basically a running Java Virtual Machine, preloaded with the Oracle Utilities Application Framework, ready to accept work. The reason we use threadpools is that when you execute java processes in Oracle Utilities Application Framework, there is an overhead in memory of loading the framework cache and objects, as well as java itself, before a job can execute. By creating a threadpool, this overhead is minimized and the threadpool can be used across lots of job threads.
    • Threadpools are named (this is important) and have a thread limit (this is a batch limit, not a Java limit, as batch threads are heavier than online threads). The word "heavier" is used because batch threads are long-running, whereas online threads are typically short-running.
    • When a threadpool is started, locally or remotely, it is added to the cluster. A named threadpool can have multiple instances (even on the same machine). The threadpool limit is the sum of the limits across all its instances.
    • When a batch job thread is executed (some jobs are single-threaded, others multi-threaded) it is submitted to the cluster. Oracle Coherence then load balances those threads across the threadpool named in the job thread parameters.
    • Oracle Coherence tracks the threadpools and batch job threads so that if any failure occurs then the thread and threadpool are aware. For example, if a threadpool crashes the cluster is made aware and the appropriate action can be taken. This keeps the architecture in synchronization at all times.
  • We have built a wizard (bedit) to help build the properties files that drive the architecture. This covers clusters, threadpools and even individual batch jobs.
  • When building a cluster we tend to recommend the following:
    • Create a cache threadpool per machine to minimize member-to-member network traffic. A cache threadpool does not run jobs; it just acts as a coordination point for Oracle Coherence. Without a cache threadpool, each member communicates with every other member, which can mean quite a lot of networking when you have a complex network of members (including lots of active batch job threads).
    • Create an administration threadpool with no threads to execute. This is just a configuration concept that gives you a member to connect to JMX through. The JMX API is available from any active threadpool, but it is good practice to isolate JMX traffic from other traffic.
    • Create a pool of threadpools to cover key jobs, and other pools for the remaining jobs. The advantage is easier monitoring and control of resources within the JVM.

For more information about this topic and other advice on batch refer to the Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support.

Upgrade your SES Database From 11.2.0.3 to 11.2.0.4 for the PeopleSoft Search Framework

PeopleSoft Technology Blog - Wed, 2015-07-29 17:33
An Oracle database Upgrade from 11.2.0.3 to 11.2.0.4 is available for Secure Enterprise Search (SES) with PeopleSoft.  This document on My Oracle Support provides step by step instructions for performing the upgrade.  Note that this upgrade is available for PeopleTools 8.53 or higher on Unix/Linux environments.

I Wish I Sold More

Cary Millsap - Wed, 2015-07-29 17:26
I flew home yesterday from Karen’s memorial service in Jacksonville, on a connecting flight through Charlotte. When I landed in Charlotte, I walked with all my stuff from my JAX arrival gate (D7) to my DFW departure gate (B15). The walk was more stressful than usual because the airport was so crowded.

The moment I set my stuff down at B15, a passenger with expensive clothes and one of those permanent grins established eye contact, pointed his finger at me, and said, “Are you in First?”

Wai... Wha...?

I said, “No, platinum.” My first instinct was to explain that I had a right to occupy the space in which I was standing. It bothers me that this was my first instinct.

He dropped his pointing finger, and his eyes went no longer interested in me. The big grin diminished slightly.

Soon another guy walked up. Same story: the I’m-your-buddy-because-I’m-pointing-my-finger-at-you thing, and then, “First Class?” This time the answer was yes. “ALRIGHT! WHAT ROW ARE YOU IN?” Row two. “AGH,” like he’d been shot in the shoulder. He holstered his pointer finger, the cheery grin became vaguely menacing, and he resumed his stalking.

One guy who got the “First Class?” question just stared back. So, big-grin guy asked him again, “Are you in First Class?” No answer. Big-grin guy leaned in a little bit and looked him square in the eye. Still no answer. So he leaned back out, laughed uncomfortably, and said half under his breath, “Really?...”

I pieced it together watching this big, loud guy explain to his traveling companions so everybody could hear him, he just wanted to sit in Row 1 with his wife, but he had a seat in Row 2. And of course it will be so much easier to take care of it now than to wait and take care of it when everybody gets on the plane.

Of course.

This is the kind of guy who sells things to people. He has probably sold a lot of things to a lot of people. That’s probably why he and his wife have First Class tickets.

I’ll tell you, though, I had to battle against hoping he’d hit his head and fall down on the jet bridge (I battled coz it’s not nice to hope stuff like that). I would never have said something to him; I didn’t want to be Other Jackass to his Jackass. (Although people might have clapped if I had.)

So there’s this surge of emotions, none of them good, going on in my brain over stupid guy in the airport. Sales reps...

This is why Method R Corporation never had sales reps.

But that’s like saying I’ve seen bad aircraft engines before and so now in my airline, I never use aircraft engines. Alrighty then. In that case, I hope you like gliders. And, hey: gliders are fine if that makes you happy. But a glider can’t get me home from Florida. Or even take off by itself.

I wish I sold more Method R software. But never at the expense of being like the guy at the airport. It seems I’d rather perish than be that guy. This raises an interesting question: is my attitude on this topic just a luxury for me that cheats my family and my employees out of the financial rewards they really deserve? Or do I need to become that guy?

I think the answer is not A or B; it’s C.

There are also good sales people, people who sell a lot of things to a lot of people, who are nothing like the guy at the airport. People like Paul Kenny and the honorable, decent, considerate people I work with now at Accenture Enkitec Group who sell through serving others. There were good people selling software at Hotsos, too, but the circumstances of my departure in 2008 prevented me from working with them. (Yes, I do realize: my circumstances would not have prevented me from working with them if I had been more like the guy at the airport.)

This need for duality—needing both the person who makes the creations and the person who connects those creations to people who will pay for them—is probably the most essential of the founder’s dilemmas. These two people usually have to be two different people. And both need to be Good.

In both senses of the word.

Three Steps to Get Big Data Ready for HR

Linda Fishman Hoyle - Wed, 2015-07-29 13:02

A Guest Post by Melanie Hache-Barrois, Oracle HCM Strategy Director, Southern Europe

Big data will revolutionize HR practices―here is how to hit the ground running with your implementation.

Big data analytics promises to deliver new insights into the workforce; these insights can help HR better predict trends and policy outcomes, and thereby, make the right decisions. It has the power to help HR to predict and plan organizational performance, to minimize the cost, time, and risk of taking on new HR initiatives, and to understand, develop, and maintain a productive workforce over a single technology platform, and much more.

Big data analytics has a huge role to play in the future of HR, but it is important that HR teams get prepared in the right way. Here are our tips to make sure that your data is ready for the big data revolution.


1.  Remove data ‘islands’

The first step is to identify what kind of data you need for a truly successful HR strategy. Too often, HR teams experience data organized in silos, cut off from the rest of the organization. The migration to big data provides the perfect opportunity to identify data islands within your HR systems and define a strategy to integrate and reorganize them.

2.  Use a single interface

Understanding how data is collected within your organization is fundamental to a successful big data strategy. You have to avoid ‘copy and paste’ practices, and instead, make sure that data is collected automatically, and seamlessly integrated in one interface. The less you have to manually record information and integrate it into your HR systems, the better. It is therefore crucial to choose a single simple interface that will collect all your data and make it easily accessible to your team.

3.  Start simple

Once you have chosen the type of data you need and the way and where you will collect it, you can decide the kind of analytics you need. To be efficient and keep it simple, you can start with simple correlations to understand how big data analytics works and what kind of results you can get. You can then slowly increase the analytical complexity, heading to predictive analytics.

These three steps will ensure that Oracle's big data solution will help you deliver an enhanced HR strategy that meets your corporate goals.

My Friend Karen

Cary Millsap - Wed, 2015-07-29 11:54
My friend Karen Morton passed away on July 23, 2015 after a four-month battle against cancer. You can hear her voice here.

I met Karen Morton in February 2002. The day I met her, I knew she was awesome. She told me the story that, as a consultant, she had been doing something that was unheard-of. She guaranteed her clients that if she couldn’t make things on their systems go at least X much faster on her very first day, then they wouldn’t have to pay. She was a Give First person, even in her business. That is really hard to do. After she told me this story, I asked the obvious question. She smiled her big smile and told me that her clients had always paid her—cheerfully.

It was an honor when Karen joined my company just a little while later. She was the best teammate ever, and she delighted every customer she ever met. The times I got to work with Karen were bright spots in my life, during many of the most difficult years of my career. For me, she was a continual source of knowledge, inspiration, and courage.

This next part is for Karen’s family and friends outside of work. You know that she was smart, and you know she was successful. What you may not realize is how successful she was. Your girl was famous all over the world. She was literally one of the top experts on Earth at making computing systems run faster. She used her brilliant gift for explaining things through stories to become one of the most interesting and fun presenters in the Oracle world to go watch, and her attendance numbers proved it. Thousands of people all over the world know the name, the voice, and the face of your friend, your daughter, your sister, your spouse, your mom.

Everyone loved Karen’s stories. She and I told stories and talked about stories, it seems like, all the time we were together. Stories about how Oracle works, stories about helping people, stories about her college basketball career, stories about our kids and their sports, ...

My favorite stories of all—and my family’s too—were the stories about her younger brother Ted. These stories always started out with some middle-of-the-night phone call that Karen would describe in her most somber voice, with the Tennessee accent turned on full-bore: “Kar’n: This is your brother, Theodore LeROY.” Ted was Karen’s brother Teddy Lee when he wasn’t in trouble, so of course he was always Theodore LeROY in her stories. Every story Karen told was funny and kind.

We all wanted to have more time with Karen than we got, but she touched and warmed the lives of literally thousands of people. Karen Morton used her half-century here on Earth with us as well as anyone I’ve ever met. She did it right.

God bless you, Karen. I love you.

August 12: Atradius Collections Oracle Sales Cloud Customer Forum

Linda Fishman Hoyle - Wed, 2015-07-29 11:06

Join us for another Oracle Customer Reference Forum on August 12th, 2015 at 8:00 a.m. PT / 11:00 a.m. ET / 5:00 p.m. CEST.

Sonja van Haasteren, Global Customer Experience Manager of Atradius Collections, will talk about the company’s journey with Oracle CX products focused on Oracle Sales Cloud with Oracle Marketing Cloud and its path to expand with Oracle Data Cloud.

Atradius Collections is a global leader in trade-invoice-collection services. It provides solutions to recover domestic and international trade invoices. Atradius Collections handles more than 100,000 cases a year for more than 14,500 customers, covering over 200 countries.

Register now to confirm your attendance for this informative event on August 12.

TekStream Reduces Project Admin Costs by 30% with Oracle Documents Cloud

WebCenter Team - Wed, 2015-07-29 07:54

Read this latest announcement from Oracle to find out more about how TekStream Solutions, a solution services company in North America, streamlined project management and administration and improved client project delivery with Oracle Documents Cloud Service, an enterprise-grade cloud collaboration and file sync and share solution. Learn how, within the first month of its use, TekStream was able to cut project administration costs by 30% and reduce complexity, not only driving client results faster but also providing a superior project experience to both its consultants and its clients.

And here's a brief video with Judd Robins, executive vice president, Consulting Services of TekStream Solutions as he discusses the specific areas where they were looking to make improvements and how Oracle Documents Cloud enabled easy and yet secure cloud collaboration not only among its consultants who are always on the go, but also with its clients.

To learn more about Oracle Documents Cloud Service and how it can help your enterprise visit us at cloud.oracle.com/documents.

Existence

Jonathan Lewis - Wed, 2015-07-29 06:05

A recent question on the OTN Database Forum asked:

I need to check if at least one record present in table before processing rest of the statements in my PL/SQL procedure. Is there an efficient way to achieve that considering that the table is having huge number of records like 10K.

I don’t think many readers of the forum would consider 10K to be a huge number of records; nevertheless it is a question that could reasonably be asked, and it should prompt a little discussion.

First question to ask, of course, is: how often do you do this and how important is it to be as efficient as possible? We don’t want to waste a couple of days of coding and testing to save five seconds every 24 hours. Some context is needed before charging into high-tech geek solution mode.

Next question is: what’s wrong with writing code that just does the job, so that if it finds the job is complete after zero rows then you haven’t wasted any effort? This seems reasonable in (say) a PL/SQL environment where we might discuss the following pair of strategies:


Option 1:
=========
-- execute a select statement to see if any rows exist

if (flag is set to show rows) then
    for r in (select all the rows) loop
        do something for each row
    end loop;
end if;

Option 2:
=========
for r in (select all the rows) loop
    do something for each row;
end loop;

If this is the type of activity you have to do then it does seem reasonable to question the sense of putting in an extra statement to see if there are any rows to process before processing them. But there is a possible justification for doing this. The query to find just one row may produce a very efficient execution plan, while the query to find all the rows may have to do something much less efficient even when (eventually) it finds that there is no data. Think of the differences you often see between a first_rows_1 plan and an all_rows plan; think about how Oracle can use index-only access paths and table elimination – if you’re only checking for existence you may be able to produce a MUCH faster plan than you can for selecting the whole of the first row.

Next question, if you think that there is a performance benefit from the two-stage approach: is the performance gain worth the cost (and risk) of adding a near-duplicate statement to the code – that’s two statements that have to be maintained every time you make a change. Maybe it’s worth “wasting” a few seconds on every execution to avoid getting the wrong results (or an odd extra hour of programmer time) once every few months. Bear in mind, also, that the optimizer now has to optimize two statements instead of one – you may not notice the extra CPU usage in testing but perhaps in the live environment the execution benefit will be eroded by the optimization cost.

Next question, if you still think that the two-stage process is a good idea: will it result in an inconsistent database state?! If you select and find a row, then run the main query and find that there are no rows to process because something modified and “hid” the row you found on the first pass – what are you going to do? Will this make the program crash? Will it produce an erroneous result on this run, or will a silent side effect be that the next run produces the wrong results? (See Billy Verreynne’s comment on the original post.) Should you set the session to “serializable” before you start the program, or maybe lock a critical table to make sure it can’t change?

So, assuming you’ve decided that some form of “check for existence then do the job” is both desirable and safe, what’s the most efficient strategy? Here’s one of the smarter solutions, one that minimises risk and effort (in this case using a PL/SQL environment).


select  count(*)
into    m_counter
from    dual
where   exists ({your original driving select statement})
;

if m_counter = 0 then
    null;
else
    for c1 in {your original driving select statement} loop
        -- do whatever
    end loop;
end if;

The reason I describe this solution as smarter, with minimum risk and effort, is that (a) you use EXACTLY the same SQL statement in both locations so there should be no need to worry about making the same effective changes twice to two slightly different bits of SQL and (b) the optimizer will recognise the significance of the existence test and run in first_rows_1 mode with maximum join elimination and avoidance of redundant table visits. Here’s a little data set I can use to demonstrate the principle:


create table t1
as
select
        mod(rownum,200)         n1,     -- scattered data
        mod(rownum,200)         n2,
        rpad(rownum,180)        v1
from
        dual
connect by
        level <= 10000
;

delete from t1 where n1 = 100;
commit;

create index t1_i1 on t1(n1);

begin
        dbms_stats.gather_table_stats(
                user,
                't1',
                cascade => true,
                method_opt => 'for all columns size 1'
        );
end;
/

It’s just a simple table with index, but the index isn’t very good for finding the data – it’s repetitive data widely scattered through the table: 10,000 rows with only 200 distinct values. But check what happens when you do the dual existence test – first we run our “driving” query to show the plan that the optimizer would choose for it, then we run with the existence test to show the different strategy the optimizer takes when the driving query is embedded:


alter session set statistics_level = all;

select  *
from    t1
where   n1 = 100
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

select  count(*)
from    dual
where   exists (
                select * from t1 where n1 = 100
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last cost'));

Notice how I’ve enabled rowsource execution statistics and pulled the execution plans from memory with their execution statistics. Here they are:


select * from t1 where n1 = 100

-------------------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |    38 (100)|      0 |00:00:00.01 |     274 |
|*  1 |  TABLE ACCESS FULL| T1   |      1 |     50 |    38   (3)|      0 |00:00:00.01 |     274 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("N1"=100)

select count(*) from dual where exists (   select * from t1 where n1 = 100  )

---------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |     3 (100)|      1 |00:00:00.01 |       2 |
|   1 |  SORT AGGREGATE    |       |      1 |      1 |            |      1 |00:00:00.01 |       2 |
|*  2 |   FILTER           |       |      1 |        |            |      0 |00:00:00.01 |       2 |
|   3 |    FAST DUAL       |       |      0 |      1 |     2   (0)|      0 |00:00:00.01 |       0 |
|*  4 |    INDEX RANGE SCAN| T1_I1 |      1 |      2 |     1   (0)|      0 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter( IS NOT NULL)
   4 - access("N1"=100)

For the original query the optimizer did a full tablescan – that was the most efficient path. For the existence test the optimizer decided it didn’t need to visit the table for “*” and it would be quicker to use an index range scan to access the data and stop after one row. Note, in particular, that the scan of the dual table didn’t even start – in effect we’ve got all the benefits of a “select {minimum set of columns} where rownum = 1” query, without having to work out what that minimum set of columns was.

But there’s an even more cunning option – remember that we didn’t scan dual when there were no matching rows:


for c1 in (

        with driving as (
                select  /*+ inline */
                        *
                from    t1
        )
        select  /*+ track this */
                *
        from
                driving d1
        where
                n1 = 100
        and     exists (
                        select
                                *
                        from    driving d2
                        where   n1 = 100
                )
) loop

    -- do your thing

end loop;

In this specific case the subquery would automatically go inline, so the hint here is actually redundant; in general you’re likely to find the optimizer materializing your subquery and bypassing the cunning strategy if you don’t use the hint. (One of the cases where subquery factoring doesn’t automatically materialize is when you have no WHERE clause in the subquery.)

Here’s the execution plan pulled from memory (after running this SQL through an anonymous PL/SQL block):


SQL_ID  7cvfcv3zarbyg, child number 0
-------------------------------------
WITH DRIVING AS ( SELECT /*+ inline */ * FROM T1 ) SELECT /*+ track
this */ * FROM DRIVING D1 WHERE N1 = 100 AND EXISTS ( SELECT * FROM
DRIVING D2 WHERE N1 = 100 )

---------------------------------------------------------------------------------------------------
| Id  | Operation          | Name  | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |      1 |        |    39 (100)|      0 |00:00:00.01 |       2 |
|*  1 |  FILTER            |       |      1 |        |            |      0 |00:00:00.01 |       2 |
|*  2 |   TABLE ACCESS FULL| T1    |      0 |     50 |    38   (3)|      0 |00:00:00.01 |       0 |
|*  3 |   INDEX RANGE SCAN | T1_I1 |      1 |      2 |     1   (0)|      0 |00:00:00.01 |       2 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter( IS NOT NULL)
   2 - filter("T1"."N1"=100)
   3 - access("T1"."N1"=100)

You’ve got just one statement – and you’ve only got one version of the complicated text because you put it into a factored subquery; but the optimizer manages to use one access path for one instantiation of the text and a different one for the other. You get an efficient test for existence and only run the main query if some suitable data exists, and the whole thing is entirely read-consistent.

I have to say, though, I can’t quite make myself 100% enthusiastic about this code strategy – there’s just a nagging little doubt that the optimizer might come up with some insanely clever trick to try and transform the existence test into something that’s supposed to be faster but does a lot more work; but maybe that’s only likely to happen on an upgrade, which is when you’d be testing everything very carefully anyway (wouldn’t you) and you’ve got the “dual/exists” fallback position if necessary.

Footnote:

Does anyone remember the thing about reading execution plans “first child first”? This existence test is one of the interesting cases where it’s not the first child of a parent operation that runs first: it’s the case I call the “constant subquery”.


Better Ways to Play and Try

Oracle AppsLab - Tue, 2015-07-28 20:50
  • Fact 1: Dazzling animated displays (sprites, shaders, parallax, 3D) are more plentiful and easier to make than ever before.
  • Fact 2: More natural and expressive forms of input (swiping, pinching, gesturing, talking) are being implemented and enhanced every day.
  • Fact 3: Put these two together and the possible new forms of human computer interaction are endless. The only limit is our imagination.

That’s the problem: our imagination. We can’t build new interactions until A) someone imagines them, and B) the idea is conveyed to other people. As a designer in the Emerging Interactions subgroup of the AppsLab, this is my job – and I’m finding that both parts of it are getting harder to do.

If designers can’t find better ways of imagining – and by imagining I mean the whole design process from blank slate to prototype – progress will slow and our customers will be unable to unleash their full potential.

So what does it mean to imagine and how can we do it better?

Imagination starts with a daydream or an idle thought. “Those animations of colliding galaxies are cool. I wonder if we could show a corporate acquisition as colliding org charts. What would that look like?”

What separates a mere daydreamer from an actual designer is the next step: playing. To really imagine a new thing in any meaningful way you have to roll up your sleeves and actually start playing with it in detail. At first you can do this entirely in your mind – what Einstein called a “thought experiment.”  This can take weeks of staring into space while your loved ones look on with increasing concern.

Playing is best done in your mind because your mind is so fluid. You can suspend the laws of physics whenever they get in the way. You can turn structures inside out in the blink of an eye, changing the rules of the game as you go. This fluidity, this fuzziness, is the mind’s greatest strength, but also its greatest weakness.

So sooner or later you have to move from playing to trying. Trying means translating the idea into a visible, tangible form and manipulating it with the laws of physics (or at least the laws of computing) re-enabled. This is where things get interesting. What was vaguely described must now be spelled out. The inflexible properties of time and space will expose inconvenient details that your mind overlooked; dealing with even the smallest of these details can derail your entire scheme – or take it in wild, new directions. Trying is a collaboration with Reality.

Until recently, trying was fairly easy to do. If the thing you were inventing was a screen layout or a process flow, you could sketch it on paper or use a drawing program to make sure all the pieces fit together. But what if the thing you are inventing is moving? What if it has hundreds of parts each sliding and changing in a precise way? How do you sketch that?

My first step in the journey to a better way was to move from drawing tools like Photoshop and OmniGraffle to animation tools like Hype and Edge – or to Keynote (which can do simple animations). Some years ago I even proposed a standard “animation spec” so that developers could get precise frame-by-frame descriptions.

The problem with these tools is that you have to place everything by hand, one element at a time. I often begin by doing just that, but when your interface is composed of hundreds of shifting, spinning, morphing shapes, this soon becomes untenable. And when even the simplest user input can alter the course and speed of everything on the screen, and when that interaction is the very thing you need to explore, hand drawn animation becomes impossible.

To try out new designs involving this kind of interaction, you need data-driven animation – which means writing code. This is a significant barrier for many designers. Design is about form, color, balance, layout, typography, movement, sound, rhythm, harmony. Coding requires an entirely different skill set: installing development environments, converting file formats, constructing database queries, parsing syntaxes, debugging code, forking githubs.

A software designer needs a partial grasp of these things in order to work with developers. But most designers are not themselves coders, and do not want to become one. I was a coder in a past life, and even enjoy coding up to a point. But code-wrangling, and in particular debugging, distracts from the design process. It breaks my concentration, disrupts my flow; I get so caught up in tracking down a bug that I forget what I was trying to design in the first place.

The next stage of my journey, then, was to find relatively easy high-level programming languages that would let me keep my eye on the ball. I did several projects using Processing (actually Processing.js), a language developed specifically for artists. I did another project using Python – with all coding done on the iPad so that I could directly experience interactions on the tablet with every iteration of the code.

These projects were successful but time-consuming and painful to create. Traditional coding is like solving a Rubik’s Cube: twist and twist and twist until order suddenly emerges from chaos. This is not the way I want to play or try. I want a more organic process, something more like throwing a pot: I want to grab a clump of clay and just continuously shape it with my hands until I am satisfied.

I am not the only one looking for better ways to code. We are in the midst of an open source renaissance, an explosion of literally thousands of new languages, libraries, and tools. In my last blog post I wrote about people creating radically new and different languages as an art form, pushing the boundaries in all directions.

In “The Future of Programming,” Eric Elliott argues for reactive programming, visual IDEs, even genetic and AI-assisted coding. In “Are Prototyping Tools Becoming Essential?,” Mark Wilcox argues that exploring ideas in the Animation Era requires a whole range of new tools. But if you are only going to follow one of these links, see Bret Victor’s “Learnable Programming.”

After months of web surfing I stumbled upon an interesting open source tool originally designed for generative artists that I’ve gotten somewhat hooked on. It combines reactive programming and a visual IDE with some of Bret Victor’s elegant scrubbing interactions.

More about that tool in my next blog post.

Top 5 Ways to Personalize My Oracle Support

Joshua Solomin - Tue, 2015-07-28 17:47

It doesn't take long using My Oracle Support (MOS) to realize just how massive the pool of data underlying it all is—knowledge articles, patches and updates, advisories and security alerts, for every version of every Oracle product line.

My first week using MOS, my jaw dropped at the sheer scale of available info. Not surprisingly, some of the first tips we get as Oracle employees are about how to personalize My Oracle Support to target just the areas we're interested in.

Take a minute and follow our Top 5 Ways to personalize My Oracle Support to better suit your workflow. These easy-to-follow tips can help you get the most from the application, and avoid drowning in the MOS "Sea of Information."

1. Customizing the Screen Panels

One of the easiest personalization features is to adjust the panels displayed on a given page or tab. Nearly every activity tab allows you to reorganize, move, or even hide displayed panels on the screen using the Customize Page.... link in the top right area of the screen.

When you click the link, the page will update and display a series of widgets on each panel, allowing you to customize the content.

The wrench icon lets you customize the panel name, while the circular gear icon lets you move the panel within the column. The Add Content action displays a context-sensitive panel of new content areas that can be added to the column.

2. Enable PowerView

The PowerView applet is one of the fastest ways to limit the information displayed in MOS. PowerView filters the information presented to you based on products, support identifiers, or other custom filters you select. Once you've set up a PowerView filter set, any activities going forward—searches, patches and bugs, service requests (SRs)—will only appear if they are tied to your selected filters.

To build a PowerView, click the PowerView icon in the upper-left area of the screen.

To create the view, first select the primary filter criteria. "Support Identifier", "Product", and "Product Line" are common primary filters.

Remember, the goal is to use PowerView to filter everything you see in MOS against the relevant contexts you establish.

3. Set Up SR Profiles

This one's a bit trickier than the first two, but can be an enormous time-saver if you regularly enter service requests into MOS.

Go to the Settings tab in MOS, and look for Service Request Profiles link on your left. In some cases you may need to click the More... dropdown to find the Settings tab.

In the profiles view you'll see any existing profiles and an action button to create a new profile. The goal for an SR profile is to streamline the process of creating an SR for a specific hardware or software product that you're responsible for managing. When creating an SR you'll select the pre-generated profile you created earlier, and MOS fills in the relevant details you input.

4. Enable Hot Topics Email

Hot Topics Email is a second option available in the main MOS Settings tab.

Hot Topics is an automated notification system that will alert you any time specified SRs, Knowledge Documents, or security notices are published or updated.

There are dozens of options to choose from in setting up your alerts, based on product, Support Identifier (SI), content you've marked as "Favorite", and more. See the video training "How to Use Hot Topics Email Notifications" (Document 793436.2) to get a better understanding of how to use this feature.

5. Enable Service Request Email Updates

Back in the main MOS Settings tab, click the link for My Account on the left. This will take you to a general profile view of your MOS account. What we're looking for is a table cell in the Support Identifiers table at the top that reads SR Details.

By checking this box, you are indicating that you want to be automatically notified via email any time a service request tied to the support identifier gets updated.

The goal behind this is to stay abreast of any changes to SRs for the chosen support identifier. You don't have to keep "checking in" or wait for an Oracle Support engineer to reach out to you when progress is made on SRs. If a Support engineer requests additional information on a particular configuration, for example, that would be conveyed in the SR Email Update sent to you.

The trick is to be judicious using this setting. My Oracle Support could quickly inundate you with SR details notices if there are lots of active SRs tied to the support identifier(s), so this may not be desirable in some cases.

Conclusion

With these five options enabled, you've started tailoring your My Oracle Support experience to better streamline your workflow, and keep the most relevant, up-to-date information in front of you.

Give them a whirl, and let us know how it goes!

Reuters: Blackboard up for sale, seeking up to $3 billion in auction

Michael Feldstein - Tue, 2015-07-28 13:48

By Phil Hill

As I was writing a post about Blackboard’s key challenges, I got word from Reuters (anonymous sources, so interpret accordingly) that the company is on the market, seeking up to $3 billion. From Reuters:

Blackboard Inc, a U.S. software company that provides learning tools for high school and university classrooms, is exploring a sale that it hopes could value it at as much as $3 billion, including debt, according to people familiar with the matter.

Blackboard’s majority owner, private equity firm Providence Equity Partners LLC, has hired Deutsche Bank AG and Bank of America Corp to run an auction for the company, the people said this week. [snip]

Providence took Blackboard private in 2011 for $1.64 billion and also assumed $130 million in net debt.

A pioneer in education management software, Blackboard has seen its growth slow in recent years as cheaper and faster software upstarts such as Instructure Inc have tried to encroach on its turf. Since its launch in 2011, Instructure has signed up 1,200 colleges and school districts, according to its website.

This news makes the messaging from BbWorld, as well as the company’s ability to execute on its strategy – particularly delivering the new Ultra user experience across all product lines, including the core LMS – much more important. I’ll get to that subject in the next post.

This news should not be all that unexpected, as one common private equity strategy is to reorganize and clean up a company (reduce headcount, rationalize management structures, reorient the strategy) and then sell within 3 – 7 years. As we have covered here at e-Literate, Blackboard has gone through several rounds of layoffs, and many key employees have already left the company due to new management and restructuring plans. CEO Jay Bhatt has been consistent in his message about moving from a conglomeration of silo’d mini-companies based on past M&A to a unified company. We have also described the significant changes in strategy – both adopting open source solutions and planning to rework the entire user experience.

Also keep in mind that there is massive investment in ed tech lately, not only from venture capital but also from M&A.

Update 1: I should point out that the part of this news that is somewhat surprising is the potential sale while the Ultra strategy is incomplete. As Michael pointed out over the weekend:

Ultra is a year late: Let’s start with the obvious. The company showed off some cool demos at last year’s BbWorld, promising that the new experience would be Coming Soon to a Campus Near You. Since then, we haven’t really heard anything. So it wasn’t surprising to get confirmation that it is indeed behind schedule. What was more surprising was to see CEO Jay Bhatt state bluntly in the keynote that yes, Ultra is behind schedule because it was harder than they thought it would be. We don’t see that kind of no-spin honesty from ed tech vendors all that often.

Ultra isn’t finished yet: The product has been in use by a couple of dozen early adopter schools. (Phil and I haven’t spoken with any of the early adopters yet, but we intend to.) It will be available to all customers this summer. But Blackboard is calling it a “technical preview,” largely because there are large swathes of important functionality that have not yet been added to the Ultra experience–things like tests and groups. It’s probably fine to use it for simple (and fairly common) on-campus use cases, but there are still some open manholes here.

Update 2: I want to highlight (again) the nature of this news story. It’s from Reuters using multiple anonymous sources. While Reuters should be trustworthy, please note that the story has not yet been confirmed.

Update 3: In contact with Blackboard, I received the following statement (which does not answer any questions, but I am sharing nonetheless).

Blackboard, like many successful players in the technology industry, has become subject of sale rumors. Although we are transparent in our communications about the Blackboard business and direction when appropriate, it is our policy not to comment on rumors or speculation.

Blackboard is in an exciting industry that is generating substantial investor interest. Coming off a very successful BbWorld 2015 and a significant amount of positive customer and market momentum, potential investor interest in our company is not surprising.

We’ll update as we learn more, including if someone confirms the news outside of Reuters and their sources.

The post Reuters: Blackboard up for sale, seeking up to $3 billion in auction appeared first on e-Literate.