
Feed aggregator

Here Are Your First Links of 2015

Oracle AppsLab - Wed, 2015-01-07 20:16

Our team has been busy since the New Year, competing in the AT&T Developer Summit hackathon, which is Noel’s (@noelportugal) Everest, i.e. he tries to climb it every year, see 2013 and 2014.

If you follow our Twitter (@theappslab) or Facebook page, you might have seen the teaser. If not, here it is:

Image courtesy of AT&T Developer Program’s Facebook page

Look for details later this week.

While you wait for that, enjoy these tidbits from our Oracle Applications User Experience colleagues.

Fit for Work: A Team Experience of Wearable Technology

Wearables are a thing, just look at the CES 2015 coverage, so Misha (@mishavaughan) decided to distribute Fitbits among her team to collect impressions.

Good idea, get everyone to use the same device, collect feedback, although it seems unfair, given Ultan (@ultan) is perhaps the fittest person I know. Luckily, this wasn’t a contest of fitness or of most-wrist-worn-gadgets. Rather, the goal was to gather as much anecdotal experience as possible.

Bonus, there’s a screenshot of the Oracle HCM Cloud Employee Wellness prototype.

¡Viva Mexico!

Fresh off a trip to Jolly Old England, the OAUX team will be in Santa Fe, Mexico in late February. Stay tuned for details.

Speaking of, one of our developers in Oracle’s Mexico Development Center, Sarahi Mireles (@sarahimireles) wrote up her impressions and thoughts on the Shape and ShipIt we held in November, en español.

And finally, OAUX and the Oracle College Hire Program

Oracle has long had programs for new hires right out of college. Fun fact, I went through one myself many eons ago.

Anyway, we in OAUX have been graciously invited to speak to these new hires several times now, and this past October, Noel, several other OAUX luminaries and David (@dhaimes) were on a Morning Joe panel titled “Head in the Clouds,” focused loosely around emerging technologies, trends and the impact on our future lives.

Ackshaey Singh (from left to right), DJ Ursal (@djursal), Misha Vaughan (@mishavaughan), Joe Goldberg, Noel Portugal (@noelportugal), and David Haimes (@dhaimes)

 

Interesting discussion to be sure, and after attending three of these Morning Joe panels now, I’m happy to report that the attendance seems to grow with each iteration, as does the audience interaction.

Good times.

APEX Conferences in 2015

Dimitri Gielis - Wed, 2015-01-07 17:30
In 2015 more APEX-only conferences are scheduled than ever before. I definitely recommend attending at least one of them, as it's a great time to meet other people, industry experts, and the APEX Development Team.

You'll find me at the following conferences:

25-MAR-2015: APEX World - Rotterdam, the Netherlands

Every year a hit - and this year extra special, as it will be at a different location: the beautiful SS Rotterdam. It's not yet decided which topics I'll present in the Netherlands.



9/10-JUN-2015 - APEXConnect - Düsseldorf, Germany

It will be my first time at a conference hosted by DOAG. I've heard so many great things about the excitement for APEX in Germany, so I look forward to meeting the people over there.

On the 10th at 11:00 I'll present on the integration of APEX and Microsoft SharePoint.



21/25-JUN-2015 - ODTUG KScope - Hollywood, Florida

I attended KScope for the first time in 2006 and have returned almost every year since. One of my favourite conferences for sure. On Sunday the APEX Development Team is hosting the APEX Symposium: a full day of talks by the development team themselves. It also gives you an opportunity to meet them and many other experts in a nice informal setting.

This year these are my talks:

  • A primer on Web Components in APEX
  • How to make APEX print through node.js




There are some other conferences with great APEX tracks, for example GLOC, Collaborate, DOAG, UKOUG, ...

Next to the conferences there are many local APEX Meetups too. It's a totally different setting and concept compared to a conference, so definitely something to check out too. You can take your code to the meetups and ask questions or show what you did yourself.

You'll find me at most (if not all) of the Belgian APEX Meetups, and I might attend some other meetups or conferences depending on my schedule.



I look forward to seeing you at one of the conferences or meetups.

Categories: Development

Contribution by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Wed, 2015-01-07 15:56
Contribution by Angela Golla, Infogram Deputy Editor

My Oracle Support Essentials Webcast Series
The My Oracle Support Essentials Series brings interactive expertise straight to your desktop. The goal of this program is to communicate with our customers and partners, provide tips and tricks on how to effectively work with Oracle Support, and take advantage of the PROACTIVE tools that are provided with your Oracle Support contract. The January schedule is available in Doc Id 553747.1.





Recommended Reading: Oracle Database 12c NUMA-Related Topics

Kevin Closson - Wed, 2015-01-07 15:42

This is a short post to recommend some recent blog posts by Nikolay Manchev and Bertrand Drouvot on the topic of Oracle Database 12c NUMA awareness.

Nikolay provides a very helpful overview on Linux Control Groups and how they are leveraged by Oracle Database 12c. Bertrand Drouvot carried the topic a bit further by leveraging SLOB to assess the impact of NUMA remote memory on a cached Oracle Database workload. Yes, SLOB is very useful for more than physical I/O! Good job, Bertrand!

These are good studies and good posts!

Also, one can refer to MOS 1585184.1 for more information on Control Groups and a helpful script to configure CGROUPS.

The following links will take you to Nikolay and Bertrand’s writings on the topic:

http://manchev.org/2014/03/processor-group-integration-in-oracle-database-12c/

https://bdrouvot.wordpress.com/2015/01/07/measure-the-impact-of-remote-versus-local-numa-node-access-thanks-to-processor_group_name/

 


Filed under: oracle

"2015: Year Of The Enterprise Selfie And 9 Other Predictions For Business Technology" by Michael Hickens

Linda Fishman Hoyle - Wed, 2015-01-07 13:29

We ponder the posts in late December that list the highs and lows of the previous 12 months. We read about the most popular food trends, baby names, and music videos of 2014. There are lists of top legal stories, top celebrities, and top tech quotes. We remember the ice bucket challenge, ISIS, Ebola, Ukraine, Robin Williams, Cuba, Sony, Taylor Swift, and Uber—all included on some 2014 list.

And then the crystal ball drops in Times Square on New Year’s Eve and we’re flung into 2015 on a short night’s sleep. Come January 1 every news service leads with a list of predictions for the coming year.

Our own Michael Hickins was out in front of the reviewers and the predictors and published a great OracleVoice piece for the Forbes audience on December 29. He says, “The annual ritual of predicting the future of technology is as humbling as it is fun.”

You can tell Hickins had fun writing the article. His ten predictions are snappy, smart, and useful.

Most Recent

Jonathan Lewis - Wed, 2015-01-07 12:21

There’s a thread on the OTN database forum at present asking for advice on optimising a query that’s trying to find “the most recent price” for a transaction given that each transaction is for a stock item on a given date, and each item has a history of prices where each historic price has an effective start date. This means the price for a transaction is the price as at the most recent date prior to the transaction date.

There is an absolutely standard way of expressing “the most recent occurrence” in SQL. Assume we have a table of (item_code, effective_date, price) with the obvious primary key of (item_code, effective_date); then a requirement to find “the most recent price for item XXXX as at 25th Dec 2014” would give us code like the following (note – all the examples in this note were run against Oracle 11.2.0.4):


select  *
from    prices  pri1
where   item_code = 'XXXX'
and     effective_date = (
                select  max(effective_date)
                from    prices  pri2
                where   pri2.item_code = 'XXXX'
                and     pri2.effective_date <= date'2014-12-25'
        )
/
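In case you want to recreate the example, here is a minimal sketch of the table I am assuming (the column datatypes are my guess; only the primary key, which matches the PRI_PK index in the plans below, matters for the discussion):

create table prices (
        item_code       varchar2(10)    not null,
        effective_date  date            not null,
        price           number(10,2)    not null,
        constraint pri_pk primary key (item_code, effective_date)
);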

The ideal execution plan that we should expect to see for this query is as follows (with a small variation if you had created the prices table as an index-organized table – which would probably be sensible in many cases):


-----------------------------------------------------------------------------------------
| Id  | Operation                      | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |        |     1 |    52 |     2   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID   | PRICES |     1 |    52 |     2   (0)| 00:00:01 |
|*  2 |   INDEX UNIQUE SCAN            | PRI_PK |     1 |       |     1   (0)| 00:00:01 |
|   3 |    SORT AGGREGATE              |        |     1 |    32 |            |          |
|   4 |     FIRST ROW                  |        |     1 |    32 |     2   (0)| 00:00:01 |
|*  5 |      INDEX RANGE SCAN (MIN/MAX)| PRI_PK |     1 |    32 |     2   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("ITEM_CODE"='XXXX' AND "EFFECTIVE_DATE"= (SELECT
              MAX("EFFECTIVE_DATE") FROM "PRICES" "PRI2" WHERE
              "PRI2"."EFFECTIVE_DATE"<=TO_DATE(' 2014-12-25 00:00:00', 'syyyy-mm-dd hh24:mi:ss')
              AND "PRI2"."ITEM_CODE"='XXXX'))

   5 - access("PRI2"."ITEM_CODE"='XXXX' AND "PRI2"."EFFECTIVE_DATE"<=
             TO_DATE('2014-12-25 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))

As you can see, this plan is using the “driving subquery” approach – the order of operation is 5, 4, 3, 2, 1, 0: we do an index min/max range scan in line 5 to find the maximum effective date for the item, then pass that up through the (essentially redundant) First Row and Sort Aggregate operations to use as an input to the index unique scan at operation 2 which passes the rowid up to operation 1 to find the specific row. In my case this was 2 consistent gets for the range scan, 2 more for the unique scan, and one for the table access.

You might point out that my example uses the item_code ‘XXXX’ twice, once in the main query, once in the subquery; and you might decide that this was in very poor taste since we should clearly be using a correlated subquery – the correlating predicate ought to be: pri2.item_code = pri1.item_code.
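Here’s a minimal sketch of the correlated version (only the subquery’s predicate changes):

select  *
from    prices  pri1
where   item_code = 'XXXX'
and     effective_date = (
                select  max(pri2.effective_date)
                from    prices  pri2
                where   pri2.item_code = pri1.item_code
                and     pri2.effective_date <= date'2014-12-25'
        )
/

And here’s the execution plan I got when I made that change: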


----------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |     1 |    78 |     3   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |         |     1 |    78 |     3   (0)| 00:00:01 |
|   2 |   NESTED LOOPS               |         |     1 |    78 |     3   (0)| 00:00:01 |
|   3 |    VIEW                      | VW_SQ_1 |     1 |    26 |     2   (0)| 00:00:01 |
|*  4 |     FILTER                   |         |       |       |            |          |
|   5 |      HASH GROUP BY           |         |     1 |    32 |     2   (0)| 00:00:01 |
|*  6 |       INDEX RANGE SCAN       | PRI_PK  |     1 |    32 |     2   (0)| 00:00:01 |
|*  7 |    INDEX UNIQUE SCAN         | PRI_PK  |     1 |       |     0   (0)| 00:00:01 |
|   8 |   TABLE ACCESS BY INDEX ROWID| PRICES  |     1 |    52 |     1   (0)| 00:00:01 |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   4 - filter("PRI2"."ITEM_CODE"='XXXX')
   6 - access("PRI2"."ITEM_CODE"='XXXX' AND "PRI2"."EFFECTIVE_DATE"<=
              TO_DATE('2014-12-25 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   7 - access("ITEM_CODE"='XXXX' AND "EFFECTIVE_DATE"="MAX(EFFECTIVE_DATE)")

The plan changes dramatically: the optimizer has unnested the subquery. In my case this didn’t make any difference to the overall performance as my data set was small, I had only one or two prices per item code, and the query was very basic; but in most other cases the change could be catastrophic.

The Problem Query

The requirement on OTN had a stock transactions (xo_stock_trans) table and a prices (xo_prices) table, and the OP had supplied some code to create and populate these tables with 6.4 million and 4.5 million rows respectively. Unfortunately the xo_prices table didn’t have a suitable unique constraint on it and ended up with lots of items having multiple prices for the same date.  The OP had created a function to return a price for an item given a driving date and price_type, and had a query that called that function three times per row (once for each of three price types); but this did not perform very well and the OP wanted to know if there was a way of addressing the requirement efficiently using pure SQL; (s)he had already tried the following:


select tr.item, tr.trans_date, tr.quantity
    , pr.gross_price
    , pr.net_price
    , pr.special_price
from xo_stock_trans tr
join xo_prices pr on pr.item = tr.item
                and pr.price_date = (select max(pr2.price_date)
                                     from xo_prices pr2
                                     where pr2.item = pr.item
                                       and pr2.price_date <= tr.trans_date
                                     )
where tr.trans_date between '01-AUG-2014' and '31-AUG-2014';  

That was SO close – it’s clearly implementing the right sort of strategy – but it didn’t perform well, so let’s check the execution plan:

------------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name               | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                    |     1 |    70 |       |   168M(100)|234:06:13 |
|   1 |  NESTED LOOPS                 |                    |     1 |    70 |       |   168M(100)|234:06:13 |
|   2 |   NESTED LOOPS                |                    |     9 |    70 |       |   168M(100)|234:06:13 |
|   3 |    NESTED LOOPS               |                    |     9 |   450 |       |   168M(100)|234:06:13 |
|   4 |     VIEW                      | VW_SQ_1            |   286 | 10010 |       |   168M(100)|234:06:11 |
|   5 |      HASH GROUP BY            |                    |   286 |  7722 |       |   168M(100)|234:06:11 |
|   6 |       MERGE JOIN              |                    |   456G|    11T|       |  9153K(100)| 12:42:50 |
|   7 |        SORT JOIN              |                    |   202K|  2960K|       |   548   (2)| 00:00:03 |
|*  8 |         INDEX RANGE SCAN      | XO_STOCK_TRANS_IX2 |   202K|  2960K|       |   548   (2)| 00:00:03 |
|*  9 |        SORT JOIN              |                    |  4045K|    46M|   154M| 19043   (6)| 00:01:36 |
|* 10 |         INDEX FAST FULL SCAN  | XO_PRICES_IX1      |  4045K|    46M|       |  1936  (10)| 00:00:10 |
|* 11 |     TABLE ACCESS BY USER ROWID| XO_STOCK_TRANS     |     1 |    15 |       |     1   (0)| 00:00:01 |
|* 12 |    INDEX RANGE SCAN           | XO_PRICES_IX1      |     1 |       |       |     2   (0)| 00:00:01 |
|  13 |   TABLE ACCESS BY INDEX ROWID | XO_PRICES          |     1 |    20 |       |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   8 - access("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "TR"."TRANS_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   9 - access(INTERNAL_FUNCTION("PR2"."PRICE_DATE")<=INTERNAL_FUNCTION("TR"."TRANS_DATE"))
       filter(INTERNAL_FUNCTION("PR2"."PRICE_DATE")<=INTERNAL_FUNCTION("TR"."TRANS_DATE"))
  10 - filter("PR2"."PRICE_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
  11 - filter("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "TR"."TRANS_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
  12 - access("ITEM_1"="PR"."ITEM" AND "PR"."PRICE_DATE"="MAX(PR2.PRICE_DATE)")
       filter("PR"."ITEM"="TR"."ITEM")

The query was limited to August 2014, which was about 198,000 rows in my table, so we might expect some signs of a brute-force approach (tablescans and hash joins rather than indexes and nested loops) – but what we get ends up with a high-precision approach with a very bad cardinality estimate after a brute-force unnesting of the “max(price_date)” subquery. The unnesting has done a range scan over 200,000 stock_trans rows, and an index fast full scan on 4.5 million prices to do a merge join and hash aggregation to find the maximum price_date for each target row in the xo_stock_trans table. (See my earlier posting on table duplication for a variation and explanation of what Oracle has done here). This step is a lot of work, but the optimizer thinks it’s going to produce only 286 rows in the aggregated result, so the next steps in the plan are indexed nested loops – which actually operate 198,000 times.
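For reference, the two indexes implied by these plans are presumably something like the following sketch (an assumption on my part – the OP’s DDL isn’t reproduced here):

create index xo_stock_trans_ix2 on xo_stock_trans(trans_date);
create index xo_prices_ix1      on xo_prices(item, price_date);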

With the clue from my initial description, we need to aim for a strategy where Oracle doesn’t unnest that subquery – so let’s experiment with a basic /*+ no_unnest */ hint in the subquery and see what happens. As a sketch, the only change is inside the subquery:
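select tr.item, tr.trans_date, tr.quantity
    , pr.gross_price
    , pr.net_price
    , pr.special_price
from xo_stock_trans tr
join xo_prices pr on pr.item = tr.item
                and pr.price_date = (select /*+ no_unnest */ max(pr2.price_date)
                                     from xo_prices pr2
                                     where pr2.item = pr.item
                                       and pr2.price_date <= tr.trans_date
                                    )
where tr.trans_date between '01-AUG-2014' and '31-AUG-2014';

Here’s the resulting execution plan: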


--------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name           | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |                |   527 | 18445 |       |  6602M  (1)|999:59:59 |
|*  1 |  FILTER                       |                |       |       |       |            |          |
|*  2 |   HASH JOIN                   |                |  3423M|   111G|  5336K| 76973  (90)| 00:06:25 |
|*  3 |    TABLE ACCESS FULL          | XO_STOCK_TRANS |   202K|  2960K|       |  2531  (13)| 00:00:13 |
|   4 |    TABLE ACCESS FULL          | XO_PRICES      |  4571K|    87M|       |  2275  (11)| 00:00:12 |
|   5 |   SORT AGGREGATE              |                |     1 |    12 |       |            |          |
|   6 |    FIRST ROW                  |                |     1 |    12 |       |     3   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN (MIN/MAX)| XO_PRICES_IX1  |     1 |    12 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("PR"."PRICE_DATE"= (SELECT /*+ NO_UNNEST */ MAX("PR2"."PRICE_DATE") FROM
              "XO_PRICES" "PR2" WHERE "PR2"."PRICE_DATE"<=:B1 AND "PR2"."ITEM"=:B2))
   2 - access("PR"."ITEM"="TR"."ITEM")
   3 - filter("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND
              "TR"."TRANS_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
   7 - access("PR2"."ITEM"=:B1 AND "PR2"."PRICE_DATE"<=:B2)

The subquery now survives, and we can see a min/max range scan in the plan – but the subquery is a filter() subquery and is applied to the result of joining the 200,000 transactions to every price that applies for the item in each transaction. The optimizer thinks that this join will produce roughly 3.4 million rows but in fact with the sample data set (which had many prices per item) the join resulted in 4.4 billion rows. The min/max subquery is as efficient as it can be, but it’s running far too often; ideally we would like it to run at most once per transaction, so why is it running late? We could try adding the /*+ push_subq */ hint to the subquery, but if we do, the plan doesn’t change.

Our earlier, rapid “most recent occurrence” query revolved around accessing the prices table by index while “pre-querying” for the date using a min/max subquery that knew the relevant item code already. In this case, though, we’re doing a full tablescan of the xo_prices table so the method doesn’t apply. So let’s manipulate the query to force an indexed access path for the join to the xo_prices table by adding the hints /*+ leading(tr pr) use_nl(pr) index(pr) */ to the main body of the query. This is the resulting plan:


--------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                |   527 | 18445 |  6614M  (1)|999:59:59 |
|   1 |  NESTED LOOPS                   |                |  3413K|   113M|    11M  (2)| 16:29:13 |
|   2 |   NESTED LOOPS                  |                |  3413K|   113M|    11M  (2)| 16:29:13 |
|*  3 |    TABLE ACCESS FULL            | XO_STOCK_TRANS |   202K|  2960K|  2531  (13)| 00:00:13 |
|*  4 |    INDEX RANGE SCAN             | XO_PRICES_IX1  |    16 |       |    52   (2)| 00:00:01 |
|   5 |     SORT AGGREGATE              |                |     1 |    12 |            |          |
|   6 |      FIRST ROW                  |                |     1 |    12 |     3   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN (MIN/MAX)| XO_PRICES_IX1  |     1 |    12 |     3   (0)| 00:00:01 |
|   8 |   TABLE ACCESS BY INDEX ROWID   | XO_PRICES      |    17 |   340 |    59   (2)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "TR"."TRANS_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
   4 - access("PR"."ITEM"="TR"."ITEM")
       filter("PR"."PRICE_DATE"= (SELECT /*+ NO_UNNEST */ MAX("PR2"."PRICE_DATE") FROM
              "XO_PRICES" "PR2" WHERE "PR2"."PRICE_DATE"<=:B1 AND "PR2"."ITEM"=:B2))
   7 - access("PR2"."ITEM"=:B1 AND "PR2"."PRICE_DATE"<=:B2)

We’re nearly there: the shape of the execution plan – lines 4 to 7, at any rate – matches the shape of the very simple example at the start of this article, and we seem to be driving from the min/max subquery at line 7. Unfortunately, when we look at the predicate section of line 4 of the plan we can see that the subquery is still a filter() subquery, not an access() subquery – it’s (nominally) being performed for every index entry in the range scan of the xo_prices index that we do for each xo_stock_trans row. What we want to see is an access() subquery – and checking the SQL we can see how to get there: the subquery currently correlates the item back to the xo_prices table, not to the xo_stock_trans table, so let’s correct that correlation. Here’s our final query (though not formatted to my preference) with execution plan:


select /*+ leading(tr pr) use_nl(pr) index(pr) */  -- hint added
       tr.item, tr.trans_date, tr.quantity
    , pr.gross_price
    , pr.net_price
    , pr.special_price
from xo_stock_trans tr
join xo_prices pr on pr.item = tr.item
                and pr.price_date = (select /*+ no_unnest */  -- hint added
                                         max(pr2.price_date)
                                     from xo_prices pr2
                                     where pr2.item = tr.item  -- correlate to tr, not pr
                                       and pr2.price_date <= tr.trans_date
                                     )
where tr.trans_date between '01-AUG-2014' and '31-AUG-2014'
;

--------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name           | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |                |  3423M|   111G|  1824K  (1)| 02:32:02 |
|   1 |  NESTED LOOPS                   |                |  3413K|   113M|  1824K  (1)| 02:32:02 |
|   2 |   NESTED LOOPS                  |                |  3413K|   113M|  1824K  (1)| 02:32:02 |
|*  3 |    TABLE ACCESS FULL            | XO_STOCK_TRANS |   202K|  2960K|  2531  (13)| 00:00:13 |
|*  4 |    INDEX RANGE SCAN             | XO_PRICES_IX1  |    16 |       |     2   (0)| 00:00:01 |
|   5 |     SORT AGGREGATE              |                |     1 |    12 |            |          |
|   6 |      FIRST ROW                  |                |     1 |    12 |     3   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN (MIN/MAX)| XO_PRICES_IX1  |     1 |    12 |     3   (0)| 00:00:01 |
|   8 |   TABLE ACCESS BY INDEX ROWID   | XO_PRICES      |    17 |   340 |     9   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("TR"."TRANS_DATE">=TO_DATE(' 2014-08-01 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss') AND "TR"."TRANS_DATE"<=TO_DATE(' 2014-08-31 00:00:00', 'syyyy-mm-dd
              hh24:mi:ss'))
   4 - access("PR"."ITEM"="TR"."ITEM" AND "PR"."PRICE_DATE"= (SELECT /*+ NO_UNNEST */
              MAX("PR2"."PRICE_DATE") FROM "XO_PRICES" "PR2" WHERE "PR2"."PRICE_DATE"<=:B1 AND
              "PR2"."ITEM"=:B2))
   7 - access("PR2"."ITEM"=:B1 AND "PR2"."PRICE_DATE"<=:B2)

Finally we can see (from the predicate for line 4) that we run the subquery at most once for each row from xo_stock_trans, and we use the result of each subquery execution to drive the index range scan to pick up the matching rows from xo_prices with no further filtering. The order of operation is: 3, 7, 6, 5, 4, 2, 8, 1, 0.

The only thing we can do now is decide whether the strategy for indexing into the xo_prices table 200,000 times (for our 30 day requirement) is better than a brute force approach that does a massive join and sort, or a data duplication approach that puts a “price end date” on each xo_prices row to avoid the need to check all prices for an item to find the appropriate one. Ultimately the choice may depend on trading off the human development resources against the machine run-time resources, with an eye on the number of times the query runs and the size of the date range typically involved.
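To illustrate the data duplication approach, here’s a sketch (assuming we add and maintain a price_end_date column on xo_prices, set to the day before the item’s next price takes effect):

select tr.item, tr.trans_date, tr.quantity
    , pr.gross_price
    , pr.net_price
    , pr.special_price
from xo_stock_trans tr
join xo_prices pr on pr.item = tr.item
                and tr.trans_date between pr.price_date and pr.price_end_date
where tr.trans_date between date'2014-08-01' and date'2014-08-31';

This eliminates the subquery entirely, at the cost of maintaining the extra column every time a new price arrives.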

Footnote:

There’s plenty more I could say about this query and how to handle it – but there are too many questions about the correctness of the data definition and content to make it worth pursuing in detail. You will note, however, that the various execution plans which logically should be returning the same data report dramatically different cardinalities for the final row source; if nothing else this should warn you that maybe the optimizer is going to have trouble producing a good plan because its model produced a bad cardinality estimate at some point in a series of transformations.

In fact, when I first saw this query I converted it to traditional Oracle syntax (anticipating, incorrectly, a need to do something messy with hints), corrected the subquery correlation to the “obvious” choice, and put in a cardinality hint /*+ cardinality(tr 100) */ for the xo_stock_trans table, and got the execution plan that I’ve shown as the final plan above.

Tactically the correlation column is the really important bit – if that can be set up suitably we just have to work around the optimizer’s arithmetic assumptions.

 

 


Securing Big Data - Part 2 - understanding the data required to secure it

Steve Jones - Wed, 2015-01-07 09:00
In the first part of Securing Big Data I talked about the two different types of security: the traditional IT and ACL security that needs to be done to match traditional solutions with an RDBMS. But that is pretty much where those systems stop in terms of security, which means they don't address the real threats out there, which are to do with cyber attacks and social engineering. An ACL is only
Categories: Fusion Middleware

Fit for Work: A Team Experience of Wearable Technology

Usable Apps - Wed, 2015-01-07 04:08

By Sandra Lee (@sandralee0415)

What happens when co-workers try out wearable technology? Misha Vaughan (@mishavaughan), Director of Oracle Applications Communications and Outreach, explored just that.

“Instead of a general perspective, I wanted the team to have a personal experience of wearable technology”, said Misha. So, she gave each member of her team a Fitbit Flex activity tracker to use. The exercise proved insightful, with team members providing useful personal and enterprise-related feedback on device usage.

Fitbit Flex Awaits

Your Fitbit Flex awaits [Photo: Sandra Lee]

Team Dynamic and Initial Reactions

It was a free choice for team members to wear the Fitbit device or not. Those that did were inspired and enjoyed comparing activities and goals. Shannon Whiteman, Communication Operations and Online Training Manager, loved the competitive aspect. “If I saw someone had 100 more steps than I did, I’d take the stairs and walk an extra 101 steps to beat them.” Kathy Miedema, Senior Market Research Analyst, noted that the Fitbit “really motivated and validated my personal fitness activity”.

Fitbit Dashboard for Ultan O'Broin

Example of recorded activity: Ultan O’Broin’s (@usableapps) Fitbit dashboard

The exercise also provided observations on team dynamics in the workplace. Some chose not to wear the device whether for personal reasons, set-up issues, or lack of time; a reminder that although fun to try, such devices are not for everyone, and that’s OK.

The Fashion Perspective

Sarahi Mireles, User Experience Developer in Mexico, tried the Fitbit, but it didn’t fit her lifestyle: “the interest is there [for wearables in general], but the design attraction is lacking.” Sarahi feels the ideal fitness tracker for her world is one with interchangeable looks, so she can wear it to work and to dinner. This typical user need is where fashion designers like Tory Burch offer value to technology developers, in this case partnering with Fitbit to make devices look more like beautiful bracelets and necklaces.

Tory Burch bracelet for Fitbit

Tory Burch for Fitbit metal-hinged bracelet

The Enterprise Employee Investment

Fitness plays a role in work/life balance, and health, happiness, and productivity are intrinsically linked. Overall, wellness contributes to the bottom line in a big way. Oracle is focused on such solutions too, researching user experiences that best engage, promote and support employee wellness.

Oracle HCM Cloud Wellness Page Prototype

Oracle HCM Cloud: Employee Wellness page prototype

Externally, at HCM World for example, Oracle offered analysts and customers complimentary Fitbit Zip devices for a voluntary wellness competition, with the winner receiving a donation to the American Cancer Society.

Karen Scipi (@karenscipi), Senior Usability Engineer, reflected that companies like Oracle, in facilitating the use of the fitness device, are placing importance on employee health and fitness as an “employee investment.” Healthier individuals are happier and therefore more productive employees.

Jeremy Ashley (@jrwashley), Vice President of Applications User Experience, already leads his team in embracing wellness within the workplace, participating in the American Heart Association Bay Walk, for example. He explained how encouraging and measuring activity during the working day, whether through walking meetings or using activity trackers, is a meaningful way to identify with the Oracle Applications Cloud User Experience strategy too.  

Jeremy described how sensors in activity trackers (along with smart watches, heads-up displays, smart phones, and beacons) are part of the Internet of Things: the ubiquitous connectivity of devices and the Cloud that shapes the daily experiences of today's enterprise users.

Your Data and the Enterprise Bottom Line

From the business perspective, employee activity data gathered from corporate wellness programs could lead to negotiated discounts and rewards for users from health care companies, for example; one possible incentive to enterprise adoption. Gamification, the encouraging of team members to engage and interact in collaborative and productive ways in work using challenges and competitions, is another strategy for workplace wellness programs uptake.

Ultan O’Broin, Director of User Experience, who travels globally out of Ireland, noted that although he personally hasn’t experienced any negative reactions to wearable technology, the issue of privacy of the data gathered, especially in Europe, is a huge concern.  

Data accuracy, letting employees voluntarily opt in or out of fitness and wellness programs, privacy issues, and what to do with that data once it's collected all need to be addressed to reassure users and customers alike. Having HR involved in tracking, storing, and using employee activity data is an enterprise dimension being explored.

User Experience Trends

Smart watch usage is on the rise, combining the ability to unobtrusively track activity with other glanceable UI capabilities. Analysts now predict a shift in usage patterns as smart watches begin to replace fitness bands, but time will tell in this fast-moving space.

Regardless of your wearable device of choice, and the fashion, personal privacy, employee data, and corporate deployment considerations we’ve explored, wearable technology and wellness programs are enterprise happenings that are here to stay. It’s time to get on board and think about how your business can benefit.

Perhaps your team could follow Misha’s great initiative and explore wearable technology user experience for yourselves? Let us know in the comments!

You can read more about Oracle Applications User Experience team’s innovation and exploration of wearable technology on the Usable Apps Storify social story.

Series of SaaS Implementation Workshops for EMEA Partners

We are pleased to announce a series of different SaaS/Cloud Implementation Workshops. Oracle will organize several Workshops between January and June 2015. It will be possible to join the...

We share our skills to maximize your revenue!
Categories: DBA Blogs

How Effective is Blogging for Software Developer Outreach?

Usable Apps - Wed, 2015-01-07 02:05

By Joe Dumas, Oracle Applications User Experience

When you blog, are you reaching the right audience? Is blogging an effective way to spread your message? These are some of the questions that the Oracle Applications User Experience (OAUX) Communications and Outreach team asked me to help answer.

The team made the Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook available for free on the web. They announced its availability on the Usable Apps blog.

Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook

Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook in use.

The eBook contains user experience design guidance and examples for building the Oracle Applications Cloud simplified UI. The target audience was developers building applications with Oracle ADF in the Oracle Java Cloud Service. To download the eBook (in a tablet-friendly format of choice), developers registered their name and email address on the eBook landing page.

To gather the information for analysis, I created a short online survey and, using that database of thousands of email addresses, invited those registered users to complete it, without either obligation or incentive.

Of course, developers might have heard about the eBook in other ways, such as attending an OAUX workshop or visiting the Usable Apps website.

However, when I tabulated the survey results, more than half of the respondents had found out about the eBook from the blog.

Furthermore, I found that of those who used the book extensively, some 70% said they had first heard about it from the blog.

I also found that the survey respondents were mostly the very people for whom the book was intended. 70% of respondents made user interface design decisions for applications development teams, and all either worked for Oracle Partners or were applications development consultants for Oracle products.

In a further blog article, I’ll explore what parts of the eBook developers found most useful, along with other insights. But, as a taster, I can tell you now that we received positive comments again and again from developers who were “thrilled” with the content.

In these days of pervasive social media and other communications channels and a debate about the effectiveness of different online platforms, these findings show that blogs are indeed an effective way to reach out to a target audience, especially one committed to finding ways to work faster and smarter.

Do you communicate with developers or other information technology professionals using a blog? How often do you blog, and why? Share your experience in the comments.

For more eBook goodness from OAUX, download the Oracle Applications Cloud UX Strategy and Trends eBook too. More details are on the Voice of User Experience (VoX) blog.

Change Subversion password in JDeveloper

Darwin IT - Wed, 2015-01-07 01:50
On most projects I used TortoiseSVN for versioning with Subversion. It gives me a feeling of being more in control of my working copy. In the past I used JDeveloper's built-in versioning tool on a project, due to a lack of administrator rights for installing Tortoise. And although you need to give it some time to get used to it, the versioning is actually pretty good.

On my current project I gave it a go again, since the customer is used to do versioning from inside the IDE (be it Eclipse or JDeveloper).

One thing that I remember being hard to do was changing your Subversion password. I remember that I could not find it in JDeveloper 11g. And in JDeveloper 12c it was hard to find as well. So I started getting afraid that I'd have to hack the preference or property files. But it turns out that you can do it pretty easily from the tool, albeit in a less obvious place.

In JDeveloper, go to the Team menu and choose Versions.

It will open the Versions Browser or Navigator:
You can right click on the particular connection (in my example the blurred one):

And here you can change your password.
You can also add or remove repository connections, and import and export them. That's handy because SQL Developer has a Subversion client as well – of course, since it's based on the same framework.


Some versioning features I'm happy about in JDeveloper:
  • The pending changes tool is neat: it gives a nice overview of pending changes, divided over tabs for incoming (updates from coworkers), outgoing (your own changes), and candidates (added files that are not versioned yet)
  • It's integrated in the project navigator in JDeveloper, where you can do changes on file, project, and application level.
I'm less happy about or getting used to:
  • Versioning functionality is pretty much scattered over the IDE in different context menus. Not only because of the different versioning levels, but also because the preferences are in different places: see the Versions Navigator above, whereas I tend to look for preferences in the Preferences menu.
  • There are folders that you don't want to have versioned, for example: files generated at build like compiled classes, processes and jar/war/ear files. You want to be able to set the 'svn:ignore' property for those. But those folders and files tend to be invisible in the project navigator. So you have to add the parent folders of those files and folders as a resource, set the property, commit and remove them as a resource again. 
  • When I want to set keywords (like Id, author, etc.) for svn keyword replacement, JDeveloper doesn't know of the possible values. 
But for most jobs the SVN integration is impressively complete.

Happy versioning in JDeveloper!

Test locally first, then deploy to the cloud.

Eric Rajkovic - Tue, 2015-01-06 22:12
As recommended by Cheuk Chau on Medium, I read ‘How to Develop a Daily Writing Habit’ – I’ll try to put it into practice here with my second piece of advice, which should have been my #1:
Run a local copy of your application first, before testing it on your cloud instance.
While it’s simple to run directly on the Cloud, and we have a lot of samples available to get started (many ship as part of the SDK), working locally first is the best way to be ready when trouble comes your way.
One challenge you will face: there are many ways to get code deployed – you need to find the best option for you and your organization. Here are a few:
  1. JDeveloper workspace, using 11.1.1.7.1 build.
  2. ant build.xml (part of the SDK samples)
  3. maven pom.xml (part of the SDK samples)
  4. java cli (part of the SDK samples)
  5. the web console
The other important step is to check the output of the deployment and minimize the number of errors and warnings you see.
In one case, I started looking at an error I saw first in my Cloud instance, assuming it was a new issue. After a few trials, I finally ran another test with my local setup and found that I was getting the same error. I had made a mistake while editing some of the source code for my application and had not caught it with local testing, which would have been cheaper for me.

For my next post, my advice is going to take a different form: what you should not try. The “do not” advice is as valuable as the “do” advice – in some cases you have to learn from your own mistakes, but sometimes it’s cheaper to learn from someone else’s.

My Oracle Support Essentials Webcasts

Chris Warticki - Tue, 2015-01-06 17:58

My Oracle Support Essentials Webcasts

January 2015



Oracle Support delivers various seminars on Support Policies, Processes and Proactive Tools via live web conference and recorded sessions. Available to you as an Oracle Support user free of charge, these seminars, available in multiple languages and in different time zones, help ensure you optimize the value offered by Oracle Support.

Click on the corresponding session link to register for an upcoming seminar. Bookmark Note 553747.1 for your quick access to the latest live My Oracle Support Essentials webcasts schedule.

LIVE WEBCASTS

Webcast | Language | US Eastern | Central Europe | Singapore
Customer User Administration | English | Jan 19, 09:00 PM | Jan 20, 03:00 AM | Jan 20, 10:00 AM
Customer User Administration | English | Jan 20, 04:00 AM | Jan 20, 10:00 AM | Jan 20, 05:00 PM
Customer User Administration | English | Jan 20, 11:00 AM | Jan 20, 05:00 PM | Jan 21, 12:00 AM
Customer User Administration | Spanish | Jan 20, 10:00 AM | Jan 20, 04:00 PM | Jan 20, 11:00 PM
Finding Answers in My Oracle Support | English | Jan 20, 09:00 PM | Jan 21, 03:00 AM | Jan 21, 10:00 AM
Finding Answers in My Oracle Support | English | Jan 21, 04:00 AM | Jan 21, 10:00 AM | Jan 21, 05:00 PM
Finding Answers in My Oracle Support | English | Jan 21, 01:00 PM | Jan 21, 07:00 PM | Jan 22, 02:00 AM
Finding Answers in My Oracle Support | French | Jan 21, 10:00 AM | Jan 21, 04:00 PM | Jan 21, 11:00 PM
Finding Answers in My Oracle Support | Portuguese | Jan 21, 07:30 AM | Jan 21, 01:30 PM | Jan 21, 08:30 PM
Finding Answers in My Oracle Support | Spanish | Jan 21, 10:00 AM | Jan 21, 04:00 PM | Jan 21, 11:00 PM
Introduction to Premier Support | English | Jan 13, 09:00 PM | Jan 14, 03:00 AM | Jan 14, 10:00 AM
Introduction to Premier Support | English | Jan 14, 04:00 AM | Jan 14, 10:00 AM | Jan 14, 05:00 PM
Introduction to Premier Support | English | Jan 14, 01:00 PM | Jan 14, 07:00 PM | Jan 15, 02:00 AM
Introduction to Premier Support | Spanish | Jan 14, 10:00 AM | Jan 14, 04:00 PM | Jan 14, 11:00 PM
My Oracle Support Basics | English | Jan 14, 09:00 PM | Jan 15, 03:00 AM | Jan 15, 10:00 AM
My Oracle Support Basics | English | Jan 15, 04:00 AM | Jan 15, 10:00 AM | Jan 15, 05:00 PM
My Oracle Support Basics | English | Jan 15, 01:00 PM | Jan 15, 07:00 PM | Jan 16, 02:00 AM
My Oracle Support Basics | French | Jan 15, 10:00 AM | Jan 15, 04:00 PM | Jan 15, 11:00 PM
My Oracle Support Basics | Portuguese | Jan 15, 07:30 AM | Jan 15, 01:30 PM | Jan 15, 08:30 PM
My Oracle Support Basics | Spanish | Jan 15, 10:00 AM | Jan 15, 04:00 PM | Jan 15, 11:00 PM
Oracle Cloud Support | English | Jan 12, 09:00 PM | Jan 13, 03:00 AM | Jan 13, 10:00 AM
Oracle Cloud Support | English | Jan 13, 04:00 AM | Jan 13, 10:00 AM | Jan 13, 05:00 PM
Oracle Cloud Support | English | Jan 13, 01:00 PM | Jan 13, 07:00 PM | Jan 14, 02:00 AM
Oracle Cloud Support | Spanish | Jan 13, 10:00 AM | Jan 13, 04:00 PM | Jan 13, 11:00 PM
Service Request Flow and Best Practices | English | Jan 26, 09:00 PM | Jan 27, 03:00 AM | Jan 27, 10:00 AM
Service Request Flow and Best Practices | English | Jan 27, 04:00 AM | Jan 27, 10:00 AM | Jan 27, 05:00 PM
Service Request Flow and Best Practices | English | Jan 27, 01:00 PM | Jan 27, 07:00 PM | Jan 28, 02:00 AM
Service Request Flow and Best Practices | French | Jan 27, 10:00 AM | Jan 27, 04:00 PM | Jan 27, 11:00 PM
Service Request Flow and Best Practices | Portuguese | Jan 27, 07:30 AM | Jan 27, 01:30 PM | Jan 27, 08:30 PM
Service Request Flow and Best Practices | Spanish | Jan 27, 10:00 AM | Jan 27, 04:00 PM | Jan 27, 11:00 PM
Support Configuration Based Services Essentials | English | Jan 27, 09:00 PM | Jan 28, 03:00 AM | Jan 28, 10:00 AM
Support Configuration Based Services Essentials | English | Jan 28, 04:00 AM | Jan 28, 10:00 AM | Jan 28, 05:00 PM
Support Configuration Based Services Essentials | English | Jan 28, 01:00 PM | Jan 28, 07:00 PM | Jan 29, 02:00 AM
Support Configuration Based Services Essentials | Spanish | Jan 28, 10:00 AM | Jan 28, 04:00 PM | Jan 28, 11:00 PM
Understanding Support Identifier Groups | English | Jan 20, 12:00 AM | Jan 20, 06:00 AM | Jan 20, 01:00 PM
Understanding Support Identifier Groups | English | Jan 20, 07:00 AM | Jan 20, 01:00 PM | Jan 20, 08:00 PM
Understanding Support Identifier Groups | English | Jan 20, 01:00 PM | Jan 20, 07:00 PM | Jan 21, 02:00 AM
Understanding Support Identifier Groups | Spanish | Jan 20, 12:00 PM | Jan 20, 06:00 PM | Jan 21, 01:00 AM
Using the My Oracle Support Community Platform | English | Jan 21, 09:00 PM | Jan 22, 03:00 AM | Jan 22, 10:00 AM
Using the My Oracle Support Community Platform | English | Jan 22, 04:00 AM | Jan 22, 10:00 AM | Jan 22, 05:00 PM
Using the My Oracle Support Community Platform | English | Jan 22, 01:00 PM | Jan 22, 07:00 PM | Jan 23, 02:00 AM
Using the My Oracle Support Community Platform | Spanish | Jan 22, 10:00 AM | Jan 22, 04:00 PM | Jan 22, 11:00 PM
You may visit My Oracle Support for schedules of other live seminars. To see current local times around the world, visit world clock.

RECORDED TRAININGS

The following recorded My Oracle Support Essentials Webcasts can be viewed on demand. You may refer to Note 603505.1 for other available topics and recordings.


All of the following recordings are in English:
  • Customer User Administration
  • Finding Answers in My Oracle Support
  • Hardware Support Best Practices
  • My Oracle Support Basics
  • My Oracle Support New Features
  • Oracle Cloud Support
  • Service Request Flow and Best Practices
  • Support Configuration Based Services Essentials
  • Understanding Support Identifier Groups
  • Using the My Oracle Support Community Platform
If you have further questions, please contact us by submitting a question in Using My Oracle Support.


Generating sample data for your APEX application

Dimitri Gielis - Tue, 2015-01-06 17:30
You need some sample data sometimes too? When I'm showing new concepts at a customer or doing some training, I just want some "random" data. Well, it's not really random data; it's a specific type of data I need depending on the column, and it should be text that is somewhat meaningful and not hard to read like "1RT3HFIY".
When my wife is doing design and lay-out and she needs text, she uses Lorem Ipsum. In fact it's built into the Adobe tools she uses, and the text looks readable (although it isn't). It would be so cool if, for example, SQL Developer had a feature to "populate my table(s) with sample data" (even taking relationships into account).
Before, I used data from all_objects or generated data with dbms_random and a connect by clause for the number of records I wanted, but it wasn't ideal. I also looked at scrambling my data, which is nice because existing relations stay intact, but for me it didn't really work nicely when I needed to scramble a lot of columns. There are some companies offering solutions for generating/scrambling data too, but below I want to share what I'm currently doing.
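As a reminder of that older approach, the dbms_random / connect by method looks roughly like this sketch (the table and its columns are hypothetical):

insert into my_sample_table (id, code, created_date)
select level                                        -- sequential id
     , dbms_random.string('U', 8)                   -- 8 random uppercase characters
     , trunc(sysdate) - dbms_random.value(0, 365)   -- random date in the past year
from   dual
connect by level <= 1000;                           -- number of records wanted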
Go to generatedata.com and enter the definition of your table and which kind of data you want per column.

Once the definition is there you can define how you want to receive the data. I found the SQL tab didn't really work well, so I use CSV as output.
Next, in Oracle SQL Developer, I right click on my table and choose "Import data", then select the CSV. It automatically knows the format etc. and maps it correctly to my table. Hit Next and you have your sample data available :)

You can also load the data straight from the Data Workshop in APEX.


Categories: Development

Career Day – Oracle Database Administrator

Bobby Durrett's DBA Blog - Tue, 2015-01-06 16:10

I will be talking at my daughter’s high school for career day on Monday, explaining my job as an Oracle Database Administrator.  Wish me luck!

The funny thing is that no one understands what Oracle DBAs do, unless they are one or work closely with one.  I have a feeling that my talk is going to fall flat, but if it helps one of the students in any way it will be worth it.

To me the best thing about being an Oracle DBA is that you can do a pretty interesting and technically challenging job and companies that are not technology centric will still hire you to do it. I’ve always been interested in computer technology but have worked in non-technical companies my entire career – mainly a non-profit ministry and a food distribution company. Neither company makes computers or sells software!

My other thought is how available computer technology is to students today.  Oracle, in one of the company’s more brilliant moves, made all of its software available for download so students can try out the very expensive software for free.  Plus all the manuals are available online.  What is it like to grow up as a student interested in computer technology in the age of the internet?  I can’t begin to compare it to my days in the 1980s when I was in high school and college.  Did we even have email?  I guess we must have but I can’t remember using it much.  Today a student who owns a laptop and has an internet connection has a world of technology at their fingertips far beyond what I had at their age.

Hopefully I won’t bore the students to tears talking about being an Oracle DBA. They probably still won’t know what it really is after I’m done. But at least they will know that such a job exists, and maybe that will be helpful to them.

– Bobby

P.S. There were over 100 students there. They were pretty polite, with only a little talking. Here is a picture of me on the left, my daughter in the center, and a coworker who also spoke at the career day on the right.






Categories: DBA Blogs

Updated Native Install/Clustering Whitepapers

Anthony Shorten - Tue, 2015-01-06 15:04

The Implementing Oracle ExaLogic and/or Oracle WebLogic Clustering (Doc Id: 1334558.1) and Native Installation Oracle Utilities Application Framework (Doc Id: 1544969.1) whitepapers have been updated with the latest information and advice from customers and partners.

The updates include:

  • Configuring additional parameters for UTF8 sites
  • Compatibility settings for various versions of Oracle WebLogic
  • Java Flight Control configuration for Java 7.

The whitepapers are available from My Oracle Support.

January 27th: Acorn Paper Products Sales Cloud Reference Forum

Linda Fishman Hoyle - Tue, 2015-01-06 14:42

Join us for an Oracle Sales Cloud Customer Reference Forum on Tuesday, January 27, 2015, with Acorn Paper Products' Jake Weissberg, Director of IT and David Karr, Chief Operating Officer. In this session, Weissberg and Karr will share why Oracle Sales Cloud was the right choice to optimize sales team productivity and effectiveness, while gaining executive visibility to the pipeline. They also will talk about how they were able to streamline their sales-to-order process with Oracle E-Business Suite.

Founded by Jack Bernstein in 1946, Acorn Paper Products Company started by selling job lot (over-run) boxes with five employees in an 11,000-square-foot warehouse. Today, Acorn, which is the end-user distribution arm of parent holding company Oak Paper Products Company, is a fourth-generation, family-owned business with more than 500,000 square feet of warehouse space, operating four specialty product divisions: creative services, janitorial and sanitary products, wine packaging, and agricultural packaging.

You can register now to attend the live Forum on Tuesday, January 27, 2015, at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time and learn more from Acorn Paper Products directly.

Count (*)

Jonathan Lewis - Tue, 2015-01-06 12:04

The old chestnut about comparing speeds of count(*), count(1), count(non_null_column) and count(pk_column) has come up in the OTN database forum (at least) twice in the last couple of months. The standard answer is to point out that they will all execute the same code, and that the corroborating evidence for that claim is that, for a long time, the 10053 trace files have had a rubric reporting: CNT – count(col) to count(*) transformation or, for an even longer time, that the error message file (oraus.msg for the English Language version) has had an error code 10122 which produced (from at least Oracle 8i, if not 7.3):


SQL> execute dbms_output.put_line(sqlerrm(-10122))
ORA-10122: Disable transformation of count(col) to count(*)

But the latest repetition of the question prompted me to check whether a more recent version of Oracle had an even more compelling demonstration, and it does. I extracted the following lines from a 10053 trace file generated by 11.2.0.4 (and I know 10gR2 is similar) in response to selecting count(*), count(1) and count({non-null column}) respectively:


Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "TEST_USER"."SAVED_ASH" "SAVED_ASH"

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(1)" FROM "TEST_USER"."SAVED_ASH" "SAVED_ASH"

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(SAMPLE_ID)" FROM "TEST_USER"."SAVED_ASH" "SAVED_ASH"

As you can see, Oracle has transformed all three select lists into count(*), hiding the transformation behind the original column alias. As an outsider’s proof of what’s going on, I don’t think you could get a more positive indicator than that.
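If you want to check this on your own system, here’s a minimal sketch (t1 is any convenient table; the trace file appears in the session’s trace directory, and the statement has to be hard parsed to be traced):

alter session set events '10053 trace name context forever, level 1';
select count(1) from t1;
alter session set events '10053 trace name context off';

Then search the resulting trace file for the lines starting “Final query after transformations”.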

 


No Discernible Growth in US Higher Ed Online Learning

Michael Feldstein - Tue, 2015-01-06 11:34

By 2015, 25 million post-secondary students in the United States will be taking classes online. And as that happens, the number of students who take classes exclusively on physical campuses will plummet, from 14.4 million in 2010 to just 4.1 million five years later, according to a new forecast released by market research firm Ambient Insight.

- Campus Technology, 2011

On the positive side, Moody’s notes that the U.S. Department of Education projects a 20-percent growth in master’s degrees and a 9-percent growth in associate degrees, opportunities in both online education and new certificate programs, and a rising earnings premium for those with college degrees.

- Chronicle of Higher Ed, 2014

Q.  How likely would it be that this fraction [% students taking online courses] would grow to become a majority of students over the next five years? A [from institutional academic leaders]. Nearly two-thirds responded that this was “Very likely,” with an additional one-quarter calling it “Likely.” [That’s almost 90% combined]

- Grade Change, Babson Survey 2013

More than two-thirds of instructors (68 percent) say their institutions are planning to expand their online offerings, but they are split on whether or not this is a good idea (36 percent positive, 38 percent negative, 26 percent neutral).

- Inside Higher Ed 2014

Still, the [disruptive innovation] theory predicts that, be it steam or online education, existing consumers will ultimately adopt the disruption, and a host of struggling colleges and universities — the bottom 25 percent of every tier, we predict — will disappear or merge in the next 10 to 15 years.

- Clayton Christensen in NY Times 2013

You could be forgiven for assuming that the continued growth of online education within US higher ed was a foregone conclusion. We all know it’s happening; the question is how to adapt to the new world.

But what if the assumption is wrong? Based on the new official Department of Education / NCES IPEDS data for the Fall 2013 term, for the first time there has been no discernible growth in the number of postsecondary students taking at least one online course in the US.

From 2002 through 2013 the most reliable measure of this metric has been the Babson Survey Research Group (BSRG) annual report. While there are doubts about the absolute numbers, owing to differing definitions of what makes a course “online”, the year-over-year growth numbers have been quite reliable and are the most-referenced numbers available. Starting last year, with Fall 2012 data, the official IPEDS survey began tracking online education, and last week it released Fall 2013 data – allowing year-over-year comparisons for the first time.

I shared the recent overall IPEDS data in this post, noting the following:

By way of comparison, it is worth noting the similarities to the Fall 2012 data. The percentage data (e.g. percent of a sector taking exclusive / some / no DE courses) has not changed by more than 1% (rounded) in any of the data. This unfortunately makes the problems with IPEDS data validity all the more important.

It will be very interesting to see the Babson Survey Research Group data that is typically released in January. While Babson relies on voluntary survey data, as opposed to mandatory federal data reporting for IPEDS, their report should have better longitudinal validity. If this IPEDS data holds up, then I would expect the biggest story for this year’s Babson report to be the first year of no significant growth in online education since the survey started 15 years ago.

I subsequently found out that BSRG is moving this year to use the IPEDS data for online enrollment. So we already have the best data available, and there is no discernible growth. Nationwide there are just 77,493 more students taking at least one online class, a 1.4% increase.

[Table: Y-o-Y Analysis]

Why The Phrase “No Discernible Growth”?

Even though there was a nationwide increase of 77,493 students taking at least one online course – a 1.4% rise – there is too much noise in the data for this to be considered real growth. Even with the drop in total enrollment, the percentage of students taking at least one online course only moved from 26.4% to 27.1%.
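
As a quick sanity check on scale, the figures above are self-consistent (rounded, and derived only from the numbers already quoted):

    77,493 / 1.4% ≈ 5.5 million students taking at least one online course
    5.5 million / 27.1% ≈ 20+ million total enrollments

So the headline increase amounts to well under one percent of total enrollment – comfortably inside the reporting noise discussed next.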

Just take one school, Suffolk County Community College, which showed an increase of roughly 21,600 student enrollments taking at least one online course from 2012 to 2013 – the result of a change in how it reports data, not of actual enrollment increases. More than a quarter of the annual nationwide increase can be attributed to this one reporting change[1]. These and similar issues are why I use the phrase “no discernible growth”: the year-over-year changes are now smaller than our data collection methods can measure accurately.

Combine Babson and IPEDS Growth Data

While we should not directly compare absolute numbers, it is reasonable to combine the BSRG year-over-year historical growth data (2003 – 2012) with the new IPEDS data (2012 – 2013).

[Chart: Y-o-Y Growth]

One thing to notice is that this is really a long-term trend of declining growth in online enrollments. With the release of last year’s BSRG report the authors specifically called out this trend:

The number of additional students taking at least one online course continued to grow at a rate far in excess of overall enrollments, but the rate was the lowest in a decade.

What has not been acknowledged or fully understood is the significance of this rate hitting zero, at least within the bounds of the noise in data collection.

Implications

Think of the implications if online education has stopped growing in US higher education. Many of the assumptions underlying institutional strategic plans and ed tech vendor market forecasts are based on continued growth in online learning. It is possible that market changes will lead back to year-over-year growth, but for now those assumptions might be wrong.

Rather than focusing just on this year, the more relevant questions concern the future, particularly in light of the longer-term trends. Have we hit a plateau in the natural level of online enrollment? Will the trend continue to the point where online enrollment growth drops below overall enrollment growth? Will online enrollments bottom out and start to rise again once the newer generation of tools and pedagogical approaches, such as personalized learning and competency-based education, move beyond pilot programs?

I am not one to discount the powerful effect that online education has had and will continue to have in the US, but the growth appears to be happening at specific schools rather than as broad-based increases across sectors. Southern New Hampshire University, Arizona State University, Grand Canyon University and others are growing their online enrollments, while the University of Phoenix, DeVry University and others are shrinking.

One issue to track is the general shift from for-profit to not-for-profit enrollment, even though the overall rate of online course-taking has remained relatively stable within each sector. There are approximately 80,000 fewer students taking at least one online course at for-profit institutions, while there are approximately 157,000 more students in the same category across the public and private not-for-profit sectors.

I suspect the changes will continue to happen in specific areas – working adults taking courses, often in competency-based programs, at specific schools and statewide systems with aggressive plans – but blanket assumptions of broad-based growth need to be reconsidered.

Update: Please note that the data release is new and these are early results. If I find mistakes in the data or analysis that changes the analysis above, I’ll share in an updated post.

  1. Russ Poulin and I documented these issues in a separate post showing the noise is likely in the low hundreds of thousands.

The post No Discernible Growth in US Higher Ed Online Learning appeared first on e-Literate.

Another Echo Hack from Noel

Oracle AppsLab - Tue, 2015-01-06 10:44

Noel (@noelportugal) spent a lot of time during his holidays geeking out with his latest toy, Amazon Echo. Check out his initial review and his lights hack.

For a guy whose name means Christmas, it seems a logical leap to use Alexa to control his Christmas tree lights too.

Let’s take a minute to shame Noel for shooting portrait video. Good, moving on. Oddly, I found out about this from a Wired UK article about Facebook’s acquisition of Wit.ai, an interesting nugget in its own right.

If you’re interested, check out Noel’s code on GitHub. Amazon is rolling out another batch of Echos to those who signed up back when the device was announced in November.

How do I know this? I just accepted my invitation and bought my very own Echo.

With all the connected home announcements coming out of CES 2015, I’m hoping to connect Alexa to some of the IoT gadgets in my home. Stretch goal for sure, given all the different ecosystems, but maybe this is finally the year that IoT pushes over the adoption hump.

Fingers crossed. The comments you must find.

Possibly Related Posts: