
Feed aggregator

SafeDrop – Part 2: Function and Benefits

Oracle AppsLab - Mon, 2016-01-25 02:14

SafeDrop is a secure box for receiving a physical package delivery without the recipient needing to be present. If you recall, it was my team’s project at the AT&T Developer Summit Hackathon earlier this month.

SafeDrop is built around an Intel Edison board, which coordinates various peripheral devices to produce a secure receiving product, and it won second place for “Best Use of Intel Edison” at the hackathon.

SafeDrop box with scanner

Components built in the SafeDrop

While many companies have focused on the online security of eCommerce, last-mile package delivery is still very much insecure. ECommerce is ubiquitous, and people still need to receive the physical goods somehow.

The delivery company would tell you the order will be delivered on a particular day. You can wait at home all day to receive the package, or let the package sit in front of your house and risk someone stealing it.

Every year there are reports of package theft during the holiday season, but more importantly, the inconvenience of staying home to accept deliveries and the lack of peace of mind really annoy many people.

With SafeDrop, your package is delivered, securely!

How SafeDrop works:

1. When a recipient is notified of a package delivery with a tracking number, he enters that tracking number in a mobile app, which registers it with the SafeDrop box so the box knows it is expecting a package with that tracking number.

2. When the delivery person arrives, he simply scans the tracking number barcode at the SafeDrop box; that barcode is the unique key to open the SafeDrop. Once the tracking number is verified, the SafeDrop opens.

Delivery person scans the package in front of SafeDrop

3. While the SafeDrop is open, video is recorded for the entire time the door is open, as a security measure. If the SafeDrop is not closed, a loud buzzer sounds continuously until it is closed properly.

Inside of SafeDrop: Intel Edison board, sensor, buzzer, LED, servo, and webcam

4. Once the package is inside the SafeDrop, a notification is sent to the recipient’s mobile app, indicating that the expected package has been delivered, along with a link to the recorded video.

In the future, SafeDrop could be integrated with USPS, UPS and FedEx to verify the package tracking number automatically. When the delivery person scans the tracking number at the SafeDrop, the status would also be updated to “delivered” in the tracking record in the delivery company’s database. That way, the entire delivery process is automated in a secure fashion.
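
The post describes this flow in prose only, and the actual SafeDrop firmware is not published. Purely as an illustration of the logic, here is a rough Java sketch of the verification loop; the Lock, Camera, Buzzer and Notifier interfaces are hypothetical stand-ins for the Edison-attached peripherals listed above, not real APIs.

import java.util.HashSet;
import java.util.Set;

public class SafeDropController {

    // Hypothetical peripheral abstractions standing in for the servo lock, webcam,
    // buzzer and mobile-app notification channel wired to the Intel Edison board.
    public interface Lock     { void open(); boolean isClosed(); }
    public interface Camera   { void startRecording(); String stopRecording(); } // returns a video link
    public interface Buzzer   { void beep(); }
    public interface Notifier { void packageDelivered(String trackingNumber, String videoUrl); }

    private final Set<String> expectedTrackingNumbers = new HashSet<String>();
    private final Lock lock;
    private final Camera camera;
    private final Buzzer buzzer;
    private final Notifier notifier;

    public SafeDropController(Lock lock, Camera camera, Buzzer buzzer, Notifier notifier) {
        this.lock = lock;
        this.camera = camera;
        this.buzzer = buzzer;
        this.notifier = notifier;
    }

    // Step 1: the recipient's mobile app registers the tracking number it expects.
    public void registerExpectedPackage(String trackingNumber) {
        expectedTrackingNumbers.add(trackingNumber);
    }

    // Steps 2-4: the scanned barcode is the key; verify, open, record, and notify.
    public void onBarcodeScanned(String trackingNumber) throws InterruptedException {
        if (!expectedTrackingNumbers.remove(trackingNumber)) {
            return; // not an expected package: the box stays locked
        }
        camera.startRecording();        // record for as long as the door is open
        lock.open();
        while (!lock.isClosed()) {      // buzz until the door is closed properly
            buzzer.beep();
            Thread.sleep(500);
        }
        String videoUrl = camera.stopRecording();
        notifier.packageDelivered(trackingNumber, videoUrl);
    }
}

Rejecting any barcode that was not registered in advance is what lets the tracking number act as a one-time key for its own destination.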

This SafeDrop design highlights three advantages:

1. Tracking number barcode as the key to the SafeDrop.
That barcode is tracked throughout the delivery and always stays with the package, so it is fitting to use it as the “key” to open its destination. We do not introduce anything “new” or “additional” into the process.

2. Accurate delivery, which eliminates human mistakes.
Human error sometimes causes deliveries to the wrong address. With SafeDrop integrated into the shipping system, the focus is on a package (with tracking number as package ID) going to a target (SafeDrop ID associated with an address).

In a sense, the package (package ID) has its intended target (SafeDrop ID). The package can only be deposited into one and only one SafeDrop, which eliminates the wrong delivery issue.

3. Non-disputable delivery.
A dispute can arise when the delivery company says a package has been delivered but the recipient says it has not arrived. The possible reasons: a) the delivery person never actually delivered it; b) the delivery person dropped it at the wrong address; c) a thief came by and took it; d) the recipient got it but is making a false claim.

SafeDrop makes things clear. If the package is really delivered to the SafeDrop, the delivery is recorded and the delivery company has done its job. If it is in the SafeDrop, the recipient has it. There really is no dispute.

I will be showing SafeDrop at Modern Supply Chain Experience in San Jose, January 26 and 27, in the Maker Zone in the Solutions Showcase. If you’re attending the show, come by and check out SafeDrop.

Links for 2016-01-24 [del.icio.us]

Categories: DBA Blogs

OBIEE 12c – Three Months In, What’s the Verdict?

Rittman Mead Consulting - Sun, 2016-01-24 13:22

I’m over in Redwood Shores, California this week for the BIWA Summit 2016 conference where I’ll be speaking about BI development on Oracle’s database and big data platforms. As it’s around three months since OBIEE12c came out and we were here for Openworld, I thought it’d be a good opportunity to reflect on how OBIEE12c has been received by ourselves, the partner community and of course by customers. Given OBIEE11g was with us for around five years it’s clearly still early days in the overall lifecycle of this release, but it’s also interesting to compare back to where we were around the same time with 11g and see if we can spot any similarities and differences to take-up then.

Starting with the positives; Visual Analyzer (note – not the Data Visualization option, I’ll get to that later) has gone down well with customers, at least over in the UK. The major attraction for customers seems to be “Tableau with a shared governed data model and integrated with the rest of our BI platform” (see Oracle’s slide from Oracle Openworld 2015 below), and seeing as the Data Visualization Option is around the same price as Tableau Server the synergies around not having to deploy and support a new platform seem to outweigh the best-of-breed query functionality of Tableau and Qlikview.

Oracle OpenWorld 2015 slide on Visual Analyzer positioning

Given that VA is an extra-cost option what I’m seeing is customers planning to upgrade their base OBIEE platform from 11g to 12c as part of their regular platform refresh, and considering the VA part after the main upgrade as part of a separate cost/benefit exercise. VA however seems to be the trigger for customer interest in an upgrade, with the business typically now holding the budget for BI rather than IT and Visual Analyzer (like Mobile with 11g) being the new capability that unlocks the upgrade spend.

On the negative side, Oracle charging for VA hasn’t gone down well, either from the customer side, who ask what it is they actually get for their 22% upgrade and maintenance fee if they have to pay for anything new that comes with the upgrade; or from partners who now see little in the 12c release to incentivise customers to upgrade that’s not an additional cost option. My response is usually to point to previous releases – 11g with Scorecards and Mobile, the database with In-Memory, RAC and so on – and say that it’s always the case that anything net-new comes at extra cost, whereas the upgrade should be something you do anyway to keep the platform up-to-date and be prepared to uptake new features. My observation over the past month or so is that this objection seems to be going away as people get used to the fact that VA costs extra; the other push-back I get a lot is from IT who don’t want to introduce data mashups into their environment, partly I guess out of fear of the unknown but also partly because of concerns around governance, how well it’ll work in the real world, and so on. I’d say overall VA has gone down well, at least once we got past the “shock” of it costing extra; I’d expect there’ll be some bundle along the lines of BI Foundation Suite (BI Foundation Suite Plus?) in the next year or so that’ll bundle BIF with the DV option, and maybe include some upcoming features in 12c that aren’t exposed yet but might round out the release. We’ll see.

The other area where OBIEE12c has gone down well, surprisingly well, is with the IT department for the new back-end features. I’ve been telling people that whilst everyone thinks 12c is about the new front-end features (VA, new look-and-feel etc) it’s actually the back-end that has the most changes, and will lead to the most financial and cost-saving benefits to customers – again note the slide below from last year’s Openworld summarising these improvements.

Oracle OpenWorld 2015 slide summarising the OBIEE 12c back-end improvements

Simplifying install, cloning, dev-to-test and so on will make BI provisioning considerably faster and cheaper to do, whilst the introduction of new concepts such as BI Modules, Service Instances, layered RPD customizations and BAR files paves the way for private cloud-style hosting of multiple BI applications on a single OBIEE12c domain, hybrid cloud deployments and mass customisation of hosted BI environments similar to what we’ve seen with Oracle Database over the past few years.

What’s interesting with 12c at this point, though, is that these back-end features are only half-deployed within the platform; the lack of a proper RPD upload tool, BI Modules and Service Instances only existing in the singular, and so on, point to a future release where all this functionality gets rounded-off and fully realised in the platform. So where we are now is that 12c seems oddly half-finished and over-complicated for what it is, but it’s what’s coming over the rest of the lifecycle that will make this part of the product most interesting – see the slide below from Openworld 2014 where this full vision was set out, but which at Openworld this year was presumably left out of the launch slides as the initial release only included the foundation and not the full capability.

Oracle OpenWorld 2014 slide setting out the full OBIEE 12c platform vision

Compared to where we were with OBIEE11g (11.1.1.3, at the start of the product cycle), which was largely feature-complete but had significant product quality issues, with 12c we’ve got less of the platform built out but (with a couple of notable exceptions) generally good product quality; however, this half-completed nature of the back-end must confuse some customers and partners who aren’t really aware of the full vision for the platform.

And finally, cloud; BICS had an update some while ago where it gained Visual Analyzer and data mashups earlier than the on-premise release, and as I covered in my recent UKOUG Tech’15 conference presentation it’s now possible to upload an on-premise RPD (but not the accompanying catalog, yet) and run it in BICS, giving you the benefit of immediate availability of VA and data mashups without having to do a full platform upgrade to 12c.

Slide from the UKOUG Tech’15 presentation: running an on-premise RPD in BICS

In practice there are still some significant blockers for customers looking to move their BI platform wholesale into Oracle Cloud; there’s no ability yet to link your on-premise Active Directory or other security setup to BICS, meaning that you need to recreate all your users as Oracle Cloud users, and there’s very limited support for multiple subject areas, access to on-premise data sources and other more “enterprise” characteristics of an Oracle BI platform. And Data Visualisation Cloud Service (DVCS) has so far been a non-event; for partners the question is why we would get involved and sell this given the very low cost and the lack of any area we can really add value, while for customers it’s perceived as interesting but too limited to be of any real practical use. Of course, over the long term this is the future – I expect on-premise installs of OBIEE will be the exception rather than the rule in 5 or 10 years time – but for now Cloud is more “one to monitor for the future” rather than something to plan for now, as we’re doing with 12c upgrades and new implementations.

So in summary, I’d say with OBIEE12c we were pleased and surprised to see it out so early, and VA in particular has driven a lot of interest and awareness from customers that has manifested itself in enquiries around upgrades and new-features presentations. The back-end for me is the most interesting new part of the release, promising significant time-saving and quality-improving benefits for the IT department, but at present these benefits are more theoretical than realisable until such time as the full BI Modules/multiple Service Instances feature is rolled out later this year or next. Cloud is still “one for the future”; there’s significant interest from customers in moving either part or all of their BI platform to the cloud, but given the enterprise nature of OBIEE it’s likely BI will follow a wider adoption of Oracle Cloud at the customer rather than being the trailblazer, given the need to integrate with cloud security and data sources and to wait for some other product enhancements to match on-premise functionality.

The post OBIEE 12c – Three Months In, What’s the Verdict? appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Semijoin_driver

Jonathan Lewis - Sun, 2016-01-24 05:42

Here’s one of those odd little tricks that (a) may help in a couple of very special cases and (b) may show up at some future date – or maybe it already does – in the optimizer if it is recognised as a solution to a more popular problem. It’s about an apparent restriction on how the optimizer uses the BITMAP MERGE operation, and to demonstrate a very simple case I’ll start with a data set with just one bitmap index:


create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        rownum                  id,
        mod(rownum,1000)        n1,
        rpad('x',10,'x')        small_vc,
        rpad('x',100,'x')       padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6
;
begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'T1',
                method_opt       => 'for all columns size 1'
        );
end;
/

create bitmap index t1_b1 on t1(n1);

select index_name, leaf_blocks, num_rows from user_indexes;

/*
INDEX_NAME           LEAF_BLOCKS   NUM_ROWS
-------------------- ----------- ----------
T1_B1                        500       1000
*/

Realistically we don’t expect to use a single bitmap index to access data from a large table; usually we expect to have queries that give the optimizer the option to choose and combine several bitmap indexes (possibly driving through dimension tables first) to reduce the target row set in the table to a cost-effective level.

In this example, though, I’ve created a column data set that many people might view as “inappropriate” as the target for a bitmap index – in one million rows I have one thousand distinct values, it’s not a “low cardinality” column – but, as Richard Foote (among others) has often had to point out, it’s wrong to think that bitmap indexes are only suitable for columns with a very small number of distinct values. Moreover, it’s the only index on the table, so no chance of combining bitmaps.

Another thing to notice about my data set is that the n1 column has been generated by the mod() function; because of this the column cycles through the 1,000 values I’ve allowed for it, and this means that the rows for any given value are scattered widely across the table, but it also means that if I find a row with the value X in it then there could well be a row with the value X+4 (say) in the same block.

I’ve reported the statistics from user_indexes at the end of the sample code. This shows you that the index holds 1,000 “rows” – i.e. each key value requires only one bitmap entry to cover the whole table, with two rows per leaf block.  (By comparison, a B-tree index on the column was 2,077 leaf blocks uncompressed, or 1,538 leaf blocks when compressed.)

So here’s the query I want to play with, followed by the run-time execution plan with stats (in this case from a 12.1.0.2 instance):


alter session set statistics_level = all;

select
        /*+
                qb_name(main)
        */
        max(small_vc)
from
        t1
where
        n1 in (1,5)
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last outline'));

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |       |      1 |        |      1 |00:00:00.03 |    2006 |      4 |
|   1 |  SORT AGGREGATE                       |       |      1 |      1 |      1 |00:00:00.03 |    2006 |      4 |
|   2 |   INLIST ITERATOR                     |       |      1 |        |   2000 |00:00:00.03 |    2006 |      4 |
|   3 |    TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      2 |   2000 |   2000 |00:00:00.02 |    2006 |      4 |
|   4 |     BITMAP CONVERSION TO ROWIDS       |       |      2 |        |   2000 |00:00:00.01 |       6 |      4 |
|*  5 |      BITMAP INDEX SINGLE VALUE        | T1_B1 |      2 |        |      2 |00:00:00.01 |       6 |      4 |
------------------------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------
   5 - access(("N1"=1 OR "N1"=5))

The query is selecting 2,000 rows from the table, for n1 = 1 and n1 = 5, and the plan shows us that the optimizer probes the bitmap index twice (operation 5), once for each value, fetching all the rows for n1 = 1, then fetching all the rows for n1 = 5. This entails 2,000 buffer gets. However, we know that for every row where n1 = 1 there is another row nearby (probably in the same block) where n1 = 5 – it would be nice if we could pick up the 1 and the 5 at the same time and do less work.

Technically the optimizer has the necessary facility to do this – it’s known as the BITMAP MERGE – Oracle can read two or more entries from a bitmap index, superimpose the bits (effectively a BITMAP OR), then convert to rowids and visit the table. Unfortunately there are cases (and it seems to be only the simple cases) where this doesn’t appear to be allowed even when we – the users – can see that it might be a very effective strategy. So can we make it happen – and since I’ve asked the question you know that the answer is almost sure to be yes.

Here’s an alternate (messier) SQL statement that achieves the same result:


select
        /*+
                qb_name(main)
                semijoin_driver(@subq)
        */
        max(small_vc)
from
        t1
where
        n1 in (
                select /*+ qb_name(subq) */
                        *
                from    (
                        select 1 from dual
                        union all
                        select 5 from dual
                        )
        )
;

-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |       |      1 |        |      1 |00:00:00.02 |    1074 |       |       |          |
|   1 |  SORT AGGREGATE                      |       |      1 |      1 |      1 |00:00:00.02 |    1074 |       |       |          |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| T1    |      1 |   2000 |   2000 |00:00:00.02 |    1074 |       |       |          |
|   3 |    BITMAP CONVERSION TO ROWIDS       |       |      1 |        |   2000 |00:00:00.01 |       6 |       |       |          |
|   4 |     BITMAP MERGE                     |       |      1 |        |      1 |00:00:00.01 |       6 |  1024K|   512K| 8192  (0)|
|   5 |      BITMAP KEY ITERATION            |       |      1 |        |      2 |00:00:00.01 |       6 |       |       |          |
|   6 |       VIEW                           |       |      1 |      2 |      2 |00:00:00.01 |       0 |       |       |          |
|   7 |        UNION-ALL                     |       |      1 |        |      2 |00:00:00.01 |       0 |       |       |          |
|   8 |         FAST DUAL                    |       |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|   9 |         FAST DUAL                    |       |      1 |      1 |      1 |00:00:00.01 |       0 |       |       |          |
|* 10 |       BITMAP INDEX RANGE SCAN        | T1_B1 |      2 |        |      2 |00:00:00.01 |       6 |       |       |          |
-----------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
  10 - access("N1"="from$_subquery$_002"."1")

Key points from this plan – and I’ll comment on the SQL in a moment: The number of buffer visits is roughly halved (In many cases we picked up two rows as we visited each buffer); operation 4 shows us that we did a BITMAP MERGE, and we can see in operations 5 to 10 that we did a BITMAP KEY ITERATION (which is a bit like a nested loop join – “for each row returned by child 1 (operation 6) we executed child 2 (operation 10)”) to probe the index twice and get two strings of bits that operation 4 could merge before operation 3 converted to rowids.

For a clearer picture of how we visit the table, here are the first few rows and last few rows from a version of the two queries where we simply select the ID column rather than aggregating on the small_vc column:

select  id from ...

Original query structure
         1
      1001
      2001
      3001
...
    997005
    998005
    999005

2000 rows selected.

Modified query structure:

         1
         5
      1001
      1005
      2001
      2005
...
    998001
    998005
    999001
    999005
    
2000 rows selected.

As you can see, one query returns all the n1 = 1 rows then all the n1 = 5 rows while the other query alternates as it walks through the merged bitmap. You may recall the Exadata indexing problem (now addressed, of course) from a few years back where the order in which rows were visited after a (B-tree) index range scan made a big difference to performance. This is the same type of issue – when the optimizer’s default plan gets the right data in the wrong order we may be able to find ways of modifying the SQL to visit the data in a more efficient order. In this case we save only fractions of a second because all the data is buffered, but it’s possible that in a production environment with much larger tables many, or all, of the re-visits could turn into physical reads.

Coming back to the SQL, the key to the re-write is to turn my IN-list into a subquery, and then tell the optimizer to use that subquery as a “semijoin driver”. This is essentially the mechanism used by the Star Transformation, where the optimizer rewrites a simple join so that each dimension table (typically) appears twice, first as an IN subquery driving the bitmap selection then as a “joinback”. But (according to the manuals) a star transformation requires at least two dimension tables to be involved in a join to the central fact table – and that may be why the semi-join approach is not considered in this case (and slightly more complex ones).

 

 

My reference: bitmap_merge.sql, star_hack3.sql


Trace Files -- 10c : Query and DML (INSERT)

Hemant K Chitale - Sun, 2016-01-24 04:52
In the previous posts, I have traced either SELECT or INSERT/UPDATE/DELETE statements.

I have pointed out that the block statistics are reported as "FETCH" statistics for SELECTs and "EXECUTE" statistics for the DMLs.

What if we have an INSERT ... AS SELECT ?

SQL ID: 5pf0puy1pmcvc Plan Hash: 2393429040

insert into all_objects_many_list select * from all_objects_short_list


call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1     0.01       0.01          0          0          0          0
Execute      1     0.73      16.19        166       1655      14104      28114
Fetch        0     0.00       0.00          0          0          0          0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total        2     0.74      16.21        166       1655      14104      28114

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL (cr=1766 pr=166 pw=0 time=16204380 us)
28114 28114 28114 TABLE ACCESS FULL ALL_OBJECTS_SHORT_LIST (cr=579 pr=0 pw=0 time=127463 us cost=158 size=2614881 card=28117)


insert into all_objects_many_list
select * from all_objects_short_list
where rownum < 11

call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1     0.00       0.00          0          0          0          0
Execute      1     0.00       0.00          0          6         11         10
Fetch        0     0.00       0.00          0          0          0          0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total        2     0.00       0.00          0          6         11         10

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL (cr=6 pr=0 pw=0 time=494 us)
10 10 10 COUNT STOPKEY (cr=3 pr=0 pw=0 time=167 us)
10 10 10 TABLE ACCESS FULL ALL_OBJECTS_SHORT_LIST (cr=3 pr=0 pw=0 time=83 us cost=158 size=2614881 card=28117)


Since it is an INSERT statement, the block statistics are reported against EXECUTE (nothing is reported against the FETCH), even though the Row Source Operations section shows a Table Access (i.e. SELECT) against the ALL_OBJECTS_SHORT_LIST table. Also note, as we have seen in the previous trace on INSERTs, that the target table does not get reported in the Row Source Operations.

Here's another example.

SQL ID: 5fgnrpgk3uumc Plan Hash: 2703984749

insert into all_objects_many_list select * from dba_objects


call     count      cpu    elapsed       disk      query    current       rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse        1     0.04       0.25          0          0          0          0
Execute      1     0.63       3.94         82       2622      13386      28125
Fetch        0     0.00       0.00          0          0          0          0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total        2     0.67       4.20         82       2622      13386      28125

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 87
Number of plan statistics captured: 1

Rows (1st) Rows (avg) Rows (max) Row Source Operation
---------- ---------- ---------- ---------------------------------------------------
0 0 0 LOAD TABLE CONVENTIONAL (cr=2736 pr=82 pw=0 time=3954384 us)
28125 28125 28125 VIEW DBA_OBJECTS (cr=360 pr=2 pw=0 time=1898588 us cost=105 size=5820219 card=28117)
28125 28125 28125 UNION-ALL (cr=360 pr=2 pw=0 time=1735711 us)
0 0 0 TABLE ACCESS BY INDEX ROWID SUM$ (cr=2 pr=2 pw=0 time=65154 us cost=1 size=11 card=1)
1 1 1 INDEX UNIQUE SCAN I_SUM$_1 (cr=1 pr=1 pw=0 time=61664 us cost=0 size=0 card=1)(object id 986)
0 0 0 TABLE ACCESS BY INDEX ROWID OBJ$ (cr=0 pr=0 pw=0 time=0 us cost=3 size=25 card=1)
0 0 0 INDEX RANGE SCAN I_OBJ1 (cr=0 pr=0 pw=0 time=0 us cost=2 size=0 card=1)(object id 36)
28124 28124 28124 FILTER (cr=354 pr=0 pw=0 time=1094905 us)
28124 28124 28124 HASH JOIN (cr=354 pr=0 pw=0 time=911322 us cost=102 size=3402036 card=28116)
68 68 68 TABLE ACCESS FULL USER$ (cr=5 pr=0 pw=0 time=328 us cost=3 size=1224 card=68)
28124 28124 28124 HASH JOIN (cr=349 pr=0 pw=0 time=612040 us cost=99 size=2895948 card=28116)
68 68 68 INDEX FULL SCAN I_USER2 (cr=1 pr=0 pw=0 time=159 us cost=1 size=1496 card=68)(object id 47)
28124 28124 28124 TABLE ACCESS FULL OBJ$ (cr=348 pr=0 pw=0 time=147446 us cost=98 size=2277396 card=28116)
0 0 0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us cost=2 size=30 card=1)
0 0 0 INDEX SKIP SCAN I_USER2 (cr=0 pr=0 pw=0 time=0 us cost=1 size=20 card=1)(object id 47)
0 0 0 INDEX RANGE SCAN I_OBJ4 (cr=0 pr=0 pw=0 time=0 us cost=1 size=10 card=1)(object id 39)
1 1 1 NESTED LOOPS (cr=4 pr=0 pw=0 time=262 us cost=3 size=38 card=1)
1 1 1 TABLE ACCESS FULL LINK$ (cr=2 pr=0 pw=0 time=180 us cost=2 size=20 card=1)
1 1 1 TABLE ACCESS CLUSTER USER$ (cr=2 pr=0 pw=0 time=50 us cost=1 size=18 card=1)
1 1 1 INDEX UNIQUE SCAN I_USER# (cr=1 pr=0 pw=0 time=21 us cost=0 size=0 card=1)(object id 11)

Only the source tables are reported in the Row Source Operations section. All the blocks are reported in the EXECUTE phase. Why does the "query" count in the EXECUTE statistics differ from the "cr" count reported for the LOAD TABLE CONVENTIONAL? (LOAD TABLE CONVENTIONAL indicates a regular, non-direct-path INSERT.)

Categories: DBA Blogs

Database Change Notification Listener Implementation

Andrejus Baranovski - Sat, 2016-01-23 08:43
Oracle DB can notify a client when table data changes. Data could be changed by a third-party process or by a different client session. Notification is important when we want to inform the client about changes in the data without manually re-querying. I had a post about ADS (Active Data Service), where notifications were received from the DB through a change notification listener - Practical Example for ADF Active Data Service. Now I would like to focus on the change notification listener itself, because it can be used in ADF not only with ADS, but also with WebSockets. ADF BC uses change notification to reload a VO when configured for auto refresh - Auto Refresh for ADF BC Cached LOV.

Change notification is initialized by the DB and handled by a JDBC listener. There are two important steps for such a listener - start and stop. The sample application - WebSocketReusableApp.zip - implements a servlet, DBNotificationManagerImpl:


I'm starting and stopping the DB change listener in the servlet init and destroy methods:


The DB listener initialization code is straightforward. Data changes are handled in the onDatabaseChangeNotification method. The listener is registered against the DB table accessed by the initial query:

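The registration code is shown in the post as screenshots; in outline, a minimal sketch of the Oracle JDBC Database Change Notification API being described might look like the following. The connection details and the averages query are placeholders rather than the exact code from WebSocketReusableApp.zip, and the database user needs the CHANGE NOTIFICATION privilege.

import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

import oracle.jdbc.OracleConnection;
import oracle.jdbc.OracleStatement;
import oracle.jdbc.dcn.DatabaseChangeEvent;
import oracle.jdbc.dcn.DatabaseChangeListener;
import oracle.jdbc.dcn.DatabaseChangeRegistration;
import oracle.jdbc.pool.OracleDataSource;

public class DbNotificationSketch {

    private OracleConnection conn;
    private DatabaseChangeRegistration dcr;

    // Called from the servlet's init(): open a connection and register the listener.
    public void startListener() throws Exception {
        OracleDataSource ods = new OracleDataSource();
        ods.setURL("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder connection details
        ods.setUser("hr");
        ods.setPassword("hr");
        conn = (OracleConnection) ods.getConnection();

        Properties props = new Properties();
        props.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
        dcr = conn.registerDatabaseChangeNotification(props);

        // Data changes arrive here; re-run the aggregate query to refresh the averages.
        dcr.addListener(new DatabaseChangeListener() {
            public void onDatabaseChangeNotification(DatabaseChangeEvent event) {
                System.out.println("Change received: " + event);
            }
        });

        // Executing the initial query with the registration attached adds the tables
        // it touches (EMPLOYEES here) to the registration.
        Statement stmt = conn.createStatement();
        ((OracleStatement) stmt).setDatabaseChangeRegistration(dcr);
        ResultSet rs = stmt.executeQuery(
                "select job_id, avg(salary) from employees group by job_id");
        while (rs.next()) {
            // read the initial averages
        }
        rs.close();
        stmt.close();
    }

    // Called from the servlet's destroy(): clean up the registration and connection.
    public void stopListener() throws Exception {
        if (conn != null) {
            if (dcr != null) {
                conn.unregisterDatabaseChangeNotification(dcr);
            }
            conn.close();
        }
    }
}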

In my example I'm listening for changes in the Employees table. Each time a change happens, the query is re-executed to calculate new average salary values per job:


On first load, when the servlet is initialized, the listener is registered. The initial data read happens and the listener is reported as started:


I can change a value in the DB and commit it:


The DB change listener is notified and the query is executed to fetch the new averages. The value for SA_REP has changed:


I hope this makes it clearer how to use the DB change listener. In future posts I will describe how to push changes to JET through WebSockets.

Packt - Time to learn Oracle and Linux

Surachart Opun - Sat, 2016-01-23 00:01
What is your resolution for learning? Learn Oracle, learn Linux, or both? It's good news for people who are interested in improving their Oracle and Linux skills: Packt is running a promotion (50% discount) on eBooks & videos from today until 23rd Feb, 2016.

  • Oracle (Code: XM6lxr0)
  • Linux (Code: ILYTW)

Written By: Surachart Opun http://surachartopun.com
Categories: DBA Blogs

Formatting a Download Link

Scott Spendolini - Fri, 2016-01-22 14:38
Providing file upload and download capabilities has been native functionality in APEX for a couple major releases now. In 5.0, it's even more streamlined and 100% declarative.
In the interest of saving screen real estate, I wanted to represent the download link in an IR with an icon - specifically fa-download. This is a simple task to achieve - edit the column and set the Download Text to this:
<i class="fa fa-lg fa-download"></i>
The fa-lg will make the icon a bit larger, and is not required. Now, instead of a "download" link, you'll see the icon rendered in each row. Clicking on the icon will download the corresponding file. However, when you hover over the icon, instead of getting the standard text, it displays this:
The unhelpful default hover text
Clearly not optimal, and very uninformative. Let's fix this with a quick Dynamic Action. I placed mine on the global page, as this application has several places where it can download files. You can do the same or simply put on the page that needs it.
The Dynamic Action will fire on Page Load, and has one true action - a small JavaScript snippet:
$(".fa-download").prop('title','Download File');
This will find any instance of fa-download and replace the title with the text "Download File":
The icon now shows the "Download File" hover text
If you're using a different icon for your download, or want it to say something different, then be sure to alter the code accordingly.

Blackboard Did What It Said It Would Do. Eventually.

Michael Feldstein - Fri, 2016-01-22 10:21

By Michael Feldstein

Today we have a prime example of how Blackboard has been failing by not succeeding fast enough. The company issued a press release announcing “availability of new SaaS offerings.” After last year’s BbWorld, I wrote a post about how badly the company was communicating with its customers about important issues. One of the examples I cited was the confusion around their new SaaS offerings versus managed hosting:

What is “Premium SaaS”? Is it managed hosting? Is it private cloud? What does it mean for current managed hosting customers? What we have found is that there doesn’t seem to be complete shared understanding even among the Blackboard management team about what the answers to these questions are.

A week later, (as I wrote at the time), the company acted to clarify the situation. We got some documentation on what the forthcoming SaaS tiers would look like and how they related to existing managed hosting options. Good on them for responding quickly and appropriately to criticism.

Now, half a year after the announcement, the company has released said SaaS offerings. Along with it, they put out an FAQ and a comparison of the tiers. So they said what they were going to do, they did it, and they said what they did. All good. But half a year later?

In my recent post about Blackboard’s new CEO, I wrote,

Ballhaus inherits a company with a number of problems. Their customers are increasingly unhappy with the support they are getting on the current platform, unclear about how they will be affected by future development plans, and unconvinced that Blackboard will deliver a next-generation product in the near future that will be a compelling alternative to the competitors in the market. Schools going out to market for an LMS seem less and less likely to take Blackboard serious as a contender, which is particularly bad news since a significant proportion of those schools are currently Blackboard schools. The losses have been incremental so far, but it feels like we are at an inflection point. The dam is leaking, and it could burst.

Tick-tock tick-tock.

The post Blackboard Did What It Said It Would Do. Eventually. appeared first on e-Literate.

[Cloud | On-Premise | Outsourcing | In-Sourcing] and why you will fail!

Tim Hall - Fri, 2016-01-22 07:47

I was reading this article about UK government in-sourcing all the work they previously outsourced.

This could be a story about any one of a number of failed outsourcing or cloud migration projects I’ve read about over the years. They all follow the same pattern.

  • The company is having an internal problem that they don’t know how to solve. It could be related to costs, productivity, a paradigm shift in business practices or just an existing internal project that is failing.
  • They decide to launch down a path of outsourcing or cloud migration with unrealistic expectations of what they can achieve and no real ideas about what benefits they will get, other than what Gartner told them.
  • When it doesn’t go to plan, they blame the outsourcing company, the cloud provider, the business analysts, Gartner, terrorists etc. Notably, the only thing that doesn’t get linked to the failure is themselves.

You might have heard this saying,

“You can’t outsource a problem!”

Just hoping to push your problems on to someone else is a guaranteed fail. If you can’t clearly articulate what you want and understand the consequences of your choices, how will you ever get a result you are happy with?

Over the years we’ve seen a number of high profile consultancies get kicked off government projects. The replacement consultancy comes in, hires all the same staff that failed last time, then continue on the failure train. I’m not going to mention names, but if you have paid any attention to UK government IT projects over the last decade you will know who and what I mean.

Every time you hear someone complaining about failing projects or problems with a specific model (cloud, on-premise, outsourcing, in-sourcing), it’s worth taking a step back and asking yourself where the problem really is. It’s much easier to blame other people than admit you’re part of the problem! These sayings spring to mind.

“Garbage in, garbage out!”

“A bad workman blames his tools!”

Cheers

Tim…

PS. I’ve never done anything wrong. It’s the rest of the world that is to blame… :)

Update: I wasn’t suggesting this is only an issue in public sector projects. It just so happens this rant was sparked by a story about public sector stuff. :)

[Cloud | On-Premise | Outsourcing | In-Sourcing] and why you will fail! was first posted on January 22, 2016 at 2:47 pm.

FREE Live Demo class with Hands-On : Oracle Access Manager : Sunday

Online Apps DBA - Fri, 2016-01-22 04:42
This entry is part 5 of 5 in the series Oracle Access Manager


This Sunday, 24th Jan, at 7:30 AM PST / 10:30 AM EST / 3:30 PM GMT / 9 PM IST, I’ll be doing a FREE demo class on Oracle Access Manager, with hands-on exercises, for my upcoming Oracle Access Manager course.

But before I jump into this FREE demo class with some hands-on exercises, I need your help!

I’m starting this class just for you - that’s why I’ve decided to ask for your suggestions.

I want to get a crystal-clear idea of where you need help with Oracle Access Manager:

  • What topics would you like me to cover in the class?
  • Which topics do you feel you are lacking in and would like to know more about?

If I know exactly what kind of struggles you have with Oracle Access Manager and its integration with Oracle Applications, I can tailor this class specifically to what you need right now.

Would you be so kind as to take 5 minutes to give me your suggestions?

Just click the button below to reserve your seat for the demo class on Oracle Access Manager, and give your suggestions on what you want me to cover at the time of registration.

Click Here to Join Class

If you do, I’ve got something for you in return! If you provide your valuable suggestions, I’ll be very glad to give you an extra discount on my Oracle Access Manager training, which starts on 31st Jan.

You’ll have first-round access to this demo class, which will give you a sneak peek. It will fill up fast, but you will be guaranteed a seat!

Thanks so much for helping me make this class amazing. I can’t wait to show it to you!

Click here  to get your premium access to January’s free Demo class.

Don’t forget to share this post if you think this could be useful to others and also Subscribe to this blog for more such FREE Webinars and useful content related to Oracle.

The post FREE Live Demo class with Hands-On : Oracle Access Manager : Sunday appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Links for 2016-01-21 [del.icio.us]

Categories: DBA Blogs

Cloudera in the cloud(s)

DBMS2 - Fri, 2016-01-22 01:46

Cloudera released Version 2 of Cloudera Director, which is a companion product to Cloudera Manager focused specifically on the cloud. This led to a discussion about — you guessed it! — Cloudera and the cloud.

Making Cloudera run in the cloud has three major aspects:

  • Cloudera’s usual software, ported to run on the cloud platform(s).
  • Cloudera Director, which for example launches cloud instances.
  • Points of integration, e.g. taking information about security-oriented roles from the platform and feeding them to the role-based security that is specific to Cloudera Enterprise.

Features new in this week’s release of Cloudera Director include:

  • An API for job submission.
  • Support for spot and preemptable instances.
  • High availability.
  • Kerberos.
  • Some cluster repair.
  • Some cluster cloning.

I.e., we’re talking about some pretty basic/checklist kinds of things. Cloudera Director is evidently working for Amazon AWS and Google GCP, and planned for Windows Azure, VMware and OpenStack.

As for porting, let me start by noting:

  • Shared-nothing analytic systems, RDBMS and Hadoop alike, run much better in the cloud than they used to.
  • Even so, it seems that the future of Hadoop in the cloud is to rely on object storage, such as Amazon S3.

That makes sense in part because:

  • The applications where shared nothing most drastically outshines object storage are probably the ones in which data can just be filtered from disk — spinning-rust or solid-state as the case may be — and processed in place.
  • By way of contrast, if data is being redistributed a lot then the shared nothing benefit applies to a much smaller fraction of the overall workload.
  • The latter group of apps are probably the harder ones to optimize for.

But while it makes sense, much of what’s hardest about the ports involves the move to object storage. The status of that is roughly:

  • Cloudera already has a lot of its software running on Amazon S3, with Impala/Parquet in beta.
  • Object storage integration for Windows Azure is “in progress”.
  • Object storage integration for Google GCP is “to be determined”.
  • Security for object storage — e.g. encryption — is a work in progress.
  • Cloudera Navigator for object storage is a roadmap item.

When I asked about particularly hard parts of porting to object storage, I got three specifics. Two of them sounded like challenges around having less detailed control, specifically in the area of consistency model and capacity planning. The third I frankly didn’t understand,* which was the semantics of move operations, relating to the fact that they were constant time in HDFS, but linear in size on object stores.

*It’s rarely obvious to me why something is O(1) until it is explained to me.

Naturally, we talked about competition, differentiation, adoption and all that stuff. Highlights included:

  • In general, Cloudera’s three big marketing messages these days can be summarized as “Fast”, “Easy”, and “Secure”.
  • Notwithstanding the differences as to which parts of the Cloudera stack run on premises, on Amazon AWS, on Microsoft Azure or on Google GCP, Cloudera thinks it’s important that its offering is the “same” on all platforms, which allows “hybrid” deployment.
  • In general, Cloudera still sees Hortonworks as a much bigger competitor than MapR or IBM.
  • Cloudera fondly believes that Cloudera Manager is a significant competitive advantage vs. Ambari. (This would presumably be part of the “Easy” claim.)
  • In particular, Cloudera asserts it has better troubleshooting/monitoring than the cloud alternatives do, because of superior drilldown into details.
  • Cloudera’s big competitor on the Amazon platform is Elastic MapReduce (EMR). Cloudera points out that EMR lacks various capabilities that are in the Cloudera stack. Of course, versions of these capabilities are sometimes found in other Amazon offerings, such as Redshift.
  • Cloudera’s big competitor on Azure is HDInsight. Cloudera sells against that via:
    • General Cloudera vs. Hortonworks distinctions.
    • “Hybrid”/portability.

Cloudera also offered a distinction among three types of workload:

  • ETL (Extract/Transform/Load) and “modeling” (by which Cloudera seems to mean predictive modeling).
    • Cloudera pitches this as batch work.
    • Cloudera tries to deposition competitors as being good mainly at these kinds of jobs.
    • This can be reasonably said to be the original sweet spot of Hadoop and MapReduce — which fits with Cloudera’s attempt to portray competitors as technical laggards. :)
    • Cloudera observes that these workloads tend to call for “transient” jobs. Lazier marketers might trot out the word “elasticity”.
  • BI (Business Intelligence) and “analytics”, by which Cloudera seems to mainly mean Impala and Spark.
  • “Application delivery”, by which Cloudera means operational stuff that can’t be allowed to go down. Presumably, this is a rough match to what I — and by now a lot of other folks as well — call short-request processing.

While I don’t agree with terminology that says modeling is not analytics, the basic distinction being drawn here makes considerable sense.

Categories: Other

Launch Of Moodle Users Association: 44 members sign up, mostly as individuals

Michael Feldstein - Thu, 2016-01-21 22:00

By Phil Hill

The Moodle Users Association (MUA), a crowd-funding group for additional Moodle core development, announced today that it is open for members to join. Technically the site was internally announced on its web site last Friday, but the press release came out today. As of this writing (Thurs evening PST), 44 members have signed up: 37 at the individual level, 1 at the bronze level, 3 at the silver level, 2 not sharing details, and Moodle Pty as the trademark holder. This equates to $8,680 – $22,580 of annual dues, depending on what level the two anonymous members chose.

Update (1/25): As of Monday morning, the numbers are 51 members: 44 at the individual level, 1 bronze, 3 silver, 2 not sharing details, and Moodle Pty as trademark holder; leading to $8,960 – $22,860 of annual dues.

Moodle News was the first outlet to describe the new organization (originally called Moodle Association but changed to Moodle Users Association when the organization was formalized), and in early December they summarized the motivation:

As mentioned recently in an article on e-Literate, it’s possible that a majority of all funding to Moodle.org originates from Blackboard. While this may be ironic, knowing the history of Blackboard, it is appropriate since the LMS company has quietly become the largest Moodle Partner by number of clients through acquisitions and growth over the last few years.

The Moodle Partner network is the lifeblood of Moodle HQ/Moodle.com and funds all of the full-time staff at Moodle HQ. Ten percent of all profit from Moodle partners is contributed back to fund the HQ’s staff, who then in turn organize developers around the world, coordinate QA cycles, keep releases on schedule, provide training and a free trial option for individual Moodle users.

In early 2015, Martin Dougiamas unveiled a plan to diversify Moodle funding by garnering support from non-Moodle Partners: organizations, users, and schools who were interested in contributing to Moodle.org, but were not providing Moodle related services, and perhaps wanted to get a little more out of contributions than a strict donation might give. The MUA is scheduled to kick off this December with it’s inaugural committee and funding which will be used to drive, in part, the development of the Moodle project through a democratic mechanism.

At e-Literate we have covered the inflection point faced by what is likely the world’s most-used LMS since this past June. The inflection point comes from a variety of triggers:

  • Blackboard acquisition of several Moodle Partners causing Moodle HQ, other Moodle Partners, and some subset of users’ concerns about commercialization;
  • Creation of the Moodle Users Association as well as Moodle Cloud services as alternate paths to Moodle Partners for revenue and hosting; and
  • Several Moodle-derivative efforts in 2015 such as Remote-Learner leaving the Moodle Partner program, Totara forking its Moodle code, and creation of the POET working group hosted by Apereo.

To be fair, Martin does not agree with my characterization of such an inflection point as described in this interview from September 2015:

Martin: Sorry, I don’t really agree with your characterization. Unlike nearly all other LMS companies, Moodle is not profit-focussed (all our revenue goes into salaries). We are an organisation that is completely focussed on supplying a true open source alternative for the world without resorting to venture capital and the profit-driven thinking that comes with that. [snip]

Phil: My note on “inflection point” is not based on a profit-driven assumption. The idea is that significant changes are underway that could change the future direction of Moodle. A lot depends on Blackboard’s acquisition strategy (assuming it goes beyond Remote-Learner UK and Nivel Siete), whether other Moodle Partners follow Remote-Learner’s decision, and whether Moodle Association shows signs of producing similar or larger revenues than the Moodle Partner program. What I don’t see happening is extension of the status quo.

Martin: Moodle’s mission is not changing at all, we are just expanding and improving how we do things in response to a shifting edtech world. We are starting the Moodle Association to fill a gap that our users have often expressed to us – they wanted a way to have some more direct input over major changes in core Moodle. There is no overlap between this and the Moodle Partners – in fact we are also doing a great deal to improve and grow the Moodle Partner program and as well as the user experience for those who need Moodle services from them.

I maintain my belief that there are big changes afoot in the Moodle community. The two primary drivers of how these changes might impact product development of Moodle core are whether and how Blackboard and Moodle HQ “mend fences” in their business relationship and whether the Moodle Users Association shows signs of producing significant revenue and significant influence on product development within the next 1 – 2 years. With the formal announcement today, let’s look at more details on MUA and its prospects.

There are four levels of MUA membership, ranging from 100AUD to 10,000AUD annual dues (roughly $70 to $7,000 in US dollars).

Moodle Users Association: About membership

When Moodle News surveyed their readers about plans for membership (results in early December post), they found 70% planning to join at individual level, 23 % at bronze, 4% at silver, and 2% at gold. Thus far the actuals from the public members (excluding Moodle Pty and the two anonymous members) are 90% individual, 2% bronze, 7% silver, and 0% gold. It is quite possible, however, that organizations choosing bronze, silver, and gold levels will take longer to decide than individuals. Nevertheless, it initially appears that the majority of members will be at the individual level. Given the disparity in dues, however, the actual public revenue commitments are 30% individual, 7% bronze, 63% silver, and 0% gold. If either or both of the two anonymous members are organizations, this would tilt the numbers even further.

What would it take to generate “significant revenue”? As of a year ago, Moodle HQ had 34 employees which would require $3 – $5m of revenue to continue support[1] For MUA to have significant impact, I would say that it needs minimum revenues of $500k – $1m within a couple of years. Without this level, the revenue itself is just in the noise and revenue has not been diversified. This minimum level would require 200+ organizational memberships in rough numbers.

Moving forward, it will be important to track organizational memberships primarily, seeing if there is a clear path for MUA to reach at least 200 in a year or two.

What would it take to generate “significant influence on product development”? Well first it is worth understanding how this works. Twice a year the MUA will “work through a cycle of project proposal development and project voting in three phases”.

Timeline

The number of votes in the process is determined by membership levels. The pot of money to allocate to the specific development projects is based on the revenue. What this means is that MUA is not voting, per current rules, on overall Moodle Core development – they are voting on how to allocate MUA funds within Moodle Core development. Again, if there is not some reasonable level of revenue ($500k+), then the impact of this development will be minor.

I’m not suggesting that $500k is a hard delimiter of significant or not; it is just a rough number to help us understand MUA’s future importance.

We should not forget the introduction of MoodleCloud:

MoodleCloud is our own hosting platform, designed and run by us, the people who make Moodle.

The revenue to cover the costs of this service is provided by ads or by a 5AUD/month payment to avoid ads. The service is limited to sites with 50 users or fewer. As currently designed, MoodleCloud would primarily pay for itself – scale does not necessarily create funds to pay for product development. But this service does provide a potential pathway for additional revenue with just a policy change.[2]

A strategic inflection point is a time in the life of business when its fundamentals are about to change. That change can mean an opportunity to rise to new heights. But it may just as likely signal the beginning of the end.

– Andy Grove, Only the Paranoid Survive

More broadly, the issue is whether MUA and even MoodleCloud will change the dynamics to allow Moodle to modernize and compete more effectively within the overall LMS market. The fundamentals are changing for Moodle.

  1. Note: I do not have actual numbers on revenue amounts and am just using rough industry metrics of $90k – $150k per employee, fully loaded with office and equipment.
  2. I am not aware if Moodle Partners have any limitations on MoodleCloud terms in their contracts.

The post Launch Of Moodle Users Association: 44 members sign up, mostly as individuals appeared first on e-Literate.

My Joyful Consumption of Data

Oracle AppsLab - Thu, 2016-01-21 21:47

I love data, always have.

To feed this love and to compile data sets for my quantified self research, I recently added the Netatmo Weather Station to the other nifty devices that monitor and quantify my everyday life, including Fitbit Aria, Automatic and Nest.

I’ve been meaning to restart my fitness data collection too, after spending most of last year with the Nike+ Fuelband, the Basis Peak, the Jawbone UP24, the Fitbit Surge and the Garmin Vivosmart.

FWIW I agree with Ultan (@ultan) about the Basis Peak, now simply called Basis, as my favorite among those.

Having so many data sets and visualizations, I’ve observed my interest peak and wane over time. On Day 1, I’ll check the app several times, just to see how it’s working. Between Day 2 and Week 2, I’ll look once a day, and by Week 3, I’ve all but forgotten the device is collecting data.

This probably isn’t ideal, and I’ve noticed that even something I expected would work, like notifications, I tend to ignore, e.g. the Netatmo app can send notifications about indoor carbon dioxide levels, outside temperature and rain accumulation (if you have the rain gauge).

These seem useful, but I tend to ignore them, a very typical smartphone behavior.

Unexpectedly, I’ve come to love the monthly emails many devices send me and find them much more valuable than shorter-interval updates.

Initially, I thought I’d grow tired of these and unsubscribe, but turns out, they’re a happy reminder about those hard-working devices that are tirelessly quantifying my life for me and adding a dash of data visualization, another of my favorite things for many years.

Here are some examples.


My Nest December Home Report

December Monthly Driving Insight from Automatic

Although it’s been a while, I did enjoy the weekly summary emails some of the fitness trackers would send. Seems weekly is better in some cases, at least for me.

Weekly Progress Report from Fitbit


Basis Weekly Sleep Report

A few years ago, Jetpack, the WordPress analytics plugin, began compiling a year in review report for this blog, which I also enjoy annually.

If I had to guess about my reasons, I’d suspect that I’m not interested enough to maintain a daily velocity, and a month (or a week for fitness trackers) is just about the right amount of data to form good and useful data visualizations.

Of course, my next step is dumping all these data into a thinking pot, stirring and seeing if any useful patterns emerge. I also need to reinvigorate myself about wearing fitness trackers again.
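Before I get back to the wearables, here’s a minimal sketch of what that thinking pot might look like, assuming each device can hand over a daily CSV export. The file names and columns below are hypothetical placeholders, not the real export formats of any of these devices.

```python
# A rough sketch of the "thinking pot": merge hypothetical daily CSV exports
# from a few devices on their date column and look for simple correlations.
# File names and column names are placeholders, not real device export formats.
import pandas as pd

nest = pd.read_csv("nest_daily.csv", parse_dates=["date"])        # e.g. heating/cooling minutes
netatmo = pd.read_csv("netatmo_daily.csv", parse_dates=["date"])  # e.g. indoor CO2, outdoor temp
fitbit = pd.read_csv("fitbit_daily.csv", parse_dates=["date"])    # e.g. steps, sleep minutes

# Align everything on the date index and join into one wide table.
combined = (
    nest.set_index("date")
        .join(netatmo.set_index("date"), how="outer", rsuffix="_netatmo")
        .join(fitbit.set_index("date"), how="outer", rsuffix="_fitbit")
)

# A crude starting point: pairwise correlations across all numeric columns.
print(combined.select_dtypes("number").corr())
```

Even a crude correlation matrix like that might be enough to show whether, say, sleep and indoor CO2 move together.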

In addition to Ultan’s new fave, the Xiaomi Mi Band, which seems very interesting, I have the Moov and Vector Watch waiting for me. Ultan talked about the Vector in his post on #fashtech.

Stay tuned.

If you use Internet Explorer, change is coming for you in Oracle Application Express 5.1

Joel Kallman - Thu, 2016-01-21 13:45
With the ever-changing browser landscape, we needed to make some tough decisions as to which browsers and versions are going to be deemed "supported" for Oracle Application Express.  There isn't enough time and money to support all browsers and all versions, each with different bugs and varying levels of support of standards.

A position that's been adopted for the Oracle Cloud services and products is to support the current version of a browser and the prior major release.  We are adopting this same standard for Oracle Application Express beginning with Oracle Application Express 5.1.  This will most likely have the greatest impact on those people who use Microsoft Internet Explorer. 

Beginning with Oracle Application Express 5.1, the planned minimum version of Internet Explorer to both build and deploy applications will be Internet Explorer 11.  I say "planned", because it's possible (but unlikely) that Microsoft releases a new browser version prior to the release of Oracle Application Express 5.1.

Granted, even Microsoft itself has already dropped support for any version of IE before Internet Explorer 11.  And with no security fixes planned for any version of IE prior to Internet Explorer 11, hopefully this will be enough to encourage all users of IE to adopt IE 11 as their minimum version.

Oracle APEX development and multiple developers/branches

Joel Kallman - Thu, 2016-01-21 11:53
Today, I observed an exchange inside of Oracle about a topic that comes up from time to time.  And it has to do with the development of APEX applications, and how you manage this across releases and a larger number of developers.  This topic tends to vex some teams when they start working with Oracle Application Express on broader development projects, especially when people are not accustomed to a hosted declarative development model.  I thought Koen Lostrie of Oracle Curriculum Development provided a brilliant response, and it was worth sharing with the broader APEX community.
Alec from Oracle asked:
"Are there any online resources that discuss how to work with APEX with multiple developers and multiple branches of development for an application?  Our team is using Mercurial to do source control management now. The basic workflow is that there are several developers who are working on mostly independent features.  There are production, staging, development, and personal versions of the application code.  Developers implement bug fixes or new features and those get pushed to the development version.  Certain features from development get approved to go to staging and pushed.  Those features in staging may be rolled back or promoted to go on to production.  Are there resources which talk about implementing such a workflow using APEX?  Or APEX instructors to talk to about this workflow?"
And to which I thought Koen gave a very clear reply, complete with evidence of how they are successfully managing this today in their Oracle Curriculum Development team.  Koen said:

"I think a lot of teams struggle with what you are describing because of the nature of APEX source code and Database-based development.  I personally think that the development flow should be adapted to APEX rather than trying to use an existing process and apply that for APEX.

Let me explain how we do it in our team:

  • We release patches to production every 3 weeks. We have development/build/stage and production and use continuous integration to apply patches on build and stage.
  • We use an Agile-based process. At the start of each cycle we determine what goes in the patch.
  • Source control is done on Oracle Developer Cloud Service (ODCS) – we use Git and SourceTree. We don’t branch.
  • All developers work directly on development (the master environment) for bugs/small enhancement requests. We use the BUILD OPTION feature of APEX to prevent certain functionality from being exposed in production. This is a great feature which allows developers to create new APEX components in development while the changes remain invisible in the other environments.
  • For big changes like prototypes, a developer can work on his own instance, but this rarely happens. It is more common for a developer to work on a copy of the app to test something out. Once the change gets approved, it will go into development.

From what I see in the process you describe, the challenge is that new changes get pulled back after they have made it to stage. This is a very expensive step. The developers need to roll back their changes to an earlier state, which is a very time-consuming process. And… very frustrating for the individual developer. Is this really necessary? Can the changes not be reviewed while they are still in development? Because that is what the Agile methodology proposes: the developer talks directly to the person/team that requested the new feature, and they review as early as development. In our case stage is only for testing changes. We fix bugs when the app is in stage, but we don’t roll back features once they are in stage – worst case we can delay the patch entirely, but that happens very rarely.

There is a good paper available by Rob Van Wijk. He describes how each developer works on his own instance but keeps his environment in sync with the master. In his case too, they’re working on a central master environment. The setup of such an environment is quite complex. You can find the paper here: http://rwijk.blogspot.com/2013/03/paper-professional-software-development.html"

SQL On The Edge #7 – Azure Data Factory

Pythian Group - Thu, 2016-01-21 08:46

As the amount of data generated around us continues to grow exponentially, organizations have to keep coming up with solutions for our new technological landscape. Data integration has been part of this challenge for many years now, and there are many tools that have been developed specifically for these needs. Some tools are geared specifically toward moving data from point A to point B, while other tools provide a full ETL (Extract-Transform-Load) solution that can work with many products using all kinds of different drivers.

For many years, the first party tool of choice for SQL Server professionals has been SSIS. Interestingly, even though it’s called SQL Server Integration Services, SSIS is really a general purpose ETL tool. If you want to extract data from Oracle, transform it with the full expressive capabilities of .NET and then upload it to a partner’s FTP as a flat file, you can do it in SSIS!

As we continue our journey into the cloud and hybrid environments, more tools will start coming up that work as ETL PaaS offerings. You won’t have to manage the pipeline’s OS, hardware or underlying software; you’ll just create your data pipelines and be off to the races.

What is it?
Azure Data Factory (ADF) is Microsoft’s cloud offering for data integration and processing as a service. You don’t have to install any bits or manage any software; you’re only responsible for creating the pipelines. Since it’s developed to run inside Azure, the tool also has some pre-made hooks that make it really easy to interoperate with other Azure services such as blob storage, HDInsight or Azure Machine Learning.

On premises you would need a machine (VM or physical), a license for your ETL tool (let’s say SSIS), and then you would need to keep SSIS patched, keep the machine up to date, think about software and hardware refreshes and so on. Using ADF, you can focus on the pipeline itself and not have to worry about what underlying software and hardware is actually making it work. The service supports a wide array of sources and targets (and continues to grow), and also offers robust options for scheduling the pipeline or running it continuously to look for new slices of data.
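As a rough illustration only: ADF pipelines are authored as JSON definitions that wire input and output datasets into activities such as a copy activity. The Python sketch below simply builds a hypothetical copy-pipeline document as a dict and prints it as JSON; the field names are illustrative and should not be taken as the authoritative ADF schema.

```python
# Hypothetical sketch of an ADF-style copy pipeline definition, built as a
# Python dict and serialized to JSON. Field names are illustrative only and
# are not the official ADF schema.
import json

pipeline = {
    "name": "CopyBlobToSqlPipeline",
    "properties": {
        "description": "Copy hourly slices from blob storage into Azure SQL",
        "activities": [
            {
                "name": "CopySliceActivity",
                "type": "Copy",
                "inputs": [{"name": "BlobInputDataset"}],
                "outputs": [{"name": "SqlOutputDataset"}],
                "scheduler": {"frequency": "Hour", "interval": 1},
            }
        ],
    },
}

print(json.dumps(pipeline, indent=2))
```

The point is the shape rather than the exact keys: the definition describes what to move and on what schedule, while the service worries about where and how it runs.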

When should you use it?
If you’re thinking about creating a new SSIS package and find that your sources are all web or cloud based then ADF is a good choice. Build a prototype of your pipeline, make sure that it supports your expected transformations and then you can operationalize it on the cloud. As a PaaS offering, it takes away the time, cost and effort of having to deal with the underlying bits and you can just focus on delivering quality data pipelines in a shorter timeframe.

Service Limitations

Like all new things in Azure, there are still some service limitations. The biggest one at the moment is that the service is only available in the West US and North Europe regions. If you don’t have resources in those regions and will be moving a lot of data, I would advise learning the service and prototyping with it, but not putting the pipelines into production. The reason is that any data movement from outside the region will incur an outbound transfer cost. If your resources are in those regions, there’s no charge and you can ignore this warning.

Demo
In the Demo video we’ll look at the user interface of Azure Data Factory, how to add a source and target, scheduling and checking the status of the pipeline. Enjoy!

Discover more about our expertise with SQL Server in the Cloud.

Categories: DBA Blogs

Video: Oracle Linux Virtual Machine (VM) on Amazon Web Services (AWS)

Tim Hall - Thu, 2016-01-21 03:14

Continuing the cloud theme, here is a quick run through of the process of creating an Oracle Linux virtual machine on Amazon Web Services (AWS).

A few months ago I wrote an article about installing an Oracle database on AWS.

I updated the images in that article last night to bring them in line with this video.
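If you’d rather script the VM creation than click through the AWS console, here’s a minimal boto3 sketch. The AMI ID, key pair and security group names are placeholders (Oracle Linux AMI IDs differ per region and change over time), so treat this as an illustration of the flow rather than a copy-paste recipe.

```python
# Minimal sketch: launch a single Oracle Linux EC2 instance with boto3.
# The AMI ID, key pair and security group below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",           # placeholder: an Oracle Linux AMI for your region
    InstanceType="t2.micro",
    KeyName="my-key-pair",            # placeholder: an existing EC2 key pair
    SecurityGroups=["my-ssh-group"],  # placeholder: a group that allows inbound SSH
    MinCount=1,
    MaxCount=1,
)

print(response["Instances"][0]["InstanceId"])
```

From there it’s the usual routine: wait for the instance to reach the running state, grab its public IP and SSH in.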

The cameo today is by Joel Pérez, who was a bit of a perfectionist when recording “.com”. I’ve included about half of his out-takes at the end of the video. Don’t ever hire him for a film or you will run over budget! :)

Cheers

Tim…
