
Feed aggregator

Connecting to DBaaS, did you know this trick?

Kris Rice - Wed, 2015-07-15 15:10
SSH Tunneling Trick

The new command line is a must try, says 10 out of 10 people that built it. The tool has SSH tunneling of ports built in, as described by Barry. This means you can script opening your SSH tunnel from the command line and run SQL very quickly. Here's the one I used recently at Kscope15. Now the trick is that once this port is forwarded, any tool can now use it. In case
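Sketching the pattern the excerpt describes, with plain OpenSSH in place of SQLcl's built-in command (host, user, ports, and credentials below are hypothetical, not the ones from the talk):

# Forward local port 1521 to the DBaaS listener; -N opens the tunnel without running a remote command
ssh -N -L 1521:localhost:1521 opc@mydbaas.example.com &

# Once the port is forwarded, any tool can use it -- for example SQLcl:
sql scott/tiger@localhost:1521/PDB1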

Webcast: Introducing Oracle Mobile Cloud Service

WebCenter Team - Wed, 2015-07-15 12:57
Introducing Oracle Mobile Cloud Service, part of the Cloud Platform Webcast Series

As a kickoff to the Oracle Cloud Platform Series, Oracle, Pella Corporation, and Xamarin introduce Oracle Mobile Cloud Service and the value it provides in building engaging apps quickly, while reducing costs and simplifying enterprise mobility.

Mobile computing has experienced explosive growth in the past decade, and this is just the beginning. At the heart of any organization’s digital strategy, mobile is the primary screen and engagement channel for everyone: customers, partners, and employees. Both IT and business organizations are looking at new ways to embrace enterprise mobility and lead the digital transformation.

You are invited to join this three-part webcast series, Introducing Oracle Mobile Cloud Service, to understand how Oracle is leading this transformation:
  • Introducing Oracle Mobile Cloud Service Part 1: Keeping Mobile Simple
  • Introducing Oracle Mobile Cloud Service Part 2: Pella Creating a Better View
  • Introducing Oracle Mobile Cloud Service Part 3: Go Native Fast with Xamarin
Register Now to attend this exclusive complimentary webcast.


Live Webcast: July 22, 2015
10:00 AM PT / 1:00 PM ET
#OraclePaaS

SPEAKERS:

Inderjeet Singh, Executive Vice President, Fusion Middleware Development, Oracle
Kaj Van de Loo, Vice President, Mobile Development, Oracle
Jim Thomas, Director of IT Operations and Information Security, Pella Corporation
Nat Friedman, CEO and Co-Founder, Xamarin
Rimi S. Bewtra, Sr. Director, Mobile Product Marketing, Oracle

The Pen is Mightier with the User’s Experience

Oracle AppsLab - Wed, 2015-07-15 10:04

If you’re involved in enterprise user experience (UX), it will come as no surprise that the humble pen and paper remains in widespread use for everyday business.

Sales reps, for example, are forever quickly scribbling down opportunity info. HR pros use them widely. Accountants? Check. At most meetings you will find both pen and paper and digital technology on the table.

That’s what UX is all about: understanding all the tools, technology, and job aids that the user touches along the journey to getting the task done.

Although Steve Jobs famously declared that the world didn’t need another stylus, innovation in digital styli, or digital pens (sometimes called smartpens), has never been greater.

Microsoft is innovating with the device, h/t @bubblebobble. Apple is ironically active with patents for styli, and the iPen may be close. Kickstarter boasts some great stylus ideas such as the Irish-designed Scriba (@getbscriba), featured in the Irish Times.

It is the tablet and the mobility of today’s work that has reinvigorated digital pen innovation, whether it’s the Apple iPad or Microsoft Surface.

Livescribe Echo smartpen and notebook

I’ve used digital pens, or smartpens, such as the Livescribe Echo for my UX work. The Echo is a great way to wireframe or create initial designs quickly and to communicate the ideas to others working remotely, using a pencast.

Livescribe Echo pencast viewed from the desktop

Personally, I feel there is a place for digital pens, but that the OG pen and paper still takes some beating when it comes to rapid innovation, iteration, and recall, as pondered on UX StackExchange.

An understanding of users demands that we not try to replace pen and paper altogether but enhance or augment their use, depending on the context. For example, the Oracle Capture approach transfers initial strokes and scribbles to the cloud for enhancement later.


You can read more about this in the free Oracle Applications Cloud User Experience Innovations and Trends eBook.

Sure, for some users, a funky new digital stylus will rock their world. For others, it won’t.

And we’ll all still lose the thing.

The pen is back? It’s never been away.

Cross-posted from Ültan’s Über Üsable Apps, thanks Ultan (@ultan).

Starting a Process using a Timer with a Duration in Oracle BPM

Jan Kettenis - Wed, 2015-07-15 09:34
In this blog article I explain three options to configure a timer start event based upon some configurable duration.

As far as I know, firing a timer based on a duration is only applicable in case of a Timer Event Sub-process. Let me know if you think otherwise.

In case of an Event Sub-process the timer starts at the same moment the process instance starts. There is no way to change it at any point after that. Given this, you can use one of the following three options that I discuss below. If you know of some other way, again: let me know!

Input Argument

You can use an element that is part of the request of the process. In the following example there is one input argument called 'expiry' of type duration which is mapped to a process variable:

The process variable can then be used to start the timer using a straightforward XPath assignment:
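A minimal sketch of such an assignment (the data object name 'expiry' comes from the example above; bpmn:getDataObject() is assumed here as the XPath function for reading the data object):

bpmn:getDataObject('expiry')

Any ISO-8601 duration passed in by the consumer, such as 'PT30M' (30 minutes) or 'P2D' (2 days), then drives the timer.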


Preference in composite.xml

You can also configure a preference in the composite.xml file. Such a preference belongs to a specific component, and starts with "preference" (or "bpel.preference", but you can leave "bpel." out). Using the dot as a delimiter you can postfix that with the preference name to use:

You can then set the timer using the ora:getPreference() XPath function. All these preferences are strings, but if the value is an ISO duration it will automatically be converted to a duration.
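As a sketch, with a made-up component and preference name, the entry in composite.xml could look like this:

<component name="TimerProcess">
  ...
  <property name="preference.expiryDuration">PT1H</property>
</component>

The timer expression would then be ora:getPreference('expiryDuration'), which returns the string 'PT1H' and is converted to a one-hour duration.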

Domain Value Map

A third option is to configure the duration using a Domain Value Map, or DVM for short. In the following example a DVM file is used for configuration parameters as name-value pairs:
 

The timer can be instantiated using the dvm:lookupValue() XPath function, as shown in the following picture:
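For example, assuming a DVM file named 'ConfigParams.dvm' with columns 'name' and 'value' (both names invented for illustration), the timer expression could be:

dvm:lookupValue('ConfigParams.dvm', 'name', 'timerDuration', 'value', 'PT1H')

This returns the 'value' column of the row whose 'name' column is 'timerDuration', with 'PT1H' as the default when no match is found.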

What to Choose?

This depends on the requirements.

If your consumer should be able to determine the duration, you should pass it on as a request parameter.

If the business wants to change it at run time, then using the DVM is the best option. The initial value is determined at design time but can be changed at run time via SOA Composer (the same tool via which business rules can be changed).

Otherwise the composite preference is your weapon of choice. For this preference too, the initial value is determined at design time, but it can still be changed after deployment by IT using the MBean Browser in Enterprise Manager.

New search capabilities in SharePoint Server 2013-2016

Yann Neuhaus - Wed, 2015-07-15 07:32


For those who have not yet migrated their environment to SharePoint 2013, or for novices, here is an article in which we will discover the new search capabilities in SharePoint Server 2013.
We will have an overview of:

  • Search User interface
  • Crawling
  • Structure
  • Index & Search Schema
  • Health reports
  • Search architecture

 

What is the SP Search tool?

SharePoint contains a search technology which combines advanced search and analytics features. This feature is highly customizable. The content of documents (including PDFs) is searched.

FUNCTIONAL DIAGRAM


 

Search user interface

Users can quickly identify useful results in ways such as the following:

  • Users can hover the pointer over a result to preview it.

  • Results can be distinguished based on their type. The application icon is placed in front of the title of the search result. Lync availability and people’s pictures are shown in the results.

  • Certain types of related results are displayed in groups called result blocks. A result block contains a small subset of results that are related in a particular way. For example, results that are Excel documents appear in a result block when searching for terms like the word "budget".

The search tool helps users return to a previous search because the system keeps a search history.

Site collection administrators and site owners can specify display templates that determine how result types appear.


Crawling improvements

In SharePoint Server 2013, you can configure crawl schedules for SharePoint content sources so that crawls are performed continuously. Setting this option eliminates the need to schedule incremental crawls and automatically starts crawls as necessary to keep the search index fresh. Administrators should still configure full crawls as necessary.

For more information, see Manage continuous crawls in SharePoint Server 2013 on the TechNet site.

Index and Search schema

By defining crawled properties, managed properties, and the mappings between them, the search schema determines how the properties of crawled content are saved to the search index.
The search index stores the contents of the managed properties. The attributes of the managed properties determine the search index structure.

SharePoint Server 2013 introduces new attributes that you can apply to managed properties, such as sortable and refinable. The sortable attribute reduces the time that is required to return large search result sets by sorting results before they are returned. The refinable attribute enables you to create a refiner based on a particular managed property.
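A rough PowerShell sketch of setting these attributes, run from the SharePoint 2013 Management Shell (the managed property name 'ProjectCode' is invented for illustration):

$ssa = Get-SPEnterpriseSearchServiceApplication
$mp = Get-SPEnterpriseSearchMetadataManagedProperty -SearchApplication $ssa -Identity "ProjectCode"
$mp.Sortable = $true    # sort large result sets on this property before returning them
$mp.Refinable = $true   # allow a refiner to be built on this property
$mp.Update()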

In SharePoint Server 2013, you can have multiple search schemas. The main search schema is defined at the Search service application level. Site collection administrators can create customized search schemas for different site collections.

For more information, see Manage the search schema in SharePoint Server 2013 on the TechNet site.

Health reports

SharePoint Server 2013 provides many query health reports and crawl health reports. In SharePoint Server 2010 and FAST Search Server 2010 for SharePoint, similar reports were called Search Administration Reports. For more information, see View search diagnostics in SharePoint Server 2013.

Search architecture

SharePoint Server 2013 introduces a new search architecture that includes significant changes and additions to the search components and databases. For examples and more information, see the Search technical diagrams in Technical diagrams for SharePoint 2013.


All the SharePoint 2013 improvements are kept in SharePoint 2016; in fact, the first information from Microsoft about search in SharePoint 2016 concerns Delve:

Search with Delve

SharePoint 2016 will include the Search with Delve app.

What is Delve?

Delve is a new way of searching and presenting content based on users’ interests. Delve’s roots go back to Office 365.

Delve can present information from Exchange, OneDrive for Business, SharePoint Online, and Yammer based on users’ interactions.
For more information, please have a look at this reference: https://support.office.com/en-us/article/What-is-Office-Delve--1315665a-c6af-4409-a28d-49f8916878ca?ui=en-US&rs=en-US&ad=US

Conclusion

Search can sometimes easily become a nightmare. Please follow the best practices for any kind of search, whether for people, objects, information, or needles in a haystack: with the right information and the right settings we always seem to find what we are looking for.


 

Best practices for organizing content for search in SharePoint Server 2013: https://technet.microsoft.com/en-us/library/jj683124.aspx

Plan search in SharePoint Server 2013: https://technet.microsoft.com/en-us/library/cc263400.aspx


Disable Wrap Data Types

Darwin IT - Wed, 2015-07-15 06:50
Just a moment ago I stumbled on a blog entry by Eric Elsinga about the wrapping of datatypes in WebLogic data sources, related to the DB-Adapter.

WebLogic wraps objects returned by the database driver to provide functionality related to debugging, connection utilization tracking, and transparent transaction support.
However, for some native database objects like BLOBs, CLOBs, arrays, etc., this wrapping can affect the performance significantly. When this wrapping is disabled, the application (in our case the DB-Adapter) can work directly on the native objects provided by the database driver.
To disable the object wrapping, do the following:
  1. In the Domain Structure tree, expand Services, then select Data Sources.
  2. On the Summary of Data Sources page, click the data source name.
  3. Select the Configuration: Connection Pool tab.
  4. Scroll down and click Advanced to show the advanced connection pool options.
  5. In Wrap Data Types, deselect the checkbox to disable wrapping.
  6. Click Save.
Of course, on a production-mode server you need to Lock & Edit upfront and Activate Changes afterwards.
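If you prefer to keep the setting in configuration files, the same switch lives in the data source's module descriptor; a minimal sketch, with a made-up data source name:

<jdbc-data-source xmlns="http://xmlns.oracle.com/weblogic/jdbc-data-source">
  <name>SOADataSource</name>
  ...
  <jdbc-data-source-params>
    <wrap-types>false</wrap-types>
  </jdbc-data-source-params>
</jdbc-data-source>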
See also:

Another new APEX-based public website goes live

Tony Andrews - Wed, 2015-07-15 06:48
Another APEX public website I worked on with Northgate Public Services has just gone live: https://londontribunals.org.uk/ This is a website to handle appeals against parking fines and other traffic/environmental fines issued by London local authorities. It is built on APEX 4.2 using a bespoke theme that uses the Bootstrap framework. A responsive design has been used so that the site works

APEX 5 - Opening and Closing Modal Window

Denes Kubicek - Wed, 2015-07-15 04:56
This example shows how to open a Modal Page from any element in your application. It is easy to get it working using standards like a button or a link in a report. However, it is not 100% clear how to get it working with some other elements which don't have the redirect functionality built in (item, region title, custom links, etc.). This example also shows how to get the success message displayed on the parent page after closing the Modal Page.

Categories: Development

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - Wed, 2015-07-15 03:05

Yesterday we released the new version 2.0 of our product PFCLObfuscate. This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation, and now in version 2.0 we....[Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

PQ Index anomaly

Jonathan Lewis - Wed, 2015-07-15 01:42

Here’s an oddity prompted by a question that appeared on Oracle-L last night. The question was basically – “Why can’t I build an index in parallel when it’s single column with most of the rows set to null and only a couple of values for the non-null entries”.

That’s an interesting question, since the description of the index shouldn’t produce any reason for anything to go wrong, so I spent a few minutes trying to emulate the problem. I created a table with 10M rows and a column that was 3% ‘Y’ and 0.1% ‘N’, then created and dropped an index in parallel a few times. The report I used to prove that the index build had run as a parallel build showed an interesting waste of resources. Here’s the code to build the table and index:


create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        case
                when mod(rownum,100) < 3 then 'Y'
                when mod(rownum,1000) = 7 then 'N'
        end                     flag,
        rownum                  id,
        rpad('x',30)            padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e7
;

-- gather stats here

explain plan for
create index t1_i1 on t1(flag) parallel 4 nologging
;

select * from table(dbms_xplan.display);

create index t1_i1 on t1(flag) parallel 4 nologging;

select index_name, degree, leaf_blocks, num_rows from user_indexes;
alter index t1_i1 noparallel;

As you can see, I’ve used explain plan to get Oracle’s prediction of the cost and size, then I’ve created the index, then checked its size (and set it back to serial from its parallel setting). Here are the results of the various queries (from 11.2.0.4) – it’s interesting to note that Oracle thinks there will be 10M index entries when we know that “completely null entries don’t go into the index”:

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------
|   0 | CREATE INDEX STATEMENT   |          |    10M|    19M|  3073   (3)| 00:00:16 |        |      |            |
|   1 |  PX COORDINATOR          |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (ORDER)     | :TQ10001 |    10M|    19M|            |          |  Q1,01 | P->S | QC (ORDER) |
|   3 |    INDEX BUILD NON UNIQUE| T1_I1    |       |       |            |          |  Q1,01 | PCWP |            |
|   4 |     SORT CREATE INDEX    |          |    10M|    19M|            |          |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE          |          |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,01 | PCWP |            |
|   6 |       PX SEND RANGE      | :TQ10000 |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,00 | P->P | RANGE      |
|   7 |        PX BLOCK ITERATOR |          |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS FULL| T1       |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------

Note
-----
   - estimated index size: 243M bytes

INDEX_NAME           DEGREE                                   LEAF_BLOCKS   NUM_ROWS
-------------------- ---------------------------------------- ----------- ----------
T1_I1                4                                                562     310000

Although the plan says it’s going to run parallel, and even though the index says it’s a parallel index, we don’t have to believe that the creation ran as a parallel task – so let’s check v$pq_tqstat, the “parallel query table queue” statistics – and this is the result I got:


DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- ---------- ---------- -----------
         1          0 Ranger                 1 QC                      12        528          4          0           0
                      Producer               1 P004               2786931   39161903          9          1           0
                                             1 P005               2422798   34045157         11          1           0
                                             1 P006               2359251   33152158         12          1           0
                                             1 P007               2431032   34160854         14          2           0
                      Consumer               1 P000               3153167   44520722          3          0           0
                                             1 P001               1364146   19126604          4          1           0
                                             1 P002               2000281   28045742          3          0           0
                                             1 P003               3482406   48826476          3          0           0

                    1 Producer               1 P000                     1        298          0          0           0
                                             1 P001                     1        298          0          0           0
                                             1 P002                     1        298          0          0           0
                                             1 P003                     1         48          0          0           0
                      Consumer               1 QC                       4       1192          2          0           0

Check the num_rows column – the first set of slaves distributed 10M rows and roughly 140MB of data to the second set of slaves – and we know that most of those rows will hold (null, rowid) which are not going to go into the index. 97% of the data that went through the message queues would have been thrown away by the second set of slaves, and “should” have been discarded by the first set of slaves.

As for the original question about the index not being built in parallel – maybe it was, but not very parallel. You’ll notice that the parallel distribution at operation 6 in the plan is “RANGE”. If 97% of your data is null and only 3% of your data is going to end up in the index then you’d need to run at higher than parallel 33 to see any long lasting executions – because at parallel 33 just one slave in the second set will get all the real data and do all the work of sorting and building the index while the other slaves will (or ought to) be just throwing their data away as it arrives. When you’ve got 500M rows with only 17M non-null entries (as the OP had) to deal with, maybe the only thing happening by the time you get to look might be the one slave that’s building a 17M row index.

Of course, one of the reasons I wanted to look at the row distribution in v$pq_tqstat was that I wanted to check whether I was going to see all the data going to one slave, or a spread across 2 slaves (Noes to the left, Ayes to the right – as they used to say in the UK House of Commons), or whether Oracle had been very clever and decided to distribute the rows by key value combined with rowid to get a nearly even spread. I’ll have to set up a different test case to check whether that last option is possible.

Footnote

There was another little oddity that might be a simpler explanation of why the OP’s index creation might actually have run serially. I dropped and recreated the index in my test case several times and at one point I noticed (from view v$pq_slave) that I had 16 slave processes live (though, at that point, IDLE). Since I was the only user of the instance my session should probably have been re-using the same set of slaves each time I ran the test; instead, at some point, one of my test runs had started up a new set of slaves. Possibly something similar had happened to the OP, and over the course of building several indexes one after the other his session had reached the stage where it tried to start “yet another” set of slaves, failed, and decided to run serially rather than reuse any of the slaves that were nominally available and IDLE.

Update

It gets worse. I decided to query v$px_sesstat (joined to v$statname) while the query was running, and caught some statistics just before the build completed. Here are a few critical numbers taken from the 4 sessions that received the 10M rows and built the final index:

Coord   Grp Deg    Set  Sno   SID
264/1     1 4/4      1    1   265
---------------------------------
            physical writes direct                            558
            sorts (memory)                                      1
            sorts (rows)                                2,541,146

264/1     1 4/4      1    2    30
---------------------------------
            sorts (memory)                                      1
            sorts (rows)                                2,218,809

264/1     1 4/4      1    3    35
---------------------------------
            physical writes direct                          7,110
            physical writes direct temporary tablespace     7,110
            sorts (disk)                                        1
            sorts (rows)                                2,886,184

264/1     1 4/4      1    4   270
---------------------------------
            sorts (memory)                                      1
            sorts (rows)                                2,353,861

Not only did Oracle pass 10M rows from one slave set to the other, the receiving slave set sorted those rows before discarding them. One of the slaves even ran short of memory and spilled its sort to disc to do the sort. And we can see (physical writes direct = 558) that one slave set was responsible for handling all the “real” data for that index.
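For anyone who wants to repeat the check, here's a rough sketch of the kind of query involved (the statistic names in the filter are my choices, not necessarily the exact script used above):

select
        st.sid, sn.name, st.value
from
        v$px_sesstat    st,
        v$statname      sn
where
        st.statistic# = sn.statistic#
and     sn.name in (
                'sorts (memory)', 'sorts (disk)', 'sorts (rows)',
                'physical writes direct',
                'physical writes direct temporary tablespace'
        )
and     st.value != 0
order by
        st.sid, sn.name
;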

 

Update 2

A couple of follow-ups on the thread have introduced some other material that’s worth reading.  An item from Mohamed Houri about what happens when a parallel slave is still assigned to an executing statement but isn’t given any work to do for a long time; and an item from Stefan Koehler about _px_trace and tracking down why the degree of parallelism of a statement was downgraded.


Security patches released for OBIEE 11.1.1.7/11.1.1.9, and ODI DQ 11.1.1.3

Rittman Mead Consulting - Wed, 2015-07-15 00:14

Oracle issued their quarterly Critical Patch Update yesterday, and with it notice of several security issues of note:

  • The most serious for OBIEE (CVE-2013-2186) rates 7.5 (out of 10) on the CVSS scale, affecting the OBIEE Security Platform on both 11.1.1.7 and 11.1.1.9. The access vector is by the network, there’s no authentication required, and it can partially affect confidentiality, integrity, and availability.
    • The patch for users of OBIEE 11.1.1.7 is to install the latest patchset, 11.1.1.7.150714 (3GB, released – by no coincidence I’m sure – just yesterday too).
    • For OBIEE 11.1.1.9 there is a small patch (64Kb), number 21235195.
  • There’s also an issue affecting BI Mobile on the iPad prior to 11.1.1.7, the impact being partial impact on integrity.
  • For users of ODI DQ 11.1.1.3 there’s a whole slew of issues, fixed in CPU patch 21418574.
  • Exalytics users who are on ILOM versions earlier than 3.2.6 are also affected by two issues (one of which is 10/10 on the CVSS scale).

The CPU document also notes that it is the final patch date for 10.1.3.4.2. If you are still on 10g, now really is the time to upgrade!

Full details of the issues can be found in Critical Patch Update document, and information about patches on My Oracle Support, DocID 2005667.1.

Categories: BI & Warehousing

Shift Command in Shell Script in AIX and Linux

Pakistan's First Oracle Blog - Tue, 2015-07-14 21:42
The shell in Unix never ceases to surprise. I stumbled upon the 'shift 2' command in AIX a few hours ago and it's very useful.

The 'shift n' command shifts the parameters passed to a shell script by 'n' positions to the left.

For example:

if you have a shell script which takes 3 parameters like:

./mytest.sh arg1 arg2 arg3

and you use shift 2 in your shell script, then the values of arg1 and arg2 will be lost and the value of arg3 will get assigned to arg1.

For example:

if you have a shell script which takes 2 parameters like:

./mytest.sh arg1 arg2

and you use shift 2, then values of both arg1 and arg2 will be lost.

Following is a working example of shift command in AIX:

testsrv>touch shifttest.sh

testsrv>chmod a+x shifttest.sh

testsrv>vi shifttest.sh

testsrv>cat shifttest.sh
#!/bin/ksh
SID=$1
BACKUP_TYPE=$2
echo "Before Shift: $1 and $2 => SID=$SID and BACKUPTYPE=$BACKUP_TYPE"
shift 2
echo "After Shift: $1 and $2 => SID=$SID and BACKUPTYPE=$BACKUP_TYPE"


testsrv>./shifttest.sh orc daily

Before Shift: orc and daily => SID=orc and BACKUPTYPE=daily
After Shift:  and  => SID=orc and BACKUPTYPE=daily


Note that the values of the arguments passed have been shifted to the left, but the values of the variables have remained intact.
Categories: DBA Blogs

On Oracle Corporate Citizenship

Oracle AppsLab - Tue, 2015-07-14 19:39

Yesterday, our entire organization, Oracle Applications User Experience (@usableapps), got a treat. We learned about Oracle’s corporate citizenship from Colleen Cassity, Executive Director of the Oracle Education Foundation (OEF).

oef-logo

I’m familiar with Oracle’s philanthropic endeavors, but only vaguely so. I’ve used the corporate giving match, but beyond that, this was all new information.

During her presentation, we learned about several of Oracle’s efforts, which I’m happy to share here, in video form.

First, there’s the OEF Wearable Technology Workshop for Girls, which several of our team members supported.

Next Colleen talked about Oracle’s efforts to support and promote the Raspberry Pi, which is near and dear to our hearts here. We’ve done a lot of Raspi projects here. Expect that to continue.

Next up was Wecyclers, an excellent program to promote recycling in Nigeria.

And finally, we learned about Oracle’s 26-year-old, ongoing commitment to the Dian Fossey Gorilla Fund.

This was an eye-opening session for me. Other than the Wearable Technology Workshop for Girls, I hadn’t heard about Oracle’s involvement in these other charitable causes, and I’m honored that we were able to help with one.

I hope we’ll be able to assist with similar, charitable events in the future.

Anyway, food for thought and possibly new information. Enjoy.

This Is Not Glossy Marketing But You Still Won’t Believe Your Eyes. EMC XtremIO 4.0 Snapshot Refresh For Agile Test / Dev Storage Provisioning in Oracle Database Environments.

Kevin Closson - Tue, 2015-07-14 18:18

This is just a quick blog post to direct readers to a YouTube video I recently created to help explain to someone how flexible EMC XtremIO Snapshots are. The power of this array capability is probably most appreciated in the realm of provisioning storage for Test and Development environments.

Although this is a silent motion picture I think it will speak volumes–or at least 1,000 words.

Please note: This is just a video demonstration to show the base mechanisms and how they relate to Oracle Database with Automatic Storage Management. This is not a scale demonstration. XtremIO snapshots are supported into the thousands, and extremely powerful “sibling trees” are fully supported.

Not Your Father’s Snapshot Technology

No storage array on the market is as flexible as XtremIO in the area of writable snapshots. This video demonstration shows how snapshots allow the administrator of a “DEV” host–using Oracle ASM–to quickly refresh to current or past versions of ASM disk group contents from the “PROD” environment.

The principles involved in this demonstration are:

  1. XtremIO snapshots are crash consistent.
  2. XtremIO snapshots are immediately created, writeable and space efficient. There is no fixed “donor” relationship. Snapshots can be created from other snapshots and refreshes can go in any direction.
  3. XtremIO snapshot refresh does not involve the host operating system. Snapshot and volume contents can be immediately “swapped” (refreshed) at the array level without any action on the host.

Regarding number 3 on that list, I’ll point out that while the operating system does not play a role in the snapshot operations per se, applications will be sensitive to contents of storage immediately changing. It is only for this reason that there are any host actions at all.

Are Host Operations Involved? Crash Consistent Does Not Mean Application-Coherent

The act of refreshing XtremIO snapshots does not change the SCSI WWN information so hosts do not have any way of knowing the contents of a LUN have changed. In the Oracle Database use case the following must be considered:

  1. With a file system based database one must unmount the file systems before refreshing a snapshot otherwise the file system will be corrupted. This should not alarm anyone. A snapshot refresh is an instantaneous content replacement at the array level. Operationally speaking, file system based databases only require database instance shutdown and the unmounting of the file system in preparation for application-coherent snapshot refresh.
  2. With an ASM based database one must dismount the ASM disk group in preparation for snapshot refresh. To that end, ASM database snapshot restore does not involve system administration in any way.

The video is 5 minutes long and it will show you the following happenings along a timeline:

  1. “PROD” and “DEV” database hosts (one physical and one virtual) each showing the same Oracle database (identical DBID) and database creation time as per dictionary views. This establishes the “donor”<->clone relationship. DEV is a snapshot of PROD. It is begat of a snapshot of a PROD consistency group
  2. A single-row token table called “test” in the PROD database has value “1”. The DEV database does not even have the token table (DEV is independent of PROD… it’s been changing… but its origins are rooted in PROD as per point #1)
  3. At approximately 41 seconds into the video I take a snapshot of the PROD consistency group with “value 1” in the token table. This step prepares for “time travel” later in the demonstration
  4. I then update the PROD token table to contain the value “42”
  5. At ~2:02 into the video I have already dismounted DEV ASM disk groups and started clobbering DEV with the current state of PROD via a snapshot refresh. This is “catching up to PROD”
    1. Please note: No action at all was needed on the PROD side. The refresh of DEV from PROD is a logical, crash-consistent point in time image
  6. At ~2:53 into the video you’ll see that the DEV database instance has already been booted and that it has value “42” (step #4). This means DEV has “caught up to PROD”
  7. At ~3:32 you’ll see that I use dd(1) to copy the redo LUN over the data LUN on the DEV host to introduce ASM-level corruption
  8. At 3:57 the DEV database is shown as corrupted. In actuality, the ASM disk group holding the DEV database is corrupted
  9. In order to demonstrate traveling back in time, and to recover from the dd(1) corrupting of the ASM disk group,  you’ll see at 4:31 I chose to refresh from the snapshot I took at step #3
  10. At 5:11 you’ll see that DEV has healed from the dd(1) destruction of the ASM disk group, the database instance is booted, and the value in the token table is reverted to 1 (step #3) thus DEV has traveled back in time

Please note: In the YouTube box you can click to view full screen or on youtube.com if the video quality is a problem:

More Information

For information on the fundamentals of EMC XtremIO snapshot technology please refer to the following EMC paper: The fundamentals of XtremIO snapshot technology

For independent validation of XtremIO snapshot technology in a highly-virtualized environment with Oracle Database 12c please click on the following link: Principled Technologies, Inc Whitepaper

For a proven solution whitepaper showing massive scale data sharing with XtremIO snapshots please click on the following link: EMC Whitepaper on massive scale database consolidation via XtremIO


Filed under: oracle

Unizin Offering “Associate” Membership For Annual $100k Fee

Michael Feldstein - Tue, 2015-07-14 16:33

By Phil Hill

Alert unnamed readers prompted me after the last post on the Unizin contract to pursue the rumored secondary method of joining for $100k. You know who you are – thanks.

While researching this question, I came across a presentation by the University of Florida provost to the State University System of Florida (SUSFL) seeking to get the system to join Unizin under these new terms. The meeting was March 19, 2015, and the video archive is here (first 15 minutes), and the slide deck is here. The key section (full transcript below):

Associate Membership FLSUS

Joe Glover: One of the things that Unizin has done – as I’ve said it consists of those 10 large research universities – is that the Unizin board decided that member institutions may nominate their system – in this case the state university system of Florida – for Associate Membership for an annual fee of $100,000 per system.

For $100,000 the entire state university system of Florida (SUSFL) could become an associate member of Unizin and enjoy all the benefits that Unizin brings forward, whether it’s reduced pricing of products that it’s licensing, or whether it products that Unizin actually produces. Associate Membership does not qualify for board representation, but as I mentioned you do enjoy the benefits of Unizin products and services.

This section reminded me of one item I should have highlighted in the contract. In appendix B:

The annual membership fees are waived for Founding Investors through June 30, 2017.

Does this mean that founding institutions that “invested” $1.050 million over three years will have to start paying annual fees of $100,000 starting in June 2017? That’s my assumption, but I’m checking to see what this clause means and will share at e-Literate.

Update (7/17): I talked to Amin Qazi today (CEO of Unizin) who let me know that the annual membership fee for institutional members (currently the 11 schools paying $1.050 million) has not been determined yet.

What is clear is that Unizin considers the board seat – therefore input on the future direction and operations of Unizin – to be worth $700,000.[1]

Full Transcript

The presentation is fascinating in its entirety, so I’m sharing it below. There are many points that should be analyzed, but I’ll save that for other posts and for other people to explore.

Joe Glover: I’d like to begin by explaining the problem that Unizin was created to try and avoid, and I’m going to do it by analogy with the publishing problem with scientific journals. About 30 years ago there was a plethora of publishing companies that would take the intellectual property being produced by universities in the form of journal articles, and they would print them and publish them. There was a lot of competition, prices were relatively low to do that.

Then in the ensuing 30 years there was tremendous consolidation in that industry to the point that there are only three or four major publishers of scientific articles. As a consequence they have a de facto monopoly, and they’re in the position of now taking what we produce, packaging it, and selling it back to the libraries of universities basically at whatever price they want to charge. This is a national problem. It is not a problem that is unique to Florida, and I think that every state in the nation is trying to figure out how to resolve this problem because we can’t afford to continue to pay exorbitant prices for journals.

That is a situation that we got ourselves in by not looking ahead to the future. We believe we are in a similar position with respect to distance learning at this point.

We have a plethora of universities and commercial firms, all trying to get into the digital space. Most of us believe that over the next 10 – 15 – 20 years there will be tremendous consolidation in this industry, and it is likely that there will emerge a relatively small number of players who control the digital space.

This consortium of universities wanted to make sure that the universities were not cut out of this process or this industry in much the same way that they had been cut out of scholarly publishing.

Every university in some sense runs a mom & pop operation in distance learning at this point, at least in comparison with large organizations like IBM and Pearson Learning that can bring hundreds of millions of dollars to the table. No university can afford to do that.

So a consortium of major research universities in the country, in an effort to look down the road and to avoid this problem, and to secure our foothold in the digital industry, formed a consortium called Unizin. I’m going to go briefly through this to tell you what this is, and then to lay before you an opportunity that the state university system can consider for membership in this consortium to enjoy the advantages that we expect it to bring.

Slide 1

This consortium is very new – it was launched in 2014. Its current membership is by invitation only. You cannot apply to become a member of this consortium, it is by invitation. As I mentioned, its objective is to promote greater control and influence over the digital learning ecosystem.

Its governance is fairly standard. It has a board of directors that is drawn from the founding members. It has a CEO. It has a staff and it’s acquiring more staff. As a legal entity it is a not-for-profit service operation which is hosted by Internet2.

Slide 2

Its current members include the universities that you see listed on this screen. These are 10 major universities in the nation – they’re all large research universities. There are other research universities that are considering joining. Unizin actually started out with four universities and quickly acquired the other six that are on this list.

Associate Membership FLSUS

The primary goals for Unizin as defined by its board of directors are the following. To acquire a learning management system that will serve as the foundation for what Unizin produces and performs. Secondly, to acquire or create a repository for digital learning objects. At the moment we are all producing all sorts of things, ranging from videos to little scientific clips, demonstrations, to illustrations, to lectures, notes, in all sorts of different formats – some retrievable, some not retrievable, some shareable, some not shareable. None of which is indexed, none of which I can see outside the University of Florida.

We believe there needs to be a repository that all of the members of Unizin can place the objects that they create to promote digital learning into, with an index. And in principle there will be developed a notion of sharing of these objects. It could be free sharing, it could be licensing, it could be selling. That’s something to be discussed in the future.

The third goal for Unizin is to acquire, create, or develop learning analytics. Some of the learning management systems have a rather primitive form of learning analytics. Unizin will build on what they have, and this will go from very mechanical types of learning analytics in terms of monitoring student progress and enabling intrusive advising and tutoring; all the way up to personalized learning, which is something that really does not exist yet but is one of the objectives of Unizin.

Those are the three primary goals for Unizin. If you believe that those are three important elements of infrastructure then you are probably interested in Unizin.

I have alluded to the possibility of a club, or of sharing content. We could think about sharing content. We could think about sharing courses. We could think about sharing degree programs. That is not really Unizin’s objective at this point. I will tell you that the universities that form the board for Unizin are in conversation about that, and we expect that to be one of the things that Unizin enables us to do as we create this repository, as we develop learning analytics we expect to be able to begin to collaborate with these universities. There are a lot of interesting questions as you approach that frontier, and by no means have these been resolved, but we believe it is inevitable and important for universities to begin sharing what they do in the digital learning space, and so Unizin would form the foundation for that.

One of the things that Unizin has done – as I’ve said it consists of those 10 large research universities – is that the Unizin board decided that member institutions may nominate their system – in this case the state university system of Florida – for Associate Membership for an annual fee of $100,000 per system.

For $100,000 the entire state university system of Florida (SUSFL) could become an associate member of Unizin and enjoy all the benefits that Unizin brings forward, whether it’s reduced pricing of products that it’s licensing, or whether it products that Unizin actually produces. Associate Membership does not qualify for board representation, but as I mentioned you do enjoy the benefits of Unizin products and services.

Slide 4

The potential benefits to the state university system I believe are the following. Unizin has settled on Canvas as the learning management system which would underlie the Unizin projects of building a repository and learning analytics. If you did not use Canvas you would still enjoy the benefits of Unizin and their products, but the use of them would not be as seamless as if you were on Canvas. You would have to build a crosswalk from the Unizin products to whatever LMS you are using. If you happen to be using Canvas you would enjoy the benefits of the Unizin products in a seamless fashion.

Unizin has negotiated a discount with Canvas. And so actually the University of Florida had signed the contract with Canvas before Unizin even existed. As soon as Unizin was created and negotiated a contract with Canvas, we actually received a discount from the price that we had negotiated. Because there were 10 large universities working on this, and there is some power in purchasing.

The second benefit, or second potential benefit which I think the system could enjoy is access to the tools which are under development as I’ve mentioned, including a digital repository and learning analytics.

Third, the system would enjoy membership in a consortium of large public universities that intends to secure its niche in the evolving digital ecosystem. As I have mentioned, we do see some potential risk as the industry consolidates, that we could be cut out of this industry if we don’t take the proper precautions.

Finally, as I’ve mentioned, there is the potential for cooperative relationships within the consortium to share digital instruction and to share digital objects and courses and degrees. That is really at the beginning conversation stage, that is not a goal of the Unizin organization itself but is a goal of the universities that underpin Unizin.

Q. I guess the real question is, tell me to what extent you can, how this will benefit each of the other universities who are not members at this time. And number two, could some of our other universities eventually become members?

A. Thank you for that question because I didn’t clarify one point that the question gives me the opportunity to clarify. Additional universities could be members of Unizin, and there are some universities in conversation with Unizin at this point. However, there is a larger charge for universities to become full board members of Unizin. University of Florida committed a million dollars over three years as part of the capitalization of Unizin. Every board member has done exactly the same. If a university in the system were interested in joining Unizin as a board member to help direct Unizin’s goals and operations, we could talk about that, but it would involve that level of investment.

At the lower level of investment, the $100,000 level which would be for the whole system – let’s say you join tomorrow – then an individual university would immediately have access to the preferred pricing for the Canvas learning management system. That would be a benefit to individual universities in the system who already are on Canvas or are considering going on Canvas. As the other tools or products are either acquired or developed by Unizin, the individual campuses would have access to those as well.

Q. I’d like to hear from John Hitt [president of UCF]. How does your university look at this proposal as it relates to online?

JH. I think the group membership for the system makes sense. I don’t think that it would make a lot of sense to have multiple institutions paying in a million bucks apiece. We would probably be interested in the $100,000 share. I doubt we would go for the full membership.

Q. Do you see the benefits they’re offering to benefit to UCF at this point, or would you use it?

JH. Yes, I think we would use some of it. We have more enthusiasm for some aspects of the membership than others. Yes, I think it would be useful.

There were no further questions, but it was apparent that some board members were not sure if they were being asked to pay $1 million for each campus or $100,000. Despite this short questioning, the motion passed as shared in the meeting minutes.

Chair Hosseini recognized Mr. Lautenbach for the Innovation and Online Committee report. Mr. Lautenbach reported the Committee heard an update from Provost Joe Glover on the Unizin Consortium and the Committee directed Chancellor Criser to work with university leadership in pursuing membership for the State University System in the consortium.

  1. The $1.050 million investment over three years minus alternate cost of $100,000 for these same three years.

The post Unizin Offering “Associate” Membership For Annual $100k Fee appeared first on e-Literate.

Coming Soon - PeopleTools Customer Beta Program

PeopleSoft Technology Blog - Tue, 2015-07-14 14:07
The PeopleTools team continues to push forward, ever improving the features and capabilities of PeopleTools.  Recently, you may have seen some of the planned enhancements for PeopleTools 8.55 discussed on MyOracleSupport in the Planned Features and Enhancements area.  This document has replaced the Release Value Proposition that has been used previously to highlight features to look for in the upcoming PeopleTools release. 

There are a number of cool features that we’re working on, including the Cloud Deployment Architecture (CDA), which will provide greater flexibility in the installation and patching of environments. Additional planned features include Analytics for PeopleSoft Update Manager (PUM), Fluid dashboards/homepages and Simplified Analytics… just to name a few.

 We plan to kick off the PeopleTools 8.55 Beta Program in the relatively near future, and have an opening for a customer who’s willing to closely partner with us.  If you are looking to get your hands on the next release so that you can thoroughly test out some of these features in your own environment to see the benefits, perhaps you are the one we’re looking for.  Does your team have the skills and desire to take beta code and run with it?  Can your organization get a standard beta trial license agreement signed promptly?  We want to work with a customer that’s going to dive in, and really exercise the new features - If that’s you, email me (mark.hoernemann@oracle.com) and let’s talk.  Please keep in mind that this is a small beta – I’ve only got room for one, maybe two customers.   

July 2015 Critical Patch Update Released

Oracle Security Team - Tue, 2015-07-14 13:59

Hello, this is Eric Maurice.

Oracle today released the July 2015 Critical Patch Update. The Critical Patch Update program is Oracle’s primary mechanism for the release of security fixes across all Oracle products, including security fixes intended to address vulnerabilities in third-party components included in Oracle’s product distributions.

The July 2015 Critical Patch Update provides fixes for 193 new security vulnerabilities across a wide range of product families including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Enterprise Manager, Oracle E-Business Suite, Oracle Supply Chain Suite, Oracle PeopleSoft Enterprise, Oracle Siebel CRM, Oracle Communications Applications, Oracle Java SE, Oracle Sun Systems Products Suite, Oracle Linux and Virtualization, and Oracle MySQL.

Out of these 193 fixes, 44 are for third-party components included in Oracle product distributions (e.g., Qemu, Glibc, etc.).

This Critical Patch Update provides 10 fixes for the Oracle Database, and 2 of the Database vulnerabilities fixed in today’s Critical Patch Update are remotely exploitable without authentication. The most severe of these database vulnerabilities has received a CVSS Base Score of 9.0 for the Windows platform and 6.5 for Linux and Unix platforms. This vulnerability (CVE-2015-2629) reflects the availability of new Java fixes for the Java VM in the database.

With this Critical Patch Update, Oracle Fusion Middleware receives 39 new security fixes, 36 of which are for vulnerabilities which are remotely exploitable without authentication. The highest CVSS Base Score for these Fusion Middleware vulnerabilities is 7.5.

This Critical Patch Update also includes a number of fixes for Oracle applications. Oracle E-Business Suite gets 13 fixes, Oracle Supply Chain Suite gets 7, PeopleSoft Enterprise gets 8, and Siebel gets 5. Rounding out this list are 2 fixes for the Oracle Commerce Platform.

The Oracle Communications Applications receive 2 new security fixes. The highest CVSS Base Score for these vulnerabilities is 10.0; this score is for vulnerability CVE-2015-0235, which affects Glibc, a component used in the Oracle Communications Session Border Controller. Note that this same Glibc vulnerability is also addressed in a number of Oracle Sun Systems products.

Also included in this Critical Patch Update are 25 fixes for Oracle Java SE. 23 of these Java SE vulnerabilities are remotely exploitable without authentication. 16 of these Java SE fixes are for Java client-only, including one fix for the client installation of Java SE. 5 of the Java fixes are for client and server deployment. One fix is specific to the Mac platform. And 4 fixes are for JSSE client and server deployments. Please note that this Critical Patch Update also addresses a recently announced 0-day vulnerability (CVE-2015-2590), which was being reported as actively exploited in the wild.

This Critical Patch Update addresses 25 vulnerabilities in Oracle Berkeley DB, and none of these vulnerabilities are remotely exploitable without authentication. The highest CVSS Base score reported for these vulnerabilities is 6.9.

Note that the CVSS standard was recently updated to version 3.0. In a previous blog entry, Darius Wiles highlighted some of the enhancements introduced by this new version. Darius will soon publish another blog entry to discuss this updated CVSS standard and its implication for Oracle’s future security advisories. Note that the CVSS Base Score reported in the risk matrices in today’s Critical Patch Update were based on CVSS v2.0.

For More Information:

The July 2015 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpujul2015-2367936.html

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance

Publish data over REST with Node.js

Kris Rice - Tue, 2015-07-14 09:47
Of course the best way to expose database data over REST is with Oracle REST Data Services. If you haven't read over the Statement of Direction, it's worth the couple of minutes it takes. The auto table enablement and filtering are quite nice. For anyone interested in node.js and Oracle, this is a very quick example of publishing the emp table over REST for use by anyone that would prefer REST
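As a sketch of the idea (not the post's actual code: the connection details and the /emp route are invented, and it assumes the node-oracledb driver is installed):

var http = require('http');
var oracledb = require('oracledb');

http.createServer(function (req, res) {
  if (req.url !== '/emp') { res.writeHead(404); res.end(); return; }
  oracledb.getConnection(
    { user: 'scott', password: 'tiger', connectString: 'localhost/orcl' },
    function (err, conn) {
      if (err) { res.writeHead(500); res.end(err.message); return; }
      conn.execute('select * from emp', [], { outFormat: oracledb.OBJECT },
        function (err, result) {
          conn.release(function () {});              // give the connection back
          if (err) { res.writeHead(500); res.end(err.message); return; }
          res.writeHead(200, { 'Content-Type': 'application/json' });
          res.end(JSON.stringify(result.rows));      // EMP rows as JSON
        });
    });
}).listen(3000);   // GET http://localhost:3000/emp returns the rows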

APEX Listener supported App Servers

Kris Rice - Tue, 2015-07-14 09:47
With the latest news on GlassFish, I thought it may be a good time to review the options for deploying the APEX Listener. The huge caveat is that this is as of today, 11/6/2013; the future can change anything, however there's nothing planned. The Licenses: I'm just putting the important parts here for reference. They are linked to the entire license. OTN License: The APEX Listener is

How to use RESTful to avoid DB Links with ā'pěks

Kris Rice - Tue, 2015-07-14 09:47
So the question came up of avoiding a DB link by using the APEX Listener's RESTful services to get at the same data. This is all in the context of an APEX application, so apex_collections is the obvious place to stuff transient data that could be used over a session. Step 1: Make the RESTful Service. The only catch is to turn pagination off (make it zero). I didn't need it for now so this