
Feed aggregator

Auto Refresh for ADF BC Cached LOV

Andrejus Baranovski - Thu, 2015-07-16 08:19
You can configure auto refresh for an ADF BC cached LOV, and it works out of the box - no special coding is needed. My previous post describes how an ADF BC session cached LOV behaves - ADF BC Session Cached LOV - Understanding Expire Time. Auto refresh can be configured for both application and session caching. One important thing - auto refresh doesn't work when ADF BC is configured with disconnected mode (ADF BC Tuning with Do Connection Pooling and TXN Disconnect Level). I would guess this is related to the DB notification listener: it can't notify about DB changes when the DB connection is taken away from the AM.

Remember to configure the AM without disconnected mode. Auto refresh works for both application and session caching levels:


As a reminder - in order to use the shared LOV functionality, you must select the VO instance for the LOV from the shared AM configuration. This is how it looks in the LOV definition wizard:


Auto refresh functionality is enabled with a single property - AutoRefresh = true. This can be set through the VO general properties or directly in the VO source code:


Make sure the Disconnect Application Module Upon Release option is not set:


I have overridden the VO method processChangeNotification for logging purposes; this method is invoked automatically for VOs set with AutoRefresh = true. There is no need to code anything here - I'm just logging the method invocation:


Let's do a test. First of all, check the entries of the Jobs list:


Go to the DB and update one of the entries. For example, change the Stock Clerk title and commit the change:


Come back to the UI and press any button. This triggers a new request, and the shared Jobs list VO is notified about the DB change and refreshes:


Check the list again - you should see the changed entry for Stock Clerk:


In the log we can observe the change notification call; it triggers shared LOV query re-execution and a data re-fetch:


See the recorded demonstration video, where I show the refresh behaviour live:


Download sample application - ADFBCSharedSessionLOVApp.zip.

Oracle Supply Chain Management Cloud

OracleApps Epicenter - Thu, 2015-07-16 06:11
Cloud = a service-based computing model providing self-service, elasticity, shared resources, and pay-as-you-go. New cloud computing technologies are enabling breakthrough innovations in supply chain management (SCM) applications delivered via software as a service (SaaS) models. To help companies support their complete quote-to-cash process in the Cloud, Oracle has expanded the Oracle Supply Chain Management Cloud with […]
Categories: APPS Blogs

Oracle APEX 5.0.1 now available

Patrick Wolf - Thu, 2015-07-16 05:48
Oracle Application Express 5.0.1 is now released and available for download. If you wish to download the full release of Oracle Application Express 5.0.1, you can get it from the Downloads page on OTN. If you have Oracle APEX 5.0.0 … Continue reading →
Categories: Development

How to solve accent sensitive problems?

Yann Neuhaus - Thu, 2015-07-16 02:45

Some days ago, one of my customers complained that searching for “Muller” doesn’t return “Mueller” and “Müller” as well!
This is typically an issue caused by the collation of the SQL Server database, but how do we solve this problem?
The collation of the database is Latin1_General_CI_AS, i.e. a case-insensitive and accent-sensitive collation.
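As an aside, you can check which collation is in effect with a quick query - a sketch using the built-in DATABASEPROPERTYEX function:

Select convert(sysname, DATABASEPROPERTYEX(db_name(), 'Collation')) as DatabaseCollation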

If I run the following query:

Select name from Person where name like 'muller'

 



I get only “Muller”, which is expected: with an accent-sensitive collation, u is not the same as ü.
Next, I execute the same query using an accent-insensitive collation:

Select name from Person where name like 'muller' collate Latin1_GENERAL_CI_AI

I get the following result:


This time, both “Muller” and “Müller” are retrieved, as my collation is now fully insensitive. For a Latin1_General AI (Accent Insensitive) collation, u = ü, o = ö, a = ä…
But I still don’t get “Mueller”, which is the standard way of writing “Müller” without a ü in German.
So I decided to try a German collation to see whether it would return all three forms of “Muller”. In this phonebook collation, ü is sorted like ue, ä like ae…

Select name from Person where name like 'muller' collate German_PhoneBook_CI_AI

 



As expected, I received just “Muller”, which is quite normal: in German, “Muller” is not the same name as “Müller”…
Let’s try with:

Select name from Person where name like 'müller' collate German_PhoneBook_CI_AI

 



This result is consistent with German usage, where “Mueller” and “Müller” are the same. But I still cannot get all three forms of “Muller”…

Getting the result expected by my customer seems impossible by just changing the column collation.

Another possibility is to use the SOUNDEX string function. This function converts an alphanumeric string to a four-character code based on how the string sounds when spoken. So, let’s try with those queries:

select * from Person where soundex(Name) like soundex('muller')

select soundex('muller'), soundex('müller'), soundex('mueller')

 



This string function was able to retrieve all forms of “muller” without any collation change: all versions of “Muller” are converted to the same SOUNDEX code. The only problem is that index usage is not guaranteed, because the function is applied to the column in the WHERE clause.
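If index usage matters, one possible workaround - a sketch only, reusing the Person table from the examples above with illustrative column and index names - is to persist the SOUNDEX code in a computed column and index that column, so the search becomes an indexable equality predicate:

-- SOUNDEX is deterministic, so the computed column can be persisted and indexed
Alter table Person add NameSoundex as soundex(Name) persisted

Create index IX_Person_NameSoundex on Person(NameSoundex)

-- the search then compares precomputed codes instead of applying the function row by row
Select name from Person where NameSoundex = soundex('muller')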

Finally, I took a look at the FullText catalog feature, creating an accent-insensitive catalog and a FullText index using the German language:

CREATE FULLTEXT CATALOG FTSCatalog WITH ACCENT_SENSITIVITY=OFF AS DEFAULT

CREATE FULLTEXT INDEX ON Person(name LANGUAGE German) KEY INDEX PK_Person

GO

 

Then I used the following queries, based on the CONTAINS clause and the FORMSOF predicate with the INFLECTIONAL option, for my different forms of “Muller”:

Select name from Person where contains(name,'FORMSOF(INFLECTIONAL,"muller")')

Select name from Person where contains(name,'FORMSOF(INFLECTIONAL,"müller")')

Select name from Person where contains(name,'FORMSOF(INFLECTIONAL,"mueller")')

 



As expected, the result was consistent with the previous ones: we don’t get all forms when searching for “muller”, but searching for “müller” or “mueller” gives all the results.

In conclusion, the FullText capabilities of SQL Server are certainly the best solution: they are also faster with a huge number of rows, and they avoid changing the collation, which can sometimes be a real nightmare. The trade-off is that we have to search for “Müller” or “Mueller” instead of “muller” to retrieve all the expected results.

Unizin Perspective: Personalized learning’s existence and distance education experience

Michael Feldstein - Wed, 2015-07-15 18:46

By Phil Hill

By reading the Unizin pitch for the State University System of Florida shared yesterday, we can see quite a few claims about the (potential) benefits to be provided by the consortium. “Make sure that the universities were not cut out of [distance ed] process”; “Secure our foothold in the digital industry”; “Promote greater control and influence over the digital learning ecosystem”; Provide “access to the Canvas LMS at the Unizin price”; Provide “access to tools under development, including a Learning Object Repository and Learning Analytics”; Provide “potential for cooperative relationships to ‘share’ digital instruction within and across the consortium”.

I want to pick up on University of Florida provost Joe Glover’s further comment on Learning Analytics, however.

The third goal for Unizin is to acquire, create, or develop learning analytics. Some of the learning management systems have a rather primitive form of learning analytics. Unizin will build on what they have, and this will go from very mechanical types of learning analytics in terms of monitoring student progress and enabling intrusive advising and tutoring; all the way up to personalized learning, which is something that really does not exist yet but is one of the objectives of Unizin.

Personalized learning “really does not exist yet”? You can argue that personalized learning as a field is evolving and mostly in pilot programs, or that it is poorly defined and understood, or that there are not yet credible studies independently reviewing the efficacy of this family of approaches. But you cannot accurately say that personalized learning “really does not exist yet”. And is Unizin claiming that the consortium is key to making personalized learning a reality? This seemed to be one of the arguments in the pitch.

If A Tree Falls In A Different Sector . . .

There are multiple examples of personalized learning in practice, particularly at community colleges to deal with developmental math challenges. I have written about the massive investment in the emporium approach at Austin Community College’s ACCelerator Lab.

Rather than a pilot program - an approach I have argued plagues higher ed and prevents diffusion of innovations - Austin CC has committed to A) a big program up front (~700 students in the Fall 2014 inaugural semester and ~1,000 students in Spring 2015), while B) still offering students the choice of traditional or emporium. To me, this offers the best of both worlds: a big bet that doesn’t get caught in the “purgatory of pilots” while preserving student choice.

We also shared through e-Literate TV an entire case study on Essex County College, showing their personalized learning approach.

In another e-Literate TV case study that does not focus on developmental math, we shared the personalized learning program at Empire State College, and they have been trying various personalized approaches for more than 40 years.

If A Tree Falls In A Non-Unizin Campus . . .

Personalized learning does exist, and Unizin schools could learn from the pioneers in this field. It would be wonderful if Unizin ends up helping to spread innovative teaching & learning practices within the research university community, though I would note that there are already some great examples in that group of schools (including at Arizona State University, UC Davis, and even at Unizin member Penn State). For that matter, the University of Florida would do well to travel two hours south and see the personalized learning programs in place at the University of Central Florida.

If this “consortium of large public universities that intends to secure its niche in the evolving digital ecosystem” means that the schools want to learn primarily among themselves, then Unizin will be falling prey to the same mistake that the large MOOC providers made – ignoring the rich history of innovation in the field and thinking they are creating something without precedent, leveraging their unique insight.

If A Tree Falls In A Distance Forest . . .

While Unizin has never claimed to be focused only on distance education, Glover does bring up the topic twice as the core of his argument.

That is a situation that we got ourselves in by not looking ahead to the future. We believe we are in a similar position with respect to distance learning at this point. [snip]

Every university in some sense runs a mom & pop operation in distance learning at this point, at least in comparison with large organizations like IBM and Pearson Learning that can bring hundreds of millions of dollars to the table. No university can afford to do that.

Let’s ignore the non sequitur about IBM for now. A few notes:

While these are larger non-profit online programs, it is not accurate to say that “every university in some sense runs a mom & pop operation”. It might be accurate based on the Unizin member institutions’ experience, however. And the University of Florida did recently sign a long-term contract with Pearson Embanet to create its UF Online program, largely because of the school’s inexperience (beyond some masters programs) with fully online education.

In the graph below, taken from the Fall 2013 IPEDS data, the Y axis is the ratio of students taking exclusively DE courses (fully online programs), and the X axis is the ratio of students taking some, but not all, DE courses (online courses within a f2f program).

[Chart: DE Comparison, Unizin vs. Public 4-year - philonedtech, Tableau Public]

We see that the University of Florida and Penn State have a fairly high percentage of students taking some online courses, that Penn State World Campus is fully online (I’m not sure whether World Campus is part of Unizin, but I included it to be safe), and that Oregon State has some fully online presence. But in general, Unizin schools are not leaders in distance learning compared to other public 4-year universities. This is not a solid basis for thinking the answers on distance learning needs lie within the consortium.

Look Outward, Not Inward

In my mind, Unizin is looking in the wrong direction. The money and focus thus far have been for the 10 (now 11) member institutions to look inward - form a club, talk amongst themselves, and figure out what should happen in a digital ecosystem. A different, more useful approach would be to look outward: get the club together and look beyond their own successes (e.g. Penn State World Campus), go visit schools that are at the forefront of digital education, invite them in to present and share, and learn from others.

What I’m suggesting is that Unizin should focus a lot more on participating in open communities and focus a lot less on forming an exclusive club. If the schools then set the consortium’s mission as leading instructional change within the member institutions, and forming a support community based on the similar profile of schools, then we might see real results.

The post Unizin Perspective: Personalized learning’s existence and distance education experience appeared first on e-Literate.

Were you at Alliance, Collaborate, Interact this year, or wished you were?

PeopleSoft Technology Blog - Wed, 2015-07-15 17:40

This year, as well as uploading my PDF presentation, I've uploaded a couple of additional files.

The one you may find interesting is a short form Security Check List.

You can find it here: 

This is a supplement to the Securing Your PeopleSoft Application Red Paper (it includes the link) and it covers a number of points I've discussed with customers over the years. I include most of the check list as slides in my session but the PDF is an expanded set. The check list also contains a number of useful links.

In discussions with customers we frequently find topics they have overlooked because those topics don’t appear directly related to PeopleSoft security; yet they are part of the overall infrastructure security and are often managed by people outside of the PeopleSoft team. As teams are reduced in size, it becomes all the more important that you build collaborative, virtual teams in the rest of your organization. I hope the check list will also provide the conversation starters to help build those virtual teams.

If you think some of the points are topics by themselves, let me know and I can work on building out the information.

I appreciate any and all feedback. 

How to become/learn Oracle Apps DBA R12.2 : Part I

Online Apps DBA - Wed, 2015-07-15 16:10

I started this blog 9 years ago, with the first post being How to become an Oracle Apps DBA (back then it was 11i), and with 225 comments this is still the most common question I get by mail or on this blog.

We are starting our new batch for Oracle Apps DBA training (R12.2) on August 8, 2015, and the first thing we cover is the architecture of Oracle E-Business Suite. If you are learning Oracle E-Business Suite on your own, then the first thing you should learn is the architecture of Oracle Apps.

As shown below, Oracle E-Business Suite has a three-tier architecture:

a) Database Tier: the Oracle Database, where the data resides
b) Application Tier: the application & web servers, where the business logic resides
c) Client Tier: the browser-based client from which end users access the application

[Diagram: Oracle E-Business Suite three-tier architecture]

 

Note: Up to Oracle E-Business Suite R12.1 (prior versions include 12.0 & 11i), the Application Tier uses Oracle Application Server 10g (or 9i for some versions of 11i). From Oracle E-Business Suite 12.2 onwards, the Application Tier is deployed on Oracle WebLogic Server as the application server.

 

[Diagram: R12.2 Application Tier architecture]

You can get more information on the architecture of Oracle E-Business Suite in the Concepts Guide, or learn it from our expert team by registering for the Oracle Apps DBA Training (starting on 8th August), where Day 1 covers:

Architecture and File System
  • Architecture of R12.2
  • Changes in Oracle Apps from previous version
  • Requirement/Hardware Sizing Guidelines
  • File System Overview
  • Benefit of New Architecture
  • File System including Changes from previous version
Architecture and File System (Lab Activity)
  • Provide one working instance of R12.2 to the trainee, with front-end and back-end access
  • Get comfortable with the Terminology/File system/Environment Variables
  • Understand the Architecture via Navigation

 

Get 200 USD off by registering before 20th July - use code A2OFF at checkout. (We limit seats per batch, so register early to avoid disappointment.)

 

Related Posts for 12.2 New Features:
  1. ADOP : Online Patching in Oracle Apps (E-Business Suite) R12 12.2 : Apps DBA’s Must Read
  2. How to become/learn Oracle Apps DBA R12.2 : Part I

The post How to become/learn Oracle Apps DBA R12.2 : Part I appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

Connecting to DBaaS, did you know this trick?

Kris Rice - Wed, 2015-07-15 15:10
SSH Tunneling Trick

The new command line is a must-try, say 10 out of 10 people that built it. The tool has SSH tunneling of ports built in, as described by Barry. This means you can script opening your SSH tunnel from the command line and run SQL very quickly. Here's the one I used recently at Kscope15. Now the trick is that once this port is forwarded, any tool can now use it. In case…

Webcast: Introducing Oracle Mobile Cloud Service

WebCenter Team - Wed, 2015-07-15 12:57
Introducing Oracle Mobile Cloud Service, part of the Cloud Platform Webcast Series

As a kickoff to the Oracle Cloud Platform Webcast Series, Oracle, Pella Corporation, and Xamarin introduce Oracle Mobile Cloud Service and the value it provides in building engaging apps quickly while reducing costs and simplifying enterprise mobility.

Mobile computing has experienced explosive growth in the past decade, and this is just the beginning. At the heart of any organization’s digital strategy, mobile is the primary screen and engagement channel for everyone - customers, partners, and employees. Both IT and business organizations are looking at new ways to embrace enterprise mobility and lead the digital transformation.

You are invited to join this three-part webcast, Introducing Oracle Mobile Cloud Service, to understand how Oracle is leading this transformation:
  • Introducing Oracle Mobile Cloud Service Part 1: Keeping Mobile Simple
  • Introducing Oracle Mobile Cloud Service Part 2: Pella Creating a Better View
  • Introducing Oracle Mobile Cloud Service Part 3: Go Native Fast with Xamarin
Register Now to attend this exclusive complimentary webcast.


Live Webcast: July 22, 2015
10:00 AM PT / 1:00 PM ET
#OraclePaaS

Speakers:

  • Inderjeet Singh, Executive Vice President, Fusion Middleware Development, Oracle
  • Kaj Van de Loo, Vice President, Mobile Development, Oracle
  • Jim Thomas, Director of IT Operations and Information Security, Pella Corporation
  • Nat Friedman, CEO and Co-Founder, Xamarin
  • Rimi S. Bewtra, Sr. Director, Mobile Product Marketing, Oracle

The Pen is Mightier with the User’s Experience

Oracle AppsLab - Wed, 2015-07-15 10:04

If you’re involved in enterprise user experience (UX) it will come as no surprise that the humble pen and paper remains in widespread use for everyday business.

Sales reps, for example, are forever quickly scribbling down opportunity info. HR pros use them widely. Accountants? Check. At most meetings you will find both pen and paper and digital technology on the table.

That’s what UX is all about: understanding all the tools, technology, job aids, and the rest that the user touches along the journey to getting the task done.

Although Steve Jobs famously declared that the world didn’t need another stylus, innovation in digital styli, or digital pens (sometimes called smartpens), has never been greater.

Microsoft is innovating with the device, h/t @bubblebobble. Apple is ironically active with patents for styli, and the iPen may be close. Kickstarter boasts some great stylus ideas such as the Irish-designed Scriba (@getbscriba), featured in the Irish Times.

It is the tablet and the mobility of today’s work that have reinvigorated digital pen innovation, whether it’s the Apple iPad or Microsoft Surface.

Livescribe Echo smartpen and notebook


I’ve used digital pens, or smartpens, such as the Livescribe Echo for my UX work. The Echo is a great way to wireframe or create initial designs quickly and to communicate the ideas to others working remotely, using a pencast.

Livescribe Echo pencast viewed from the desktop


Personally, I feel there is a place for digital pens, but that the OG pen and paper still takes some beating when it comes to rapid innovation, iteration, and recall, as pondered on UX StackExchange.

An understanding of users demands that we not try to replace pen and paper altogether but enhance or augment their use, depending on the context - for example, using the Oracle Capture approach to transfer initial strokes and scribbles to the cloud for enhancement later.


You can read more about this in the free Oracle Applications Cloud User Experience Innovations and Trends eBook.

Sure, for some users, a funky new digital stylus will rock their world. For others, it won’t.

And we’ll all still lose the thing.

The pen is back? It’s never been away.

Cross-posted from Ültan’s Über Üsable Apps, thanks Ultan (@ultan).

Starting a Process using a Timer with a Duration in Oracle BPM

Jan Kettenis - Wed, 2015-07-15 09:34
In this blog article I explain three options to configure a timer start event based upon some configurable duration.

As far as I know, firing a timer based on a duration is only applicable in the case of a Timer Event Sub-process. Let me know if you think otherwise.

In the case of an Event Sub-process, the timer starts at the same moment the process instance starts, and there is no way to change it at any point after that. Given this, you can use one of the three options that I discuss below. If you know of some other way, again: let me know!

Input Argument

You can use an element that is part of the request of the process. In the following example there is one input argument called 'expiry' of type duration, which is mapped to a process variable:

The process variable can then be used to start the timer using a straightforward XPath assignment:


Preference in composite.xml

You can also configure a preference in the composite.xml file. Such a preference belongs to a specific component and starts with "preference" (or "bpel.preference", but you can leave "bpel." out). Using the dot as a delimiter, you post-fix that with the name of the preference to use:

You can then set the timer using the ora:getPreference() XPath function. All these preferences are strings, but if the value is an ISO duration it will automatically be converted to a duration.

Domain Value Map

A third option is to configure the duration using a Domain Value Map, or DVM for short. In the following example a DVM file is used for configuration parameters as name-value pairs:
 

The timer can be instantiated using the dvm:lookupValue() XPath function, as shown in the following picture:

What to Choose?

This depends on the requirements.

If your consumer should be able to determine the duration, you should pass it on as a request parameter.

If the business wants to change it at run time, then using the DVM is the best option. The initial value is determined at design time but can be changed at run time via SOA Composer (the same tool via which business rules can be changed).

Otherwise the composite preference is your weapon of choice. For this preference the initial value is also determined at design time, but it can still be changed after deployment by IT using the MBean Browser in Enterprise Manager.

New search capabilities in SharePoint Server 2013-2016

Yann Neuhaus - Wed, 2015-07-15 07:32


For those who have not yet migrated their environment to SharePoint 2013, and for novices, here is an article in which we discover the new search capabilities of SharePoint Server 2013.
We will have an overview of:

  • Search User interface
  • Crawling
  • Structure
  • Index & Search Schema
  • Health reports
  • Search architecture

 

What is the SP Search tool?

SharePoint contains a search technology which combines advanced search and analytics features. This feature is highly customizable, and the content of documents (including PDFs) is searched.

FUNCTIONAL DIAGRAM


 

Search user interface

Users can quickly identify useful results in ways such as the following:

  • Users can simply hover the pointer over a result to see a preview.

  • Results can be distinguished by their type: the application icon is placed in front of the title of each search result, and people results show the person's picture and Lync availability.

  • Certain types of related results are displayed in groups called result blocks. A result block contains a small subset of results that are related in a particular way. For example, results that are Excel documents appear in a result block when searching for terms like the word "budget".

The search tool also helps users return to a previous search, because the system keeps a search history.

Site collection administrators and site owners can specify display templates that determine how result types appear.


Crawling improvements

In SharePoint Server 2013, you can configure crawl schedules for SharePoint content sources so that crawls are performed continuously. Setting this option eliminates the need to schedule incremental crawls and automatically starts crawls as necessary to keep the search index fresh. Administrators should still run full crawls as necessary.

For more information, see on TechNet site: Manage continuous crawls in SharePoint Server 2013.

Index and Search schema

By defining crawled properties, managed properties, and the mappings between them, the search schema determines how the properties of crawled content are saved to the search index.
The search index stores the contents of the managed properties. The attributes of the managed properties determine the search index structure.

SharePoint Server 2013 introduces new attributes that you can apply to managed properties, such as sortable and refinable. The sortable attribute reduces the time required to return large search result sets by sorting results before they are returned. The refinable attribute enables you to create a refiner based on a particular managed property.

In SharePoint Server 2013, you can have multiple search schemas. The main search schema is defined at the Search service application level. Site collection administrators can create customized search schemas for different site collections.

For more information, see on TechNet site: Manage the search schema in SharePoint Server 2013.

Health reports

SharePoint Server 2013 provides many query health reports and crawl health reports. In SharePoint Server 2010 and FAST Search Server 2010 for SharePoint, similar reports were called Search Administration Reports. For more information, see View search diagnostics in SharePoint Server 2013.

Search architecture

SharePoint Server 2013 introduces a new search architecture that includes significant changes and additions to the search components and databases. For examples and more information, see the Search technical diagrams in Technical diagrams for SharePoint 2013.


All the SharePoint 2013 improvements are kept in SharePoint 2016; in fact, the first information from Microsoft about search in SharePoint 2016 concerns Delve:

Search with Delve

SharePoint 2016 will include search with the Delve app.

What is Delve?

Delve is a new way of searching for and presenting content based on the user’s interests. Delve’s roots go back to Office 365.

Delve can present information from Exchange, OneDrive for Business, SharePoint Online, and Yammer based on the user’s interactions.
For more information, please have a look at this reference: https://support.office.com/en-us/article/What-is-Office-Delve--1315665a-c6af-4409-a28d-49f8916878ca?ui=en-US&rs=en-US&ad=US

Conclusion

Search can sometimes easily become a nightmare. Please follow the best practices for any kind of search - people, objects, information, needles in a haystack... With the right information and the right settings, we always seem to find what we are looking for.


 

Best practices for organizing content for search in SharePoint Server 2013: https://technet.microsoft.com/en-us/library/jj683124.aspx

Plan search in SharePoint Server 2013: https://technet.microsoft.com/en-us/library/cc263400.aspx

 

 

 


Disable Wrap Data Types

Darwin IT - Wed, 2015-07-15 06:50
Just a moment ago I stumbled on a blog entry by Eric Elsinga about the wrapping of data types in WebLogic datasources, related to the DB-Adapter.

WebLogic wraps objects returned by the database driver to provide functionality related to debugging, connection-utilization tracking, and transparent transaction support.
However, for some native database objects like BLOBs, CLOBs, arrays, etc., this wrapping can affect performance significantly. When wrapping is disabled, the application (in our case the DB-Adapter) can work directly on the native objects provided by the database driver.
To disable the object wrapping do the following:
  1. In the Domain Structure tree, expand Services, then select Data Sources.
  2. On the Summary of Data Sources page, click the data source name.
  3. Select the Configuration: Connection Pool tab.
  4. Scroll down and click Advanced to show the advanced connection pool options.
  5. In Wrap Data Types, deselect the checkbox to disable wrapping.
  6. Click Save.
Of course on a production-mode server you need to Lock&Edit upfront and Activate Changes afterwards.
See also:

Another new APEX-based public website goes live

Tony Andrews - Wed, 2015-07-15 06:48
Another APEX public website I worked on with Northgate Public Services has just gone live: https://londontribunals.org.uk/ This is a website to handle appeals against parking fines and other traffic/environmental fines issued by London local authorities. It is built on APEX 4.2 using a bespoke theme that uses the Bootstrap framework. A responsive design has been used so that the site works…

APEX 5 - Opening and Closing Modal Window

Denes Kubicek - Wed, 2015-07-15 04:56
This example shows how to open a Modal Page from any element in your application. It is easy to get it working using standard elements like a button or a link in a report. However, it is not 100% clear how to get it working with elements which don't have the redirect functionality built in (an item, a region title, custom links, etc.). This example also shows how to get the success message displayed on the parent page after the Modal Page closes.

Categories: Development

Automatically Add License Protection and Obfuscation to PL/SQL

Pete Finnigan - Wed, 2015-07-15 03:05

Yesterday we released the new version 2.0 of our product PFCLObfuscate. This is a tool that allows you to automatically protect the intellectual property in your PL/SQL code (your design secrets) using obfuscation, and now in version 2.0 we… [Read More]

Posted by Pete On 17/04/14 At 03:56 PM

Categories: Security Blogs

PQ Index anomaly

Jonathan Lewis - Wed, 2015-07-15 01:42

Here’s an oddity prompted by a question that appeared on Oracle-L last night. The question was basically: “Why can’t I build an index in parallel when it’s a single column with most of the rows set to null and only a couple of values for the non-null entries?”

That’s an interesting question, since the description of the index doesn’t suggest any reason for anything to go wrong, so I spent a few minutes trying to emulate the problem. I created a table with 10M rows and a column that was 3% ‘Y’ and 0.1% ‘N’, then created and dropped an index on it in parallel a few times. The report I used to prove that the index build had run as a parallel build showed an interesting waste of resources. Here’s the code to build the table and index:


create table t1
nologging
as
with generator as (
        select  --+ materialize
                rownum id
        from dual
        connect by
                level <= 1e4
)
select
        case
                when mod(rownum,100) < 3 then 'Y'
                when mod(rownum,1000) = 7 then 'N'
        end                     flag,
        rownum                  id,
        rpad('x',30)            padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e7
;

-- gather stats here

explain plan for
create index t1_i1 on t1(flag) parallel 4 nologging
;

select * from table(dbms_xplan.display);

create index t1_i1 on t1(flag) parallel 4 nologging;

select index_name, degree, leaf_blocks, num_rows from user_indexes;
alter index t1_i1 noparallel;

As you can see, I’ve used explain plan to get Oracle’s prediction of the cost and size, then I’ve created the index, then checked its size (and set it back to serial from its parallel setting). Here are the results of the various queries (from 11.2.0.4) – it’s interesting to note that Oracle thinks there will be 10M index entries when we know that “completely null entries don’t go into the index”:

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name     | Rows  | Bytes | Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------
|   0 | CREATE INDEX STATEMENT   |          |    10M|    19M|  3073   (3)| 00:00:16 |        |      |            |
|   1 |  PX COORDINATOR          |          |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (ORDER)     | :TQ10001 |    10M|    19M|            |          |  Q1,01 | P->S | QC (ORDER) |
|   3 |    INDEX BUILD NON UNIQUE| T1_I1    |       |       |            |          |  Q1,01 | PCWP |            |
|   4 |     SORT CREATE INDEX    |          |    10M|    19M|            |          |  Q1,01 | PCWP |            |
|   5 |      PX RECEIVE          |          |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,01 | PCWP |            |
|   6 |       PX SEND RANGE      | :TQ10000 |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,00 | P->P | RANGE      |
|   7 |        PX BLOCK ITERATOR |          |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS FULL| T1       |    10M|    19M|  2158   (4)| 00:00:11 |  Q1,00 | PCWP |            |
------------------------------------------------------------------------------------------------------------------

Note
-----
   - estimated index size: 243M bytes

INDEX_NAME           DEGREE                                   LEAF_BLOCKS   NUM_ROWS
-------------------- ---------------------------------------- ----------- ----------
T1_I1                4                                                562     310000

Although the plan says it’s going to run parallel, and even though the index says it’s a parallel index, we don’t have to believe that the creation ran as a parallel task – so let’s check v$pq_tqstat, the “parallel query table queue” statistics – and this is the result I got:


DFO_NUMBER      TQ_ID SERVER_TYPE     INSTANCE PROCESS           NUM_ROWS      BYTES      WAITS   TIMEOUTS AVG_LATENCY
---------- ---------- --------------- -------- --------------- ---------- ---------- ---------- ---------- -----------
         1          0 Ranger                 1 QC                      12        528          4          0           0
                      Producer               1 P004               2786931   39161903          9          1           0
                                             1 P005               2422798   34045157         11          1           0
                                             1 P006               2359251   33152158         12          1           0
                                             1 P007               2431032   34160854         14          2           0
                      Consumer               1 P000               3153167   44520722          3          0           0
                                             1 P001               1364146   19126604          4          1           0
                                             1 P002               2000281   28045742          3          0           0
                                             1 P003               3482406   48826476          3          0           0

                    1 Producer               1 P000                     1        298          0          0           0
                                             1 P001                     1        298          0          0           0
                                             1 P002                     1        298          0          0           0
                                             1 P003                     1         48          0          0           0
                      Consumer               1 QC                       4       1192          2          0           0

Check the num_rows column – the first set of slaves distributed 10M rows and roughly 140MB of data to the second set of slaves – and we know that most of those rows will hold (null, rowid) which are not going to go into the index. 97% of the data that went through the message queues would have been thrown away by the second set of slaves, and “should” have been discarded by the first set of slaves.
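For reference, a report like the one above can be produced with a query along the following lines - just a sketch, and remember that v$pq_tqstat is populated only in the session that ran the parallel statement:

select
        dfo_number, tq_id, server_type, instance, process,
        num_rows, bytes, waits, timeouts, avg_latency
from
        v$pq_tqstat
order by
        dfo_number, tq_id, server_type desc, process
;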

As for the original question about the index not being built in parallel – maybe it was, but not very parallel. You’ll notice that the parallel distribution at operation 6 in the plan is “RANGE”. If 97% of your data is null and only 3% of it is going to end up in the index then you’d need to run at higher than parallel 33 to see any long-lasting executions – because at parallel 33 just one slave in the second set will get all the real data and do all the work of sorting and building the index, while the other slaves will (or ought to) be just throwing their data away as it arrives. When you’ve got 500M rows with only 17M non-null entries (as the OP had) to deal with, by the time you get to look the only thing still happening might be the one slave that’s building a 17M row index.

Of course, one of the reasons I wanted to look at the row distribution in v$pq_tqstat was that I wanted to check whether I was going to see all the data going to one slave, or a spread across 2 slaves (Noes to the left, Ayes to the right – as they used to say in the UK House of Commons), or whether Oracle had been very clever and decided to distribute the rows by key value combined with rowid to get a nearly even spread. I’ll have to set up a different test case to check whether that last option is possible.

Footnote

There was another little oddity that might be a simpler explanation of why the OP’s index creation might actually have run serially. I dropped and recreated the index in my test case several times and at one point I noticed (from view v$pq_slave) that I had 16 slave processes live (though, at that point, IDLE). Since I was the only user of the instance my session should probably have been re-using the same set of slaves each time I ran the test; instead, at some point, one of my test runs had started up a new set of slaves. Possibly something similar had happened to the OP, and over the course of building several indexes one after the other his session had reached the stage where it tried to start “yet another” set of slaves, failed, and decided to run serially rather than reuse any of the slaves that were nominally available and IDLE.
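A quick way to check for that situation is a glance at v$pq_slave - a sketch of the sort of query involved:

select
        slave_name, status, sessions, idle_time_total, busy_time_total
from
        v$pq_slave
order by
        slave_name
;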

Update

It gets worse. I decided to query v$px_sesstat (joined to v$statname) while the query was running, and caught some statistics just before the build completed. Here are a few critical numbers taken from the 4 sessions that received the 10M rows and built the final index:

Coord   Grp Deg    Set  Sno   SID
264/1     1 4/4      1    1   265
---------------------------------
            physical writes direct                            558
            sorts (memory)                                      1
            sorts (rows)                                2,541,146

264/1     1 4/4      1    2    30
---------------------------------
            sorts (memory)                                      1
            sorts (rows)                                2,218,809

264/1     1 4/4      1    3    35
---------------------------------
            physical writes direct                          7,110
            physical writes direct temporary tablespace     7,110
            sorts (disk)                                        1
            sorts (rows)                                2,886,184

264/1     1 4/4      1    4   270
---------------------------------
            sorts (memory)                                      1
            sorts (rows)                                2,353,861

Not only did Oracle pass 10M rows from one slave set to the other, the receiving slave set sorted those rows before discarding them. One of the slaves even ran short of memory and spilled its sort to disc. And we can see (physical writes direct = 558, closely matching the 562 leaf blocks reported earlier) that one slave was responsible for handling all the “real” data for that index.
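For reference, figures like these can be pulled while the build is running with something like the following - a sketch joining v$px_sesstat to v$statname and picking out the interesting statistics:

select
        st.qcsid, st.server_set, st.server#, st.sid,
        sn.name, st.value
from
        v$px_sesstat    st,
        v$statname      sn
where
        sn.statistic# = st.statistic#
and     sn.name in (
                'sorts (memory)', 'sorts (disk)', 'sorts (rows)',
                'physical writes direct',
                'physical writes direct temporary tablespace'
        )
and     st.value != 0
order by
        st.qcsid, st.server_set, st.server#, sn.name
;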

 

Update 2

A couple of follow-ups on the thread have introduced some other material that’s worth reading: an item from Mohamed Houri about what happens when a parallel slave is still assigned to an executing statement but isn’t given any work to do for a long time, and an item from Stefan Koehler about _px_trace and tracking down why the degree of parallelism of a statement was downgraded.


Security patches released for OBIEE 11.1.1.7/11.1.1.9, and ODI DQ 11.1.1.3

Rittman Mead Consulting - Wed, 2015-07-15 00:14

Oracle issued their quarterly Critical Patch Update yesterday, and with it notice of several security issues of note:

  • The most serious for OBIEE (CVE-2013-2186) rates 7.5 (out of 10) on the CVSS scale, affecting the OBIEE Security Platform on both 11.1.1.7 and 11.1.1.9. The access vector is the network, no authentication is required, and it can partially affect confidentiality, integrity, and availability.
    • The patch for users of OBIEE 11.1.1.7 is to install the latest patchset, 11.1.1.7.150714 (3GB, released – by no coincidence I’m sure – just yesterday too).
    • For OBIEE 11.1.1.9 there is a small patch (64Kb), number 21235195.
  • There’s also an issue affecting BI Mobile on the iPad prior to 11.1.1.7, with a partial impact on integrity.
  • For users of ODI DQ 11.1.1.3 there’s a whole slew of issues, fixed in CPU patch 21418574.
  • Exalytics users who are on ILOM versions earlier than 3.2.6 are also affected by two issues (one of which rates 10/10 on the CVSS scale).

The CPU document also notes that this is the final patch date for 10.1.3.4.2. If you are still on 10g, now really is the time to upgrade!

Full details of the issues can be found in the Critical Patch Update document, and information about the patches on My Oracle Support, Doc ID 2005667.1.

Categories: BI & Warehousing

Shift Command in Shell Script in AIX and Linux

Pakistan's First Oracle Blog - Tue, 2015-07-14 21:42
The shell in Unix never ceases to surprise. I stumbled upon the 'shift 2' command in AIX a few hours ago, and it's very useful.

The 'shift n' command shifts the positional parameters passed to a shell script by 'n' places to the left.

For example:

if you have a shell script which takes 3 parameters, like:

./mytest.sh arg1 arg2 arg3

and you use shift 2 in your shell script, then the original values of $1 and $2 are lost and the value of arg3 is assigned to $1.

For example:

if you have a shell script which takes 2 parameters, like:

./mytest.sh arg1 arg2

and you use shift 2, then the values of both arg1 and arg2 are lost.

Following is a working example of shift command in AIX:

testsrv>touch shifttest.sh

testsrv>chmod a+x shifttest.sh

testsrv>vi shifttest.sh

testsrv>cat shifttest.sh
#!/bin/ksh
SID=$1
BACKUP_TYPE=$2
echo "Before Shift: $1 and $2 => SID=$SID and BACKUPTYPE=$BACKUP_TYPE"
shift 2
echo "After Shift: $1 and $2 => SID=$SID and BACKUPTYPE=$BACKUP_TYPE"


testsrv>./shifttest.sh orc daily

Before Shift: orc and daily => SID=orc and BACKUPTYPE=daily
After Shift:  and  => SID=orc and BACKUPTYPE=daily


Note that the positional parameters have been shifted to the left, but the values already stored in the variables remain intact.
Categories: DBA Blogs

On Oracle Corporate Citizenship

Oracle AppsLab - Tue, 2015-07-14 19:39

Yesterday, our entire organization, Oracle Applications User Experience (@usableapps), got a treat. We learned about Oracle’s corporate citizenship from Colleen Cassity, Executive Director of the Oracle Education Foundation (OEF).


I’m familiar with Oracle’s philanthropic endeavors, but only vaguely so. I’ve used the corporate giving match, but beyond that, this was all new information.

During her presentation, we learned about several of Oracle’s efforts, which I’m happy to share here, in video form.

First, there’s the OEF Wearable Technology Workshop for Girls, which several of our team members supported.

Next Colleen talked about Oracle’s efforts to support and promote the Raspberry Pi, which is near and dear to our hearts here. We’ve done a lot of Raspi projects here. Expect that to continue.

Next up was Wecyclers, an excellent program to promote recycling in Nigeria.

And finally, we learned about Oracle’s 26-year-old, ongoing commitment to the Dian Fossey Gorilla Fund.

This was an eye-opening session for me. Other than the Wearable Technology Workshop for Girls, I hadn’t heard about Oracle’s involvement in these charitable causes, and I’m honored that we were able to help with one of them.

I hope we’ll be able to assist with similar, charitable events in the future.

Anyway, food for thought and possibly new information. Enjoy.