
Feed aggregator

EBS VMs explained

Wim Coekaerts - Thu, 2014-12-04 16:59
A great blog entry from the EBS team explaining the various Oracle VM appliances for EBS:

https://blogs.oracle.com/stevenChan/entry/e_business_suite_virtual_machines

Oracle Priority Support Infogram for 04-DEC-2014

Oracle Infogram - Thu, 2014-12-04 14:46

RAC
Oracle Database In-Memory on RAC - Part 2, from Oracle Database In-Memory.
Performance
From flashdba: awr-parser.sh – Script for parsing Oracle AWR Reports.
OVM
Oracle VM 3.2.9 Released, from Oracle's Virtualization Blog.
ZFS
New ZFS Videos, from Oracle EMEA Value-Added Distributor News.
Getting Down to the Metal
Announcing Oracle Server X5-2 and X5-2L, from Systems Technology Enablement for Partners (STEP).
From the same source: Now Available: Oracle FS1 Flash Storage System Implementation Exam.
VCA
From Wim Coekaerts Blog, SAP certification for Oracle's Virtual Compute Appliance X4-2 (VCA X4-2).
SOA
Best Practices for SOA Suite 11g to 12c Upgrade, from the SOA & BPM Partner Community Blog.
From the same source: SOA 12c demo system (12.1.3) hosted at Oracle Cloud – free for Oracle Partners.
WLS
Additional new material WebLogic Community, from WebLogic Partner Community EMEA.
Java
From the JCP Program Office: JSR Updates - Java EE 8 & Java SE 9.
ADF
From Archbeat: 2 Minute Tech Tip: Using Oracle ADF Libraries.
Analytics
Advisor Webcast: Getting Started with Essbase Aggregate Storage Option - ASO 101, from Business Analytics - Proactive Support.
EBS
From Oracle E-Business Suite Technology:
JRE Support Ends Earlier than JDK Support
EBS VMs: Appliances, Templates, and Assemblies Explained
November 2014 Updates to AD and TXK for EBS 12.2
From Oracle E-Business Suite Support Blog:
Webcast: Rapid Planning: Enabling Mass Updates to Demand Priorities and Background Processing
Webcast: Discrete Costing Functional Changes And Bug Fixes For 12.2.3 And 12.2.4
Considering Customizations with POR_CUSTOM_PKG for iProcurement Requisitions? Check this out!
Webcast: Get Proactive with Doc ID 432.1

Adaptive Case Management 12c and ADF Human Tasks

Andrejus Baranovski - Thu, 2014-12-04 14:08
I'm diving into a new topic - Adaptive Case Management 12c and ADF integration. This is the first post in the category, with more posts planned for the future. I strongly believe that ACM (Adaptive Case Management) makes a great extension to standard BPM, mainly because it allows you to define a loose process, without a strict order of steps. Process steps can be executed in a different order, depending on the requirements of the situation at a given time. I will explain how to implement an ADF Human Task for an ACM activity and will share several tips on how to make it run in the BPM Workspace application.

This is how the sample application (HotelBookingProcessing_v1.zip) is constructed: there are two Human Tasks (AddHotelBooking and ValidateHoteBooking) and the HotelBookingProcessing Case control:


The HotelBookingProcessing case is defined with a Hotel Booking Details data type (this type is based on an XSD schema and is defined as a Business Component variable - don't mix this up with ADF Business Components). You can think of it as the main data structure type for the case, which can be transferred into every case activity:


There are two stakeholders defined; this helps control who has access to the human task and the case activity. The Customer Service Representative is supposed to add a new hotel booking, while the Financial Accountant can approve or reject it:


I have created the Human Task activity directly through the composite; it is not necessary to have a BPM process to define human tasks. It is important to set the Application Context property for the Human Task to OracleBPMProcessRolesApp, as this will help later with the security roles configuration in BPM Workspace:


In order to register a human task with Case Management, we are given the option to promote the human task as a case activity. This allows the human task to be initiated from case management:


We can define input and output for the case activity, based on the same data type defined in the case. This allows data to be transferred from the case to the activity and, in our situation, on to the underlying human task:


You can generate an ADF form for the case data; this form will be rendered in the BPM Workspace case UI. I'm going to look into the customisation options for this kind of form in future posts (the checkbox is set to generate an editable form):


This is how the case data form is rendered; out of the box we are given the option to save and reset data for the case - Hotel Booking Details:


The human task form is generated in the same way as in 11g - no change here for 12c. You could auto-generate this form, but it generates a lot of code and I would prefer to build a custom lightweight form instead:


An important hint - the auto-generated human task form will not render in the BPM Workspace window. You need to change the FRAME_BUSTING parameter generated in web.xml from differentDomain to never. With the differentDomain option the human task form doesn't render; in 11g the form was generated with this option set to never, and for some reason this was changed in 12c - not for the better:
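For reference, after the change the relevant web.xml entry should look roughly like this (the full name of the ADF parameter is oracle.adf.view.rich.security.FRAME_BUSTING):

<context-param>
  <param-name>oracle.adf.view.rich.security.FRAME_BUSTING</param-name>
  <param-value>never</param-value>
</context-param>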


With FRAME_BUSTING set to never, the human task form renders well:


The human task is started directly from the case activity - Add Hotel Booking from the available list of activities:


In the case activity log we can track when an activity was started, completed or modified. This is quite helpful for tracking activity history:


One of the main advantages is that the user can decide the order of activities, in contrast to a strict BPM process. Start the Validate Hotel Booking activity; this will create a new task for the Financial Accountant:


The activity was started, as we can see from the audit log:


This is a read-only human task, rendered in BPM Workspace - the Financial Accountant can approve or reject it:


The case can be closed and the hotel booking approved:

Presenting at #UKOUG_APPS14 (8th Dec Monday 4:30 PM) : EBS integration with Identity Management

Online Apps DBA - Thu, 2014-12-04 14:05
I am presenting the paper Integrating Oracle E-Business Suite with Identity & Access Management & Lessons Learned with Neha Mittal. The presentation is on Monday 8th December at 4:30 PM in Liverpool (UK), covering an overview of Oracle Identity & Access Management, integration options including OAM (SSO), OIM (Provisioning & Reconciliation) & GRC (SoD), and high level lessons learned from our various [...]

This is a content summary only. Visit my website http://onlineAppsDBA.com for full links, other content, and more!
Categories: APPS Blogs

Getting Started with Oracle Fusion Cloud Integrations

Angelo Santagata - Thu, 2014-12-04 12:32

Hey all,

If you're getting started with integrating your application with Oracle Fusion Cloud then I wholeheartedly recommend you read the following resources before starting. Most of the below is specific to Oracle Sales Cloud because it has App Composer, however much of it is also applicable to HCM, ERP and other Fusion products.

Some of these are a MUST read before you start integrating/coding/customizing :-) I've put them here in the order I think would work for most people... kinda like a getting-started check-list.

I consider this a living blog entry, in that I'll be updating it on a regular basis, so make sure you periodically check this location.



Top 5 Fusion Integrations Must Reads 

1. Familiarise yourself with the Sales Cloud documentation. Specifically:
    • Go through the "User" section, documents like the "Using Sales Cloud" book. If you're a techie like me you'll sit there and think, "Hey, this is functional, why do I need to read this?" - well, you do. Even as a technical person, reading through the various user documents like "Using Sales Cloud" as an end user helps you understand what the different concepts/topics are. You'll also understand things like the difference between a Prospect and a Sales Account, territories, assessments and much more. It's worth a quick read, but do make sure you have a functional consultant to hand so you're not building something which can be done by configuration.
    • Read through all the books in the "Extensibility" section. The only anomaly here is the "Business Card Scanner Mobile App" document; it's a walkthrough of how to integrate Sales Cloud with a 3rd party service to do business card scanning with MAF. I'd leave that till last.
    • Peruse the Development section. It contains a number of example use cases, e.g. how to create a customer in R8 or how to call an outbound service - a good read.
2. Get an overview of the tasks you might do
    • Once you've done this, look at the "Tasks" section of the docs. Here the curriculum development folk have categorised some of the most common tasks and put shortcuts to the documentation detailing how to do them, e.g. adding a field to Sales Cloud, calling a SOAP webservice, etc.
3. Are you going to be customizing the SalesCloud User Interface?
    • Most integrations customize the Sales Cloud user interface. The customization could be as simple as adding a few fields to a standard object (like Opportunity), creating new objects (like MyOrder), adding validation, or adding external content to one or many pages.
    • If you're adding fields, make sure you read the "Introduction to Sales Cloud Customizations" section.
    • If you will be adding validation or triggers, or calling webservices from Sales Cloud, then make sure you read up on groovy scripting, and specifically the chapter on calling outbound SOAP webservices from groovy.
    • Make sure you understand the difference between calling a SOAP service from groovy and creating an outbound webservice call using object workflows.
      • In a nutshell, calling a SOAP service from groovy is a synchronous call, whereas calling a SOAP service from an object workflow is a fire-and-forget asynchronous call.
    • On the subject of groovy, be aware that in Sales Cloud you do not have access to the entire groovy language; only a number of groovy functions are supported (whitelisting), and these are documented at the end of the book, in Appendix A, Supported Groovy Classes and Methods.
4. Are you going to be accessing data in Sales Cloud from an external app?
    • If you think you will be calling SOAP webservices in Sales Cloud then "Getting started with WebServices" is a MUST read. This doc goes into detail on how to look up a SOAP webservice in Fusion OER, how to create static proxies, how to query data and how to perform CRUD operations.
    • Get to know Oracle Fusion OER - it's a gold mine of information.
5. Do you need your app to know who is calling it? 
    • Many integrations involve embedding a 3rd party web app into Oracle Sales Cloud as an iFrame, or pressing a button in Sales Cloud and calling the 3rd party app (either a UI or a webservice call). If you're doing this then you'll almost certainly need to pass a "token" to the 3rd party application so it can call back to Sales Cloud with a key rather than a plain text username/password combo. We call this key the JWT TOKEN and it's based on industry standards (http://jwt.io/). For starters, read my JWT getting started blog entry and then use the links to read the core documentation; there is also a small decoding sketch just after this list.
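To make the token part concrete, here is a minimal sketch (my own illustration, plain JDK classes only) of how the 3rd party app could peek inside the JWT it receives. A JWT is three base64url-encoded segments separated by dots; this decodes the middle segment to expose the JSON claims (issuer, expiry, calling user). Note it only inspects the claims - verifying the signature against the Sales Cloud certificate is a separate step:

import java.util.Base64;

public class JwtPeek {
    public static void main(String[] args) {
        String jwt = args[0];
        // header.payload.signature - we only want the payload here
        String[] parts = jwt.split("\\.");
        // base64url-decode the payload into its JSON claims
        String claims = new String(Base64.getUrlDecoder().decode(parts[1]));
        System.out.println(claims);
    }
}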

That covers the top 5 areas of integration. Now for a list of places where you can get even MORE useful information:

More Information sources

  1. Oracle Learning Centres Quick Webinars on SalesCloud Integration
    • I worked with Development to get this mini tutorial series done. It's excellent, but then I'm obviously not biased, eh ;-)
  2. R9 Simplified WebServices doc
    • This is a new document we recently completed on how to use the new R9 Simplified SOAP TCA services. Although the document is targeted at R9 developers, it covers many of the standard topics, like how to create a proxy, how to create a create operation, etc. It even has some sample CRUD payloads which are really useful.
  3. Oracle Fusion Developer Relations
    • Good friends of mine; they host a fantastic blog, YouTube channel and whitepapers for Fusion developers - another gold mine of information covering customization, extensions and integration code.
  4. Oracle Fusion Developer Relations Youtube channel
    • Not content with an awesome blog, the Developer Relations folk even have a YouTube channel where they host a collection of short "tutorials" showing all sorts of things, such as "How to add a field to a page", "How to call a webservice", etc.
  5. Oracle Fusion Developer Relations Whitepapers
    • Whitepapers on topics including custom application development, ESS development, and Groovy and Expression Language.
  6. And finally there is my humble blog, where I try to blog on things which aren't documented anywhere else. If they are documented and are interesting I often link to them - mainly because I want to be able to find them myself :-)

That's it, folks!

If there are blog entries you'd like to see, or specific how-tos, then feel free to contact me.

Angelo 


Debugging PeopleSoft Absence Management Forecast

Javier Delgado - Thu, 2014-12-04 12:02
Forecasting is one of the most useful PeopleSoft Absence Management functionalities. It allows users to know what the resulting balance is going to be when entering an absence. The alternative is to wait until the Global Payroll calendar group is calculated, which naturally is far from being an online calculation.

Although this is a handy functionality, the calculation process does not always return the expected results. For some specific needs, the system elements FCST ASOF DT, FCST BGN DT and FCST END DT may be needed. These elements are null for normal Global Payroll runs, so formulas may behave differently in those runs than in the actual forecast execution. If you ever hit a calculation issue in the forecast process that cannot be solved by looking at the element definitions, you may be stuck.

When this type of issue is found in a normal Global Payroll execution, one handy option is to enable the Debug information and then review the Element Resolution Chain page. This page shows the step-by-step calculation of each element and is particularly helpful in identifying how an element is calculated.

Unfortunately, this information is not available in the standard forecast functionality. Luckily, it can be enabled using a tiny customisation.

In PeopleSoft HCM 9.1, the forecast functionality is executed from two different places:

DERIVED_GP.FCST_PB.FieldFormula - Abs_ForecastSetup function
FUNCLIB_GP_ABS.FCST_PB.FieldFormula - Abs_ForecastExec function

In both PeopleCode events, you will find a statement like this one:

SQLExec("INSERT INTO PS_GP_RUNCTL(OPRID, RUN_CNTL_ID, CAL_RUN_ID, TXN_ID, STRM_NUM, GROUP_LIST_ID, RUN_IDNT_IND, RUN_UNFREEZE_IND, RUN_CALC_IND, RUN_RECALC_ALL_IND, RUN_FREEZE_IND, SUSP_ACTIVE_IND, STOP_BULK_IND, RUN_FINAL_IND, RUN_CANCEL_IND, RUN_SUSPEND_IND, RUN_TRACE_OPTN, RUN_PHASE_OPTN, RUN_PHASE_STEP, IDNT_PGM_OPTN, NEXT_PGM, NEXT_STEP, NEXT_NUM, CANCEL_PGM_OPTN, NEXT_EMPLID, UPDATE_STATS_IND, LANGUAGE_CD, EXIT_POINT, SEQ_NUM5, UE_CHKPT_CH1, UE_CHKPT_CH2, UE_CHKPT_CH3, UE_CHKPT_DT1, UE_CHKPT_DT2, UE_CHKPT_DT3, UE_CHKPT_NUM1, UE_CHKPT_NUM2, UE_CHKPT_NUM3,PRC_NUM,OFF_CYCLE) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,%datein(:33),%datein(:34),%datein(:35),:36,:37,:38,:39,:40)", &OprID, &RunCntl_ID, &CalcRunId, &TxnID, 0, &SpaceFiller, "Y", "N", "Y", "N", "N", "N", &ApprByInd, "N", "N", "N", "N", &RunPhaseOptN, &RunPhaseStep, &SpaceFiller, &SpaceFiller, 0, 0, &SpaceFiller, &SpaceFiller, "N", "ENG", &SpaceFiller, 0, &SpaceFiller, &SpaceFiller, &SpaceFiller, "", "", "", 0, 0, 0, 0, "N");

You will notice that the RUN_TRACE_OPTN field is set to "N". If you use "A" instead as the trace option value, you will obtain the Element Resolution Chain:

SQLExec("INSERT INTO PS_GP_RUNCTL(OPRID, RUN_CNTL_ID, CAL_RUN_ID, TXN_ID, STRM_NUM, GROUP_LIST_ID, RUN_IDNT_IND, RUN_UNFREEZE_IND, RUN_CALC_IND, RUN_RECALC_ALL_IND, RUN_FREEZE_IND, SUSP_ACTIVE_IND, STOP_BULK_IND, RUN_FINAL_IND, RUN_CANCEL_IND, RUN_SUSPEND_IND, RUN_TRACE_OPTN, RUN_PHASE_OPTN, RUN_PHASE_STEP, IDNT_PGM_OPTN, NEXT_PGM, NEXT_STEP, NEXT_NUM, CANCEL_PGM_OPTN, NEXT_EMPLID, UPDATE_STATS_IND, LANGUAGE_CD, EXIT_POINT, SEQ_NUM5, UE_CHKPT_CH1, UE_CHKPT_CH2, UE_CHKPT_CH3, UE_CHKPT_DT1, UE_CHKPT_DT2, UE_CHKPT_DT3, UE_CHKPT_NUM1, UE_CHKPT_NUM2, UE_CHKPT_NUM3,PRC_NUM,OFF_CYCLE) values (:1,:2,:3,:4,:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:16,:17,:18,:19,:20,:21,:22,:23,:24,:25,:26,:27,:28,:29,:30,:31,:32,%datein(:33),%datein(:34),%datein(:35),:36,:37,:38,:39,:40)", &OprID, &RunCntl_ID, &CalcRunId, &TxnID, 0, &SpaceFiller, "Y", "N", "Y", "N", "N", "N", &ApprByInd, "N", "N", "N", "A", &RunPhaseOptN, &RunPhaseStep, &SpaceFiller, &SpaceFiller, 0, 0, &SpaceFiller, &SpaceFiller, "N", "ENG", &SpaceFiller, 0, &SpaceFiller, &SpaceFiller, &SpaceFiller, "", "", "", 0, 0, 0, 0, "N");

By making this change, you will notice that the GP_AUDIT_TBL table starts being populated with the Element Resolution Chain information. However, it may still not be visible from the page itself, because some tables are only populated temporarily during the forecast execution. In order to enable access for forecast runs, you will need to customise the GP_AUDIT_SEG_VW search record by adding the marked lines (the entire UNION ALL branch in the SQL below) to the SQL definition:

SELECT DISTINCT A.CAL_RUN_ID 
 , A.EMPLID 
 , A.EMPL_RCD 
 , A.GP_PAYGROUP 
 , A.CAL_ID 
 , A.ORIG_CAL_RUN_ID 
 , B.RSLT_SEG_NUM 
 , A.FICT_CAL_ID 
 , A.FICT_CAL_RUN_ID 
 , A.FICT_RSLT_SEG_NUM 
 , B.RSLT_VER_NUM 
 , B.RSLT_REV_NUM 
 , B.SEG_BGN_DT 
 , B.SEG_END_DT 
  FROM PS_GP_AUDIT_TBL A 
  , PS_GP_PYE_SEG_STAT B 
 WHERE A.CAL_RUN_ID = B.CAL_RUN_ID 
   AND A.EMPLID = B.EMPLID 
   AND A.EMPL_RCD = B.EMPL_RCD 
   AND A.GP_PAYGROUP = B.GP_PAYGROUP 
   AND A.CAL_ID = B.CAL_ID 
  UNION ALL 
 SELECT DISTINCT A.CAL_RUN_ID 
 , A.EMPLID 
 , A.EMPL_RCD 
 , A.GP_PAYGROUP 
 , A.CAL_ID 
 , A.ORIG_CAL_RUN_ID 
 , A.RSLT_SEG_NUM 
 , A.FICT_CAL_ID 
 , A.FICT_CAL_RUN_ID 
 , A.FICT_RSLT_SEG_NUM 
 , 1 
 , 1 
 , NULL 
 , NULL 
  FROM PS_GP_AUDIT_TBL A 
 WHERE NOT EXISTS ( 
 SELECT 'X' 
  FROM PS_GP_PYE_SEG_STAT B 
 WHERE A.CAL_RUN_ID = B.CAL_RUN_ID 
   AND A.EMPLID = B.EMPLID 
   AND A.EMPL_RCD = B.EMPL_RCD 
   AND A.GP_PAYGROUP = B.GP_PAYGROUP 
   AND A.CAL_ID = B.CAL_ID)

I hope you find this useful. Should you have any questions or doubts, I will be happy to assist.

Note: Keep in mind that it is not a good idea to leave the Debug information enabled in Production environments, at least not permanently. The time needed to run a forecast calculation with this type of information is significantly higher than without it. So, if you do not want to hit performance issues, my recommendation is to store a flag in a table indicating whether the Element Resolution Chain for forecasting should be enabled or not.
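For instance, a minimal sketch of that idea (the table and field names here are hypothetical, not delivered objects) would read the flag and derive the trace option instead of hard-coding it:

Local string &TraceOptn;
/* Read the custom control table; default to no trace if the flag is not "A" */
SQLExec("SELECT TRACE_FLAG FROM PS_XX_FCST_TRACE", &TraceOptn);
If &TraceOptn <> "A" Then
   &TraceOptn = "N";
End-If;
/* ...then bind &TraceOptn in place of the literal RUN_TRACE_OPTN value above */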


What Does “Backup Restore Throttle Speed” Wait Mean?

Pythian Group - Thu, 2014-12-04 11:29

After migrating a 10g database to 11g, I asked the application team to start their tests in order to validate that everything was working as expected. I decided to keep an eye on OEM's Top Activity page while they were running their most important job. I already knew what kind of "colors" I would find because I had checked its behavior in the former version. Suddenly, a strange kind of wait appeared on my screen: it was my first encounter with Backup Restore Throttle Speed.

 

OEM graph 2

I had never seen this wait before, and since it was listed in a user's session its name really confused me. No RMAN operations were running at that time, and the FRA was almost empty. I checked Oracle's documentation and My Oracle Support, and found nothing but one Community post from 24-SEP-2013 with no conclusions. In the meantime, the job ended and I got confirmation that everything was fine - even faster than in the old version. Weird, very weird. It was time to review the PL/SQL code.

After reading lots of lines, a function inside the package caught my attention:

Sleep (l_master_rec.QY_FIRST_WAIT_MIN * 60);

Since the job was using a log table to keep track of its execution, I was able to match the wait time with this function pretty quickly. This code was inside the function’s DDL:

for i in 1 .. trunc( seconds_to_sleep/600 )
loop
  -- sleep in 10-minute (600-second) chunks
  sys.DBMS_BACKUP_RESTORE.SLEEP( 600 );
end loop;
-- then sleep for whatever remainder of the requested time is left
sys.DBMS_BACKUP_RESTORE.SLEEP( seconds_to_sleep - trunc(seconds_to_sleep/600)*600 );

Finally I had found the reason for this wait (and the explanation for its backup/restore-related name): DBMS_BACKUP_RESTORE.SLEEP. As described in MOS note "How to Suspend Code Execution In a PL/SQL Application (Doc ID 1296382.1)", the package was being used to pause the job's execution while waiting for another task to finish.

Lastly, it’s worth noting that OEM did not graph this wait on the 10g database but it was always there.

Categories: DBA Blogs

All You Need, and Ever Wanted to Know About the Dynamic Rolling Year

Rittman Mead Consulting - Thu, 2014-12-04 09:50

DRY_1

Among the many reporting requirements that have comprised my last few engagements has been what, from now on, I'll be referring to as the 'Dynamic Rolling Year'. That is, through the use of a dashboard prompt, the user is able to pick a year-month combination, and the chosen view, in this case a line graph, will dynamically display a year's worth of figures with the chosen year-month as the most current starting point. The picture above demonstrates this feature. You can see that we've chosen the '201212' year-month combination and, as a result, our view displays revenue data from 201112 through to 201212. Let's choose another combination to make sure things work properly and to demonstrate the functionality.

DRY_2

From the results in the picture above, it seems we've gotten our view to behave as expected. We've chosen '201110' as our starting year-month combination and the view has changed to display the revenue trend back to '201010'. Now how do we do this? A quick web search will offer up myriad complex and time-consuming solutions which, while completely feasible, can be onerous to implement. While it was tempting to reap the benefits of someone else's perhaps hard-fought solution, I just knew there had to be an easier way. As a note of caution, I'll preface the next bit by saying that this only works on a dashboard, using a dashboard prompt in tandem with the target analysis. This solution was tested with a report prompt, however the desired results were not obtained, even when 'hard coding' default values for our presentation variables. As well, there are a few assumptions I'm making about what kinds of columns you have in your subject area's time dimension, namely that you have some sort of Year and Month column within your hierarchy.

1. Establish Your Analysis and Prompt

DRY_3

I’ve brought in the Month (renamed it Year – Month) column from Sample App’s Sample Sales subject area along with the Revenue fact. For the sake of this exercise, we’re going to keep things fairly simple and straightforward. Our Month column exists in the ‘YYYY / MM’ format so we’re going have to dice it up a little bit to get it to where we need it to be. To get our dynamic rolling year to work, all we’re going to do is a little math. In a nutshell, we’re going to turn our month into an integer as ‘YYYYMM’ and simply subtract a hundred as a filter using the presentation variables that correspond to our new month column. To make the magic happen, bring in a dummy column from your time dimension, concatenate the year and month together, cast them as a char and then the whole statement as an integer (as seen in the following formula: CAST(substring(“Time”.”T02 Per Name Month”, 1, 4)||substring(“Time”.”T02 Per Name Month”, -2, 2) AS INT) ). Then, apply a ‘between’ filter to this column using two presentation variables.

DRY_4

Don’t worry that we’ve yet to establish them on our dashboard prompt as that will happen later. In the picture above, I’ve set the filter to be between two presentation variables entitled ‘YEARMONTH_VAR’. As you can see, I’ve subtracted a hundred as, in this case, my year – month concatenation exists in the ‘YYYYMM’ format. You may alter this number as needed. For example, a client already had this column established in their Time dimension, yet it was in the ‘YYYYMMM’ format with a leading zero before the month. In that case, we had to subtract 1000 in order to establish the rolling year. Moving on…. After you’ve set up the analysis, save it off and let’s build the dashboard prompt. The results tab will error out. This is normal. We’re going to be essentially doing the same thing here.

We need to get our analysis and prompt to work together (like they should), so we'll need to, once again, format the year-month column on the prompt the same way we did on our analysis, through the 'prompt for column' link. You can simply copy and paste the formula you used in the 'Edit Formula' window of your analysis into this screen. Don't forget to assign a presentation variable to this column, with whatever name you used in the filter in your analysis (in this case it was YEARMONTH_VAR).

DRY_6

DRY_7

DRY_8

2. Build the Dashboard

2014-12-04_10-04-56

Starting a new dashboard, bring in your prompt and analysis from the Shared Folder. In this exercise, I've placed the prompt below the analysis and have done some custom formatting. If you'd like to recreate this example exactly, then in addition to the default table view you'll need to create a line graph view that groups by our year-month column and uses the revenue column as the measure. And voila. Save and run the dashboard. Note that if you didn't adopt a default value for your dashboard prompt, it will error out until you actually pick and apply a value from the prompt drop-down. Admittedly, the closer you have to a concatenated year-month (YYYYMM) column in your time dimension, the easier the implementation of this method will be. You could even set your default value to an appropriate server/repository variable so that your dashboard opens to the most current and/or a desired year-month combination. However, now that you've seen this, hopefully it's sparked some inspiration into how you might apply this technique to other facets of your work and future reporting.

DRY_1

Categories: BI & Warehousing

Private or Public Training? (and a Discount Code)

Rittman Mead Consulting - Thu, 2014-12-04 08:22

Rittman Mead offers world class training that covers various Business Intelligence, Data Warehousing and Data Integration tools.

Rittman Mead currently offers two types of training, Public and Private – so what is the best option for your organization?

Both Private and Public options are staffed by the same highly qualified, experienced instructors. One thing to note: our instructors aren't merely technology instructors - they are real-world consultants.

What does that mean for you? It means questions aren’t answered by the instructor flipping to a slide in a deck and reading off the best practice. You’ll probably hear an answer more like “I was on site with a company a few weeks ago who had the same issue. Let me tell you how we helped that client!”

Let’s start with Public courses. Our offerings include an OBIEE Bootcamp, an ODI Bootcamp, Exalytics for System Administrators, RPD Modeling and our all new OBIEE Front End Development, Design and Best Practices.

Why Public? If you have four or fewer attendees, the public classes are typically the most cost effective choice. You’ll attend the course along with other individuals from various public and private sector organizations. There’s plenty of time to learn the concepts, get your hands dirty by completing labs as well as getting all of your questions answered by our highly skilled instructors.

Now, our Private courses. There are primarily two reasons a company would choose private training: cost, and the ability to customize.

Cost: Our Private Training is billed based on the instructor's time, not the number of attendees. If you have five or more individuals, it is most likely going to be a more cost-effective option for us to come to you. Also, you'll be covering travel costs for one instructor rather than your entire team.

Customization: Once you select the course that is best for your team, we'll set up a call with a trainer to review your team's experience level and your main goals. Based on that conversation, we can tweak the syllabus to make sure the areas where you need extra focus are thoroughly covered.

One thing to note is we do offer a few courses (OWB 11gR2 & ODI 11g) privately that aren’t offered publicly.

We also offer a Custom Course Builder (http://www.rittmanmead.com/training/customobiee11gtraining/) which allows you to select modules out of various training classes and build a customized course for your team – this flexibility is a great reason to select Private Training.

Some of our clients love having their team get away for the week and focus strictly on training while others prefer to have training on site at their offices with the flexibility to cut away for important meetings or calls. Whichever program works best for your team, we have got you covered!

For more information, reach out to us at info@rittmanmead.com. If you’ve made it all the way to the end of this blog post, congratulations! As a way of saying thanks, you can mention the code “RMTEN” to save 10% on your next private or public course booking!

Categories: BI & Warehousing

Influence execution plan without adding hints

Oracle in Action - Thu, 2014-12-04 04:54


We often encounter situations where a SQL statement runs optimally when it is hinted, but sub-optimally otherwise. We can use hints to get the desired plan, but it is not desirable to use hints in production code: the use of hints involves extra code that must be managed, checked, and controlled with every Oracle patch or upgrade. Moreover, hints freeze the execution plan, so you will not be able to benefit from a possibly better plan in the future.

So how can we make such queries use the optimal plan, until a provably better plan comes along, without adding hints?

Well, the answer is to use SQL Plan Management, which ensures that you get the desired plan, and that the plan can evolve over time as the optimizer discovers better ones.

To demonstrate the procedure, I have created two tables, CUSTOMER and PRODUCT, having CUST_ID and PROD_ID respectively as primary keys. The PROD_ID column in the CUSTOMER table is a foreign key and is indexed.

SQL>conn hr/hr

drop table customer purge;
drop table product purge;

create table product(prod_id number primary key, prod_name char(100));
create table customer(cust_id number primary key, cust_name char(100), prod_id number references product(prod_id));
create index cust_idx on customer(prod_id);

insert into product select rownum, 'prod'||rownum from all_objects;
insert into customer select rownum, 'cust'||rownum, prod_id from product;
update customer set prod_id = 1000 where prod_id > 1000;

exec dbms_stats.gather_table_stats (USER, 'customer', cascade=> true);
exec dbms_stats.gather_table_stats (USER, 'product', cascade=> true);

– First, let's have a look at the undesirable plan, which does not use the index on the PROD_ID column of the CUSTOMER table.

SQL>conn / as sysdba
    alter system flush shared_pool;

    conn hr/hr

    variable prod_id number
    exec :prod_id := 1000

    select cust_name, prod_name
    from customer c, product p
    where c.prod_id = p.prod_id
    and c.prod_id = :prod_id;

    select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------
SQL_ID  b257apghf1a8h, child number 0
-------------------------------------
select cust_name, prod_name from customer c, product p where c.prod_id
= p.prod_id and c.prod_id = :prod_id

Plan hash value: 3134146364

----------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |   412 (100)|          |
|   1 |  NESTED LOOPS                |              | 88734 |    17M|   412   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| PRODUCT      |     1 |   106 |     2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | SYS_C0010600 |     1 |       |     1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS FULL          | CUSTOMER     | 88734 |  9098K|   410   (1)| 00:00:01 |
----------------------------------------------------------------------

– Load the undesirable plan into a baseline, to establish a SQL plan baseline for this query into which the desired plan will be loaded later.

SQL>variable cnt number
    exec :cnt := dbms_spm.load_plans_from_cursor_cache(sql_id => 'b257apghf1a8h');

    col sql_text for a35 word_wrapped
    col enabled for a15

    select  sql_text, sql_handle, plan_name, enabled 
    from     dba_sql_plan_baselines
    where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE            PLAN_NAME                      ENABLED
----------------------------------- --------------------- ------------------------------ -------
select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b YES

– Disable the undesirable plan so that it will not be used

SQL>variable cnt number
    exec :cnt := dbms_spm.alter_sql_plan_baseline (-
    SQL_HANDLE => 'SQL_7d3369334b24a117',-
    PLAN_NAME => 'SQL_PLAN_7ucv96d5k988rfe19664b',-
    ATTRIBUTE_NAME => 'enabled',-
    ATTRIBUTE_VALUE => 'NO');

    col sql_text for a35 word_wrapped
    col enabled for a15

    select  sql_text, sql_handle, plan_name, enabled 
    from   dba_sql_plan_baselines
     where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE            PLAN_NAME                      ENABLED
----------------------------------- --------------------- ------------------------------ -------
select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b NO

– Now we use a hint in the above SQL to generate the optimal plan, which uses the index on the PROD_ID column of the CUSTOMER table

SQL>conn hr/hr

variable prod_id number
exec :prod_id := 1000

select /*+ index(c)*/ cust_name, prod_name
from customer c, product p
where c.prod_id = p.prod_id
and c.prod_id = :prod_id;

select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  5x2y12dzacv7w, child number 0
-------------------------------------
select /*+ index(c)*/ cust_name, prod_name from customer c, product p
where c.prod_id = p.prod_id and c.prod_id = :prod_id

Plan hash value: 4263155932

-----------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |              |       |       |  1618 (100)|          |
|   1 |  NESTED LOOPS                        |              | 88734 |    17M|  1618   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID        | PRODUCT      |     1 |   106 |    2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN                 | SYS_C0010600 |     1 |       |    1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID BATCHED| CUSTOMER     | 88734 |  9098K|  1616   (1)| 00:00:01 |
|   5 |    INDEX FULL SCAN                   | SYS_C0010601 | 89769 |       |  169   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

– Now we will load the hinted plan into the baseline.
– Note that we have the SQL_ID and PLAN_HASH_VALUE of the hinted statement and the SQL_HANDLE of the unhinted statement, i.e. we are associating the hinted plan with the unhinted statement.

SQL>variable cnt number
exec :cnt := dbms_spm.load_plans_from_cursor_cache(-
sql_id => '5x2y12dzacv7w',  -
plan_hash_value => 4263155932, -
sql_handle => 'SQL_7d3369334b24a117');

– Verify that there are now two plans loaded for that SQL statement:

  •  The unhinted sub-optimal plan is disabled.
  •  The hinted optimal plan, which even though it is for a "different query" can work with the earlier unhinted query (the SQL_HANDLE is the same), is enabled.
SQL>col sql_text for a35 word_wrapped
col enabled for a15

select  sql_text, sql_handle, plan_name, enabled from dba_sql_plan_baselines
where sql_text like   'select cust_name, prod_name%';

SQL_TEXT                            SQL_HANDLE            PLAN_NAME                      ENABLED
----------------------------------- --------------------- ------------------------------ -------
select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rea320380 YES
select cust_name, prod_name         SQL_7d3369334b24a117  SQL_PLAN_7ucv96d5k988rfe19664b NO

– Verify that the hinted plan is used even though we do not use the hint in the query.
– The note at the end of the plan output confirms that the baseline has been used for this statement.

SQL>variable prod_id number
exec :prod_id := 1000

select cust_name, prod_name
from customer c, product p
where c.prod_id = p.prod_id
and c.prod_id = :prod_id;

select * from table (dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
-------------------------------------
SQL_ID  b257apghf1a8h, child number 0
-------------------------------------
select cust_name, prod_name from customer c, product p where c.prod_id
= p.prod_id and c.prod_id = :prod_id

Plan hash value: 4263155932

-----------------------------------------------------------------------------------------------------
| Id  | Operation                            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                     |              |       |       |  1618 (100)|          |
|   1 |  NESTED LOOPS                        |              | 88734 |    17M|  1618   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID        | PRODUCT      |     1 |   106 |    2   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN                 | SYS_C0010600 |     1 |       |    1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID BATCHED| CUSTOMER     | 88734 |  9098K|  1616   (1)| 00:00:01 |
|   5 |    INDEX FULL SCAN                   | SYS_C0010601 | 89769 |       |  169   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("P"."PROD_ID"=:PROD_ID)
4 - filter("C"."PROD_ID"=:PROD_ID)

Note
-----
- SQL plan baseline SQL_PLAN_7ucv96d5k988rea320380 used for this statement

With this baseline solution, you need not employ permanent hints in the production code, and hence there are no upgrade issues. Moreover, the plan will evolve with time as the optimizer discovers better ones.

Note: Using this method, you can swap the plan only for a query which is fundamentally the same, i.e. you should be able to get the desirable plan by adding hints, modifying an optimizer setting, playing around with statistics etc., and then associate the sub-optimally performing statement with the optimal plan.

I hope this post was useful.

Your comments and suggestions are always welcome!

References:
http://www.oracle.com/technetwork/issue-archive/2014/14-jul/o44asktom-2196080.html

Copyright © ORACLE IN ACTION [Influence execution plan without adding hints], All Right Reserved. 2014.

The post Influence execution plan without adding hints appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

Updated (XML) Content Section

Marco Gralike - Wed, 2014-12-03 15:34
“Once per year I try to update the “XML Content” page that, in principle, should…

SSL with PKCS12 truststore

Laurent Schneider - Wed, 2014-12-03 14:31

Many, many moons ago I vaguely remember having a similar issue with the java keystore/truststore and Microsoft certificate stores.

When you start using SSL for your listener, you could potentially face a large number of issues among your toolsets. In my opinion, the most disastrous one is that you cannot monitor your database with Enterprise Manager and a TCPS listener. I tried with 10g, 11g and 12c, and I seriously doubt it will come in 13c, even though a dozen ERs have been filed. The best workaround I found is to use a separate listener to monitor your database, and to monitor the ssl-listener itself with IPC.
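For illustration, a minimal listener.ora sketch of that workaround could look like this (host, ports and listener names are hypothetical):

# TCPS listener used by applications; it also gets an IPC endpoint so it can be monitored
LISTENER_SSL =
  (DESCRIPTION_LIST =
    (DESCRIPTION = (ADDRESS = (PROTOCOL = TCPS)(HOST = dbhost)(PORT = 2484)))
    (DESCRIPTION = (ADDRESS = (PROTOCOL = IPC)(KEY = LSNRSSL))))

# separate plain-TCP listener kept around so Enterprise Manager can monitor the database
LISTENER_EM =
  (DESCRIPTION_LIST =
    (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))))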

Today I had to deal with a driver from Datadirect, which finally works perfectly fine with SSL, but the challenge was to know what to put in the keystore and truststore…

In SQLNET, you use the single-sign-on wallet (cwallet.sso) created by OWM/orapki or ldap.

In Java, by default, you use a java keystore that you generate with keytool (or you even use the default cacerts). There is only a lexical difference between a keystore and a truststore; they could both be the same file. As documented in the JSSE Reference:
A truststore is a keystore that is used when making decisions about what to trust

But for some other tools the java keytool won't do the trick, because the truststore cannot be of type JKS.

One common type is PKCS12. This is the ewallet.p12 you get with the Wallet Manager.

E.g. from java :


  -Djavax.net.ssl.trustStore=ewallet.p12
  -Djavax.net.ssl.trustStoreType=PKCS12
  -Djavax.net.ssl.trustStorePassword=***

To use single-sign-on, use trustStoreType=SSO, as I wrote there: jdbc-ssl
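For completeness, the same truststore can also be loaded programmatically instead of via system properties - a minimal sketch, assuming the wallet file name and password, using only JDK classes:

import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class P12TrustStore {
    public static void main(String[] args) throws Exception {
        // load the PKCS12 wallet as a truststore
        KeyStore trustStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("ewallet.p12")) {
            trustStore.load(in, "walletPassword".toCharArray());
        }
        // build an SSLContext that trusts the certificates found in the wallet
        TrustManagerFactory tmf =
            TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, tmf.getTrustManagers(), null);
        System.out.println("Loaded " + trustStore.size() + " trusted entries");
    }
}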

Other common formats are base64 X509 (PEM) or DER files. The openssl command line allows easy conversion:


openssl pkcs12 -in ewallet.p12 -out file.pem
openssl x509 -outform der -in file.pem -out file.der

or in Windows Explorer, just click on your p12 file and then click on the certificate to export it into the certificate store.

Is Kuali Guilty of “Open Washing”?

Michael Feldstein - Wed, 2014-12-03 14:05

Phil and I don’t write a whole lot about Student Information Systems (SISs) and the larger Enterprise Resource Management (ERP) suites that they belong to, not because they’re unimportant but because the stories about them often don’t fit with our particular focus in educational technology. Plus, if I’m being completely honest, I find them to be mostly boring. But the story of what’s going with Kuali, the open source ERP system for higher education, is interesting and relevant for a couple of reasons. First, while we hear a lot of concern from folks about the costs of textbooks and LMSs, those expenses pale in comparison to what colleges and universities pay in SIS licensing and support costs. A decent sized university can easily pay a million dollars or more for their SIS, and a big system like UC or CUNY can pay tens or even hundreds of millions. One of the selling points of Kuali has been to lower costs, thus freeing up a lot of that money to meet other needs that are more closely related to the core university mission. So what happens in the SIS space has a potentially huge budgetary impact on teaching and learning. Second, there’s been a lot of conversation about what “open” means in the context of education and, more specifically, when that label is being abused. I am ambivalent about the term “open washing” because it encourages people to flatten various flavors of openness into “real” and “fake” open when, in fact, there are often legitimately different kinds of open with different value (and values). That said, there certainly exist cases where use of the term “open” stretches beyond the bounds of even charitable interpretation.

When Kuali, a Foundation-developed project released under an open source license, decided to switch its model to a company-developed product where most but not all of its code would be released under a different open source license, did they commit an act of “open washing”? Finding the answer to that question turns out to be a lot more complicated than you might expect. As I started to dig into it, I found myself getting deep into details of both open source licensing and cloud computing, since the code that KualiCo will be withholding is related to cloud offerings. It turned out to be way too long and complex to deal with both of those topics in one post, so I am primarily going to follow up on Phil’s earlier posts on licensing here and save the cloud computing part for a future post.

Let’s start by recapping the situation:

  • The Kuali Foundation is a non-profit entity formed by a group of schools that wanted to pool their resources to build a suite of open source components that together could make up an ERP system suitable for colleges and universities.
  • One of the first modules, Kuali Financials, has been successfully installed and run by a small number of schools which have reported dramatic cost savings, both in the obvious licensing but also in the amount of work it took to get the system running.
  • Other elements of the suite have been more problematic. For example, the registrar module, Kuali Student, has been plagued with multi-year delays and members dropping out of the project.
  • Kuali code has historically been released under an open source license called the “Educational Community License (ECL)”, which is a slight variation of the Apache license.
  • The Kuali Foundation announced that ownership of the code would be transferred to a new commercial entity, KualiCo. Their reason for doing so is to accelerate the development of the suite, although they have not been entirely clear about what they think the causes of their development speed problem are and why moving to a commercial structure will solve the problems.
  • KualiCo intends to release most, but not all, of future code under a different open source license, the Affero Gnu Public License (AGPL).
  • Some of the new code to be developed by KualiCo will not be released as open source. So far the only piece they have named in this regard is their multitenant code.
  • These changes happened fairly quickly, driven by the board, without a lot of community discussion.

So is this “open washing”? Is it a betrayal of trust of an open source community, or deceptively claiming to be open and proving to be otherwise? In one sense, the answer to that question depends on the prior expectations of the community members. I would argue that the biggest and clearest act of open washing may have occurred over a decade ago with the creation of the term “community source.” Back in the earlier days of Sakai—another “community source” project—I made a habit of asking participants what they thought “community source” meant. Very often, the answer I got was something along the lines of, “It’s open source, but with more emphasis on building a community.” But that’s not what Brad Wheeler meant at all when he popularized the term. As I have discussed in an earlier post, “community source” was intended to be a consortial model of development with an open source license tacked onto the end product. Far from emphasizing community, it was explicitly designed to maximize the control and autonomy of the executive decision-makers from the consortial partners—and to keep a lid on the decision-making power of other community participants. Remember the motto: “If you’ve got the gold, then you make the rules.” Community source, as defined by those who coined the term, is a consortium with a license. But community source was always marketed as an improvement on open source. “It’s the pub between the cathedral and the bazaar where all the real work gets done,” Brad liked to say.

Different “community source” projects followed the centralized, financially-driven model to different degrees. Kuali, for example, was always explicitly and deliberately more centralized in its decision-making processes than Sakai. As an outside observer of Kuali’s culture and decision-making processes, and as a close reader of Brad Wheeler’s articles and speeches about community source, I can’t say that the move to KualiCo surprised me terribly. Nor can I say that it is inconsistent with what Brad has said all along about how community source works and what it is for. The consortial leaders, whose membership I assume was roughly defined by their financial contributions to the consortium, made a decision that supports what they believe is in their interest. All code that was previously released under an open source license will remain under an open source license. Presumably the Kuali Foundation and KualiCo will be clear going forward about which consortial contributions go toward creating functionality that will be open source or private source in the future. I am not privy to the internal politics of the foundation and therefore am not in a position to say whether some of those who brought the gold were left out of the rule-making process. To the degree that Brad’s golden rule was followed, the move to KualiCo is consistent with the clearly stated (if craftily marketed) philosophy of community source.

The question of how much these changes practically affect open source development of Kuali is a more complicated one to answer. It is worth stating that another tenet of community source was that it was specifically intended to be commercial-friendly, meaning that the consortia tried to avoid licensing or other practices that discouraged the development of a healthy and diverse ecosystem of support vendors. (Remember, community source frames problems with the software ecosystem as procurement problems. As such, its architects are concerned with maintaining a robust range of support contracting options.) Here the balancing act is more delicate. On the one hand, to the degree that the changes give KualiCo advantages over potential competitors, Kuali will be violating the commercial-friendly principle of community source. On the other hand, to the degree that the changes do not give KualiCo advantages over potential competitors, it’s not clear why one would think that KualiCo will be a viable and strong enough company to move development faster than the way it has been until now.

The first thing to point out here is that, while KualiCo has only said so far that it will keep the multitenant code private source, there is nothing to prevent them from keeping more code private in the future. Instructure Canvas, which started out with only the multitenant code as private source, currently has the following list of features that are not in the open source distribution:

  • Multi-tenancy extensions
  • Mobile integration
  • Proprietary SIS integrations
  • Migration tools for commercial LMSs
  • Other minor customizations that only apply to our hosted environment
  • Chat Tool
  • Attendance Tool (Roll Call)

I don’t think there is a clear and specific number of private source features that marks the dividing line between good faith open source practices and “open washing”; nor am I arguing that Instructure is open washing here. Rather, my point is that, once you make the decision to be almost-all-but-not-completely open source, you place your first foot at the top of a slippery slope. By saying that they are comfortable withholding code on any feature for the purposes of making their business viable, KualiCo’s leadership opens the door to private sourcing as much of the code as they need to in order to maintain their competitive advantage.

Then there’s the whole rather arcane but important question about the change in open source licenses. Unlike the ECL license that Kuali has used until now, AGPL is “viral,” meaning that anybody who combines AGPL-licensed code with other code must release that other code under the AGPL as well. Anybody, that is, except for the copyright holder. Open source licenses are copyright licenses. If KualiCo decides to combine  open source code to which they own the copyright with private source code, they don’t have to release the private source code under the AGPL. But if a competitor, NOTKualiCo, comes along and combines KualiCo’s AGPL-licensed code with their own proprietary code, then NOTKualiCo has to release their own code under the AGPL. This creates two theoretical problems for NOTKualiCo. First, NOTKualiCo does not have the option of making the code they develop a proprietary advantage over their competitors. They have to give it away. Second, while NOTKualiCo has to share its code with KualiCo, KualiCo doesn’t have the same obligation to NOTKualiCo. So theoretically, it would be very hard for any company to compete on product differentiators when they are building upon AGPL-licensed code owned by another company.

I say “theoretically” because, in practice, it is much more complicated than that. First, there is the question of what it means to “combine” code. The various GPL licenses recognize that some software is designed to work with other software “at arm’s length” and therefore should not be subject to the viral clause. For example, it is permissible under the license to run AGPL applications on a Microsoft Windows or Apple Mac OS X operating system without requiring that those operating systems also be released under the GPL. Some code combinations fall clearly into this category, while others fall clearly into the category of running as part of the original open source program and therefore subject to the viral clause of the GPL. But there’s a vast area in the murky middle. Do tools that use APIs specifically designed for integration fall under the viral clause? It depends on the details of how they integrate as well as who you ask. It doesn’t help that the language used to qualify what counts as “combining” in Gnu’s documentation uses terms that are specific to the C programming language.

KualiCo has said that they will specifically withhold multitenant capabilities from future open source distributions. If competitors developed their own multitenant capabilities, would they be obliged to release that code under the AGPL? Would such code be “combining” with Kuali, or could it be sufficiently arm’s-length that it could be private source? It depends on how it’s developed. Since KualiCo’s CEO is the former CTO of Instructure, let’s assume for the sake of argument that Kuali’s multitenant capabilities will be developed similarly to Canvas’. Zach Wily, Instructure’s Chief Architect, described their multitenant situation to me as follows:

[O]ur code is open-source, but only with single-tenancy. The trick there is that most of our multi-tenancy code is actually in the open source code already! Things like using global identifiers to refer to an object (instead of tenant-local identifiers), database sharding, etc, are all in the code. It’s only a couple relatively thin libraries that help manage it all that are kept as closed source. So really, the open-source version of Canvas is more like a multi-tenant app that is only convenient to run with a single tenant, rather than Canvas being a single-tenant app that we shim to be multi-tenant.

The good news from NOTKualiCo’s (or NOTInstructureCo’s) perspective is that it doesn’t sound like there’s an enormous amount of development required to duplicate that multitenant functionality. Instructure has not gone through contortions to make the development of multitenant code harder for competitors; I will assume here that KualiCo will follow a similar practice. The bad news is that the code would probably have to be released under the AGPL, since it’s a set of libraries that are intended to run as a part of Kuali. That’s far from definite, and it would probably require legal and technical experts evaluating the details to come up with a strong conclusion. But it certainly seems consistent with the guidance provided by the Gnu Foundation.

OK, so how much of a practical difference does this make for NOTKualiCo to be able to compete with KualiCo? Probably not a huge amount, for several reasons. First, we’re not talking about an enormous amount of code here; nor is it likely to be highly differentiated. But also, NOTKualiCo owns the copyright on the libraries that they release. While anybody can adopt them under the AGPL, if KualiCo wanted to incorporate any of NOTKualiCo’s code, then the viral provision would have to apply to KualiCo. The practical net effect is that KualiCo would almost certainly never use NOTKualiCo’s code. A third competitor—call them ALSONOTKualiCo—could come in and use NOTKualiCo’s code without incurring any obligations beyond those that they already assumed by adopting KualiCo’s AGPL code, so there’s a disadvantage there for NOTKualiCo. But overall, I don’t think that withholding multitenant code from KualiCo’s open source releases—assuming that it’s done the way Instructure has done it—is a decisive barrier to entry for competitors. Unfortunately, that may just mean that KualiCo will end up having to withhold other code in order to maintain a sustainable advantage.

So overall, is Kuali guilty of “open washing” or not? I hope this post has helped make clear why I don’t love that term. The answer is complicated and subjective. I believe that “community source” was an overall marketing effort that entailed some open washing, but I also believe that (a) Brad has been pretty clear about what he really meant if you listened closely enough, and (b) not every project that called itself community source followed Brad’s tenets to the same degree or in the same way. I believe that KualiCo’s change in license and withholding of code are a violation of the particular flavor of openness that community source promised, but I’m not sure how much of a practical difference that makes to the degree that one cares about the “commercial friendliness” of the project. Would I participate in work with the Kuali Foundation today if I were determined to work only on projects that are committed to open source principles and methods? No I would not. But I would have given you the same answer a year ago. So, after making you wade through all of those arcane details, I’m sorry to say that my answer to the question of whether Kuali is guilty of open washing is, “I’m not sure that I know what the question means, and I’m not sure how much the answer matters.”

The post Is Kuali Guilty of “Open Washing”? appeared first on e-Literate.

Where Is Talent When Sizing Up The Success of a Business?

Linda Fishman Hoyle - Wed, 2014-12-03 13:51

I like this picture of an animated Mark Hurd (Oracle's CEO) with the holiday baubles hanging in the background. I like it even more that he’s talking to a group of HR executives about the importance of talent in an organization. “I want the best people, I don’t want to lose them, and I want them completely engaged,” said Hurd.

At the conference in London, England, Hurd explained the hurdles companies face in the search for good talent. One is an aging workforce that will affect attrition rates when employees start to retire. The other is new college grads who have higher expectations of their career paths and reward systems. Both of these demographic forces will require “HR as a real-time application, on a par with financials,” according to Hurd.

Companies that take advantage of the internet, mobile devices, and social networks will be in a much better position than those with outdated systems. Technology, and specifically cloud applications, can help companies standardize best practices across geographies and keep up with the latest features and functionality.

However, modern SaaS-based HR tools won’t help any company differentiate itself in the marketplace without an earnest pledge to use them to help attract, retain, and motivate the best people for the jobs that need to be done. According to Michael Hickins, who wrote the Forbes article entitled Oracle CEO Mark Hurd: Make HR A Real-Time Application, a company’s future can hinge on how well it runs the race for talent.

Innovative Work Life Solutions Help Employees Balance Their Professional and Personal Lives

Linda Fishman Hoyle - Wed, 2014-12-03 13:47

A Guest Post by Mark Bennett, Oracle Director of HCM Product Strategy

The best human capital management applications have evolved from just automating standard human resource (HR) processes to become strategic tools for managing talent and analyzing workforce capabilities.

Now, a new wave of innovation is taking HR evolution a step further with work/life applications that are designed to help employees better balance their professional and personal lives. In the following interview, Oracle Director of HCM Product Strategy Mark Bennett explains how Oracle’s new Oracle Work Life Solutions Cloud offerings can enhance creativity, boost productivity, and help individuals become more effective inside and outside of work.

Q: How do you define work/life solutions?

A: They’re applications and features of products that help individuals better understand various facets of their professional and personal lives. Many people want to find a better balance between these two areas, but they’re finding they don’t always have the information they need to make that happen.

Q: Can you give us some examples of work/life solutions?

A: Oracle Work Life Solutions Cloud offers an extensible set of solutions including My Reputation, a feature that helps employees better understand how they are seen within their social media communities. It can also help them develop their professional brand as it aligns with that of their company to help with their career development and advancement. Another example is My Wellness, a feature that leverages wearable devices that enable people to monitor their activity levels throughout the day and their sleep patterns at night. So for example, people with a low energy level may decide to get more sleep to see if that makes a difference.

Offerings that are based around competition, such as the My Competitions feature of Oracle Human Capital Management Cloud, can help organizations create incentives for employees. Software developers may engage in friendly competition to see who fixes the most bugs in a week. Or someone might organize a contest for workgroup members to come up with ideas about new features for a product redesign project.

In the future, work/life solutions might include taking advantage of the video capabilities in smart phones to provide an easy way to upload clips to ask a question of coworkers, share new research findings, or even link to short profiles of a person’s appointment contacts—helping individuals connect on a more personal level and gain a better understanding of the context of meetings.

Q: How do organizations benefit from making work/life solutions available to employees?

A: There are three primary ways. First, individuals can better understand their work habits, so, for example, they don’t push themselves so hard they risk burnout or don’t pay enough attention to other facets of their lives.

Second, these solutions send a powerful signal to the workforce by indicating that the company’s interest in its employees goes beyond their professional contributions. Many employees will appreciate that their company is paying attention to their personal as well as their professional needs.

Third, there may be a dollars-and-cents payoff. If the information collected from one of the solutions encourages more employees to participate in wellness programs, healthcare costs may start to decrease through rebates from carriers or from reduced claims.

Q: What should employers keep in mind to ensure work/life solutions succeed?

A: It’s essential that companies make it clear they are not monitoring any of this data but instead are simply providing tools to help employees gain insights into their lives to help them decide whether they want to make adjustments based on their personal goals. The key is giving employees the tools they need to make their own decisions about their own lives.

Q: How do we learn more about these solutions?

A: Resources for Work Life Solutions include:

December 10: AXA HCM Cloud Customer Forum

Linda Fishman Hoyle - Wed, 2014-12-03 13:37

Join us for an Oracle HCM Cloud Customer Forum on Wednesday, December 10, 2014, at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time. 

AXA had multiple core HR systems across the globe, including PeopleSoft and SAP. After much review of the market, AXA determined that a cloud solution would suit its requirements more effectively than going through an expensive upgrade and customization process.

In this Forum, Kwasi Boateng, Head of Group HRIS Development at AXA, will talk about the company's selection process for new HR software, its implementation experience with Oracle HCM Cloud, and the expectations and benefits of its new modern HR system.

AXA is a French multinational company present in 56 countries. With 157,000 employees and distributors, AXA is committed to serving 102 million clients. AXA’s areas of expertise are reflected in a range of products and services adapted to the needs of each client in three major business lines: property-casualty insurance, life and savings, and asset management.

Invite your customers and prospects. You can register now to attend the live Forum on Wednesday, December 10, 2014 at 9:00 a.m. Pacific Time / 12:00 p.m. Eastern Time and learn more from AXA directly.

SAP certification for Oracle's Virtual Compute Appliance X4-2 (VCA X4-2)

Wim Coekaerts - Wed, 2014-12-03 12:17
We have been working with SAP to certify their products based on SAP NetWeaver 7.x (specifically on the following OS versions: Oracle Linux 5, Oracle Linux 6, Oracle Solaris 10, and Oracle Solaris 11) in a Virtual Compute Appliance environment. It is also possible to run 2-tier and 3-tier configurations/installations of Oracle Database and SAP applications on VCA.

For more detail you can go to SAP Note 2052912.

The Virtual Compute Appliance is a great, cost-effective, easy-to-deploy converged infrastructure solution. Installations can be done by the customer; it takes just a few hours to bring up a VCA system, start using it, and deploy applications. The entire setup is pre-wired, pre-installed, and pre-configured using our best practices. All the software and hardware components in VCA are standard, proven, off-the-shelf products: Oracle Linux for the management node OS, Oracle VM for the compute nodes. Any application or product certified with Oracle VM will work inside a VCA environment without change and without the need for re-certification.

It is very exciting to have the SAP certification for VCA published.

Reminder: Fishbowl Solutions Webinar Tomorrow at 1 PM CST

There’s still time to register for the webinar that Fishbowl Solutions and Oracle will be holding tomorrow from 1 PM-2 PM CST! Innovation in Managing the Chaos of Everyday Project Management will feature Fishbowl’s AEC Practice Director Cole Orndorff. Orndorff, who has a great deal of experience with enterprise information portals, said the following about the webinar:

“According to Psychology Today, the average employee can lose up to 40% of their productivity switching from task to task. The number of tasks executed across a disparate set of systems over the lifecycle of a complex project is overwhelming, and in most cases, 20% of each solution is utilized 80% of the time.

I am thrilled to have the opportunity to present on how improving workforce effectiveness can enhance your margins. This can be accomplished by providing a consistent, intuitive user experience across the diverse systems project teams use and by reusing the intellectual assets that already exist in your organization.”

To register for the webinar, visit Oracle’s website. To learn more about Fishbowl’s new Enterprise Information Portal for Project Management, visit our website.

The post Reminder: Fishbowl Solutions Webinar Tomorrow at 1 PM CST appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

From Zero to Hero....In About 2 Hours

Joel Kallman - Wed, 2014-12-03 11:23


This is an example of a real-world problem, an opportunistic one, being solved via a mobile application created with Oracle Application Express.

First, a brief bit of background.  Our son is 9 years old and is in the Cub Scouts.  Cub Scouts in the United States is an organization that is associated with Boy Scouts of America.  It's essentially a club that is geared towards younger boys, and teaches them many valuable skills - hiking, camping out, shooting a bow and arrow, tying different knots, nutrition, etc.  This club has a single fundraiser every year, where the boys go door-to-door selling popcorn, and the proceeds of the popcorn sale fund the activities of the Cub Scouts local group for the next year.  There is a leader who organizes the sale of this popcorn for the local Cub Scout group, and this leader gets the unenvious title of "Popcorn Kernel".  For the past 2 years, I've been the "Popcorn Kernel" for our Cub Scout Pack (60 Scouts).

I was recently at the DOAG Konferenz in Nürnberg, Germany, and it wasn't until my flight home that I began to think about how I was going to distribute the 1,000 items to 60 different Scouts.  My flight home from Germany was on a Sunday, and I had pre-scheduled the distribution of all of this popcorn to all 60 families on that next day, Monday afternoon.  Jet lag would not be my friend.

The previous year, I had meticulously laid out 60 different orders across a large meeting room and let the parents and Scouts pick it up.  This year, I actually had 4 volunteer helpers, but I had no time.  All I had in my possession was an Excel spreadsheet which was used to tally the orders across all 60 Cub Scouts.   But I knew I could do better than 60 pieces of paper, which was the "solution" last year.

On my flight home, on my iPad, I sketched out the simple 4-page user interface to locate and manage the orders.  As well, I wrote the DDL on my iPad for a single table.  Normally, I would use SQL Developer Data Modeler as my starting point, but this application and design needed to be quick and simple, so a single denormalized table was more than sufficient.



Bright and early on Monday morning, I logged into an existing workspace on apex.oracle.com.  I created my single table and its trigger using the Object Browser, uploaded the spreadsheet data into this table, and then massaged the data using some DML statements in SQL Commands.  Now that my table and data were complete, it was time for my mobile application!
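The table itself was nothing fancy, and the Object Browser wizard can generate the sequence-and-trigger pair that populates the primary key, so none of that PL/SQL has to be written by hand.  Here is a rough sketch of what the single denormalized table and generated trigger might look like (all names and columns are hypothetical - the actual DDL isn't shown in this post):

    -- Hypothetical reconstruction: one denormalized row per Scout's order.
    create table popcorn_orders (
        id            number primary key,
        scout_name    varchar2(100) not null,
        den           varchar2(30)  not null,   -- e.g. 'Wolf', 'Bear', 'Webelos'
        caramel_corn  number default 0,
        kettle_corn   number default 0,
        cheese_corn   number default 0,
        delivered_yn  varchar2(1) default 'N' check (delivered_yn in ('Y','N')),
        delivered_on  date
    );

    create sequence popcorn_orders_seq;

    -- The kind of key-population trigger the Object Browser wizard generates.
    create or replace trigger popcorn_orders_bi
        before insert on popcorn_orders
        for each row
    begin
        if :new.id is null then
            :new.id := popcorn_orders_seq.nextval;
        end if;
    end;
    /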

I created a simple Mobile User Interface application with navigation links on the home page.  There are multiple "dens" that make up each group in a Cub Scout Pack, and these were navigation aids as people would come and pick up their popcorn ("Johnny is in the Wolf Den").  These ultimately went to the same report page but with different filters.
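Each den link simply passes a filter value to that shared report page.  A sketch of what the list view source might look like (the page item name :P2_DEN is hypothetical):

    -- List view: one row per Scout in the selected den.
    select scout_name,
           caramel_corn + kettle_corn + cheese_corn as total_items
      from popcorn_orders
     where den = :P2_DEN
     order by scout_name;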



Once a list view report was accessed, it showed the Scout's name and the total item count for them; a click drilled down to the actual number of items to be delivered to that Scout.  Once the items were handed over and verified, the user had to click a button to complete the order.  This was the only DML update operation in the entire application.
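That one update is about as simple as DML gets - again a sketch, with hypothetical names:

    -- Fired by the "complete order" button; :P3_ID identifies the Scout's row.
    update popcorn_orders
       set delivered_yn = 'Y',
           delivered_on = sysdate
     where id = :P3_ID;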



I also added a couple charts to the starting page, so we could keep track of how many orders for each den had already been delivered and how many were remaining.
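A chart like that is one GROUP BY away from the same table - a sketch of the kind of query that could drive it:

    -- Orders delivered vs. still outstanding, per den.
    select den,
           sum(decode(delivered_yn, 'Y', 1, 0)) as delivered,
           sum(decode(delivered_yn, 'N', 1, 0)) as remaining
      from popcorn_orders
     group by den
     order by den;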



I also added a chart page to show how many of each item remained, at least according to our records.  This enabled us to do a quick "spot check" at any given point in time and assess whether the current inventory we had on hand was accurately reflected in our system.  It was invaluable!  And remember - this entire application was all on a single table in the Oracle Database.  At one point in time, 8 people were all actively using this system - 5 to do updates and fulfill orders, and the rest to simply view and monitor the progress from their homes.  Concurrency was never even a consideration.  I didn't have to worry about it.
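The inventory "spot check" can be driven the same way - a sketch, reusing the hypothetical columns from above:

    -- Items still to be handed out, according to our records.
    select sum(caramel_corn) as caramel_corn,
           sum(kettle_corn)  as kettle_corn,
           sum(cheese_corn)  as cheese_corn
      from popcorn_orders
     where delivered_yn = 'N';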



Now some would say that this application:
  • isn't pixel perfect
  • doesn't have offline storage
  • isn't natively running on the device
  • can't capitalize on the native features of the phone
  • doesn't have a badge icon
  • isn't offered in a store

And they would be correct.  But guess what?  None of it mattered.  The application was used by 5 different people, all using different devices, and I didn't care what type of devices they were using.  They all thought it was rocket science.  It looked and felt close enough to a native application that none of them noticed or cared.  The navigation and display were consistent with what they were accustomed to.  More importantly, it was a vast improvement over the alternative - either a piece of paper or, worse yet, 5 guys huddling around a single computer looking at a spreadsheet.  And this was something that I was able to produce, starting from nothing to completed solution, in about two hours.  If I hadn't been jet lagged, I might have been able to do it in an hour.

You might read this blog post and chuckle to yourself.  How possibly could this trivial application for popcorn distribution to Cub Scouts relate to a "real" mobile enterprise application?  Actually, it's enormously relevant.

  • For this application, I didn't have to know CSS, HTML or mobile user interfaces.
  • I only needed to know SQL.  I wrote no PL/SQL.  I only wrote a handful of SQL queries for the list views, charts, and the one DML statement to update the row.
  • It was immediately accessible to anyone with a Web browser and a smart phone (i.e., everyone).
  • Concurrency and scalability were never a concern.  This application easily could have been used by 1,000 people and I still would not have had any concern.  I let the Oracle Database do the heavy lifting and put an elegant mobile interface on it with Oracle Application Express.

This was a simple example of an opportunistic application.  It didn't necessarily have to start from a spreadsheet to be opportunistic.  And every enterprise on the planet (including Oracle) has a slew of application problems just like this which today go unsolved.  I went from zero to hero to rocket scientist in the span of two hours.  And so can you.

A demo version of this application (with fictitious names) is here.  I left the application as is - imperfect on the report page and the form (I should have used a read-only display).  Try it on your own mobile device.

ORA 700's make me grumpy ( well or at least confused )

Grumpy old DBA - Wed, 2014-12-03 09:16
Geez Louise, I guess I coulda/shoulda known about this before now.

At some point ( 11.1-ish? 11.2-ish? ) the Oracle software started getting worried and kind of grumpy.

You can apparently get ORA-700 errors under certain circumstances.  It's not a huge problem ( yet ) when the software decides to let you know ... ( but it might become one later, maybe? ).

I saw this one on a new test VM I am setting up ( Database 12.1.0.2 using ASM and also OEM 12.1.0.4 ).

ORA-700 [kskvmstatact: excessive swapping observed]

So anyway, after seeing that, I started doing some looking around at the memory config stuff ( but that's a longer story ).
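If you want a quick first look at whether the instance really is seeing swap activity, one place to check from inside the database is v$osstat - a sketch, assuming your platform exposes the VM_* statistics ( Linux does; availability varies elsewhere ):

    -- Cumulative bytes paged/swapped in and out since instance startup.
    select stat_name, value
      from v$osstat
     where stat_name in ('VM_IN_BYTES', 'VM_OUT_BYTES');

Cross-checking with vmstat or free at the OS level is of course the more direct route.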

Categories: DBA Blogs