
Fusion Middleware

Dept/Emp POJOs with sample data for Pivotal GemFire

Pas Apicella - Tue, 2014-08-12 21:57
I constantly blog about using DEPARTMENT/EMPLOYEE POJOs with sample data. Here is how to create files with data to load into GemFire to give you that sample set.

Note: You will need to create POJOs for the Department/Employee objects with getters/setters for the attributes shown below.

Dept Data

put --key=10 --value=('deptno':10,'name':'ACCOUNTING') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=20 --value=('deptno':20,'name':'RESEARCH') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=30 --value=('deptno':30,'name':'SALES') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;
put --key=40 --value=('deptno':40,'name':'OPERATIONS') --value-class=pivotal.au.se.deptemp.beans.Department --region=departments;

Emp Data

put --key=7369 --value=('empno':7369,'name':'SMITH','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7370 --value=('empno':7370,'name':'APPLES','job':'MANAGER','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7371 --value=('empno':7371,'name':'APICELLA','job':'SALESMAN','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7372 --value=('empno':7372,'name':'LUCIA','job':'PRESIDENT','deptno':30) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7373 --value=('empno':7373,'name':'SIENA','job':'CLERK','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7374 --value=('empno':7374,'name':'LUCAS','job':'SALESMAN','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7375 --value=('empno':7375,'name':'ROB','job':'CLERK','deptno':30) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7376 --value=('empno':7376,'name':'ADRIAN','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7377 --value=('empno':7377,'name':'ADAM','job':'CLERK','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7378 --value=('empno':7378,'name':'SALLY','job':'MANAGER','deptno':20) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7379 --value=('empno':7379,'name':'FRANK','job':'CLERK','deptno':10) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7380 --value=('empno':7380,'name':'BLACK','job':'CLERK','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;
put --key=7381 --value=('empno':7381,'name':'BROWN','job':'SALESMAN','deptno':40) --value-class=pivotal.au.se.deptemp.beans.Employee --region=employees;

Load into GemFire (assumes the JAR for the POJOs exists in the classpath of the GemFire cache servers)

The script below uses GFSH to load each file into the correct region, referencing the POJO classes specified in the files created above.

export CUR_DIR=`pwd`

gfsh <<!
connect --locator=localhost[10334];
run --file=$CUR_DIR/dept-data
run --file=$CUR_DIR/emp-data
!
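
Once both data files have been run, a quick sanity check from GFSH (a sketch, assuming the same locator as above) might look like this:

gfsh
connect --locator=localhost[10334];
query --query="select * from /departments";
query --query="select * from /employees";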

Below is what the Department.java POJO might look like, for example.
  
package pivotal.au.se.deptemp.beans;

public class Department
{
    private int deptno;
    private String name;

    public Department()
    {
    }

    public Department(int deptno, String name) {
        super();
        this.deptno = deptno;
        this.name = name;
    }

    public int getDeptno() {
        return deptno;
    }

    public void setDeptno(int deptno) {
        this.deptno = deptno;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public String toString() {
        return "Department [deptno=" + deptno + ", name=" + name + "]";
    }

}
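
The Employee POJO follows the same pattern - a minimal sketch, assuming just the four attributes used in the put commands above (empno, name, job, deptno):

package pivotal.au.se.deptemp.beans;

public class Employee
{
    private int empno;
    private String name;
    private String job;
    private int deptno;

    public Employee()
    {
    }

    public Employee(int empno, String name, String job, int deptno) {
        super();
        this.empno = empno;
        this.name = name;
        this.job = job;
        this.deptno = deptno;
    }

    public int getEmpno() {
        return empno;
    }

    public void setEmpno(int empno) {
        this.empno = empno;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getJob() {
        return job;
    }

    public void setJob(String job) {
        this.job = job;
    }

    public int getDeptno() {
        return deptno;
    }

    public void setDeptno(int deptno) {
        this.deptno = deptno;
    }

    @Override
    public String toString() {
        return "Employee [empno=" + empno + ", name=" + name + ", job=" + job + ", deptno=" + deptno + "]";
    }

}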
Categories: Fusion Middleware

Test your WebLogic 12.1.3 environment with Robot

Edwin Biemond - Sun, 2014-08-10 11:42
Robot Framework is a generic test automation framework which has an easy-to-use tabular test data syntax and it utilizes the keyword-driven testing approach. This means we can write our tests in readable and understandable text. If we combine this with the REST Management interface of WebLogic 12.1.3 we are able to test every detail of a WebLogic domain configuration and when we combine this...

Create with WLST a SOA Suite, Service Bus 12.1.3 Domain

Edwin Biemond - Thu, 2014-08-07 13:14
When you want to create a 12.1.3 SOA Suite, Service Bus Domain, you have to use the WebLogic config.sh utility. The 12.1.3 config utility is a big improvement when you compare this to WebLogic 11g. With this I can create some complex cluster configuration without any after configuration. But if you want to automate the domain creation and use it in your own (provisioning) tool/script then you...

Whistler, Microsoft and how far cloud has come

Steve Jones - Thu, 2014-08-07 09:00
In six years Microsoft has come from almost zero corporate knowledge about how cloud computing works to it being an integral part of their strategy.  Sure back in early 2008 there were some pieces of Microsoft that knew about cloud but that really wasn't a corporate view it was what a very few people inside the company knew. How do I know this? Well back in 2008 I was sitting on the top of a...
Categories: Fusion Middleware

Test your Application with the WebLogic Maven plugin

Edwin Biemond - Thu, 2014-07-31 05:47
In this blogpost I will show you how easy it is to add some unit tests to your application when you use Maven together with the 12.1.3 Oracle software (like WebLogic, JDeveloper or Eclipse OEPE). To demonstrate this, I will create a RESTful Person Service in JDeveloper 12.1.3 which will use the Maven project layout. We will do the following: Create a Project and Application based on a Maven...

Using HAWQ with PHD service in PCF 1.2

Pas Apicella - Mon, 2014-07-28 06:01
The following demo shows how to use the PCF 1.2 PHD service with HAWQ by loading data into the PCF PaaS platform.

1. First, let's set up our environment to use the correct version of Hadoop on our local laptop.

export HADOOP_INSTALL=/Users/papicella/vmware/software/hadoop/hadoop-2.0.5-alpha
export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home

export PATH=$PATH:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

export HADOOP_OPTS="$HADOOP_OPTS -Djava.awt.headless=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="

export YARN_OPTS="$YARN_OPTS -Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk -Djava.awt.headless=true"

hadoop version

2. Set the HADOOP_USER_NAME to ensure you have write access to load a file.

export HADOOP_USER_NAME=ucc3a04008db2486

3. Create a file called person.txt with some pipe-delimited data; an example is shown below.

[Mon Jul 28 21:47:37 papicella@:~/vmware/software/hadoop/cloud-foundry/pcf12/demo ] $ head person.txt
1|person1
2|person2
3|person3
4|person4
5|person5
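
A file like this can also be generated with a quick bash one-liner (a sketch; adjust the row count as needed):

for i in $(seq 1 5); do echo "${i}|person${i}"; done > person.txt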

4. Load the file into the PHD instance running in PCF 1.2. You will need to use the name node/path that is correct for your PHD instance.

[Mon Jul 28 21:51:43 papicella@:~/vmware/software/hadoop/cloud-foundry/pcf12/demo ] $ hadoop fs -put person.txt hdfs://x.x.x.x:8020/user/ucc3a04008db2486/

5. Create a HAWQ external table over the file person.txt using PXF as shown below.

CREATE EXTERNAL TABLE person (id int, name text)
LOCATION ('pxf://x.x.x.x:50070/user/ucc3a04008db2486/person.txt?Fragmenter=HdfsDataFragmenter&Accessor=TextFileAccessor&Resolver=TextResolver')
FORMAT 'TEXT' (DELIMITER = '|');

6. Query the table as shown below.
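
A simple verification query against the external table would look something like this:

select * from person order by id;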



For more information on the PHD service see the link below.

http://docs.pivotal.io/pivotalhd-ds/index.html

Categories: Fusion Middleware

Silence

Greg Pavlik - Sat, 2014-07-26 10:26
Silence. Sometimes sought after, but in reality almost certainly feared - the absence of not just sound but voice. Silence is often associated with divine encounter - the neptic tradition of the Philokalia comes to mind - but also and perhaps more accurately with abandonment, divine or otherwise. I recently read Shusaku Endo's Silence, a remarkable work, dwelling on the theme of abandonment in the context of the extirpation of Kakure Kirishitan communities in Tokugawa Japan. Many resilient families survived and eventually came out of hiding in the liberalization in the mid-19th century, but the persecutions were terrible. Their story is deeply moving (sufficiently so that over time I find myself drawn to devotion to the image of Maria-Kannon). Endo's novel was not without controversy but remains one of the great literary accomplishments of the 20th century.

In fact, the reason for this post is a kind of double entendre on silence: the relative silence in literate western circles with respect to Japanese literature of the past century. Over the last month, I realized that virtually no one I had spoken with had read a single Japanese novel. Yet, like Russia of the 19th century, Japan produced a concentration of great writers and great novelists over the last century that is set apart: the forces of profound national changes (and defeat) created the crucible of great art. That art carries the distinctive aesthetic sense of Japan - a kind of openness of form, but is necessarily the carrier of universal, humanistic themes.

Endo is a writer of the post-war period - the so-called third generation, and in my view the last of the wave of great Japanese literature. Read him. But don't stop - perhaps don't start - there. The early 20th century work of Natsume Soseki is a product of the Meiji period. In my view, Soseki is not only a father of Japanese literature but one of the greatest figures of world literature taken as a whole - I Am a Cat remains one of my very favorite novels. Two troubling post-war novels by Yukio Mishima merit attention - Confessions of a Mask and The Sailor Who Fell from Grace with the Sea, both of which I would characterize broadly as existential masterpieces. The topic of identity in the face of westernization is also a moving theme in Osamu Dazai's No Longer Human. I hardly mean this as a complete survey - something in any case I am not qualified to provide - just a pointer toward something broader and important.

My encounter with contemporary Japanese literature - albeit limited - has been less impactful (I want to like Haruki Murakami in the same way I want to like Victor Pelevin, but both make me think of the distorted echo of something far better). And again like Russia, it is difficult to know what to make of Japan today - where its future will lead, whether it will see a cultural resurgence or decline. It is certain that its roots are deep and I hope she finds a way to draw on them and to flourish.


Spark: A Discussion

Greg Pavlik - Wed, 2014-07-23 08:36
A great presentation, worth watching in its entirety.

With apologies to my Hadoop friends but this is good for you too.

Javascript Driven ADF Taskflows for WebCenter Portal

This is a continuation from my previous post - Developing WebCenter Content Cross Platform iDoc Enabled Components for Mobile, ADF, Sharepoint, Liferay.

You can see a video of the JIVE Forums integration comparing a JS taskflow with an ADF taskflow running in WebCenter Portal.


This post is aimed at web developers, designers, and marketing web teams who aren't familiar with ADF and want to create reusable dynamic taskflows without needing to learn ADF or Java - providing interactive dynamic regions using Javascript, HTML and CSS, with custom frameworks like jQuery designed not to conflict with the ADF JS environment.

Read on for a step-by-step run-through of creating JS driven taskflows -

    1. You will need to download JDeveloper – I’m using JDev 11.1.1.7.0 for WebCenter Portal 11g where I will deploy my custom taskflow driven entirely with Javascript.
    2. Run through the following Oracle guide to set up your project to extend Portal (11.1.1.8.3) - Developing Components for WebCenter Portal Using JDeveloper
    3. Add a new taskflow to the library by right-clicking WebCenterSpacesExtensions and selecting “New…”
    4. Add ADF Task Flow (JSF)
    5. Name the xml file, leaving the Directory the JDev default
    6. Double click the new xml file and drag a View element into the diagram from the Component Palette
    7. Rename "view1" to "[taskflow name]View".
    8. Double click the new view to create a page fragment.
      Update the directory and add \taskflows\[taskflow name]\view
      This will make it easier to sort through in the future when you develop more taskflows.
    9. Edit the JSFF and display code in source view.
    10. Replace with the following -
      <?xml version='1.0' encoding='UTF-8'?>
      <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page" version="2.1"
                xmlns:af="http://xmlns.oracle.com/adf/faces/rich"
                xmlns:f="http://java.sun.com/jsf/core">
      <af:resource type="javascript">
      <![CDATA[
      /**
       * CREATE BASE JS CONTAINER OBJ
       * This is a base class to assist PSA javascript methods to init after the page has loaded.
       * You can add this script in the head of your template instead of the portlet.
       */
      var FB = window.FB || {},
      	Base = Base || (function() {
      		return {
      			//create multi-cast delegate.
      			onPortalInit: function(function1, function2) {
      				return function() {
      					if (function1) {
      						function1();
      					}
      					if (function2) {
      						function2();
      					}
      				}
      			},
      			//used for chaining methods
      			chainPSA: function() {}
      		}
      	})();
      
      //Use Base method if FB.Base hasn't been created
      FB.Base = FB.Base || Base;
      /************************/
      
      
      
      
      /**
       * CREATE CHAIN WRAPPER
       * Chain method will initialise from Base requirejs core script
       */
      FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
      	//set base mustache template name to load and inject
      	var vUID = 'FB_sampleContainer_${pageFlowScope.containerID}', //(UID) Unique Classname to inject template into - can't use IDs in portal 
      		oConstructor = {
      			vTemplate: 		'import/tpl/sampleTpl', //location of sampleTpl.mustache to load
      			oParams: { //Obj list of default params pulled from sample.xml Input definition
      				title:			'${pageFlowScope.title}',
      				displayTitle: 	'${pageFlowScope.displayTitle}',
      				activeUser: 	'${pageFlowScope.activeUser}'
      			},
      			containerID: 		vUID
      		};
      	
      	//check if array exists from other custom JS Portlets
      	if (typeof(FB.loadTemplate) === 'object') {
      		FB.loadTemplate.portletUIDList.push(vUID);
      	//create empty object
      	} else {
      		FB.loadTemplate = {
      			portletUIDList:[vUID],
      			portlets: {}
      		};
      	}
      
      	//inject params
      	FB.loadTemplate.portlets[vUID] = oConstructor;
      
      });
      /************************/
      ]]>
      </af:resource>
      
      
      <!-- Sample template will be injected here -->
      <af:panelGroupLayout layout="vertical" id="FB-SampleContainer" styleClass="FB_sampleContainer_#{pageFlowScope.containerID} portlet-sampleContainer"></af:panelGroupLayout>
      <!-- xSample template will be injected here -->
      
      
      </jsp:root>

      OVERVIEW:

      This is where the mustache template will be injected to provide the sample component functionality.

    11. <af:panelGroupLayout layout="vertical" id="FB-SampleContainer" styleClass="FB_sampleContainer_#{pageFlowScope.containerID} portlet-sampleContainer"></af:panelGroupLayout>

      The oConstructor specifies the configuration of the component to inject.
      vTemplate points to a JS file that requireJS imports and configures the base multiUploader components from the params defined.

      oParams contains all configuration for the app; at the moment these are scoped params, associated with the taskflow, that you can allow the user to define and use within your sample component as JS vars.

      var vUID = 'FB_sampleContainer_${pageFlowScope.containerID}', //(UID) Unique Classname to inject template into - can't use IDs in portal 
      		oConstructor = {
      			vTemplate: 		'import/tpl/sampleTpl', //location of sampleTpl.mustache to load
      			oParams: { //Obj list of default params pulled from sample.xml Input definition
      				title:			'${pageFlowScope.title}',
      				displayTitle: 	'${pageFlowScope.displayTitle}',
      				activeUser: 	'${pageFlowScope.activeUser}'
      			},
      			containerID: 		vUID
      		};

      A simple check to see if other components exist on the page; if so, the new component is appended to the JS array "portletUIDList", associated with a JS object holding the component params in "portlets".

      //check if array exists from other custom JS Portlets
      	if (typeof(FB.loadTemplate) === 'object') {
      		FB.loadTemplate.portletUIDList.push(vUID);
      	//create empty object
      	} else {
      		FB.loadTemplate = {
      			portletUIDList:[vUID],
      			portlets: {}
      		};
      	}
      
      	//inject params
      	FB.loadTemplate.portlets[vUID] = oConstructor;

      Finally, the JS configuration is wrapped in a JS chain wrapper that will only initialise once requireJS has loaded all its core base libraries, like jQuery.

      FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
      
      //code
      
      });

      Make sure within your ADF template you have set up the requirejs core and have the following to initialise FB.Base.chainPSA and loop through the custom taskflows to display on the page -

      //load JS Components
      		if (FB.Base.chainPSA) {
      			FB.Base.chainPSA();
      		}

      //loop and request all templates required
      			for (x;x<lPortletList;x++) {
      				var vPortletUID 	= aPortletList[x],
      					oPortlet 		= FB.loadTemplate.portlets[vPortletUID];
      				
      				//define temp object info to pass into script when init	
      				define('temp'+x, oPortlet);
      				
      				//request and initialise portlet template & pass params
      				require([oPortlet.vTemplate,'temp'+x], function(tpl,oPortlet) {
      					console.log('[IMPORTED TEMPLATE]',tpl.component,oPortlet);
      					tpl.init(oPortlet);
      				});
      			}

    12. To add taskflow parameters, open the xml file again.
    13. Select the Overview tab at the bottom left of the screen.
      Select the Parameters side tab.
      Add the following four example params (the ones referenced from pageFlowScope in the fragment above) - title, displayTitle, activeUser and containerID.
      You will see these when we add the taskflow to a portal page and edit it in WebCenter Composer.
    14. Deploy the taskflow to WebCenter Portal following the last steps in the Oracle guide. Once the new taskflow / spaces extension project has been deployed, load WebCenter Portal.
      The following screenshots are from PS5; the UI has changed as of PS7, but you should be able to work out the differences.
    1. Go into the administration area of the portal and select the "Resources" tab.
    2. Select "Resource Catalogs" from the items on the left under the "Structure" heading.
      A list of Resource Catalogs will be available. You can create a new one or use an existing one. Make sure the one you are updating is the one being used by the portal you want to add the taskflow into.
    3. Select the resource catalogue and choose Edit from the Edit menu drop-down.
    4. A window will appear where you can add folders and choose where you want your components to appear.
      I have created a Demo Taskflow folder.
    5. Select “Add From Library” from the Add dropdown menu.
    6. Drill into Taskflows and add your [Taskflow] – I am adding the sample taskflow I created earlier.
    7. Go into your portal, create a new page, and add the new taskflow.
      Here is an example of the Jive Forums that I recreated as a JS driven taskflow.
    8. And the final output of the taskflow on the page.

The post Javascript Driven ADF Taskflows for WebCenter Portal appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Developing WebCenter Content Cross Platform iDoc Enabled Components for Mobile, ADF, Sharepoint, Liferay

So over the last couple of months I’ve been thinking and tinkering with code, wondering, “What’s the best approach for creating WebCenter Content (WCC) components that I can consume and reuse across multiple platforms and environments?”
Is it pagelet producer or maybe an iFrame? These solutions just weren’t good enough or didn’t allow the flexibility I really wanted.

I needed a WCC solution that could easily be consumed into mobile, either Cordova (hybrid app) or ADF Mobile (AMX views), and that worked on different devices/platforms as well as on any enterprise app, i.e. Sharepoint (.Net), Liferay, WebCenter Portal (ADF), or even consumed into the new WebCenter Content ADF WebUI. It also needed to provide the added advantage that there would not need to be multiple branches of code or redevelopment of the component for each platform and environment.

And in the famous words of Victor Frankenstein... “It’s Alive!!”

After tinkering around and trying different approaches, this is the solution I created to support the above model.
I’m not saying this is the right approach or supported by the enterprise vendors, but an approach that is reusable and can work on all enterprise apps.

 


Here’s a quick video of a drag/drop MultiUploader component I created for WebCenter Content Classic that I can reuse on .Net and ADF WebCenter Portal/Content as well as mobile.

Read on to find out more on how this was achieved.

1) First, I’m going to dig into WebCenter Content and explain the underlying structure of the component.

To create a flexible base model, I created a light Javascript framework, very similar to AngularJS or ReactJS.

This would be the base component that would enable additional components on the page with the use of Mustache (JS templates) to drive and inject dynamic functional areas of content into a specified DOM node by ID or className.
Any changes of layout within the component are handled via an AJAX request to a cached mustache template, which updates the DOM when needed (similar to ADF's PPR). Any user interaction is handled through event-driven actions from the imported templates.

RequireJS is used to supply a flexible module-loading framework, so I do not need to worry about conflicts between JS libraries; it is also used to load in mustache templates and additional JS functionality when needed.
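
As a rough sketch of that pattern (hypothetical module and class names, not the actual Fishbowl code), loading and rendering a cached mustache template via RequireJS might look like this:

//hypothetical sketch: fetch a mustache template via the requirejs text plugin and render it
require(['jquery', 'mustache', 'text!import/tpl/sampleTpl.mustache'],
	function($, Mustache, tplSource) {
		//render the template with some params and inject into the container by className
		var html = Mustache.render(tplSource, { title: 'Sample' });
		$('.FB_sampleContainer').html(html);
});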

You’re probably thinking that there are going to be a lot of AJAX requests going back and forth and it’s going to be slow. Just check out the video – the answer is not really. The mustache templates are going to be smaller than average images you load on a page.

So as an example for the MultiUploader, I only have 1 mustache template that is 9kb. All interaction is handled by 2 JS files that are 39kb uncompressed.

2) As mentioned, a base model WCC component, “FishbowlModuleLoader”, will load in and initiate all other components on the page, and will only load and cache required templates and JS files as and when they are needed. There is no point loading all templates and JS functionality on a page if they are not needed; this improves performance and interaction of the component.

3) Following is a quick overview of how the WCC component “FishbowlMultiUploader” works.

WebCenter Content Resource Asset

This is the base structure of the Content Component configuration, “fb_multi_upload_page_body”. It is consumed into a custom template, “MULTI_UPLOAD_PAGE”, which is requested via a custom service request, “?IdcService=GET_FB_MULTI_UPLOAD_PAGE”.

<!--
Name:           fb_multi_upload_page_body
Author:         John Sim  [18/06/2014]
Parameters:		
Description:	Page Body for Multi Checkin used in MULTI_UPLOAD_PAGE template
-->
<@dynamichtml fb_multi_upload_page_body@>
[[% FB fb_multi_upload_page_body Template body MULTI_UPLOAD_PAGE %]]

<div id="FB-multiCheckin" class="FB_multiCheckin"></div>

<script>
/**
 * CREATE CHAIN WRAPPER
 * Chain method will load from Base ModuleLoader requirejs core script
 */
FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {
	//set base mustache template name to load and inject
	var vUID = 'FB_multiCheckin', //(UID) Unique Classname to inject template into - can't use IDs in portal 
		oConstructor = {
			vTemplate: 'import/tpl/multiUploadTpl', //location of template.mustache to load
			oParams: { //Obj list of default params pulled from multiUploader.xml Input definition
				maxUploadSize:			'10mb',
				defaultDocType:			('<$multiUploadDefaultType$>' !== '')? '<$multiUploadDefaultType$>': 'Document', 
				defaultSecurityGroup:		('<$multiUploadDefaultSecurityGroup$>' !== '')? '<$multiUploadDefaultSecurityGroup$>': 'Public',
				defaultAccount:			'Workspace/'+userName, 
				author:				(typeof(userName) !== 'undefined')? userName: '', 
				httpEnterpriseCgiPath: 		(typeof(httpEnterpriseCgiPath) !== 'undefined')? httpEnterpriseCgiPath: '',
				idcToken: 			(typeof(idcToken) !== 'undefined')? idcToken: '',
				httpWebRoot: 			(typeof(httpWebRoot) !== 'undefined')? httpWebRoot: '',
				enableTagging:			true,
				enableEmails:			true,
				enableBarcode:			true,
				enableCheckinProfiles: 		true,
				showHelpOption: 		true
			},
			containerID: 		vUID
		};
	
	//check if array exists from other custom JS Portlets
	if (typeof(FB.loadTemplate) === 'object') {
		FB.loadTemplate.portletUIDList.push('FB_multiUploadContainer_' + vUID);
	//create empty object
	} else {
		FB.loadTemplate = {
			portletUIDList:['FB_multiUploadContainer_' + vUID],
			portlets: {}
		};
	}

	//inject params
	FB.loadTemplate.portlets['FB_multiUploadContainer_' + vUID] = oConstructor;
});
/************************/
</script>


<@end@>

This is where the mustache template will be injected to provide the multiUpload component functionality.

<div id="FB-multiCheckin" class="FB_multiCheckin"></div>

The oConstructor specifies the configuration of the component to inject.
vTemplate points to a JS file that requireJS imports and configures the base multiUploader components from the params defined.

oParams contains all configuration for the app; at the moment, these are mostly hard coded, but could be defined as iDoc Variables when you install and enable the component within WCC.

var vUID = 'FB_multiCheckin', //(UID) Unique Classname to inject template into - can't use IDs in portal 
		oConstructor = {
			vTemplate: 'import/tpl/multiUploadTpl', //location of template.mustache to load
			oParams: { //Obj list of default params pulled from multiUploader.xml Input definition
				maxUploadSize:			'10mb',
				defaultDocType:			('<$multiUploadDefaultType$>' !== '')? '<$multiUploadDefaultType$>': 'Document', 
				defaultSecurityGroup:		('<$multiUploadDefaultSecurityGroup$>' !== '')? '<$multiUploadDefaultSecurityGroup$>': 'Public',
				defaultAccount:			'Workspace/'+userName, 
				author:				(typeof(userName) !== 'undefined')? userName: '', 
				httpEnterpriseCgiPath: 		(typeof(httpEnterpriseCgiPath) !== 'undefined')? httpEnterpriseCgiPath: '',
				idcToken: 			(typeof(idcToken) !== 'undefined')? idcToken: '',
				httpWebRoot: 			(typeof(httpWebRoot) !== 'undefined')? httpWebRoot: '',
				enableTagging:			true,
				enableEmails:			true,
				enableBarcode:			true,
				enableCheckinProfiles: 		true,
				showHelpOption: 		true
			},
			containerID: 		vUID
		};

This is a simple check to see if other components exist on the page; if so, the new component is appended to the JS array “portletUIDList”, associated with a JS object holding the component params in “portlets”.

//check if array exists from other custom JS Portlets
	if (typeof(FB.loadTemplate) === 'object') {
		FB.loadTemplate.portletUIDList.push('FB_multiUploadContainer_' + vUID);
	//create empty object
	} else {
		FB.loadTemplate = {
			portletUIDList:['FB_multiUploadContainer_' + vUID],
			portlets: {}
		};
	}

	//inject params
	FB.loadTemplate.portlets['FB_multiUploadContainer_' + vUID] = oConstructor;

Finally, the JS configuration is wrapped in a JS chain wrapper that will only initialize once requireJS has loaded all its core base libraries, like jQuery.

FB.Base.chainPSA = FB.Base.onPortalInit(FB.Base.chainPSA, function() {

//code

});

 

4) So let’s take a look at how the base component “FishbowlModuleLoader” works.

Essentially, this defines the FB.Base.chainPSA chain wrapper method in the header – it does not need jQuery or any other library.

<!--
Name:           std_html_head_declarations
Author:         John Sim  [18/06/2014]
Parameters:		
Description:	Add required header resources
-->
<@dynamichtml std_html_head_declarations@>
[[% FB std_html_head_declaration Update head add JS libs for module loader %]]

<$include super.std_html_head_declarations$>

<script>
/**
 * CREATE BASE JS CONTAINER OBJ
 * DO NOT ADD JQUERY - this is a base class to assist PSA javascript methods to init after the page has loaded.
 */
var FB = window.FB || {},
	Base = Base || (function() {
		return {
			//create multi-cast delegate.
			onPortalInit: function(function1, function2) {
				return function() {
					if (function1) {
						function1();
					}
					if (function2) {
						function2();
					}
				}
			},
			//used for chaining methods
			chainPSA: function() {}
		}
	})();

//Use Base method if FB.Base hasn't been created
FB.Base = FB.Base || Base;
/************************/
</script>

<@end@>

You could cache this and put it in a script file; I’ve just put it inline to make it easier for you to read.

In the footer, we define requireJS and the configuration that loads in the base libraries we need for all components, i.e. jQuery and maybe a few others.
We also set up fb.core.js as our base script to import and load in the core framework I built to handle routing and DOM event interaction, as well as global vars.

<!--
Name:           std_page_end
Author:         John Sim  [18/06/2014]
Parameters:		
Description:	Component Module Loader RequireJS setup
-->
<@dynamichtml std_page_end@>
[[% FB std_page_end Add Module Loader RequireJS lib %]]

<$include super.std_page_end$>


<!-- Init FB Component Module Loader -->
<script src="<$HttpWebRoot$>resources/FishbowlModuleLoader/js/core/config.js"></script>
<script src="<$HttpWebRoot$>resources/FishbowlModuleLoader/js/libs/requirejs/require.min.js" data-main="fb.core"></script>
<!-- Init FB Component Module Loader -->
<@end@>

fb.core.js is where the magic begins:

// REQUIREJS Base configuration
require([
	//Dom ready req plugin
	'domReady',
	
	
	//core 
	'import/Layout',
	'import/Action',
	'import/Navigation',
	'import/Global',
	
	
	//Plugins
	'Moment',		//date plugin momentjs
	'ftlabsFastClick', 	//fix touch 300ms delay
	'fb'			//fb global methods
	

	
], function(domReady, Layout){
console.info('[ALL MODULES LOADED]');

	domReady(function() {
		console.info('[DOM READY]');
		
		//initialise layout DOM events ie click, touch etc.
		Layout.init();
		
		//load JS Components
		if (FB.Base.chainPSA) {
			FB.Base.chainPSA();
		}
		
		//check if any JS driven template containers exist
		if (typeof(FB.loadTemplate) !== 'undefined') {
			var aPortletList 	= FB.loadTemplate.portletUIDList,
				lPortletList 	= aPortletList.length,
				x 				= 0;
				
			//loop and request all templates required
			for (x;x<lPortletList;x++) {
				var vPortletUID 	= aPortletList[x],
					oPortlet 		= FB.loadTemplate.portlets[vPortletUID];
				
				//define temp object info to pass into script when init	
				define('temp'+x, oPortlet);
				
				//request and initialise portlet template & pass params
				require([oPortlet.vTemplate,'temp'+x], function(tpl,oPortlet) {
					console.log('[IMPORTED TEMPLATE]',tpl.component,oPortlet);
					tpl.init(oPortlet);
				});
			}
		}
		
	});
	
});

Once the DOM has fully loaded, FB.Base.chainPSA() is initiated. This sets up and configures the FB.loadTemplate object that contains all the information associated with the components that will need to be loaded into the page.

Here we loop through and load in all templates, and pass across the component configuration to the templates to be initialized:

//loop and request all templates required
			for (x;x<lPortletList;x++) {
				var vPortletUID 	= aPortletList[x],
					oPortlet 		= FB.loadTemplate.portlets[vPortletUID];
				
				//define temp object info to pass into script when init	
				define('temp'+x, oPortlet);
				
				//request and initialise portlet template & pass params
				require([oPortlet.vTemplate,'temp'+x], function(tpl,oPortlet) {
					console.log('[IMPORTED TEMPLATE]',tpl.component,oPortlet);
					tpl.init(oPortlet);
				});
			}

And that’s all there is to it.

5) Let’s dig into WebCenter Portal now. How can you reuse all that code you’ve written for WebCenter Content Classic within ADF?

Easy: let’s create a JS driven taskflow template that we can dump into the resource catalogue and drag, drop, and reuse it throughout any page where ever it is needed.

I’ve created a new post for this part:
Read on here to find out how to create JS Driven Taskflow templates.

 

Some gotcha’s - 

Some things to think about if you do decide to use this approach.

  1. You will need to make sure that all AJAX requests are made on the same domain,
    1. or enable CORS on UCM to accept requests cross-domain (mobile works cross-domain); see the sketch after this list.
  2. WCC needs to be accessible by the user's browser.
    1. You can set up a proxy service and only allow access to the custom services you require, to lock down other UCM environment access if needed.
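
As an illustration of the CORS option - a sketch only, with a hypothetical portal host name, assuming UCM is fronted by an Apache HTTP Server with mod_headers enabled:

# allow the portal origin to call the UCM services cross-domain
<Location /cs/idcplg>
    Header set Access-Control-Allow-Origin "https://portal.example.com"
    Header set Access-Control-Allow-Methods "GET, POST"
</Location>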

And finally, one thing that comes to mind here: I am using static mustache templates, but there is nothing stopping you from creating a custom WCC service to generate mustache templates with embedded iDoc if you want.

The post Developing WebCenter Content Cross Platform iDoc Enabled Components for Mobile, ADF, Sharepoint, Liferay appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Exactly Wrong

Greg Pavlik - Mon, 2014-07-21 08:58
I normally avoid anything that smacks of a competitive discussion on what I consider to be a space for personal reflection. So while I want to disclose the fact that I am not disinterested in the points I am making from a professional standpoint, my main interest is to frame some architecture points that I think are extremely important for the maturation and success of the Hadoop ecosystem.

A few weeks back, Mike Olson of Cloudera spoke at Spark Summit on how Spark relates to the future of Hadoop. The presentation can be found here:

http://youtu.be/8kcdwnbHnJo

In particular I want to draw attention to the statement made at 1:45 in the presentation that describes Spark as the "natural successor to MapReduce" - it becomes clear very quickly that what Olson is talking about is batch processing. This is fascinating as everyone I've talked to immediately points out one obvious thing: Spark isn't a general purpose batch processing framework - that is not its design center. The whole point of Spark is to enable fast data access and interactivity.
 
The guys that clearly "get" Spark - unsurprisingly - are DataBricks. In talking with Ion and company, it's clear they understand the use cases where Spark shines - data scientist driven data exploration and algorithmic development, machine learning, etc. - things that take advantage of the memory mapping capabilities and speed of the framework. And they have offered an online service that allows users to rapidly extract value from cloud friendly datasets, which is smart.

Cloudera's idea of pushing SQL, Pig and other frameworks onto Spark is actually a step backwards - it is a proposal to recreate all the problems of MapReduce 1: it fails to understand the power of refactoring resource management away from the compute model. Spark would have to reinvent and mature models for multi-tenancy, resource management, scheduling, security, scale-out, etc. that are frankly already there today for Hadoop 2 with YARN.

The announcement of an intent to lead an implementation of Hive on Spark got some attention. This was something that I looked at carefully with my colleagues almost 2 years ago, so I'd like to make a few observations on why we didn't take this path then.

The first was maturity, in terms of the Spark implementation, of Hive itself, and Shark. Candidly, we knew Hive itself worked at scale but needed significant enhancement and refactoring for both new features on the SQL front and to work at interactive speeds. And we wanted to do all this in a way that did not compromise Hive's ability to work at scale - for real big data problems. So we focused on the mainstream of Hive and the development of a Dryad like runtime for optimal execution of operators in physical plans for SQL in a way that meshed deeply with YARN. That model took the learnings of the database community and scale out big data solutions and built on them "from the inside out", so to speak.

Anyone who has been tracking Hadoop for, oh, the last 2-3 years will understand intuitively that the right architectural approach needs to be based on YARN. What I mean is that the query execution must - at the query task level - be composed of tasks that are administered directly by YARN. This is absolutely critical for multi-workload systems (this is one reason why a bolt-on MPP solution is a mistake for Hadoop - it is at best a tactical model while the system evolves). This is why we are working with the community on Tez, a low level framework for enabling YARN native domain specific execution engines. For Hive-on-Tez, Hive is the engine and Tez provides the YARN level integration for resource negotiation and coordination for DAG execution: a DAG of native operators analogous to the execution model found in the MPP world (when people compare Tez and Spark, they are fundamentally confused - Spark could be run on Tez, for example, for a much deeper integration with Hadoop 2). This model allows the full range of use cases from interactive to massive batch to be administered in a deeply integrated, YARN native way.

Spark will undoubtedly mature into a great tool for what it is designed for: in memory, interactive scenarios - generally script driven - and likely grow to subsume new use cases we aren't anticipating today. It is, however, exactly the wrong choice for scale out big data batch processing in anything like the near term; worse still, returning to a monolithic general purpose compute framework for all Hadoop models would be a huge regression and is a disastrously bad idea.

Dependent Rational Animals

Greg Pavlik - Sun, 2014-07-20 16:32
I wanted to briefly comment on Alasdair MacIntyre's lectures collected as "Dependent Rational Animals", but let me precede that with a couple of comments for context: first, as I alluded in my last post referencing Levinas, it is my view that ethics demands a certain primacy in any healthy conception of life and society; second, in the area of ethics, MacIntyre's After Virtue is the book that has had perhaps the biggest impact on my own thinking.

One of the criticisms of MacIntyre is that his critique of rational ethics is, on the one hand, devastating; on the other hand, his positive case for working out a defense of his own position - a revivification of social ethics in the Aristotelian-Thomist tradition(s) - was somewhat pro forma. I think this is legitimate in so far as it relates to After Virtue itself (I believe I have read the latest edition - 3 - most recently), though I am not enough of a MacIntyre expert to offer a defensible critique of his work overall.

I do, however, want to draw attention to Dependent Rational Animals specifically in this light. Here MacIntyre begins with the position of the human as animal - a kind of naturalist starting point for developing another pass at the importance of the tradition of the virtues. What is most remarkable is that in the process of exploring the implications of our "animality" MacIntyre manages to subvert yet another trajectory of twentieth century philosophy, this time as it relates to the primacy of linguistics. The net effect is to restore philosophical discourse back toward the reality of the human condition in the context of the broader evolutionary context of life on earth without - and this I must say is the most amazing part of this book - resorting to fables-masked-as-science (evolutionary psychology).

Big Data doom mongers need to look outside of the marketing department

Steve Jones - Mon, 2014-07-14 09:10
In every change there are hype machines that over play and sages who call doom.  Into the Big Data arena steps David Searls to proclaim that Big Data is a myth and simply hype which is set to burst in an article over at ZDNet. But big data, he said, is nothing more than the myth that collecting vast amounts of data can help companies know customers better than those customers even know...
Categories: Fusion Middleware

George EP Box

Greg Pavlik - Mon, 2014-07-07 15:22
"Essentially, all models are wrong. Some models are useful."

The Other

Greg Pavlik - Thu, 2014-07-03 11:33
It is the nature of short essays or speeches that they can at best explore the surface of an idea. This is a surprisingly difficult task, since ideas worth exploring usually need to be approached with some rigor. The easy use of the speech form is to promote an idea to listeners or readers who already share a common view - that is one reason speeches are effective forms for political persuasion for rallying true believers. It's much more difficult to create new vantage points or vistas into a new world - a sense of something grander that calls for further exploration.

Yet this is exactly what Ryszard Kapuscinski accomplishes in his series of talks published as The Other. Here, the Polish journalist builds on his experience and, most importantly, on the reflections of the Lithuanian-Jewish philosopher Emmanuel Levinas to reflect on how the encounter with the Other in a broad, cross-cultural sense is the defining event - and opportunity - in late (or post) modernity. For Kapuscinski, the Other is specifically the non-European cultures in which he spent most of his career as a journalist. For another reader it might be someone very much like Kapuscinski himself.

There are three simple points that Kapuscinski raises that bear attention:

1) The era we live in provides a unique, interpersonal opportunity for encounter with the Other - which is to say that we are neither in the era of relative isolation from the Other that dominated much of human history, nor are we any longer in the phase of violent domination that marked the period of European colonial expansion. We have a chance to make space for encounter to be consistently about engagement and exchange, rather than conflict.

2) This encounter cannot be primarily technical; it must be interpersonal. Technical means are not only anonymous but more conducive to inculcating mass culture than to creating space for authentic personal engagement. The current period of human history - post-industrial, urbanized, technological - is given to mass culture, mass movements, as a rule - this is accelerated by globalization and communications advances. And while it is clear that the early "psychological" literature of the crowd - and I am thinking not only of the trajectory set by Gustave Le Bon, but the later and more mature reflections of Ortega y Gasset - was primarily reactionary, nonetheless it points consistently to the fact that the crowd involves not just a loss of identity, but a loss of the individual: it leaves little room for real encounter and exchange.

While the increasing ability to encounter different cultures offers the possibility of real engagement,  at the same time modern mass culture is the number one threat to the Other - in that it subordinates the value of whatever is unique to whatever is both common and most importantly sellable. In visiting Ukraine over the last few years, what fascinated me the most were the things that made the country uniquely Ukrainian. Following a recent trip, I noted the following in a piece by New York Times columnist Nicholas Kristof on a visit to Karapchiv: "The kids here learn English and flirt in low-cut bluejeans. They listen to Rihanna, AC/DC and Taylor Swift. They have crushes on George Clooney and Angelina Jolie, watch “The Simpsons” and “Family Guy,” and play Grand Theft Auto. The school here has computers and an Internet connection, which kids use to watch YouTube and join Facebook. Many expect to get jobs in Italy or Spain — perhaps even America."

What here makes the Other both unique and beautiful is being obliterated by mass culture. Kristof is, of course, a cheerleader for this tragedy, but Kapuscinski asks us to look for ways to build up and offer support to the Other in encounter.

3) Lastly and most importantly, for encounter with the Other to be one of mutual recognition and sharing, the personal encounter must have an ethical basis. Kapuscinski observes that the first half of the last century was dominated by Husserl and Heidegger - in other words by epistemic and ontological models. It is no accident, I think, that the same century was marred by enormities wrought by totalizing ideologies - where ethics is subordinated entirely, ideology can rage out of control. Kapuscinski follows Levinas in response - ultimately seeing the Other as a source of ethical responsibility is an imperative of the first order.

The diversity of human cultures is, as Solzhenitsyn rightly noted, the "wealth of mankind, its collective personalities; the very least of them wears its own special colors and bears within itself a special facet of God's design." And yet it is only if we can encounter the Other in terms of mutual respect and self-confidence, in terms of exchange and recognition of value in the Other, that we can actually see the Other as a treasure - one that helps ground who I am as much as it reveals the treasure for what it is. And this is our main challenge - the other paths, conflict and exclusion, are paths we cannot afford to tread.

Maven support for 12.1.3 Service Bus & SOA Suite artifacts

Edwin Biemond - Fri, 2014-06-27 14:56
With the 12.1.3 release of Oracle Service Bus and Oracle SOA Suite we finally can build all our soa projects with Maven. And this time we can do it natively without calling a utility like configjar or ANT from Maven. We start by setting all the required variables like JAVA_HOME, M2_HOME and PATH export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/jre export M2_HOME=...

Open Source as religion - when the Bazaar becomes a Cathedral

Steve Jones - Fri, 2014-06-27 09:00
The seminal book on Open Source development "Cathedral and the Bazaar" talks eloquently about the difference between commercial software development and open source development.  In the past few years however there has been another shift, a shift where companies are actively releasing their technology into Open Source as a competitive differentiation.  A claim of 'we are open' because the source...
Categories: Fusion Middleware

Pivotal Cloud Foundry Installed lets create an ORG / USER to get started

Pas Apicella - Thu, 2014-06-26 18:16
I installed Pivotal Cloud Foundry 1.2 recently, and the commands below are what I run using the CLI to quickly create an ORG and a USER to get started with. The below assumes you're connected as the ADMIN user to set a new ORG up.

Cloud Foundry CLI Commands as follows

cf api {cloud end point}
cf create-org pivotal
cf create-user pas pas
cf set-org-role pas pivotal OrgManager
cf target -o pivotal
cf create-space development
cf create-space test
cf create-space production
cf set-space-role pas pivotal production SpaceDeveloper
cf set-space-role pas pivotal development SpaceDeveloper
cf set-space-role pas pivotal test SpaceDeveloper
cf login -u pas -p pas -s development
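
To verify the setup afterwards, a few standard cf CLI commands will list what was created (run from the same session):

cf orgs
cf org-users pivotal
cf spaces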
Categories: Fusion Middleware

Announcing Fishbowl’s Technical Support Offerings for Oracle WebCenter


Supporting an enterprise software system like Oracle WebCenter is no easy task. Technical complexities, customizations, and multiple versions make it difficult to resolve issues quickly and keep the system up and running. Without a dedicated and knowledgeable support team, WebCenter environments can suffer from system downtime, poor performance, and frustrated users.

Join Fishbowl Solutions for a webinar as they discuss their Oracle WebCenter technical support offerings. These offerings include specific technical services to support WebCenter administrators and end users, as well as customized environments. If you are a WebCenter administrator, power user, or an IT director/manager who oversees your company's WebCenter environment, this webinar is for you. Come hear how Fishbowl's support offerings could help you increase up-time, improve SR issue resolution, and ensure overall user satisfaction.

Attendees of this webinar will learn:

  • The reasons Fishbowl is best positioned to be your single point of contact for Oracle WebCenter technical support
  • The support services Fishbowl offers and what each includes
  • The benefits Cascade Corporation has already realized with Fishbowl’s Enterprise Support offering for Oracle WebCenter

Date: Thursday, June 12th
Time: 1:00 – 2:00 PM EST, 12 – 1:00 PM CST

Register: https://www2.gotomeeting.com/register/941379506

 

The post Announcing Fishbowl’s Technical Support Offerings for Oracle WebCenter appeared first on Fishbowl Solutions' C4 Blog.

Categories: Fusion Middleware, Other

Telling Tales

Greg Pavlik - Sun, 2014-06-08 17:50
After struggling to find time for many months, I finally was able to sit down and watch without interruption Benjamin Bagby's Beowulf performance - an adaptation that relies on Bagby's voice and a reconstruction of a 6th century six-tone Anglo-Saxon harp. The performance is engrossing and provokes a strong imaginative response, one that would have been communally experienced. Of course the only way to revive a sense of communal experience in the case of Bagby is to see him perform live - however, given the performance is entirely in Old English and as such mostly unintelligible without subtitles, I think a digital adaptation may be a necessary tradeoff. In many ways, Bagby's Beowulf is a reminder of how impoverished our notion of entertainment is - ephemeral, base, isolating and essentially throwaway as a rule.

By the way, it's not entirely the case that the English are unable to create something of the same texture today - several times during the performance I thought of Judith Weir's one-person, unaccompanied opera King Harald's Saga. Weir's work is much shorter, principally a musical composition and less poetically rich, so it is difficult to compare the two directly: Beowulf remains the provenance of a balladeer first and foremost, and this is a genre that more and more feels lost to our world - poetry today rarely seems to be meant to be read aloud and even more rarely follows epic formats. This is a lost social phenomenon, for which we are impoverished: in fact, the last long work of a balladeer I read was the Ethiopian Enzira Sebhat, itself a medieval work dedicated to the Virgin Mary. The closest - though only indirectly comparable - works to the Enzira Sebhat that I am aware of currently being composed are the akathistos hymns of the Russian Orthodox tradition. And while many of those recent compositions are less-than-accomplished literary works, they unquestionably represent a rich and living and at times very beautiful means of transmission of communal memory and values. Though I am not aware of any recent akathistos compositions that match the expressive beauty and originality of the Byzantine hymnographer Romanos the Melodist, the modern akathist has sometimes proven a source of inspiration for exceptionally great art: the late Sir John Tavener's setting of the "thanksgiving akathist" being perhaps the most significant case in point.