
Anthony Shorten

Technical Advice for Oracle Tax and Utilities products

Password Change Sample - Updated

Wed, 2014-07-16 17:31

In the Technical Best Practices whitepaper (Doc Id: 560367.1), available from My Oracle Support, there is a section (Password Management Solution for Oracle WebLogic) that mentions a sample password change JSP that used to be provided by BEA for WebLogic. That site is no longer available, but the sample code is now available on this blog.

Now, this is an example only and is very generic. It is not a drop-in feature that you can simply place in your installation, but the example is sufficient to give an idea of the Oracle WebLogic API available for changing your password. It is meant to allow you to develop a CM JSP if you require this feature.

There is NO support for this as it is sample code only. It is merely an example of the API available. A link to the code is here. Examine it to get ideas for your own solutions.

The API used will most probably work for any security system that is configured as an authentication security provider.
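
For illustration, below is a minimal, hedged sketch (not the original sample) of the kind of call involved, using the UserPasswordEditorMBean exposed by the default authentication provider. The realm name (myrealm), the JMX object name and the JNDI name of the Runtime MBean Server are assumptions that should be checked against your own WebLogic configuration.

    // Hedged sketch only: change a user's password via the default authentication provider.
    // Intended to run inside the WebLogic server JVM (for example from a CM JSP or servlet).
    import javax.management.MBeanServer;
    import javax.management.ObjectName;
    import javax.naming.InitialContext;

    public class PasswordChangeSketch {
        public static void changePassword(String user, String oldPassword, String newPassword) throws Exception {
            InitialContext ctx = new InitialContext();
            try {
                // Runtime MBean Server exposed to applications deployed on the server.
                MBeanServer mbs = (MBeanServer) ctx.lookup("java:comp/env/jmx/runtime");
                // Default authenticator of the "myrealm" security realm (assumption - adjust to your realm).
                ObjectName authenticator = new ObjectName("Security:Name=myrealmDefaultAuthenticator");
                // UserPasswordEditorMBean.changeUserPassword(userName, oldPassword, newPassword)
                mbs.invoke(authenticator, "changeUserPassword",
                           new Object[] { user, oldPassword, newPassword },
                           new String[] { "java.lang.String", "java.lang.String", "java.lang.String" });
            } finally {
                ctx.close();
            }
        }
    }

Any authentication provider that implements the password editor interface can be targeted the same way, which is why the approach should work for most configured authentication security providers.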

Introduction to BatchEdit

Sun, 2014-07-06 21:14

BatchEdit is a new wizard-style utility to help you build a batch architecture quickly with little fuss and technical knowledge. Customers familiar with the WLST tool that is shipped with Oracle WebLogic will recognize the style of utility I am talking about. The idea behind BatchEdit is simple: it provides a simpler method of configuring batch by boiling the process down to its simplest form. The power of the utility lies in the utility itself and the set of pre-optimized templates shipped with it, which generate as much of the configuration as possible while still allowing a flexible approach to configuration.

First of all, the BatchEdit utility, shipped with OUAF 4.2.0.2.0 and above, is disabled by default for backward compatibility. To enable it you must execute the configureEnv[.sh] -a utility and, in option 50, set Enable Batch Edit Functionality to true and save the changes. The facility is then available to use.

Once enabled, the BatchEdit facility can be executed using the bedit[.sh] <options> utility, where <options> are the options you want to use with the command. The most useful are -h and --h, which display the help for the command options and the extended help. You will find lots of online help in the utility; just type help <topic> to get an explanation and further advice on a specific topic.

The next step is using the utility. The best approach is to think of the configuration as various layers. The first layer is the cluster. The next layer is the definition of threadpools in that cluster, and then the submitters (or jobs) that are submitted to those threadpools. Each of those layers has configuration files associated with it.

Concepts

Before understanding the utility, let's discuss a few basic concepts:

  • BatchEdit allows "labels" to be assigned to each layer. This means you can group like-configured components together. For example, say you wanted to set up a specific threadpoolworker for a specific set of processes and that threadpoolworker had unique characteristics such as unique JVM settings. You can create a label template for that set of jobs and dynamically build it. At runtime you would tell the threadpoolworker[.sh] command to use that template (using the -l option). For submitters, the label is the Batch Code itself.
  • BatchEdit will track whether changes are made during a session. If you try to exit without saving, a warning is displayed to remind you of unsaved changes. Customers of the Oracle Enterprise Manager pack for Oracle Utilities will be able to track configuration file version changes within Oracle Enterprise Manager, if desired.
  • BatchEdit essentially edits existing configuration files (e.g. tangosol-coherence-override.xml for the cluster, threadpoolworker.properties for the threadpoolworker, etc.). To ascertain which file is being configured during a session, use the what command.
  • BatchEdit will only show the valid options for the scope of the command and the template used. This applies to the online help which is context sensitive.
Using the utility

The BatchEdit utility has two distinct modes to build and maintain various configuration files.

  • Initiation Mode - The first mode is to invoke the utility with the scope or configuration file to create and/or manage. This is done by specifying the valid options at the command line. This mode is recorded in a preferences file to remember specific settings across invocations. For example, once you decide which cluster type you want to adopt, the utility will remember this preference and show the options for that preference only. It is possible to switch preferences by re-invoking the command with the appropriate options.
  • Edit Mode - Once you have invoked the command, a list of valid options is presented which can be altered using the set command. For example, the set port 42020 command will set the port parameter to 42020. You can add new sections using the add command, and so forth. Online help will show the valid commands. The most important is the save command, which saves all changes.
Process for configuration

To use the command effectively, here is a summary of the process you need to follow:

  • Decide your cluster type first. Oracle Utilities Application Framework supports multi-cast, uni-cast and single server clusters. Use the bedit[.sh] -c [-t wka|mc|ss] command to set and manage the cluster parameters. For example:
$ bedit.sh -c
Editing file /oracle/FW42020/splapp/standalone/config/tangosol-coherence-override.xml using template /oracle/FW42020/etc/tangosol-coherence-override.ss.be

Batch Configuration Editor 1.0 [tangosol-coherence-override.xml]
----------------------------------------------------------------

Current Settings

  cluster (DEMO_SPLADM)
  address (127.0.0.1)
  port (42020)
  loglevel (1)
  mode (dev)

> help loglevel

loglevel
--------
Specifies which logged messages will be output to the log destination.

Legal values are:

  0    - only output without a logging severity level specified will be logged
  1    - all the above plus errors
  2    - all the above plus warnings
  3    - all the above plus informational messages
  4-9  - all the above plus internal debugging messages (the higher the number, the more the messages)
  -1   - no messages

> set loglevel 2

Batch Configuration Editor 1.0 [tangosol-coherence-override.xml]
----------------------------------------------------------------

Current Settings

  cluster (DEMO_SPLADM)
  address (127.0.0.1)
  port (42020)
  loglevel (2)
  mode (dev)

> save
Changes saved
> exit
  • Set up your threadpoolworkers. For each group of threadpoolworkers, use bedit[.sh] -w [-l <label>], where <label> is the group name. We supply a default (no label) and a cache threadpool template. For example:
$ bedit.sh -w
Editing file /oracle/FW42020/splapp/standalone/config/threadpoolworker.properties using template /oracle/FW42020/etc/threadpoolworker.be

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (LOCAL)
      threads (0)

> set pool.2 poolname FRED

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)

> add pool

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (DEFAULT)
      threads (5)

> set pool.3 poolname LOCAL

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (LOCAL)
      threads (5)

> set pool.3 threads 0

Batch Configuration Editor 1.0 [threadpoolworker.properties]
------------------------------------------------------------

Current Settings

  minheap (1024m)
  maxheap (1024m)
  maxperm (256m)
  daemon (true)
  rmiport (6510)
  dkidisabled (false)
  storage (true)
  distthds (4)
  invocthds (4)
  role (OUAF_Base_TPW)
  pool.1
      poolname (DEFAULT)
      threads (5)
  pool.2
      poolname (FRED)
      threads (0)
  pool.3
      poolname (LOCAL)
      threads (0)

>
  • Set up your global submitter settings using the bedit[.sh] -s command, or batch job specific settings using the bedit[.sh] -b <batchcode> command, where <batchcode> is the Batch Control Id for the job. For example:
$ bedit.sh -b F1-LDAP
File /oracle/FW42020/splapp/standalone/config/job.F1-LDAP.properties does not exist - create? (y/n) y
Editing file /oracle/FW42020/splapp/standalone/config/job.F1-LDAP.properties using template /oracle/FW42020/etc/job.be

Batch Configuration Editor 1.0 [job.F1-LDAP.properties]
-------------------------------------------------------

Current Settings

  poolname (DEFAULT)
  threads (1)
  commit (10)
  user (SYSUSER)
  lang (ENG)
  soft.1
      parm (maxErrors)
      value (500)
>

The BatchEdit facility is an easier way of creating and maintaining the configuration files with a minimum of effort. More examples, and how to migrate to this new facility, are documented in the Batch Best Practices for Oracle Utilities Application Framework based products (Doc Id: 836362.1) whitepaper available from My Oracle Support.

New ConfigTools Training available on Youtube

Thu, 2014-07-03 18:12

The Oracle Public Sector Revenue Management product team has released a series of training videos for the Oracle Utilities Application Framework ConfigTools component. This component allows customers to use metadata and scripting to enhance and customize Oracle Utilities Application Framework based solutions without the need for Java programming.

The series uses examples and each recording is around 30-40 minutes in duration.

The channel for the videos is Oracle PSRM Training. The videos are not a substitute for the training courses available, through Oracle University, on ConfigTools, but are useful for people trying to grasp individual concepts while getting an appreciation for the power of this functionality.

At the time of publication, the recordings currently available are:


    New Batch Configuration Wizard (BatchEdit) available

    Tue, 2014-07-01 16:36

    A new configuration facility is available as part of Oracle Utilities Application Framework V4.2.0.2.0 called Batch Edit. One of the concerns customers and partners asked us to address was making configuration of the batch architecture simpler and less error prone. A new command line utility has been introduced to allow customers to quickly and easily implement a robust technical architecture for batch. The feature provides the following:

    • A simple command interface to create and manage configurations for clusters, threadpools and submitters in batch.
    • A set of optimized templates to simplify configuration but also promote stability amongst configurations. Complex configurations can be error prone, which can cause instability. These templates, based on optimal configurations from customers, partners and Oracle's own performance engineering group, attempt to simplify the configuration process whilst supporting flexibility in configuration to cover implementation scenarios.
    • The cluster interface supports multi-cast and uni-cast configurations and adds a new template optimized for single server clusters. The single server cluster is ideal for use in non-production situations such as development, testing, conversion and demonstrations. The cluster templates have been optimized to support advanced facilities in Oracle Coherence that use the high availability features and optimize network operations.
    • The threadpoolworker interface allows implementations to configure all the attributes from a single interface, including, for the first time, the ability to create cache threadpools. This special type of threadpoolworker does not run submitters but allows implementations to reduce the network overheads of individual components communicating across a cluster. Cache threadpools provide a mechanism for Coherence to store and relay the state of all the components in a concentrated format and also serve as a convenient conduit for the Global JMX capability.
    • The submitter interface allows customers and implementors to create global and job specific properties files.
    • Tagging is supported in the utility to allow groups of threadpools and submitters to share attributes.
    • The utility provides helpful context sensitive help for all its functions, parameters and configurations with advanced help also available.

    Over the next few weeks I will be publishing articles highlighting features and functions of this new facility.

    More information about Batch Edit is available from the Batch Server Administration Guide shipped with your product and the Batch Best Practices (Doc Id: 836362.1) available from My Oracle Support.

    Application Management Pack for Utilities Self Running Demonstration

    Thu, 2014-06-19 20:32

    A self running demonstration of the Application Management Pack for Oracle Utilities is now available from My Oracle Support at Doc Id: 1474435.1. The demonstration, in SWF (Flash) format, covers the features of the pack available for Oracle Enterprise Manager and is annotated for ease of use.

    The demonstration covers the following topics:

    • Discovery of the Oracle Utilities Targets
    • Registration of Oracle Utilities Targets
    • Starting and Stopping Oracle Utilities Targets
    • Patching Oracle Utilities Targets
    • Migrating patches across Oracle Utilities Targets
    • Cloning Environments including basic and advanced cloning
    • Miscellaneous functions

    The demonstration requires a browser with Adobe Flash installed. The demonstration can be streamed from My Oracle Support or downloaded for offline replay.


    New ILM Whitepapers available

    Tue, 2014-06-17 14:03

    In Oracle Utilities Application Framework V4.2.0.2.0 and above, a new archiving and data management capability was introduced to help customers design a cost effective storage solution and data management strategy for their implementations of Oracle Utilities products. The facility will allow customers to retain data according to their policies (legal or otherwise) whilst offering strategies for cost effective storage of that data.

    To help with the implementation of this new facility two new whitepapers have been published to My Oracle Support for download:

    Additional products will release additional whitepapers as they release their ILM based components.

    Audit On Inquiry Example - Zones

    Mon, 2014-05-19 22:41

    One of the features of Oracle Utilities Application Framework V4.x is the ability to audit inquiries from zones and pages. This allows you to track information that is read rather than what is updated (which is the prime focus of the internal audit facility).

    The example shown below is a sample only; it just illustrates the process. The script is invoked upon broadcast (the sample does not include the global context, but that can be added as normal).

    To use this facility, here is the basic design pattern (in the order you would perform it):

    • Decide where you want to store the inquiry data first. You cannot store the inquiry data in the same audit tables/objects where updates or deletes are recorded, as the Audit Object Maintenance Object is read only (it is only used internally). You have three options here:
      • If the Maintenance Object has a child log table then this is ideal for recording when that object is read or viewed by an end user. The advantage of this option is that there is more than likely a user interface for viewing those records.
      • If the Maintenance Object does not have a child log table then you can use the generic Business Event Log object (F1-BUSEVTLOG). This can be used to store such audit information. You may want to create a UI to view that information in a format you want to expose, as well as adding records to this table. If you use this option, remember to set up a message to hold the audit message you want to display on the screen. This is needed in the Business Object definition and is the approach used in the sample.
      • You can create a custom Maintenance Object to store this information. This is the least desirable as you need to build Java objects to maintain the Maintenance Object but it is still possible. For the rest of this article I will ignore this alternative.
    • Create a Business Object, using the Business Object Maintenance Schema Editor, with the data you want to store for the audit. You can structure the information as you see fit, including adding flattened fields for the collections if you wish.

    For example, for Business Event Log I created a BO called CM-BusinessAuditLog like below:

        <logId mapField="BUS_EVT_LOG_ID"/>  
        <logDateTime mapField="LOG_DTTM" default="%CurrentDateTime"/>
        <user mapField="USER_ID" default="%CurrentUser"/>  
        <maintenanceObject mapField="MAINT_OBJ_CD" default="F1-BUSEVTLOG"/>   
        <businessObject mapField="BUS_OBJ_CD" default="CM-BusinessAuditLog"/>  
        <primaryKeyValue1 mapField="PK_VALUE1" default="001"/>
        <messageCategory mapField="MESSAGE_CAT_NBR" default="90000"/>  
        <messageNumber mapField="MESSAGE_NBR" default="1000"/>   
        <version mapField="VERSION" suppress="true"/>
        <parmUser mapField="MSG_PARM_VAL"> 
           <row mapChild="F1_BUS_EVT_LOG_MSG_PARM">  
             <PARM_SEQ is="1"/>
           </row>
        </parmUser>
        <parmPortal mapField="MSG_PARM_VAL">
           <row mapChild="F1_BUS_EVT_LOG_MSG_PARM">
             <PARM_SEQ is="2"/>
           </row>
        </parmPortal>
        <parmZone mapField="MSG_PARM_VAL">
           <row mapChild="F1_BUS_EVT_LOG_MSG_PARM">
             <PARM_SEQ is="3"/>
           </row> 
        </parmZone>
        <parmF1 mapField="MSG_PARM_VAL"> 
            <row mapChild="F1_BUS_EVT_LOG_MSG_PARM">  
              <PARM_SEQ is="4"/>
            </row> 
        </parmF1>   
        <parmH1 mapField="MSG_PARM_VAL"> 
            <row mapChild="F1_BUS_EVT_LOG_MSG_PARM">  
               <PARM_SEQ is="5"/>
            </row>
        </parmH1>  
        <parmXML1 mapField="MSG_PARM_VAL">  
            <row mapChild="F1_BUS_EVT_LOG_MSG_PARM">
               <PARM_SEQ is="6"/>
            </row> 
        </parmXML1>   
        <parmGC1 mapField="MSG_PARM_VAL">
            <row mapChild="F1_BUS_EVT_LOG_MSG_PARM">
               <PARM_SEQ is="7"/>
            </row>
        </parmGC1>
        <parmF1Label mapField="MSG_PARM_VAL">
            <row mapChild="F1_BUS_EVT_LOG_MSG_PARM"> 
                <PARM_SEQ is="8"/>  
            </row>
        </parmF1Label>
        <parmH1Label mapField="MSG_PARM_VAL">
            <row mapChild="F1_BUS_EVT_LOG_MSG_PARM">
                <PARM_SEQ is="9"/> 
            </row>
        </parmH1Label> 
    • Note: I set up a basic message (message category 90000 and message number 1000, matching the defaults in the schema above) to hold the desired message:

    User %1 has read value %6 on Portal %2 within Zone %3

    • Create a Service Script (say CM-AuditZone) to populate the fields on the Business Object according to your site standards. Remember to add the Business Object as a Data Area. For example:
         move "parm/userId" to "CM-BusinessAuditLog/parmUser";
         move "parm/portalName" to "CM-BusinessAuditLog/parmPortal";
         move "parm/zoneCd" to "CM-BusinessAuditLog/parmZone";
         move "parm/pk1" to "CM-BusinessAuditLog/parmXML1";
    
         invokeBO 'CM-BusinessAuditLog' using "CM-BusinessAuditLog" for add;
    

    • To reduce the performance impact of creating audit records (and also to allow audit records to be added when the change mode on the prime object is read only), it is recommended to create another Service Script (say CM-ZoneAuditing) and use F1-ExecuteScriptInNewSession to execute it in a new thread. Remember to add the script as a Data Area. For example:

      move "parm/input" to "CM-AuditZone";
      move 'CM-AuditZone' to "F1-ExecuteScriptInNewSession/scriptName";
      move "CM-AuditZone" to "F1-ExecuteScriptInNewSession/scriptData";
      
      invokeBS 'F1-ExecuteScriptInNewSession' using "F1-ExecuteScriptInNewSession";
      • Add the schema to the script to accept the input from the Zone parameters as per the Help entry for the Audit Service Script. For example:
          <input type="group"> 
              <userId/>  
              <zoneCd/>  
              <portalName/>  
              <mo/>  
              <pk1/>  
              <pk2/>  
              <pk3/>  
              <pk4/>  
              <pk5/>  
          </input> 
      
    • Attach the Audit Service Script (CM-ZoneAuditing). For example:

    ss='CM-ZoneAuditing' input=[zone=zone portal=portal user=userId pk1=F1Label pk2=F1 
    pk3=F3Label pk4=F3]

    This example is just a sample with some basic processing. It can be extended to capture additional information. It is recommended to use the log tables on the object if they are available.

    SSO Integration Patterns

    Sun, 2014-05-11 18:52

    Single Sign On support is one of the common questions I get asked by customers, partners and sales people.

    Single Sign On is basically an implementation mechanism or technology that allows users of multiple browser applications to specify credentials once (typically at login) and have those credentials reused for subsequent applications in that session. This avoids logging on more than once and aids cross-product navigation, where a user logs onto one application and, when transferring to another application, avoids logging into that other product.

    Single Sign On is not a product requirement; it is an infrastructure requirement. Therefore, there are infrastructure solutions available.

    Typically there are two main styles of Single Sign On with different approaches for implementation.

    The first style is best described as "Desktop" Single Sign-On. This is where you log on to your client machine (usually a Windows based PC) and the credentials you used to log on to that machine are reused for ANY product used after authentication. Typically this is implemented using the Kerberos protocol and the Simple and Protected GSS-API Negotiation Mechanism (SPNEGO). This is restricted to operating systems (typically Windows) where you perform the following:

    • Set up the client machine browsers to accept and pass the credentials to the server. This sets the browser to read the Kerberos credentials and pass them to the server.
    • Setup the Microsoft Active Directory Services Network Domain Controller to accept Kerberos and pass onto the subsequent applications.
    • Create a keytab file for Oracle WebLogic to use.
    • Configure the Oracle WebLogic Identity Assertion Provider to specify that the keytab is to be used and that Kerberos is to be used for the identity.
    • Configure Oracle WebLogic to startup using the provider and Kerberos.
    • Set the login preferences within OUAF to CLIENT-CERT to indicate the login is passed from somewhere else. This turns off our login screen.

    As you can see, the majority of the work is in Oracle WebLogic and is documented in Configuring Single Sign-On with Microsoft Clients.

    The second style is best described as "Browser" Single Sign-On. This typically means you log in to the machine and then open the browser to log on. At that point, as long as the browser is open, any subsequent application will reuse the credentials specified for the browser session. This is the style implemented by SSO products such as Oracle Access Manager, Oracle Enterprise SSO and other SSO products (including third party ones). Typically implementing this involves the following:

    • Setting Up Oracle Access Manager or the SSO product to your requirements. Oracle Access Manager supports lots of variations for SSO including Single Network Domain SSO, Multiple Network Domains, Application SSO, etc. This is all outlined in Introduction to Single Sign-On with Access Manager.
    • Setting up Oracle WebLogic with Oracle Access Manager (this allows Oracle WebLogic to get the credentials from Oracle Access Manager). This is outlined in Configuring Single Sign-On with Oracle Access Manager 11g.
    • Set the login preferences within OUAF to CLIENT-CERT to indicate the login is passed from somewhere else. This turns off our login screen.

    Again, as you can see the majority of the work is in Oracle WebLogic and Oracle Access Manager.

    Information about implementing Single Sign-On with our products (both styles) is contained in:

    • Single Sign On Integration for Oracle Utilities Application Framework based products (Doc Id: 799912.1) available from My Oracle Support.
    • Oracle Identity Management Suite Integration with Oracle Utilities Application Framework based products (Doc Id: 1375600.1) available from My Oracle Support.

    While the first style is typically lower cost, it is restricted to specific platforms that support Kerberos and SPNEGO. It is also restricted in flexibility: it passes the credentials from the client all the way to the server, so they must match. Oracle Access Manager, on the other hand, is far more flexible, supporting a wide range of architectures as well as including access control, password control and user tracking features within WebGate. These allow additional features to be implemented:

    • Access Control - This allows for additional security rules to be implemented. For example, turning off part of a product during time periods. I have heard of customers using Oracle Access Manager to stop online payments from being accessible after business hours from a call center, due to customer specific payment processes being implemented. This augments the inbuilt security model available from Oracle Utilities Application Framework.
    • User Tracking - Oracle Utilities Application Framework is stateless, therefore you can only see active users when they are actively running transactions, not when they are idle. WebGate has information about idle users as well as active users allowing for enhanced user tracking.

    Whatever style you choose to adopt, we have a flexible set of solutions to implement SSO. The only common element, and the only step within Oracle Utilities Application Framework, is to change the J2EE login preference from the default FORM based method to CLIENT-CERT.

    Archiving/ILM Part 2 - ILM Date And ILM Archive Flag

    Thu, 2014-05-08 21:02

    As part of the new data management capabilities of Oracle Utilities Application Framework V4.2.0.2.0, two new columns have been added to the objects to be managed by this capability.

    • ILM Date (ILM_DT) - This is a field populated with the system date at record creation time. This sets the starting date (plus the retention period) from which the ILM solution will evaluate the eligibility of the record for archiving via the ILM Crawler. This date is set by the Maintenance Object at object creation time but, like ANY other column in the object, can be altered by algorithms, batch processes etc. Manipulating the date can delay (or speed up) ILM activities on a particular object. For example, it is possible to set this date in an appropriate algorithm (determined by your business practices) to control when a particular object is to be considered for ILM.
    • ILM Archive Flag (ILM_ARCH_FLG) - This is a flag, set to N by default, that determines whether the record is eligible for archiving (removal) or any other ILM activities. This column is maintained by the ILM Crawler assigned to the object, which will assess the rules for eligibility after the ILM_DT has passed. If the record is deemed eligible for archiving then the value will be set to Y to indicate that other ILM activities can be safely performed on this object.

    The ILM Crawler uses these columns and the associated ILM Eligibility algorithm on the Maintenance Object to determine the eligibility of the objects. These values are managed for you automatically. If the basic setup is not sufficient for your data retention needs, the ILM Eligibility algorithm can be altered to suit your needs, or other algorithms can be extended to help you set these values.
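
    To make the two columns concrete, here is a minimal, hedged JDBC sketch that summarises how many records on a single table the ILM Crawler has flagged as eligible. The connection details and the table name (CI_FT) are illustrative only; substitute a table belonging to an ILM-enabled Maintenance Object listed in your product's DBA Guide.

      // Hedged sketch: summarise the ILM flag and oldest ILM date on one (illustrative) table.
      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class IlmFlagSummary {
          public static void main(String[] args) throws Exception {
              // Connection details are illustrative - point this at your own database with a read-only user.
              try (Connection conn = DriverManager.getConnection(
                       "jdbc:oracle:thin:@//dbhost:1521/OUAFDB", "cisread", "cisread");
                   Statement stmt = conn.createStatement();
                   ResultSet rs = stmt.executeQuery(
                       "SELECT ILM_ARCH_FLG, MIN(ILM_DT), COUNT(*) FROM CI_FT GROUP BY ILM_ARCH_FLG")) {
                  while (rs.next()) {
                      // 'Y' = assessed as eligible by the ILM Crawler; 'N' = not (or not yet) eligible.
                      System.out.println(rs.getString(1) + "  oldest ILM_DT=" + rs.getDate(2)
                              + "  count=" + rs.getLong(3));
                  }
              }
          }
      }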

    Archiving/ILM Introduction - Part 1

    Sun, 2014-05-04 20:21

    As part of Oracle Utilities Application Framework 4.2.0.2.0 and Oracle Utilities Customer Care And Billing 2.4.0.2.0, a new Archiving/Data Management engine has been introduced, based around the Information Lifecycle Management (ILM) capabilities within the Oracle Database (with options).

    The first part of the solution is actually built into the Oracle Utilities Application Framework to define the business definition of active transaction data. Active transaction data is transactional data that is regularly added, changed or deleted as part of a business process. Transaction data that is read regularly is not necessarily active from an ILM point of view. Data that is only read can be compressed, for example, with little impact on the performance of accessing that data.

    Note: The ILM solution only applies to the objects that are transactional and that ILM has been enabled against. Refer to the DBA Guide shipped with the product for a list of objects that are shipped with each product.

    The business definition of active transaction data is set using a master configuration record for ILM. For example:

    Master Configuration

    It is possible to define the data retention period, in days, for individual objects that are covered by the data management capability. These settings are set on the Maintenance Object Options shipped with the ILM solution. For example:

    Maintenance Object Options

    Essentially the configuration allows for the following:

    • A global retention period can be defined, in days. Objects that are covered by ILM can inherit this setting if you do not want to manage at the Maintenance Object level.
    • Each Maintenance Object that is enabled for ILM has a number of Maintenance Object options to define the following:
      • ILM Retention Period In Days - Sets the retention period for the object at creation time.
      • ILM Crawler Batch Control - The batch control for the crawler which will traverse the objects and set the ILM dates and ILM flags.
      • ILM Eligibility Algorithm - The algorithm containing the business rules to assess the eligibility of individual objects for data management. This algorithm can be altered to implement additional business rules or additional criteria to implement object specific rules. For example, it is possible to implement specific rules for specific object definitions (e.g. different rules for residential customers versus industrial/commercial customers).
    • An ILM crawler has been provided for each object to set ILM dates and assess eligibility for objects. This batch process can be run whenever the business rules need to be applied for data management, and re-run when the business rules need to be changed due to business changes.

    At the end of this stage, a number of ILM specific fields on those objects have been set ready for the technical implementation of the ILM facilities in the database (which will be a subject of a future post).

    The date that is set by this configuration does not mean that this data will disappear; it just defines the line where the business hands the data over to the technical database setup.

    As you can see from this post, the data management capability from the business perspective is simple and flexible. You can take the default eligibility rules and setup as provided, or customize this first stage to implement more complex rules that match your data retention policies.

    Information Lifecycle Management or Archiving V2

    Thu, 2014-05-01 23:16

    Oracle Utilities Application Framework V4.2.0.2.0 has been released with Oracle Utilities Customer Care and Billing V2.4.0.2.0. In this major release, a new data management facility has been released to replace the original Archiving facility that was provided with Oracle Utilities Customer Care and Billing V2.1/V2.2/V2.3.1.

    This new facility has a number of major advantages for effective data management:

    • A new set of fields have been added to objects specifically to allow implementations to control the data lifecycle for those objects. This includes dates and a flag to determine what the lifecycle for objects is as well as the eligibility of individual objects for data management.
    • The facility allows the customer to define how long key objects are active for across an implementation. This allows a new date within these objects to be specifically set for data management activities independent of when they are actually active. This allows flexible data retention policies to be implemented.
    • A crawler batch job per object implements business and integrity rules to trigger data management activities. This allows data retention policies to be adjusted for changes to business needs.
    • The data management capability uses Oracle's Information Lifecycle Management capabilities to implement the storage and data retention policies based upon the data management dates and flags. This allows IT personnel to define the physical data retention policies to perform the following types of activities:
      • Allows use of compression, including base compression in Oracle, Advanced Compression or Hybrid Columnar Compression on Oracle Exadata. Externalized compression in SAN hardware is also supported.
      • Allows the ability to optionally use lower cost storage to manage groups of data using Oracle Partitioning. This saves the cost of storing less active data in your implementation.
      • Allows specification and simulation of data management policies using ILM Assistant including estimating expected storage cost savings.
      • Allows data management manually or automatically using Automatic Storage Management (ASM), Automatic Data Optimization (ADO) and/or Heat Maps. The latter two are available in Oracle Database 12c to provide additional facilities.
      • Allows use of transportable tablespaces via Oracle Partitioning to quickly remove data that has been identified as archived.
    • The definition of the lifecycle can be simple or as complex as your individual data retention policies dictate, with the business and IT together defining the business and technical implementations of the rules within the product and the Information Lifecycle Management components within the database.
    • The Oracle Utilities Application Framework has been altered to recognize data management policies. This means that once a policy has been implemented, access to that data will conform to that policy. For example, data that is removed via transportable tablespaces will be recognized as archived and the online/batch process will take this into account.

    Data Management documentation is provided with the products to allow implementations to take advantage of this new capability. This allows data management retention policies to be flexible and use the data management capabilities within the database to efficiently manage the lifecycle of critical data in Oracle Utilities Applications.

    Over the next few weeks a number of blog entries will be published to walk through the various aspects of the solution.

    New Web Services Capabilities available

    Wed, 2014-04-09 17:59

    As part of Oracle Utilities Application Framework V4.2.0.2.0, a new set of Web Services capabilities is now available to completely replace the Multi-Purpose Listener (MPL) and the XAI Servlet with more exciting capabilities.

    Here is a summary of the facilities:

    • There is a new Inbound Web Services (IWS) capability to replace the XAI Inbound Services and XAI Servlet (which will be deprecated in a future release). This capability combines the metadata within the Oracle Utilities Application Framework with the power of the native Web Services capability within the J2EE Web Application Server to give the following advantages:
      • It is possible to define individual Web Services to be deployed on the J2EE Web Application Server. Web based and command line utilities have been provided to allow developers to design, deploy and manage individual Inbound Web Services.
      • It is now possible to define multiple operations per Web Service. XAI was restricted to a single operation with multiple transaction types. IWS supports multiple operations separated by transaction type. Operations can even extend to different objects within the same Web Service. This will aid in rationalizing Web Services.
      • IWS makes it possible to monitor and manage individual Web Services from the J2EE Web Application Server console (or Oracle Enterprise Manager). These metrics are also available from Oracle Enterprise Manager to provide SLA and trend tracking capabilities. These metrics can also be fine-grained to the operation level within a Web Service.
      • IWS allows greater flexibility in security. Individual Services can now support standards such as WS-Policy, WS-ReliableMessaging etc as dictated by the capabilities of the J2EE Web Application Server. This includes message and transport based security, such as SAML, X.509 etc and data encryption.
      • For customers lucky enough to be on Oracle WebLogic and/or Oracle SOA Suite, IWS now allows full support for Oracle Web Services Manager (OWSM) on individual Web Services. This also allows the Web Services to enjoy additional WS-Policy support, as well as, for the first time, Web Service access rules. These access rules allow you to control when and who can run the individual service using simple or complex criteria ranging from system settings (such as dates and times), security (the user and roles) or individual data elements in the payload.
      • Customers migrating from XAI to IWS will be able to reuse a vast majority of their existing definitions. The only change is that each IWS service has to be registered and redeployed to the server, using the provided tools, and the URL for invoking the service will be altered (a hedged client sketch follows this list). XAI can be used in parallel to allow for flexibility in migration.
    • The IWS capability and the migration path for customers using XAI Inbound Services are documented in a new whitepaper, Migrating from XAI to IWS (Doc Id: 1644914.1), available from My Oracle Support.
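
    To give a feel for what calling a deployed IWS looks like from a client, below is a minimal, hedged JAX-WS Dispatch sketch. The namespace, service and port names, endpoint URL, operation and payload are all purely illustrative assumptions; the WSDL generated for your deployed Inbound Web Service defines the real values, and you would normally also attach whatever security your WS-Policy settings mandate.

      // Hedged sketch: invoke an (illustrative) Inbound Web Service using a JAX-WS Dispatch client.
      import java.io.StringReader;
      import javax.xml.namespace.QName;
      import javax.xml.transform.Source;
      import javax.xml.transform.TransformerFactory;
      import javax.xml.transform.stream.StreamResult;
      import javax.xml.transform.stream.StreamSource;
      import javax.xml.ws.Dispatch;
      import javax.xml.ws.Service;
      import javax.xml.ws.soap.SOAPBinding;

      public class IwsClientSketch {
          public static void main(String[] args) throws Exception {
              String ns = "http://ouaf.example.com/webservices/CM-PersonInfo";          // illustrative namespace
              String endpoint = "http://ouafhost:6500/ouaf/webservices/CM-PersonInfo";  // illustrative URL

              Service service = Service.create(new QName(ns, "CM-PersonInfo"));
              service.addPort(new QName(ns, "CM-PersonInfoPort"), SOAPBinding.SOAP11HTTP_BINDING, endpoint);
              Dispatch<Source> dispatch = service.createDispatch(
                      new QName(ns, "CM-PersonInfoPort"), Source.class, Service.Mode.PAYLOAD);

              // Illustrative payload for one operation of the service; IWS allows several operations per service.
              Source request = new StreamSource(new StringReader(
                      "<read xmlns=\"" + ns + "\"><personId>1234567890</personId></read>"));
              Source response = dispatch.invoke(request);

              // Print the response payload to standard output.
              TransformerFactory.newInstance().newTransformer()
                      .transform(response, new StreamResult(System.out));
          }
      }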

    Over the next few weeks I will be publishing articles highlighting capabilities for both IWS and the OSB to help implementations upgrade to these new capabilities.

    Oracle Service Bus transports available

    Mon, 2014-04-07 15:35

    As outlined in the whitepaper Oracle Service Bus Integration with Oracle Utilities Application Framework (Doc Id: 1558279.1), available from My Oracle Support, the Multi-Purpose Listener (MPL) is being replaced by Oracle Service Bus (OSB). Whilst transactions inbound to the Oracle Utilities Application Framework based product are handled natively using Web Services, transactions outbound from the products are handled by Oracle Service Bus. Consequently, a number of protocol adapters have been developed that are installed in Oracle Service Bus to allow it to initiate the following outbound communications:

    • Outbound Messages
    • Notification Download Staging (Oracle Utilities Customer Care and Billing and Oracle Public Sector Revenue Management only)

    The transports are now available from My Oracle Support as a patch on any OUAF 4.2.0.0.0 and above product as Patch 18512327: OUAF Transports for OSB 1.0.0.

    Installation instructions for Oracle Service Bus and Oracle Enterprise Pack for Eclipse are included in the patch as well as the whitepaper.

    Password Change Sample

    Sun, 2014-04-06 20:27

    In the Technical Best Practices whitepaper (Doc Id: 560367.1), available from My Oracle Support, there is a section (Password Management Solution for Oracle WebLogic) that mentions a sample password change JSP that used to be provided by BEA for WebLogic. That site is no longer available, but the sample code is now available on this blog.

    Now, this is an example only and is very generic. It is not a drop-in feature that you can simply place in your installation, but the example is sufficient to give an idea of the Oracle WebLogic API available for changing your password. It is meant to allow you to develop a CM JSP if you require this feature.

    There is NO support for this as it is sample code only. It is merely an example of the API available. A link to the code is here. Examine it to get ideas for your own solutions.

    The API used will most probably work for any security system that is configured as an authentication security provider.

    Private Cloud Planning Guide available for Oracle Utilities

    Sun, 2014-04-06 17:56

    Oracle Utilities Application Framework based applications can be housed in private cloud infrastructure, either onsite or as a partner offering. Oracle provides a Private Cloud foundation set of software that can be used to house Oracle Utilities software. To aid in planning for installing Oracle Utilities Application Framework based products on a private cloud, a whitepaper has been developed and published.

    The Private Cloud Planning Guide (Doc Id: 1308165.1), which is available from My Oracle Support, provides an architecture and software manifest for implementing a fully functional private cloud offering onsite or via a partner. It refers to other documentation to install and configure specific components of a private cloud solution.