The Oracle Application Management Pack for Oracle Utilities is usually installed using the Self Update feature in Oracle Enterprise Manager, which is the easiest method of installation. This requires the Oracle Management Server to be connected to the internet (directly or via a proxy), which some sites do not allow. Luckily, Oracle Enterprise Manager also has an offline mode, so the pack can be installed offline as well.
Here is how you install the pack in offline mode:
- Download the pack from Oracle Software Delivery Cloud. The download is an OPAR file, which is the installation format for the pack.
- Transfer the OPAR file to the machine housing the Oracle Management Server (OMS) component of Oracle Enterprise Manager. If your installation is multi-OMS, refer to the Oracle Enterprise Manager installation guide for further instructions. Ensure the installing user has at least read access to the file.
- Log on to the OMS machine and execute the following command from the oms/bin directory:
./emcli setup -url=https://<omsserver>:<omsport>/em -username=sysman -password=<password> -trustall
<omsserver> is the OMS server name
<omsport> is the administration port for the OMS
<password> is the password for the OMS administrator
- Execute the command to install the pack:
./emcli import_update -file=<oparname> -omslocal
<oparname> - Fully qualified name and path of the file you downloaded.
- Now deploy the pack on the OMS using the Plug-In maintenance function from the console. This may require an outage, as pack installations may change the Oracle Management Repository (OMR) and OMS.
- Deploy the pack on agents located on the Oracle Utilities servers.
That is how you install the pack in offline mode.
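For reference, the emcli steps above can be sketched as a single script. The OMS home, host name, port and OPAR path are placeholders for your environment, and the script runs in a dry-run mode that only records the emcli commands rather than executing them:

```shell
#!/bin/sh
# Sketch of the offline pack installation (placeholder values throughout).
OMS_HOME=/u01/oracle/oms
OMS_HOST=omsserver.example.com
OMS_PORT=7802
OPAR_FILE=/tmp/oracle_utilities_pack.opar
DRY_RUN=1               # set to 0 on a real OMS; 1 just prints the commands

: > emcli_commands.log
run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "$@" | tee -a emcli_commands.log
  else
    "$@"
  fi
}

cd "${OMS_HOME}/bin" 2>/dev/null || true

# Authenticate emcli against the OMS.
run ./emcli setup -url="https://${OMS_HOST}:${OMS_PORT}/em" \
    -username=sysman -password='<password>' -trustall

# Import the OPAR as an offline update; -omslocal indicates the file
# is already on the OMS file system.
run ./emcli import_update -file="${OPAR_FILE}" -omslocal
```

On a real OMS, set DRY_RUN to 0; the console-based plug-in deployment from the remaining steps still has to be performed from Oracle Enterprise Manager afterwards.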
In Oracle Utilities Application Framework V4.x, a new column was added to the user object to add an additional layer of security. This field is a user hash generated from the complete user object. The idea behind the hash is that when a user logs in, a hash is calculated for the session and checked against the user record registered in the system. If the generated user hash does not match the user hash recorded on the user object, then the user object may not be valid and the user cannot log in.
This hash is there to detect any attempt to alter the user definition using an invalid method. If an alteration was made without using the provided interfaces (the online maintenance or a Web Service), then the record cannot be trusted and the user cannot use that identity. The idea is that if someone "hacks" the user definition using an invalid method, the user object becomes invalid and is therefore effectively locked. It protects the integrity of the user definition.
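As an illustration of the concept only (the framework's actual algorithm and key handling are internal and not shown here), a keyed hash over a serialized user record can be stored and verified like this; the record format and key are invented for the example:

```shell
#!/bin/sh
# Conceptual illustration of a keyed user hash (NOT the framework's actual
# algorithm): an HMAC over the serialized user record, checked at login time.
KEY="demo-keystore-key"                               # stands in for the keystore key
USER_RECORD="userid=FRED|name=Fred Smith|roles=CSR"   # invented record format

hash_of() {
  printf '%s' "$1" | openssl dgst -sha256 -hmac "$KEY" | awk '{print $NF}'
}

# When the record is saved through a valid interface, the hash is recorded.
STORED_HASH=$(hash_of "$USER_RECORD")
printf '%s\n' "$STORED_HASH" > user_hash.txt

# Valid login: recompute and compare - the hashes match.
[ "$(hash_of "$USER_RECORD")" = "$STORED_HASH" ] && echo "user record trusted"

# Record altered outside the product: the hash no longer matches,
# so the identity is refused.
TAMPERED="userid=FRED|name=Fred Smith|roles=ADMIN"
[ "$(hash_of "$TAMPERED")" = "$STORED_HASH" ] || echo "user record rejected"
```

This also shows why changing the keystore key invalidates every stored hash: the recomputed values no longer match, so all hashes must be regenerated.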
This facility typically causes no issues but here are a few guidelines to use it appropriately:
- The user object should only be modified using the online maintenance transaction, F1-LDAP job, user preferences maintenance or a Web Service against the user object. The user hash is regenerated correctly when a valid access method is used.
- If you are loading new users from a repository, the user hash must be generated. It is recommended to use a Web Services based interface to the user object to load the users to avoid the hash becoming invalid.
- If a user supplies a valid identity and the valid password but receives an "Invalid Login" message, then it is likely the user hash comparison has found an inconsistency. You might want to investigate the cause before resolving the user hash inconsistency.
- The user hash is generated using the keystore key used by the product installation. If the keystore or values in the keystore are changed, you will need to regenerate ALL the hash keys.
- There are two ways of addressing an invalid user hash:
- A valid administrator can edit the individual user object within the product and make a simple change to force the hash key to be regenerated.
- Regenerate the hash keys globally using the commands outlined in the Security Guide. This should be done if it is a global issue or at least an issue for more than one user.
For more information about this facility and other security facilities, refer to the Security Guide shipped with your product.
Lately I have been talking to partners and customers on older versions of the Oracle Utilities Application Framework and they are considering upgrading to the latest version. One of the major questions they ask is about the role of Oracle Coherence in our architecture. Here are some clarifications:
- We supply a subset of the runtime Oracle Coherence libraries we use in the batch architecture with the installation. It does not require a separate Oracle Coherence license (unless you are intending to use Coherence for some customizations which requires the license).
- We only use a subset of the Oracle Coherence API around the cluster management and load balancing of the batch architecture. If you are a customer who uses the Oracle Coherence Pack within Oracle Enterprise Manager for monitoring the batch component, this is not recommended at the present time. The Coherence pack will report that components are missing and therefore give erroneous availability information. We have developed our own monitoring API within the framework that is exposed via the Oracle Application Management Pack for Oracle Utilities.
- The idea behind the use of Oracle Coherence is as follows:
- The Batch Architecture uses a Coherence based Cluster. This can be configured to use uni-cast or multi-cast to communicate across the cluster.
- A cluster has a number of members (also known as nodes). In our case, members are threadpools and job threads.
- A threadpool is basically a running Java Virtual Machine, preloaded with the Oracle Utilities Application Framework, ready to accept work. The reason we use threadpools is that when you execute Java processes in the Oracle Utilities Application Framework, there is a memory overhead in loading the framework cache and objects, as well as Java itself, before a job can execute. By creating a threadpool, this overhead is minimized and the threadpool can be reused across lots of job threads.
- Threadpools are named (this is important) and have a thread limit. This is a batch limit, not a Java limit: batch threads are described as "heavier" than online threads because they are long running, whereas online threads are typically short running.
- When a threadpool is started, locally or remotely, it is added to the cluster. A named threadpool can have multiple instances (even on the same machine). The threadpool limit is the sum of the limits across all its instances.
- When a batch job thread is executed (jobs may be single threaded or multi-threaded), it is submitted to the cluster. Oracle Coherence then load balances those threads across the instances of the threadpool named in the job thread parameters.
- Oracle Coherence tracks the threadpools and batch job threads so that if any failure occurs, the thread and threadpool are aware of it. For example, if a threadpool crashes, the cluster is made aware and the appropriate action can be taken. This keeps the architecture synchronized at all times.
- We have built a wizard (bedit) to help build the properties files that drive the architecture. This covers clusters, threadpools and even individual batch jobs.
- When building a cluster we tend to recommend the following:
- Create a cache threadpool per machine to minimize member-to-member network traffic. A cache threadpool does not run jobs; it just acts as a coordination point for Oracle Coherence. Without a cache threadpool, each member communicates with every other member, which can generate considerable network traffic when you have a complex network of members (including lots of active batch job threads).
- Create an administration threadpool with no threads to execute. This is just a configuration concept whereby you can connect to the JMX API via this member. The JMX API is available from any active threadpool, but it is good practice to isolate JMX traffic from other traffic.
- Create dedicated threadpools to cover key jobs and other pools for the remaining jobs. The advantage is in monitoring and controlling resources within each JVM.
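A cluster following these recommendations might be started along the following lines. The pool names are examples, the exact threadpoolworker options vary by release (and cache-member behaviour is set via bedit-generated configuration), so the script simply records indicative commands rather than executing them:

```shell
#!/bin/sh
# Indicative startup for the recommended batch cluster layout.
# Pool names and the -p POOL=threads convention are examples only;
# check bedit and the Server Administration Guide for your release.
: > cluster_startup.log
start_pool() {
  echo "threadpoolworker.sh $*" >> cluster_startup.log
}

# 1. One cache threadpool per machine: coordinates Coherence traffic,
#    runs no job threads (its cache role is set in the bedit configuration).
start_pool -p CACHE=0

# 2. An administration threadpool with no execution threads,
#    used only as a JMX connection point.
start_pool -p ADMIN=0

# 3. Worker pools: one for key jobs, one for everything else,
#    each with its own batch thread limit.
start_pool -p KEYJOBS=10
start_pool -p DEFAULT=5

cat cluster_startup.log
```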
In the Oracle Utilities Application Framework, the software is now installed in a mode called "Secure By Default". This has a number of connotations for new and existing installations:
- HTTPS is now the default protocol for accessing the product. The installer supplies a demonstration trust store and demonstration identity store that can be used for default installations.
- The permissions on the files and directories are now using common Oracle standards.
Now there are a few clarifications about these features:
- Customers upgrading from older versions will use the same regime for file permissions and access protocols as in past releases, for backward compatibility.
- Customers on past releases can convert to the new file and directory permissions using the "setpermissions" utility shipped with the product. The Administration and Security guides outline the new permissions.
- Customers on past releases can convert to the new HTTPS protocol as they could in past releases. The new keystore is provided as a way of adopting it quickly.
- We supply a basic certificate to be used for HTTPS. This demonstration certificate is limited in strength and scope (much the same scope and strength as the demonstration one supplied with Oracle WebLogic). It is not supported for use in production systems. It is recommended that customers who want to use HTTPS use a valid certificate from a valid certificate issuing authority or build a self-signed certificate. Note that if you use a self-signed certificate, some browsers may issue a warning upon login. Additionally, customers using native mode installations can use the Oracle WebLogic demonstration certificates as well.
- HTTPS was always supported in Oracle Utilities Application Framework. In past releases it was what is termed an opt-in decision (you opted in to use HTTPS). This meant that we installed using HTTP by default and you then configured HTTPS separately, with additional configuration on your domain. In this new release, we have shifted to an opt-out decision. We install HTTPS with a demonstration certificate as the default, and you must disable it with additional steps (basically, you do not specify an HTTPS port and only supply an HTTP port to reverse the decision). This is an opt-out decision as you are deciding to opt out of the secure setup. Whether to use HTTPS or HTTP is an implementation decision (we just default to HTTPS).
- Customers using native mode (or IBM WebSphere) can manage certificates from the console or command line utilities supplied by that product.
Secure By Default now ensures that Oracle Utilities Application Framework products are consistent with the installation standards employed by other Oracle products.
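As an example of moving off the demonstration certificate, a self-signed certificate (with the browser-warning caveat noted above) can be generated with openssl; the host name and organization below are placeholders, and for a WebLogic identity store the resulting key and certificate would then be imported into a keystore with keytool:

```shell
#!/bin/sh
# Generate a self-signed certificate for HTTPS (placeholder host name).
# Not for production: use a certificate from a valid issuing authority there.
HOST=ouaf.example.com

openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
    -keyout "${HOST}.key" -out "${HOST}.crt" \
    -subj "/CN=${HOST}/O=Example"

# Inspect the subject of the generated certificate.
openssl x509 -in "${HOST}.crt" -noout -subject
```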
This is the first in a series of articles on implementation advice to optimize the designs and configuration of Oracle Utilities products.
Early in my career my mentor at the time suggested that I expand my knowledge outside the technical area. The idea is that non-technical techniques would augment my technical knowledge. He suggested a series of books and articles that would expand my thinking. Today I treasure those books and articles and regularly reread them to reinforce my skills.
Recently I was chatting to customers about optimizing their interface designs using a technique typically called "Responsibility Led Design". The principle is basically that each participant in an interface has distinct responsibilities for the data interchanged, and it is important to make sure designs take this into account. This reminded me of one of my favorite books, "The One Minute Manager Meets The Monkey" by Ken Blanchard, William Oncken Jr. and Hal Burrows. I even have a copy of the audio version, which is both informative and very entertaining. The book was based on a very popular Harvard Business Review article entitled "Management Time: Who's Got The Monkey" and expands on that original idea.
To paraphrase the article, a monkey is a task that is not your responsibility but is somehow assigned to you. The term for this is the "monkey jumping on your back", or simply a "monkey on your back". This epitomizes the concept of responsibility.
So what has this got to do with design, or even Oracle Utilities products, you might ask?
One of the key designs for any implementation is sending data INTO the Oracle Utilities products. These are inbound interfaces, for obvious reasons. In every interface there is a source application and a target application. The responsibility of the source application is to send valid data to the target application for processing. Now, one of the problems I see with implementations is when the source application sends invalid data to the target. There are two choices in this case:
- Send back the invalid request - This means that if the data transferred from the source is invalid for the target, then the target should reject the data and ask the source to resend. Most implementations use various techniques to achieve this. This keeps the target clean of invalid data and ensures the source corrects its data before sending it off again. This is what I call correct behavior.
- Accept the invalid request (usually in a staging area) and correct it within the target for reprocessing - This means the data is accepted by the target, regardless of the error, and corrected within the target application to complete the processing.
More and more I am seeing implementations taking the latter design philosophy. This is not efficient, as the responsibility for data cleansing (the monkey in this context) has jumped onto the back of the target application. At this point, the source application has no responsibility for cleaning its own data and no real incentive to ever send clean data to the target, as the target now has the monkey firmly on its back. This has consequences for the target application, as it can increase the resource usage (human and non-human) needed to correct data errors from the source application. Some of the customers I chatted to found that while the volume of these types of transactions was initially low, the same errors kept being sent over time, and the cumulative effect of the data cleansing on the target started to get out of control. Typically, at this point, customers ask for advice to try and reduce the impact.
In an Oracle Utilities product world, this exhibits itself as a large number of interface To Do's, staging records and additional storage to manage. The latter is quite important, as implementations typically forget to remove completed transactions from the staging area once they have been corrected and applied. The products ship special purge jobs to remove completed staged transactions, and we recently added ILM (Information Lifecycle Management) support for staging records.
My advice to these customers is:
- Make sure that you assign invalid transactions back to the source application. This will ensure the source application maximizes the quality of its data and hopefully prevents common transaction errors from recurring. In other words, the monkey does not jump from the source to the target.
- If you choose to let the monkey jump on the target's back, for any reason, then use staging tables and make sure they are cleaned up to minimize the impact. Monitor the error rates and number of errors and ensure the source application is informed to correct the error trends.
In short, avoid the monkey in your inbound transactions. This will make sure the resources you allocate to both your source and target are responsible and are allocated in an efficient manner.
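A minimal sketch of the recommended behaviour (validate each inbound record and return invalid ones to the source, rather than staging and fixing them in the target), using an invented pipe-delimited record format:

```shell
#!/bin/sh
# Sketch of responsibility-led inbound processing: valid records are accepted,
# invalid ones are returned to the source application for correction.
# The pipe-delimited format and field rules are illustrative only.
cat > inbound.dat <<'EOF'
ACCT-1001|2024-01-15|150.00
ACCT-1002||75.50
ACCT-1003|2024-01-16|not-a-number
EOF

: > accepted.dat
: > rejected_for_source.dat

while IFS='|' read -r acct date amount; do
  # A record is valid only if every field is present and the amount is numeric.
  if [ -n "$acct" ] && [ -n "$date" ] && \
     printf '%s' "$amount" | grep -Eq '^[0-9]+(\.[0-9]+)?$'; then
    echo "$acct|$date|$amount" >> accepted.dat
  else
    # The monkey stays with the source: reject, rather than stage and fix locally.
    echo "$acct|$date|$amount" >> rejected_for_source.dat
  fi
done < inbound.dat
```

Here one record is accepted and two (a missing date, a non-numeric amount) go back to the source, keeping the target free of invalid data.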
One of the major activities in any implementation of an Oracle Utilities Application Framework based product is to manage extensions to the product. These are customizations that partners, customers or consultants have built to extend or alter the product to suit the needs of an individual site.
There are typically a number of components that constitute extensions:
- ConfigTools objects - In newer versions of Oracle Utilities Application Framework special configuration based objects were introduced to allow implementers to build configuration based objects such as Business Objects, Business Services, Service Scripts, Data Areas and UI Maps. These are typically managed using Bundling, Configuration Migration Assistant or ConfigLab. The latter is available for older versions of Oracle Utilities Customer Care And Billing as Configuration Migration Assistant was introduced in Oracle Utilities Application Framework V4.2 and above.
- Administration Data - One of the major features of the Oracle Utilities Application Framework is the ability to define business values, business rules and business logic in administration data. This data is typically available from the Administration menu of the product and is managed using Configuration Migration Assistant or ConfigLab. The latter is available for older versions of Oracle Utilities Customer Care And Billing as Configuration Migration Assistant was introduced in Oracle Utilities Application Framework V4.2 and above.
- Database Scripts - Database changes are new database objects delivered with the extensions that conform to the guidelines in the Oracle Utilities SDK and DBA Guides shipped with the products. They are typically managed by the database tools of choice at a site.
- Miscellaneous Files - One of the facilities in the Oracle Utilities Application Framework is the ability to extend the technical configuration with a set of custom templates and custom user-exits which include any extensions related to the technical configuration of the product. These are typically managed manually.
Now to minimize the impact of all these changes the following guidelines are recommended:
- For SDK based files, use the Oracle Utilities SDK tools to deploy your customizations. Avoid manually undeploying and building WAR/EAR files; this avoids manual effort and reduces manual errors. Do not manually splice and dice your code into the product files. Whilst it is technically possible to use jar and ant to build the files manually, using the SDK utilities is lower risk and ensures the customizations, regardless of complexity, are placed in the right locations.
- When using the SDK utilities, always build full releases but use the patch build facility to build smaller deployment files. Using these patches will greatly minimize the build times and subsequent deployment times.
- Avoid deploying the appViewer unless necessary. The appViewer is a tool helpful only for developers and administrators; it is not recommended for production use, or even for environments where developers are not working regularly. Administrators and other people can use an appViewer deployed on a local server, in offline mode, rather than one included in your implementation. The main reason is that the appViewer is a large set of documentation and takes time to rebuild, so excluding it will greatly reduce outages during patch installation as well as deployment. In newer versions of Oracle Utilities Application Framework, appViewer is a completely optional installation.
- If using ConfigLab or Configuration Migration Assistant, run the comparison ahead of time. This will mean that the application of the changes will be the only thing that needs to be applied. Ensure all changes have been preapproved or at least approved by the business prior to the application. If you are a customer that has access to both ConfigLab and Configuration Migration Assistant then use the latter as it is more logical and quicker for application of changes. Note: ConfigLab has been removed from Oracle Utilities Customer Care And Billing V2.5 and above.
- Use the Database Lifecycle Management Pack to manage database changes. Oracle ships an extension pack to track and manage database changes across your enterprise. Additionally third party solutions are also available to manage database change history.
- Centralize your templates and user exits for deployment. Custom templates and custom user exits can be environment specific or global depending on individual needs. If they are global, common configuration management practices can be used to copy them. It is recommended that copying of these files be done in the initial phases of the migration to take advantage of any rebuilds.
- Avoid multiple rebuilds. The application of changes in EAR/WAR files requires a rebuild of the files and so do other activities such as patching. By using the options on the initialSetup utility you can minimize the build and deployment process. This is documented in the Server Administration Guides (or Operations and Configuration Guides) shipped with your product.
- Consider using native installations rather than embedded installations. This is quite a saving. In the embedded installation the product must be down to build the EAR/WAR files as they need to be updated and are actively being used by the Oracle WebLogic server. You cannot update open files. In the native installation, the files are deployed in the Oracle WebLogic domain through the deployment (or update deployment process). This means you can build the EAR/WAR files when they are being used as they are copied during the deployment or update deployment process. The update deployment process can be executed from the WLST command line, Oracle WebLogic console or Oracle Enterprise Manager (on the Oracle WebLogic targets). This update process will take a far shorter time than a full load and deployment. In some cases this can be done live. Customers using Inbound Web Services already use this technique as it updates the deployment (effectively it is being copied to Oracle WebLogic's staging area to be used). For example, on test systems we notice that update deployment takes less than a minute (depending on the hardware of course).
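For the native installation case, the update-deployment step can be scripted with WLST. The domain URL, credentials and the application names below are placeholders that vary by site; the sketch writes the WLST script rather than executing it, since running it needs a live domain:

```shell
#!/bin/sh
# Sketch: refresh the deployed application in a native installation via WLST.
# Host, credentials and application names are placeholders.
cat > update_deploy.py <<'EOF'
# WLST: redeploy picks up the rebuilt EAR/WAR from its current source path
# without a full undeploy/deploy cycle.
connect('weblogic', '<password>', 't3://wlserver.example.com:7001')
redeploy('SPLService')   # application names are site-specific
redeploy('SPLWeb')
disconnect()
exit()
EOF

# On a real system you would then run, for example:
#   $MW_HOME/oracle_common/common/bin/wlst.sh update_deploy.py
echo "WLST script written: update_deploy.py"
```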
Over the series, the following configuration management topics will be covered:
- Concepts - General Concepts and Introduction.
- Environment Management - Principles and techniques for creating and managing environments.
- Version Management - Integration of Version control and version management of configuration items.
- Release Management - Packaging configuration items into a release.
- Distribution - Distribution and installation of releases across environments.
- Change Management - Generic change management processes for product implementations.
- Status Accounting - Status reporting techniques using product facilities.
- Defect Management - Generic defect management processes for product implementations.
- Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation.
- Implementing Service Packs - Discussion on the service packs and how to use them in an implementation.
- Implementing Upgrades - Discussion on the upgrade process and common techniques for minimizing the impact of upgrades.
Two new whitepapers are available for Oracle Utilities Application Framework based products from My Oracle Support.
The new whitepapers are:
- Oracle Application Testing Suite for Oracle Utilities - Overview (Doc Id: 2014163.1) - This is an overview of the Functional and Load Testing accelerator released for Oracle Utilities applications, which uses Oracle Application Testing Suite to accelerate functional, regression, load and performance testing. This whitepaper outlines the features of the accelerator, the role Oracle Application Testing Suite plays in the testing process, and a frequently asked questions section to clarify aspects of the accelerator.
- Oracle Utilities Application Framework - Keystore Configuration (Doc Id: 2014161.1) - This is an overview of keystore management for Oracle Utilities Application Framework based products. The keystore is used for advanced security of transport and for inbuilt application encryption.
One of the major advantages of the Oracle WebLogic Server Management Pack Enterprise Edition is the JVM Diagnostics (JVMD) engine. This tool allows Java internals from JVMs to be sent to Oracle Enterprise Manager for analysis. It has a lot of advantages:
- It provides class level diagnostics for all classes executed, including base and custom classes.
- It provides end to end diagnostics when the engine is deployed with the application and the database.
- It has minimal impact on performance, as the engine uses the JVM monitoring APIs in memory.
It is possible to use JVMD with Oracle Utilities Application Framework in a number of ways:
- It is possible to deploy JVMD agent to the WebLogic servers used for the Online and Web Services tiers.
- It is possible to deploy the JVMD database agent to the database to capture the code execution against the database.
- It is possible to use the standalone JVMD agent within threadpoolworkers to gather diagnostics for batch.
This article will outline the general process for deploying JVMD on the online servers. The other techniques will be discussed in future articles.
The architecture of JVMD can be summarized as follows:
- JVMD Manager - A coordination and collection node that collates JVM diagnostic information sent by JVMD Agents attached to JVMs. This manager exposes the information to Oracle Enterprise Manager. The Manager can be installed within an OMS or standalone, and multiple JVMD Managers are supported for large networks of agents.
- JVMD Agents - A small Java based agent, deployed within the JVM it is monitoring, that collects Java diagnostics (primarily from memory, to minimize the performance impact of collection) and sends them to a JVMD Manager. Each agent is hardwired to a particular JVMD Manager. JVMD Agents can be deployed to J2EE containers, standalone JVMs and the database.
The diagram below illustrates this architecture:
Before starting the process, ensure that the Oracle WebLogic Server Management Pack Enterprise Edition is licensed and installed (manually or via Self Update).
- Install the JVMD Manager - Typically the JVMD Manager is deployed to the OMS server, but it can also be deployed standalone, and/or multiple JVMD Managers can be installed for larger numbers of targets. There is a video from the Oracle Learning Library on YouTube explaining how to do this step.
- Deploy the JVMD Agent to the Oracle WebLogic Server housing the product online, via the Middleware Management function within Oracle Enterprise Manager, using the Application Performance Management option. This will add the agent to your installation. There is a process for deploying the agent automatically to a running WebLogic Server; again, there is a YouTube video describing this technique.
Once the agent is installed, the JVMD agent will start sending diagnostics for Java code running within that JVM to Oracle Enterprise Manager.
Customers using the Oracle Application Management Pack for Oracle Utilities will see the JVMD link from their Oracle Utilities targets (it is also available from the Oracle WebLogic targets). For example:
After selecting the Java Virtual Machine Pool for the server, you will get access to the full diagnostic information.
This includes historical analysis.
JVMD is a useful tool for identifying bottlenecks in code and in the architecture. In future articles I will add database diagnostics and batch diagnostics to get a full end to end picture of diagnostics.