Jan Kettenis
How to Keep Your Business Process Looking Simple

Thu, 2015-11-12 12:32
There are two key words in Business Process Management Notation (and Language), or BPMN for short, that very often seem to be missed. The first is "business", the second "management". In this posting I will discuss the significance of the first, and how you are in control of that.

In BPMN the word "business" is not meant to express that it is "just" about modeling business processes. The idea is also that these models should be understandable by, or even created by, the business. Now one can argue that with respect to the latter BPMN does not always seem to deliver on that promise, or at least not for every business. But I know of a few cases where the business analyst creates the non-technical versions of the model (levels 1 and 2, as Bruce Silver would call them), and I know of a significant number of cases where the business, or at least the analyst, is able to understand BPMN process models. That is to say, if these models have not been cluttered with technical details.

Unfortunately this cluttering happens quicker than you would wish, and too often the executable process models are almost beyond comprehension for the business, while there is no good reason for that. That is too bad, because you then miss the opportunity to have the executable process model validated by that business. Observing how process modeling is done on some of my projects, I unfortunately have to conclude that quite a few people are not aware of the problem, or don't know how to prevent it. As I have not (yet) found any reference that gives a comprehensive overview of the options offered by the Oracle BPM Suite that can help you out, I discuss them in the following.

Embedded Sub-Process

The embedded sub-process is one of the options that most people are aware of, and (generally) it is reasonably well used. In the example below an embedded sub-process named "Store Order" contains a script activity "Create Message Header" that constructs the header for the message to be used in the service call activity "Save Order Data". By simply collapsing the embedded sub-process, the technical details of how an order is stored can be hidden from the business, which typically does not want to know that a header needs to be created. One could argue they should not even be interested in the fact that this is done synchronously (using a service activity) instead of asynchronously (using a send and receive activity), which is also conveniently hidden by the embedded sub-process.

Apart from using them to hide technical details, embedded sub-processes can also be used to determine a scope. This can be done from a business perspective (for example, to determine a scope of activities that might be repeated, or for which multiple instances should be handled in parallel), but also from a technical perspective (for example, as a scope for temporary variables or exception handling).

The issue I often see with embedded sub-processes in practice is that developers very often do not bother to collapse them, still exposing technical details to the business.

One should be aware of a couple of aspects concerning embedded sub-processes. The first is that they are not reusable (meaning you cannot use them elsewhere in the same or any other process model). The second is that they come with a little overhead from an audit perspective, as every embedded sub-process results in two extra entries (one for its start and one for its end).

Reusable Sub-Process

A reusable sub-process is created as a separate process. The only things that distinguish it from other types of processes are that it has a none start as well as a none end event, and that it cannot have an initiator activity. As the name already suggests, a reusable sub-process is never started directly, but only by calling it from some parent process. This is done with the Call activity.

Going back to the step in the example where we want to save order data, let's assume the order has to be updated more than once, which makes it a typical candidate for reuse. In the following example a reusable sub-process "Order Storage" has been created that contains this functionality. It has been made a little bit more complex by including a notification activity that will notify the sales representative every time an update of the order has taken place.

The reusable sub-process has access to the project variables (by value), as well as to its own process variables. In other words, the reusable sub-process has access to the "order" project variable. A choice has been made to pass the email address of the person to be notified as an argument. In the reusable sub-process this email address is stored in a (local) "email" process variable.

The choice to define a variable at project versus process level should be made carefully. Project variables are global variables with the following properties:
  • In case of functionality that is executed in parallel, one should be careful that the parallel threads do not make conflicting changes to the same project variable.
  • Simple type project variables are mapped to protected attributes (also known as mapped attributes or flex fields), of which there is a limited number (for example, 20 protected text attributes). Their values are stored in separate columns (instead of being part of the process payload).
  • The lifespan of a project variable is from its initialization up to the end of the (main) process instance.
Like an embedded sub-process, a reusable sub-process is executed in the same thread. A reusable sub-process is only reusable within the same BPM project (composite) and cannot be shared with other projects. It also adds a little bit more auditing overhead than an embedded sub-process does.

Finally, up to version 12.1.2 a Call activity in a BPM project makes it incompatible with any other revision, meaning that you cannot migrate instances. Period. Not even when you deploy the same revision without changing any bit of your code. For most customers I work with, this is a major limitation, and some therefore choose not to use reusable sub-processes.

Process As a Service

The next alternative to a reusable sub-process is the process-as-a-service, which means that you start it with a message start event or send activity. Any response is returned by a message end event or receive activity. As long as the process-as-a-service is part of the same BPM project (composite), it can make use of the project variables, but only of their definitions, not their values. So all data has to be mapped to and from the process. You can put the process in the same composite, or in a composite of its own. The reason to do the latter would be reuse across composites. When it is in a separate composite, you can reuse neither the business objects nor the project variable definitions.

From a functional perspective, the process-as-a-service is equivalent to a reusable sub-process. From a technical perspective it requires more work if you implement it in a separate composite, and it will add extra overhead to auditing (not only BPM auditing; every instance will also have its own entry in the COMPOSITE_INSTANCE and CUBE_INSTANCE tables). In 11g you will also have to create some custom mechanism to propagate cancellation of the parent instance to child instances, but in 12c this is done automatically.

Detail Activity
Since 12c you can "detail" an activity. With that you can hide logic that is tightly related to an activity, but has to be done using an activity of its own. From the outside a detailed activity looks like any other activity, and it keeps the original icon associated with it. That it is detailed you can see by a + sign at the bottom, very much like an embedded sub-process. And basically that is what it is: a specialized embedded sub-process. You can even have local variables, and in the structure pane it is represented as an embedded sub-process. Again, to keep the business process a "business" process, you should try not to get over-excited and put all sorts of logic in it that really belongs somewhere else. Use it only for logic that is tightly coupled to the main activity, but not of any importance to the business.

In the following example I have implemented a call to some service that has to happen right after the user activity. It is a technical service call that we don't want to bother the business with, as it concerns a call to a service to confirm the order to the customer. As far as the business is concerned, this is an integral part of the Contact Provider activity, and they should not care if that service is called from the UI or from the process for that matter.

Hope you can make good use of this, and let me know if you have any other suggestions!

Oracle SOA/BPM: Payload Validation per Composite

Fri, 2015-10-23 12:14
In this article I will explain how you can enable payload validation in the Oracle SOA/BPM Suite per composite, both at design time and at deployment time. This works for 11g as well as 12c.

When developing BPM processes or SOA services, it is advisable to enable payload validation on the development server. The reason is that this will force you to work with more representative test data, and on some occasions it will help you prevent coding errors (like assigning a string to an integer, or forgetting to map mandatory data in a call). Especially where you have to communicate with external systems this can become very important, not to speak of the situation where payload validation is enforced, for example by a service bus.

Preferably you have payload validation switched on from the beginning, starting with the development server, but ideally also for the test server(s). Normally you would leave it off (the default) for production and for load and stress test environments (for performance reasons).

However, sometimes you find yourself in a situation where existing composites already violate one or more XML rules. This can make it practically impossible to switch payload validation on for the whole server. You will then have to do it on a composite-by-composite basis. Fortunately this is supported out-of-the-box by the validateSchema property you can set on a composite, as shown below:
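As a sketch, the property goes into the composite.xml of the project; the composite name below is made up, and the exact attributes may differ per version:

```xml
<!-- composite.xml (fragment); "OrderProcessing" is a hypothetical composite name -->
<composite name="OrderProcessing"
           xmlns="http://xmlns.oracle.com/sca/1.0">
  <!-- switch on payload validation for this composite only -->
  <property name="validateSchema" many="false">true</property>
</composite>
```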

Assuming that you use configuration plans per environment you deploy to, you can switch it on for any environment you want to enable it for, using the following entry in the configuration plan:
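A minimal sketch of such a configuration plan entry, assuming the standard property replacement mechanism of configuration plans (the wildcard applies it to every composite the plan is used for):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<SOAConfigPlan xmlns="http://schemas.oracle.com/soa/configplan">
  <composite name="*">
    <!-- overwrite the validateSchema property at deployment time -->
    <property name="validateSchema">
      <replace>true</replace>
    </property>
  </composite>
</SOAConfigPlan>
```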
When deployed, payload validation will automatically have been enabled for the composite, preventing you from having to do so manually every time you deploy.

No excuses for those lazy developers hiding behind someone else's badly written code!

Oracle SOA/BPM: What are Business Faults Really?

Wed, 2015-09-16 13:12
You may have read that it is a best practice to let a service return a "business fault" as a fault. In this article I point out some pitfalls with this "best practice", and will argue that you should have a clear understanding of what "business fault" means before you start applying it. The examples are based upon the Oracle SOA Suite 11g, but apply as well to 12c.

To allow the consumer to recognize a specific fault, you add it as a fault to the WSDL. This looks similar to the following:
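The sketch below shows what such a declaration could look like in the WSDL; the operation, message, and fault names are made up for illustration:

```xml
<!-- portType fragment with two specific business faults (hypothetical names) -->
<wsdl:operation name="find">
  <wsdl:input message="tns:findCustomerRequestMessage"/>
  <wsdl:output message="tns:findCustomerResponseMessage"/>
  <!-- each fault can be caught individually by the consumer -->
  <wsdl:fault name="CustomerNotFoundFault" message="tns:customerNotFoundFaultMessage"/>
  <wsdl:fault name="InvalidCustomerIdFault" message="tns:invalidCustomerIdFaultMessage"/>
</wsdl:operation>
```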

Sometimes you see that for every individual error situation a specific fault is defined; in other cases you might find that all errors are wrapped in some generic fault with a code to differentiate between them. In the example above two different, specific faults are defined.

What you should realize is how a business fault manifests itself in Enterprise Manager. When you throw a fault from a service, it will be represented as a BusinessFault in the flow trace of the consumer (but not in the Recent Faults and Rejected Messages section):

Any instance of the consumer that threw a fault will have an instance state that is flagged as faulted.

Now, if the fault really concerns an error, meaning some system exception or an invalid invocation of the service by the consumer (e.g. wrong values for some of the arguments), then that probably is exactly how you would like it to respond. Such errors should stand out in EM, because you probably either have some issue in the infrastructure (e.g. some service not being available) or some coding error. However, what I also see in some of the examples you can find on the internet, as well as in practice, is that faults are thrown in situations that do not really concern an error. For example, for some CustomerService.find() operation a fault is returned when no customer could be found.

The problem with such a practice is that this type of error generally is of no interest to the systems administrator. In the Oracle SOA/BPM Suite 11g there is an option to search on Business Faults Only or System Faults Only, but that does not work. So when thrown often enough, these "pseudo errors" start cluttering the administrator's view. The log files are equally cluttered:

This cluttering of EM and the logs introduces the risk that systems administrators can no longer tell the real errors from these faults, and may no longer take them very seriously. Exactly the opposite of what you want.

But systems administrators are not the only ones suffering from this. BPM developers are also very often confronted with tough challenges when integrating such services in their process model. For example, look at the following model:

In this example the service throws two faults that are not really errors, but just some result that you may expect from the service. Each fault has to be handled, but in a different way. At the top of the service calls there are two Boundary Error events, one for each type of error. In the case of BusinessFaults you either have to catch each one individually, or have one event that catches all business faults:

Unlike with system exceptions, there is no way to do both at the same time.

A BusinessFault manifests itself as a fault in the flow trace of the business process, suggesting that something went wrong while that is not the case at all.

Given this issue of making the process model less clear, and of cluttering the flow trace, I prefer handling such "errors" as a normal response instead, as is done on the right-hand side of the service call in the process model. I used an exclusive gateway to filter them out of the normal flow, making it easier to follow how the process responds to them.

By the way, the "faultCode" and "faultString" elements are available because I defined them as elements of the fault thrown from the BPEL process I use in the example. When you define a Business Exception object in BPM, then by default you only have a single "errorInfo" element at your disposal:

As I explain in this article, you can customize a Business Exception object by manually modifying its XSD.
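A sketch of what such a manually extended exception schema could look like; the element names other than "errorInfo" are illustrative additions, not generated by the tooling:

```xml
<!-- hypothetical Business Exception XSD, extended by hand -->
<xsd:element name="OrderFault">
  <xsd:complexType>
    <xsd:sequence>
      <!-- the only element generated by default -->
      <xsd:element name="errorInfo" type="xsd:string"/>
      <!-- extra elements added manually -->
      <xsd:element name="faultCode" type="xsd:string"/>
      <xsd:element name="faultString" type="xsd:string"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```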

I included the handling of system exceptions (remoteFault and other system exceptions, at the bottom) in the process model only for the sake of the example. Rather than handling system faults in the process, you should use the Fault Management Framework. However, using this framework is not an option for the two BusinessFaults in the example.

In the case of a system exception you have a couple of out-of-the-box elements at your disposal, but unfortunately these are not the same for a specific exception as for a catch-all:

Conclusion: the (real) best practice is to only throw a fault when it really concerns an error that is of interest to a systems administrator. Any other type of error should be returned as a normal response.

For example, for the CustomerService.find() operation you could choose to return an element that only contains a child node when a customer is found, with an extra element "noOfCustomersFound" that returns 0 when none exists; or use some choice element that either returns the customers found, or some other element with a text like "customer not found".
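A sketch of the first option; all names are illustrative, not taken from an actual service:

```xml
<!-- hypothetical response element: no fault needed when nothing is found -->
<xsd:element name="findCustomerResponse">
  <xsd:complexType>
    <xsd:sequence>
      <!-- 0 when no customer matched; the consumer checks this instead of catching a fault -->
      <xsd:element name="noOfCustomersFound" type="xsd:int"/>
      <xsd:element name="customer" type="tns:tCustomer"
                   minOccurs="0" maxOccurs="unbounded"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```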

More information:
"Fault Handling and Prevention, Part 1", Guido Schmutz & Ronald van Luttikhuizen
"SOAP faults or results object", discussion on The Server Side

Oracle SOA/BPM 12c: Propagation of Flow Instance Title and Instance Abortion

Wed, 2015-08-12 12:23
Recently I wrote this posting regarding an improvement for setting the title of a flow instance in Oracle BPEL and BPMN 12c. In this posting I will discuss two related improvements that come with the SOA/BPM Suite 12c: flow instance abortion is automatically propagated from one instance to the other, and so is the flow instance title. Or, more precisely, for every child instance the initiating instance is shown together with its name.

Since 12c the notion of composite instance is superseded by that of flow instance, which refers to the complete chain of calls starting from one main instance to any other composite, and further. Every flow has a unique flowId which is automatically propagated from one instance to the other.

Propagation of Flow Instance Title

This propagation does not only apply to the flowId, but also to the flowInstanceTitle, meaning that if you set the flowInstanceTitle for the main instance, all called composites automatically get the same title.

So if the flowInstanceTitle is set on the main instance:
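In BPEL that can be done with an XPath extension function in an assign; the sketch below assumes the 12c ora:setFlowInstanceTitle() function discussed in the earlier posting, and a made-up "orderId" variable (the function's return value has to be copied somewhere, here a dummy string variable):

```xml
<!-- BPEL fragment (sketch): set the flow instance title from process data -->
<assign name="SetFlowInstanceTitle">
  <copy>
    <from>ora:setFlowInstanceTitle(concat('Order ', $orderId))</from>
    <to>$flowTitleDummy</to>
  </copy>
</assign>
```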

Then you will automatically see it for every child instance as well:

Trust but verify is my motto, so I tried it for a couple of combinations of composite types calling each other, including:
  • BPM calling BPEL calling another BPEL
  • BPM initiating another composite with a Mediator and BPEL via an Event
  • Mediator calling BPEL

Flow Instance Abortion

When you abort the instance of the parent, all child instances are aborted as well.

In the following flow trace you see a main BPM process that:
  1. Kicks off a (fire & forget) BPEL process
  2. Throws an Event that is picked up by a Mediator
  3. Calls another BPM process
  4. Schedules a human task

In its turn, the BPEL process in step 1 kicks off another BPEL process (request/response). Finally, the BPM process in step 3 also has a human task:

Once the instance of the main process is aborted, all child instances are automatically aborted as well, including all Human Tasks and composites that are started indirectly.

The flip side of the coin is that you will not be able to abort any individual child instance. When you go to a child composite, select a particular child instance, and abort it, the whole flow will be aborted. That is different from how it worked in 11g, and I can imagine this will not always meet the requirements you have.
Another thing that I find strange is that the Mediator that is started by means of an event is aborted even when the consistency level is set to 'guaranteed' (which means that event delivery happens in a local instead of a global transaction). Even though an instance is aborted, you may have a requirement to process that event.
But all in all, it is a lot easier to get rid of a chain of process instances than it was with 11g!