Darwin IT
Darwin-IT professionals do ICT projects based on a broad range of Oracle products and technologies. We write about our experiences and share our thoughts and tips.
Javascript in ANT
Earlier I wrote about an ANT script to scan JCA adapter files in your project home, Subversion working copy or GitHub local repo.
In my current project we use sensors to kick off message-archiving processes, without cluttering the BPEL process. I'm not sure I would do it like that on a new project, but technically the idea is interesting. Unfortunately, we did not build a registry of which BPEL processes make use of it and how. So I thought about how I could easily scan for that, and found that, based on the script to scan JCA files, I could easily scan all the BPEL sensor files. If you have found the project folders, like I did in the JCA scan script, you can search for the *_sensor.xml files.
So in a few hours I had a basic script. Now, in a second iteration, I would like to know what sensorActions the sensors trigger. For that I need to interpret the accompanying *_sensorAction.xml file. Therefore, based on the found sensor filename, I need to determine the name of the sensor action file.
The first step is to figure out how to do a substring in ANT. With a quick Google on "ant property substring", I found a nice StackOverflow thread, with a nice example of an ANT script definition based on JavaScript:
<scriptdef name="substring" language="javascript">
<attribute name="text"/>
<attribute name="start"/>
<attribute name="end"/>
<attribute name="property"/>
<![CDATA[
var text = attributes.get("text");
var start = attributes.get("start");
var end = attributes.get("end") || text.length();
project.setProperty(attributes.get("property"), text.substring(start, end));
]]>
</scriptdef>
And that can be called like:
<substring text="${sensor.file.name}" start="0" end="20" property="sensorAction.file.name"/>
<echo message="Sensor Action file: ${sensorAction.file.name}"></echo>
The javascript substring() function is zero-based, so the first character is indexed by 0.
Not every sensor file name has the same length; the file is named after the BPEL file that it is tied to. So to get the base name, the part without the "_sensor.xml" postfix, we need to determine the length of the filename. A script that determines that can easily be extracted from the script above:
<scriptdef name="getlength" language="javascript">
<attribute name="text"/>
<attribute name="property"/>
<![CDATA[
var text = attributes.get("text");
var length = text.length();
project.setProperty(attributes.get("property"), length);
]]>
</scriptdef>
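And a call to that could look like this (a minimal sketch; the property name is just an example):
<getlength text="${sensor.file.name}" property="sensor.file.name.length"/>
<echo message="Sensor file name length: ${sensor.file.name.length}"></echo>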
Perfect! Using this I could create the logic in ANT to determine the sensorAction file name. However, I thought it would be easier to determine the filename in JavaScript all the way, using the strength of the proper language at hand:
<!-- Script to get the sensorAction filename based on the sensor filename:
1. Cut the extension "_sensor.xml" from the filename.
2. Add "_sensorAction.xml" to the base filename.
-->
<scriptdef name="getsensoractionfilename" language="javascript">
<attribute name="sensorfilename"/>
<attribute name="property"/>
<![CDATA[
var sensorFilename = attributes.get("sensorfilename");
var sensorFilenameLength = sensorFilename.length();
var postfixLength = "_sensor.xml".length();
var sensorFilenameBaseLength=sensorFilenameLength-postfixLength;
var sensorActionFilename=sensorFilename.substring(0, sensorFilenameBaseLength)+"_sensorAction.xml";
project.setProperty(attributes.get("property"), sensorActionFilename);
]]>
</scriptdef>
And then I can get the sensorAction filename as follows:
<getsensoractionfilename sensorfilename="${sensor.file.name}" property="sensorAction.file.name"/>
<echo message="Sensor Action file: ${sensorAction.file.name}"></echo>
Superb! I found ANT a powerful language/tool already, but with a few simple JavaScript snippets you can extend it easily.
Notice, by the way, the use of XSLT in the Scan JCA adapters files article. You can read XML files as properties, but to do that conveniently you need to transform a file like the sensors.xml in a way that lets you easily reference the properties following the element hierarchy. This is also explained in the Scan JCA adapters files article.
I'll go further with my sensors scan script. Maybe I'll write about it when done.
My Weblogic on Kubernetes Cheatsheet, part 1.
Last week I had the honour to present at the UKOUG TechFest 19, together with, I think I can say now, my 'partner in crime' Simon Haslam. We combined our sessions into a part 1 and a part 2.
For me this presentation is the result of having done a workshop at the PaaSForum in Mallorca, and then working that into a setup where I was able to run the MedRec Weblogic sample application against a managed database under Kubernetes.
Kubernetes Weblogic Operator Tutorial
I already wrote a blog about my workshop at the PaaSForum this year, but Marc Lameriks from Amis did a walkthrough of the workshop. It basically comes down to this tutorial, which you can do as a self-paced tutorial. Or check out a Meetup in your neighbourhood. If you're in the Netherlands, we'll be happy to organize one, or if you like I could come over to your place and we could set something up. See also the links at the end of part 2 of our presentations for more info on the tutorial.
I did the tutorial more or less three times now: once at the PaaSForum, and then I re-did it, but deliberately changed namespace names, the domain name, etc., just to see where the dependencies are, and actually to see where the pitfalls are. It's based on my method of getting to know an unfamiliar city: deliberately get lost in it. Two years ago we moved to another part of Amersfoort. To get to know my new neighbourhood, I often took a different way home than the way I came. And this is basically what I did with the tutorial too.
The last time I did it was to try to run a more real-life application with an actual database. Therefore I set up a new OKE cluster, this time in a compartment of our own company cloud subscription. Interesting in that is that you work with a normal customer-like subscription within a compartment. Another form of a deliberate detour. But also to set up a database and see that configuration overrides to change your runtime datasource connection pool actually work.
Cheatsheet
When doing the tutorial, you'll find that besides all the configuration on the Cloud pages to set up your OKE cluster and configure Oracle Pipelines, you'll have to enter a lot of command-line commands. Most of them are kubectl commands, some helm, and a bit of OCI command-line interface. Doing it the first time, I soon got lost in what they mean and what I was doing with them. Also, most kubectl commands work with namespaces, where your Weblogic domain has another namespace than the Weblogic Operator. And as is my habit nowadays, I soon put the commands in smart but simple scripts. Those I want to share with you. Maybe not all, but at least enough so you'll get the idea.
I also found the official kubernetes.io kubectl cheat sheet and this one on github. But those are more explanations of the particular commands.
I found it helpful to set up this cheatsheet following the tutorial. I guess this helps in relating the commands to what they're meant for.
Shell vs. Alias
At the UKOUG TechFest, someone pointed out that you could use aliases too. Of course. You could do an alias like:
alias k=kubectl
However, you'll still need to extend every command with the proper namespace, pod naming, etc.
Therefore, I took the approach of creating an oke_env.sh script that I can include in every script, and a property file to store the credentials that are to be put in secrets. Then I call (source) the oke_env.sh script in every other script.
Setup Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure
These scripts refer to the first part of the tutorial: 0. Setup Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure.
oke_env.sh
It all starts with my oke_env.sh. Here you'll find all the variables that are used in most of the other scripts. In a next iteration I would probably move OCID_USER, OCID_TENANCY and OCID_CLUSTERID to my credential properties file, but I only introduced that file later on, during my experiments.
#!/bin/bash
echo Set OKE Environment
export OCID_USER="ocid1.user.oc1..{here goes that long string of characters}"
export OCID_TENANCY="ocid1.tenancy.oc1..{here goes that other long string of characters}"
export OCID_CLUSTERID="ocid1.cluster.oc1.eu-frankfurt-1.{yet another long string of characters}"
export REGION="eu-frankfurt-1" # or your other region
export CLR_ADM_BND=makker-cluster-admin-binding
export K8S_NS="medrec-weblogic-operator-ns"
export K8S_SA="medrec-weblogic-operator-sa"
export HELM_CHARTS_HOME=/u01/content/weblogic-kubernetes-operator
export WL_OPERATOR_NAME="medrec-weblogic-operator"
export WLS_DMN_NS=medrec-domain-ns
export WLS_USER=weblogic
export WLS_DMN_NAME=medrec-domain
export WLS_DMN_CRED=medrec-domain-weblogic-credentials
export OCIR_CRED=ocirsecret
export WLS_DMN_YAML=/u01/content/github/weblogic-operator-medrec-admin/setup/medrec-domain/domain.yaml
export WLS_DMN_UID=medrec-domain
export MR_DB_CRED=mrdbsecret
export ADM_POD=medrec-domain-adminserver
export MR1_POD=medrec-domain-medrec-server1
export MR2_POD=medrec-domain-medrec-server2
export MR3_POD=medrec-domain-medrec-server3
export DMN_HOME=/u01/oracle/user_projects/domains/medrec-domain
export LCL_LOGS_HOME=/u01/content/logs
export ADM_SVR=AdminServer
export MR_SVR1=medrec-server1
export MR_SVR2=medrec-server2
export MR_SVR3=medrec-server3
credentials.properties
This stores the most important credentials, which allows me to keep them out of the scripts. However, as mentioned, I should also move the OCID_USER, OCID_TENANCY and OCID_CLUSTERID variables to this file.
weblogic.user=weblogic
weblogic.password=welcome1
ocir.user=my.email@address.nl
ocir.password=my;difficult!pa$$w0rd
ocir.email=my.email@address.nl
oci.tenancy=ourtenancy
oci.region=fra
db.medrec.username=MEDREC_OWNER
db.medrec.password=MEDREC_PASSWORD
db.medrec.url=jdbc:oracle:thin:@10.11.12.13:1521/pdb1.subsomecode.medrecokeclstr.oraclevcn.com
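To give an idea of how such a property file can be used (a minimal sketch and my own assumption, not necessarily how the actual scripts read it), a script could grep a value from it and feed it to kubectl to create a secret:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
# read a single property value from credentials.properties
WLS_PASSWORD=$(grep '^weblogic.password=' $SCRIPTPATH/credentials.properties | cut -d'=' -f2-)
# and use it, for instance, to create the Weblogic domain credentials secret
kubectl create secret generic $WLS_DMN_CRED -n $WLS_DMN_NS --from-literal=username="$WLS_USER" --from-literal=password="$WLS_PASSWORD"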
create_kubeconfig.sh
After having set up the OKE cluster in OCI and configured your OCI CLI, the first actual command you issue is to create a kubeconfig file, using the OCI CLI. Normally, this one is executed only once for every setup. So this script is merely there to document my commands:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo "Create Kubeconfig -> Copy command from Access Kube Config from cluster"
mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id $OCID_CLUSTERID --file $HOME/.kube/config --region $REGION --token-version 2.0.0
The SCRIPTPATH variable declaration is a trick to be able to refer to other scripts relative to that variable. Then, as you will see in all my subsequent scripts, I source the oke_env.sh script, so that I can refer to the particular variables in the oci command. Therefore, as described in the tutorial, you should note down your cluster OCID and update the OCID_CLUSTERID variable in the oke_env.sh file, as well as the REGION variable.
Note by the way, that recently Oracle Kubernetes Engine upgraded to only support the Kubeconfig token version 2.0.0. See also this document.
getnodes.sh
This one is a bit dumb, and could just as easily have been created as an alias:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s nodes
kubectl get node
Even the call to oke_env.sh doesn't really add anything here, but it is the base for the other scripts, and as soon as namespaces come into play it makes sense.
create_clr_rolebinding.sh
The last part of setting up the OKE cluster is to create a role binding. This is done with:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Create cluster role binding
echo kubectl create clusterrolebinding $CLR_ADM_BND --clusterrole=cluster-admin --user=$OCID_USER
kubectl create clusterrolebinding $CLR_ADM_BND --clusterrole=cluster-admin --user=$OCID_USER
Install WebLogic Operator
The second part of the tutorial is about setting up your project environment with GitHub and having Oracle Pipelines build your project's image. This is not particularly related to K8s, so no relevant scripts there.
The next part of the tutorial is about installing the operator: 2. Install WebLogic Operator.
create_kubeaccount.sh
Installing the Weblogic Operator is done using Helm. As far as I understand, Helm is a sort of package manager for Kubernetes. A funny thing in the naming is that where Kubernetes is Greek for the steersman of a ship, a helm is the steering device of a ship. Helm makes use of Tiller, the server-side part of Helm. A tiller is the "steering stick" or lever that operates the helm. (To be honest, to me it feels a bit the other way around; I guess I would have named the server side Helm and the client Tiller.)
The first step is to create a Helm cluster-admin role binding, a kubernetes namespace for the Weblogic Operator and a serviceaccount within that namespace. To do so the script create_kubeaccount.sh does the following:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Create helm-user-cluster-admin-role
cat << EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
echo Create namespace $K8S_NS
kubectl create namespace $K8S_NS
echo kubectl create serviceaccount -n $K8S_NS $K8S_SA
kubectl create serviceaccount -n $K8S_NS $K8S_SA
install_weblogic_operator.sh
Installing the Weblogic operator is done with this script. Notice that you need to execute the helm command within the folder in which you checked out the Weblogic Operator github repository.
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Install Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm install kubernetes/charts/weblogic-operator \
--name $WL_OPERATOR_NAME \
--namespace $K8S_NS \
--set image=oracle/weblogic-kubernetes-operator:2.3.0 \
--set serviceAccount=$K8S_SA \
--set "domainNamespaces={}"
cd $SCRIPTPATH
The script cds to the Weblogic Operator local repository and executes helm. At the beginning of the script the current folder is saved as SCRIPTPATH; after running the helm command, it does a cd back to it.
delete_weblogic_operator.sh
During my investigations the Weblogic Operator was upgraded. If you take a closer look at the command in the tutorial, you'll notice that the image used there is oracle/weblogic-kubernetes-operator:2.0, but I used oracle/weblogic-kubernetes-operator:2.3.0 in the script above.
I found it useful to be able to delete the operator, so that I could re-install it again. To delete the Weblogic Operator run the delete_weblogic_operator.sh script:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Delete Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm del --purge $WL_OPERATOR_NAME
cd $SCRIPTPATH
Again in this script the helm command is surrounded by a cd to the helm charts folder of the Weblogic Operator local github repository, and back again to the current folder.
getpods.sh
After having installed the Weblogic Operator, you can list the pods of the kubernetes namespace it runs in, using this script:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo Get K8s pods for $K8S_NS
kubectl get po -n $K8S_NS
list_wlop.sh
You can check the Weblogic Operator installation by performing a helm list of the Weblogic Operator charts. I wrapped that into this script:
#!/bin/bash
SCRIPTPATH=$(dirname $0)
#
. $SCRIPTPATH/oke_env.sh
echo List Weblogic Operator $WL_OPERATOR_NAME
cd $HELM_CHARTS_HOME
helm list $WL_OPERATOR_NAME
cd $SCRIPTPATH
Conclusion
If you have followed the workshop, and maybe used my scripts, up to now you have installed the Weblogic Operator. Let's not make this article too long and call this Part 1, and quickly move on to part 2, to install, configure and monitor the rest of the setup. Maybe at the end I'll move these contents to an easy-to-navigate set of articles.
Create a Vagrant box with Oracle Linux 7 Update 7 Server with GUI
Yesterday and today I have been attending the UKOUG TechFest '19 in Brighton. And it got me eager to try things out, for instance the new Oracle DB 19c features. For that I should update my Vagrant boxes to be able to install one. But I realized my base box is still on Oracle Linux 7U5, and so I wanted to have a fresh, latest OL 7U7 box.
Use Oracle's base box
Now, last year I wrote about how to create your own Vagrant base box: Oracle Linux 7 Update 5 is out: time to create a new Vagrant Base Box. So I could create my own, but already quite some time ago I found out that Oracle supplies those base boxes itself.
They're made available at https://yum.oracle.com/boxes, and there are boxes for OL6, OL7 and even OL8. I want to use OL 7U7, and thus I got started with that one. It's neatly described at the mentioned link and it all comes down to:
$ vagrant box add --name <name> <url>
$ vagrant init <name>
$ vagrant up
$ vagrant ssh
And in my case:
$ vagrant box add --name ol77 https://yum.oracle.com/boxes/oraclelinux/ol77/ol77.box
$ vagrant init ol77
$ vagrant up
$ vagrant ssh
Before you do that vagrant up, you might want to edit your vagrant file, to add a name for your VM:
BOX_NAME="ol77"
VM_NAME="ol77"
# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
# The most common configuration options are documented and commented below.
# For a complete reference, please see the online documentation at
# https://docs.vagrantup.com.
# Every Vagrant development environment requires a box. You can search for
# boxes at https://vagrantcloud.com/search.
config.vm.box = BOX_NAME
...
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
config.vm.provider "virtualbox" do |vb|
vb.name = VM_NAME
# # Display the VirtualBox GUI when booting the machine
# vb.gui = true
#
# # Customize the amount of memory on the VM:
# vb.memory = "1024"
end
#
...
Otherwise your VM name in VirtualBox would be something like ol7_default_1235897983: something cryptic with a random number.
If you do a vagrant up now it will boot up nicely.
VirtualBox Guest Additions
The VirtualBox Guest Additions in the box are at version 6.12, while my VirtualBox installation already has 6.14. I found it handy to have a plugin that auto-updates them. My co-Oracle-ACE Maarten Smeets wrote about that earlier. It comes down to executing the following on a command line:
vagrant plugin install vagrant-vbguest
If you do a vagrant up now, it will update the Guest Additions. However, to be able to do so, it needs to install all kinds of kernel packages to compile the drivers. So be aware that this might take some time, and you'll need an internet connection.
Server with GUI
The downloaded box is a Linux server install, without a UI. This is probably fine for most of the installations you do. But I like to be able to log on to the desktop from time to time, and I want to be able to connect to it using MobaXterm and run a UI-based installer or application. A bit of X support is handy. How to do that, I found at this link.
GUI support is delivered through one of the package groups that are supported by Oracle Linux 7, and this works exactly the same as on RHEL7 (wonder why that is?).
To list the available package groups, you can do:
[vagrant@localhost ~]$ sudo yum group list
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Available Environment Groups:
Minimal Install
Infrastructure Server
File and Print Server
Cinnamon Desktop
MATE Desktop
Basic Web Server
Virtualization Host
Server with GUI
Available Groups:
Backup Client
Base
Cinnamon
Compatibility Libraries
Console internet tools
Development tools
E-mail server
Educational Software
Electronic Lab
Fedora Packager
Fonts
General Purpose Desktop
Graphical Administration Tools
Graphics Creation Tools
Hardware monitoring utilities
Haskell
Input Methods
Internet Applications
KDE Desktop
Legacy UNIX Compatibility
MATE
Milkymist
Network Infrastructure Server
Networking Tools
Office Suite and Productivity
Performance Tools
Scientific support
Security Tools
Smart card support
System Management
System administration tools
Technical Writing
TurboGears application framework
Web Server
Web Servlet Engine
Xfce
Done
(After having executed vagrant ssh.)
You'll find 'Server with GUI' as one of the options. This will install all the necessary packages to run Gnome. But if you want to have KDE, there's also a package group for that.
To install it you would run:
[vagrant@localhost ~]$ sudo yum groupinstall 'Server with GUI'
There is no installed groups file.
Maybe run: yum groups mark convert (see man yum)
Resolving Dependencies
--> Running transaction check
---> Package ModemManager.x86_64 0:1.6.10-3.el7_6 will be installed
--> Processing Dependency: ModemManager-glib(x86-64) = 1.6.10-3.el7_6 for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libmbim-utils for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libqmi-utils for package: ModemManager-1.6.10-3.el7_6.x86_64
--> Processing Dependency: libqmi-glib.so.5()(64bit) for package: ModemManager-1.6.10-3.el7_6.x86_64
....
....
python-firewall noarch 0.6.3-2.0.1.el7_7.2 ol7_latest 352 k
systemd x86_64 219-67.0.1.el7_7.2 ol7_latest 5.1 M
systemd-libs x86_64 219-67.0.1.el7_7.2 ol7_latest 411 k
systemd-sysv x86_64 219-67.0.1.el7_7.2 ol7_latest 88 k
Transaction Summary
========================================================================================================================
Install 303 Packages (+770 Dependent packages)
Upgrade ( 7 Dependent packages)
Total download size: 821 M
Is this ok [y/d/N]:
It will list a whole bunch of packages with dependencies that it will install. If you're up for it, at this point you confirm with 'y'. Notice that a bit over 1000 packages will be installed, so it will be busy with that for a while.
This is because it will install the complete Gnome Desktop environment.
You could also do:
[vagrant@localhost ~]$ sudo yum groupinstall 'X Window System' 'GNOME'
That will install only the minimal set of packages necessary to run Gnome. I did not try that yet.
When it has finished installing all the packages, the one thing left is to change the default runlevel, since obviously you want to start in the GUI by default. In most cases, at least.
This is done by:
[vagrant@localhost ~]$ sudo systemctl set-default graphical.target
I could have put that in a provision script, like I've done before. And maybe I will do that.
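A minimal sketch of what such a provision block in the Vagrantfile could look like (my own assumption, not an actual provision script from the box):
config.vm.provision "shell", inline: <<-SHELL
  # install the GUI package group and boot into it by default
  yum -y groupinstall 'Server with GUI'
  systemctl set-default graphical.target
SHELL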
Package the box
You will have noticed that it takes quite some time to update the kernel packages for installing the latest Guest Additions and the GUI desktop. To prevent us from doing that over and over again, I thought it wise to package the box into an ol77SwGUI box (Server with GUI). I described that in my previous article last year:
vagrant package --base ol77_default_1575298630482_71883 --output d:\Projects\vagrant\boxes\OL77SwGUIv1.0.box
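After packaging, you can add the resulting box under its own name and use it for new Vagrant projects; the box name below is just an example:
vagrant box add --name ol77SwGUI d:\Projects\vagrant\boxes\OL77SwGUIv1.0.box
vagrant init ol77SwGUI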
The result
This will deliver you a Vagrant Box/VirtualBox image with:
- Provider: VirtualBox
- 64 bit
- 2 vCPUs
- 2048 MB RAM
- Minimal package set installed
- 32 GiB root volume
- 4 GiB swap
- XFS root filesystem
- Extra 16GiB VirtualBox disk image attached, dynamically allocated
- Guest additions installed
- Yum configured for Oracle Linux yum server. _latest and _addons repos enabled as well as _optional_latest, _developer, _developer_EPEL where available.
- And as an extra addon: Server with GUI installed.
SOA Suite 12c Stumbling on parsing Ampersands
Yesterday I ran into a problem parsing XML in BPEL. A bit of context: I get messages from a JMS queue that I read 'Opaque', because I want to be able to dispatch the messages to different processes based on a generic WSDL, but with a different payload.
So after the Base64 decode, for which I have a service, I need to parse the content to XML. Now, I used to use the oraext:parseEscapedXML() function for that. This function is known to have bugs, but I traced those back to BPEL 10g, and I'm on 12.2.1.3 now.
Still I got exceptions like:
<bpelFault><faultType>0</faultType><subLanguageExecutionFault xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"><part name="summary"><summary>An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)</summary></part><part name="code"><code>XPath expression failed to execute</code></part><part name="detail"><detail>XPath expression failed to execute.
An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)
The XPath expression failed to execute; the reason was: oracle.fabric.common.xml.xpath.XPathFunctionException: Expected ';'.
Check the detailed root cause described in the exception message text and verify that the XPath query is correct.
</detail></part></subLanguageExecutionFault></bpelFault>
Or:
<bpelFault><faultType>0</faultType><subLanguageExecutionFault xmlns="http://docs.oasis-open.org/wsbpel/2.0/process/executable"><part name="summary"><summary>An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)</summary></part><part name="code"><code>XPath expression failed to execute</code></part><part name="detail"><detail>XPath expression failed to execute.
An error occurs while processing the XPath expression; the expression is oraext:parseEscapedXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)
The XPath expression failed to execute; the reason was: oracle.fabric.common.xml.xpath.XPathFunctionException: Expected name instead of .
Check the detailed root cause described in the exception message text and verify that the XPath query is correct.
</detail></part></subLanguageExecutionFault></bpelFault>
It turns out that it was due to ampersands (&) in the message. The function oraext:parseEscapedXML() is known to stumble on that.
A workaround is suggested in a forum on Integration Cloud Service (ICS): use oraext:get-content-as-string() first and feed the contents to oraext:parseEscapedXML(). It turns out that that helps, although I had to fiddle around with XPath expressions to get the correct child element, since I also got the parent element surrounding the part I actually wanted to parse.
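A sketch of what that combined expression could look like, using the variable from the fault messages above (the exact child element to select depends on your payload):
oraext:parseEscapedXML(oraext:get-content-as-string($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document))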
But then I found this blog, suggesting that the function was replaced by oraext:parseXML() in 12c (I found that it was actually introduced in 11g).
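So in 12c the assignment can simply use that function on the decoded content, something like (again, the exact XPath to the content depends on your message):
oraext:parseXML($Invoke_Base64EncodeDecodeService_decode_OutputVariable.part1/ns5:document)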
Strange that I didn't find this earlier. Digging deeper down memory-lane, I think I must have seen the function before. However, it shows I'm still learning all the time.
Oracle Ground Breakers Appreciation Day - Something about Weblogic....
Our most appreciated Oracle ACE Director Tim Hall organizes this yearly initiative, this year under the name Oracle Ground Breakers Appreciation Day, and appointed this day to blog about our favorite Oracle technology, service or sub-community.
Last week I presented the 'Oracle Kubernetes Managed Weblogic Revival': the introduction of the Weblogic Kubernetes Operator opens up the future for Weblogic.
This week I deliver our Weblogic 12c Tuning and Troubleshooting training for ATOS The Netherlands in Groningen. So, hmmm, what to blog about on this year's Ground Breakers Appreciation Day? There are several other technologies that I use and follow, mostly around Fusion Middleware: SOA Suite, BPM Suite and Oracle Service Bus. But also Oracle Integration Cloud, which in fact heavily depends on these technologies. And honestly, the bottom line here is Oracle Weblogic.
I frequently hear voices that state that Customers should move away from Weblogic. Honestly, I don't relate to that. It has served customers very well over the last decade under the Oracle brand and before. And I still think it was a smart move of Oracle to acquire it and make it a strategic part of the Oracle platform.
The last few years I've been active on the community.oracle.com forums, where I've grown to level 13, almost level 14, by answering questions and participating in discussions around Fusion Middleware technologies. My first thank-you therefore goes to this community, for having me participate.
My second thank-you goes to the whole Weblogic and related Fusion Middleware tool stack. During the Tuning and Troubleshooting training I again realize how smart and rich the Weblogic Suite is. Although, as I stated before, Oracle could do something about the footprint. It seems to me that there are quite some duplicate libraries, or different versions of the same library. And maybe some old parts could be cut out: for instance, only support SAML 2.0 and improve that.
One great, but quite rarely used, feature of Weblogic is the Weblogic Diagnostic Framework, and especially the Policies and Actions part. It is quite difficult to configure, the console's UI does not help here and there, and it is hard to think of uses for it. However, every time I present it, I find myself thinking: I should use this more often in my daily developments.
So I started to create a WLST script that creates a Diagnostic Module, a few collectors, a JMS Notification action and two policies on it. It is actually the solution to Lab 6 of our training. To me it is a starting point to expand on. You could create a version per technology: OSB, SOA Suite, or a custom application like MedRec. And you could create a more generic version that, based on different property files, configures different collectors, policies and actions specific to a target environment.
WLDF Diagnostic Module
The script first creates a diagnostic module like this:
def createDiagnosticModule(diagModuleName, targetServerName):
  module=getMBean('/WLDFSystemResources/'+diagModuleName)
  if module==None:
    print 'Create new Diagnostic Module '+diagModuleName
    edit()
    startEdit()
    cd('/')
    module = cmo.createWLDFSystemResource(diagModuleName)
    targetServer=getMServer(targetServerName)
    module.addTarget(targetServer)
    # Activate changes
    save()
    activate(block='true')
    print 'Diagnostic Module created successfully.'
  else:
    print 'Diagnostic Module '+diagModuleName+' already exists!'
  return module
It checks if the Diagnostic Module already exists as a WLDFSystemResource. If not, it will create it as module = cmo.createWLDFSystemResource(diagModuleName) and target it to a targetServer.
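A call to it could look like this (the module and server names below are just the ones used in the examples later in this article):
diagModuleName='TTDiagnostics'
createDiagnosticModule(diagModuleName,'TTServer')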
Collectors
Then for creating a collector I created the following function:
def createCollector(diagModuleName, metricType, namespace, harvestedInstances, attributesCsv):
  harvesterName='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/Harvester/'+diagModuleName
  harvestedTypesPath=harvesterName+'/HarvestedTypes/'
  print 'Check Collector '+harvestedTypesPath+metricType
  collector=getMBean(harvestedTypesPath+metricType)
  if collector==None:
    print 'Create new Collector for '+metricType+' in '+diagModuleName
    edit()
    startEdit()
    cd(harvestedTypesPath)
    collector=cmo.createHarvestedType(metricType)
    cd(harvestedTypesPath+metricType)
    attributeArray=jarray.array([String(x.strip()) for x in attributesCsv.split(',')], String)
    collector.setHarvestedAttributes(attributeArray)
    collector.setHarvestedInstances(harvestedInstances)
    collector.setNamespace(namespace)
    # Activate changes
    save()
    activate(block='true')
    print 'Collector created successfully.'
  else:
    print 'Collector '+metricType+' in '+diagModuleName+' already exists!'
  return collector
Again, it first checks for the existence of the collector, as a so-called HarvestedType, within a WLDFResource in the Diagnostic Module. If not found, it creates it. Here you need to provide the metricType as a HarvestedType, and then the attributes that you want to collect. The function expects those as a comma-separated-values string, which it converts to an array via a list.
Then you can provide Metric Type Instances or None if you want to collect it over all instances.
You can call this as:
createCollector(diagModuleName, 'weblogic.management.runtime.JDBCDataSourceRuntimeMBean','ServerRuntime', None, 'ActiveConnectionsCurrentCount,CurrCapacity,LeakedConnectionCount')
or if you want to add instances, it's also done by creating an array:
harvestedInstancesList=[]
harvestedInstancesList.append('com.bea:ApplicationRuntime=medrec,Name=TTServer_/medrec,ServerRuntime=TTServer,Type=WebAppComponentRuntime')
harvestedInstances=jarray.array([String(x.strip()) for x in harvestedInstancesList], String)
createCollector(diagModuleName, 'weblogic.management.runtime.WebAppComponentRuntimeMBean','ServerRuntime', harvestedInstances,'OpenSessionsCurrentCount')
This is a bit more complicated, since the strings describing the instances that you want to add are comma-separated values themselves.
Actions
Creating an action is again pretty simple, for a JMS Notification that is:
def createJmsNotificationAction(diagModuleName, actionName, destination, connectionFactory):
  policiesActionsPath='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/WatchNotification/'+diagModuleName
  jmsNotificationPath=policiesActionsPath+'/JMSNotifications/'
  print 'Check notification action '+jmsNotificationPath+actionName
  jmsNtfAction=getMBean(jmsNotificationPath+actionName)
  if jmsNtfAction==None:
    print 'Create new JMS NotificationAction '+actionName+' in '+diagModuleName
    edit()
    startEdit()
    cd(policiesActionsPath)
    jmsNtfAction=cmo.createJMSNotification(actionName)
    jmsNtfAction.setEnabled(true)
    jmsNtfAction.setTimeout(0)
    jmsNtfAction.setDestinationJNDIName(destination)
    jmsNtfAction.setConnectionFactoryJNDIName(connectionFactory)
    # Activate changes
    save()
    activate(block='true')
    print 'JMS NotificationAction created successfully.'
  else:
    print 'JMS NotificationAction '+actionName+' in '+diagModuleName+' already exists!'
  return jmsNtfAction
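A call to it could look like this (the action name matches the MBean reference used in the policy example below; the JNDI names of the destination and connection factory are just placeholders):
createJmsNotificationAction(diagModuleName,'JMSAction','jms/wldfNotificationQueue','jms/wldfNotificationCF')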
There are different types of actions, so they're created differently. You can add one using the console and record that. It's what I did and then transformed the recorded script to functions as shown here.
Policies
Policies can be created with the following function. You need to provide a rule type and a rule expression, plus an array of actions you want to add:
def createPolicy(diagModuleName, policyName, ruleType, ruleExpression, actions):
  policiesActionsPath='/WLDFSystemResources/'+diagModuleName+'/WLDFResource/'+diagModuleName+'/WatchNotification/'+diagModuleName
  policiesPath=policiesActionsPath+'/Watches/'
  print 'Check Policy '+policiesPath+policyName
  policy=getMBean(policiesPath+policyName)
  if policy==None:
    print 'Create new Policy '+policyName+' in '+diagModuleName
    edit()
    startEdit()
    cd(policiesActionsPath)
    policy=cmo.createWatch(policyName)
    policy.setEnabled(true)
    policy.setExpressionLanguage('EL')
    policy.setRuleType(ruleType)
    policy.setRuleExpression(ruleExpression)
    policy.setAlarmType('AutomaticReset')
    policy.setAlarmResetPeriod(300000)
    cd(policiesPath+policyName)
    set('Notifications', actions)
    schedule=getMBean(policiesPath+policyName+'/Schedule/'+policyName)
    schedule.setMinute('*')
    schedule.setSecond('*/15')
    # Activate changes
    save()
    activate(block='true')
    print 'Policy created successfully.'
  else:
    print 'Policy '+policyName+' in '+diagModuleName+' already exists!'
  return policy
An example of calling this is:
actionsList=[]
actionsList.append('com.bea:Name=JMSAction,Type=weblogic.diagnostics.descriptor.WLDFJMSNotificationBean,Parent=[TTDomain]/WLDFSystemResources[TTDiagnostics],Path=WLDFResource[TTDiagnostics]/WatchNotification[TTDiagnostics]/JMSNotifications[JMSAction]')
actions=jarray.array([ObjectName(action.strip()) for action in actionsList], ObjectName)
createPolicy(diagModuleName,'HiStuckThreads', 'Harvester', 'wls:ServerHighStuckThreads(\"30 seconds\",\"10 minutes\",5)', actions)
As you can see, the actions to add are actually expressions referring to the MBeans of the actions configured earlier. They apparently depend on the type and on the diagnostic module that contains them, so I could create a function that assembles this expression. If you want a custom rule expression, you can create it as follows:
actionsList=[]
ruleExpression='wls:ServerGenericMetricRule(\"com.bea:Name=MedRecGlobalDataSourceXA,ServerRuntime=TTServer,Type=JDBCDataSourceRuntime\",\"WaitingForConnectionHighCount\",\">\",0,\"30 seconds\",\"10 minutes\")'
createPolicy(diagModuleName,'OverloadedDS', 'Harvester', ruleExpression, actions)
Again this is an expression that could be assembled using a function.
Conclusion
The complete script can be reviewed and downloaded from my GitHub Repo.
I hit two birds with one stone: a thank-you to the Ground Breakers, fellow ACEs and other Oracle enthusiasts, and, I guess, my first article about the Weblogic Diagnostic Framework (but not my first one to include WLST scripts...). Happy OGB Appreciation Day, y'all!
SOASuite Composite Sensors: the why and how...
Introduction
A long time ago BPEL PM was acquired by Oracle, and as part of the first release of SOA Suite (10g) it was a more or less standalone component. For initiated BPEL flow instances there were 2 tables in the SOA infrastructure database:
- cube_instance: bpel flow instances
- ci_indexes: 6 indexes related to the bpel flow that can be set with an embedded java call
These 2 tables store the BPEL instances, along with a set of indexes that you could, and in 11g and 12c still can, set to a value that you determine during the flow. Yes, these tables still exist in the SOA infra database. So, let's say in your BPEL process you have several string-based variables that you fill with a value from the input message using an assign. Then within an Embedded Java activity, you can do something like:
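A minimal sketch of such an Embedded Java snippet (the variable names are just examples; setIndex() is the Embedded Java helper that fills these index columns):
// read previously assigned BPEL (string) variables...
String docId = (String)getVariableData("docId");
String docType = (String)getVariableData("docType");
// ...and store them as searchable indexes on the instance
setIndex(1, docId);
setIndex(2, docType);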
When you spin off a set of new instances, you can use the following query to find the particular instances:
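A rough sketch of such a query (my assumption of the relevant columns; cikey links the two tables and index_1..index_6 hold the index values):
select inst.flow_id, ndx.index_1, ndx.index_2
from cube_instance inst
join ci_indexes ndx on ndx.cikey = inst.cikey
where ndx.index_1 = '12345678';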
With the flow_id you can query the SCA flow instance (in 12c) and/or find the instance in EM.
Unfortunately, not even in 10g you can query on the indexes in EM directly. You need to query on them in the database and copy and paste the resulting flow-id in EM - FMW Control.
You might have done this in the past, or still do. You might have created a JSP that helps you with this. We did in 10g at least.
Define Composite Sensors
Select one of the Services or References and click on the blue plus icon:
In this dialog, set a Name, check/validate the Service and Operation, and click on the pencil icon to define an Expression:
A good source for creating the Composite sensors is the Embedded Java that sets the indexes of the BPEL, as described in the intro of this article. Create a sensor for every index, and base it on the Service Operation on which the variable is based from which the setting of the indexes are based.
I would highly recommend to create an Excel sheet to register which Sensors are defined on which Service/Operation and how they are filled. For instance, you could have several services that work with documents. And on all those composites you might have sensors that fetch the DocumentID. One of your developers would define a sensor called docId, another uses documentId, again another would define docNumber, etc., etc. An end user or administrator would need and know all those variants. Wouldn't it be much easier that you could just search on documentId over all those composites? Thus, introduce a method in your team that everyone uses the same sensor name for elements that mean the same.
Search on CompositeSensor valuesOn the Soa Infra dashboard in Enterprise Manager - Fusion MiddleWare Control (em) you can quickly search on a Sensor:
Fill in a Sensor Name and a search value and click on the Search Instances button.
These are free format fields, so it make sense to have a list of possible sensors that can be distributed along your admins or end users.
In the Search Instance panel of the flow instances tab, you have a more comprehensive search possibility:
This is not available when you click directly on the Flow Instances tab, without performing a search first.
In that case you need to click on Add/Remove Filters on the Flow Instances tab:
In this dialog, check the Flow Instance checkbox:
Having done that, you can add up to 6 sensor search conditions. Click on the magnifier glass to search on a sensor:
Here you can search on a composite on which you know there is sensor. Then you can select a sensor and an operator to search on. Unfortunately this is the only place to choose an operator, which means that you need to search for a sensor through a Composite Revision, before being able to choose an operator. Would be nice being able to just type in a sensor name (or copy and paste it from your excel sheet), select an operator and type a value to search over composites.
What is nice is that if you select a particular flow instance, you can view its composite sensor values:
This is especially handy, when in a busy environment where there are several instances of the same composite within a certain timeframe. Then you can quite easily click through the instances and identify if the particular one is the one you're interesting in. In stead of the need to open the flow trace, click to the bpel flow, select the receive activity and open the XML. In many cases this can be a very tedious job.
- cube_instance: bpel flow instances
- ci_indexes: 6 indexes related to the bpel flow that can be set with an embedded java call
These two tables store the BPEL instances, along with a set of indexes that you could, and in 11g and 12c still can, set with a value that you determine during the flow. Yes, these tables still exist in the soa infra database. So, let's say in your BPEL you have several string-based variables that you fill with a value from the input message using an Assign. Then within an Embedded Java activity, you can do something like:
//Get Variables
String messageType = (String) getVariableData("messageType");
String messageId = (String) getVariableData("messageId");
String processId = (String) getVariableData("processId");
String referenceNr = (String) getVariableData("referenceNr");
String branchId = (String) getVariableData("branchId");
String cmrNr = (String) getVariableData("cmrNr");
//Set Title and indexes
setFlowInstanceTitle("MyProcessFlow " + messageType + '-" + messageId);
setIndex(1,messageType);
setIndex(2,messageId);
setIndex(3,processId);
setIndex(4,referenceNr);
setIndex(5,branchId);
setIndex(6,cmrNr);
When you spin off a set of new instances, you can use the following query to find the particular instances:
select ci.flow_id, ci.cmpst_id cube_composite_id, ci.cikey cube_cikey, cix.index_1, cix.index_2, cix.index_3, cix.index_4, cix.index_5, cix.index_6
from cube_instance ci
join ci_indexes cix on ci.cikey = cix.cikey
where index_1 like '123456789';
With the flow_id you can query the SCA flow instance (in 12c) and/or find the instance in EM.
select * from sca_flow_instance fi where fi.flow_id=100173;
Unfortunately, you cannot query on the indexes in EM directly, not even in 10g. You need to query on them in the database and copy and paste the resulting flow-id into EM - FMW Control.
You might have done this in the past, or still do. You might have created a JSP that helps you with this. We did in 10g at least.
Define Composite Sensors
Since 11g, there is a much more convenient way to do this. And it's all declarative and usable from EM. It's called Composite Sensors. You can read more about it in the docs.
I haven't blogged about it earlier because, honestly, I haven't used them much until lately.
Composite Sensors can be set in the composite editor: select one of the Services or References and click on the blue plus icon.
This will get you to the following dialog:
In this dialog, set a Name, check/validate the Service and Operation, and click on the pencil icon to define an Expression:
- Variables: clicking this will provide you with a navigator that allows you to drill down into the variable structure of the service operation message type, to select the element whose value to sense.
- Expression: this will show you the expression builder you should be familiar with: it's the same as the one in the assign activity copy rules in bpel. It allows you to create more complex xpath statements like: substring($in.payload/doc:RegisterDocument/doc:Document/doc:BinairyObject/@fileName,0, 100)
- Properties: allows you to select endpoint properties, for instance JCA properties such as JMS Type or JMS CorrelationID. The same as the properties on a BPEL Invoke.
A good source for creating the Composite Sensors is the Embedded Java that sets the indexes in the BPEL, as described in the intro of this article. Create a sensor for every index, and base it on the Service Operation whose message provides the variables from which the indexes are set.
I would highly recommend creating an Excel sheet to register which sensors are defined on which Service/Operation and how they are filled. For instance, you could have several services that work with documents. And on all those composites you might have sensors that fetch the DocumentID. One of your developers would define a sensor called docId, another uses documentId, and yet another would define docNumber, etc. An end user or administrator would need to know all those variants. Wouldn't it be much easier if you could just search on documentId over all those composites? Thus, introduce a convention in your team so that everyone uses the same sensor name for elements that mean the same thing.
Search on Composite Sensor values
On the SOA Infra dashboard in Enterprise Manager - Fusion MiddleWare Control (EM) you can quickly search on a sensor:
Fill in a Sensor Name and a search value and click on the Search Instances button.
These are free-format fields, so it makes sense to have a list of possible sensors that can be distributed among your admins or end users.
In the Search Instance panel of the flow instances tab, you have a more comprehensive search possibility:
This is not available when you click directly on the Flow Instances tab, without performing a search first.
In that case you need to click on Add/Remove Filters on the Flow Instances tab:
In this dialog, check the Flow Instance checkbox:
Having done that, you can add up to 6 sensor search conditions. Click on the magnifier glass to search on a sensor:
Here you can search on a composite on which you know there is a sensor. Then you can select a sensor and an operator to search on. Unfortunately, this is the only place to choose an operator, which means that you need to search for a sensor through a Composite Revision before being able to choose an operator. It would be nice to be able to just type in a sensor name (or copy and paste it from your Excel sheet), select an operator and type a value to search over composites.
What is nice is that if you select a particular flow instance, you can view its composite sensor values:
This is especially handy in a busy environment where there are several instances of the same composite within a certain timeframe. Then you can quite easily click through the instances and identify whether a particular one is the one you're interested in, instead of having to open the flow trace, click through to the BPEL flow, select the receive activity and open the XML. In many cases that can be a very tedious job.
Weblogic under Kubernetes: the weblogic topology of the future
Already four months ago I attended PaaSForum 2019 in Mallorca. As every year, it was great to meet members of the big EMEA Oracle Partner family.
And of course a lot of interesting talks and workshops. This year I was especially interested in announcements around SOA Suite and Project Helidon as a Microservice framework. But certainly also Weblogic under Kubernetes.
And actually, to me, the WebLogic Kubernetes Operator was this year's most enthusing subject.
With his WebLogic on Kubernetes talk, Maciej Gruszka, Director Product Management, outlined the future Oracle envisions for WebLogic. He started by stating that 'Weblogic is not dead!'. Well, he got me with that already!
The road ahead is making WebLogic fit to run in Docker and be managed by Kubernetes. It might not be exactly what I had in mind, but it is certainly great news to learn that WebLogic will be around and alive for the future ahead. Oracle strives to make future releases of WebLogic available as Docker images.
Today already, WebLogic is fully supported to run in a Docker container. And according to Maciej, the team is busy with the SOA and OSB teams to get those products fit and available for Docker too. It might even be possible that future releases are going to be delivered as Docker images.
What is the Weblogic Operator?
To run in a Kubernetes-managed cluster, Kubernetes needs to be able to perform lifecycle operations on a Weblogic Managed Server. For that, the Weblogic Operator for Kubernetes was created and introduced. A Kubernetes Operator is a sort of adapter on top of a non-Kubernetes system that translates Kubernetes lifecycle commands to operations within the specific application.
The Weblogic Operator uses the Kubernetes API to implement operations like:
- Provisioning
- Lifecycle management
- Updates
- Scaling
- Security
Some interesting links
- Docker images
- Docker files
- Examples
- Weblogic Kubernetes integration how-to's, loads of (a bit older) interesting documents to read through
- Weblogic Operator
- Monitoring Exporter (Prometheus)
- Deploy Tooling for migration: this helps you create WLST scripts to create a domain based on a more declarative format.
- Logging Exporter (Elastic Stack)
The Operator supports two domain topologies:
- Domain within the Docker Image
- Domain on a Persistent Volume
With the Persistent Volume topology the domain is stored outside the container. Changes are persisted. This topology is more in line with an on-premises installation of Weblogic. However, High Availability and Disaster Recovery are limited, because the Persistent Volume needs to be shared and the domain configuration needs to be synced across datacenters. With 'In Image' domains, things get simpler, because the domain is transported within the container. The downside is that changes in the domain require creating a new image through the CI/CD pipeline.
Most customers seem to choose for the 'Domain in Image' topology. In practice, domains don't change that much.
You can adapt specific artifacts like data source connections, urls and username/passwords using Configuration Overrides.
Workshop
At the PaaSForum we got the chance to play around with Kubernetes and Weblogic. The workshop is described here: https://github.com/nagypeter/weblogic-operator-tutorial. You should fork this to a repository under your own GitHub account, because it contains the files and scripts to create an image; the tutorial walks you through configuring Oracle Container Pipelines (Wercker) and for that it needs a GitHub repo.
There is a Domain In Image variant and a persistent volume variant of the tutorial.
Steps to follow for the Domain In Image variant
- Setup an Oracle Kubernetes Engine instance on Oracle Cloud Infrastructure. You'll need a trial account on cloud.oracle.com. It will then guide you through the setup of a Kubernetes cluster on OCI.
- Build the WebLogic container image using Oracle Container Pipelines (Wercker). The second time I did the workshop I decided to change all the labels, namespaces and the domain name. Everywhere there is a reference to 'sample', I entered 'makker'. In this step the image is created from your fork of the GitHub repo. If you change the name of the domain, there are two files to edit:
- The Dockerfile.create is called at the initial creation of the image. If there is a base image, the Dockerfile.update is called, to update the image. The Dockerfile.create creates an image with a complete domain, including the application. But the Dockerfile.update only updates the application. So you need to update the Dockerfile.create to change the domain name in the DOMAIN_NAME environment variable in the top of the file.
- The Dockerfile.create copies the scripts folder into the image. That folder contains a wlst script, called model.py. At the top, a variable domain_name is declared with the same domain name assigned to it.
- Install WebLogic Operator: installs the Weblogic Operator.
- Install and configure Traefik: this installs a Traefik loadbalancer on your environment. It will loadbalance over your Weblogic managed servers.
- Deploy WebLogic domain: this step lets you prepare your Kubernetes cluster to run the Weblogic domain. Reuse the same domain name as explained in step 2.
- Scaling WebLogic cluster: This one I found particularly cool. In this step you update the domain resource yaml file, to update the number of managed servers in the domain. After that, automagically a new Kubernetes pod is spawned that starts a new Managed Server. By the way, the domain will have a dynamic cluster with predefined Managed Servers based on Server Templates.
- Override domain configuration: this will show you how to perform domain configuration overrides to update the datasource.
- Update the application: The whole point of this exercise is to show you how to set up a CI/CD chain so that when you update your application, the image is updated and the domain can be restarted through Kubernetes with the new image.
- Assign the Weblogic Pods to specific nodes or licensed nodes. The latter is important because Weblogic is licensed, so you can't just run it on any number of nodes.
Conclusion
I refrained from discussing why you would want to run Weblogic under Docker. I have thoughts about it and have had discussions about it. However, it made me enthusiastic that this way Weblogic can be taken with us into the containerized future.
For me the next things to explore are:
- Create a database on another OCI image, and create a new domain with a sample application that actually uses that database. It would be fun to create an actual application on it.
- Try the same with a persistent volume. A few months ago I was busy creating Java classes to start Kafka. The goal was to create Weblogic startup classes to have Kafka started at startup of a Weblogic server. Now, it may not seem logical to you, but wouldn't it be great to combine the two and have Kafka embedded in a Weblogic cluster on a Kubernetes cluster? Well, at least it seems fun to me. Since Kafka needs to log its messages in a persistent log, we need to do this with a Persistent Volume.
- Check out other topologies and related technologies. Like accessing the logs. I really would like to be able to inspect the Weblogic log files within the container.
Debugging code - Identifying the bug
Julia Evans is a very smart woman in IT who creates very nice, funny and insightful comics, which she calls 'zines', on Linux and coding topics.
This morning I read that she came up with a question that triggered me:
Last week I realized I've already been in IT for 25 years, ever since my forced membership of a famous Dutch shooting club ended after 9 months (the kind of shooting club where you got free clothing, survival courses, and in my case also a truck-driving license. Which other club offers that?).
Anyway, over the years I discovered that despite all the smart people I got to know and work with, this part of our work isn't obvious. In very many cases people seem to 'just do something'. No offense, but for developers it's often frustrating and just not fun to work on bugs or problems. And administrators that are confronted with a problem are often 'too busy with other stuff'. So they try something, don't find the thing, and at a later moment try something else. So, when I get involved I ask the obvious questions and in most cases I try out the same thing myself. Even though I do believe them, I want, I need, to see the behavior with my own eyes.
By the way, I'm always reluctant to call it a bug. A bug is only a bug when you have reproduced it and, based on common interpretation, together with the tester (if he found the issue) and the functional/solution designer, come to a consensus that the code does not do what it is supposed to. The functional specs are interpreted by both the tester and the developer, and in a certain way also by the designer. It might be that the tester finds an anomaly, but that it is either a misinterpretation on his part or a problem with the formulation of the specs. There are cases where the coder is right. But, of course, your program can still work with an unexpected logic.
But, back to Julia's tweets: they triggered me, so I jotted down some thoughts that form the basis of how I search for issues.
To me it starts with identifying a case where it goes wrong, but, equally importantly, also a similar situation where it goes right. And, as far as possible, creating a unit test for both. Since my work is mostly done on message processing platforms (Oracle SOA Suite, BPM Suite, Service Bus), I love it when a tester can hand me a triggering message for the case along with the corresponding response messages. I can then add them to my unit-test set in SoapUI/ReadyAPI.
Then I add instrumentation (log lines, etc.) at key positions, to identify to which points the code is executed and which lines aren't reached. SOA Suite produces a flow trace of the execution. But often expressions are used that are quite complex one-liners. I then split those up into several separate assignments to 'in between' variables. In Java, JavaScript, etc., I do not like complex one-liners. I prefer several variables for 'in between' values, and assignments with short expressions. That helps with line-by-line debugging.
Next, I iteratively narrow the gap between the last point that I can conclude the code reaches and the first point that it does not reach, until the statement or point of execution that fails can be identified.
In the log lines I add key variable values that are involved.
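As a contrived sketch of what I mean, here in Python, but the idea is the same in BPEL assigns, XSLT or Java (the order structure and the values are made up for the example):
# One complex one-liner is hard to instrument:
# discount = round((order['total'] - sum(line['refund'] for line in order['lines'])) * rate / 100, 2)
# Split up into 'in between' variables, with log lines on the key positions:
order = {'total': 125.0, 'lines': [{'refund': 10.0}, {'refund': 2.5}]}
rate = 21
refundTotal = sum(line['refund'] for line in order['lines'])
print('refundTotal: ' + str(refundTotal))
netTotal = order['total'] - refundTotal
print('netTotal: ' + str(netTotal))
discount = round(netTotal * rate / 100, 2)
print('discount: ' + str(discount))
Now every intermediate value shows up in the log, so the statement that produces the unexpected value is identified immediately.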
In very rare, very difficult cases, I sometimes break down the code, cutting away all the code that is not touched, until I get a minimal working Mickey Mouse (or in Dutch: Jip en Janneke) case. From there I build it up again, and test iteratively in very small steps, until it breaks.
Also very important, for difficult problems: I document very meticulously what I have done and concluded. My slogan here is: 'Deduction, my dear Watson!' When having a problem, one can quickly come up with some potential causes and tests to check those. A unit test for a potential cause can go two ways: it can confirm or disprove the suspicion. Both outcomes have consequences for the follow-up. Disproving a potential cause can strike through other potential paths as well.
But confirming it needs additional steps to narrow it down further. I see it as a decision tree to follow.
What I have found through the years is that structurally documenting the steps done, with the particular conclusions and follow-ups, is not quite obvious. But in many cases I found it important. Especially when working in a taskforce, or when I got hired to get involved in a case. In those cases the customer that hired you has the right to have something in hand that represents what he paid for.
I was once involved in a case that turned out to be a database bug. So I could not help the customer solve it. But they were very pleased with the structured method I used to check out what could be the problem. And for those administrators and developers that have to do this as a side job, besides their regular work: please do yourself a favor and document. I found Google Docs very useful for this.
Oh, and by the way: I work with BPEL, BPMN, Oracle Service Bus, Java, PL/SQL, XSLT, XQuery, Python/Jython/WLST, sometimes JavaScript, you name it. And actually, my way of structured code or systems analysis comes down to the same procedure, regardless of technology.
Weblogic 12.2.1.3 Signs SAML2 requests and responses with SHA-256
Today I reviewed a few responses on a 'What's new in Weblogic 12.2.1.3' question.
One of the responses mentioned the whats-new document.
Now, I'm not used to studying these documents. But today I browsed through them and one thing caught my eye.
I did some implementations of Weblogic as a SAML2 Service Provider against MS ADFS. I've even been invited to do a talk, 'SSO with ADFS for Apex Using Weblogic and ORDS: How I did it and Where I Tripped', at the UKOUG Southern Technology Summit 2019, July 2nd.
What's interesting here is that about 2 years ago I already wrote about my earlier experiences, and mentioned that Weblogic 12c did not support SHA-256 for the signing of SAML requests. So you had to configure ADFS to use SHA-1. In my latest implementation it struck me that I did not have to ask my ADFS counterpart to set that, at least I think I didn't. ADFS, as you might expect, has used SHA2 (SHA-256) as a default for quite some time now. But only today I saw that under Manageability Improvements -> Security it is mentioned that Weblogic 12.2.1.3 also has SHA2 as a default now.
Knowing this will improve my talk greatly. I'm glad I saw this. It might seem to be a minor thing, but I think it's quite important.
I use Weblogic mostly as an FMW Infrastructure for SOA Suite, OSB, etc. And occasionally I do assignments with specifics like SAML2. If you're interested in what changed in a specific Weblogic version, I think it's important to know what you're looking for. Know the functionality that you're actively using or interested in.
Oracle Java Support: why should I pay for something that used to be free?
A few weeks ago, I discussed the new licensing model of Oracle Java with a colleague.
Customers may have concerns about this, since until now they were used to being entitled to download Java updates for free. At least I was.
During the discussion I posed a way of thinking that made sense to me, and that seems to be supported by document references.
For some time now you can download OpenJDK, which is an open source reference implementation based on the Oracle JDK, as I understand it. It states that it is production ready, although this story may be a bit more nuanced than I state here. In the past it was considered to be inferior to the Oracle JDK, whilst the Oracle JDK was also free. With the new release cycles, introduced with Java 9, Oracle committed to making OpenJDK as indistinguishable from the Oracle JDK as possible. So functional and security features are up to the level of the Oracle JDK.
In short, if you don't want to pay for support you can go and use Open JDK. Or stay on your current version.
But, since Oracle is a sales-based organization, I'm not surprised that they want to be paid for delivering (Long Term) support on Java. Especially when more and more software from other vendors is based on Java, and when the competing Cloud platforms rely on it.
If you want to have support for Java, you should have a Support contract.
I mentioned it already above, but what also changed with Java 9 is the release cycle. Until Java 8, Oracle supported the JDK for a very, very long time. The globally, publicly available major Java versions were released at a pace of a few years. Java 7 was around for about 4 years before Java 8 was released. Java 8 had been around for 5 before Java 9 saw the light.
To get more in pace with the developments in the market, Oracle decided to start with half-yearly release cycles, starting with Java 9 in 2017. And now, every 6 months a new Java version is released with new features. Features that do not make the cut are delayed to the next release, when they are ready. But the major Java version gets released. With that, the support of a version also changed: the support on a release only lasts for the life span of that release, which is 6 months. To keep up with security and features, you need to move on to the next major version to stay supported. Currently we're at Java 12, from March '19.
If you can't keep up with that, Oracle provides a Long Term Support version, which is supported for a time frame comparable to those of Java 6, 7 and 8. One of those half-yearly releases is denoted LTS, and currently it is Java 11. It's most comparable with, for instance, RedHat Linux, which provides Fedora as an open, publicly available version (like OpenJDK) and RedHat Enterprise Linux as the LTS version.
Now, what if you have Licenses for Oracle products that rely on Java? Fusion Middleware, for instance, is only supported on Oracle Java, currently Oracle Java 8. You may have licenses for Weblogic, Coherence, Forms & Reports, etc. In those cases you have a restricted license of Oracle Java. Much like when you have an E-Business Suite, Siebel or any other Enterprise Information System of Oracle that uses the database. Then you can use the database, when you use it to support that setup. You cannot run custom code in it, do reporting on it or use the database in any other way.
The same goes for Java. If you run Weblogic, or have an application that uses Coherence, etc., you're entitled to download the updates for Java. See for instance this document about the Restricted Oracle Java SE License in combination with Weblogic, or Support Entitlement for Java SE When Used As Part of Another Oracle Product. Also interesting: you can file support requests against that Oracle product, but not directly against Java SE, unless you have Java Support.
And, products like SQLDeveloper, sqlcl and ORDS are supported through the Database license, which also uses Java. So, having a Database license, you have support on SQLDeveloper and the Oracle Java, used by SQLDeveloper.
Notice that if you have a Weblogic license, but also have a custom Java application not running in a Weblogic instance, you're not allowed to use the same JDK updates for it! If the application uses HTTP to communicate with a Weblogic Server, for instance to call a REST or SOAP service, you're not allowed to download updates for that Java Home.
Also, if you have a custom Java application that uses JDBC drivers to connect to a licensed Oracle database, then you're not allowed to download the Java updates. Oracle states that the JDBC drivers do not use an Oracle product-specific protocol.
I encountered a little while ago that JavaDB is not delivered with Oracle JDK anymore. I suppose that this is related to the changed licensing of Java.
I hope that this little article makes sense to you and helps you understand the licensing model.
To sum up, these are the options you have:
- Stay on the version you currently use, without changes. If you can live or cope with being behind on security updates, this can be an acceptable choice.
- Keep up with the 6-monthly major version update pace; for this you can use OpenJDK. You keep up to date with the major versions and stay secure.
- Stay on an LTS release and move to another LTS at your own pace (but only for the Oracle JDK).
I did my best to blend my thoughts with the review tips and the notes of support. I put down what I think and learned in my own words, but I might have rephrased things a bit incorrectly. Check out these more formal articles and statements:
Test Remote Asynchronous Request Response services
A few years ago, I described how you can test Asynchronous Request Response services.
The thing with Asynchronous Request Response services is, as I used to describe it, that they're in essence two complementary Request-Only (Fire and Forget) services. That is, the client submits a request to the Asynchronous Request Response service, and at a certain point waits for the response by listening on an endpoint.
To make this work, the responding Asynchronous Request Response service should be told which endpoint it should call with the response and which correlation id should be used. The WS-Addressing standard is used for that. All nicely explained in the before-mentioned article.
In most customer cases the problem is that your client SoapUI or ReadyAPI project should catch the response, but the service is running on a SOA Suite in the datacenter and is not allowed to reach your local machine.
MobaXterm makes it very easy to create a tunnel. You can have a local tunnel that opens a local listening endpoint and forwards every request to a remote service. Very handy if you have a Vagrant project with only a NAT network adapter, where Vagrant enabled an ssh endpoint on port 2222. You can easily create a local tunnel on port 7101, for instance, to the remote ssh session on port 2222, which enables you to get to the Weblogic console on the remote VM running on http://darlin-vce:7101/console.
To create a tunnel, just open the MobaSSHTunnel - Graphical port forwarding tool:
This will open:
You can create a new SSH tunnel or edit a current one using the cogs icon under settings. For instance, to be able to do the Local port forwarding to get to your Weblogic console on your Vagrant box, create a tunnel as follows:
On the left you can enter a local port. That is the port you can use on your localhost. On the top right you can enter a host and port for the address to post your request to (it does not need to be localhost). Then bottom right you need to provide an ssh session. A bit inconvenient is that you can't select a session from the sessions pane. Provide a host, port and user to connect to your ssh server.
What happens is that MobaXterm creates an SSH session and a local endpoint. Everything posted to the local endpoint is posted on the remote server to the given address. In this case I can go to my browser, enter http://localhost:7101/console and it will bring me to the Weblogic Console on my Vagrant box. Neat, isn't it?
To get the remote Async Service to respond to your local machine, we need a tunnel that works the other way around: Remote Port Forwarding:
Configuring it is similar to Local port forwarding; however, now a listen endpoint is created on the remote server, and everything that is posted to the localhost:7777 address (in this example) on that server is forwarded to the address entered for the local side. In this case it is forwarded to localhost:7777, but it could be something else.
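As a side note, the same remote port forwarding can also be set up with a plain ssh command; a minimal sketch, where oracle@soaserver is just an example host for the server running the SOA Suite:
ssh -R 7777:localhost:7777 oracle@soaserver
MobaXterm just gives you this in a graphical way, including the auto-start and auto-reconnect options mentioned further on.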
In our ReadyAPI project I created a Groovy script as follows:
def testCase = testRunner.testCase
def env = testCase.testSuite.project.activeEnvironment.name
if (env != "o02-12c" && env != "o02" ) {
log.info "Environment: "+env+", so set callbackIp to "+InetAddress.localHost.hostAddress
testRunner.testCase.setPropertyValue( "callbackIp", InetAddress.localHost.hostAddress)
} else {
log.info "Environment: "+env+", so set callbackIp to localhost"
testRunner.testCase.setPropertyValue( "callbackIp", "localhost")
}
In ReadyAPI you can define environments; with the project property activeEnvironment.name the active one can be queried.
If the environment points to one of our development environments, I set the callbackIp testcase property to "localhost". But for the default environment, I use InetAddress.localHost.hostAddress to get the local IP address. This will be the IP address of our CI/CD tool, which runs ReadyAPI from a script.
You can set the WS-Addressing ReplyTo address as follows, for instance:
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:add="http://schemas.xmlsoap.org/ws/2003/03/addressing">
<soapenv:Header>
<add:ReplyTo>
<add:Address>http://${#TestCase#callbackIp}:7777/MyMockResponseURI</add:Address>
</add:ReplyTo>
</soapenv:Header>
<soapenv:Body>
<!-- the actual request payload goes here -->
</soapenv:Body>
</soapenv:Envelope>
Then this address is used to do the callback. Make sure the tunnel is started:
You can also have the tunnel auto started (with the blue man-running-icon) or auto-reconnected (with the purple lightning icon).
This may also be very relevant in testing services on Oracle SOA Cloud Service, or Integration Cloud.
Happy tunneling!
Split your Vagrant provisioners
For a while now, I've been quite into Vagrant in combination with VirtualBox. A few years ago I started with trying to script FMW environments, and since my discovery, and resulting fancy, of Vagrant, I also created a project for creating and provisioning a SOA Suite box.
Until now, my projects all had one shell-type provisioner, looking like:
config.vm.provision "shell", inline: <<-SHELL
export SCRIPT_HOME=/vagrant/scripts
. $SCRIPT_HOME/install_env.sh
echo _______________________________________________________________________________
echo 0. Prepare Oracle Linux
$SCRIPT_HOME/0.PrepOEL.sh
echo _______________________________________________________________________________
echo 1. Create Filesystem
$SCRIPT_HOME/1.FileSystem.sh
echo _______________________________________________________________________________
echo 2. Create Oracle User
$SCRIPT_HOME/2.MakeOracleUser.sh
#
echo _______________________________________________________________________________
echo 3. Java SDK 8
sudo runuser -l oracle -c '/vagrant/scripts/fmw/installJava.sh'
SHELL
This seems quite simple, but for my SOA Suite box I had quite a lengthy provisioner that somehow failed at running the RCU and therefore at the creation of the domain. There is a synchronization thingy with the database: the database is up, but at the time it reaches the RCU creation, it isn't able to connect. When running it separately it works like a charm.
So, I don't know how to solve that yet, but I want to be able to re-provision only the RCU and domain creation part. Last night I fiddled around with it. Following the Vagrant Up basic usage explanation, you can create multiple provisioners with different names and different types. You can then force the provisioning for certain provisioners by type or by name.
I played around with that, because I couldn't get the syntax right. Although the explanation is proper, I wanted to have it slightly different and did not get it right at first. Finally, I got it working.
Let's look into it.
First I split up my shell script, and found that I can put those parts in variables. I now have an init script that adapts the Linux OS, creates a new filesystem and creates an oracle user:
$initScript = <<-SCRIPT
export SCRIPT_HOME=/vagrant/scripts
echo _______________________________________________________________________________
echo 0. Prepare Oracle Linux
$SCRIPT_HOME/0.PrepOEL.sh
echo _______________________________________________________________________________
echo 1. Create Filesystem
$SCRIPT_HOME/1.FileSystem.sh
echo _______________________________________________________________________________
echo 2. Create Oracle User
$SCRIPT_HOME/2.MakeOracleUser.sh
SCRIPT
And one for installing the FMW software:
$installFMWScript = <<-SCRIPT
echo _______________________________________________________________________________
echo 3. Java SDK 8
sudo runuser -l oracle -c '/vagrant/scripts/fmw/installJava.sh'
echo _______________________________________________________________________________
echo 4. Database 12c
sudo runuser -l oracle -c '/vagrant/scripts/database/installDB.sh'
echo _______________________________________________________________________________
echo 5.1 SQLCL and SQLDeveloper
sudo runuser -l oracle -c '/vagrant/scripts/database/installSqlcl.sh'
echo _______________________________________________________________________________
echo 5.2 SQLDeveloper
sudo runuser -l oracle -c '/vagrant/scripts/database/installSqlDeveloper.sh'
echo _______________________________________________________________________________
echo 6. Fusion Middleware
sudo runuser -l oracle -c '/vagrant/scripts/fmw/installFMW.sh'
echo _______________________________________________________________________________
echo 6.1 Fusion Middleware - SOA
sudo runuser -l oracle -c '/vagrant/scripts/fmw/installSOA.sh'
echo _______________________________________________________________________________
echo 6.2 Fusion Middleware - SB
sudo runuser -l oracle -c '/vagrant/scripts/fmw/installSB.sh'
echo _______________________________________________________________________________
echo 6.3 Fusion Middleware - OHS
sudo runuser -l oracle -c '/vagrant/scripts/fmw/installOHS.sh'
echo _______________________________________________________________________________
echo 7. BPM Quickstart
sudo runuser -l oracle -c '/vagrant/scripts/fmw/installBpmQS.sh'
SCRIPT
And one for configuring FMW, that is running the RCU and creating the domain:
$configFMWScript = <<-SCRIPT
echo _______________________________________________________________________________
echo 8.1 Fusion Middleware - RCU SOA
sudo runuser -l oracle -c '/home/oracle/bin/startDB.sh'
sudo runuser -l oracle -c '/vagrant/scripts/fmw/rcuSOA.sh'
echo _______________________________________________________________________________
echo 8.2 Fusion Middleware - Create Domain
sudo runuser -l oracle -c '/vagrant/scripts/fmw/fmw1221_domain/1.recreateFMWDomain.sh'
echo !!! TODO: Machine configuration update to use Plain - 5555
echo !!! TODO: Modify domain creation and property naming to create machine in accordance to nodemanager config.
echo _______________________________________________________________________________
echo 8.3 Fusion Middleware - Modify Nodemanager
sudo runuser -l oracle -c '/vagrant/scripts/fmw/fmw1221_domain/2.modifyNodeManager.sh'
echo _______________________________________________________________________________
echo 8.4 Fusion Middleware - Create Nodemanager service
sudo runuser -l oracle -c '/vagrant/scripts/fmw/fmw1221_domain/3.createNodemanagerService.sh'
#
SCRIPT
Cool, so far, right?
Now, after that we need to define the 3 provisioners:
config.vm.provision "init", type: "shell", inline: $initScript
config.vm.provision "installFMW", type: "shell", inline: $installFMWScript
config.vm.provision "configFMW", type: "shell", inline: $configFMWScript
These provisioners do the following:
- init -> provisioning/config of Oracle Linux, creation of oracle user, etc. This will be about equal for every box.
- installFMW -> installation of all FMW software.
- configFMW -> run the RCU and create the domain.
You can then run a specific provisioner explicitly, for instance:
- vagrant up --provision-with configFMW
- vagrant reload --provision-with configFMW
- vagrant provision --provision-with configFMW
config.vm.provision "init", run: "once", type: "shell", inline: $initScript
config.vm.provision "installFMW", type: "shell", run: "once", inline: $installFMWScript
config.vm.provision "configFMW", type: "shell", run: "once", inline: $configFMWScript
The run option has the following possible values:
- "once": this is actually the default, the provisioner is only executed at first up. Or if you force it to run as described above.
- "always": the provisioner is executed at every up. This can be used for something you want to be done every time you do up. A good one would be to start the database.
- "never": this one is interesting. This makes the provisioner optional. That means it won't be executed, unless you ask for it. A good one would be to drop the RCU and delete the domain. So that you can reprovision the repository and the domain.
My Seamless Linux Desktop using VirtualBox, Vagrant, MobaXterm and TotalCommander
Years ago I played around with VMWare Unity Mode in VMWare Player or VirtualBox's Seamless mode. In those modes you start an application on your virtual machine, but its windows appear as if running on your host system. Back in the day I ran OpenSuse on my laptop, and had Windows XP or 7 on a virtual machine for those cases where I had to run PowerPoint.
I wasn't too enthusiastic about those modes. Of the two I found that VMWare implemented it the most transparently. But it was quite hard to work with multiple screens, and to start the applications using the embedded menu. Now, I haven't used this in a long time, so it might have been improved. But lately I work with my VMs (mostly Oracle Linux) using Vagrant most of the time. And I use MobaXterm to connect to them and, lately, to start my Oracle tools from MobaXterm.
With the XWindows server in MobaXterm, working with either SQLDeveloper or JDeveloper is very convenient. And it will even allow me to clean up my local installations of SQLDeveloper and JDeveloper.
So, I have been looking at automating some of the startup and working on the most convenient setup for me. And I found it (for now) in the combination of:
- VirtualBox
- Vagrant
- MobaXTerm
- And my all-time favorite cockpit: TotalCommander
Let's go through them.
VirtualBox
It must have been around 2003 that I was introduced to VMWare Workstation by my former colleague Robert. It was an eyeopener to see that you could run multiple PCs on your laptop, separating different versions of Oracle products that could even 'talk' to each other!
Since everything is saved in a folder of files, it was very handy that I could do one install and share it with colleagues.
Since Oracle acquired VirtualBox, and because of several direction changes at VMWare (introduction and revocation of VMWare Server; a VMWare Player that couldn't create VMs, then could, and then couldn't anymore...), after a while it made sense to me to switch to VirtualBox fully. For a while I had both on my laptop.
So, it's VirtualBox now, and we've come to version 6.0.4 at the time of writing, downloadable from the VirtualBox Download Page. Choose the download for your platform. Installing follows the familiar NNF pattern (Next-Next-Finish). And don't forget to download and install the platform-independent Extension Pack. I guess this is a piece of cake for you followers.
Vagrant
This is quite new for me, since a bit over a year now. I transferred my main installations into a Vagrant project and it's still work in progress.
I already wrote a bit about my Vagrant solutions. The last few years I worked on scripting my installations of Oracle products. Vagrant allows me to automate the creation and provisioning of my VMs, relieving me from the need to keep multiple VMs up to date. Sharing can be both simpler and harder. Simpler, because I can share my Vagrant project and scripts. But also harder, because the provision scripts should be placed in the proper order (I should implement it in a VCS repo), and the install binaries should be placed with the proper name in the proper place.
I now have a software stage repository on my second laptop disk, and separate Vagrant projects. But they all have copies of provision scripts for several products. So database, Java, SOA/BPM QuickStart and Weblogic are duplicated per Vagrant project. I want to split it up into a common provision folder and a choose-and-select approach in my Vagrant projects, so that I have a simple Vagrant provisioning in which I can refer to the provisioning of particular products.
Anyway, starting up and eventually provisioning a box is simple: issuing the vagrant up command in the folder with the Vagrantfile is all you have to do. Suspending a box is done using vagrant suspend.
My respected con-colleague Maarten Smeets wrote quite a bit about Vagrant and lately about a few good tips.
Vagrant has reached version 2.2.4 recently and can be downloaded here. It follows the famous NNF pattern. But although it allows you to choose the install directory, it is very devoted to being installed in c:\HashiCorp\Vagrant\. I quit trying to force it elsewhere.
MobaXterm
Many of my equally respected coworkers stick with their all-time favorite Putty. Putty stays ubiquitous. It is simple, but I have always found it a bit archaic. I thought I once noticed that support for it was terminated; however, they just released the stunning version number 0.71. I don't mean to be sarcastic, and Putty has its own right of existence. But give MobaXterm a try. It's loaded with nice features, including an SCP client that can follow your SSH session. And, as said, also an XWindows server. So, connect with ssh to your Linux server and run jmc, VisualVM, Oracle Universal Installer, the Weblogic Configurator, JDeveloper, SQLDeveloper, etc., and the UI will pop up on your desktop. MobaXterm also includes Cygwin, so you're able to run a terminal session on Windows. It even allows you to do ps -ef to show your running Windows apps!
I also discovered a really easy way to implement a tunnel with MobaXterm!
You can download it here, in a portable and an installer version. I chose the portable one. The free edition includes games (why?) and a limited number of sessions, macros and tunnels. But the professional edition only costs a few bucks/euros.
TotalCommander
Ever since the introduction of Windows '95, I have disliked the Windows Explorer. Luckily I soon discovered Windows Commander, which, under pressure from Microsoft, was renamed to Total Commander. And it's even a better name, because it's about the second tool I install on a new Windows desktop. Just after Firefox, to be able to close IE/Edge...
It's my cockpit; it allows me to navigate to hot folders quickly, introspect files, navigate through archives, edit or unpack them, multi-rename files, compare files, etc., etc. I just don't make coffee with it. One of the nice features is the button bar, where you can launch applications. And this is the thing I use for this blog.
Tie it all together
I realize that I overloaded you with sales talk about my favorite tools.
After installing all the tools and having your Vagrant project in place all can be tied together.
I have a project that provisions a VM with an Oracle Database, SQLDeveloper, SOASuite, and BPM QuickStart.
My TotalCommander Toolbar looks like:
You can right-click anywhere in the toolbar to edit it, create new buttons. A 'button' that is left empty (no command) is presented as a separation bar.
Startup & suspend the VM
Let's take a look at the Vagrant SOA Start button:
It's simple: the command is vagrant up, and important here is that it should be executed in the folder where the Vagrantfile resides. I provide a tooltip, and I created an icon file from the Vagrant logo using my favorite image app IrfanView.
Clicking the button will fire up the VM and potentially provision it. I copied the button to create a button to suspend the VM. The command there is vagrant suspend. But for the rest it is exactly the same.
MobaXterm local terminalWhen you start MobaXterm you'll get to:
When you click on the 'Start local terminal' button, you get a shell window running in the user home folder that is presented by the tool:
You can ssh to a remote server from this terminal. Of course you can create a session to a remote server. Doing so for the first time, logging on to the particular user, allows you to save the password for that user. I've already done that, and then I can do a ssh oracle without the need to provide a password.
Start Database
On my remote server that is started already (using the TotalCommander Button), I have a script that starts the database.
When the VM is started with Vagrant, by default it fires up an SSH daemon on port 2222. The command to start my database on the remote server is:
ssh oracle@localhost -p 2222 /home/oracle/bin/startDB.sh
And as you can see in the screendump: I put that in a script in the home folder.
MobaXterm provides several command-line options that allow you to run a script or command at startup of MobaXterm. That is what I used to create a StartDatabase button:
The command is just MobaXterm (I should put MobaXterm in a version-less folder name, with a version-less executable, or create a script for that).
As parameters I provided -newtab ./startDB.sh. This ensures that the script is started in a new tab in MobaXterm, in a potential new session. A little side effect is that it creates a new tab in MobaXterm on every button click. So I might end up closing a few tabs...
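Put on one line, the button definition comes down to something like the following; the executable name and path depend on the MobaXterm version and where you unpacked the portable edition, so treat them as an example.
"D:\Tools\MobaXterm\MobaXterm_Personal_11.1.exe" -newtab ./startDB.sh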
Start SQLDeveloper and JDeveloper (BPM QuickStart)
To start SQLDeveloper I have a sqldev.sh script with the following content:
nohup ssh oracle@localhost -p 2222 /home/oracle/bin/sqldev.sh > sqldev.out 2>&1 &
And the button looks like:
Similarly, I have a button to start JDeveloper. And it all ends up in the following desktop:
It might need some tweaking. But for now I love it and it works like a charm.
SOASuite 12c upgrade - Composite DVMs
Today I found something curious in a composite upgraded from 11g to 12c, regarding DVMs. I sometimes use DVMs in BPEL to prevent the use of complex XPath expressions with many conditions. For instance, if I need to know whether a JMSType is in a certain range and, if so, continue, I can create a DVM that correlates those JMSTypes to an indicator.
Now, in 12c we have a new project structure. Where in 11g just about every component is in the root of the project, in 12c those are moved to subfolders. That is, if you create a new project:
Folders like xsd, wsdl and xsl in 11g are renamed to Schemas, WSDLs and Transformations in 12c. We decided to refactor the upgraded projects to the new 12c structure. So our BPEL processes are moved to the BPEL subfolder. This means that when referencing a transformation (xsl) you would adapt your XSLT function calls as follows:
<assign name="Transform2MessageProperties">
<bpelx:annotation>
<bpelx:pattern>transformation</bpelx:pattern>
</bpelx:annotation>
<copy>
<from expression="ora:doXSLTransformForDoc('../Transformations/Transform2MessageProperties.xsl', $Receive_InkomendBericht_InkomendBericht_InputVariable.InkomendBericht)"/>
<to variable="messageProperties"/>
</copy>
</assign>
Here you see that the reference to the transformation is relative to the BPEL process, and thus '../Transformations'.
Keeping things transparent and uniform, we adapted the DVM references accordingly:
<assign name="Assign_JmsType">
<copy>
<from expression="dvm:lookupValue('../DVMs/DWN_Types.dvm', 'Type', bpws:getVariableData('messageProperties','/ns1:messageProperties/ns1:type') , 'JmsType', 'onbekend')"/>
<to variable="jmsType"/>
</copy>
</assign>
However, we got exceptions like:
<bpelFault><faultType>0</faultType><subLanguageExecutionFault xmlns="http://schemas.oracle.com/bpel/extension"><part name="summary"><summary>XPath expression failed to execute.
An error occurs while processing the XPath expression; the expression is dvm:lookupValue('../DVMs/DWN_Types.dvm', 'Type', bpws:getVariableData('messageProperties','/ns1:messageProperties/ns1:type') , 'JmsType', 'onbekend')
The XPath expression failed to execute; the reason was: oracle.fabric.common.xml.xpath.XPathFunctionException: Unable to get Metadata Manager for DVM "oramds:/deployed-composites/default/DVMs/DWN_Types.dvm"
Please ensure the Metadata Manager is available.
Check the detailed root cause described in the exception message text and verify that the XPath query is correct.
</summary></part><part name="code"><code>XPathExecutionError</code></part></subLanguageExecutionFault></bpelFault>
After some investigation and trial & error, I found it very peculiar that the reference apparently resolved to: oramds:/deployed-composites/default/DVMs/DWN_Types.dvm. This actually means that it searches for the DVM in the MDS relative to the deployed composite, but outside its own folder. After the ../default/.. folder reference it should have the composite's name and version/id.
I tried it without the '../' reference in the path, and that worked!
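In other words, the lookup that does work is the same expression as above, but with the path relative to the composite root:
dvm:lookupValue('DVMs/DWN_Types.dvm', 'Type', bpws:getVariableData('messageProperties','/ns1:messageProperties/ns1:type') , 'JmsType', 'onbekend')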
Conclusion:
- In SOA 12c the ora:doXSLTransformForDoc() function is apparently executed in the context of the BPEL process and thus relative references to the XSL file should be done relative to the BPEL process.
- In SOA 12c the dvm:lookupValue() function appears to be executed in the context of the composite, so the reference to the DVM file should be relative to the composite (composite.xml).
Now, you might state that DVMs should be in the MDS and then it should not give any problems. But in this case, the DVMs are strictly and particularly meant to drive the execution of the BPEL process, and thus should be part of the composite. You could see them as a simple alternative for a business rule, or a more configurable condition evaluation. DVMs in this category should not be shared through the MDS.
JavaDB not bundled anymore with JDK 8, as of U181
Today I was struggling with helping a colleague with a deployment of a SOA Project of his.
I couldn't get it deployed. It seemed I hit the problem described here. However when trying to connect to my Derby DB I got the following error:
I was very surprised. I checked and double-checked my config. And checked the library:
So, I checked those folders and found that they don't exist!
Searching around, I found in these release notes that as of Update 181 (and of course that was exactly the JDK version I had!) Java DB isn't bundled anymore:
Following the links it turns out that you should download it here.
I chose the zip, copied it into my JDK folder and unzipped it:
[oracle@darlin-vce jdk]$ cp /media/sf_Stage/OpenSource/JavaDB/db-derby-10.14.2.0-bin.zip .
[oracle@darlin-vce jdk]$ unzip db-derby-10.14.2.0-bin.zip
Archive: db-derby-10.14.2.0-bin.zip
creating: db-derby-10.14.2.0-bin/
inflating: db-derby-10.14.2.0-bin/KEYS
inflating: db-derby-10.14.2.0-bin/LICENSE
inflating: db-derby-10.14.2.0-bin/NOTICE
inflating: db-derby-10.14.2.0-bin/RELEASE-NOTES.html
...
Then I moved/renamed the folder to 'db':
[oracle@darlin-vce jdk]$ mv db-derby-10.14.2.0-bin db
[oracle@darlin-vce jdk]$ ls db/lib/
derbyclient.jar derbyLocale_it.jar derbyLocale_zh_TW.jar
derby.jar derbyLocale_ja_JP.jar derbynet.jar
derbyLocale_cs.jar derbyLocale_ko_KR.jar derbyoptionaltools.jar
derbyLocale_de_DE.jar derbyLocale_pl.jar derbyrun.jar
derbyLocale_es.jar derbyLocale_pt_BR.jar derbytools.jar
derbyLocale_fr.jar derbyLocale_ru.jar derby.war
derbyLocale_hu.jar derbyLocale_zh_CN.jar
[oracle@darlin-vce jdk]$
After this I'm able to connect to the JavaDB:
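For reference, a quick way to verify the connection from the command line is the ij tool that ships in the same lib folder; the database name in the connect string is just an example, not my actual database.
[oracle@darlin-vce jdk]$ java -jar db/lib/derbyrun.jar ij
ij> connect 'jdbc:derby:testdb;create=true';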
So, that was my discovery of the day!
Upgrade SOASuite process to 12c - Sensor Actions JMS to AQ
At my current customer we're busy with upgrading our projects from 11g to 12c.
One of the solutions my predecessors implemented is to kick off archive processes using sensor actions. The archive processes listen to JMS Queues that are implemented as AQ Queues. For that a Foreign Server is configured:
The Foreign Server has a reference to the datasource that points to the schema owning the queues. It also has one or more Connection Factories:
And the queues have a mapping from a local JNDI to a remote JNDI. The remote JNDI is the name of the particular queue prefixed with Queue:
In the sensor actions we used to have a JMS Adapter configured, with the JNDI name of the outbound connection pool (for instance eis/aqjms/DwnQueueDB) as connection factory. The connection factory in that outbound connection pool refers to the JNDI of the connection factory in the Foreign Server.
Now, it turned out that our archiving processes weren't kicked off. I found a few things.
Sensor property files
The sensors can be configured in the Monitor view of the BPEL Designer. It can be accessed using the Monitor icon top left. When a sensor is defined you can click the antenna icon. You can of course create new ones by right-clicking on the activity.
Sensor actions can be edited by selecting them and clicking the pencil (edit) icon.
In 11g, all the artefacts land in the root folder of the composite by default. We refactored the composites by moving artefacts to respective folders, like SOA Suite 12c would do in a new project.
But we skipped the files ${bpel-process-name}_sensor.xml and ${bpel-process-name}_sensorAction.xml. I moved those to the same folder as the BPEL process. With a refresh, the antenna icons re-appeared.
But the files are also referenced in the composite.xml:
…
</componentType>
<property name="configuration.sensorLocation" type="xs:string" many="false">BPEL/${bpel-process-name}_sensor.xml</property>
<property name="configuration.sensorActionLocation" type="xs:string" many="false">BPEL/${bpel-process-name}_sensorAction.xml</property>
</component>
These references aren't updated automatically when moving them. But it turns out that the properties are renamed as well (probably from 10g to 11g already):
- pre-11g: bpel.config.sensorLocation => 11g/12c onwards: configuration.sensorLocation
- pre-11g: bpel.config.sensorActionLocation => 11g/12c onwards: configuration.sensorActionLocation
I have a query that allows me to select from the queue tables and introspect the queues as JMS Queues:
select qtb.queue_table
, qtb.queue
, qtb.msg_id
, qtb.msg_state
, qtb.enq_timestamp
, qtb.user_data.header.replyto
, qtb.user_data.header.type type
, qtb.user_data.header.userid userid
, qtb.user_data.header.appid appid
, qtb.user_data.header.groupid groupid
, qtb.user_data.header.groupseq groupseq
, qtb.user_data.header.properties properties
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'tracking_compositeInstanceId') tracking_compositeInstanceId
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'JMS_OracleDeliveryMode') JMS_OracleDeliveryMode
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'tracking_ecid') tracking_ecid
, (select num_value from table (qtb.user_data.header.properties) prp where prp.name = 'JMS_OracleTimestamp') JMS_OracleTimestamp
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'tracking_parentComponentInstanceId') tracking_prtCptInstanceId
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'tracking_conversationId') tracking_conversationId
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'BPEL_SENSOR_NAME') bpel_sensor_name
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'BPEL_PROCESS_NAME') bpel_process_name
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'BPEL_PROCESS_REVISION') bpel_process_rev
, (select str_value from table (qtb.user_data.header.properties) prp where prp.name = 'BPEL_DOMAIN') bpel_domain
, qtb.user_data.header
, qtb.user_data.text_lob text
, qtb.expiration_reason
--, qtb.*
from ( select 'DWN_OUTBOUND_TABLE' queue_table
, qtb.*
from aq$dwn_outbound_table qtb
union all
...
union all
select 'DWN_INBOUND_TABLE' queue_table
, qtb.*
from AQ$DWN_INBOUND_TABLE qtb) qtb
order by enq_timestamp desc;
This query lists the contents of several queue tables (always query queue tables via their AQ$<queue_table_name> view) unioned together. From that you can introspect the user data and their properties with the dot notation. The UserData has a header object, that contains a properties collection, that holds the JMS properties. You can select those as seen above. It turns out that Sensor Actions set the following properties:
- BPEL_SENSOR_NAME
- BPEL_PROCESS_NAME
- BPEL_PROCESS_REVISION
- BPEL_DOMAIN
After some research, with no luck, I figured that I could try using a JMS Queue configuration. I wondered what the difference would be. Well, it turned out that using a JMS Queue did work.
Reconfiguring the Sensor Action to use a JMS Queue means:
- Set Publish Type to JMS Queue (obviously)
- The JMS Connection Factory needs to hold the JNDI name of the connection factory to use, in our case the one registered at the Foreign Server. (With JMSAdapter as Publish Type, this property is called JMSConnectionName.)
- The Publish Target is now the JNDI name of the destination. In our example (see the screenshot above) it is the local JNDI of the queue to publish to. With the JMS Adapter it was the queue name.
So, in our case this worked.
Generate a formatted guid from database - Use Snippets
A very simple quick post today. I'm re-engineering a few Java based Mock Webservices into a SOA Suite/BPEL service.
Some of those generate a soap fault when a message id contains "8888" for instance.
I'd like to generate a GUID based message id, that is formatted with groups of four digits.
Of course there are loads of methods to do that. For instance, the Oracle database has had a sys_guid() function for years. It generates a guid like '824F95ECCB1C0EB7E053120B260A2D0F'.
But I'd like it in the form '824F-95EC-CB1C-0EB7-E053-120B-260A-2D0F'. It can easily be done by concatenating substr() calls. But you do not want to re-generate the guid with every four-character substr().
So, I put it into the following select:
with get_guid as (select sys_guid() guid
from dual)
select guid
, substr(guid, 1, 4)||'-'||substr(guid, 5, 4)||'-'||substr(guid, 9, 4)||'-'||substr(guid, 13, 4)||'-'||substr(guid, 17, 4)||'-'||substr(guid, 21, 4)||'-'||substr(guid, 25, 4)||'-'||substr(guid, 29, 4) guid_formatted
, length(guid) guid_length
from get_guid;
What might be less obvious for the regular SQL developer is the with clause. It is explained excellently by Tim Hall (and although around since Oracle 9.2 already, it was only recently put in my personal skill-box). This allows me in this query to call sys_guid() once and reuse the value in the three columns.
Although this is a very simple query, it might come in handy more often. And since I expect to be around this customer for a longer period, I want to save it as a snippet.
A feature that has been around in SQLDeveloper for years is snippets. You can make them visible through the View menu:
I tabbed it away to the left gutter, to have it out of my way but still in reach:
Create and edit snippets through the indicated icons. You can create your own categories by just entering a new category name. Name it, provide a tooltip and paste the snippet. Easy-peasy.
You'll find quite a number of predefined snippets categorized neatly.
If you have gathered several of those snippets like me, and maybe want to take them to other assignments, you might feel the need to back them up.
To reuse a snippet just drag and drop them from the list into your worksheet.
The Snippets are stored in the UserSnippets.xml in the roaming user profile of SQL Developer:
On Windows that is something like 'c:\Users\makker\AppData\Roaming\SQL Developer\'. Just back up/copy the file. Here you also see the CodeTemplate.xml file, which contains the shorthand acronyms/aliases for much-typed pieces of code that you can create too.
By the way, googling "That Jeff Smith Snippets" brought me this archived article (yes, snippets are that old) and with a link to this nice still active library of snippets.
Upgraded my Virtualization environment
A few weeks ago VirtualBox 6.0.4 was released, a minor release of the recent major release 6.0. Although already announced by Tim ~Oracle Base~ Hall, I had not upgraded yet; I was still on 5.2.x. The change log of VirtualBox can be found here. There are some interesting improvements. For instance, I'm curious to see what we can expect from the Oracle Cloud integration. And on several points, like shared folders, the performance is improved.
The UI is refreshed. I like the separate Tools bar with quick buttons to Import, Export and create new VM's. But, since I work with Vagrant more and more, I will see this screen less and less.
Also Vagrant has had a new version since the beginning of January, and I upgraded to 2.2.3. The change log can be found here.
All seems to function fine together. With my recently upgraded MobaXterm 11.1 I can start my JDeveloper 12c from the started VM perfectly.
The VM was suspended with VBox 5.2.x and Vagrant 2.2.2, and started with the latest greatest. With no problems at all.
By the way, VMWare Player had VMWare Unity and VirtualBox has had Seamless mode for years. It allows you to start your apps in the VM and run them as if they were separate windows on your host. Years ago when I used VMWare Player, I was quite impressed by it. But I never got used to the VBox Seamless mode. Nowadays my favorite way of working is to start the VM without UI (set the vb.gui property to false in your Vagrantfile), connect to it using MobaXterm and start the app (JDeveloper for instance). The X Server implementation of MobaXterm will take care of the rest. Works like a charm!
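For reference, that property sits in the VirtualBox provider block of the Vagrantfile:
config.vm.provider "virtualbox" do |vb|
  # Do not open the VirtualBox console window; we connect over SSH/X11 instead.
  vb.gui = false
end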
KafkaSeries: Starting KafkaServers in Java - Implementing the Observer pattern ... again
In my previous article I explained how I start a ZooKeeper Server (potentially more of them) in Java using the Observer pattern. As promised, in this article I will explain how I implement the starting of KafkaServers in about the same way. Again, using the Observer pattern.
In principle we need one ZooKeeper, although you can run multiple instances in a high-available setup. I still have to figure that out, by the way.
But we can have multiple KafkaServers. And that makes sense. You might remember that I'm planning to use Kafka in a Weblogic environment, where you can have multiple Managed Servers (for instance OSB or SOA) that run side-by-side in a cluster, possibly on multiple machines. You probably want to have the Kafka clients (consumers & producers) connect to the local instance. I would. But they should work together, exchanging messages, so you can track events that originated on the other instance.
So I implemented a KafkaServerDriver extending the Observable class the same way as the ZooKeeperDriver in my previous article (I in fact copied it). I changed it in a way that it can start multiple instances of KafkaObserver.
So, let me go over the particular methods again.
This is essentially the method to start a Kafka Server. It begins with creating a KafkaConfig object from a plain java.util.Properties object. Again I created my own KafkaServer Properties class that extends java.util.Properties. In the ZooKeeper article I explained that I needed a few extra methods to get int-based properties or to default a property based on the value of another property. In this case another reason is that I want to be able to differentiate between KafkaServers, each having their own property file. We'll get into that later on.
/**
* Run from a ServerConfig.
* @param config ServerConfig to use.
* @throws IOException
*/
public void runFromProperties(Properties ksProperties) throws IOException {
final String methodName = "runFromProperties";
log.start(methodName);
log.info(methodName, "Starting server");
KafkaConfig config = KafkaConfig.fromProps(ksProperties);
//VerifiableProperties verifiableProps = new VerifiableProperties(ksProperties);
Seq reporters = new ArraySeq(0);
// Seq reporters = (Seq) KafkaMetricsReporter$.MODULE$.startReporters(verifiableProps);
KafkaServer kafkaServer = new KafkaServer(config, new SystemTime(), Option.apply("prefix"), reporters);
setKafkaServer(kafkaServer);
kafkaServer.startup();
log.end(methodName);
}
The KafkaServer(s) allow for injecting MetricReporters that can report on the runtime behavior of the particular KafkaServer in a desired way. I did not get that to work in my JDeveloper project, since these are Scala objects that JDeveloper got confused by, so to speak. So, in this version I provide an empty reporters array.
Then we create a new KafkaServer object. The constructor expects the following parameters.
- config: the KafkaConfig object, created from the properties.
- new SystemTime(): a new org.apache.kafka.common.utils.SystemTime object.
- Option.apply("prefix"): Option is the Scala way of wrapping an optional value (Kafka is built in Scala). The value "prefix" is used to give a name to the thread the KafkaServer will run in.
- reporters: a list of reporters that can be provided to the KafkaServer, to monitor it.
To get a hold of the instantiated KafkaServer, we set it in our private attribute, and then startup the server.
KafkaServerDriver Properties
We can have multiple KafkaServers running in our environment. We could have multiple on the same host, or distributed over multiple hosts. Each of them will have their own property file, since, especially when running on the same host, they need at least their own broker.id and also their own port and data/log folder.
To be able to differentiate over the different Kafka Servers and define which one of them should be started up on the particular host, I introduced my own KafkaServerDriverProperties file.
It looks like:
kafkaservers=server0,server1
server0.id=0
server0.propertyfile=server0.properties
server0.startupEnabled=true
server1.id=1
server1.propertyfile=server1.properties
server1.startupEnabled=true
This defines a list of kafkaservers (server0 and server1 in this example) and then for each of those a list of attributes. Of importance are the properties:
- <server-name>.propertyfile: naming a copy of the server.properties file that is used for this server. It is loaded from the classpath, so only the name should be provided.
- <server-name>.startupEnabled: should the server be started on this host (true or false)?
The KafkaServerDriverProperties class transforms the comma-separated list into a List object that enables you to iterate over it. And for each server name on the list it will get the propertyfile and startupEnabled properties and put those, wrapped in a Properties object, in a HashMap keyed by server name. The getServerProperties(String serverName) method enables you to fetch those properties for a certain server name.
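A condensed sketch of how such a class could look; the method names follow the usages in this article, but the constructor and field layout are just my shorthand here (Properties refers to the extended Properties class mentioned above):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class KafkaServerDriverProperties {
    private final List<String> kafkaServerList = new ArrayList<String>();
    private final Map<String, Properties> serverPropertiesMap = new HashMap<String, Properties>();

    public KafkaServerDriverProperties(java.util.Properties properties) {
        // Split the comma separated kafkaservers property into a list of server names.
        for (String name : properties.getProperty("kafkaservers").split(",")) {
            String serverName = name.trim();
            kafkaServerList.add(serverName);
            // Wrap the per-server attributes in their own Properties object, keyed by the server name.
            Properties serverProperties = new Properties();
            serverProperties.setProperty("propertyfile", properties.getProperty(serverName + ".propertyfile"));
            serverProperties.setProperty("startupEnabled", properties.getProperty(serverName + ".startupEnabled"));
            serverPropertiesMap.put(serverName, serverProperties);
        }
    }

    public List<String> getKafkaServerList() {
        return kafkaServerList;
    }

    public Properties getServerProperties(String serverName) {
        return serverPropertiesMap.get(serverName);
    }
}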
Observing the KafkaServer Observable
Having the above in place, the KafkaServerDriver Observable can be implemented with the ZooKeeperDriver as an example. But, since we want to be able to fire up multiple KafkaServers, this is slightly more complicated.
Start
The start method within the KafkaServerDriver looks like:
/**
* Start KafkaServers
*/
public void start() {
final String methodName = "start";
log.start(methodName);
for (String kafkaServerName : ksdProperties.getKafkaServerList()) {
log.debug(methodName, "Start KafkaServer: " + kafkaServerName);
addKafkaServer(kafkaServerName);
}
//addKafkaServer();
log.end(methodName);
}
It loops over the server names from the KafkaServerList from the KafkaServerDriverProperties. For each listed servername it will add a KafkaServer.
addKafkaServer
This method has some overloaded variants. One is parameterless; it loads the default server.properties file from the classpath and calls the variant that takes in a Properties parameter.
But let's start with the addKafkaServer(String) variant:
/**
* Add a KafkaServer
* @param kafkaServerName
*/
public void addKafkaServer(String kafkaServerName) {
final String methodName = "addKafkaServer(String)";
log.start(methodName);
try {
Properties serverProperties = ksdProperties.getServerProperties(kafkaServerName);
if (serverProperties.getBoolValue("startupEnabled")) {
log.info(methodName, "Start KafkaServer " + kafkaServerName);
String serverPropertiesFileName = serverProperties.getStringValue("propertyfile");
log.debug(methodName, "KafkaServer propertyfile: " + serverPropertiesFileName);
Properties ksProperties = null;
if (serverPropertiesFileName != null) {
ksProperties = PropertiesFactory.getKSProperties(serverPropertiesFileName);
} else {
ksProperties = PropertiesFactory.getKSProperties();
}
addKafkaServer(ksProperties);
} else {
log.info(methodName, "KafkaServer " + kafkaServerName + " has startupEnabled == false!");
}
} catch (IOException e) {
log.error(methodName, "Failed to load properties!", e);
throw new RuntimeException(e);
}
log.end(methodName);
}
This one takes in the kafkaServerName and gets the appropriate server Properties from the KafkaServerDriverProperties object. If it has the startupEnabled property set to true, it will fetch the serverProperties file and load that one. Using that Properties object it will call the addKafkaServer(Properties) variant:
/**
* Add a KafkaServer from properties
* @param ksProperties
*/
public void addKafkaServer(Properties ksProperties) {
final String methodName = "addKafkaServer";
log.start(methodName);
KafkaObserver kafkaServer = new KafkaObserver(this, ksProperties);
Thread newKSThread = new Thread(kafkaServer);
newKSThread.setName("KafkaServer" + ksProperties.getProperty(PRP_BRKR_ID));
kafkaServer.setKsThread(newKSThread);
newKSThread.start();
log.end(methodName);
}
What this does is pretty much equal to the addZookeeper() method in the ZooKeeperDriver class: create a new KafkaObserver, providing the KafkaServerDriver object (this) as a reference and the Kafka Server Properties object, and create a new Thread for it. New (I didn't have that when I wrote the previous article about starting the ZooKeeper) is that I set the name of the Thread. Then I set the new thread on the KafkaObserver.
Construct a KafkaObserver
We saw that in the addKafkaServer a KafkaObserver is instantiated using a reference to the KafkaServerDriver object as an Observable and the KafkaServer Properties object.
The constructor to do so is as follows:
public KafkaObserver(Observable kafkaServerDriver, Properties ksProperties) {
super();
final String methodName = "KafkaObserver(Observable, Properties)";
log.start(methodName);
this.setKsProperties(ksProperties);
if (kafkaServerDriver instanceof KafkaServerDriver) {
log.info(methodName,
"Add observer " + this.getClass().getName() + " to observable " +
kafkaServerDriver.getClass().getName());
setKafkaServerDriver((KafkaServerDriver) kafkaServerDriver);
kafkaServerDriver.addObserver(this);
}
log.end(methodName);
}
In it we set the properties, register the KafkaServerDriver and add this new object as an observer to the referenced KafkaServerDriver.
Run the KafkaObserver
Since the KafkaObserver is a Runnable we need to implement the run() method:
public void run() {
final String methodName = "run";
log.start(methodName);
try {
runFromProperties(getKsProperties());
} catch (IOException ioe) {
log.error(methodName, "Run failed!", ioe);
}
log.end(methodName);
}
Shutdown
Shutdown within the KafkaObserver is as easy as:
/**
* Shutdown the serving instance
*/
public void shutdown() {
final String methodName = "shutdown";
log.start(methodName);
log.info(methodName, "Let me shutdown " + getKsThread().getName());
KafkaServer kafkaServer = getKafkaServer();
kafkaServer.shutdown();
log.end(methodName);
}
The KafkaServerDriver also has a shutdown() method:
/**
* Shutdown all KafkaServers
*/
public void shutdown() {
final String methodName = "shutdown";
log.start(methodName);
setShutdownKafkaServers(true);
log.info(methodName, "Notify Observers to shutdown!");
this.setChanged();
this.notifyObservers();
log.end(methodName);
}
It sets the shutdownKafkaServers indicator, as well as the changed indicator. Then it notifies the Observers. This will result in a signal to the update() method of all registered KafkaObservers:
public void update(Observable o, Object arg) {
final String methodName = "update(Observable,Object)";
log.start(methodName);
Thread ksThread = getKsThread();
log.info(methodName, ksThread.getName() + " - Got status update from Observable!");
KafkaServerDriver ksDriver = getKafkaServerDriver();
if (ksDriver.isShutdownKafkaServers()) {
log.info(methodName, ksThread.getName() + " - Apparently I´ve got to shutdown myself!");
shutdown();
} else {
log.info(methodName, ksThread.getName() + " - Don't know what to do with this status update!");
}
log.end(methodName);
}
It checks if the registered KafkaServerDriver has the shutdownKafkaServers indicator set. If so (and it obviously will), it will call the shutdown() method mentioned earlier.
Start & Shutdown
As with the ZooKeeperDriver you need to store the KafkaServerDriver object in a static variable, and call the respective start and shutdown methods. Using the mentioned KafkaServerDriverProperties file in the classpath, the particular instance will know which KafkaServers need to be started. Make sure that for each Kafka server you have a copy of the server.properties file as found in the Kafka distribution (for instance Confluent). Each copy needs to have a unique broker.id and references to its own data/log folders. And possibly a unique listen port.
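To give an idea of where this is going: a Weblogic startup class is simply a class with a main() method that Weblogic calls when the server boots. A minimal sketch of how I picture the startup and shutdown classes; the class names and the static-variable approach are my assumptions here, still to be proven in the next article.

public class KafkaServerDriverStartup {
    // Keep a handle to the driver so the shutdown class can reach it later.
    private static KafkaServerDriver kafkaServerDriver;

    public static void main(String[] args) {
        kafkaServerDriver = new KafkaServerDriver();
        kafkaServerDriver.start();
    }

    public static KafkaServerDriver getKafkaServerDriver() {
        return kafkaServerDriver;
    }
}

public class KafkaServerDriverShutdown {
    public static void main(String[] args) {
        KafkaServerDriver kafkaServerDriver = KafkaServerDriverStartup.getKafkaServerDriver();
        if (kafkaServerDriver != null) {
            kafkaServerDriver.shutdown();
        }
    }
}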
Libraries and Classpath
One of the things I often miss in articles like this (my excuses that I did not add it to the previous article) is a list of libraries to add to get the lot compiled.
If you take a look at the scripts, you'll find that they just add all the libraries in the particular folder. I like to know which particular jars I really need to get things compiled. The following jar files in the Confluent distribution turned out to be needed, both for having the project compiled and for being able to run:
- confluent/share/java/kafka/kafka.jar
- confluent/share/java/kafka/kafka-clients-2.0.0-cp1.jar
- confluent/share/java/kafka/log4j-1.2.17.jar
- confluent/share/java/kafka/slf4j-log4j12-1.7.25.jar
- confluent/share/java/kafka/slf4j-api-1.7.25.jar
- confluent/share/java/kafka/kafka-log4j-appender-2.0.0-cp1.jar
- confluent/share/java/kafka/zookeeper-3.4.13.jar
- confluent/share/java/kafka/scala-library-2.11.12.jar
- confluent/share/java/confluent-common/common-metrics-5.0.0.jar
- confluent/share/java/kafka/scala-logging_2.11-3.9.0.jar
- confluent/share/java/kafka/metrics-core-2.2.0.jar
- confluent/share/java/kafka/jackson-core-2.9.6.jar
- confluent/share/java/kafka/jackson-databind-2.9.6.jar
- confluent/share/java/kafka/jackson-annotations-2.9.6.jar
Added to that I have the following folders added in my project's library listing:
- confluent/etc/kafka/
- KafkaClient/config
Conclusion
Well, that's about it for now. Next stop: create a Weblogic domain and try to add the startup and shutdown classes to it, and see if I can have ZooKeeper and KafkaServers booted with Weblogic.
And of course the proof of the pudding: produce and consume messages.
KafkaSeries: Start Zookeeper from Java - Implementing the Observer pattern (while I can)
Introduction
For a few months now I have been diving into Apache Kafka. I've always been fascinated by queuing mechanisms, and Apache Kafka nowadays is the most modern alternative. Lately I did a presentation on an introduction to Apache Kafka:
But now I'm investigating what I can do with it. Since Weblogic is one of my focus areas, I wanted to explore how I can embed Kafka into Weblogic.
I reasoned that when I want to use Kafka with a current customer, the administrators have to install kafka (eg. unzip the Confluent distribution), on a separate virtual server.
By default the distribution comes with startup and shutdown scripts. The administrators should use those, or create their own, and startup the Kafka and Zookeeper services. And of course keep those up-and-running.
I figured that if I could start the services as a thread under a Weblogic server, no additional infrastructure would be needed. Also, starting the Weblogic server would start the Kafka services as well.
Kafka needs a ZooKeeper service. You can see the ZooKeeper as a directory service for a Kafka infrastructure. Slightly comparable to an AdminServer in Weblogic. So it would make sense, as I see it, to start the ZooKeeper with the AdminServer. The Kafka Servers can be started as part of the Weblogic Managed Server(s)
Weblogic has a mechanism to do initializations and finalizations, using startup and shutdown classes; see the documentation. From there the ZooKeeper and KafkaServers can be started.
So I had to figure out how to start those from Java. Let's start with the ZooKeeper.
I put my sources on GitHub, so you can review them. But keep in mind that they're still under construction.
Starting a ZooKeeper
My starting point was this question on StackOverflow, that handles starting a ZooKeeperServer in Java, based on the ZooKeeperServerMain.java class. It was quite promising and soon I had a first version of my startup class working. Quite simple really. But, since I also want to be able to shut it down, I soon ran into some restrictions. Some methods and attributes I needed were protected and only reachable from the same package, for instance. I wasn't quite pleased with the implementation. Digging a bit further I ran into the source of that class over here. I decided to take that class, study it and based on that knowledge implement my own class.
I created a ZooKeeperObserver class, and transformed the public void runFromConfig(ServerConfig config) method from ZooKeeperServerMain.java class, into a public void runFromProperties(ZooKeeperProperties zkProperties) method.
It takes in a properties object, that is interpretted and used to start the ZooKeeper.
Zookeeper Properties To keep things transparent and simple, I created a PropertiesFactory class that provides a method to read the zookeeper.properties from the class path (therefor we should add the /etc/kafka folder to it).
I also created an own Properties class extending java.util.Properties to add a few property getter methods, like getting an int value and defaulting a property based on an other property.
Lastly, I created the ZooKeeperProperties bean, to interpret the relevant ZooKeeper properties, from a read Properties object.
The relevant properties are:
Only the properties dataDir, clientPort and maxClientCnxns are set explicitly in the zookeeper.properties file. See the Zookeeper Administration docs for more info (apparently Zookeeper is created/invented in the Hadoop project).
Run from PropertiesThe runFromProperties is the one that actually starts a ZooKeeperServer instance:
Observing the ObservableNow, you might think: What is it with the name ZooKeeperObserver? Earlier, I named it EmbeddedZooKeeperServer. But I found that name long and not nice. I found it funny that Observer has the word Server in it.
As mentioned in the previous section, when starting up the ConnectionFactory/ZookeeperServer, the control is handed over. The method is not left, until the ZooKeeperServer stops running.
I therefor want (as in many implementations) that the ZooKeeperServer, runs in a seperate thread, that I can control. That is, I want to be able to send a shutdown signal to it. For that I found the Observer pattern suitable. In this pattern, the Observable or Subject maintains a list of Observers that can be notified about an update in the Observable. To do so, the Observable extends the java.util.Observable class. And the Observer implements the java.util.Observer and Runnable interfaces.
How does it work? Let's go through the applicable methods.
Start and Add a ZooKeeperThe Observable is implemented by ZooKeeperDriver. In it we'll find a method start():
Here you see that the ZooKeeperProperties are fetched and a new ZooKeeperObserver is instantiated, using a reference to the ZooKeeperDriver object and the ZooKeeperProperties. Since the ZooKeeperObserver is a Runnable we can add it to a new Thread. That thread is also set to the ZooKeeperObserver so that it has a hold of it's own thread, when that come in handy.
And then the new thread is started.
Instantiate the ZooKeeperObserverIn the previous section, we saw that the ZooKeeperObserver is instantiated using a reference to the ZooKeeperDriver object. Let's see how it looks like:
The ZooKeeperProperties are set. And then it checks if the Observable that is passed is indeed a ZooKeeperDriver. The ZooKeeperDriver is also set, and then the ZooKeeperObserver object is added as an Observer to the ZooKeeperDriver using the addObserver(this) method. This method is part of the java.util.Observable object that is extended. It adds the ZooKeeperObserver to a list, that is used to send the update signal to every instance on the list.
Run the ZooKeeperObserverThe ZooKeeperObserver is a Runnable so the run() method is implemented:
It calls the runFromProperties(), that is explained earlier.
ShutdownThe ZooKeeperDriver has a shutdown() method:
It sets the shutdownZooKeepers indicator to true. This is an attribute that indicates what has been updated. In a more complex Observer pattern more kinds of updates can occur. So, you need to indicate what drove the update.
The most interesting statement is the call to the notifyObservers() method. It will call the implemeneted update() on every Observer in the list.
I implemented this earlier in another situation, a few years ago. And I reused it. But at first it did not work. I found that, apparently changed in Java 7 or 8, I had to add a call to the setChanged() method. The notification to the Observers only works after that call.
As said, notifyObservers() calls the update() method in the Observer:
And this one actually checks in the ZooKeeperDriver if the change is because of the shutDownZooKeepers indicator.
If so, it calls it's own shutdown() method. If not, then the update is ignored. The shutdown does the following:
It gets the Connection factory and sends a shutdown() signal to it. if the ZooKeeper is still running (it shouldn't be), then it gets a shutdown() signal also.
Start and ShutdownIn the end you need to create an instance of the ZooKeeperDriver and save it into a static variable. Then you can call the start() method and later get the object again from the static variable, to call the shutdown() method.
ConclusionThis may look a quite complex to you, to start a server. But, again, I want to be able to embed the Kafka infrastructure in an other system, in my situation Weblogic. This method I'll use to do the same for the Kafka Servers. I'll write about that in a follow-up article. And then, I'll create a set of startup and shutdown classes for Weblogic.
It was fun to implement the Observer pattern again. But, when I encountered that the notifyObserver method did not work as expected at first, searching for a solution, I found that it is deprecated in Java 9. It will still work, but apparently people found that it has it's limitations and a better way of implementing it is developed.
But now I'm investigating what I can do with it. Since Weblogic is one of my focus areas, I wanted to explore how I can embed Kafka into Weblogic.
I reasoned that if I want to use Kafka with a current customer, the administrators would have to install Kafka (e.g. unzip the Confluent distribution) on a separate virtual server.
The distribution comes with default startup and shutdown scripts. The administrators would have to use those, or create their own, to start up the Kafka and ZooKeeper services, and of course keep them up and running.
I figured that if I could start the services as threads under a Weblogic server, no additional infrastructure would be needed. Starting the Weblogic server would then start the Kafka services as well.
Kafka needs a ZooKeeper service. You can see ZooKeeper as a directory service for a Kafka infrastructure, slightly comparable to the AdminServer in Weblogic. So it would make sense, as I see it, to start ZooKeeper with the AdminServer, and the Kafka servers as part of the Weblogic Managed Server(s).
Weblogic has a mechanism for initializations and finalizations, using startup and shutdown classes; see the documentation. From there the ZooKeeper and Kafka servers can be started.
So I had to figure out how to start those from Java. Let's start with the ZooKeeper.
I put my sources on GitHub, so you can review them. But keep in mind that they're still under construction.
Starting a ZooKeeper
My starting point was this question on StackOverflow, which covers starting a ZooKeeperServer in Java based on the ZooKeeperServerMain.java class. It looked quite promising, and soon I had a first version of my startup class working. Quite simple, really. But since I also want to be able to shut it down, I soon ran into some restrictions: some methods and attributes I needed were protected and only reachable from the same package, for instance. I wasn't quite pleased with that implementation. Digging a bit further I ran into the source of that class over here. I decided to take that class, study it, and implement my own class based on that knowledge.
I created a ZooKeeperObserver class and transformed the public void runFromConfig(ServerConfig config) method from the ZooKeeperServerMain.java class into a public void runFromProperties(ZooKeeperProperties zkProperties) method.
It takes in a properties object that is interpreted and used to start the ZooKeeper.
ZooKeeper Properties
To keep things transparent and simple, I created a PropertiesFactory class that provides a method to read zookeeper.properties from the classpath (therefore we should add the /etc/kafka folder to it).
I also created my own Properties class, extending java.util.Properties, to add a few property getter methods, such as getting an int value and defaulting a property based on another property.
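The actual implementation is in the GitHub repo mentioned above, but as an illustration, such a class could look something like the sketch below; the method names here are my own, purely for this example.
public class Properties extends java.util.Properties {

  // Load a properties file, e.g. zookeeper.properties, from the classpath
  // (which is why the /etc/kafka folder should be on it).
  public static Properties loadFromClasspath(String resourceName) throws java.io.IOException {
    Properties properties = new Properties();
    try (java.io.InputStream is = Properties.class.getClassLoader().getResourceAsStream(resourceName)) {
      properties.load(is);
    }
    return properties;
  }

  // Get a property as an int, falling back to a default when it is absent.
  public int getIntProperty(String key, int defaultValue) {
    String value = getProperty(key);
    return (value == null) ? defaultValue : Integer.parseInt(value.trim());
  }

  // Get a property, defaulting to the value of another property when it is not set.
  public String getPropertyDefaultedBy(String key, String defaultingKey) {
    return getProperty(key, getProperty(defaultingKey));
  }
}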
Lastly, I created the ZooKeeperProperties bean to interpret the relevant ZooKeeper properties from the Properties object that has been read.
The relevant properties are:
Property (default) - Meaning
- dataDir (/tmp/zookeeper) - The location where ZooKeeper will store the in-memory database snapshots and, unless specified otherwise, the transaction log of updates to the database.
- dataLogDir (defaults to dataDir) - Directs the machine to write the transaction log to the dataLogDir rather than the dataDir.
- clientPort (2181) - The port to listen on for client connections; that is, the port that clients attempt to connect to.
- clientPortAddress (empty: every NIC in the server host) - The address (IPv4, IPv6 or hostname) to listen on for client connections; that is, the address that clients attempt to connect to.
- maxClientCnxns (0: disabled, since this is a non-production config) - Limits the number of concurrent connections (at the socket level) that a single client, identified by IP address, may make.
- tickTime (ZooKeeperServer.DEFAULT_TICK_TIME) - The length of a single tick, which is the basic time unit used by ZooKeeper, measured in milliseconds.
- minSessionTimeout (-1: disabled) - The minimum session timeout in milliseconds that the server will allow the client to negotiate; defaults to 2 times the tickTime.
- maxSessionTimeout (-1: disabled) - The maximum session timeout in milliseconds that the server will allow the client to negotiate; defaults to 20 times the tickTime.
Only the properties dataDir, clientPort and maxClientCnxns are set explicitly in the zookeeper.properties file. See the ZooKeeper Administration docs for more info (apparently ZooKeeper originated in the Hadoop project).
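As an illustration of how such a bean could apply these defaults, here is a minimal sketch; it is not necessarily the implementation in the repo, it assumes the getIntProperty() helper from the earlier Properties sketch, and the field and method names are mine.
import java.net.InetSocketAddress;
import org.apache.zookeeper.server.ZooKeeperServer;

public class ZooKeeperProperties {
  private final String dataDir;
  private final String dataLogDir;
  private final int clientPort;
  private final String clientPortAddress;
  private final int maxClientCnxns;
  private final int tickTime;
  private final int minSessionTimeout;
  private final int maxSessionTimeout;

  public ZooKeeperProperties(Properties props) {
    dataDir = props.getProperty("dataDir", "/tmp/zookeeper");
    // dataLogDir defaults to dataDir when not set explicitly.
    dataLogDir = props.getProperty("dataLogDir", dataDir);
    clientPort = props.getIntProperty("clientPort", 2181);
    // Empty means: listen on every NIC of the server host.
    clientPortAddress = props.getProperty("clientPortAddress", "");
    maxClientCnxns = props.getIntProperty("maxClientCnxns", 0);
    tickTime = props.getIntProperty("tickTime", ZooKeeperServer.DEFAULT_TICK_TIME);
    minSessionTimeout = props.getIntProperty("minSessionTimeout", -1);
    maxSessionTimeout = props.getIntProperty("maxSessionTimeout", -1);
  }

  // The ServerCnxnFactory needs an InetSocketAddress to listen on.
  public InetSocketAddress getClientPortAddress() {
    return clientPortAddress.isEmpty()
           ? new InetSocketAddress(clientPort)
           : new InetSocketAddress(clientPortAddress, clientPort);
  }

  public String getDataDir() { return dataDir; }
  public String getDataLogDir() { return dataLogDir; }
  public int getMaxClientCnxns() { return maxClientCnxns; }
  public int getTickTime() { return tickTime; }
  public int getMinSessionTimeout() { return minSessionTimeout; }
  public int getMaxSessionTimeout() { return maxSessionTimeout; }
}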
Run from Properties
The runFromProperties() method is the one that actually starts a ZooKeeperServer instance:
/**
 * Run from ZooKeeperProperties.
 * @param zkProperties ZooKeeperProperties to use.
 * @throws IOException
 */
public void runFromProperties(ZooKeeperProperties zkProperties) throws IOException {
  final String methodName = "runFromProperties";
  log.start(methodName);
  log.info(methodName, "Starting server");
  FileTxnSnapLog txnLog = null;
  try {
    // Note that this thread isn't going to be doing anything else,
    // so rather than spawning another thread, we will just call
    // run() in this thread.
    // create a file logger url from the command line args
    ZooKeeperServer zkServer = new ZooKeeperServer();
    txnLog = new FileTxnSnapLog(new File(zkProperties.getDataLogDir()), new File(zkProperties.getDataDir()));
    zkServer.setTxnLogFactory(txnLog);
    zkServer.setTickTime(zkProperties.getTickTime());
    zkServer.setMinSessionTimeout(zkProperties.getMinSessionTimeout());
    zkServer.setMaxSessionTimeout(zkProperties.getMaxSessionTimeout());
    setZooKeeperServer(zkServer);
    cnxnFactory = ServerCnxnFactory.createFactory();
    log.debug(methodName, "Create Server Connection Factory");
    log.debug(methodName, "Server Tick Time: " + zkServer.getTickTime());
    log.debug(methodName, "ClientPortAddress: " + zkProperties.getClientPortAddress());
    log.debug(methodName, "Max Client Connections: " + zkProperties.getMaxClientCnxns());
    cnxnFactory.configure(zkProperties.getClientPortAddress(), zkProperties.getMaxClientCnxns());
    log.debug(methodName, "Startup Server Connection Factory");
    cnxnFactory.startup(zkServer);
    cnxnFactory.join();
    if (zkServer.isRunning()) {
      zkServer.shutdown();
    }
  } catch (InterruptedException e) {
    // warn, but generally this is ok
    log.warn(methodName, "Server interrupted", e);
  } finally {
    if (txnLog != null) {
      txnLog.close();
    }
  }
  log.end(methodName);
}
Here you see that a ZooKeeperProperties object is passed in. A FileTxnSnapLog is initialized for the dataDir and dataLogDir. A ZooKeeperServer is instantiated, and the particular properties are set. Then a ServerCnxnFactory is created (as a class attribute, for later use). The connection factory is used to start up the ZooKeeperServer. Actually, at that point control is handed over to the ZooKeeperServer, so you want to have this done in a separate thread.
Observing the Observable
Now, you might think: what is it with the name ZooKeeperObserver? Earlier I named it EmbeddedZooKeeperServer, but I found that name long and not very nice. And I found it funny that Observer has the word Server in it.
As mentioned in the previous section, when the ConnectionFactory/ZooKeeperServer is started up, control is handed over: the method does not return until the ZooKeeperServer stops running.
I therefore want (as in many implementations) the ZooKeeperServer to run in a separate thread that I can control. That is, I want to be able to send a shutdown signal to it. For that I found the Observer pattern suitable. In this pattern the Observable, or Subject, maintains a list of Observers that can be notified about an update in the Observable. To do so, the Observable extends the java.util.Observable class, and the Observer implements the java.util.Observer and Runnable interfaces.
How does it work? Let's go through the applicable methods.
Start and Add a ZooKeeper
The Observable is implemented by ZooKeeperDriver. In it we'll find a method start():
public void start() {
  final String methodName = "start";
  log.start(methodName);
  addZooKeeper();
  log.end(methodName);
}
That's not too exciting, but it calls the method addZooKeeper():
public void addZooKeeper() {
  final String methodName = "addZooKeeper";
  log.start(methodName);
  try {
    ZooKeeperProperties zkProperties = PropertiesFactory.getZKProperties();
    ZooKeeperObserver zooKeeperServer = new ZooKeeperObserver(this, zkProperties);
    Thread newZooKeeperThread = new Thread(zooKeeperServer);
    zooKeeperServer.setMyThread(newZooKeeperThread);
    newZooKeeperThread.start();
  } catch (IOException e) {
    log.error(methodName, "ZooKeeper Failed", e);
  }
  log.end(methodName);
}
Here you see that the ZooKeeperProperties are fetched and a new ZooKeeperObserver is instantiated, using a reference to the ZooKeeperDriver object and the ZooKeeperProperties. Since the ZooKeeperObserver is a Runnable, we can hand it to a new Thread. That thread is also set on the ZooKeeperObserver, so that it has a hold of its own thread, should that come in handy.
And then the new thread is started.
Instantiate the ZooKeeperObserver
In the previous section we saw that the ZooKeeperObserver is instantiated using a reference to the ZooKeeperDriver object. Let's see what it looks like:
public ZooKeeperObserver(Observable zooKeeperDriver, ZooKeeperProperties zkProperties) {
  super();
  final String methodName = "ZooKeeperObserver(Observable, ZooKeeperProperties)";
  log.start(methodName);
  this.setZkProperties(zkProperties);
  if (zooKeeperDriver instanceof ZooKeeperDriver) {
    log.info(methodName, "Add observer " + this.getClass().getName() + " to observable " + zooKeeperDriver.getClass().getName());
    setZooKeeperDriver((ZooKeeperDriver) zooKeeperDriver);
    zooKeeperDriver.addObserver(this);
  }
  log.end(methodName);
}
The ZooKeeperProperties are set, and then it is checked whether the Observable that is passed in is indeed a ZooKeeperDriver. The ZooKeeperDriver is also set, and then the ZooKeeperObserver object is added as an Observer to the ZooKeeperDriver using the addObserver(this) method. This method is part of the java.util.Observable class that is extended; it adds the ZooKeeperObserver to a list that is used to send the update signal to every instance on it.
Run the ZooKeeperObserver
The ZooKeeperObserver is a Runnable, so the run() method is implemented:
public void run() {
  final String methodName = "run";
  log.start(methodName);
  try {
    runFromProperties(getZkProperties());
  } catch (IOException ioe) {
    log.error(methodName, "Run failed!", ioe);
  }
  log.end(methodName);
}
It calls runFromProperties(), which was explained earlier.
Shutdown
The ZooKeeperDriver has a shutdown() method:
public void shutdown() {
  final String methodName = "shutdown";
  log.start(methodName);
  setShutdownZooKeepers(true);
  log.info(methodName, "Notify Observers to shutdown!");
  this.setChanged();
  this.notifyObservers();
  log.end(methodName);
}
It sets the shutdownZooKeepers indicator to true. This attribute indicates what has been updated: in a more complex Observer pattern more kinds of updates can occur, so you need to indicate what drove the update.
The most interesting statement is the call to the notifyObservers() method. It will call the implemented update() method on every Observer in the list.
I implemented this earlier in another situation, a few years ago, and I reused it. But at first it did not work: I found that, apparently due to a change in Java 7 or 8, I had to add a call to the setChanged() method. The notification to the Observers only works after that call.
As said, notifyObservers() calls the update() method in the Observer:
public void update(Observable o, Object arg) {
  final String methodName = "update(Observable,Object)";
  log.start(methodName);
  log.info(methodName, getMyThread().getName() + " - Got status update from Observable!");
  ZooKeeperDriver zkDriver = getZooKeeperDriver();
  if (zkDriver.isShutdownZooKeepers()) {
    log.info(methodName, getMyThread().getName() + " - Apparently I've got to shutdown myself!");
    shutdown();
  } else {
    log.info(methodName, getMyThread().getName() + " - Don't know what to do with this status update!");
  }
  log.end(methodName);
}
This one actually checks in the ZooKeeperDriver whether the change is because of the shutdownZooKeepers indicator.
If so, it calls its own shutdown() method; if not, the update is ignored. The shutdown() method does the following:
final String methodName = "shutdown";
log.start(methodName);
log.info(methodName,"Let me shutdown "+myThread.getName());
ZooKeeperServer zkServer = getZooKeeperServer();
ServerCnxnFactory cnxnFactory = getCnxnFactory();
cnxnFactory.shutdown();
if (zkServer.isRunning()) {
zkServer.shutdown();
}
log.end(methodName);
}
It gets the connection factory and sends a shutdown() signal to it. If the ZooKeeperServer is still running (it shouldn't be), it gets a shutdown() signal as well.
Start and Shutdown
In the end you need to create an instance of the ZooKeeperDriver and save it in a static variable. Then you can call the start() method, and later get the object back from the static variable to call the shutdown() method.
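A minimal sketch of that idea could look as follows, assuming that a plain main(String[] args) entry point suffices for a Weblogic startup or shutdown class; the class names are hypothetical.
// ZooKeeperStartup.java
public class ZooKeeperStartup {
  // Keep the driver in a static variable, so the shutdown class can reach the same instance.
  private static ZooKeeperDriver zooKeeperDriver;

  public static ZooKeeperDriver getDriver() {
    return zooKeeperDriver;
  }

  public static void main(String[] args) {
    zooKeeperDriver = new ZooKeeperDriver();
    zooKeeperDriver.start();
  }
}

// ZooKeeperShutdown.java
public class ZooKeeperShutdown {
  public static void main(String[] args) {
    ZooKeeperDriver driver = ZooKeeperStartup.getDriver();
    if (driver != null) {
      driver.shutdown();
    }
  }
}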
Conclusion
This may look quite complex to you, just to start a server. But, again, I want to be able to embed the Kafka infrastructure in another system, in my situation Weblogic. I'll use this method to do the same for the Kafka servers; I'll write about that in a follow-up article. And then I'll create a set of startup and shutdown classes for Weblogic.
It was fun to implement the Observer pattern again. But when the notifyObservers() method did not work as expected at first and I searched for a solution, I found that java.util.Observable is deprecated in Java 9. It will still work, but apparently people found that it has its limitations, and better ways of implementing the pattern have been developed.
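For instance, if the deprecation ever becomes a problem, the same behaviour can be achieved without java.util.Observable by maintaining your own listener list; a minimal sketch (interface and class names are mine, just for illustration):
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// A small listener interface replacing the Observer role.
interface ShutdownListener {
  void onShutdown();
}

// The driver replaces the Observable role by keeping its own listener list.
class Driver {
  private final List<ShutdownListener> listeners = new CopyOnWriteArrayList<>();

  void addListener(ShutdownListener listener) {
    listeners.add(listener);
  }

  void shutdown() {
    // No setChanged() needed: simply notify every registered listener.
    for (ShutdownListener listener : listeners) {
      listener.onShutdown();
    }
  }
}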