Klein Denkraam

There is always more to learn

OWB runtime repository

Thu, 2011-05-12 07:25

I have been looking around the OWB runtime repository from time to time, mainly because the Control Center isn’t always the speedy friend you need when things get tough: it shows a lot of white screen a lot of the time while waiting for results. So I made myself a view on the runtime repository. I have been meaning to share it for some time, but did not get around to it until I recently saw some much needed and long overdue OWB 11gR2 documentation for the runtime repository. I have not yet checked whether I took any shortcuts through the model, but if checking leads to improvements, I will publish them here. So here it is.


CREATE OR REPLACE VIEW VW_RT_AUDIT_INFO
(EXECUTION_NAME, RETURN_RESULT, STARTTIME, ENDTIME, ELAPSE_TIME,
 ELAPSE_FORMAT, SELECTED, INSERTED, UPDATED, DELETED,
 DISCARDED, MERGED, CORRECTED, ERROR#, EXECUTION_AUDIT_STATUS,
 MESSAGE_SEVERITY, MESSAGE_TEXT, PARAMETER_NAME, VALUE, CREATION_DATE,
 OBJECT_NAME, OBJECT_LOCATION_NAME, TASK_NAME,
 TOP_LEVEL_EXECUTION_AUDIT_ID, EXECUTION_AUDIT_ID,
 PARENT_EXECUTION_AUDIT_ID)
AS
SELECT e.execution_name,
       e.return_result,
       e.created_on AS starttime,
       e.updated_on AS endtime,
       e.elapse_time,
       TO_CHAR (TRUNC (SYSDATE, 'DD') + e.elapse_time / (24 * 3600),
                'HH24:MI:SS') AS elapse_format,
       NVL (x.sel, 0) AS selected,
       NVL (x.ins, 0) AS inserted,
       NVL (x.upd, 0) AS updated,
       NVL (x.del, 0) AS deleted,
       NVL (x.dis, 0) AS discarded,
       NVL (x.mer, 0) AS merged,
       NVL (x.cor, 0) AS corrected,
       NVL (x.err, 0) AS error#,
       e.execution_audit_status,
       m.message_severity,
       m.message_text,
       p.parameter_name,
       p.value,
       m.created_on AS creation_date,
       e.object_name,
       e.object_location_name,
       e.task_name,
       e.top_level_execution_audit_id,
       e.execution_audit_id,
       e.parent_execution_audit_id
FROM   all_rt_audit_executions e
LEFT JOIN all_rt_audit_exec_messages m
       ON e.execution_audit_id = m.execution_audit_id
LEFT JOIN all_rt_audit_execution_params p
       ON e.execution_audit_id = p.execution_audit_id
       -- AND p.parameter_name LIKE '%SPEELR%'
      AND p.parameter_name NOT IN
             ('PROCEDURE_NAME', 'PURGE_GROUP', 'OPERATING_MODE',
              'MAX_NO_OF_ERRORS', 'AUDIT_LEVEL', 'BULK_SIZE',
              'COMMIT_FREQUENCY', 'ODB_STORE_UOID', 'PACKAGE_NAME')
LEFT JOIN
       (SELECT e.execution_audit_id,
               SUM (a.number_errors)            AS err,
               SUM (a.number_records_selected)  AS sel,
               SUM (a.number_records_inserted)  AS ins,
               SUM (a.number_records_updated)   AS upd,
               SUM (a.number_records_deleted)   AS del,
               SUM (a.number_records_discarded) AS dis,
               SUM (a.number_records_merged)    AS mer,
               SUM (a.number_records_corrected) AS cor
        FROM   all_rt_audit_executions e
        LEFT JOIN all_rt_audit_map_runs a
               ON e.execution_audit_id = a.execution_audit_id
        GROUP BY e.execution_audit_id) x
       ON e.execution_audit_id = x.execution_audit_id;




Note:

I have included the error messages for each execution. This means rows will be duplicated when an execution has more than one error message.

Note 2:

I excluded the ‘default’ parameters for each execution because they too would lead to duplicated rows, and most parameters hold default values anyway. This way only custom parameter values used during execution are shown.
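A typical way to use the view (just a sketch; the predicates and column list are an example, adjust them to taste) is to check the most recent executions and their row counts:

SELECT execution_name, return_result, starttime, elapse_format,
       inserted, updated, merged, error#
FROM   vw_rt_audit_info
WHERE  starttime > SYSDATE - 1
ORDER  BY starttime DESC;

Much faster than waiting for the Control Center to paint its white screen.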


2010 in review (the easy way…)

Mon, 2011-01-03 04:19

The stats helper monkeys at WordPress.com mulled over how this blog did in 2010, and here’s a high level summary of its overall blog health:

Healthy blog!

The Blog-Health-o-Meter™ reads Fresher than ever.

Crunchy numbers


A Boeing 747-400 passenger jet can hold 416 passengers. This blog was viewed about 8,100 times in 2010. That’s about 19 full 747s.

 

In 2010, there were 4 new posts, growing the total archive of this blog to 58 posts. There were 3 pictures uploaded, taking up a total of 4MB.

The busiest day of the year was May 28th with 72 views. The most popular post that day was MDL import/export.

Where did they come from?

The top referring sites in 2010 were itnewscast.com, google.com, google.co.in, Private networks, and decipherinfosys.wordpress.com.

Some visitors came searching, mostly for ventoux, oracle dedicated vs shared, mont ventoux, rpe-02062, and oracle shared server mode.

Attractions in 2010

These are the posts and pages that got the most views in 2010.

1. MDL import/export, May 2010
2. Just for myself (Shared server vs Dedicated), October 2007 (4 comments)
3. Change OWB location details, October 2009 (2 comments)
4. OWB Process Flows that keep saying they are running, April 2009 (8 comments)
5. Shrink OWB runtime, April 2008


OMBPlus and child types in a mapping or process flow

Thu, 2010-10-07 12:21

Some time ago I needed a script to get a hierarchical list of all mappings and process flows that were tied together. I had some problems finding out the types of the activities in a process flow, because if I encountered a Subprocess activity I wanted to recurse into that subprocess and get all its activities as well. I could not find any property that contained the needed information, but I did find that you can get a list of all subprocess activities or mapping activities in a process flow, like this:

OMBRETRIEVE PROCESS_FLOW '$p_procesFlow'  GET SUBPROCESS ACTIVITIES
OMBRETRIEVE PROCESS_FLOW '$p_procesFlow'  GET MAPPING ACTIVITIES

My immediate problem was solved. However, there are many more types that can be used, and this was surely not the most elegant way to do it. I had no idea how to do it better until I found an undocumented property here. Below is the procedure I created to get the type of an activity or operator.

proc get_typ {p_parentType p_parent p_childType p_child o_typ } {
   upvar $o_typ l_typ
   switch $p_childType {
      "OPERATOR" { set pos 6 }
      "ACTIVITY" { set pos 5 }
   }
   set l_typ [lindex \
                [split [OMBRETRIEVE $p_parentType '$p_parent' \
				                $p_childType '$p_child' \
                                GET PROPERTIES (STRONG_TYPE_NAME) ] '.' ] $pos ]
   switch [string toupper $l_typ] {
      "AGGREGATION"             { set l_typ AGGREGATOR }
      "ANYDATACAST"             { set l_typ ANYDATA_CAST }
      "VARIABLES"               { set l_typ CONSTANT }
      "USERTYPES"               { set l_typ CONSTRUCT_OBJECT }
      "PSEUDOCOLUMN"            { set l_typ DATA_GENERATOR }
      "DISTINCT"                { set l_typ DEDUPLICATOR }
      "EXTERNALTABLE"           { set l_typ EXTERNAL_TABLE }
      "FLATFILE"                { set l_typ FLAT_FILE }
      "MAPPINGINPUTPARAMETERS"  { set l_typ INPUT_PARAMETER }
      "JOIN"                    { set l_typ JOINER }
      "KEYLOOKUP"               { set l_typ KEY_LOOKUP }
      "MATERIALIZEDVIEW"        { set l_typ  MATERIALIZED_VIEW }
      "NAMEADDRESS"             { set l_typ NAME_AND_ADDRESS }
      "MAPPINGOUTPUTPARAMETERS" { set l_typ OUTPUT_PARAMETER }
      "SUBMAP"                  { set l_typ PLUGGABLE_MAPPING }
      "POSTMAPTRIGGER"          { set l_typ POSTMAPPING_PROCESS }
      "PREMAPTRIGGER"           { set l_typ PREMAPPING_PROCESS }
      "SETOPERATION"            { set l_typ SET_OPERATION }
      "ORDERBY"                 { set l_typ SORTER }
      "TABLEFUNCTION"           { set l_typ TABLE_FUNCTION }
      "TRANSFORMFUNCTION"       { set l_typ TRANSFORMATION }
      default                   { set l_typ [string toupper $l_typ ] }
   }
}

I created a temporary process flow called TMP and a likewise named mapping, and included every possible activity or operator type in them. That does not result in a valid object, but that was not the point of the exercise. As the procedure above shows, the type names retrieved with the STRONG_TYPE_NAME property do not exactly match the names used in the OMB language, so I had to add a translation switch to the procedure. Maybe a bit too straightforward, but it works.
Now I can find the type and use it to generate or alter all kinds of operators and activities.
Below is a simple example showing the possible types.

OMBCC '/<project>/<oracle_module>'
set map TMP
set ops [OMBRETRIEVE MAPPING '$map' GET OPERATORS]
foreach op $ops {
   get_typ MAPPING $map OPERATOR $op typ
   puts "Operator $op is of type $typ"
   puts "OMBRETRIEVE MAPPING '$map' GET $typ OPERATORS"
   OMBRETRIEVE MAPPING '$map' GET $typ OPERATORS
}

OMBCC '/<project>/<processflow_module>/<processflow_package>/'
set pf TMP
set acts [OMBRETRIEVE PROCESS_FLOW '$pf' GET ACTIVITIES]
foreach act $acts {
   get_typ PROCESS_FLOW $pf ACTIVITY $act typ
   puts "Activity $act is of type $typ"
   puts "OMBRETRIEVE PROCESS_FLOW '$pf' GET $typ ACTIVITIES"
   if {$typ == "START" } { puts "START activity is special"
   } else {
      OMBRETRIEVE PROCESS_FLOW '$pf' GET $typ ACTIVITIES
   }
}

And the output is below. I explicitly named the operators/activities after their type to make the translation easy to check.

Operator AGGREGATOR is of type AGGREGATOR
Operator ANYDATA_CAST is of type ANYDATA_CAST
Operator CONSTANT is of type CONSTANT
Operator CONSTRUCT_OBJECT is of type CONSTRUCT_OBJECT
Operator DATA_GENERATOR is of type DATA_GENERATOR
Operator DEDUPLICATOR is of type DEDUPLICATOR
Operator EXPAND_OBJECT is of type CONSTRUCT_OBJECT
Operator EXPRESSION is of type EXPRESSION
Operator EXTERNAL_TABLE is of type EXTERNAL_TABLE
Operator FILTER is of type FILTER
Operator FLAT_FILE is of type FLAT_FILE
Operator INPUT_PARAMETER is of type INPUT_PARAMETER
Operator JOINER is of type JOINER
Operator KEY_LOOKUP is of type KEY_LOOKUP
Operator LCRSPLITTER is of type LCRSPLITTER
Operator LCR_CAST is of type LCRCAST
Operator MATCHMERGE is of type MATCHMERGE
Operator MATERIALIZED_VIEW_1 is of type MATERIALIZED_VIEW
Operator NAME_AND_ADDRESS is of type NAME_AND_ADDRESS
Operator OUTPUT_PARAMETER is of type OUTPUT_PARAMETER
Operator PIVOT is of type PIVOT
Operator PLUGGABLE_MAPPING is of type PLUGGABLE_MAPPING
Operator POST_MAPPING_PROCESS is of type POSTMAPPING_PROCESS
Operator PRE_MAPPING_PROCESS is of type PREMAPPING_PROCESS
Operator SEQEUNCE is of type SEQUENCE
Operator SET_OPERATION is of type SET_OPERATION
Operator SORTER is of type SORTER
Operator SPLITTER is of type SPLITTER
Operator TABLE_ is of type TABLE
Operator TABLE_FUNCTION is of type TABLE_FUNCTION
Operator TRANSFORMATION is of type TRANSFORMATION
Operator UNPIVOT is of type PIVOT
Operator VARRAY_ITERATOR is of type CONSTRUCT_OBJECT
Operator VIEW_ is of type VIEW
Activity WHILE_LOOP is of type WHILE_LOOP
Activity EMAIL is of type EMAIL
Activity NOTIFICATION is of type NOTIFICATION
Activity END_SUCCESS is of type END_SUCCESS
Activity MAPPING is of type MAPPING
Activity ROUTE is of type ROUTE
Activity OR1 is of type OR
Activity TRANSFORMATION is of type TRANSFORMATION
Activity AND1 is of type AND
Activity END_WARNING is of type END_WARNING
Activity END_LOOP is of type END_LOOP
Activity WAIT is of type WAIT
Activity MANUAL is of type MANUAL
Activity USER_DEFINED is of type USER_DEFINED
Activity START1 is of type START
Activity ASSIGN is of type ASSIGN
Activity END_LOOP_1 is of type END_LOOP
Activity FORK is of type FORK
Activity SET_STATUS is of type SET_STATUS
Activity FILE_EXISTS is of type FILE_EXISTS
Activity END_LOOP_2 is of type END_LOOP
Activity FTP is of type FTP
Activity SQLPLUS is of type SQLPLUS
Activity END_ERROR is of type END_ERROR
Activity FOR_LOOP is of type FOR_LOOP
Activity DATA_AUDITOR_MONITOR is of type DATA_AUDITOR
Activity SUBPROCESS is of type SUBPROCESS

MDL import/export

Fri, 2010-05-28 03:58

Some time ago, when we moved from OWB 9 to 10, I noticed that the size of OWB exports into mdl files dramatically decreased. That was a good thing in that it made the mdl files easier to distribute, e.g. to production sites. It also meant you could no longer read (or edit) the contents of the mdl files. I always assumed the new mdl format was some kind of optimized binary format, but today I read this blog from the OWB guys. It turns out that the ‘new’ format is just a normal zip file containing 2 files, and that you can tell the export to use the ‘old’ text format. Text you can edit!

It could be a means to move or copy one or more mappings from one project to another. Not easy, as you must ‘hack’ the mdl file, but it can be done. Neat.


Changing the hostname when Oracle is already installed

Wed, 2010-04-07 06:51

Nothing new here, just a reminder for myself of where I can find a short, concise description of how to do this. I use it mainly when cloning VMware virtual machines; it saves the trouble of reinstalling Oracle.

Thanks must go to Scott van Vliet, the owner of the blog entry that explains it all.


Deploying several mappings using OMBPlus

Tue, 2010-02-23 10:39

Every now and then it is a pain to deploy mappings from the Control Center Manager, especially when you have a slow connection to your control center. Retrieving all mappings that can be deployed to a location can take some time (putting it very politely), and you cannot stop the control center from doing that; at least I have not found a way. Moreover, sometimes you have already assembled a list of mappings you want to deploy, in Excel or a plain text file: mappings you have moved to your acceptance or production environment and now want to deploy. You can do that using OMBPlus. And, as this is a scripting language, you can create a script that does it for all the mappings you want, in one go. Unfortunately, deploying is not really straightforward in OMBPlus, so I created a Tcl procedure that sort of simplifies the task. Here it is.

# File   : deploy_map.tcl
# Purpose: deploy MAPPING (parameter)
#          in each target module (parameter)
#          of a project (parameter)
# Author : Kadenza - Eric Buddelmeijer 200908
# Prereq.: a valid repository connection is presumed here

proc deploy_map {p_project p_module p_map p_cc_pwd} {
   OMBCC '/${p_project}'
   # cleanup
   set deplPlan [OMBLIST DEPLOYMENT_ACTION_PLANS]
   foreach deplAction $deplPlan {
      OMBDROP   DEPLOYMENT_ACTION_PLAN '$deplAction'
   }
   OMBCOMMIT
   OMBCONNECT CONTROL_CENTER USE '${p_cc_pwd}'
   set moduleList [OMBLIST ORACLE_MODULES '${p_module}.*' ]
   foreach  moduleName $moduleList {
      puts "--------------------- $moduleName -----------------"
      OMBCC '/${p_project}/$moduleName'
      # start of objecttype loop ------------------------------------------
      # order of objecttypes is relevant!!!
      set typeList [list MAPPING ]
      set N 1
      set plural "S"
      foreach objectType $typeList {
         set listType [concat $objectType$plural]
         set J 1
         set objectList [OMBLIST $listType '${p_map}.*']
         puts "... Checking ${listType}"
         if {[llength $objectList] > 0} {
            foreach objectName $objectList {
               set deployAction [OMBRETRIEVE $objectType \
                                 '$objectName' GET PROPERTIES (DEPLOYABLE)]

               if {$deployAction == "true"} {
                  set valid [OMBVALIDATE $objectType '$objectName']
                  if {$valid == "Valid." } {
                     switch $objectType {
                           "TABLE"    {set deployAction "REPLACE"}
                           "SEQUENCE" {set deployAction "REPLACE"}
                           default    {set deployAction "REPLACE" }
                     }
                     puts "...... Adding ${J}.$objectType $objectName"
                     if {$J == 1 } {
                            OMBCREATE TRANSIENT DEPLOYMENT_ACTION_PLAN \
                                      'GEN_DEPLOY_${objectType}' \
                                      ADD ACTION 'DEPLOY_${objectName}'\
                                      SET PROPERTIES (OPERATION) \
                                          VALUES ('$deployAction') \
                                      SET REFERENCE $objectType '$objectName'
                     } else {
                            OMBALTER DEPLOYMENT_ACTION_PLAN \
                                     'GEN_DEPLOY_${objectType}'\
                                     ADD ACTION 'DEPLOY_A_${objectName}' \
                                     SET PROPERTIES (OPERATION) \
                                         VALUES ('$deployAction') \
                                     SET REFERENCE $objectType '$objectName'
                     }
                     incr J
                  } else { puts "###### SKIPPING $valid $objectType \
                              $objectName" }
               } else { puts "...... Skipping explicit NOT deployable \
                           $objectType $objectName" }
            }
            #end of objectName loop ---------------------------
            puts "... Deploying ${listType}"
            set exedeplPlan [OMBLIST DEPLOYMENT_ACTION_PLANS]
            foreach exedeplAction $exedeplPlan {
               OMBDEPLOY DEPLOYMENT_ACTION_PLAN '$exedeplAction'
               OMBDROP   DEPLOYMENT_ACTION_PLAN '$exedeplAction'
            }
            incr N
         }
         # end of if llength objectList > 0
      }
      # end of object type loop
   }
   # end of module loop
}

And the procedure can be executed with:

deploy_map <project name> <module name> <map name> <control center password>

This can of course be repeated several times. The procedure appends a wildcard (.*) to the ‘module name’ and ‘map name’ parameters, so each is treated as ‘begins with’.

Have fun deploying.


OWB 11.1 on a 10.2 database with ‘Split Repositories’

Fri, 2009-11-13 11:42

I’m currently working for a customer that is using OWB 10.2.0.1 and a 10.2.0.4 database. All on Windows. Not the best environment in my humble opinion, but hey, it is a customer. They can decide that.

We were having some problems with OWB that had been solved in OWB 10.2.0.4, but also in 11.1.0.6. In other words, we had to migrate. Because Oracle 11g is the way to go (more business-like terms were of course used to persuade the business to accept this decision), we decided to skip OWB 10.2.0.4 and start using OWB 11.1.0.6. The DBAs were not ready to start using database 11g, so we also decided to stay on 10.2.0.4 with the database. This means you have to do some extra steps in your database to get OWB 11 running, perfectly documented in this part of the administration guide for OWB. Worked like a charm.

The difficulties came later on. We have an architecture that looks very much like the Split Repositories implementation in the same manual. In that architecture, according to the manual, there is only one control center service running, against the run-time environment. When I tried that, I could not deploy from my design repository to my run-time environment, and if I stopped the control center service against the run-time environment, I could not run any mappings.

So why not start two control center services? One in the design environment and one in the run-time environment. I tried that, but I ran into some obscure java errors that said ‘a file could not be opened because it is in use by another program’. Just that: a file. Not a name, not a directory, just a file. Period. Time to go file hunting.

I imagined it to be an OWB log file that was opened by the control center service. Both services were using the same OWB_HOME, so I went looking there. In <OWB_HOME>/owb/admin/log there were 2 files clearly related to the control center service. One was a log file (OWBSYS.LOG) that I traced to a setting in the run-time parameters table of the repository owner. The table is owned by OWBSYS and is called wb_rt_platformproperties. Look for the property_path values “property.RuntimePlatform.0.messages” and “property.RuntimePlatform.0.errors”; they both have the default property_value “%RTHOME%\owb\log\OWBSYS.log”. I changed these parameters so both control center services would use a different log file. And indeed they did, if I started one service, stopped it, and then started the other. But if I started both services I still got the ‘file already in use’ message. I left the setting in place, because I like different log files for different services, and went hunting further.
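The change itself boils down to a simple update of that table. A sketch of what I ran, connected as the repository owner; the new file name (OWBSYS_DESIGN.log) is of course your own choice:

UPDATE wb_rt_platformproperties
SET    property_value = '%RTHOME%\owb\log\OWBSYS_DESIGN.log'
WHERE  property_path IN ('property.RuntimePlatform.0.messages',
                         'property.RuntimePlatform.0.errors');
COMMIT;

Repeat this in the other repository with a different file name, so the two services no longer share a log file.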

The next log file is created when starting the service with <OWB_HOME>\owb\bin\win32\run_service.bat (or the shortcut to it in the programs menu -> Oracle -> Warehouse Builder -> Administration -> Start Control Center Service). In this bat-file the output of the bat-file itself is sent to a log file using redirection (the >run_service.log at the end of the command). I first started one control center service then changed the bat-file so it would log to a different file and then started the second control center service. And lo and behold, I got me two running control center services.

Eureka!

I then had to do some fiddling with the bat-file, because I did not want to change the log file name every time I had to start the services. In Unix or Linux this is easy: you just add `date “+%Y%d%m_%H%M%S”` to the name of the file and you get a unique file name every time you start. On Windows this is not so easy. I googled around a bit and found some rather complex bat-files that seemed to work when started from the command line, but behaved differently when using the shortcut. It can probably be solved by someone more savvy than me in the wonders of cmd bat-files, but I decided to take the easy way out: let the bat file count the log files that are already present, add 1 to that count, and append the resulting number to the file name. Now my control center services start every time! Even with split repositories.

This is how the bat file now looks (the changes are the log-counting block just before the javaw.exe line, and the redirection target at the end of that line):

@echo off
set STARTUP_TYPE=%1
shift
set NODEID=%1
shift
set RTHOME=%1
shift
set RTUSER=%1
shift
set HOST=%1
shift
set PORT=%1
shift
set SERVICE=%1
shift
pushd %RTHOME%\owb\bin\win32\
set JVM_OPTIONS=-Xmx768M
set JAVAPATH=%RTHOME%\jdk
set OWB_HOME=%RTHOME%
set JDK_HOME=%JAVAPATH%
set ORA_HOME=%RTHOME%
set OEM_HOME=%RTHOME%
set IAS_HOME=%RTHOME%
set ORACLE_HOME=%RTHOME%
set PATH=%RTHOME%\bin;%JAVAPATH%\jre\bin\client;%JAVAPATH%\jre\bin;%RTHOME%\owb\bin\admin
rem if exist ..\..\..\owb\lib\int\rtpplatform.jar goto check_common
rem echo cannot find rtpplatform.jar, exiting
rem goto exit
rem :check_common
rem if exist ..\..\..\owb\lib\int\rtpcommon.jar goto run_service
rem echo cannot find rtpcommon.jar, exiting
rem goto exit
rem :run_service
:: modification to run the control center service for 2 databases on the same machine
set tel=0
for /f "tokens=*" %%a in ('dir %RTHOME%\owb\log\run_service*.log /B/A-D') do set /a tel+=1
set /a tel+=1
SET RUNSRVCLOG=%RTHOME%\owb\log\run_service_%tel%.log
%JAVAPATH%\jre\bin\javaw.exe %JVM_OPTIONS% -DORACLE_HOME="%RTHOME%" -DOCM_HOME="%RTHOME%" -DOCM_ORACLE_HOME="%RTHOME%" -classpath %RTHOME%\owb\bin\admin\;%RTHOME%\owb\bin\admin\launcher.jar -DOWB_HOME=%OWB_HOME% -DJDK_HOME=%JDK_HOME% -DORA_HOME=%ORA_HOME% -DOEM_HOME=%OEM_HOME% -DIAS_HOME=%IAS_HOME% Launcher %RTHOME%\owb\bin\admin\owb.classpath oracle.wh.runtime.platform.service.Service %STARTUP_TYPE% %NODEID% %RTUSER% %HOST% %PORT% %SERVICE% > %RUNSRVCLOG%
:exit
popd

This is how the bat file used to look:

@echo off
set STARTUP_TYPE=%1
shift
set NODEID=%1
shift
set RTHOME=%1
shift
set RTUSER=%1
shift
set HOST=%1
shift
set PORT=%1
shift
set SERVICE=%1
shift
pushd %RTHOME%\owb\bin\win32\
set JVM_OPTIONS=-Xmx768M
set JAVAPATH=%RTHOME%\jdk
set OWB_HOME=%RTHOME%
set JDK_HOME=%JAVAPATH%
set ORA_HOME=%RTHOME%
set OEM_HOME=%RTHOME%
set IAS_HOME=%RTHOME%
set ORACLE_HOME=%RTHOME%
set PATH=%RTHOME%\bin;%JAVAPATH%\jre\bin\client;%JAVAPATH%\jre\bin;%RTHOME%\owb\bin\admin
rem if exist ..\..\..\owb\lib\int\rtpplatform.jar goto check_common
rem echo cannot find rtpplatform.jar, exiting
rem goto exit
rem :check_common
rem if exist ..\..\..\owb\lib\int\rtpcommon.jar goto run_service
rem echo cannot find rtpcommon.jar, exiting
rem goto exit
rem :run_service
%JAVAPATH%\jre\bin\javaw.exe %JVM_OPTIONS% -DORACLE_HOME="%RTHOME%" -DOCM_HOME="%RTHOME%" -DOCM_ORACLE_HOME="%RTHOME%" -classpath %RTHOME%\owb\bin\admin\;%RTHOME%\owb\bin\admin\launcher.jar -DOWB_HOME=%OWB_HOME% -DJDK_HOME=%JDK_HOME% -DORA_HOME=%ORA_HOME% -DOEM_HOME=%OEM_HOME% -DIAS_HOME=%IAS_HOME% Launcher %RTHOME%\owb\bin\admin\owb.classpath oracle.wh.runtime.platform.service.Service %STARTUP_TYPE% %NODEID% %RTUSER% %HOST% %PORT% %SERVICE% > %RTHOME%\owb\log\run_service.log
:exit
popd

Change OWB location details

Fri, 2009-10-30 07:33

If you have used your locations to deploy objects to them (who doesn’t?), you cannot change the connection details easily: in the Design Center they are greyed out. The only way to change them is to start the control center, unregister the location, re-register it with the new details, and deploy all objects again to regain the right deployment status. The last step is something you cannot always do, because it would mean e.g. dropping tables that already contain data you do not want to lose. Not an ideal situation.
Today I was reading this good post about upgrading to 11gR2 (although I’m currently upgrading to OWB 11gR1, as 11gR2 is not yet available on Windows). Stewart Bryson had some trouble because during the upgrade his connection details were automagically updated. He first described how to change the connection details in the ‘old-fashioned’ way I just mentioned. But more importantly, he described a second possibility I did not know of: it is possible to change the details using the OWB Repository Browser! I do not start the Repository Browser very often because it tends to slow down once you have had your share of deploying and executing; a simple view suffices most of the time (I’ll reserve that one for a later post).
But it meant I did not see all its functionality, alas. It could have saved me some time.
Anyway, at least in 11gR1 (I’ll look into 10gR2 shortly) you can change host, port, SID and password from a convenient browser screen, as presented below. Nice.

BTW, it works for File locations too.
BTW2, the link to this screen is perfectly hidden under the link called ‘Unknown’ at the end of the line for each location.
BTW3, it is available in OWB 10gR2 as well.

[screenshot: change_owb_loc_details]


One more date trick

Mon, 2009-06-29 04:11

Tyler Muth has a useful addition to the date functions I have published here before.

If you ever want to know how long ago a date is and you want to display it in ‘human readable’ format you (and I) could use his function.

Like this:

select date_text_format(sysdate - 3/86400) the_date from dual;
select date_text_format(sysdate - 5/1440) the_date from dual;
select date_text_format(sysdate - 1/24) the_date from dual;
select date_text_format(sysdate - 3.141549) the_date from dual;
select date_text_format(sysdate - 15) the_date from dual;
select date_text_format(sysdate - 120) the_date from dual;
select date_text_format(sysdate - 365) the_date from dual;
--------------------------------------------------------------------
3 seconds ago
5 minutes ago
1 hour ago
3 days ago
2 weeks ago
4 months ago
1 year ago

One more for the toolbox.


(Integrity) Constraints in a data warehouse

Wed, 2009-06-24 02:51

In data warehouse land it is not very common to see constraints in the database. I never felt very comfortable with that, but until now I did not get around to analysing why. That changed when I read this article by Tom Kyte. In it, Tom Kyte shows that the CBO (Cost Based Optimizer) can profit from the information derived from the presence of constraints by generating better query plans; ‘better’ in this case means ‘producing result sets faster’. The examples in the article are not exactly ‘real world’ data warehouse examples, but following Tom Kyte’s line of reasoning I do agree that constraints are capable of improving query performance.

The reasons for not having constraints in a data warehouse are along the lines of ‘I have checked the integrity when I did my ETL, so why would I need constraints to confirm that? And besides, constraints would only delay my ETL because they have to be checked before they are really enabled.’ I see a couple of flaws in this reasoning:

  • I suspect that most constraints in a data warehouse could not actually be enabled if you tried, because some data would violate them. The quality of the ETL might be good, but is it just as good as a constraint would be? I think not.
  • Enabling constraints takes time, but how often do you have to check them? Only when doing the ETL, of course. I hope that in your DWH, ETL runs during only a small part of the time the DWH is being used; otherwise your DWH has a problem. The rest of the time your DWH is used for querying, and Tom Kyte just showed that querying can be sped up by applying constraints.

Summing up my pros and cons of applying constraints.

Pro:

  • it will improve the data quality of the DWH
  • it can speed up the queries in your DWH (querying it is the purpose of your DWH anyway)

Con:

  • it will take more time to do your ETL (which is only a means to create your DWH)

My conclusion is that I will try to incorporate as many constraints as possible in my next DWH. It also means I will have to be smart enough to enable the constraints at just the right moment during my ETL to keep loading performance acceptable.
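There is also a middle road I want to look at (my own sketch, not from Tom Kyte’s article): Oracle lets you declare a constraint without validating or enforcing it, using RELY with DISABLE NOVALIDATE, so the optimizer can still trust the relationship (when QUERY_REWRITE_INTEGRITY is set to TRUSTED) while the ETL pays no checking cost. Table and column names below are hypothetical:

-- Declare the relationship for the optimizer without validating
-- (or enforcing) it; the ETL is trusted to guarantee integrity.
ALTER TABLE sales_fact
  ADD CONSTRAINT fk_sales_customer
      FOREIGN KEY (customer_id)
      REFERENCES customer_dim (customer_id)
      RELY DISABLE NOVALIDATE;

Of course this trades away the data-quality benefit for loading speed, so it only makes sense where the ETL check really is watertight.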