
BI & Warehousing

Rittman Mead Sponsors Georgia Oracle Users Group’s Tech Day

Rittman Mead Consulting - Mon, 2016-02-29 09:00

As part of Rittman Mead’s continuing support and participation in the Oracle community, we are proudly sponsoring this year’s Georgia Oracle Users Group’s Tech Day on Thursday, March 3, 2016, in Downtown Atlanta at the Executive Conference Center.

Rittman Mead’s own Consulting Manager, Andy Rocha, will be speaking at this year’s event. Andy will be presenting on how we created a game-tracking page in OBIEE for the 2014 World Cup. He will break down how we created the dashboard to track real-time scores, stats, match events, and player and team information.

We will also have a booth set up, so if you are in the area, come down to see us and get a demo of some of our exciting new products and services.

To learn more about GaOUG and their events, visit their website here.

The post Rittman Mead Sponsors Georgia Oracle Users Group’s Tech Day appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

OBIEE Performance – Why Metrics Matter (and…Announcing obi-metrics-agent v2!)

Rittman Mead Consulting - Fri, 2016-02-26 08:32

One of the first steps to improve OBIEE performance is to determine why it is slow. That may sound obvious—can't fix it if you don't know what you're fixing, right? Unfortunately, the "Drunk Man anti-method", in which we merrily stumble from one change to another, maybe breaking things along the way and certainly having a headache at the end of it, is far too prevalent. This comes about partly through unawareness of a better method to follow, and partly through tuning documents comprising reams of configuration settings to "tune" and fiddle with, without really knowing why, or how to prove whether they actually fixed anything.

Determining the cause of performance problems is often a case of working out what it’s not just as much as what it is. This is for two important reasons. Firstly, we begin to narrow down the area of focus and analysis. Secondly, we know what to leave alone. If we can prove that, for example, the database is running the query behind a report quickly, then there is no point “tuning” the database, because the problem doesn’t lie there. Similarly, if we can see that a report taking 60 seconds in total to run spends 59 seconds of that in the database, fiddling with Java Heap Size settings on OBIEE is going to at the very, very most reduce our total runtime to…59 seconds! This kind of time profiling is important to do, and something that we produce automatically in our Performance Analytics Report:

(Image: response time profile from the Performance Analytics Report)

So, how do we pinpoint what is, or isn’t, going wrong? We need data, and specifically, we need metrics. We need log files, too, maybe for the real nitty-gritty of explain plans, but a huge amount can be understood about a system by looking at the metrics available.

Any modern operating system, from Windows to Linux, AIX to Solaris, will have copious utilities that expose important metrics such as CPU usage, disk throughput, and so on. These can often be of great assistance in diagnosing performance problems.

OBIEE DMS Metrics

When it comes to OBIEE itself, we are spoilt by the wealth of performance counters that, since 11g (and still in 12c), have been exposed through the Dynamic Monitoring System (DMS). They were even there in 10g too, but accessed through JMX. These metrics give us information ranging from things like the number of logged-in users, through how many connections are open to a given database, down to real low-level internals like how many threads are in use for handling LDAP lookups. Crucially, there are also metrics showing current and peak levels of queueing within the various internal systems in OBIEE, which is where DMS becomes particularly important.

By being able to prove that OBIEE has, for example, run out of available connections to the database, we can confidently state that by changing a given configuration parameter we will alleviate a bottleneck. Not only that, but we can monitor and determine how many connections we really do need at a given workload level. The chart below illustrates this. The capacity of the connection pool is plotted against the number of busy connections. As the number of active sessions increases so does the pressure on the connection pool, until it hits capacity at which point queueing starts—which now means queries are waiting for a connection to the database before they can even begin to execute (and it’s at this point we’d expect to see response times suffer).

So this is the kind of valuable information that is just not available anywhere other than the DMS metrics, and you can see from the above illustration just how useful it is. To access DMS metrics in OBIEE 11g and 12c, you have several options available out of the box, including Fusion Middleware Control, the DMS Spy servlet, and WLST.

Some of these are useful for programmatically scraping the data, others for interactively checking values at a point in time.
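For scripted access, here is a minimal WLST (Jython) sketch. It assumes the DMS custom WLST commands (displayMetricTables and related) documented for Fusion Middleware are available; the metric table names shown are examples rather than a definitive list, and the connection URL and credentials are placeholders.

# dms_dump.py -- run with $MW_HOME/oracle_common/common/bin/wlst.sh dms_dump.py
# Connection details are placeholders for your own environment.
connect('weblogic', 'Password01', 't3://localhost:7001')

# Print a couple of OBIEE DMS metric tables to stdout. The table names are
# examples; displayMetricTableNames() should list what your system exposes.
displayMetricTables('Oracle_BI_General', 'Oracle_BI_DB_Connection_Pool')

disconnect()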

obi-metrics-agent – v2

At Rittman Mead, we always recommend collecting and storing DMS metrics (alongside others, including OS) all the time—not just if you find yourself with performance problems. That way you can compare before and after states, you can track historical trends—and you’re all set to hit the ground running with your diagnostics when (if) you do hit performance problems.

You can capture DMS metrics with the BI Management Pack in Enterprise Manager, you can write something yourself, or you can take advantage of an open-source tool from Rittman Mead, obi-metrics-agent.

I wrote about obi-metrics-agent originally when we first open-sourced it almost two years ago. The principle in version 2 is still the same; we've just rewritten it in Jython so as to remove the need for any dependencies such as Python and associated libraries. We've also added native InfluxDB output, as well as retaining the option to send data in the original carbon/graphite protocol.

You can run obi-metrics-agent and just write the DMS data to CSV, but our recommendation is always to persist it straight to a time series data store such as InfluxDB. Once you’ve collected the data you can analyse and monitor it with several tools, our favourite being Grafana (read more about this here).
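As an illustration of the kind of analysis this enables, here is a small Python sketch that queries DMS data already stored in InfluxDB using the influxdb client library. The database name, measurement name, and field name are assumptions about how the collection was configured, not fixed conventions.

# Query peak connection pool usage collected into InfluxDB (names are assumptions).
from influxdb import InfluxDBClient

client = InfluxDBClient(host='localhost', port=8086, database='obi')
result = client.query(
    'SELECT max("value") FROM "Oracle_BI_DB_Connection_Pool" '
    'WHERE time > now() - 1h GROUP BY time(5m)'
)
for point in result.get_points():
    print('%s %s' % (point['time'], point['max']))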

As part of our Performance Analytics Service we've built a set of Performance Analytics Dashboards, making available a full-stack view of OBIEE metrics (including DMS, OS, and even Oracle ASH data), as demonstrated in the accompanying video.

If you’d like to find out more about these and the Performance Analytics service offered by Rittman Mead, please get in touch. You can download obi-metrics-agent itself freely from our github repository.

The post OBIEE Performance – Why Metrics Matter (and…Announcing obi-metrics-agent v2!) appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Becky’s BI Apps Corner: Incrementals and Future dated Employee records

Rittman Mead Consulting - Wed, 2016-02-24 23:00

During the last few posts, we have delved into a few of the many interesting aspects of a BI Apps installation. Today I wanted to change gears a bit and talk about what starts to happen when you are past installation and configuration and begin running load plans. On a client project, I recently worked through a unique constraint error on W_EMPLOYEE_D that I found really interesting, related to how the incremental logic works in the knowledge module (KM). Before I can really get into the workaround, we need to understand how incremental loads work in general for BI Apps.

High Level Overview

In the initial run, the load will grab a full set of data, i.e. all data from the source system, based on the data load parameters set during configuration. The same load plan will be used to load data incrementally, picking up only data that has changed since the most recent load plan has completed (Last Extract Date). The pre-built mappings have incremental change capture built into the knowledge module logic. When a load runs, it will extract records that have changed or been created since the Last Extract Date. The load plan determines which rows to extract by using the formula Source Last Updated Date >= (Last Extract Date – Prune Days).
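As a purely conceptual sketch of that formula (not BI Apps code), the extract window arithmetic looks like this in Python; the field names are illustrative.

# Conceptual sketch of the incremental extract filter: keep only rows whose
# source last-updated date falls on or after (Last Extract Date - Prune Days).
from datetime import datetime, timedelta

def rows_to_extract(source_rows, last_extract_date, prune_days=30):
    threshold = last_extract_date - timedelta(days=prune_days)
    return [row for row in source_rows
            if row['last_updated_date'] >= threshold]

# Example: only the first row falls inside the extract window.
rows = [
    {'id': 1, 'last_updated_date': datetime(2016, 2, 20)},
    {'id': 2, 'last_updated_date': datetime(2015, 12, 1)},
]
print(rows_to_extract(rows, last_extract_date=datetime(2016, 2, 23)))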

In the weeds

Is it an incremental load? How does that get decided? Actually, that isn’t decided at the load plan level. Each individual package (run as a scenario) starts with a step that refreshes a variable called #IS_INCREMENTAL.

This variable’s refresh logic, shown in the below screenshot, will determine if this package previously completed successfully. After every successful completion an entry gets made into W_ETL_LOAD_DATES with the package name and date timestamp, amongst other audit information.
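The refresh query itself isn't reproduced here, but conceptually the variable resolves to 'Y' only when a prior successful entry exists in W_ETL_LOAD_DATES for the package. The sketch below is illustrative only; the column names are assumptions, not the actual ODI refresh SQL.

# Conceptual sketch only: #IS_INCREMENTAL is 'Y' when the package has a prior
# successful entry in W_ETL_LOAD_DATES. Column names here are assumptions.
def is_incremental(cursor, package_name):
    cursor.execute(
        "SELECT COUNT(*) FROM W_ETL_LOAD_DATES WHERE PACKAGE_NAME = :pkg",
        {'pkg': package_name},
    )
    return 'Y' if cursor.fetchone()[0] > 0 else 'N'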

So we now have a scenario running with #IS_INCREMENTAL set to 'Y'. What does the Knowledge Module (KM) do? Incremental runs normally have steps to load an I$ table (flow table) from the source logic and then update the records in the target table based on the DETECTION_STRATEGY option in the KM. For Fact table loads, the option can accept the following possible values:

  • OUTER: Outer join to target table when populating flow table in order to determine insert/update/useless records
  • NOT_EXISTS: NOT EXISTS clause is used when populating flow table in order to exclude records, which identically exist in target.
  • POST_FLOW: all records from source are loaded into flow table. After that an update statement is used to flag all rows in flow table, which identically exist in target.
  • NONE: all records from source are loaded into flow table. All target records are updated even when a target record is identical to flow table record.

In most cases, the OUTER option is used for facts, which updates records based on primary keys (PKs). Incremental decisions are based on the values of the system date columns, such as CHANGED_ON_DT, AUX1_CHANGED_ON_DT, AUX2_CHANGED_ON_DT, AUX3_CHANGED_ON_DT and AUX4_CHANGED_ON_DT, populated from the source. This performs better than the NOT_EXISTS and POST_FLOW options, which compare each and every column to identify the records already present.

For Slowly Changing Dimensions (like W_EMPLOYEE_D), the DETECTION_STRATEGY option can take the following possible values:

  • MINUS: MINUS clause is used when populating flow table in order to exclude records, which identically exist in target.
  • NOT_EXISTS: NOT EXISTS clause is used when populating flow table in order to exclude records, which identically exist in target.

The default option is NOT_EXISTS, and incremental decisions are based on PKs and the date columns.

Future dated rows

Imagine now that during a full load, all records from the source tables for EMPLOYEES are brought forward into the data warehouse table W_EMPLOYEE_D. One of those records is an entry with an effective start date two weeks in the future. For W_EMPLOYEE_D, one of the columns in the primary key is the effective start date. Fast forward two weeks to the date when the future dated row's effective start date is the current date. During the incremental load on that date, the incremental logic for this one record compares the primary keys and all of the change indicator columns, and sees that the effective start date is greater than the last extract date from the previous night. The incremental comparison incorrectly determines that this is a record that needs to be added to the target table, even though the record is already in W_EMPLOYEE_D. Now we have an ERROR! The familiar unique constraint violation on the _U1 unique index rears its ugly head. On top of that, troubleshooting the duplicate does not turn up any duplicate records in the usual places (I$, DS, source tables, nada!). Isolating the two identical records and tracking them back to the source tables, however, there is just the one record. The only clue is that the effective start date is today's date. After a second occurrence, and some back and forth with Oracle on an SR, a workaround is now available.

Workaround

Step 1. Remove any Future dated rows in W_EMPLOYEE_D

Step 2. Add a filter on the interface to prevent future dated rows from loading into W_EMPLOYEE_DS until their effective start date is <= the current date.

At our client, this mapping continues to run without any additional errors. The steps here are most likely version specific, and this issue is a known bug to Oracle, so please don't hesitate to open an SR if you are hitting this specific issue, as a quick turnaround is very likely.

There are some other odds and ends about how incremental load plans work, and I plan to gather them up into another post in the coming weeks. If you want to learn more of the ins and outs of incrementals, join me for the upcoming remote ODI for BI Apps course on March 14th-16th. We have only a few spots left, so sign up here.

The post Becky’s BI Apps Corner: Incrementals and Future dated Employee records appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Data Integration Tips: Oracle Data Integrator – Quotes in Variables

Rittman Mead Consulting - Sun, 2016-02-21 20:37

It's Sunday night, and we have just enough time for another quick Data Integration Tip from Rittman Mead. Another recent, yet somewhat trivial, challenge I ran into this week was passing quotation marks through to a Scenario as part of an ODI 11g Variable (this is also applicable to ODI 12c). Let's get right into the example.

To set the stage, I'm working on a project that includes integration with Hyperion Planning. This specific task is a command line call to CalcMgrCmdLineLauncher.cmd to execute a specific business rule against the data in the Planning application. In this case, our business rule name contains spaces and, for various reasons, the name cannot be changed to remove them. The ODI 11g project is set up to run many of these Planning business rules by calling a generic Scenario, based on a Package, passing variables to make it dynamic.

The command for the OdiStartScen ODI Tool in the calling Package looks like this:

OdiStartScen "-SCEN_NAME=UTIL_EXECUTE_CALCMGR" "-SCEN_VERSION=001" "-SYNC_MODE=1" "-EPM.PV_PLANNING_CALCMGR_RULE=Revenue and Burden All Contracts"

As you can see, there are double quotes around each of the parameters and their values being passed into the ODI Tool, including the Scenario name and version, the synchronous flag, and additional variables such as the PV_PLANNING_CALCMGR_RULE. What I found was that the Scenario had issues accepting the ODI variable value with spaces.

To work around this issue, I simply escaped the double quotes around the variable value containing spaces. The doubled double quotes allow the " character to be passed through as part of the variable's string value.

OdiStartScen "-SCEN_NAME=UTIL_EXECUTE_CALCMGR" "-SCEN_VERSION=001" "-SYNC_MODE=1" "-EPM.PV_PLANNING_CALCMGR_RULE=""Revenue and Burden All Contracts"""

Now, the Scenario UTIL_EXECUTE_CALCMGR can accept the variable value with spaces, and the necessary double quotes, and pass it on through to the CalcMgrCmdLineLauncher.cmd command.

So there you have it, another Data Integration Tip from the folks at Rittman Mead. Simple as it was, I hope you find it useful! Check back regularly for more DI Tips and have a great week!

 

The post Data Integration Tips: Oracle Data Integrator – Quotes in Variables appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Introduction to Oracle R Training

Rittman Mead Consulting - Thu, 2016-02-18 04:00
Rittman Mead is thrilled to announce our new Introduction to Oracle R courses!

With the launch of 12c comes an exciting extension to OBIEE’s analytical capabilities—embedded R execution. Oracle has made several investments in the R language and its related technologies (Oracle R, ROracle, ORAAH, and Oracle R Enterprise). Based on our experience deploying advanced analytics solutions from within business intelligence departments, we view this simplified pairing of OBIEE and R as a great way forward in delivering advanced analytics across organizations.


But the world of R is a big one, with hundreds of open-source R packages and countless training avenues. With so many confusing options on the market, we wanted to simplify things for you. It can be difficult to know where to start, so we’ve designed an R course to help you bridge the gap between existing BI/DW skills and the new skills required to confidently derive insights with R. We want to help you look beyond the dashboard and delve into novel analytical techniques.

The Advanced Analytics course dives deeply into the analytical capabilities of 12c and R. We’ve designed the training so that prior knowledge of R isn’t assumed. The course builds on existing SQL and data visualization skills—covering R use cases from predictive analytics and time series forecasting to natural language processing and many others.


We want to share with you our vision of R-enhanced business intelligence that provides more insights, predictions, and actionable discoveries, all while scaling effortlessly with your data. With our training, you’ll learn to expand your analytic skills and become an invaluable asset to your company’s BI operation.

We are excited about what R brings to the world of BI and can’t wait to share our knowledge.

We will follow up this blog post with one about the business value of our new Introduction to Oracle R course and another post about the more technical aspects of the course.

The post Introduction to Oracle R Training appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Data Integration Tips: ODI 12c Repository Query – Find the Mapping Target Table

Rittman Mead Consulting - Mon, 2016-02-15 04:00

Well, it’s been awhile since I’ve posted one of our Rittman Mead Data Integration Tips, so I thought a recent challenge might be the next great candidate. I was working through the Kimball ETL Subsystems and the Error Event Schema, Subsystem 5 if you’re familiar with the methodology, and attempting to build the schema from Oracle Data Integrator 12c (ODI 12c) metadata tables. If you caught my presentation “A Walk Through the Kimball ETL Subsystems with Oracle Data Integration” at Oracle OpenWorld 2015, then you’ll know a bit more about where I’m coming from. You can still catch my presentation on this topic at both IOUG Collaborate16 and ODTUG KScope16 (being invited to speak at each of these events is always an honor). Now this Data Integration Tip isn’t solely related to the Kimball ETL Subsystems, though the solution did prove rather useful for my presentation (and upcoming article in the IOUG Select Journal). It’s actually an interesting twist in how Oracle Data Integrator mapping metadata is stored differently between the ODI 11g and ODI 12c repositories.

The challenge is to find the target table, or Datastore, for a given Mapping in ODI 12c, or Interface in ODI 11g. When I say "find", I mean query the ODI work repository tables and return the target table, or tables in 12c, for a given Mapping. Luckily, I'm equipped with some guidance from our friends at My Oracle Support. If you look at support document Oracle Data Integrator 11g and 12c Repository Description (Doc ID 1903225.1) you'll find the data dictionaries for both ODI 11g (11.1.1.6.0+) and ODI 12c (12.1.2, 12.1.3, & 12.2.1). These are invaluable resources from Oracle, though they may be works in progress and somewhat incomplete!

Let's take a look first at ODI 11g and how simple things used to be. Back when Interfaces were the mechanism for extracting, transforming, and loading, we were allowed only a single target Datastore. Ahh, those were the good ol' days! Digging into the repository, we really only had one place to look for the target table: SNP_POP. This table in the ODI 11g repository, SNP_POP (which essentially stands for Synopsis - Populate), contains a column I_TABLE. This identifying column represents the target table for that particular Interface. Here's a query that ties it all together.

select
    p.pop_name interface_name,
    t.table_name target_table,
    m.cod_mod model_code
from snp_pop p inner join snp_table t on p.i_table = t.i_table
    inner join snp_model m on t.i_mod = m.i_mod;

As you can see, the key to capturing the target table for an Interface is simply in the SNP_POP.I_TABLE column. Because there is only one target, we can easily figure it out.

(Image: Interface target table query results)

Now, ODI 12c is where the real challenge lies. As you may know, with the move from Interfaces in 11g to flow based Mappings in 12c, we were allowed to do new and exciting things, such as load multiple target tables from a single Mapping. We may also have a case where a Datastore component maps to a Filter component, which then maps to another Datastore component, etc. As you can see in the image below, we can have lots of tables, and lots of tables that may be sources or targets, but we’re only interested in the final target table (or tables, for that matter!).

(Image: an ODI 12c mapping with multiple components)

Ok, so let’s dig into the Work Repository now. It seems an ODI Mapping can’t be that much more difficult than an Interface, right? Well…

First, there are quite a few tables related to the Mapping itself.

SNP_MAPPING
SNP_MAP_ATTR
SNP_MAP_ATTR_INFO
SNP_MAP_COMP
SNP_MAP_COMP_TYPE
SNP_MAP_CONN
SNP_MAP_CP
SNP_MAP_CP_ROLE
SNP_MAP_DATA_TYPE
SNP_MAP_EXPR
SNP_MAP_EXPR_REF
SNP_MAP_PROP
SNP_MAP_PROP_DEF
SNP_MAP_REF
SNP_MAP_REF_PP

Whoa…we’ve got some work to do. Lucky for you, I’ve already done the work. Let’s look at how it all fits together. We can start with the Mapping (SNP_MAPPING) table. We also have different components on the Mapping, such as Lookups, etc, so we can join in the table SNP_MAP_COMP as well. Here’s the information we’ll be able to see with that simple join.

select ...
from snp_mapping m inner join snp_map_comp mc on m.i_mapping = mc.i_owner_mapping

(Image: mapping and component query results)

That's interesting: we've captured all of the components in the mapping. But there are still quite a lot of non-targets here. OK, maybe if we add the connection points for each component we can find the input and output for each. Each component has an input connector point, allowing an output from a different component to flow into it.

select ...
from snp_mapping m inner join snp_map_comp mc on m.i_mapping = mc.i_owner_mapping
    inner join snp_map_cp cp on mc.i_map_comp = cp.i_owner_map_comp

That just added the connection point information for each component and whether it is an INPUT or OUTPUT. Not extremely useful by itself, so let’s dig a bit deeper. How about adding the SNP_MAP_REF table? This table seems to contain a reference to all types of other attributes about the Mapping and its Components. We also need to consider that Datastores, just as any other Component, will have both an input and output. Right now, the dataset shows both the input and output connectors for each Component. Let’s remove the input connection points to limit our result set.

select ...
from snp_mapping m inner join snp_map_comp mc on m.i_mapping = mc.i_owner_mapping
    inner join snp_map_cp cp on mc.i_map_comp = cp.i_owner_map_comp
    inner join snp_map_ref mr on mc.i_map_ref = mr.i_map_ref
where cp.direction = 'O' --output connection point only

Joining the reference table has now allowed us to focus on the Datastores in the Mapping and their OUTPUT connection point. What I really want is to see only the Datastores that do not have their OUTPUT connection point connected to an INPUT connector. Therefore, if the OUTPUT is empty, it must be the target table!

select ...
from snp_mapping m inner join snp_map_comp mc on m.i_mapping = mc.i_owner_mapping
    inner join snp_map_cp cp on mc.i_map_comp = cp.i_owner_map_comp
    inner join snp_map_ref mr on mc.i_map_ref = mr.i_map_ref
where cp.direction = 'O' and --output connection point only
    cp.i_map_cp not in
        (select i_start_map_cp from snp_map_conn) --not a starting connection point

The SNP_MAP_CONN, which stores the mapping connections, will allow me to limit the query to the components that only have an output, but not an input. The connections table will contain all component connections in the ODI 12c mappings. Here’s what we get as a result.

(Image: query results showing the mapping target tables)

Hey, now we're onto something here. In fact, this is what I was looking for! Target table(s) in a single Mapping in ODI 12c. Not quite as simple as ODI 11g, but with a bit of SQL and an understanding of the repository tables, you can do it. Here's the final query again, joining in the SNP_TABLE & SNP_MODEL tables to complete the dataset.

select
    m.name mapping_name,
    mr.qualified_name,
    mc.name datastore_alias,
    t.table_name target_table,
    mdl.cod_mod model_code
from snp_mapping m inner join snp_map_comp mc on m.i_mapping = mc.i_owner_mapping
    inner join snp_map_cp cp on mc.i_map_comp = cp.i_owner_map_comp
    inner join snp_map_ref mr on mc.i_map_ref = mr.i_map_ref
    inner join snp_table t on mr.i_ref_id = t.i_table
    inner join snp_model mdl on t.i_mod = mdl.i_mod
where cp.direction = 'O' and --output connection point
    cp.i_map_cp not in
        (select i_start_map_cp from snp_map_conn) --not a starting connection point
;

Please let me know if you find this Data Integration Tip useful or if you have a better way of accessing the target table in a mapping. One of my Rittman Mead colleagues asked, "Why not just use the ODI Java API?" For accessing the ODI repository, I do prefer using some Groovy script and the API. But in this case, I'm interested in building out a dimensional schema and writing ETL to load the dimensions and facts, which lends itself to SQL rather than Groovy script.

As always, if your team needs help with Oracle Data Integrator, or Oracle Data Integration solutions in general, drop us a line at info@rittmanmead.com. Or feel free to reach out to me directly via email (michael.rainey@rittmanmead.com) or Twitter (@mRainey). Cheers!

The post Data Integration Tips: ODI 12c Repository Query – Find the Mapping Target Table appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

OBIEE 12c – Repository Password Corruption Issue

Rittman Mead Consulting - Thu, 2016-02-11 10:00

Here at Rittman Mead we've been working with OBIEE 12c for some time now, as part of the beta programme and more recently with clients looking to get the most out of an upgrade to OBIEE 12c. We've also been hard at work on our brand new OBIEE 12c training course. What we've seen in terms of the stability of OBIEE 12c has been pleasantly surprising. Anyone who's worked with software long enough will be familiar with the reputation that first releases in general have for nasty bugs, and it's probably fair to say that with the first release of 11g (11.1.1.3) this was proven out. With the first release of OBIEE 12c, however, we're seeing a stable tool with very few issues so far.

That said…I’m going to demonstrate an issue here that is a bit of a nasty one. It’s nasty because the trigger for it appears innocuous, and what it breaks is one of the really new things in OBIEE 12c—the way in which the RPD is stored on disk and accessed by the BI Server. This makes it a bit of a tough one to get to the bottom of at first, but it’s a good excuse to go digging!

Summary

If you open the RPD in online mode, use File -> Copy As, and then use the Save option, the password on the server gets corrupted.

[nQSError: 43113] Message returned from OBIS  
[nQSError: 13042] Repository password is wrong


From this point on you cannot check in any changes, and when you restart the BI Server it will fail to start up.

Details

In OBIEE (12c, and before) it is possible to open the RPD as a straight binary file on disk (“Offline” mode), or by connecting directly to the BI Server and opening the copy that it is currently running (“Online mode”). Offline mode suits larger changes and development, with the RPD then being deployed onto the server once the development work is ready to be tested. Online mode is a good way for making changes on a dedicated dev server, minor changes on a shared server, or indeed just for viewing the RPD that’s currently being run.

Here’s what we’ve seen as the problem:

  1. Open RPD in online mode
  2. File -> Copy As
  3. Enter a password with which to protect the RPD being saved on disk.
  4. Do one of:
    • File -> Close, and then when prompted to save changes click Yes
    • File -> Save
    • Click the Save icon on the toolbar
    • Ctrl-S


What happens now is two-fold:

  1. You cannot check in any changes made online—the check in fails with an error from the Administration Tool:
    [nQSError: 43113] Message returned from OBIS  
    [nQSError: 13042] Repository password is wrong
  2. The BI Server will fail on restart with the same error:
    Opening latest versioned cached RPD for : /app/oracle/biee/bi/bifoundation/server/empty.rpd which is /app/oracle/biee/user_projects/domains/bi/bidata/service_instances/ssi/metadata/datamodel/customizations/liverpd.rpd_5  
    [nQSError: 13042] Repository password is wrong. [[

We saw this on SampleApp v511, as well as on vanilla installations of OBIEE. Versions on both were 12.2.1.0.0.

After we reported this to Oracle, they agreed it was a bug and have logged it as bug number 22682937, with no patch currently (February 10, 2016) available.

Workaround

If you open the RPD online and use File -> Copy As, don’t hit save or check in, even if prompted by the Admin Tool. Close the RPD straightaway.

Often people will use File -> Copy As to take a copy of the current live RPD before doing some changes to it. At Rittman Mead, we’d always recommend using source control such as git to store all code including the RPD, and using this approach you obviate the need to open the RPD online simply to get the latest copy (because the latest copy is in source control).

You can also use the data-model-cmd downloadrpd option to download the actual live RPD—that’s exactly what this option is provided for.

Solution – if BI Server (OBIS) has not yet been restarted

If you've hit this bug and are getting "Repository password is wrong" when you try to check in, and if the BI Server is still running, then redeploy the RPD using the data-model-cmd uploadrpd tool. By redeploying the RPD the password appears to get sorted out.

If the BI Server is down, then this is not an option because it has to be running in order for data-model-cmd uploadrpd to work.

Solution – if BI Server (OBIS) has been restarted and failed

At this point using data-model-cmd uploadrpd is not possible because OBIS is not running and so the data-model-cmd uploadrpd will fail with the error:

[oracle@demo ~]$ /app/oracle/biee/user_projects/domains/bi/bitools/bin/data-model-cmd.sh uploadrpd -I /home/oracle/rmoff.rpd -W Password01 -U weblogic -P Admin123 -SI ssi  
Service Instance: ssi

Operation failed.  
An exception occurred during execution, please check server logs.

The only option from this point is to use importServiceInstance to reset the service instance, either to an empty, SampleAppLite, or an existing .bar export of your environment. For example:

/app/oracle/biee/oracle_common/common/bin/wlst.sh  
importServiceInstance('/app/oracle/biee/user_projects/domains/bi','ssi','/app/oracle/biee/bi/bifoundation/samples/sampleapplite/SampleAppLite.bar')

This will enable OBIS to start up correctly, from which point the desired RPD can then be re-uploaded if required using data-model-cmd uploadrpd.

Conclusion

The easiest thing is to simply not use File -> Copy As in online mode. Whilst this on its own is fine, the UI means it’s easy to accidentally use the Save option, which then triggers this problem. Instead, use data-model-cmd downloadrpd, and/or use source control so that you can easily identify the latest RPD that you want to develop against.

If you do hit this repository password corruption problem, then keep calm and don’t restart the BI Server—just re-upload the RPD using data-model-cmd uploadrpd. If you have already uploaded the RPD, then you need to use importServiceInstance to restore things to a working state.

As part of the diagnostics that we did to get to the bottom of this issue, we found some interesting things in OBIEE 12c, such as a web service endpoint for RPD upload/download, as well as the detailed workings of the RPD upload process and that infamous liverpd.rpd file. Stay tuned for a blog post on this and more soon! And in the meantime, be sure to get in touch with us to discuss how we can help you with your OBIEE systems, including OBIEE 12c upgrade, and OBIEE 12c training.

The post OBIEE 12c – Repository Password Corruption Issue appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Implementing OBIEE Commentary

Rittman Mead Consulting - Thu, 2016-02-11 04:00
My Application Development Experience with Rittman Mead

Greetings, readers! My name is Nick Padgett, and this is my first blog post with Rittman Mead. I started with Rittman Mead as an intern over a year ago while I was attending Kennesaw State, a local university, but I’ve graduated from both school and the internship. Since then I’ve been working on several interesting applications. Today, I would like to take some time to discuss developing one such application, our OBIEE Commentary tool, as well as my learning experiences working at Rittman Mead.

When I first started with Rittman Mead in October 2014, I was beginning my final year in a Computer Science program. When I joined, I knew very little about Business Intelligence but was eager to learn everything I could. Learning BI while still in school created a powerful synergy between my class studies and the real world. As I learned web development and security in class, I learned about data warehouses and agile development at work. In addition, I had the opportunity to practice my new knowledge and skills on a daily basis.

The most important lesson I learned during my time as an intern was the importance of developing primarily to meet the user requirements. This subject was often ignored in school, except for the obligatory Software Engineering course. But now, as a formal developer, I was always being reminded of the users we were developing for. All features for all projects were first qualified by asking “Is this what the user wants or needs?” or “Will this add value for the users?”

The first project I was assigned was the Rittman Mead commentary tool. As I assisted in development, the team always looked back to the initial requirements: “Users want commentary in OBIEE.” All development was focused in that direction, and any feature which hindered the goal, despite the technical impressiveness, was removed from the application.

Now, after spending many hours developing the commentary tool, the team is confident it has effectively answered the primary user requirements. We’re all very excited to provide this tool to our customers, and we are confident it will meet all the requirements we spent so long accumulating and developing for.

The Primary Use-case: OBIEE Commentary

In a typical environment, users are required to leave the application to communicate with their peers. This often involves screenshots, long text descriptions of the subject, and a service such as email or Slack. However, forcing users to leave the application in order to do their job is less than optimal. What's to keep them using the expensive tool you purchased for them if they have to leave it every few minutes?

Many organizations recognize this issue and seek to implement commentary themselves. Frequent approaches typically use either a text object on a dashboard, a write-back enabled input, or even strange iFrame solutions to achieve the desired capabilities. Obviously, these implementations have serious drawbacks, and may not be maintainable or feasible for many organizations.

The commentary tool we have developed uses none of the above approaches, but instead uses a standard web deployment, hosted in WebLogic, to provide similar capabilities. This means no RPD modifications, no requirement for dashboard designers to become involved, and no security threats with iFrames or Cross-Site Scripting (XSS) attacks, while still fulfilling the primary use case of adding commentary to your OBIEE dashboards.

(Image: commentary on an OBIEE 12c dashboard)

Our tool allows for users to create several different types of comments, all displayed through standard HTML in a web browser. Comments can be added on a dashboard to provide commentary for the page as a whole, while other comments can be placed on reports, which will be shown anywhere the report is located. You can even submit replies to existing comments, allowing for comment threads over a particular subject. All of these comments are available for use without having to leave the application.

A Second Use-case: Documentation

Some comments are highly structured and curated, containing a wealth of information. Some users like to add comments explaining how a measure is calculated, or a post-mortem for the launch of a new product. These comments are best classified as documentation, as they don't serve to engage in a conversation but to present information. In addition, there may be formal documentation that is required to be on a dashboard and relates directly to a series of reports. The most common way to provide this is to supply links to external applications specifically suited for documentation. Beyond a myriad of accessibility concerns, this approach forces users to leave the application, and some of them may never return.

As an alternative, documentation can be placed directly on a dashboard or report, but this leads to design problems. Having a twenty-page document on a dashboard is hardly an acceptable solution, even though this is the best place to put your documentation. In addition, users will have to write their documentation in HTML, or at least be able to understand it.

(Image: the documentation feature of the commentary tool)

The commentary tool also allows for this use case. Documentation can be created or edited using a WYSIWYG (What You See Is What You Get) editor, similar to any standard document editor, and added to a “Table of Contents” on a dashboard. The Table of Contents lists all current documents available to view and displays them in a dedicated area. This provides access to any pertinent content directly on a dashboard, rather than forcing users to migrate to an external application, which is the entire point for having native commentary in the first place.

A Third Use-case: Integration

A common concern may be the introduction of yet another channel for communication. For example, many organizations use Slack as a communication tool or Atlassian Confluence for documentation. Rittman Mead does not desire to replace these tools or make them obsolete. We acknowledge that you spend money on these applications and that they may be widely adopted within your organization.

(Image: commentary tool integrations with external applications)

Instead of forcing you to maintain several channels of communication, our commentary tool allows integrations with applications such as JIRA and Confluence. Comments can be submitted as JIRA issues, and documents can be synchronized with Confluence pages. Several other integrations are available by default, while additional implementations are available on a per-customer basis. Now, if users absolutely must leave the application, they can do it on their own terms, with the convenience of a few clicks.

As I have grown during my time here with Rittman Mead, so has our commentary application. While I learned how to cater to user requirements, the tool evolved into an extension of this knowledge and was developed with the intention to meet all customer needs. I am incredibly excited to see this product released soon, and I can’t wait for users to start getting more out of their BI applications.

More information on this tool will be coming very soon, so keep an eye on our blog and website for updates!

The post Implementing OBIEE Commentary appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Using Python to ‘Wrangle’ Your Messy Data

Rittman Mead Consulting - Mon, 2016-02-08 00:00
 or How to Organize Your Comic Book Collection Based on Issue Popularity

In addition to being a product manager at Rittman Mead, I consider myself to be a nerd of the highest order. My love of comic books, fantasy, sci-fi and general geekery began long before the word 'Stark' had anything to do with Robert Downey Jr or memes about the impending winter. As such, any chance to incorporate my hobbies into my work is a welcome opportunity. For my next few blog entries, I've decided to construct a predictive classification model from comic book sales data, with the eventual goal of accurately predicting whether a comic will rocket off the shelves or be a sales dud. This first blog of the series shows some of the pitfalls that can come up when preparing your data for analysis. Data preparation, or data wrangling as it has come to be known, is an imperfect process that usually takes multiple iterations of transformation, evaluation and refactoring before the data is "clean" enough for analysis.

While the steps involved in data wrangling vary based on the state and availability of the raw data, for this blog I have chosen to focus on gathering data from disparate sources, enriching that data by merging their attributes, and restructuring it to facilitate analysis. Comic book sales data is readily available on the interwebs; however, finding that data in a usable format proved to be a more difficult task. In the end, I had to resort to the dreaded process of screen scraping the data from a comic research site. For those of you who are lucky enough to be unfamiliar with it, screen scraping is the process of programmatically downloading HTML data and stripping away the formatting to make it suitable for use. This is generally used as a last resort because web sites are volatile creatures that are prone to change their appearance as often as my teenage kids do preparing to leave the house. However, for the purposes of this series, as my friend Melvin the handyman would often say, "We works with what we gots."

This leads us to the first issue you may run into while wrangling your data. You have access to lots of data, but it's not pretty. So make it pretty. Working with raw data is like being a sculptor working with wood. Your job is not to change the core composition of the data to suit your purposes but to chip away at the excess to reveal what was there all along, a beautiful horse… er, I mean insight. Sorry, I got lost in my analogy. Actually, to expand on this analogy a bit, the first tool I pulled out of my toolbox for this project was Python, the Leatherman of programming languages. Python is fast, plays well with other technologies and, most importantly in this case, Python is ubiquitous. Used for tasks ranging from process automation and ETL to gaming and academic pursuits, Python is truly a multipurpose tool. As such, if you have a functional need, chances are there is a native module or someone has already written a public library to perform that function. In my case, I needed some scripts to "scrape" HTML tables containing comic sales data and combine that data with other comic data that I retrieved elsewhere. The "other" data is metadata about each of the issues. Metadata is just data about data. In this case, information about who authored it, how it was sold, when it was published, etc. More on that later.

Luckily for me, the format of the data I was scraping was tabular, so extracting the data and transforming it into Python objects was a relatively simple matter of iterating through the table rows and binding each table column to the designated Python object field. There was still a lot of unnecessary content on the page that needed to be ignored, like the titles and all of the other structural tags, but once I found the specific table holding the data, I was able to isolate it. At that point, I wrote the objects to a CSV file to make the data easy to transport and to facilitate usability by other languages and/or processes.

The heavy lifting in this process was performed by three different Python modules: urllib2, bs4 and csv. Urllib2, as the name implies, provides functions to open URLs. In this case, I found a site that hosted a page containing the estimated issue sales for every month going back to the early 1990's. To extract each month without manually updating a hardcoded URL over and over, I created a script, month_sales_scraper.py, that accepted MONTH and YEAR as arguments.
(Image: the month_sales_scraper.py retrieval code)
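The screenshot of that code isn't reproduced here, but a minimal sketch of the retrieval step might look like the following (Python 2, to match the urllib2 module the post uses; the URL pattern is a placeholder, not the actual source site):

# month_sales_scraper.py (sketch) -- fetch one month's sales rankings page.
# The URL pattern is a placeholder, not the real source site.
import sys
import urllib2

def fetch_sales_page(month, year):
    url = 'http://example.com/comic-sales/%s/%s.html' % (year, month)
    return urllib2.urlopen(url).read()

if __name__ == '__main__':
    month, year = sys.argv[1], sys.argv[2]
    html = fetch_sales_page(month, year)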

The response from the urlopen(url) function call was the full HTML code that is typically rendered by a web browser. In that format, it does me very little good, so I needed to employ a parser to extract the data from the HTML. In this context, a parser is a program that is used to read in a specific document format, break it down into its constituent parts while preserving the established relationships between those parts, and then finally provide a means to selectively access said parts. So an HTML parser would allow me to easily access all the <TD> column tags for a specific table within an HTML document. For my purposes, I chose BeautifulSoup, or bs4.

BeautifulSoup provided search functions that I used to find the specific HTML table containing the sales data and loop through each row, while using the column values to populate a Python object.

(Image: the BeautifulSoup parsing code)
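Again, the original screenshot isn't shown, but a sketch of that BeautifulSoup loop might look like this; the table id and the column positions are assumptions about the scraped page:

# Sketch: find the sales table and bind each row's columns to a Python dict.
# The table id and the column order are assumptions about the scraped page.
from bs4 import BeautifulSoup

def parse_sales(html, month, year):
    soup = BeautifulSoup(html, 'html.parser')
    table = soup.find('table', {'id': 'salesTable'})
    rows = []
    for tr in table.find_all('tr')[1:]:   # skip the header row
        cols = [td.get_text(strip=True) for td in tr.find_all('td')]
        data = {
            'year': year,
            'month': month,
            'rank': cols[0],
            'title': cols[1],
            'price': cols[2],
            'est_sales': cols[3],
            'format': None,   # the real script derives this from ranking logic
        }
        rows.append(data)
    return rows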

This Python object, named data, contains fields populated with data from different sources. The year and month are populated using the arguments passed to the module. The format field is dynamically set based on logic related to the rankings and the remaining fields are set based on their source’s position in the HTML table. As you can see, there is a lot of hard coded logic that would need to be updated, should the scraped site change their format. However, for now this logic gets it done.

The final step of this task was to write those Python objects to a CSV file. The Python csv module provides the writerow() function, which accepts an array as a parameter and writes each of the array elements as a column in the CSV.
(Image: the CSV writer code)
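A sketch of that final step, using the csv module's writer, follows; the file naming mirrors the (MONTH)_(YEAR).CSV convention mentioned below:

# Sketch: write each parsed row out as a line in (MONTH)_(YEAR).CSV.
import csv

def write_sales_csv(rows, month, year):
    with open('%s_%s.csv' % (month, year), 'wb') as f:   # 'wb' for the Python 2 csv module
        writer = csv.writer(f)
        for row in rows:
            writer.writerow([row['year'], row['month'], row['rank'],
                             row['title'], row['price'], row['est_sales']])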

My first pass raised an exception because the title field contained unicode characters that the CSV writer could not handle.
(Image: the UnicodeEncodeError traceback)

To rectify this, I had to add a check for unicode values and encode the content as UTF-8. Unicode is a character set and UTF-8 is one of its encodings; together they provide a map computers use to identify characters. This includes alphabets and logographic symbols from different languages as well as common symbols like ®.

(Image: the unicode to UTF-8 encoding fix)
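The fix, in sketch form (Python 2 semantics, matching the post), is simply to encode unicode values before handing them to the writer:

# Sketch of the fix: encode any unicode value as UTF-8 before csv.writerow().
def to_csv_value(value):
    if isinstance(value, unicode):
        return value.encode('utf-8')
    return value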

Additionally, there was the matter of reformatting the values of some of the numeric fields to allow math to be performed on them later (i.e. stripping '$' and commas). Other than that, the data load went pretty smoothly. A file named (MONTH)_(YEAR).CSV was generated for each month. Each file turned out like so:

(Image: a sample monthly sales CSV file)

While this generated tens of thousands of rows of comic sales data, it was not enough. Rather, it had the volume but not the breadth of information I needed. In order to make an accurate prediction, I needed to feed more variables to the model than just the comic's title, issue number, and price. The publisher was not relevant, as I decided to limit this exercise to only Marvel comics, and passing in the estimated sales would be cheating, as rank is based on sales. So to enhance my dataset, I pulled metadata about each of the issues down from "the Cloud" using Marvel's Developer API. Thankfully, since the API is a web service, there was no need for screen scraping.

Retrieving and joining this data was not as easy as one might think. My biggest problem was that the issue titles from the scraped source were not an exact match to the titles stored in the Marvel database. For example, the scraped dataset lists one title as "All New All Different Avengers". Using their API to search the Marvel database with that title retrieved no results. Eventually, I was able to manually find it in their database listed as "All-New All-Different Avengers". In other cases, there were extra words like "The Superior Foes of Spider-Man" vs "Superior Foes of Spider-Man". So in order to perform a lookup by name, I needed to know the titles as they expected them. To do this I decided to pull a list of all the series titles whose metadata was modified during the timeframes for which I had sales data. Again, I ran into a roadblock. The Marvel API only allows you to retrieve up to 100 results per request, and Marvel has published thousands of titles. To get around this I had to perform incremental pulls, segmented alphabetically.

(Image: the incremental, alphabetically segmented API pull code)
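A sketch of that alphabetically segmented pull is below. The endpoint, parameter names and ts/hash authentication follow the Marvel developer API's published conventions, but treat the specifics as assumptions and check the API documentation; the keys are placeholders.

# Sketch: pull Marvel series titles in alphabetical segments, 100 at a time.
# Endpoint, parameter names and the ts/md5-hash auth follow the Marvel
# developer API conventions, but verify the details against the API docs.
import hashlib
import json
import string
import time
import urllib
import urllib2

PUBLIC_KEY = 'your-public-key'     # placeholder
PRIVATE_KEY = 'your-private-key'   # placeholder

def marvel_series(params):
    ts = str(int(time.time()))
    digest = hashlib.md5(ts + PRIVATE_KEY + PUBLIC_KEY).hexdigest()
    params.update({'ts': ts, 'apikey': PUBLIC_KEY, 'hash': digest, 'limit': 100})
    url = 'https://gateway.marvel.com/v1/public/series?' + urllib.urlencode(params)
    return json.loads(urllib2.urlopen(url).read())['data']['results']

series_titles = []
for letter in string.ascii_uppercase:
    for series in marvel_series({'titleStartsWith': letter}):
        series_titles.append((series['id'], series['title']))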

Even then there were a few minor issues, as some letters like 'S' had more than 100 titles. To get around that I had to pull the list for 'S' titles sorted ascending and descending, then combine the results, making sure to remove duplicates. So my advice on this one is to be sure to read up on the limitations of any API you are using. It may enforce limits, but you may be able to work around the limits using creative querying.


At this point I have my list of Marvel series titles, stored in some CSV files that I eventually combined into a single file, MarvelSeriesList.csv, for ease of use. Actually, I have more than that. While retrieving the series titles, I also pulled down the ID for each series and an appropriateness rating. Searching the API by ID will be much more accurate than by name, and the appropriateness rating may be useful when building out the prediction model. The next step was to iterate through each row of the CSVs we created from the sales data, find the matching ID from MarvelSeriesList.csv, and use that ID to retrieve its metadata using the API.

If you remember, the point of doing that last step was that the titles stored in the sales data files don't match the titles in the API, so I needed to find a way to join the two sources. Rather than writing cases to handle each of the scenarios (e.g. mismatched punctuation, extra filler words), I looked for a Python library to perform some fuzzy matching. What I found was an extremely useful library called Fuzzy Wuzzy. Fuzzy Wuzzy provides a function called extractOne() that allows you to pass in a term and compare it with an array of values. The extractOne() function will then return the term in the array that has the highest match percentage. Additionally, you can specify a lower bound for acceptable matches (i.e. only return a result where the match is >= 90%).

Again, it took a few passes to get the configuration to work effectively. The first time through, only about 65% of the titles in the sales file found a match. That was throwing away too much data for my liking, so I had to look at the exceptions and figure out why the matches were falling through. One issue I found was that titles with the publication year tacked on in the Marvel database, like "All-New X-Men (2012)", had a match score in the 80's when matched against a sales title like "All New X-Men". This was a pretty consistent issue, so rather than lowering the match percentage, which could introduce some legitimate mismatches, I decided to strip the year, if present, on mismatches and run them through the matching process again. This got me almost there. The only other issue I ran into was that Fuzzy Wuzzy had trouble matching acronyms/acrostics. So 'S.H.I.E.L.D.' had a match score in the 50's when matching 'SHIELD'. That's because half the characters (the periods) were missing. Since there were only two titles affected, I built a lookup dictionary of special cases that needed to be translated. For the purposes of this exercise, I would still have had enough matches to skip that step, but doing it brought us up to 100% matching between the two sources. Once the matching function was working, I pulled out urllib2 and retrieved all the metadata I could for each of the issues.
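A sketch of that matching pass with Fuzzy Wuzzy, including the year-stripping retry and the special-case lookup, might look like this (the 90% cutoff and the dictionary contents are illustrative):

# Sketch: match a scraped sales title against the Marvel series titles,
# retrying with any trailing "(YYYY)" stripped from the candidate titles and
# falling back to a small special-case dictionary for acronyms.
import re
from fuzzywuzzy import process

SPECIAL_CASES = {'SHIELD': 'S.H.I.E.L.D.'}   # illustrative special case

def strip_year(title):
    return re.sub(r'\s*\(\d{4}\)$', '', title)

def match_title(sales_title, marvel_titles, cutoff=90):
    sales_title = SPECIAL_CASES.get(sales_title, sales_title)
    match = process.extractOne(sales_title, marvel_titles, score_cutoff=cutoff)
    if match is not None:
        return match[0]
    # Retry against titles with any trailing publication year removed.
    stripped = dict((strip_year(t), t) for t in marvel_titles)
    match = process.extractOne(sales_title, list(stripped), score_cutoff=cutoff)
    return stripped[match[0]] if match is not None else None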

The resulting files contained not only sales data (title, issue number, month rank, estimated sales), but information about the creative team, issue descriptions, characters, release dates and  associated story arcs. This would be enough to get us started with building our predictive classification model.
(Image: the combined sales and metadata CSV)

That being said, there was still a lot of structural rearranging required to make it ready for the type of analysis I wanted to do, but we will deal with that in the next blog. Hopefully, you picked up some useful tips on how to combine data from different sources, or at the very least found solace in knowing that while you may not be the coolest person in the world, somewhere out there is a grown man who still likes to read comic books enough to write a blog about it. Be sure to tune in next week, True Believers, as we delve into The Mysteries of R!

The post Using Python to ‘Wrangle’ Your Messy Data appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Becky’s BI Apps Corner: OBIA install Perl Script Patching and troubleshooting when they fail

Rittman Mead Consulting - Thu, 2016-02-04 05:00

During a recent project installing Oracle BI Applications, I became much better acquainted with OPatch, Oracle's standard tool for managing application patches. By acquainted, I mean learning how to troubleshoot when OPatch patching fails. Since, at last count, there are around 50 patches in total for Oracle BI Applications 11.1.1.9.2, the first patching attempt may not apply all patches successfully. There are any number of reasons for a failure: an extra slash at the end of a path, a misspelled word, WebLogic or NodeManager still running, and so on. We will take a look at the logs for each section, learn where additional logs can be found, and learn how to turn on OPatch debugging to better understand the issue. Then, following the ideas from a previous OPatch post by Robin, I'll describe how to manually apply, with OPatch at the command line, any patches that weren't applied successfully by the provided perl script.

*Disclaimers – Please read the readme files for patches and follow all Oracle recommendations. Patch numbers are subject to change depending on OS and OBIA versions. Commands and paths here are of the linux/unix variety, but there are similar commands available for Windows OS.

Perl Script patching

Unzip the patch files to a patch folder. I have included the OBIEE patch as well.

unzip pb4biapps_11.1.1.9.2_.zip -d patches/
unzip pb4biapps_11.1.1.9.2_generic_1of2.zip -d patches/
unzip pb4biapps_11.1.1.9.2_generic_2of2.zip -d patches/
unzip p20124371_111170_.zip -d patches/

While installing Oracle BI Applications versions 11.1.1.7 and up, patches get applied with a perl script called APPLY_PATCHES.pl. Following Oracle's install documentation for the 11.1.1.9 version of Oracle BI Applications here, there is a text file to modify and pass to the perl script. Both the perl script and the text file reside in the following directory: $ORACLE_HOME/biapps/tools/bin. In the text file, called apply_patches_import.txt, parameters are set with the paths to the following directories:

JAVA_HOME
INVENTORY_LOC
ORACLE_HOME
MW_HOME
COMMON_ORACLE_HOME
WL_HOME
ODI_HOME
WORKDIR
PATCH_ROOT_DIR
WINDOWS_UNZIP_TOOL_EXE (only needed if running on Windows platforms)

Some pro tips to modifying this text file:
1. Oracle recommends you use the JDK in the ORACLE_BI1 directory.
2. Use ORACLE_BI1 as the ORACLE_HOME.
3. Ensure WORKDIR and PATCH_ROOT_DIR are writeable directories.
4. Don’t add a path separator at the end of the path.
5. Commented lines are safe to remove.

Then you run APPLY_PATCHES.pl, passing in apply_patches_import.txt. If everything goes well, the summary at the end of the perl script will show that all patch groups applied successfully.

If this is the case, CONGRATULATIONS!! You can move on to the next step in the install documentation. Thanks for stopping by and come back soon! However, if any patch or group of patches failed, the rest of this post is for you.

Log file location

First, the above patching report does not tell you where to find the logs, regardless of success or failure. If you remember though, you set a WORKDIR path in the text file earlier. In that directory is where you will find the following log files:

  1. final_patching_report.log
  2. biappshiphome_generic_patches.log
  3. odi_generic_patches.log
  4. oracle_common_generic_patches.log
  5. weblogic_patching.log

Open final_patching_report.log to determine first whether all patches were applied, and to identify the ones that were not successful. For example, this log may show that the Oracle Common patches failed.

cd $WORKDIR
vi final_patching_report.log

However, this doesn’t tell you what caused the failure. Next we will want to look into the oracle_common_generic_patches.log to gather more information.

From the $WORKDIR:

vi oracle_common_generic_patches.log

Here you will see the error that a component is missing: "Patch ######## requires component(s) that are not installed in OracleHome. These not-installed components are oracle.jrf.thirdparty.jee:11.1.1.7.0". Notice also that this log gives a path to another log file location. The path is in the $COMMON_ORACLE_HOME/cfgtoollogs/opatch/ directory. This directory has more detailed logs specific to patches applied to oracle_common. Additionally, there are logs under $ORACLE_HOME/cfgtoollogs/opatch/, $WL_HOME/cfgtoollogs/opatch/, and $ODI_HOME/cfgtoollogs/opatch/. These locations are very helpful to know, so you can find the logs for each group of patches in the same relative path.

Going back to the above error, we are going to open the most recent log file listed in the $COMMON_ORACLE_HOME/cfgtoollogs/opatch/ directory.

cd $COMMON_ORACLE_HOME/cfgtoollogs/opatch/
vi opatch2015-08-08_09-20-58AM_1.log

The beginning of this log file has two very interesting pieces of information to take note of for use later. It has the actual OPatch command used, and it has a path to a Patch History file. Looks like we will have to page down in the file to find the error message.

Now we see our missing component error. Once the error occurs, the java program starts backing out and then starts cleanup by deleting the files extracted earlier in the process. This log does have more detail, but still doesn’t say much about the missing component. After some digging around on the internet, I found a way to get more detailed information out of OPatch by setting export OPATCH_DEBUG=TRUE. After turning OPatch debugging on, run the OPatch command we found earlier that was at the top of the log. A new log file will be generated and we want to open this most recent log file.

Finally, the results give me detailed information about the component and the failure.

Side Note: If you are getting this specific error, I’ll refer you back to a previous blog post that happened to mention making sure to grab the correct version of OBIEE and ODI. If you have a wrong version of OBIEE or ODI for the Oracle BI Apps version you are installing, unfortunately you won’t start seeing errors until you get to this point.

Manually running Oracle BI Application patches

Normally, the error or reason behind a patch or group of patches failing doesn’t take that level of investigation, and the issue will be identified in the first one or two logs. Once the issue is corrected, there are a couple of options available. Rerunning the perl script is one option, but it will cycle through all of the patches again, even the ones already applied. There is no harm in this, but it does take longer than running the individual patches. The other option is to run the OPatch command at the command line. To do that, first I would recommend setting the variables from the text file. I also added the Oracle_BI1/OPatch directory to the PATH variable.

export JAVA_HOME=$ORACLE_HOME/jdk
export INVENTORY_LOC=
export COMMON_ORACLE_HOME=$MW_HOME/oracle_common
export WL_HOME=$MW_HOME/wlserver_10.3
export SOA_HOME=$MW_HOME/Oracle_SOA1
export ODI_HOME=$MW_HOME/Oracle_ODI1
export WORKDIR=
export PATCH_FOLDER=/patches
export PATH=$ORACLE_HOME/OPatch:$JAVA_HOME/bin:$PATH

Next, unzip the patches in the required directory, for example $PATCH_FOLDER/oracle_common/generic.

Below are the commands for each group of patches:

Oracle Common Patches:

cd $PATCH_FOLDER/oracle_common/generic
unzip "*.zip"

$COMMON_ORACLE_HOME/OPatch/opatch napply $PATCH_FOLDER/oracle_common/generic -silent -oh $COMMON_ORACLE_HOME -id 16080773,16830801,17353546,17440204,18202495,18247368,18352699,18601422,18753914,18818086,18847054,18848533,18877308,18914089,19915810

BIApps Patches:

cd $PATCH_FOLDER/biappsshiphome/generic
unzip "*.zip"

opatch napply $PATCH_FOLDER/biappsshiphome/generic -silent -id 16913445,16997936,19452953,19526754,19526760,19822893,19823874,20022695,20257578

ODI Patches:

cd $PATCH_FOLDER/odi/generic
unzip "*.zip"

/$ODI_HOME/OPatch/opatch napply $PATCH_FOLDER/odi/generic -silent -oh $ODI_HOME -id 18091795,18204886

Operating Specific Patches:

cd $PATCH_FOLDER/
unzip "*.zip"

opatch napply $PATCH_FOLDER/ -silent -id ,,

Weblogic Patches:

cd $PATCH_FOLDER/suwrapper/generic
unzip "*.zip"

cd $PATCH_FOLDER/weblogic/generic

$JAVA_HOME/bin/java -jar $PATCH_FOLDER/suwrapper/generic/bsu-wrapper.jar -prod_dir=$WL_HOME -install -patchlist=JEJW,LJVB,EAS7,TN4A,KPFJ,RJNF,2GH7,W3Q6,FKGW,6AEJ,IHFB -bsu_home=$MW_HOME/utils/bsu -meta=$PATCH_FOLDER/suwrapper/generic/suw_metadata.txt -verbose > $PATCH_FOLDER/weblogic_patching.log

Even though this is a very specific error as an example, understanding the logs and having the break-down of all of the patches will help with any number of patch errors at this step of the Oracle BI Applications installation. I would love to hear your thoughts if you found this helpful or if any part was confusing. Keep an eye out for the next Becky’s BI Apps Corner where I move on from installs and start digging into incremental logic and Knowledge Modules.

The post Becky’s BI Apps Corner: OBIA install Perl Script Patching and troubleshooting when they fail appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Data Mining Scoring Development Process

Dylan Wan - Thu, 2015-04-02 23:39

I think that the process of building a data mining scoring engine is similar to developing an application.


We have requirements analysis, functional design, technical design, coding, testing, deployment, and other phases.



Categories: BI & Warehousing