
Feed aggregator

Security Alert CVE-2016-0603 Released

Oracle Security Team - Fri, 2016-02-05 14:42

Oracle just released Security Alert CVE-2016-0603 to address a vulnerability that can be exploited when installing Java 6, 7 or 8 on the Windows platform. This vulnerability has received a CVSS Base Score of 7.6.

To be successfully exploited, this vulnerability requires that an unsuspecting user be tricked into visiting a malicious web site and downloading files to the user's system before installing Java 6, 7 or 8. Though considered relatively complex to exploit, this vulnerability, if successfully exploited, may result in a complete compromise of the unsuspecting user's system.

Because the exposure exists only during the installation process, users need not upgrade existing Java installations to address the vulnerability. However, Java users who have downloaded any old version of Java prior to 6u113, 7u97 or 8u73 should discard those old downloads and replace them with 6u113, 7u97, 8u73 or later.

As a reminder, Oracle recommends that Java home users visit Java.com to ensure that they are running the most recent version of Java SE and that all older versions of Java SE have been completely removed. Oracle further advises against downloading Java from sites other than Java.com as these sites may be malicious.

For more information, the advisory for Security Alert CVE-2016-0603 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2016-0603-2874360.html


How to get nfs info on 1000 or many hosts using Oracle Enterprise Manager

Arun Bavera - Fri, 2016-02-05 11:27
There was a requirement to get NFS mount information from all the hosts.
Here is one way to get it:

Create an OS job in EM12c with the following text and execute it on all the hosts of interest, assuming you have a common shared mount on all of them.
Otherwise you can create a Metric Extension to collect this info and query the repository (using Configuration Manager or directly) to get it.
 { echo; hostname -f; echo '====================================='; nfsstat -m; echo '====================================='; echo; } >> /nfs_software/nfs_info_PROD.txt



Categories: Development

Parallel DML

Jonathan Lewis - Fri, 2016-02-05 07:02

A recent posting on OTN presented a performance anomaly when comparing a parallel “insert /*+ append */” with a parallel “create table as select”.  The CTAS statement took about 4 minutes, the insert about 45 minutes. Since the process of getting the data into the data blocks would be the same in both cases something was clearly not working properly. Following Occam’s razor, the first check had to be the execution plans – when two statements that “ought” to do the same amount of work take very different times it’s probably something to do with the execution plans – so here are the two statements with their plans:

First the insert, which took 45 minutes:

insert  /*+ append parallel(a,16) */ into    
        dg.tiz_irdm_g02_cc  a
select
        /*+ parallel (a,16) parallel (b,16) */ 
        *
from    tgarstg.tst_irdm_g02_f01 a, 
        tgarstg.tst_irdm_g02_f02 b
where   a.ip_id = b.ip_id
;

------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                 |                  |    13M|    36G|       |   127K  (1)| 00:00:05 |        |      |            |
|   1 |  LOAD AS SELECT                  | TIZ_IRDM_G02_CC  |       |       |       |            |          |        |      |            |
|   2 |   PX COORDINATOR                 |                  |       |       |       |            |          |        |      |            |
|   3 |    PX SEND QC (RANDOM)           | :TQ10002         |    13M|    36G|       |   127K  (1)| 00:00:05 |  Q1,02 | P->S | QC (RAND)  |
|*  4 |     HASH JOIN BUFFERED           |                  |    13M|    36G|   921M|   127K  (1)| 00:00:05 |  Q1,02 | PCWP |            |
|   5 |      PX RECEIVE                  |                  |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,02 | PCWP |            |
|   6 |       PX SEND HASH               | :TQ10000         |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   7 |        PX BLOCK ITERATOR         |                  |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F02 |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | PCWP |            |
|   9 |      PX RECEIVE                  |                  |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,02 | PCWP |            |
|  10 |       PX SEND HASH               | :TQ10001         |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | P->P | HASH       |
|  11 |        PX BLOCK ITERATOR         |                  |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | PCWC |            |
|  12 |         TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F01 |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------------

And here’s the ‘create table’ at 4:00 minutes:

create table dg.tiz_irdm_g02_cc 
nologging 
parallel 16 
compress for query high 
as
select
        /*+ parallel (a,16) parallel (b,16) */ 
        *
from    tgarstg.tst_irdm_g02_f01 a , 
        tgarstg.tst_irdm_g02_f02 b 
where
        a.ip_id = b.ip_id

------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                        | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
------------------------------------------------------------------------------------------------------------------------------------------
|   0 | CREATE TABLE STATEMENT           |                  |    13M|    36G|       |   397K  (1)| 00:00:14 |        |      |            |
|   1 |  PX COORDINATOR                  |                  |       |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)            | :TQ10002         |    13M|    36G|       |   255K  (1)| 00:00:09 |  Q1,02 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT                | TIZ_IRDM_G02_CC  |       |       |       |            |          |  Q1,02 | PCWP |            |
|*  4 |     HASH JOIN                    |                  |    13M|    36G|  1842M|   255K  (1)| 00:00:09 |  Q1,02 | PCWP |            |
|   5 |      PX RECEIVE                  |                  |    13M|    14G|       | 11465   (5)| 00:00:01 |  Q1,02 | PCWP |            |
|   6 |       PX SEND HASH               | :TQ10000         |    13M|    14G|       | 11465   (5)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   7 |        PX BLOCK ITERATOR         |                  |    13M|    14G|       | 11465   (5)| 00:00:01 |  Q1,00 | PCWC |            |
|   8 |         TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F02 |    13M|    14G|       | 11465   (5)| 00:00:01 |  Q1,00 | PCWP |            |
|   9 |      PX RECEIVE                  |                  |    13M|    21G|       | 36706   (3)| 00:00:02 |  Q1,02 | PCWP |            |
|  10 |       PX SEND HASH               | :TQ10001         |    13M|    21G|       | 36706   (3)| 00:00:02 |  Q1,01 | P->P | HASH       |
|  11 |        PX BLOCK ITERATOR         |                  |    13M|    21G|       | 36706   (3)| 00:00:02 |  Q1,01 | PCWC |            |
|  12 |         TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F01 |    13M|    21G|       | 36706   (3)| 00:00:02 |  Q1,01 | PCWP |            |
------------------------------------------------------------------------------------------------------------------------------------------

As you can see, the statements are supposed to operate with degree of parallelism 16, and we were assured that the pre-existing table had been declared as nologging with the same level of compression as that given in the CTAS so, assuming the queries did run with the degree expected, they should take virtually the same amount of time.
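(A hedged aside, not part of the original post: one way to double-check the pre-existing table's degree, logging and compression settings is a quick look at the data dictionary, using the owner and table name from the insert statement above.)

select  degree, logging, compress_for
from    dba_tables
where   owner = 'DG'
and     table_name = 'TIZ_IRDM_G02_CC'
;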

But there’s an important clue in the plan about why there was a difference, and why the difference could be so great. The first statement is DML, the second is DDL. Parallel DDL is enabled automatically; parallel DML has to be enabled explicitly, otherwise the select will run in parallel but the insert will be serialized. Look at operations 1 – 4 of the insert – the query co-ordinator does the “load as select” of the rowsource sent to it by the parallel execution slaves. Not only does this mean that one process (rather than 16) does the insert, you also have all the extra time for the messaging, and the hash join (at line 4) has to be buffered – which means a HUGE amount of data could have been dumped to disc by each slave prior to the join actually taking place and then been read back from disc, joined, and forwarded.
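(A minimal sketch, not from the original post: parallel DML is enabled at session level, so the corrected test would be expected to start with something like this before running the insert shown above.)

alter session enable parallel dml;

-- run the insert /*+ append parallel(a,16) */ ... statement shown above, then:

commit;    -- rows loaded via direct path (append) can't be queried until the commit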

Note that the hash join in the CTAS is not buffered – each slave does the join as the data arrives and writes the result directly to its local segment. Basically the insert could be doing something like twice the I/O of the CTAS (and this is Exadata, so reads from temp can be MUCH slower than the tablescans that supply the data to be joined).

So the OP checked, and found that (although he thought he had enabled parallel DML) he hadn’t actually done so. And after enabling parallel DML the timing was … just as bad. Ooops!! Something else must have gone wrong. Here’s the plan after enabling parallel DML:


--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |                  |    13M|    36G|       |   127K  (1)| 00:00:05 |        |      |            |
|   1 |  PX COORDINATOR                    |                  |       |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)              | :TQ10003         |    13M|    36G|       |   127K  (1)| 00:00:05 |  Q1,03 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT                  | TIZ_IRDM_G02_CC  |       |       |       |            |          |  Q1,03 | PCWP |            |
|   4 |     PX RECEIVE                     |                  |    13M|    36G|       |   127K  (1)| 00:00:05 |  Q1,03 | PCWP |            |
|   5 |      PX SEND RANDOM LOCAL          | :TQ10002         |    13M|    36G|       |   127K  (1)| 00:00:05 |  Q1,02 | P->P | RANDOM LOCA|
|*  6 |       HASH JOIN BUFFERED           |                  |    13M|    36G|   921M|   127K  (1)| 00:00:05 |  Q1,02 | PCWP |            |
|   7 |        PX RECEIVE                  |                  |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,02 | PCWP |            |
|   8 |         PX SEND HASH               | :TQ10000         |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | P->P | HASH       |
|   9 |          PX BLOCK ITERATOR         |                  |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | PCWC |            |
|  10 |           TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F02 |    13M|    14G|       |  5732   (5)| 00:00:01 |  Q1,00 | PCWP |            |
|  11 |        PX RECEIVE                  |                  |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,02 | PCWP |            |
|  12 |         PX SEND HASH               | :TQ10001         |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | P->P | HASH       |
|  13 |          PX BLOCK ITERATOR         |                  |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | PCWC |            |
|  14 |           TABLE ACCESS STORAGE FULL| TST_IRDM_G02_F01 |    13M|    21G|       | 18353   (3)| 00:00:01 |  Q1,01 | PCWP |            |
--------------------------------------------------------------------------------------------------------------------------------------------

As you can see, line 3 has the LOAD AS SELECT after which the slaves message the query co-ordinator – so the DML certainly was parallel, even though it wasn’t any faster. But why is the hash join (line 6) still buffered, and why is there an extra data flow (lines 5 and 4 – PX SEND RANDOM LOCAL / PX RECEIVE)? The hash join has to be buffered because of that extra data flow (which suggests that the buffering and messaging could still be the big problem) – but WHY is the data flow there at all? It shouldn’t be.

At this point I remembered that the first message in the thread had mentioned testing partitioned tables as well as non-partitioned tables – and if you do a parallel insert to a partitioned table, and the data is going to be spread across several partitions, and the number of partitions is not a good match for the degree of parallelism, then you’re likely to see an extra stage of data distribution as Oracle tries to share the data and the partitions as efficiently as possible across slaves. One of the possible distribution methods is “local random” – which is fairly likely to appear if the number of slaves is larger than the number of partitions. This behaviour can be modified with the newer “single distribution” version of the pq_distribute hint. So I asked the OP if their latest test was on a partitioned table, and suggested they insert the hint /*+ pq_distribute(a none) */ just after the parallel hint.
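(For illustration only: based on the original statement, the hinted version would look something like this.)

insert  /*+ append parallel(a,16) pq_distribute(a none) */ into
        dg.tiz_irdm_g02_cc  a
select
        /*+ parallel (a,16) parallel (b,16) */
        *
from    tgarstg.tst_irdm_g02_f01 a,
        tgarstg.tst_irdm_g02_f02 b
where   a.ip_id = b.ip_id
;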

The answer was yes, and the hint had the effect of dropping the run time down to 7 minutes – still not as good as the CTAS, but then the CTAS wasn’t creating a partitioned table so it’s still not a completely fair test. Here’s the (start of the) final plan:

--------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name             | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |    TQ  |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT                   |                  |    13M|    36G|       |   127K  (1)| 00:00:05 |        |      |            |
|   1 |  PX COORDINATOR                    |                  |       |       |       |            |          |        |      |            |
|   2 |   PX SEND QC (RANDOM)              | :TQ10002         |    13M|    36G|       |   127K  (1)| 00:00:05 |  Q1,02 | P->S | QC (RAND)  |
|   3 |    LOAD AS SELECT                  | TIZ_IRDM_G02_CC  |       |       |       |            |          |  Q1,02 | PCWP |            |
|*  4 |     HASH JOIN                      |                  |    13M|    36G|   921M|   127K  (1)| 00:00:05 |  Q1,02 | PCWP |            |

As you can see, we have a hash join that is NOT buffered; we don’t have a third distribution, and the slaves do the data load and then message the query co-ordinator.

It would be interesting to know if there was a significant skew in the data volumes that went into each partition of the partitioned table, and check where the time was spent for both the partitioned insert and the non-partitioned CTAS (and compare with a non-partitioned insert) – but real-world DBAs don’t necessarily have all the time for investigations that I do.

My reference: parallel_dml.sql


Node-oracledb: Avoiding "ORA-01000: maximum open cursors exceeded"

Christopher Jones - Fri, 2016-02-05 05:45

Developers starting out with Node have to get to grips with the 'different' programming style of JavaScript that seems to cause methods to be called when least expected! While you are still in the initial hacking-around-with-node-oracledb phase you may sometimes encounter the error ORA-01000: maximum open cursors exceeded.

Here are some things to do about it:

  • Avoid having too many incompletely processed statements open at one time:

    • Close ResultSets before releasing the connection.

    • If cursors are opened with dbms_sql.open_cursor() in a PL/SQL block, close them before the block returns - except for REF CURSORS being passed back to node-oracledb. (And if a future node-oracledb version supports Oracle Database 12c Implicit Result Sets, these cursors should likewise not be closed in the PL/SQL block)

    • Make sure your application is handling connections and statements in the order you expect.

  • Choose the appropriate Statement Cache size. Node-oracledb has a statement cache per connection. When node-oracledb internally releases a statement it will be put into the statement cache of that connection, but its cursor will remain open. This makes statement re-execution very efficient.

    The cache size is settable with the stmtCacheSize attribute. The appropriate statement cache size you choose will depend on your knowledge of the locality of the statements, and of the resources available to the application: are statements re-executed; will they still be in the cache when they get executed; how many statements do you want to be cached? In rare cases when statements are not re-executed, or are likely not to be in the cache, you might even want to disable the cache to eliminate its management overheads.

    Incorrectly sizing the statement cache will reduce application efficiency. Luckily with Oracle 12.1, the cache can be automatically tuned using an oraaccess.xml file.

    More information on node-oracledb statement caching is here.

  • Don't forget to use bind variables otherwise each variant of the statement will have its own statement cache entry and cursor. With appropriate binding, only one entry and cursor will be needed.

  • Set the database's open_cursors parameter appropriately. This parameter specifies the maximum number of cursors that each "session" (i.e. each node-oracledb connection) can use. When a connection exceeds the value, the ORA-1000 error is thrown. Documentation on open_cursors is here.

    Along with a cursor per entry in the connection's statement cache, any new statements that a connection is currently executing, or ResultSets that haven't been released (in neither situation are these yet cached), will also consume a cursor. Make sure that open_cursors is large enough to accommodate the maximum open cursors any connection may have. The upper bound required is stmtCacheSize + the maximum number of executing statements in a connection.

    Remember this is all per connection. Also cache management happens when statements are internally released. The majority of your connections may use less than open_cursors cursors, but if one connection is at the limit and it then tries to execute a new statement, that connection will get ORA-1000: maximum open cursors exceeded.
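Not from the original article, but as a rough sketch of a DBA-side check: one way to keep an eye on how close sessions get to the limit is to compare the 'opened cursors current' session statistic with the open_cursors parameter.

select s.sid, s.value as cursors_open_now, p.value as open_cursors_limit
from   v$sesstat s, v$statname n, v$parameter p
where  s.statistic# = n.statistic#
and    n.name = 'opened cursors current'
and    p.name = 'open_cursors'
order  by s.value desc;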

OGh DBA / SQL Celebration Day 2016 Keynote Speaker

Marco Gralike - Fri, 2016-02-05 05:24
I am very happy and honored to announce that Mr. Graham Wood (Oracle US) agreed…

Do you even [ Agile | DevOps ] bruh?

Tim Hall - Fri, 2016-02-05 05:01

It seems I can’t turn around without getting myself involved in some discussion about Agile or DevOps these days.


I agree with many of the concepts and the aims of Agile, DevOps, Continuous Delivery etc. I find it hard to believe anyone wouldn’t see value in what they are trying to promote. As always, it is how people interpret and implement them that makes all the difference.

It’s just like religion. They all seem to be pretty sound at heart, but let a few lunatics and fundamentalists loose on them and next thing you know…

Things like Agile and DevOps have arisen to address perceived problems. If your organisation doesn’t suffer from those problems, you may not need to consider them, or you may already be doing something like them without knowing you are. :)

Your company can be agile, without following Scrum or Kanban. You will inevitably have arrived at similar patterns I guess. Likewise, your streamlining of process, automation of testing and deployment, good communication between silos (if present) may leave you wondering what all the DevOps fuss is about.

I am both a fan and hater of Agile and DevOps. I’m a fan of what they are able to achieve when used correctly. I’m a hater of all the bullshit that surrounds them!

Rant over. :)

Cheers

Tim…


Upgrade Oracle Apps (EBS) to 12.2: ORA-01804: failure to initialize timezone information – issue while running AutoConfig

Online Apps DBA - Fri, 2016-02-05 03:23
This entry is part 6 of 6 in the series Oracle EBS 12.2 Upgrade

This post covers an issue running AutoConfig on the DB Tier after upgrading the database, as reported in our Oracle EBS Upgrade R12.2 training (next batch starts on 20th Feb and only limited seats are available; we limit the number of trainees to 15, and we cover Architecture, an Overview of R12.2 & its major features, upgrading to R12.2, the different upgrade paths available to R12.2, best practices for an R12.2 upgrade, how to minimize downtime for an R12.2 upgrade, and difficulties/issues while upgrading to R12.2).

One of the trainees from our previous batch encountered the issue “ORA-01804: failure to initialize timezone information” in the post-database-upgrade (11.2.0.4) step while running AutoConfig.

Note: There are a total of 9 stages in the 12.2 upgrade, and the database upgrade to 11gR2 or 12c is one of those steps.

Issue:

1. Running AutoConfig on the database tier as

$ORACLE_HOME/appsutil/bin/adconfig.sh contextfile=$ORACLE_HOME/appsutil/PROD1211_iamdemo01.xml


Error messages in AutoConfig logs: /u01/oracle/PROD1211_11204/db/tech_st/11.2.0/appsutil/log/PROD1211_iamdemo01/12180900/adconfig.log

SQLPLUS Executable : /u01/oracle/PROD1211_11204/db/tech_st/11.2.0/bin/sqlplus

ERROR:
ORA-01804: failure to initialize timezone information

SP2-0152: ORACLE may not be functioning properly
adcrobj.sh exiting with status 1
ERRORCODE = 1 ERRORCODE_END

Fix:

Apply patch 7651166 (as per the ReadMe instructions) to fix the above issue and run AutoConfig again as

$ORACLE_HOME/appsutil/bin/adconfig.sh contextfile=$ORACLE_HOME/appsutil/PROD1211_iamdemo01.xml

You should see a message “AutoConfig Completed Successfully”


References:

  • EBS 12.1.1: Autoconfig Fails While Running Scripts “afdbprf.sh” and “adcrobj.sh” with 11GR2 Database (Doc ID 1336807.1)
  • AutoConfig On Db Tier Fails With Error – SP2-1503: Unable to initialize Oracle call interface (Doc ID 1187616.1)

If you want to learn more about the Oracle EBS upgrade to R12.2, then click the button below and register for our Oracle Upgrade 12.2 training (next batch starts on 20th February, 2016).

Note: We are so confident in our workshops that we provide a 100% money-back guarantee; in the unlikely case that you are not happy after the first session, just drop us a mail before the second session and we'll refund the full amount.

Oracle E-Business Suite Upgrade to R12.2 Training

Live instructor-led online sessions with hands-on lab exercises, dedicated machines to practice on, and recorded sessions of the training

Click here to learn more with limited time discounts

Stay Tuned for more Information on Oracle Apps 12.2 Upgrade!!

The post Upgrade Oracle Apps (EBS) to 12.2: ORA-01804: failure to initialize timezone information – issue while running AutoConfig appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Put Glance on It

Oracle AppsLab - Fri, 2016-02-05 02:59

Because I live in Portland, I’m often asked if “Portlandia” is accurate.

It is, mostly, and so it seems appropriate to channel an early episode to talk about Glance, our wearables framework.

Actually, Glance has grown beyond wearables to support cars and other devices, the latest of which is Noel’s (@noelportugal) gadget du jour, the LaMetric Time (@smartatoms).

Insert mildly amusing video here.

And of course Noel had to push Glance notifications to LaMetric, because Noel. Pics, it happened.


The text is truncated, and I tried to take a video of the scrolling notification, but it goes a bit fast for the camera. Beyond just the concept, we’ll have to break up the notification to fit LaMetric’s model better, but this was only a few minutes of effort from Noel. I know, because he was sitting next to me while he coded it.

In case you need a refresher, here's Glance on a bunch of other things.


I didn’t have a separate camera so I couldn’t show the Android notification.


We haven’t updated the framework for them, but if you recall, Glance also worked on Google Glass and Pebble in its 1.0 version.



Network multicast support on Azure

Pythian Group - Thu, 2016-02-04 15:07


Today I would like to talk about multicast support on Azure, and how to make it work. While it's not the most frequently required feature in a virtual environment, some applications do need multicast support on the network. The perfect example is Oracle RAC, where multicast is required starting from version 11.2.0.2. In Oracle RAC, multicast is used for highly available IP (HAIP) on the interconnect. If you're thinking about building a training environment with Oracle RAC on Azure, you will need multicast support.

How can we check whether it works? First, you can check whether it's supported by your kernel using the netstat utility.

[root@oradb5 ~]# netstat -g | grep mcast
lo 1 all-systems.mcast.net
eth0 1 all-systems.mcast.net
eth1 1 all-systems.mcast.net

You can see that all my interfaces are ready for multicast support. That's fine, but how can we check whether it works on our network? We can use either the iperf utility or a Perl script created by Oracle. You can download the script from Oracle Support (if you have an account) from the note “How to Validate Network and Name Resolution Setup for the Clusterware and RAC (Doc ID 1054902.1)”.
Here's what I have: two A3-size Azure VMs with Oracle Linux 6, each with two network interfaces. The VM hostnames are oradb5 and oradb6. You can check out my blog on how to make an Azure VM with two network interfaces here. The second interface, eth1, is the one where we are going to enable multicast.

I ran the mcasttest.pl script and saw that:

[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i eth1
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 15:05:23 | Multicast Failed for eth1 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 15:05:53 | Multicast Failed for eth1 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

The output clearly tells us that we don’t have multicast support for either the 230.0.1.0 or the 224.0.0.251 multicast address.

What does the Virtual Network FAQ for Azure tell us about it?
Here is the answer:

Do VNets support multicast or broadcast?
No. We do not support multicast or broadcast.
What protocols can I use within VNets?
You can use standard IP-based protocols within VNets. However, multicast, broadcast, IP-in-IP encapsulated packets and Generic Routing Encapsulation (GRE) packets are blocked within VNets. Standard protocols that work include:
* TCP
* UDP
* ICMP

So, we need a workaround. Luckily we have one. Some time ago, while discussing RAC on Amazon AWS, I was pointed to an article written by my former colleague Jeremiah Wilton, where he described how he worked around the same problem on Amazon. You can read the article here. I decided to give it a try and see if it works for Azure.

We are going to use n2n, a peer-to-peer VPN provided by ntop.
They have mentioned that development of the product has been put on hold, but the tool is still widely used and provides an acceptable solution to our problem. I used Stuart Buckell’s article on how to set it up and it worked for me.
We could just use an already precompiled package, but compiling the utility from source gives us the opportunity to disable encryption and compression, or change any other options.

Here is what I’ve done:
Installed the kernel headers to be able to compile n2n:

[root@oradb5 n2n_v2]# yum install kernel-headers
ol6_UEK_latest | 1.2 kB 00:00
ol6_u4_base | 1.4 kB 00:00
ol6_u4_base/primary | 2.7 MB 00:00
ol6_u4_base 8396/8396
Setting up Install Process
Resolving Dependencies
....

Installed subversion utility:

[root@oradb5 /]# yum install subversion.x86_64
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package subversion.x86_64 0:1.6.11-15.el6_7 will be installed
.............

Downloaded the sources using svn:

[root@oradb5 /]# svn co https://svn.ntop.org/svn/ntop/trunk/n2n
Error validating server certificate for 'https://svn.ntop.org:443':
- The certificate hostname does not match.
Certificate information:
- Hostname: shop.ntop.org
- Valid: from Sun, 15 Nov 2015 00:00:00 GMT until Wed, 14 Nov 2018 23:59:59 GMT
- Issuer: COMODO CA Limited, Salford, Greater Manchester, GB
- Fingerprint: fb:a6:ff:a7:58:f3:9d:54:24:45:e5:a0:c4:04:18:d5:58:91:e0:34
(R)eject, accept (t)emporarily or accept (p)ermanently? p
A n2n/n2n_v1
A n2n/n2n_v1/lzodefs.h
A n2n/n2n_v1/README
...............

Disabled encryption and compression using this article.
Changed directory to n2n/n2n_v2 and compiled it:

[root@oradb5 n2n_v2]# make
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c n2n.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c n2n_keyfile.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c wire.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c minilzo.c
gcc -g3 -Wall -Wshadow -Wpointer-arith -Wmissing-declarations -Wnested-externs -c twofish.c
..............................

Copied the files to the /usr/sbin directory on both of my servers (oradb5 and oradb6):

[root@oradb5 n2n_v2]# cp supernode /usr/sbin/
[root@oradb5 n2n_v2]# cp edge /usr/sbin/

Started the supernode daemon on the first node. We only need it running on one machine, and it can even be a totally different machine. I am using port 1200 for it:

[root@oradb5 ~]# supernode -l 1200
[root@oradb5 ~]#

Started the edge on both servers. On oradb5 I am creating a sub-interface with IP 192.168.1.1 and providing some parameters:
-E – Accept multicast MAC addresses (default=drop).
-r – Enable packet forwarding through n2n community.
-c – n2n community name the edge belongs to.
-l – our supernode address:port.

[root@oradb5 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.1 -E -r

[root@oradb6 ~]# edge -l 10.0.2.11:1200 -c RAC -a 192.168.1.2 -E -r

So we get an edge0 interface on both nodes and can use it for connections requiring multicast:

[root@oradb5 ~]# ifconfig edge0
edge0 Link encap:Ethernet HWaddr 52:CD:8E:20:3D:E5
inet addr:192.168.1.1 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::50cd:8eff:fe20:3de5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:592 (592.0 b)

[root@oradb5 ~]#

On the second box:

[root@oradb6 ~]# ifconfig edge0
edge0 Link encap:Ethernet HWaddr 7E:B1:F1:41:7B:B7
inet addr:192.168.1.2 Bcast:192.168.1.255 Mask:255.255.255.0
inet6 addr: fe80::7cb1:f1ff:fe41:7bb7/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1400 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:500
RX bytes:0 (0.0 b) TX bytes:362 (362.0 b)

[root@oradb6 ~]#

Now we can run our multicast test again for the edge0 interface and see how it works.

[oracle@oradb5 ~]$ cd mcasttest/
[oracle@oradb5 mcasttest]$ ./mcasttest.pl -n oradb5,oradb6 -i edge0
########### Setup for node oradb5 ##########
Checking node access 'oradb5'
Checking node login 'oradb5'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb5'
Distributing mcast2 binary to node 'oradb5'
########### Setup for node oradb6 ##########
Checking node access 'oradb6'
Checking node login 'oradb6'
Checking/Creating Directory /tmp/mcasttest for binary on node 'oradb6'
Distributing mcast2 binary to node 'oradb6'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Nov 24 16:22:12 | Multicast Succeeded for edge0 using address 230.0.1.0:42000

Test for Multicast address 224.0.0.251

Nov 24 16:22:13 | Multicast Succeeded for edge0 using address 224.0.0.251:42001
[oracle@oradb5 mcasttest]$

As you can see, the test has completed successfully. So, the edge0 interface can be used now for any connections requiring multicast support.

In my next article I will show you how to create an Oracle RAC on Azure using the created multicast interface and a shared storage.

Categories: DBA Blogs

Come Visit the OAUX Gadget Lab

Oracle AppsLab - Thu, 2016-02-04 11:28

In September 2014, Oracle Applications User Experience (@usableapps) opened a brand new lab that showcases Oracle’s Cloud Applications, specifically the many innovations that our organization has made to and around Cloud Applications in the past handful of years.

We call it the Cloud User Experience Lab, or affectionately, the Cloud Lab.

Our team has several projects featured in the Cloud Lab, and many of our team members have presented our work to customers, prospects, partners, analysts, internal groups, press, media and even schools and Girl and Boy Scout troops.

In 2015, the Cloud Lab hosted more than 200 such tours, actually quite a bit more, but I don’t have the exact number in front of me.

Beyond the numbers, Jeremy (@jrwashely), our group vice president, has been asked to replicate the Cloud Lab in other places, on the HQ campus and abroad at other Oracle campuses.

People really like it.

In October 2015, we opened an adjoining space to the Cloud Lab that extends the experience to include more hands-on projects. We call this new lab the Gadget Lab, and it features many more of our projects, including several you’ve seen here.

In the Gadget Lab, we’re hoping to get people excited about the possible and give them a glimpse of what our team does because saying “we focus on emerging technologies” isn’t nearly as descriptive as showing our work.


So, the next time you’re at Oracle HQ, sign up for a tour of the Cloud and Gadget Labs and let us know what you think.

An UNDO in a PDB in Oracle 12c?

Pythian Group - Thu, 2016-02-04 10:08


According to the Oracle 12cR1 documentation and concepts, it is 100% clear that there can be only one UNDO tablespace in a multitenant architecture and it is at CDB level; thus, a PDB cannot have any UNDO tablespace.

Are we really sure about that? Let’s test it!

First, we need a PDB with a few tablespaces:


FRED_PDB> select NAME, OPEN_MODE, CON_ID from v$pdbs ;

NAME OPEN_MODE CON_ID
-------------------------------------------------- ---------- ----------
FRED_PDB READ WRITE 4

FRED_PDB> select tablespace_name from dba_tablespaces ;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
TBS_DATA

5 rows selected.

FRED_PDB> show parameter undo

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
temp_undo_enabled boolean FALSE
undo_management string AUTO
undo_retention integer 900
undo_tablespace string UNDOTBS1
FRED_PDB>


There we have an UNDO tablespace named UNDOTBS1 at CDB level and no UNDO at PDB level. Let’s try to create one:

FRED_CDB> create undo tablespace MY_PDB_UNDO ;

Tablespace created.

FRED_CDB>


It worked! Is the Oracle documentation wrong? Let’s verify this weird successful UNDO tablespace creation:

FRED_PDB> select tablespace_name from dba_tablespaces where tablespace_name like '%UNDO%' ;

no rows selected

FRED_PDB> select tablespace_name from dba_tablespaces

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
TEMP
USERS
TBS_DATA

5 rows selected.

FRED_PDB>


No UNDO tablespace has in fact been created, even though no error message was raised by Oracle. Digging into the documentation, this is not a bug but a feature. Indeed, it is well specified that:

When the current container is a PDB, an attempt to create an undo tablespace fails without returning an error.
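(A quick cross-check, not part of the original test: the CDB-wide undo tablespace is visible in the data dictionary from the root container, which is where it actually lives.)

-- from CDB$ROOT rather than the PDB
select tablespace_name
from   dba_tablespaces
where  contents = 'UNDO';   -- expected to return UNDOTBS1, matching the undo_tablespace parameter above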


Please note that this is the behavior of the 12cR1 release; from my side, I think this is a “not yet” feature and we should see some real UNDO tablespaces in PDBs in the next release(s)!

Discover more about our expertise in Oracle

Categories: DBA Blogs

Expert Insights: Oracle NET Troubleshooting for DBAs

Pythian Group - Thu, 2016-02-04 09:21


Troubleshooting connection issues with Oracle SQL*Net can be difficult at times due to the many options that can be taken during configuration. One of the options is where the file tnsnames.ora may be found. There are multiple locations available, and at times there is justification for having more than one copy of the file.

Perhaps there is a hybrid database naming configuration. Say there are a number of company-wide databases that are defined in Oracle OID, OpenLDAP or Active Directory, while local test databases are defined in one or more local tnsnames.ora files.

When one of the databases appears to no longer be available, even though you are quite sure it should be available, it’s good to know the default search order used by Oracle to resolve the name.

The Oracle names resolution default search order for Linux and Windows is explained here:


But wait, there’s more!

You may know that on linux and unix systems tnsnames.ora can be placed in the /etc directory.

Do you know just what happens when /etc/tnsnames.ora is used?  Learn that and more by watching the rest of the presentation.


Discover more about our expertise in Oracle.

Categories: DBA Blogs

RHEL vs OEL: Back in The Cold War?

Pythian Group - Thu, 2016-02-04 07:32


I recently encountered a déjà vu on a client system, something I’ve seen repeatedly over the last couple of years. I’ve decided to write about it to prevent others from tumbling down the same rabbit hole.

The scenario: a Red Hat Enterprise Linux 6 system with a Red Hat support contract on it. A DBA had installed Oracle’s oracle-rdbms-server-12cR1-preinstall RPM package, based on Oracle support note “Linux OS Installation with Reduced Set of Packages for Running Oracle Database Server (Doc ID 728346.1)”, which in the main section simply states:

“For Oracle database 12cR1 running on OL6/RHEL6,use command below to install all packages required for running Oracle software and resolve all dependencies.

yum install oracle-rdbms-server-12cR1-preinstall”

I’ve got a bit of a problem with how that note was written. Let’s take a look at what exactly happens to your RHEL 6 system if you do that. First of all, you have to add Oracle’s yum repo to your yum configuration in order to be able to install that package. I’m a firm believer that you should never mix repositories of different Linux distributions on a production server, but I digress.

Then, when you actually install the RPM:

Resolving Dependencies
--> Running transaction check
---> Package oracle-rdbms-server-12cR1-preinstall.x86_64 0:1.0-14.el6 will be installed
--> Processing Dependency: kernel-uek for package: oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64
--> Processing Dependency: libaio-devel for package: oracle-rdbms-server-12cR1-preinstall-1.0-14.el6.x86_64
--> Running transaction check
---> Package kernel-uek.x86_64 0:2.6.39-400.264.13.el6uek will be installed
--> Processing Dependency: kernel-uek-firmware = 2.6.39-400.264.13.el6uek for package: kernel-uek-2.6.39-400.264.13.el6uek.x86_64
---> Package libaio-devel.x86_64 0:0.3.107-10.el6 will be installed
--> Running transaction check
---> Package kernel-uek-firmware.noarch 0:2.6.39-400.264.13.el6uek will be installed
--> Finished Dependency Resolution

Some DBAs just skip over that section entirely and don’t pay attention to it, but right there Oracle has just installed their own kernel on a RHEL6 system. It’s also been activated and marked as default in grub.conf (which is the norm when installing a kernel RPM):

default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.32-400.37.15.el6uek.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-400.37.15.el6uek.x86_64 ro root=.....
initrd /initramfs-2.6.32-400.37.15.el6uek.x86_64.img

Let that sink in for a minute.

Leaving the system as it is, we’d go ahead with the installation of the Oracle software, start running our database and go into production. If at any point in the future we reboot our server, or if it crashes, we’d suddenly be running the UEK kernel and no longer the Red Hat kernel. There’s also a fairly ugly can of worms awaiting the DBA in question when the SA sees that someone has changed the default kernel.

But the real question is, what would running a different kernel do to us?

Well, Red Hat has an article that’s locked behind a subscriber-only wall. In a nutshell the message it contains is that third party packages are not supported by Red Hat, and third party kernels render all issues unsupported. Fair enough, that makes perfect sense, doesn’t it?

Thus, in essence, we’ve just voided support for our server. If we hit any issue, we’d first have to clean out any Oracle packages that have replaced Red Hat’s – including the kernel – and reboot the machine back into a clean state, or we’d have to go to the maker of our custom kernel for support. That’s clearly not something you’d want to do during a critical issue on a production server.

If we read the aforementioned Oracle support note a bit more closely, way at the bottom in “Remarks”, as if it’s of no importance, we see this:

“RHEL systems registered with RHN or without an registration with an update channel and which should remain RedHat, can generate a primary list of missing dependencies (manually download the oracle-validated rpm package on public-yum):
# rpm -Uhv oracle-validated–.rpm”

“RHEL systems which should remain RedHat”.

Wait, what?

Doesn’t this basically mean that the note isn’t really telling us how to prepare for Oracle database software installation, but instead it’s telling us how to convert from RHEL to OEL? How to move our support contract over to Oracle?

Also note how the “Applies to” section in that particular note specifically does not include RHEL? It simply says “Linux”. This somehow reminds me of a certain horse that a certain city got as a present at some point in the distant past. Neatly packaged and easy to use, but potentially severe long term impact if you’re installing the package.

I’d like to appeal to both Oracle and Red Hat at this point, please folks, make this more clear. Both sides could do better here. There’s really no reason why solution 55413 should be locked behind a pay wall. It’s often the DBAs who are dealing with these packages to prep for a software install, and they often don’t have access to this content. On a similar note, support note 728346.1 also could be written in a much clearer manner to prevent this sort of confusion. Why is the kernel a dependency of that preinstall RPM? There’s absolutely no need for that.

We’re not in a cold war, are we?

TLDR; Don’t mix repositories of different distributions. Don’t install oracle-rdbms-server-12cR1-preinstall on RHEL unless you’re willing to deal with the consequences.

Discover more about our experience in Oracle.

Categories: DBA Blogs

Becky’s BI Apps Corner: OBIA install Perl Script Patching and troubleshooting when they fail.

Rittman Mead Consulting - Thu, 2016-02-04 05:00

During a recent project installing Oracle BI Applications, I became much better acquainted with OPatch, Oracle’s standard tool for managing application patches. By acquainted, I mean how to troubleshoot when OPatch patching fails. Since, at last count, there are around 50 patches total for Oracle BI Applications 11.1.1.9.2, the first patching attempt may not apply all patches successfully. There are any number of reasons for a failure, like an extra slash at the end of a path, a misspelled word, Weblogic or NodeManager still running, or some other reason. We will take a look at the logs for each section, learn where additional logs can be found, and learn how to turn on OPatch debugging to better understand the issue. Then, following the ideas from a previous OPatch post by Robin, I’ll describe how to manually apply the patches with OPatch at the command line for any patches that weren’t applied successfully using the provided perl script.

*Disclaimers – Please read the readme files for patches and follow all Oracle recommendations. Patch numbers are subject to change depending on OS and OBIA versions. Commands and paths here are of the linux/unix variety, but there are similar commands available for Windows OS.

Perl Script patching

Unzip the patch files to a patch folder. I have included the OBIEE patch as well.

unzip pb4biapps_11.1.1.9.2_.zip -d patches/
unzip pb4biapps_11.1.1.9.2_generic_1of2.zip -d patches/
unzip pb4biapps_11.1.1.9.2_generic_2of2.zip -d patches/
unzip p20124371_111170_.zip -d patches/

While installing the Oracle BI Applications versions 11.1.1.7. and up, patches get applied with a perl script called APPLY_PATCHES.pl. Following Oracle’s install documentation for 11.1.1.9 version of Oracle BI Applications here, there is a text file to modify and pass to the perl script. Both the perl script and the text file reside in the following directory: $ORACLE_HOME/biapps/tools/bin. In the text file, called apply_patches_import.txt, parameters are set with the path to the following directories:

JAVA_HOME
INVENTORY_LOC
ORACLE_HOME
MW_HOME
COMMON_ORACLE_HOME
WL_HOME
ODI_HOME
WORKDIR
PATCH_ROOT_DIR
WINDOWS_UNZIP_TOOL_EXE (only needed if running on Windows platforms)

Some pro tips for modifying this text file:
1. Oracle recommends you use the JDK in the ORACLE_BI1 directory.
2. Use ORACLE_BI1 as the ORACLE_HOME.
3. Ensure WORKDIR and PATCH_ROOT_DIR are writeable directories.
4. Don’t add a path separator at the end of the path.
5. Commented lines are safe to remove.

Then you run the APPLY_PATCHES.pl passing in the apply_patches_import.txt. If everything goes well, at the end of the perl script, the results will look similar to the following:

If this is the case, CONGRATULATIONS!!, you can move on to the next step in the install documentation. Thanks for stopping by and come back soon! However, if any patch or group of patches failed, the rest of this post is for you.

Log file location

First, the above patching report does not tell you where to find the logs, regardless of success or failure. If you remember though, you set a WORKDIR path in the text file earlier. In that directory is where you will find the following log files:

  1. final_patching_report.log
  2. biappshiphome_generic_patches.log
  3. odi_generic_patches.log
  4. oracle_common_generic_patches.log
  5. weblogic_patching.log

Open the final_patching_report.log to determine first whether all patches were applied, and to identify the ones that were not successful. For example, looking at this log may show that the Oracle Common patches failed.

cd $WORKDIR
vi final_patching_report.log

However, this doesn’t tell you what caused the failure. Next we will want to look into the oracle_common_generic_patches.log to gather more information.

From the $WORKDIR:

vi oracle_common_generic_patches.log

Here you will see the error: a component is missing. Patch ######## requires component(s) that are not installed in OracleHome. These not-installed components are oracle.jrf.thirdparty.jee:11.1.1.7.0. Notice also that in this log there is a path to another log file location. The path is in the $COMMON_ORACLE_HOME/cfgtoollogs/opatch/ directory. This directory has more detailed logs specific to patches applied to oracle_common. Additionally, there are logs under $ORACLE_HOME/cfgtoollogs/opatch/, $WL_HOME/cfgtoollogs/opatch/, and $ODI_HOME/cfgtoollogs/opatch/. These locations are very helpful to know, so you can find the logs for each group of patches in the same relative path.

Going back to the above error, we are going to open the most recent log file listed in the $COMMON_ORACLE_HOME/cfgtoollogs/opatch/ directory.

cd $COMMON_ORACLE_HOME/cfgtoollogs/opatch/
vi opatch2015-08-08_09-20-58AM_1.log

The beginning of this log file has two very interesting pieces of information to take note of for use later. It has the actual OPatch command used, and it has a path to a Patch History file. Looks like we will have to page down in the file to find the error message.

Now we see our missing component error. Once the error occurs, the java program starts backing out and then starts cleanup by deleting the files extracted earlier in the process. This log does have more detail, but still doesn’t say much about the missing component. After some digging around on the internet, I found a way to get more detailed information out of OPatch by setting export OPATCH_DEBUG=TRUE. After turning OPatch debugging on, run the OPatch command we found earlier that was at the top of the log. A new log file will be generated and we want to open this most recent log file.

Finally, the results now get me detailed information about the component and the failure.

Side Note: If you are getting this specific error, I’ll refer you back to a previous blog post that happened to mention making sure to grab the correct version of OBIEE and ODI. If you have a wrong version of OBIEE or ODI for the Oracle BI Apps version you are installing, unfortunately you won’t start seeing errors until you get to this point.

Manually running Oracle BI Application patches

Normally, the error or reason behind a patch or group of patches failing doesn’t take that level of investigation, and the issue will be identified in the first one or two logs. Once the issue is corrected, there are a couple of options available. Rerunning the perl script is one option, but it will cycle through all of the patches again, even the ones already applied. There is no harm in this, but it does take longer than running the individual patches. The other option is to run the OPatch command at the command line. To do that, first I would recommend setting the variables from the text file. I also added the Oracle_BI1/OPatch directory to the PATH variable.

export JAVA_HOME=$ORACLE_HOME/jdk
export INVENTORY_LOC=
export COMMON_ORACLE_HOME=$MW_HOME/oracle_common
export WL_HOME=$MW_HOME/wlserver_10.3
export SOA_HOME=$MW_HOME/Oracle_SOA1
export ODI_HOME=$MW_HOME/Oracle_ODI1
export WORKDIR=
export PATCH_FOLDER=/patches
export PATH=$ORACLE_HOME/OPatch:$JAVA_HOME/bin:$PATH

Next, unzip the patches in the required directory. For example, the $PATCH_FOLDER/oracle_common/generic might look like this after unzipping files:

Below are the commands for each group of patches:

Oracle Common Patches:

cd $PATCH_FOLDER/oracle_common/generic
unzip "*.zip"

$COMMON_ORACLE_HOME/OPatch/opatch napply $PATCH_FOLDER/oracle_common/generic -silent -oh $COMMON_ORACLE_HOME -id 16080773,16830801,17353546,17440204,18202495,18247368,18352699,18601422,18753914,18818086,18847054,18848533,18877308,18914089,19915810

BIApps Patches:

cd $PATCH_FOLDER/biappsshiphome/generic
unzip "*.zip"

opatch napply $PATCH_FOLDER/biappsshiphome/generic -silent -id 16913445,16997936,19452953,19526754,19526760,19822893,19823874,20022695,20257578

ODI Patches:

cd $PATCH_FOLDER/odi/generic
unzip "*.zip"

/$ODI_HOME/OPatch/opatch napply $PATCH_FOLDER/odi/generic -silent -oh $ODI_HOME -id 18091795,18204886

Operating Specific Patches:

cd $PATCH_FOLDER/
unzip "*.zip"

opatch napply $PATCH_FOLDER/ -silent -id ,,

Weblogic Patches:

cd $PATCH_FOLDER/suwrapper/generic
unzip "*.zip"

cd $PATCH_FOLDER/weblogic/generic

$JAVA_HOME/bin/java -jar $PATCH_FOLDER/suwrapper/generic/bsu-wrapper.jar -prod_dir=$WL_HOME -install -patchlist=JEJW,LJVB,EAS7,TN4A,KPFJ,RJNF,2GH7,W3Q6,FKGW,6AEJ,IHFB -bsu_home=$MW_HOME/utils/bsu -meta=$PATCH_FOLDER/suwrapper/generic/suw_metadata.txt -verbose > $PATCH_FOLDER/weblogic_patching.log

Even though this is a very specific error as an example, understanding the logs and having the break-down of all of the patches will help with any number of patch errors at this step of the Oracle BI Applications installation. I would love to hear your thoughts if you found this helpful or if any part was confusing. Keep an eye out for the next Becky’s BI Apps Corner where I move on from installs and start digging into incremental logic and Knowledge Modules.

The post Becky’s BI Apps Corner: OBIA install Perl Script Patching and troubleshooting when they fail. appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Storage difference between 2 identical Exa boxes. How and why?

Syed Jaffar - Thu, 2016-02-04 04:58
We noticed around a 1.6TB storage difference between two eighth-rack (1/8) Exadata boxes while configuring Data Guard, and wondered what went wrong. The Exa box configured for DR was around 1.6TB short compared to the other Exa box. We verified the lun, physical disk and griddisk status on a cell, which showed active/online status. The tricky part on Exadata is that everything has to be active/online across all cell storage servers. We then figured out that the grid disk status on the 3rd cell storage server was inactive. After making the grid disks active on the 3rd cell server, everything became normal; I mean, the missing 1.6TB of space appeared. When you work with Exadata, you need to verify all cell storage servers to confirm an issue, rather than just querying one cell server.

The MagicBand

Oracle AppsLab - Wed, 2016-02-03 21:59

Editor’s note: Here’s the first post from our new-ish researcher, Tawny. She joined us back in September, just in time for OpenWorld. After her trip to Disney World, she talked eagerly about the MagicBand experience, and if you read here, you know I’m a fan of Disney’s innovative spirit.

Enjoy.

Planning a Disney World trip is no small feat. There are websites that display crowd calendars to help you find the best week to visit and the optimal parks to visit on each of those days so you can take advantage of those magic hours. Traveling with kids? Visiting during the high season? Not sure which FastPass+ to make reservations for?

There are annually updated guidebooks dedicated to providing you the most optimal attraction routes and FastPass+ reservations, based on thousands of data points for each park. Beginning in 2013, Disney introduced the MagicBand, a waterproof bracelet that acts as your entry ticket, FastPass+ holder, hotel key and credit card holder. The bands are part of the MyMagic+ platform, consisting of four main components: MagicBands, FastPass+, My Disney Experience, and PhotoPass Memory Maker. Passborterboard lists everything you can do with a MagicBand.

I got my chance to experience the MagicBand early this January.



These are both open edition bands. This means that they do not have customized lights or sounds at FP+ touchpoints. We bought them at the kiosk at Epcot after enviously looking on at other guests who were conveniently accessing park attractions without having to take out their tickets! It was raining, and the idea of having to take out anything from our bags under our ponchos was not appealing.

The transaction was quick and the cashier kindly linked our shiny new bands to our tickets. Freedom!!!

The band made it easy for us to download photos and souvenirs across all park attractions without having to crowd around the photo kiosk at the end of the day. It was great being able to go straight home to our hotel room while looking through our Disney photo book through their mobile app!

Test Track at Epcot made the most use of the personalization aspect of these bands. Before the ride, guests could build their own cars with the goal of outperforming other cars in 4 key areas: power, turn handling, environmental efficiency and responsiveness.


After test driving our car on the ride, there were still many things we could do with our car such as join a multiplayer team race…we lost :(

What was really interesting was that some guests got to show off personalized entry colors and sounds, a coveted status symbol amongst MagicBand collectors. The noise and colors were a mini attraction on their own! I wish our badge scanners said hi to us like this every morning…


When used in conjunction with the My Disney Experience app, there can be a lot of potential:

  • Order ahead food + scan to pick up or food delivery while waiting in a long line.
  • Heart sensor + head facing camera to take pictures within an attractions to capture happy moments.
  • Haptic feedback to tell you that your table is ready at a restaurant. Those pagers are bulky.

So what about MagicBands for the enterprise context?

Hospitals may benefit, but some argue that the MagicBand model works exclusively for Disney because of its unique ecosystem and the heavy cost it would take to implement it. The concept of the wearable is no different from the badges workers have now.

Depending on the permissions given to the badgeholder, she can badge into any building around the world.

What if we extend our badge capabilities to allow new/current employees to easily find team members to collaborate and ask questions?

What if the badge carried all of your desktop and environmental preferences from one flex office to the desk so you never have to set up or complain about the temperature ever again?

What if we could get a push notification that it’s our cafeteria cashier’s birthday as we’re paying and make their day with a “Happy Birthday?”

That’s something to think about.

Two New Oracle Security Presentations Available

Pete Finnigan - Wed, 2016-02-03 11:50

I attended the UKOUG conference last week, Monday to Wednesday, in Birmingham. This is the first time in three years that it has been back at the ICC in the center of Birmingham. The last two years have seen the....[Read More]

Posted by Pete On 14/12/15 At 08:54 PM

Categories: Security Blogs

Oracle Security Training In York

Pete Finnigan - Wed, 2016-02-03 11:50

We ran a five day Oracle Security training event in York, England from September 21st to September 25th at the Holiday Inn hotel. This proved to be very successful and good fun. The event included back to back teaching by....[Read More]

Posted by Pete On 22/10/15 At 08:49 PM

Categories: Security Blogs

New Presentation - Building Practical Oracle Audit Trails

Pete Finnigan - Wed, 2016-02-03 11:50

I wrote a presentation on designing and building practical audit trails back in 2012 and presented it once and then never again. By chance I did not post the pdf's of these slides at that time. I did though some....[Read More]

Posted by Pete On 01/10/15 At 05:16 PM

Categories: Security Blogs

Protect Your APEX Application PL/SQL Source Code

Pete Finnigan - Wed, 2016-02-03 11:50

Oracle Application Express is a great rapid application development tool where you can write your applications functionality in PL/SQL and create the interface easily in the APEX UI using all of the tools available to create forms and reports and....[Read More]

Posted by Pete On 21/07/15 At 04:27 PM

Categories: Security Blogs