
Feed aggregator

PFCLScan Version 1.3 Released

Pete Finnigan - Tue, 2014-04-15 10:50

We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2014-04-15 10:50

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, new content and more. We are working to release another service update in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2014-04-15 10:50

It has been a few weeks since my last blog post, but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Oracle 12c Security - SQL Translation and Last Logins

Pete Finnigan - Tue, 2014-04-15 10:50

There have been some big new security items added to 12cR1, such as SHA2 in DBMS_CRYPTO, code-based security in PL/SQL, Data Redaction, unified audit and privilege analysis, but also, as I hinted in some previous blogs, there are....[Read More]

Posted by Pete On 31/07/13 At 11:11 AM

Categories: Security Blogs

Hacking Oracle 12c COMMON Users

Pete Finnigan - Tue, 2014-04-15 10:50

The main new feature of Oracle 12cR1 has to be the multitenant architecture that allows tenant databases to be added or plugged into a container database. I am interested in the security of this of course and one element that....[Read More]

Posted by Pete On 23/07/13 At 02:52 PM

Categories: Security Blogs

Oracle Security Loop hole from Steve Karam

Pete Finnigan - Tue, 2014-04-15 10:50

I just saw a link to a post by Steve Karam on an ISACA list and went for a look. The post is titled "Password Verification Security Loophole". This is an interesting post discussing the fact that ALTER....[Read More]

Posted by Pete On 22/07/13 At 08:39 PM

Categories: Security Blogs

Supercharge your Applications with Oracle WebLogic

All enterprises use an application server, but the question is why they need one. The answer is that they need to deliver applications and software to just about any device...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Why is Affinity Mask Negative in sp_configure?

Pythian Group - Tue, 2014-04-15 07:56

While looking at a SQL Server health report, I found the affinity mask parameter in the sp_configure output showing a negative value.

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

Affinity mask is a SQL Server configuration option which is used to assign processors to specific threads for improved performance. To know more about affinity mask, read this. Usually, the value for affinity mask is a positive integer (in decimal format) in sp_configure. The article in the previous link shows an example of a binary bit mask and the corresponding decimal value to be set in sp_configure.
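To illustrate how such a bit mask maps to processors, here is a small Python sketch (the mask value below is made up for the example, not taken from any real server): each set bit selects one CPU for the scheduler.

```python
# Illustrative only: decode which CPUs an affinity bit mask selects.
# Bit i set in the mask means the scheduler may use CPU i.
mask = 0b100110  # hypothetical mask, decimal 38

cpus = [i for i in range(32) if mask & (1 << i)]
print(mask, cpus)  # 38 [1, 2, 5]
```

So setting affinity mask to decimal 38 would bind SQL Server to CPUs 1, 2 and 5.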

 

I was curious to find out why the value of affinity mask could be negative, since according to BOL (http://technet.microsoft.com/en-us/library/ms187104(v=sql.105).aspx):

 

The values for affinity mask are as follows:

          · A one-byte affinity mask covers up to 8 CPUs in a multiprocessor computer.

          · A two-byte affinity mask covers up to 16 CPUs in a multiprocessor computer.

          · A three-byte affinity mask covers up to 24 CPUs in a multiprocessor computer.

          · A four-byte affinity mask covers up to 32 CPUs in a multiprocessor computer.

          · To cover more than 32 CPUs, configure a four-byte affinity mask for the first 32 CPUs and up to a four-byte affinity64 mask for the remaining CPUs.

 

Time to unfold the mystery. Windows Server 2008 R2 supports more than 64 logical processors. From ERRORLOG, I see there are 40 logical processors on the server:

 

2014-03-31 18:18:18.18 Server      Detected 40 CPUs. This is an informational message; no user action is required.

 

Further, going down in the ERRORLOG I see this server has four NUMA nodes configured.

 

Processor affinity turned on: node 0, processor mask 0x0000000000001c00.

Processor affinity turned on: node 1, processor mask 0x0000000000000007.

Processor affinity turned on: node 2, processor mask 0x0000000000700000.

Processor affinity turned on: node 3, processor mask 0x00000001c0000000.

:

Node configuration: node 0: CPU mask: 0x00000000000ffc00:0 Active CPU mask: 0x0000000000001c00:0.

Node configuration: node 1: CPU mask: 0x00000000000003ff:0 Active CPU mask: 0x0000000000000007:0.

Node configuration: node 2: CPU mask: 0x000000003ff00000:0 Active CPU mask: 0x0000000000700000:0.

Node configuration: node 3: CPU mask: 0x000000ffc0000000:0 Active CPU mask: 0x00000001c0000000:0.

 

These were hard NUMA nodes; no soft NUMA was configured on the server (no related registry keys exist).

 

An important thing to note is that the affinity mask value for sp_configure ranges from -2147483648 to 2147483647, which is 2147483648 + 2147483647 + 1 = 4294967296 = 2^32 distinct values, the range of the int data type. Hence the affinity mask value from sp_configure is not big enough to hold a mask for more than 32 CPUs. To deal with this, ALTER SERVER CONFIGURATION was introduced in SQL Server 2008 R2 to support setting the processor affinity for more than 64 CPUs. However, the value of affinity mask in sp_configure, in such cases, is still an *adjusted* value, which we are going to work out below.

 

Let me paste the snippet from ERRORLOG again:

 

Processor affinity turned on: node 0, processor mask 0x0000000000001c00.

Processor affinity turned on: node 1, processor mask 0x0000000000000007.

Processor affinity turned on: node 2, processor mask 0x0000000000700000.

Processor affinity turned on: node 3, processor mask 0x00000001c0000000.

 

As it says, the values above are the processor masks, i.e. the processor affinity or affinity mask. These values correspond to online_scheduler_mask in sys.dm_os_nodes, which makes up the ultimate value for affinity mask in sp_configure. Ideally, affinity mask should be the sum of these values. Let's add these hexadecimal values using Windows Calculator (choose Programmer from the View menu):

 

  0x0000000000001c00

+ 0x0000000000000007

+ 0x0000000000700000

+ 0x00000001c0000000

--------------------

= 0x00000001C0701C07

 

= 7523539975 (decimal)

 

So, affinity mask in sp_configure should have been equal to 7523539975. Since this number is greater than the limit of 2^32, i.e. 4294967296, we see an *adjusted* (and here negative) value. The reason I call it an *adjusted* value is that the sum of the processor mask values (in decimal) is repeatedly reduced by the int range, i.e. 4294967296, until it falls within the signed int range (at or below 2147483647). Here is an example which explains the theory:

 

7523539975 - 4294967296 - 4294967296 = -1066394617 = the negative value seen in sp_configure

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

That explains why affinity mask shows up as a negative number in sp_configure.
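The same adjustment can be reproduced outside SQL Server. Here is a small Python sketch using the node masks from the ERRORLOG above: summing them and reinterpreting the total as a signed 32-bit integer yields exactly the value sp_configure reports.

```python
# Sum the per-node processor affinity masks from the ERRORLOG, then
# interpret the total as a signed 32-bit integer, which is effectively
# what the int-typed affinity mask in sp_configure reports.
node_masks = [0x0000000000001C00, 0x0000000000000007,
              0x0000000000700000, 0x00000001C0000000]

total = sum(node_masks)
print(hex(total), total)      # 0x1c0701c07 7523539975

adjusted = total % 2**32      # wrap into 32 bits
if adjusted > 2**31 - 1:      # reinterpret as signed
    adjusted -= 2**32
print(adjusted)               # -1066394617
```

The modulo-and-shift pair is equivalent to subtracting 4294967296 twice, as in the manual calculation above.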

 

To make the calculation easier, I wrote a small script to find out the sp_configure equivalent value of affinity mask in the case of NUMA nodes:


-- Find out the sp_configure equivalent value of affinity mask in case of NUMA nodes
--------------------------------------------------------------------------------------
BEGIN
    DECLARE @real_value bigint;               -- to hold the sum of online_scheduler_mask
    DECLARE @range_value bigint = 4294967296; -- range of the int datatype, i.e. 2^32
    DECLARE @config_value int = 0;            -- value of affinity mask as in sp_configure output; set later

    -- Fetch the sum of online_scheduler_mask, excluding node id 64, i.e. the hidden scheduler
    SET @real_value = ( SELECT SUM(online_scheduler_mask)
                        FROM sys.dm_os_nodes
                        WHERE memory_node_id <> 64
                      );

    -- Calculate the value for affinity mask as reported by sp_configure
    WHILE (@real_value > 2147483647)
    BEGIN
        SET @real_value = @real_value - @range_value;
    END;

    -- Copy the value for affinity mask as seen in sp_configure
    SET @config_value = @real_value;
    PRINT 'The current config_value for affinity mask parameter in sp_configure is: ' + CAST(@config_value AS varchar);
END;

This script will give the current config value for affinity mask in any of these cases: NUMA nodes, more than 64 processors, SQL Server 2008 R2.

 

Hope this post will help you if you were as puzzled as I was when seeing the negative number in sp_configure.

 

Stay tuned!

Categories: DBA Blogs

Delivering the Moments of Engagement Across the Enterprise

WebCenter Team - Tue, 2014-04-15 07:00

 Delivering Moments of Engagement Across the Enterprise

A Five Step Roadmap for Mobilizing a Digital Business

Geoffrey Bock, Principal, Bock & Company
Michael Snow, Principal Product Marketing Director, Oracle WebCenter

Over the past few years, we have been fascinated by the impact of mobility on business. As employees, partners, and customers, we now carry powerful devices in our pockets and handbags. Our smartphones and tablets are always with us, always on, and always collecting information. We are no longer tethered to fixed work places; we can frequently find essential information with just a few taps and swipes. More and more, this content is keyed to our current context. Moreover, we often are immersed in an array of sensors that track our actions, personalize the results, and assist us in innumerable ways. Our business and social worlds are in transition. This is not the enterprise computing environment of the 1990’s or even the last decade.

Yet while tracking trends with the mobile industry, we have encountered a repeated refrain from many technology and business leaders. Sure, mobile apps are neat, they say. But how do you justify the investments required? What are the business benefits of enterprise mobility? When should companies harness the incredible opportunities of the mobile revolution?

To answer these questions, we think that it is important to recognize the steps along the mobile journey. Certainly companies have been investing in their enterprise infrastructure for many years. In fact, enterprise-wide mobility is just the latest stage in the development of digital business initiatives.

What is at stake is not simply introducing nifty mobile apps as access points to existing enterprise applications. The challenge is weaving innovative digital technologies (including mobile) into the fabric (and daily operations) of an organization. Companies become digital businesses by adapting and transforming essential enterprise activities. As they mobilize key business experiences, they drive digital capabilities deeply into their application infrastructure.

Please join us for a conversation about how Oracle customers are making this mobile journey, our five-step roadmap for delivering the moments of engagement across the enterprise.

Editors note: This webcast is now available On-Demand



Creating some Pivotal Cloud Foundry (PCF) PHD services

Pas Apicella - Tue, 2014-04-15 06:52
After installing the PHD add-on for Pivotal Cloud Foundry 1.1, I quickly created some development services for PHD using the CLI as shown below.

[Tue Apr 15 22:40:08 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hawq-cf free dev-hawq
Creating service dev-hawq in org pivotal / space development as pas...
OK
[Tue Apr 15 22:42:31 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hbase-cf free dev-hbase
Creating service dev-hbase in org pivotal / space development as pas...
OK
[Tue Apr 15 22:44:10 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hive-cf free dev-hive
Creating service dev-hive in org pivotal / space development as pas...
OK
[Tue Apr 15 22:44:22 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-yarn-cf free dev-yarn
Creating service dev-yarn in org pivotal / space development as pas...
OK
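The same four calls can also be scripted rather than typed one by one. A minimal sketch (service plans and instance names as above; printing keeps this a dry run, so nothing here assumes a live Cloud Foundry target):

```python
# Build the four `cf create-service` commands used above.
# Replace print with subprocess.run(cmd, check=True) to execute them.
services = ["hawq", "hbase", "hive", "yarn"]
for svc in services:
    cmd = ["cf", "create-service", f"p-hd-{svc}-cf", "free", f"dev-{svc}"]
    print(" ".join(cmd))  # e.g. cf create-service p-hd-hawq-cf free dev-hawq
```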

Finally, use the web console to browse the services in the "Development" space.


Categories: Fusion Middleware

OBIEE Security: Usage Tracking, Logging and Auditing for SYSLOG or Splunk

Enabling OBIEE Usage Tracking and Logging is a key part of most any security strategy. More information on these topics can be found in the whitepaper references below. It is very easy to setup logging such that a centralized logging solution such as SYSLOG or Splunk can receive OBIEE activity.

Usage Tracking

Knowing who ran what report, when and with what parameters is helpful not only for performance tuning but also for security. OBIEE 11g provides a sample RPD with a Usage Tracking subject area. The subject area will report on configuration and changes to the RPD as well as configuration changes to Enterprise Manager.  To start using the functionality, one of the first steps is to copy the components from the sample RPD to the production RPD.

Usage tracking can also be redirected to log files. The STORAGE_DIRECTORY setting is in the NQSConfig.INI file. This can be set if OBIEE usage logs are being sent, for example, to a centralized SYSLOG database.

The Usage Tracking sample RPD can be found here:

{OBIEE_11G_Instance}/bifoundation/OracleBIServerComponent/coreapplication_obis1/sample/usagetracking

Logging

OBIEE offers standard functionality for application level logging.  This logging should be considered as one component of the overall logging approach and strategy. The operating system and database(s) supporting OBIEE should be using a centralized logging solution (most likely syslog) and it is also possible to parse the OBIEE logs for syslog consolidation.

For further information on OBIEE logging refer to the Oracle Fusion Middleware System Administrator’s Guide for OBIEE 11g (part number E10541-02), chapter eight.

To configure OBIEE logging, the BI Admin client tool is used to set the overall default log level for the RPD as well as identify specific users to be logged. The log level can differ among users. No logging is possible for a role.

Logging Levels are set between zero and seven.

Level 0 - No logging

Level 1 - Logs the SQL statement issued from the client application

Level 2 - All of level 1, plus OBIEE infrastructure information and query statistics

Level 3 - All of level 2, plus cache information

Level 4 - All of level 3, plus query plan execution

Level 5 - All of level 4, plus intermediate row counts

Levels 6 & 7 - Not used

 

OBIEE log files

  • OPMN: debug.log, in ORACLE_INSTANCE/diagnostics/logs/OPMN/opmn

  • OPMN: opmn.log, in ORACLE_INSTANCE/diagnostics/logs/OPMN/opmn

  • BI Server: nqserver.log, in ORACLE_INSTANCE/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1

  • BI Server Query (the Oracle BI Server query log): nqquery<n>.log, where <n> is a date and timestamp, for example nqquery-20140109-2135.log, in ORACLE_INSTANCE/diagnostics/logs/OracleBIServerComponent/coreapplication

  • BI Cluster Controller: nqcluster.log, in ORACLE_INSTANCE/diagnostics/logs/OracleBIClusterControllerComponent/coreapplication_obiccs1

  • Oracle BI Scheduler: nqscheduler.log, in ORACLE_INSTANCE/diagnostics/logs/OracleBISchedulerComponent/coreapplication_obisch1

  • Usage Tracking: NQAcct.yyyymmdd.hhmmss.log, in the location given by the STORAGE_DIRECTORY parameter in the Usage Tracking section of the NQSConfig.INI file

  • Presentation Services: sawlog*.log (for example, sawlog0.log), in ORACLE_INSTANCE/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1. The configuration of this log (e.g. the writer setting to output to syslog or the Windows event log) is set in instanceconfig.xml.

  • BI JavaHost: jh.log, in ORACLE_INSTANCE/diagnostics/logs/OracleBIJavaHostComponent/coreapplication_objh1

 

If you have questions, please contact us at info@integrigy.com

 -Michael Miller, CISSP-ISSMP

References

 

Tags: Oracle Business Intelligence (OBIEE), Auditor, IT Security
Categories: APPS Blogs, Security Blogs

WordPress 3.8.3 – Auto Update

Tim Hall - Tue, 2014-04-15 01:53

WordPress 3.8.3 came out yesterday. It’s a small maintenance release, with the downloads and changelog in the usual places. For many people, this update will happen automatically and they’ll just receive an email to say it has been applied.

I’m still not sure what to make of the auto-update feature of WordPress. Part of me likes it and part of me is a bit irritated by it. For the lazy folks out there, I think it is a really good idea, but for those who are on their blog admin screens regularly it might seem like a source of confusion. I currently self-host 5 WordPress blogs and the auto-update feature seems a little erratic. One blog always auto-updates as soon as a new release comes out. A couple sometimes do. I don’t think this blog has ever auto-updated…

I’d be interested to hear if other self-hosting WordPress bloggers have had a similar experience…

Cheers

Tim…

WordPress 3.8.3 – Auto Update was first posted on April 15, 2014 at 8:53 am.

Database experts try to mitigate the effects of the Heartbleed bug

Chris Foot - Tue, 2014-04-15 01:44

Recently, the Heartbleed Bug has sent a rift through global economic society. The personal information of online shoppers, social media users and business professionals is at risk and database administration providers are doing all they can to either prevent damage from occurring or mitigate detrimental effects of what has already occurred. 

What it does and the dangers involved
According to Heartbleed.com, the vulnerability poses a serious threat to confidential information, as it compromises the protection that Open Secure Sockets Layer/Transport Layer Security technology provides for Internet-based communications. The bug allows anyone on the Web – particularly cybercriminals – to view the memory of the systems protected by affected versions of the OpenSSL software, allowing attackers to monitor a wide array of transactions between individuals, governments and enterprises and numerous other connections.

Jeremy Kirk, a contributor to PCWorld, noted that researchers at CloudFlare, a San Francisco-based security company, found that hackers could steal SSL/TLS private keys and use them to create an encrypted avenue between users and websites, essentially posing as legitimate webpages in order to decrypt traffic passing between a computer and a server. For online retailers lacking adequate database support services, it could mean the divulgence of consumer credit card numbers. If customers no longer feel safe in purchasing products online, it could potentially result in the bankruptcy of a merchandiser.

Think mobile devices are safe? Think again 
Now more than ever, database experts are making concentrated efforts to effectively monitor communications between mobile devices and business information. As the Heartbleed Bug can compromise connections between PCs and websites, the same risk is involved for those with mobile applications bridging the distance between smartphones and Facebook pages. CNN reported that technology industry leaders Cisco and Juniper claimed that someone can potentially hack into a person's phone and log the details of his or her conversations. Sam Bowling, a senior infrastructure engineer at web hosting service Singlehop, outlined several devices that could be compromised:

  • Cisco revealed that select versions of the company's WebEx service are vulnerable, posing a threat to corporate leaders in a video conference. 
  • If work phones aren't operating behind a defensive firewall, a malicious entity could use Heartbleed to access the devices' memory logs. 
  • Smartphone users accessing business files from iPhones and Android devices may be exposed, as hackers can view whatever information a person obtained through select applications. 

Upgraded deployments of OpenSSL are patching vulnerable avenues, but remote database services are still exercising assiduous surveillance to ensure that client information remains confidential.

Oracle TNS_ADMIN issues due to bad environment settings

Yann Neuhaus - Mon, 2014-04-14 18:11

Recently, I faced a TNS resolution problem at a customer site. The root cause was a bad environment setting: the customer called the service desk because a DBLINK was pointing to the wrong database.

The users were supposed to be redirected to a development database, but the DBLINK was redirecting them to a validation database instead. The particularity of the environment is that the development and validation databases run on the same server, but from different Oracle homes, each home having its own tnsnames.ora. Both tnsnames.ora files contain common alias names, but pointing to different databases. Not exactly best practice, but that is not the topic here.

The problem started with some difficulty reproducing the case: our service desk could not reproduce the situation until we understood that the customer was accessing the database remotely via a development tool (through the listener), while we were connected locally on the server.

Let me present the case with my environment.

First, this is the database link concerned by the issue:

 

SQL> select * from dba_db_links;
OWNER      DB_LINK              USERNAME                       HOST       CREATED
---------- -------------------- ------------------------------ ---------- ---------
PUBLIC     DBLINK               DBLINK                         MYDB       21-MAR-14

 

And this is the output when we try to display the instance name through the DBLINK, when connected locally:

 

SQL> select instance_name from v$instance@DBLINK;
INSTANCE_NAME
----------------
DB2

 

The user is redirected on the remote database, as expected. Now, let's see what happens when connected using the SQL*Net layer:

 

[oracle@srvora01 ~]$ sqlplus system@DB1
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 24 10:07:45 2014
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
Enter password:
 
Connected to:
 
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
 
SQL> select instance_name from v$instance@DBLINK;
INSTANCE_NAME
----------------
DB1

 

Here we can see that the user is not redirected to the same database (here, for demonstration purposes, to the database itself).

The first thing to check is the TNS_ADMIN variable, if it exists:

 

[oracle@srvora01 ~]$ echo $TNS_ADMIN
/u00/app/oracle/product/11.2.0/db_3_0/network/admin

 

There is the content of the tnsnames.ora file on that location:

 

[oracle@srvora01 ~]$ cat /u00/app/oracle/product/11.2.0/db_3_0/network/admin/tnsnames.ora
DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = srvora01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DB1)
    )
  )
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = srvora01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = DB2)
    )
  )

 

Clearly, we have a problem with the TNS resolution. The local connection resolves the MYDB alias correctly, while the remote connection resolves the alias to a different database. In this case, there are two possible explanations:

  • The tnsnames.ora is not well configured: this is not the case, as you can see above
  • Another tnsnames.ora file exists somewhere on the server and is used by remote connections

To confirm that the second hypothesis is the correct one, we can use the strace tool:

 

SQL> set define #
SQL> select spid from v$process p join v$session s on p.addr=s.paddr and s.sid=sys_context('userenv','sid');
SPID
------------------------
5578

 

SQL>  host strace -e trace=open -p #unix_pid & echo $! > .tmp.pid
Enter value for unix_pid: 5578
SQL> Process 5578 attached - interrupt to quit

 

SQL> select instance_name from v$instance@DBLINK;
open("/u00/app/oracle/product/11.2.0/db_3_0/network/admin/tnsnames.ora", O_RDONLY) = 8
open("/etc/host.conf", O_RDONLY)        = 8
open("/etc/resolv.conf", O_RDONLY)      = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 10
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 10
open("/etc/hostid", O_RDONLY)           = -1 ENOENT (No such file or directory)
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 10

INSTANCE_NAME
----------------
DB2

 

The DBLINK is resolved using the file /u00/app/oracle/product/11.2.0/db_3_0/network/admin/tnsnames.ora.

Now, when connected remotely:

 

SQL> set define #
SQL> select spid from v$process p join v$session s on p.addr=s.paddr and s.sid=sys_context('userenv','sid');
SPID
------------------------
6838

 

SQL> host strace -e trace=open -p #unix_pid & echo $! > .tmp.pid
Enter value for unix_pid: 6838
SQL> Process 6838 attached - interrupt to quit

 

SQL> select instance_name from v$instance@DBLINK;
open("/u00/app/oracle/network/admin/tnsnames.ora", O_RDONLY) = 8
open("/etc/host.conf", O_RDONLY)        = 8
open("/etc/resolv.conf", O_RDONLY)      = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 9
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 9
open("/etc/hostid", O_RDONLY)           = -1 ENOENT (No such file or directory)
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 9

INSTANCE_NAME
----------------
DB1

 

Here the DBLINK is resolved with the file /u00/app/oracle/network/admin/tnsnames.ora.

 

Two different tnsnames.ora files are used according to the connection method! If we query the content of the second tnsnames.ora, we have an explanation for our problem:

 

[oracle@srvora01 ~]$ cat /u00/app/oracle/network/admin/tnsnames.ora
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = srvora01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = DB1)
    )
  )

 

It is not clearly documented by Oracle, but the database session can inherit the environment variables in three different ways:

  • When you connect locally to the server (no SQL*Net, no listener), the Oracle session inherits the client environment
  • When you connect remotely to a service statically registered on the listener, the Oracle session inherits the environment which started the listener
  • When you connect remotely to a service dynamically registered on the listener, the Oracle session inherits the environment which started the database

In our case, the database was restarted with a wrong TNS_ADMIN value set. The database then registered this value for remote connections. We can check this with the following method:

 

[oracle@srvora01 ~]$ ps -ef | grep pmon
oracle    3660     1  0 09:02 ?        00:00:00 ora_pmon_DB1
oracle    4006     1  0 09:05 ?        00:00:00 ora_pmon_DB2
oracle    6965  3431  0 10:44 pts/1    00:00:00 grep pmon

 

[oracle@srvora01 ~]$ strings /proc/3660/environ | grep TNS_ADMIN
TNS_ADMIN=/u00/app/oracle/network/admin

 

Note that we can also get the value of TNS_ADMIN using dbms_system.get_env.
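The /proc-based check can also be scripted. Here is a minimal Python sketch equivalent to the strings/grep approach above (Linux only; note that /proc/<pid>/environ shows the environment the process was started with, not later changes):

```python
# Read a variable from a process's *initial* environment on Linux,
# equivalent to: strings /proc/<pid>/environ | grep TNS_ADMIN
# Entries in /proc/<pid>/environ are NUL-separated KEY=VALUE pairs.
def get_proc_env(pid, name):
    with open(f"/proc/{pid}/environ", "rb") as f:
        entries = f.read().split(b"\0")
    for entry in entries:
        key, sep, value = entry.partition(b"=")
        if sep and key == name.encode():
            return value.decode()
    return None

# e.g. get_proc_env(3660, "TNS_ADMIN") for the pmon process above
```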

The solution was to restart the database with the correct TNS_ADMIN value:

 

[oracle@srvora01 ~]$ echo $TNS_ADMIN
/u00/app/oracle/product/11.2.0/db_3_0/network/admin

 

[oracle@srvora01 ~]$ sqlplus / as sysdba
 
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 24 10:46:03 2014
 
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
 
Connected to:
 
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

 

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

 

SQL> startup
ORACLE instance started.

Total System Global Area 1570009088 bytes
Fixed Size                  2228704 bytes
Variable Size            1023413792 bytes
Database Buffers          536870912 bytes
Redo Buffers                7495680 bytes
Database mounted.
Database opened.

 

[oracle@srvora01 ~]$ ps -ef | grep pmon
oracle    4006     1  0 09:05 ?        00:00:00 ora_pmon_DB2
oracle    7036     1  0 10:46 ?        00:00:00 ora_pmon_DB1
oracle    7116  3431  0 10:46 pts/1    00:00:00 grep pmon

 

[oracle@srvora01 ~]$ strings /proc/7036/environ | grep TNS_ADMIN
TNS_ADMIN=/u00/app/oracle/product/11.2.0/db_3_0/network/admin

 

The value for TNS_ADMIN is now correct.

 

[oracle@srvora01 ~]$ sqlplus system@DB1
 
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 24 10:47:21 2014
 
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
Enter password:
 
Connected to:
 
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

 

SQL> select instance_name from v$instance@DBLINK;
INSTANCE_NAME
----------------
DB2

 

Remote connections are now using the right tnsnames.ora.

I hope this will help you with your TNS resolution problems.

JasperReportsIntegration - Full path requirement and workaround

Dietmar Aust - Mon, 2014-04-14 16:38
I have just posted an answer to a question about what seems like a bug in the JasperReportsIntegration toolkit: you have to use absolute paths when referencing images or subreports, which is typically a bad thing.

I don't know exactly why it doesn't work, but there is a workaround: http://www.opal-consulting.de/site/jasperreportsintegration-full-path-requirement-and-workaround/

Hope that helps,
~Dietmar.

Announcement: Oracle Database Backup & Oracle Storage Cloud

Jean-Philippe Pinte - Mon, 2014-04-14 16:01
Oracle announces the availability of the Oracle Database Backup and Oracle Storage Cloud services.
More information:

First blogpost at my own hosted wordpress instance

Dietmar Aust - Mon, 2014-04-14 15:50
I have been blogging at daust.blogspot.com for quite some years now ... and many people have preferred WordPress to Blogspot. I can now understand why :).

It is quite flexible and easy to use and there are tons of themes available ... really cool ones.

In the end, the main decision to host my own WordPress instance was motivated by something else: I have created two products and they needed a platform on which to be presented.
First I wanted to buy a new theme from Themeforest and build an APEX theme from it ... but this is a lot of work.

I then decided to host my content using WordPress, since I had already bought a new theme: http://www.kriesi.at/themedemo/?theme=enfold

And this one has a really easy setup procedure for wordpress and comes with a ton of effects and wizards, cool page designer, etc.

Hopefully this will get me motivated to post more frequently ... we will see ;).

Cheers,
~Dietmar.

OpenSSL Heartbleed (CVE-2014-0160) and Oracle E-Business Suite Impact

Integrigy has completed an in-depth security analysis of the "Heartbleed" vulnerability in OpenSSL (CVE-2014-0160) and its impact on Oracle E-Business Suite 11i (11.5) and R12 (12.0, 12.1, and 12.2) environments. The key issue is where in the environment the SSL termination point is, both for internal and external communication between the client browser and application servers.

1.  If the SSL termination point is the Oracle E-Business Suite application servers, then the environment is not vulnerable as stated in Oracle's guidance (Oracle Support Note ID 1645479.1 “OpenSSL Security Bug-Heartbleed” [support login required]).

2.  If the SSL termination point is a load balancer or reverse proxy, then the Oracle E-Business Suite environment MAY BE VULNERABLE to the Heartbleed vulnerability.  Environments using load balancers, like F5 Big-IP, or reverse proxies, such as Apache mod_proxy or BlueCoat, may be vulnerable depending on software versions.

Integrigy's detailed analysis of use of OpenSSL in Oracle E-Business Environments is available here -

OpenSSL Heartbleed (CVE-2014-0160) and the Oracle E-Business Suite Impact Analysis

Please let us know if you have any questions or need additional information at info@integrigy.com.

Tags: Vulnerability, Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

Integrigy Collaborate 2014 Presentations

Integrigy had a great time at Collaborate 2014 last week in Las Vegas.  What did not stay in Las Vegas were many great sessions and a lot of good information on Oracle E-Business Suite 12.2, Oracle Security, and OBIEE.  Posted below are the links to the three papers that Integrigy presented.

If you have questions about our presentations, or any questions about OBIEE and E-Business Suite security, please contact us at info@integrigy.com

References

Tags: Oracle Database, Oracle E-Business Suite, Oracle Business Intelligence (OBIEE)
Categories: APPS Blogs, Security Blogs

Parallel Execution Skew – Demonstrating Skew

Randolf Geist - Mon, 2014-04-14 12:42
This is just a short notice that the next part of the mini-series "Parallel Execution Skew" is published at AllThingsOracle.com