
Feed aggregator

April 2014 Critical Patch Update Released

Oracle Security Team - Tue, 2014-04-15 14:04

Hello, this is Eric Maurice again.

Oracle today released the April 2014 Critical Patch Update.  This Critical Patch Update provides fixes for 104 vulnerabilities across a number of product lines including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Supply Chain Product Suite, Oracle iLearning, Oracle PeopleSoft Enterprise, Oracle Siebel CRM, Oracle Java SE, Oracle and Sun Systems Products Suite, Oracle Linux and Virtualization, and Oracle MySQL.  A number of the vulnerabilities fixed in this Critical Patch Update have high CVSS Base Scores and are highlighted in this blog entry.  Oracle recommends that this Critical Patch Update be applied as soon as possible.

Out of the 104 vulnerabilities fixed in the April 2014 Critical Patch Update, 2 were for the Oracle Database.  The most severe of these database vulnerabilities received a CVSS Base Score of 8.5 for the Windows platform to denote a full compromise of the targeted system, although successful exploitation of this bug requires authentication by the malicious attacker.  On other platforms (e.g., Linux, Solaris), the CVSS Base Score is 6.0, because a successful compromise would be limited to the Database and would not extend to the underlying Operating System.  Note that Oracle reports this kind of vulnerability with the ‘Partial+’ value for Confidentiality, Integrity, and Availability impact (Partial+ is used when the exploit affects a wide range of resources, e.g., all database tables).  Oracle applies the CVSS 2.0 standard strictly, and as a result the Partial+ value does not inflate the CVSS Base Score (CVSS only provides for ‘None,’ ‘Partial,’ or ‘Complete’ to report the impact of a bug).  This custom value is intended to call customers’ attention to the potential impact of the specific vulnerability and to let them manually increase the severity rating if they see fit.  For more information about Oracle’s use of CVSS, see http://www.oracle.com/technetwork/topics/security/cvssscoringsystem-091884.html.
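To illustrate how such scores are derived under CVSS 2.0, here is a small worked example in SQL (a sketch only: the vector AV:N/AC:M/Au:S/C:C/I:C/A:C used below is a hypothetical combination that happens to work out to 8.5, not the published vector for any particular bug):

-- CVSS 2.0: BaseScore = round(((0.6 * Impact) + (0.4 * Exploitability) - 1.5) * f(Impact), 1)
-- where Impact = 10.41 * (1 - (1-C)*(1-I)*(1-A)) and Exploitability = 20 * AV * AC * Au
SELECT ROUND(
         (  0.6 * (10.41 * (1 - (1 - 0.66) * (1 - 0.66) * (1 - 0.66)))  -- C:C, I:C, A:C (0.66 each)
          + 0.4 * (20 * 1.0 * 0.61 * 0.56)                              -- AV:N (1.0), AC:M (0.61), Au:S (0.56)
          - 1.5
         ) * 1.176                                                      -- f(Impact) = 1.176 when Impact > 0
       , 1) AS cvss_base_score
FROM dual;  -- returns 8.5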

This Critical Patch Update also provides fixes for 20 Fusion Middleware vulnerabilities.  The highest CVSS Base Score for these Fusion Middleware vulnerabilities is 7.5, which applies to a vulnerability in Oracle WebLogic Server (CVE-2014-2470) that is remotely exploitable without authentication.  If successfully exploited, this vulnerability can result in a wide compromise of the targeted WebLogic Server (a Partial+ rating for Confidentiality, Integrity, and Availability; see the previous discussion about the meaning of the ‘Partial+’ value reported by Oracle).

Also included in this Critical Patch Update were fixes for 37 Java SE vulnerabilities.  4 of these Java SE vulnerabilities received a CVSS Base Score of 10.0.  29 of these 37 vulnerabilities affected client-only deployments, while 6 affected client and server deployments of Java SE.  Rounding out this count were one vulnerability affecting the Javadoc tool and one affecting unpack200.  As a reminder, desktop users, including home users, can leverage the Java Autoupdate feature or visit Java.com to ensure that they are running the most recent version of Java.  Java SE security fixes delivered through the Critical Patch Update program are cumulative.  In other words, running the most recent version of Java provides users with the protection resulting from all previously-released security fixes.  Oracle strongly recommends that Java users, particularly home users, keep up with Java releases and remove obsolete versions of Java SE, so as to protect themselves against malicious exploitation of Java vulnerabilities.

This Critical Patch Update also included fixes for 5 vulnerabilities affecting the Oracle Linux and Virtualization products suite.  The most severe of these vulnerabilities received a CVSS Base Score of 9.3; this vulnerability (CVE-2013-6462) affects certain versions of Oracle Secure Global Desktop.

Due to the relative severity of a number of the vulnerabilities fixed in this Critical Patch Update, Oracle strongly recommends that customers apply this Critical Patch Update as soon as possible.  In addition, as previously discussed, Oracle does not test unsupported products, releases and versions for the presence of vulnerabilities addressed by each Critical Patch Update.  However, it is often the case that earlier versions of affected releases are affected by vulnerabilities fixed in recent Critical Patch Updates.  As a result, it is highly desirable that organizations running unsupported versions, for which security fixes are no longer available under Oracle Premier Support, update their systems to a currently-supported release so as to fully benefit from Oracle’s ongoing security assurance effort.

For more information:

The April 2014 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpuapr2014-1972952.html

More information about Oracle’s application of the CVSS scoring system is located at http://www.oracle.com/technetwork/topics/security/cvssscoringsystem-091884.html

An Ovum white paper “Avoiding security risks with regular patching and support services” is located at http://www.oracle.com/us/corporate/analystreports/ovum-avoiding-security-risks-1949314.pdf

More information about Oracle Software Security Assurance, including details about Oracle’s secure development and ongoing security assurance practices is located at http://www.oracle.com/us/support/assurance/overview/index.html

The details of the Common Vulnerability Scoring System (CVSS) are located at http://www.first.org/cvss/cvss-guide.html

Java desktop users can verify that they are running the most recent version of Java and remove older versions of Java by visiting http://java.com/en/download/installed.jsp.

 

 

Contributions by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Tue, 2014-04-15 13:52
Contributions by Angela Golla, Infogram Deputy Editor

Mark Hurd’s Latest Blog Explains Why Customer-Obsessed Marketing Is Your Next Competitive Edge



Oracle President Mark Hurd has posted his latest LinkedIn Influencer blog, “Customer-Obsessed Marketing Is Your Next Competitive Edge.” 
Mark Hurd, President, Oracle

In this new blog, Mark writes, “Marketing executives are leading the charge to convince their organizations of the inherent danger in today’s highly digitized buyer-seller relationship. And they’re doing that by proving that ‘your customers are only one click away from your competitors’ is more than just a clever phrase—it’s the difference between being a market leader and going out of business.
 
"The good news is that as marketing executives strive to develop new customer-engagement models, to optimize multiple channels formerly in conflict and generate new revenue streams, they now have access to world-class marketing-automation tools, which have the potential to keep more prospects from making that one-click jump to a competitor…

Frequently Misused Metrics in Oracle

Steve Karam - Tue, 2014-04-15 13:43

Back in March of last year I wrote an article on the five frequently misused metrics in Oracle: These Aren’t the Metrics You’re Looking For.

To sum up, my five picks for the most misused metrics were:


  1. db file scattered read – Scattered reads aren’t always full table scans, and they’re certainly not always bad.
  2. Parse to Execute Ratio – This is not a metric that shows how often you’re hard parsing, no matter how many times you may have read otherwise (see the sketch after this list).
  3. Buffer Hit Ratio – I want to love this metric, I really do. But it’s an advisory one at best, horribly misleading at worst.
  4. CPU % – You license Oracle by CPU. You should probably make sure you’re making the most of your processing power, not trying to reduce it.
  5. Cost – No, not money. Optimizer cost. Oracle’s optimizer might be cost based, but you are not. Tune for time and resources, not Oracle’s own internal numbers.
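For item 2, here is a minimal sketch (querying v$sysstat) of what the figure actually compares: total parse calls, soft and hard together, against execute calls. The statistic to watch when hard parsing is the real question is 'parse count (hard)':

SELECT ROUND(100 * (1 - p.value / e.value), 2) AS execute_to_parse_pct,  -- the AWR-style ratio
       h.value                                 AS hard_parses            -- what people usually mean
FROM   v$sysstat p, v$sysstat e, v$sysstat h
WHERE  p.name = 'parse count (total)'
AND    e.name = 'execute count'
AND    h.name = 'parse count (hard)';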

Version after version, day after day, these don’t change much.

Anyways, I wanted to mention, for those who aren’t aware, that I created a slideshow based on that blog for RMOUG 2014 (which, sadly, I was unable to attend at the last moment). Have a look and let me know what you think!

Metric Abuse: Frequently Misused Metrics in Oracle

Have you ever committed metric abuse? Gone on a performance tuning snipe hunt? Spent time tuning something that, in the end, didn’t even really have an impact? I’d love to hear your horror stories.

Also while you’re at it, have a look at the Sin of Band-Aids, and what temporary tuning fixes can do to a once stable environment.

And lastly, keep watching #datachat on Twitter and keep an eye out for an update from Confio on today’s #datachat on Performance Tuning with host Kyle Hailey!

The post Frequently Misused Metrics in Oracle appeared first on Oracle Alchemist.

Links to External Articles and Interviews

Michael Feldstein - Tue, 2014-04-15 11:41

Last week I was off the grid (not just lack of Internet but also lack of electricity), but thanks to publishing cycles I managed to stay artificially productive: two blog posts and one interview for an article.

Last week brought news of a new study on textbooks for college students, this time from a research arm of the National Association of College Stores. The report, “Student Watch: Attitudes and Behaviors toward Course Materials, Fall 2013”, seems to throw some cold water on the idea of digital textbooks based on the press release summary [snip]

While there is some useful information in this survey, I fear that the press release is missing some important context. Namely, how can students prefer something that is not really available?

March 28, 2014 may well go down as the turning point where Big Data lost its placement as a silver bullet and came down to earth in a more productive manner. Triggered by a March 14 article in Science Magazine that identified “big data hubris” as one of the sources of the well-known failures of Google Flu Trends,[1] there were five significant articles in one day on the disillusionment with Big Data. [snip]

Does this mean Big Data is over and that education will move past this over-hyped concept? Perhaps Mike Caulfield from the Hapgood Blog stated it best, including adding the education perspective . . .

This is the fun one for me, as I finally have my youngest daughter’s interest (you made Buzzfeed!). Buzzfeed has added a new education beat focusing on the business of education.

The public debut last week of education technology company 2U, which partners with nonprofit and public universities to offer online degree programs, may have looked like a harbinger of IPO riches to come for companies that, like 2U, promise to disrupt the traditional education industry. At least that’s what the investors and founders of these companies want to believe. [snip]

“We live in a post-Facebook area where startups have this idea that they can design a good product and then just grow, grow, grow,” said Phil Hill, an education technology consultant and analyst. “That’s not how it actually works in education.”

 

The post Links to External Articles and Interviews appeared first on e-Literate.

Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - Tue, 2014-04-15 10:50

I will be co-chairing/hosting a twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here. The chat is done over twitter so it is a little like the Oracle security round table sessions....[Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

PFCLScan Reseller Program

Pete Finnigan - Tue, 2014-04-15 10:50

We are going to start a reseller program for PFCLScan and we have started the planning and recruitment process for this program. I have just posted a short blog on the PFCLScan website titled "PFCLScan Reseller Program". If....[Read More]

Posted by Pete On 29/10/13 At 01:05 PM

Categories: Security Blogs

PFCLScan Version 1.3 Released

Pete Finnigan - Tue, 2014-04-15 10:50

We released version 1.3 of PFCLScan, our enterprise database security scanner for Oracle, a week ago. I have just posted a blog entry on the PFCLScan product site blog that describes some of the highlights of the over 220 new....[Read More]

Posted by Pete On 18/10/13 At 02:36 PM

Categories: Security Blogs

PFCLScan Updated and Powerful features

Pete Finnigan - Tue, 2014-04-15 10:50

We have just updated PFCLScan, our company's database security scanner for Oracle databases, to version 1.2 and added some new features, some new content and more. We are working to release another service update also in the next couple....[Read More]

Posted by Pete On 04/09/13 At 02:45 PM

Categories: Security Blogs

Oracle Security Training, 12c, PFCLScan, Magazines, UKOUG, Oracle Security Books and Much More

Pete Finnigan - Tue, 2014-04-15 10:50

It has been a few weeks since my last blog post, but don't worry, I am still interested in blogging about Oracle 12c database security and indeed have nearly 700 pages of notes in MS Word related to 12c security....[Read More]

Posted by Pete On 28/08/13 At 05:04 PM

Categories: Security Blogs

Oracle 12c Security - SQL Translation and Last Logins

Pete Finnigan - Tue, 2014-04-15 10:50

There have been some big new security items added to 12cR1, such as SHA2 in DBMS_CRYPTO, code based security in PL/SQL, Data Redaction, unified audit or even privilege analysis, but also as I hinted in some previous blogs there are....[Read More]

Posted by Pete On 31/07/13 At 11:11 AM

Categories: Security Blogs

Hacking Oracle 12c COMMON Users

Pete Finnigan - Tue, 2014-04-15 10:50

The main new feature of Oracle 12cR1 has to be the multitenant architecture that allows tenant databases to be added or plugged into a container database. I am interested in the security of this of course and one element that....[Read More]

Posted by Pete On 23/07/13 At 02:52 PM

Categories: Security Blogs

Oracle Security Loop hole from Steve Karam

Pete Finnigan - Tue, 2014-04-15 10:50

I just saw a link to a post by Steve Karam on an ISACA list and went for a look. The post is titled "Password Verification Security Loophole". This is an interesting post discussing the fact that ALTER....[Read More]

Posted by Pete On 22/07/13 At 08:39 PM

Categories: Security Blogs

Supercharge your Applications with Oracle WebLogic

All enterprises use an application server, but the question is why they need one. The answer: they need to deliver applications and software to just about any device...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Why is Affinity Mask Negative in sp_configure?

Pythian Group - Tue, 2014-04-15 07:56

While looking at a SQL Server health report, I found the affinity mask parameter in the sp_configure output showing a negative value.

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

Affinity mask is a SQL Server configuration option used to bind SQL Server threads to specific processors for improved performance. To know more about affinity mask, read this. Usually, the value for affinity mask is a positive integer (in decimal format) in sp_configure. The article at the previous link shows an example of a binary bit mask and the corresponding decimal value to be set in sp_configure.
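As a refresher, here is how the option is normally set (a hedged sketch; try this on a test instance only, since the change takes effect as soon as RECONFIGURE runs). A binary mask of 00001111, decimal 15, binds the SQL Server schedulers to CPUs 0 through 3:

-- 'affinity mask' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- bitmask 00001111 = decimal 15 = CPUs 0, 1, 2 and 3
EXEC sp_configure 'affinity mask', 15;
RECONFIGURE;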

 

I was curious to find out why the value of affinity mask could be negative, given what BOL says: http://technet.microsoft.com/en-us/library/ms187104(v=sql.105).aspx

 

The values for affinity mask are as follows:

  • A one-byte affinity mask covers up to 8 CPUs in a multiprocessor computer.

  • A two-byte affinity mask covers up to 16 CPUs in a multiprocessor computer.

  • A three-byte affinity mask covers up to 24 CPUs in a multiprocessor computer.

  • A four-byte affinity mask covers up to 32 CPUs in a multiprocessor computer.

  • To cover more than 32 CPUs, configure a four-byte affinity mask for the first 32 CPUs and up to a four-byte affinity64 mask for the remaining CPUs.

 

Time to unfold the mystery. Windows Server 2008 R2 supports more than 64 logical processors. From ERRORLOG, I see there are 40 logical processors on the server:

 

2014-03-31 18:18:18.18 Server      Detected 40 CPUs. This is an informational message; no user action is required.

 

Further, going down in the ERRORLOG I see this server has four NUMA nodes configured.

 

Processor affinity turned on: node 0, processor mask 0x0000000000001c00.

Processor affinity turned on: node 1, processor mask 0x0000000000000007.

Processor affinity turned on: node 2, processor mask 0x0000000000700000.

Processor affinity turned on: node 3, processor mask 0x00000001c0000000.

:

Node configuration: node 0: CPU mask: 0x00000000000ffc00:0 Active CPU mask: 0x0000000000001c00:0.

Node configuration: node 1: CPU mask: 0x00000000000003ff:0 Active CPU mask: 0x0000000000000007:0.

Node configuration: node 2: CPU mask: 0x000000003ff00000:0 Active CPU mask: 0x0000000000700000:0.

Node configuration: node 3: CPU mask: 0x000000ffc0000000:0 Active CPU mask: 0x00000001c0000000:0.

 

These were hard NUMA nodes; no soft NUMA was configured on the server (no related registry keys exist).

 

An important thing to note is that the affinity mask value for sp_configure ranges from -2147483648 to 2147483647, i.e. 2147483648 + 2147483647 + 1 = 4294967296 = 2^32 values, the range of the int data type.  Hence the affinity mask value in sp_configure cannot describe more than 32 CPUs (the separate affinity64 mask option covers CPUs 33 through 64).  To deal with this, ALTER SERVER CONFIGURATION was introduced in SQL Server 2008 R2 to support setting the processor affinity for more than 64 CPUs.  However, the value of affinity mask in sp_configure, in such cases, is still an *adjusted* value, as we are going to find out below.

 

Let me paste the snippet from ERRORLOG again:

 

Processor affinity turned on: node 0, processor mask 0x0000000000001c00.

Processor affinity turned on: node 1, processor mask 0x0000000000000007.

Processor affinity turned on: node 2, processor mask 0x0000000000700000.

Processor affinity turned on: node 3, processor mask 0x00000001c0000000.

 

As shown above, these values are the processor masks, i.e. the processor affinity or affinity mask, for each node.  These values correspond to online_scheduler_mask in sys.dm_os_nodes, which makes up the ultimate value for affinity mask in sp_configure.  Ideally, affinity mask should be the sum of these values.  Let's add these hexadecimal values using Windows Calculator (choose Programmer from the View menu):

 

  0x0000000000001c00

+ 0x0000000000000007

+ 0x0000000000700000

+ 0x00000001c0000000

--------------------

= 0x00000001C0701C07

 

= 7523539975 (decimal)

 

So, affinity mask in sp_configure should have been equal to 7523539975.  Since this number is greater than the limit of 2^32, i.e. 4294967296, we see an *adjusted* value (apparently a negative one).  The reason I call it an *adjusted* value is that the sum of the processor mask values (in decimal) is adjusted: the int range, i.e. 4294967296, is subtracted repeatedly until the result fits within the int data type, i.e. between -2147483648 and 2147483647.  Here is an example which demonstrates the theory:

 

7523539975 - 4294967296 - 4294967296 = -1066394617 = the negative value seen in sp_configure

name                                minimum     maximum     config_value run_value
----------------------------------- ----------- ----------- ------------ -----------
affinity mask                       -2147483648 2147483647  -1066394617  -1066394617

That explains why affinity mask shows up as a negative number in sp_configure.

 

To make the calculation easier, I wrote a small script to find out the sp_configure equivalent value of affinity mask in the case of NUMA nodes:

               


-- Find out the sp_configure equivalent value of affinity mask in case of NUMA nodes
--------------------------------------------------------------------------------------

BEGIN

DECLARE @real_value bigint;                 -- to hold the sum of online_scheduler_mask
DECLARE @range_value bigint = 4294967296;   -- range of int datatype i.e. 2^32
DECLARE @config_value int = 0;              -- default value of affinity mask in sp_configure output; set below

-- Fetch the sum of the online scheduler masks, excluding node id 64 i.e. the hidden scheduler
SET @real_value = ( SELECT SUM(online_scheduler_mask) AS online_scheduler_mask
                    FROM sys.dm_os_nodes
                    WHERE memory_node_id <> 64 );

-- Calculate the value for affinity mask as reported by sp_configure
WHILE (@real_value > 2147483647)
BEGIN
    SET @real_value = (@real_value - @range_value);
END;

-- Copy the value for affinity mask as seen in sp_configure
SET @config_value = @real_value;

PRINT 'The current config_value for affinity_mask parameter in sp_configure is: ' + CAST(@config_value AS varchar);

END;

This script will give the current config value for SQL Server in any of these cases: NUMA nodes, more than 64 processors, SQL Server 2008 R2.

 

Hope this post will help you if you were as puzzled as I was on seeing the negative number in sp_configure.

 

Stay tuned!

Categories: DBA Blogs

Delivering the Moments of Engagement Across the Enterprise

WebCenter Team - Tue, 2014-04-15 07:00

 Delivering Moments of Engagement Across the Enterprise

A Five Step Roadmap for Mobilizing a Digital business

Geoffrey Bock, Principal, Bock & Company
Michael Snow, Principal Product Marketing Director, Oracle WebCenter

Over the past few years, we have been fascinated by the impact of mobility on business. As employees, partners, and customers, we now carry powerful devices in our pockets and handbags. Our smartphones and tablets are always with us, always on, and always collecting information. We are no longer tethered to fixed work places; we can frequently find essential information with just a few taps and swipes. More and more, this content is keyed to our current context. Moreover, we often are immersed in an array of sensors that track our actions, personalize the results, and assist us in innumerable ways. Our business and social worlds are in transition. This is not the enterprise computing environment of the 1990’s or even the last decade.

Yet while tracking trends with the mobile industry, we have encountered a repeated refrain from many technology and business leaders. Sure, mobile apps are neat, they say. But how do you justify the investments required? What are the business benefits of enterprise mobility? When should companies harness the incredible opportunities of the mobile revolution?

To answer these questions, we think that it is important to recognize the steps along the mobile journey. Certainly companies have been investing in their enterprise infrastructure for many years. In fact, enterprise-wide mobility is just the latest stage in the development of digital business initiatives.

What is at stake is not simply introducing nifty mobile apps as access points to existing enterprise applications. The challenge is weaving innovative digital technologies (including mobile) into the fabric (and daily operations) of an organization. Companies become digital businesses by adapting and transforming essential enterprise activities. As they mobilize key business experiences, they drive digital capabilities deeply into their application infrastructure.

Please join us for a conversation this Thursday (04/17/14 @ 10AM PST) about how Oracle customers are making this mobile journey, and about our five-step roadmap for delivering the moments of engagement across the enterprise.




Creating some Pivotal Cloud Foundry (PCF) PHD services

Pas Apicella - Tue, 2014-04-15 06:52
After installing the PHD add-on for Pivotal Cloud Foundry 1.1, I quickly created some development services for PHD using the CLI, as shown below.

[Tue Apr 15 22:40:08 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hawq-cf free dev-hawq
Creating service dev-hawq in org pivotal / space development as pas...
OK
[Tue Apr 15 22:42:31 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hbase-cf free dev-hbase
Creating service dev-hbase in org pivotal / space development as pas...
OK
[Tue Apr 15 22:44:10 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-hive-cf free dev-hive
Creating service dev-hive in org pivotal / space development as pas...
OK
[Tue Apr 15 22:44:22 papicella@:~/vmware/pivotal/products/cloud-foundry ] $ cf create-service p-hd-yarn-cf free dev-yarn
Creating service dev-yarn in org pivotal / space development as pas...
OK

Finally, I used the web console to browse the services in the "Development" space.
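The same check can be done from the CLI (a quick sketch; "my-app" is a hypothetical application name used for illustration):

# List the service instances in the current org/space
cf services

# Bind one of the new instances to an app and restage so the app
# picks up the service credentials in VCAP_SERVICES
cf bind-service my-app dev-hive
cf restage my-app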


Categories: Fusion Middleware

OBIEE Security: Usage Tracking, Logging and Auditing for SYSLOG or Splunk

Enabling OBIEE Usage Tracking and Logging is a key part of almost any security strategy. More information on these topics can be found in the whitepaper references below. It is very easy to set up logging such that a centralized logging solution such as SYSLOG or Splunk can receive OBIEE activity.

Usage Tracking

Knowing who ran what report, when and with what parameters is helpful not only for performance tuning but also for security. OBIEE 11g provides a sample RPD with a Usage Tracking subject area. The subject area will report on configuration and changes to the RPD as well as configuration changes to Enterprise Manager.  To start using the functionality, one of the first steps is to copy the components from the sample RPD to the production RPD.

Usage tracking can also be redirected to log files. The STORAGE_DIRECTORY setting is in the NQSConfig.INI file. This can be set if OBIEE usage logs are being sent, for example, to a centralized SYSLOG database.
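A minimal sketch of the relevant section of NQSConfig.INI (parameter names as in 11g; the directory below is a placeholder to point at whatever location your syslog or Splunk forwarder watches):

[USAGE_TRACKING]
ENABLE = YES;
DIRECT_INSERT = NO;                               # NO = write flat files instead of inserting into the usage tracking table
STORAGE_DIRECTORY = "/u01/obiee/usage_tracking";  # placeholder path
CHECKPOINT_INTERVAL_MINUTES = 5;
FILE_ROLLOVER_INTERVAL_MINUTES = 30;
CODE_PAGE = "ANSI";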

The User Tracking Sample RPD can be found here:

{OBIEE_11G_Instance}/bifoundation/OracleBIServerComponent/coreapplication_obis1/sample/usagetracking

Logging

OBIEE offers standard functionality for application-level logging.  This logging should be considered one component of the overall logging approach and strategy. The operating system and database(s) supporting OBIEE should use a centralized logging solution (most likely syslog), and it is also possible to parse the OBIEE logs for syslog consolidation.

For further information on OBIEE logging refer to the Oracle Fusion Middleware System Administrator’s Guide for OBIEE 11g (part number E10541-02), chapter eight.

To configure OBIEE logging, the BI Admin client tool is used to set the overall default log level for the RPD as well as identify specific users to be logged. The log level can differ among users. No logging is possible for a role.

Logging levels are set between zero and seven:

Level 0 - No logging

Level 1 - Logs the SQL statement issued from the client application

Level 2 - All of level 1 plus OBIEE infrastructure information and query statistics

Level 3 - All of level 2 plus cache information

Level 4 - All of level 3 plus query plan execution

Level 5 - All of level 4 plus intermediate row counts

Levels 6 and 7 - Not used
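Beyond the per-user default set in the BI Admin tool, the level can also be raised for a single request; a common trick (a sketch, using standard OBIEE logical SQL) is to set the LOGLEVEL request variable in the Advanced SQL prefix of an analysis:

-- Entered in the Prefix field on the Advanced tab of an analysis
SET VARIABLE LOGLEVEL = 2;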

 

OBIEE log files

  • OPMN: debug.log, in ORACLE_INSTANCE/diagnostics/logs/OPMN/opmn

  • OPMN: opmn.log, in ORACLE_INSTANCE/diagnostics/logs/OPMN/opmn

  • BI Server: nqserver.log, in ORACLE_INSTANCE/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1

  • BI Server query log: nqquery<n>.log, where <n> is a date and timestamp, for example nqquery-20140109-2135.log, in ORACLE_INSTANCE/diagnostics/logs/OracleBIServerComponent/coreapplication_obis1

  • BI Cluster Controller: nqcluster.log, in ORACLE_INSTANCE/diagnostics/logs/OracleBIClusterControllerComponent/coreapplication_obiccs1

  • Oracle BI Scheduler: nqscheduler.log, in ORACLE_INSTANCE/diagnostics/logs/OracleBISchedulerComponent/coreapplication_obisch1

  • Usage Tracking: NQAcct.yyyymmdd.hhmmss.log; the STORAGE_DIRECTORY parameter in the Usage Tracking section of the NQSConfig.INI file determines the location of usage tracking logs

  • Presentation Services: sawlog*.log (for example, sawlog0.log), in ORACLE_INSTANCE/diagnostics/logs/OracleBIPresentationServicesComponent/coreapplication_obips1; the configuration of this log (e.g. the writer setting to output to syslog or the Windows event log) is set in instanceconfig.xml

  • BI JavaHost: jh.log, in ORACLE_INSTANCE/diagnostics/logs/OracleBIJavaHostComponent/coreapplication_objh1
 

If you have questions, please contact us at info@integrigy.com

 -Michael Miller, CISSP-ISSMP

References

 

Tags: Oracle Business Intelligence (OBIEE), Auditor, IT Security
Categories: APPS Blogs, Security Blogs

WordPress 3.8.3 – Auto Update

Tim Hall - Tue, 2014-04-15 01:53

WordPress 3.8.3 came out yesterday. It’s a small maintenance release, with the downloads and changelog in the usual places. For many people, this update will happen automatically and they’ll just receive an email to say it has been applied.

I’m still not sure what to make of the auto-update feature of WordPress. Part of me likes it and part of me is a bit irritated by it. For the lazy folks out there, I think it is a really good idea, but for those who are on their blog admin screens regularly it might seem like a source of confusion. I currently self-host 5 WordPress blogs and the auto-update feature seems a little erratic. One blog always auto-updates as soon as a new release comes out. A couple sometimes do. I don’t think this blog has ever auto-updated…
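For anyone who wants the behaviour to be deterministic rather than erratic, core auto-updates can be pinned in wp-config.php; a minimal sketch (this constant has existed since WordPress 3.7):

// In wp-config.php: control core auto-updates explicitly.
// true    = allow all core updates (major and minor)
// false   = disable core auto-updates entirely
// 'minor' = minor/maintenance/security releases only (the default)
define( 'WP_AUTO_UPDATE_CORE', 'minor' );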

I’d be interested to hear if other self-hosting WordPress bloggers have had a similar experience…

Cheers

Tim…


Database experts try to mitigate the effects of the Heartbleed bug

Chris Foot - Tue, 2014-04-15 01:44

Recently, the Heartbleed bug has sent a shockwave through the global online economy. The personal information of online shoppers, social media users and business professionals is at risk, and database administration providers are doing all they can to either prevent damage from occurring or mitigate the detrimental effects of what has already occurred.

What it does and the dangers involved
According to Heartbleed.com, the vulnerability poses a serious threat to confidential information, as it compromises the protection that OpenSSL's Secure Sockets Layer/Transport Layer Security (SSL/TLS) implementation provides for Internet-based communications. The bug allows anyone on the Web – particularly cybercriminals – to read the memory of systems protected by affected versions of the OpenSSL software, allowing attackers to monitor a wide array of transactions between individuals, governments and enterprises.

Jeremy Kirk, a contributor to PCWorld, noted that researchers at CloudFlare, a San Francisco-based security company, found that hackers could steal a server's SSL/TLS private key and use it to impersonate the site, essentially posing as a legitimate webpage in order to decrypt traffic passing between a computer and a server. For online retailers lacking adequate database support services, it could mean the divulgence of consumer credit card numbers. If customers no longer feel safe in purchasing products online, it could potentially result in the bankruptcy of a merchandiser.

Think mobile devices are safe? Think again 
Now more than ever, database experts are making concentrated efforts to effectively monitor communications between mobile devices and business information. As the Heartbleed Bug can compromise connections between PCs and websites, the same risk is involved for those with mobile applications bridging the distance between smartphones and Facebook pages. CNN reported that technology industry leaders Cisco and Juniper claimed that someone can potentially hack into a person's phone and log the details of his or her conversations. Sam Bowling, a senior infrastructure engineer at web hosting service Singlehop, outlined several devices that could be compromised:

  • Cisco revealed that select versions of the company's WebEx service are vulnerable, posing a threat to corporate leaders in a video conference. 
  • If work phones aren't operating behind a defensive firewall, a malicious entity could use Heartbleed to access the devices' memory logs. 
  • Smartphone users accessing business files from iPhones and Android devices may be exposed, as hackers can view whatever information a person obtained through select applications. 

Upgraded deployments of OpenSSL are patching vulnerable avenues, but remote database services are still exercising assiduous surveillance in order to ensure that client information remains confidential.
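As a quick first check (a sketch assuming a Linux host with the openssl command-line tool), an administrator can confirm which release is installed:

# Print the installed OpenSSL release; 1.0.1 through 1.0.1f are affected by
# Heartbleed, 1.0.1g is fixed (vendor builds may backport the fix).
openssl version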

Oracle TNS_ADMIN issues due to bad environment settings

Yann Neuhaus - Mon, 2014-04-14 18:11

Recently, I faced a TNS resolution problem at a customer. The cause was a bad environment setting: the customer had called the service desk because a DBLINK was pointing to the wrong database.

The users were supposed to be redirected to a development database, and the DBLINK was redirecting to a validation database instead. The particularity of the environment is that the development and validation databases are running on the same server, but on different Oracle homes, each home having its own tnsnames.ora. Both tnsnames.ora files contain common alias names, but pointing to different databases. Not exactly best practice, but this is not the topic here.

The problem started with some difficulty reproducing the case: our service desk was not able to reproduce the situation until we understood that the customer was accessing the database remotely via a development tool (through the listener), while we were connecting locally on the server.

Let me present the case with my environment.

First, this is the database link concerned by the issue:

 

SQL> select * from dba_db_links;
OWNER      DB_LINK              USERNAME                       HOST       CREATED
---------- -------------------- ------------------------------ ---------- ---------
PUBLIC     DBLINK               DBLINK                         MYDB       21-MAR-14

 

And this is the output when we try to display the instance name through the DBLINK, when connected locally:

 

SQL> select instance_name from v$instance@DBLINK;
INSTANCE_NAME
----------------
DB2

 

The user is redirected to the remote database, as expected. Now, let's see what happens when connecting through the SQL*Net layer:

 

[oracle@srvora01 ~]$ sqlplus system@DB1
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 24 10:07:45 2014
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
Enter password:
 
Connected to:
 
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
 
SQL> select instance_name from v$instance@DBLINK;
INSTANCE_NAME
----------------
DB1

 

Here we can see that the user is not redirected to the same database (in this demonstration, the DBLINK resolves back to the database itself).

The first thing to check is the TNS_ADMIN variable, if it exists:

 

[oracle@srvora01 ~]$ echo $TNS_ADMIN
/u00/app/oracle/product/11.2.0/db_3_0/network/admin

 

There is the content of the tnsnames.ora file on that location:

 

[oracle@srvora01 ~]$ cat /u00/app/oracle/product/11.2.0/db_3_0/network/admin/tnsnames.ora
DB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = srvora01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = DB1)
    )
  )
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = srvora01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = DB2)
    )
  )

 

Clearly, we have a problem with the TNS resolution. The local connection resolves the MYDB alias correctly, while the remote connection resolves the alias to a different database. In this case, there are two possible explanations:

  • The tnsnames.ora is not well configured: this is not the case, as you can see above
  • Another tnsnames.ora file exists somewhere on the server and is used by remote connections

To confirm that the second hypothesis is the right one, we can use the strace tool:

 

SQL> set define #
SQL> select spid from v$process p join v$session s on p.addr=s.paddr and s.sid=sys_context('userenv','sid');
SPID
------------------------
5578

 

SQL>  host strace -e trace=open -p #unix_pid & echo $! > .tmp.pid
Enter value for unix_pid: 5578
SQL> Process 5578 attached - interrupt to quit

 

SQL> select instance_name from v$instance@DBLINK;
open("/u00/app/oracle/product/11.2.0/db_3_0/network/admin/tnsnames.ora", O_RDONLY) = 8
open("/etc/host.conf", O_RDONLY)        = 8
open("/etc/resolv.conf", O_RDONLY)      = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 10
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 10
open("/etc/hostid", O_RDONLY)           = -1 ENOENT (No such file or directory)
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 10INSTANCE_NAME
----------------
DB2

 

The DBLINK is resolved using the file /u00/app/oracle/product/11.2.0/db_3_0/network/admin/tnsnames.ora.

Now, when connected remotely:

 

SQL> set define #
SQL> select spid from v$process p join v$session s on p.addr=s.paddr and s.sid=sys_context('userenv','sid');
SPID
------------------------
6838

 

SQL> host strace -e trace=open -p #unix_pid & echo $! > .tmp.pid
Enter value for unix_pid: 6838
SQL> Process 6838 attached - interrupt to quit

 

SQL> select instance_name from v$instance@DBLINK;
open("/u00/app/oracle/network/admin/tnsnames.ora", O_RDONLY) = 8
open("/etc/host.conf", O_RDONLY)        = 8
open("/etc/resolv.conf", O_RDONLY)      = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 8
open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 9
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 9
open("/etc/hostid", O_RDONLY)           = -1 ENOENT (No such file or directory)
open("/etc/hosts", O_RDONLY|O_CLOEXEC)  = 9INSTANCE_NAME
----------------
DB1

 

Here the DBLINK is resolved with the file /u00/app/oracle/network/admin/tnsnames.ora.

 

Two different tnsnames.ora files are used according to the connection method! If we query the content of the second tnsnames.ora, we have an explanation for our problem:

 

[oracle@srvora01 ~]$ cat /u00/app/oracle/network/admin/tnsnames.ora
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = tcp)(HOST = srvora01)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = DB1)
    )
  )

 

It is not clearly documented by Oracle, but the database session can inherit the environment variables in three different ways:

  • When you connect locally to the server (no SQL*Net, no listener), the Oracle session inherits the client environment
  • When you connect remotely to a service statically registered on the listener, the Oracle session inherits the environment which started the listener
  • When you connect remotely to a service dynamically registered on the listener, the Oracle session inherits the environment which started the database

In our case, the database had been restarted with the wrong TNS_ADMIN value set. Then, the database registered this value for remote connections. We can check this with the following method:

 

[oracle@srvora01 ~]$ ps -ef | grep pmon
oracle    3660     1  0 09:02 ?        00:00:00 ora_pmon_DB1
oracle    4006     1  0 09:05 ?        00:00:00 ora_pmon_DB2
oracle    6965  3431  0 10:44 pts/1    00:00:00 grep pmon

 

[oracle@srvora01 ~]$ strings /proc/3660/environ | grep TNS_ADMIN
TNS_ADMIN=/u00/app/oracle/network/admin

 

Note that we can also get the value for TNS_ADMIN using the dbms_system.get_env procedure.
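For example, a quick sketch from SQL*Plus (dbms_system is an undocumented package, so you need execute privilege on it):

SET SERVEROUTPUT ON
DECLARE
  l_value VARCHAR2(4000);
BEGIN
  -- Reads the environment of the server process serving this session
  dbms_system.get_env('TNS_ADMIN', l_value);
  dbms_output.put_line('TNS_ADMIN=' || l_value);
END;
/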

The solution was to restart the database with the correct TNS_ADMIN value:

 

[oracle@srvora01 ~]$ echo $TNS_ADMIN
/u00/app/oracle/product/11.2.0/db_3_0/network/admin

 

[oracle@srvora01 ~]$ sqlplus / as sysdba
 
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 24 10:46:03 2014
 
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
 
Connected to:
 
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

 

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

 

SQL> startup
ORACLE instance started.
Total System Global Area 1570009088 bytes
Fixed Size                  2228704 bytes
Variable Size            1023413792 bytes
Database Buffers          536870912 bytes
Redo Buffers                7495680 bytes
Database mounted.
Database opened.

 

[oracle@srvora01 ~]$ ps -ef | grep pmon
oracle    4006     1  0 09:05 ?        00:00:00 ora_pmon_DB2
oracle    7036     1  0 10:46 ?        00:00:00 ora_pmon_DB1
oracle    7116  3431  0 10:46 pts/1    00:00:00 grep pmon

 

[oracle@srvora01 ~]$ strings /proc/7036/environ | grep TNS_ADMIN
TNS_ADMIN=/u00/app/oracle/product/11.2.0/db_3_0/network/admin

 

The value for TNS_ADMIN is now correct.

 

[oracle@srvora01 ~]$ sqlplus system@DB1
 
SQL*Plus: Release 11.2.0.3.0 Production on Mon Mar 24 10:47:21 2014
 
Copyright (c) 1982, 2011, Oracle.  All rights reserved.
 
Enter password:
 
Connected to:
 
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

 

SQL> select instance_name from v$instance@DBLINK;
INSTANCE_NAME
----------------
DB2

 

Remote connections are now using the right tnsnames.ora.

I hope this will help you with your TNS resolution problems.