
Feed aggregator

October 2014 Critical Patch Update Released

Oracle Security Team - Tue, 2014-10-14 13:49

Hello, this is Eric Maurice again.

Oracle today released the October 2014 Critical Patch Update. This Critical Patch Update provides fixes for 154 vulnerabilities across a number of product families including: Oracle Database, Oracle Fusion Middleware, Oracle Enterprise Manager Grid Control, Oracle E-Business Suite, Oracle Supply Chain Product Suite, Oracle PeopleSoft Enterprise, Oracle JD Edwards EnterpriseOne, Oracle Communications Industry Suite, Oracle Retail Industry Suite, Oracle Health Sciences Industry Suite, Oracle Primavera, Oracle Java SE, Oracle and Sun Systems Product Suite, Oracle Linux and Virtualization, and Oracle MySQL.

In today's Critical Patch Update Advisory, you will see a stronger statement than previously used about the importance of applying security patches. Even though Oracle has consistently tried to encourage customers to apply Critical Patch Updates on a timely basis and recommended that customers remain on actively-supported versions, Oracle continues to receive credible reports of attempts to exploit vulnerabilities for which fixes have already been published by Oracle. In many instances, these fixes were published by Oracle years ago, but their non-application by customers, particularly on Internet-facing systems, results in dangerous exposure for these customers. Keeping up with security releases is good security practice and good IT governance.

Out of the 154 vulnerabilities fixed with today’s Critical Patch Update release, 31 are for the Oracle Database. All but 3 of these database vulnerabilities are related to features implemented using Java in the Database, and a number of these vulnerabilities have received a CVSS Base Score of 9.0.

This CVSS 9.0 Base Score reflects instances where the user running the database has administrative privileges (as is typical with pre-12 Database versions on Windows). When the database user has limited (or non-root) privilege, then the CVSS Base Score is 6.5 to denote that a successful compromise would be limited to the database and not extend to the underlying Operating System. Regardless of this decrease in the CVSS Base Score for these vulnerabilities for most recent versions of the database on Windows and all versions on Unix and Linux, Oracle recommends that these patches be applied as soon as possible because a wide compromise of the database is possible.

The Java Virtual Machine (Java VM) was added to the database with the release of Oracle 8i in early 1999. The inclusion of Java VM in the database kernel allows Java stored procedures to be executed by the database. In other words, by running Java in the database server, Java applications can benefit from direct access to relational data. Not all customers implement Java stored procedures; however support for Java stored procedures is required for the proper operation of the Oracle Database as certain features are implemented using Java. Due to the nature of the fixes required, Oracle development was not able to produce a normal RAC-rolling fix for these issues. To help protect customers until they can apply the Oracle JavaVM component Database PSU, which requires downtime, Oracle produced a script that introduces new controls to prevent new Java classes from being deployed or new calls from being made to existing Java classes, while preserving the ability of the database to execute the existing Java stored procedures that customers may rely on.
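To make the notion of a Java stored procedure concrete, here is a minimal sketch (the class and function names are made up for illustration): a Java class is compiled into the database and published to SQL through a PL/SQL call specification.

create or replace and compile java source named "Greeter" as
public class Greeter {
    public static String hello(String name) {
        return "Hello, " + name;
    }
}
/

-- call specification mapping the Java method to a SQL-callable function
create or replace function greeter_hello(p_name in varchar2) return varchar2
as language java name 'Greeter.hello(java.lang.String) return java.lang.String';
/

select greeter_hello('world') from dual;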

As a mitigation measure, Oracle did consider revoking all PUBLIC grants to Java classes, but such an approach is not feasible with a static script. Due to the dynamic nature of Java, it is not possible to identify all the classes that may be needed by an individual customer. Oracle's script is designed to provide effective mitigation against malicious exploitation of Java in the database for customers who are not deploying new Java code or creating Java code dynamically.

Customers who regularly develop in Java in the Oracle Database can take advantage of a new feature introduced in Oracle 12.1. By running their workloads with Privilege Analysis enabled, these customers can determine which Java classes are actually needed and remove unnecessary Grants.
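As a rough sketch of that approach (the capture name here is made up, and Privilege Analysis requires the appropriate 12.1 licensing), a database-wide capture with DBMS_PRIVILEGE_CAPTURE could look like this:

begin
  dbms_privilege_capture.create_capture(
    name => 'JAVA_GRANT_CAPTURE',
    type => dbms_privilege_capture.g_database);
  dbms_privilege_capture.enable_capture('JAVA_GRANT_CAPTURE');
end;
/

-- run a representative workload, then stop the capture and generate the report
begin
  dbms_privilege_capture.disable_capture('JAVA_GRANT_CAPTURE');
  dbms_privilege_capture.generate_result('JAVA_GRANT_CAPTURE');
end;
/

-- privileges (including Java class grants) actually used by the workload
select * from dba_used_privs where capture = 'JAVA_GRANT_CAPTURE';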

18 of the 154 fixes released today are for Oracle Fusion Middleware. Half of these fixes are pass-through fixes to address vulnerabilities in third-party components included in Oracle Fusion Middleware distributions. The most severe CVSS Base Score reported for these Oracle Fusion Middleware vulnerabilities is 7.5.

This Critical Patch Update also provides fixes for 25 new Java SE vulnerabilities. The highest reported CVSS Base Score for these Java SE vulnerabilities is 10.0. This score affects one Java SE vulnerability. Out of these 25 Java vulnerabilities, 20 affect client-only deployments of Java SE (and 2 of these vulnerabilities are browser-specific). 4 vulnerabilities affect client and server deployments of Java SE. One vulnerability affects client and server deployments of JSSE.

Rounding out this Critical Patch Update release are 15 fixes for the Oracle and Sun Systems Product Suite, and 24 fixes for Oracle MySQL.

Note that on September 26th 2014, Oracle released Security Alert CVE-2014-7169 to deal with a number of publicly-disclosed vulnerabilities affecting GNU Bash, a popular open source command line shell incorporated into Linux and other widely used operating systems. Customers should check out this Security Alert and apply the relevant security fixes for the affected systems, as its publication so close to the publication of the October 2014 Critical Patch Update did not allow for inclusion of these Security Alert fixes in the Critical Patch Update release.

For More Information:

The October 2014 Critical Patch Update is located at http://www.oracle.com/technetwork/topics/security/cpuoct2014-1972960.html

Security Alert CVE-2014-7169 is located at http://www.oracle.com/technetwork/topics/security/alert-cve-2014-7169-2303276.html. Furthermore, a list of Oracle products using GNU Bash is located at http://www.oracle.com/technetwork/topics/security/bashcve-2014-7169-2317675.html.

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance/


What Do You Need to Secure EHR? [VIDEO]

Chris Foot - Tue, 2014-10-14 11:29

Transcript

Electronic health records are becoming a regular part of the healthcare industry, but are organizations taking the right measures to secure them?

Hi, welcome to RDX. EHR systems can help doctors and other medical experts monumentally enhance patient treatment, but they also pose serious security risks.

SC Magazine reported that an employee of Memorial Hermann Health System in Houston accessed more than 10,000 patient records over the course of six years. Social Security numbers, dates of birth and other information were stolen.

In order to deter such incidents from occurring, health care organizations must employ active security monitoring of their databases. That way, suspicious activity can readily be identified and acted upon.

Thanks for watching! Be sure to join us next time for more security best practices and tips.


Uber won't want drivers in the future

Steve Jones - Tue, 2014-10-14 09:30
I'm an Uber user; it's a great service outside of cities with decent public transport.  But I have been thinking about how they will justify the $17bn valuation and give people a return on that $1.2bn investment.  At the same time I've been following the autonomous car pieces with interest, and I think there is a pretty clear way this can end, especially as Uber have already said they are going
Categories: Fusion Middleware

Oracle E-Business Suite Updates From OpenWorld 2014

Pythian Group - Tue, 2014-10-14 08:29

Oracle OpenWorld has always been my most exciting conference to attend. I always see high energy levels everywhere, and it kind of revs me up to tackle new upcoming technologies. This year I concentrated on attending mostly Oracle E-Business Suite release 12.2 and Oracle 12c Database-related sessions.

On the Oracle E-Business Suite side, I started off with the Oracle EBS Customer Advisory Board Meeting, with great presentations on new features like the Oracle EBS 12.2.4 new iPad touch-friendly interface. This can be enabled by setting the "Self Service Personal Home Page mode" profile value to "Framework Simplified". We also discussed some pros and cons of the new downtime mode feature of the adop online patching utility, which allows release update packs (like the 12.2.3 and 12.2.4 patches) to be applied without starting up a new online patching session. I will cover more details about that in a separate blog post. In the meantime, take a look at the simplified home page look of my 12.2.4 sandbox instance.

Oracle EBS 12.2.4 Simplified Interface
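For reference, here is a hedged sketch of setting that profile option at site level from SQL*Plus with the standard FND_PROFILE API. The internal option name APPLICATIONS_HOME_PAGE and the value FWK_SIMPLIFIED are my assumptions, not from the session notes - verify them on your instance, or simply set the profile through the System Administrator responsibility.

declare
  l_ok boolean;
begin
  -- assumed internal name of "Self Service Personal Home Page mode"
  l_ok := fnd_profile.save(
            x_name       => 'APPLICATIONS_HOME_PAGE',
            -- assumed internal code of "Framework Simplified"
            x_value      => 'FWK_SIMPLIFIED',
            x_level_name => 'SITE');
  commit;
end;
/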

Steven Chan's presentation on the EBS Certification Roadmap announced upcoming support for the Chrome browser on Android tablets, IE11, Oracle Unified Directory, and more. Oracle did not extend any support deadlines for Oracle EBS 11i or R12 this time. So to all EBS customers on 11i: it's time to move to R12.2. I also attended a good session on testing best practices for Oracle E-Business Suite, which had a good slide on some extra testing required during the online patching cycle. I am planning to do a separate blog post with more details on that, as it is an important piece of information that one might ignore. Oracle also announced a new product called Flow Builder, part of the Oracle Application Testing Suite, which helps users test functional flows in Oracle EBS.

On the 12c Database side, I attended great sessions by Christian Antognini on Adaptive Query Optimization and Markus Michalewicz's sessions on 12c RAC Operational Best Practices and RAC Cache Fusion Internals. Markus's Cache Fusion presentation had some great recommendations on using _gc_policy_minimum instead of turning off DRM completely using _gc_policy_time=0. Also, there is now a way to control DRM of an object using the package DBMS_CACHEUTIL.

I also attended sessions on some new, upcoming technologies that are picking up in the Oracle space, like Oracle NoSQL, Oracle Big Data SQL, and Oracle Data Integrator Hadoop connectors. These products seem to have a great future ahead and have good chances of becoming mainstream on the data warehousing side of businesses.

Categories: DBA Blogs

Let the Data Guard Broker control LOG_ARCHIVE_* parameters!

The Oracle Instructor - Tue, 2014-10-14 08:20

When using the Data Guard Broker, you don't need to set any LOG_ARCHIVE_* parameter for the databases that are part of your Data Guard configuration. The broker is doing that for you. Forget about what you may have heard about VALID_FOR - you don't need that with the broker. Actually, setting any of the LOG_ARCHIVE_* parameters with an enabled broker configuration might even confuse the broker and lead to warning or error messages. Let's look at a typical example about the redo log transport mode. There is a broker configuration enabled with one primary database prima and one physical standby physt. The broker config files are mirrored on each site, and spfiles are in use so that the broker (the DMON background process, to be precise) can access them:

Overview

When connecting to the broker, you should always connect to a DMON running on the primary site. The only exception to this rule is when you want to do a failover: that must be done connected to the standby site. I will now change the redo log transport mode to sync for the standby database. It helps to think of the log transport mode as an attribute (respectively, a property) of a certain database in your configuration, because that is also how the broker sees it.

 

[oracle@uhesse1 ~]$ dgmgrl sys/oracle@prima
DGMGRL for Linux: Version 11.2.0.3.0 - 64bit Production

Copyright (c) 2000, 2009, Oracle. All rights reserved.

Welcome to DGMGRL, type "help" for information.
Connected.
DGMGRL> edit database physt set property logxptmode=sync;
Property "logxptmode" updated

In this case, physt is a standby database that is receiving redo from primary database prima, which is why the LOG_ARCHIVE_DEST_2 parameter of that primary was changed accordingly:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Sep 30 17:21:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="physt", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="physt" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Configuration for physt

The mirrored broker configuration files on all involved database servers now contain that logxptmode property. No new entry in the spfile of physt is required. The present configuration now allows raising the protection mode:

DGMGRL> edit configuration set protection mode as maxavailability;
Succeeded.

The next broker command is done to support a switchover later on while keeping the higher protection mode:

DGMGRL> edit database prima set property logxptmode=sync;
Property "logxptmode" updated

Notice that this doesn’t lead to any spfile entry; only the broker config files store that new property. In case of a switchover, prima will then receive redo with sync.

Configuration for prima

Now let's do that switchover and see how the broker automatically ensures that the new primary physt will ship redo to prima:

 

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    prima - Primary database
    physt - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> switchover to physt;
Performing switchover NOW, please wait...
New primary database "physt" is opening...
Operation requires shutdown of instance "prima" on database "prima"
Shutting down instance "prima"...
ORACLE instance shut down.
Operation requires startup of instance "prima" on database "prima"
Starting instance "prima"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "physt"

All I did was the switchover command, and without me specifying any LOG_ARCHIVE* parameter, the broker did it all like this picture shows:

Configuration after switchover

In particular, the spfile of the physt database now got the new entry:

 

[oracle@uhesse2 ~]$ sqlplus sys/oracle@physt as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:43:41 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> show parameter log_archive_dest_2

NAME				     TYPE	 VALUE
------------------------------------ ----------- ------------------------------
log_archive_dest_2		     string	 service="prima", LGWR SYNC AFF
						 IRM delay=0 optional compressi
						 on=disable max_failure=0 max_c
						 onnections=1 reopen=300 db_uni
						 que_name="prima" net_timeout=3
						 0, valid_for=(all_logfiles,pri
						 mary_role)

Not only is it unnecessary to specify any of the LOG_ARCHIVE* parameters, it is actually a bad idea to do so. The guideline here is: let the broker control them! Otherwise it will at least complain about it with warning messages. As an example of what you should not do:

[oracle@uhesse1 ~]$ sqlplus sys/oracle@prima as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Oct 14 15:57:11 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL> alter system set log_archive_trace=4096;

System altered.

Although that is the correct syntax, the broker now gets confused, because that parameter setting is not in line with what is in the broker config files. Accordingly that triggers a warning:

DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database
      Warning: ORA-16792: configurable property value is inconsistent with database setting

Fast-Start Failover: DISABLED

Configuration Status:
WARNING

DGMGRL> show database prima statusreport;
STATUS REPORT
       INSTANCE_NAME   SEVERITY ERROR_TEXT
               prima    WARNING ORA-16714: the value of property LogArchiveTrace is inconsistent with the database setting

In order to resolve that inconsistency, I will do it also with a broker command – which is what I should have done instead of the alter system command in the first place:

DGMGRL> edit database prima set property LogArchiveTrace=4096;
Property "logarchivetrace" updated
DGMGRL> show configuration;

Configuration - myconf

  Protection Mode: MaxAvailability
  Databases:
    physt - Primary database
    prima - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Thanks to a question from Noons (I really appreciate comments!), let me add the complete list of initialization parameters that the broker is supposed to control. Most, but not all, are LOG_ARCHIVE* parameters:

LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
ARCHIVE_LAG_TARGET
DB_FILE_NAME_CONVERT
LOG_ARCHIVE_FORMAT
LOG_ARCHIVE_MAX_PROCESSES
LOG_ARCHIVE_MIN_SUCCEED_DEST
LOG_ARCHIVE_TRACE
LOG_FILE_NAME_CONVERT
STANDBY_FILE_MANAGEMENT
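Each of these corresponds to a broker configurable property, so changes should be made through DGMGRL rather than with ALTER SYSTEM. A short sketch (the property values are only examples):

DGMGRL> edit database prima set property LogArchiveMaxProcesses=8;
Property "logarchivemaxprocesses" updated
DGMGRL> edit database prima set property StandbyFileManagement='AUTO';
Property "standbyfilemanagement" updated
DGMGRL> edit database prima set property ArchiveLagTarget=1800;
Property "archivelagtarget" updated

This way the broker config files and the spfiles stay consistent, and you avoid the ORA-16714 warnings shown above.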


Tagged: Data Guard, High Availability
Categories: DBA Blogs

OOW14 : One week in a nutshell

Luc Bors - Tue, 2014-10-14 04:26

Mind Control?

Oracle AppsLab - Mon, 2014-10-13 16:37

Editor’s note: Hey look, a new author. Here’s the first post from Raymond Xie, who joined us nearly a year ago. You may remember him from such concept demos as geo-fencing or Pebble watchface. Raymond has been busy at work and wants to share the work he did with telekinesis. Or something, you decide. Enjoy.

You put on a headband, stare at a ball, tilt your head back-forth and left-right . . . the ball navigates through a simple maze, rushing, wavering, changing colors, and finally hitting the target.

That is the latest creation out of AppsLab: Muse Sphero Driver. When it was first shown at the OAUX Exchange during OOW, it amused many people, who would call it a "mind control" game.

The setup consists of Muse – a brain-sensing headband, Sphero – a robotic ball, and a tablet to bridge the two.

Technically, it is your brainwave data (Electroencephalography – EEG) driving the Sphero (adjusting speed and changing color with a spectrum from RED to BLUE, where RED: fast, active; BLUE: slow, calm); and head gesture (3D Accelerometer – ACC) controlling the direction of Sphero movement. Whether or not you call that "mind control" is up to your own interpretation.

You kind of drive the ball with your mind, but mostly with brainwave noise instead of conscious thought. It is still too early to derive accurate "mind control" from the EEG data of any regular person, for two reasons:

1. For EEG at the scalp level, the signal-to-noise ratio is very poor;
2. The correlation between EEG and mind activity still needs to be established.

But it does open up a dialog in HCI, such as voice control vs. mind control (silence); or in robotics: instead of asking the machine to "see"/"understand", we can "see"/"understand" for it and impersonate it with our mind and soul.

While it is difficult to read out the "mind" (any mind activity) transparently, we think it is quite doable to map your mind into certain states, and use the "state" as a command indirectly.

We may do something around this area. So stay tuned.

Meanwhile, you can start to practice Yoga or Zen, to get a better signal-to-noise ratio, and to set your mind into a certain state with ease.

PeopleTools 8.54 will be the last release to certify Crystal Reports

Javier Delgado - Mon, 2014-10-13 15:49
It was just a question of time. In July 2011, Oracle announced that newly acquired PeopleSoft applications would not include a Crystal Reports license. Some years before, in October 2007, Business Objects was acquired by SAP. You don't need to read Machiavelli's Il Principe to understand why the license was no longer included.

In order to keep customers' investment in custom reports safe, Oracle kept updating Crystal Reports certifications for those customers who purchased PeopleSoft applications before that date. In parallel, BI Publisher was improved release after release, providing a viable replacement for Crystal Reports, and in many areas surpassing its features.

Now, as announced in My Oracle Support's document 1927865.1, PeopleTools 8.54 will be the last release for which Crystal Reports will be certified, and support for report issues will end together with the expiration of PeopleSoft 9.1 applications support.

PeopleTools 8.54 was just released a couple of months ago, so there is no need to panic, but PeopleSoft application managers would do well to start coming up with a strategy to convert their existing Crystal Reports into BI Publisher reports.

Extending SaaS with PaaS free eLearning lectures

Angelo Santagata - Mon, 2014-10-13 15:17

Hey all,

Over the last 4 months I've been working with some of my US friends to create an eLearning version of the PTS "Extending SaaS with PaaS" workshop I co-wrote. Well, the time has come and we've published the first 4 eLearning seminars, and I'm sure there will be more coming.

Check 'em out and let me know what you think, and what other topics need to be covered.

https://apex.oracle.com/pls/apex/f?p=44785:24:0::::P24_CONTENT_ID,P24_PREV_PAGE:10390,24

How to measure Oracle index fragmentation

Yann Neuhaus - Mon, 2014-10-13 14:39

At Oracle Open World 2014 - or rather Oak Table World - Chris Antognini presented 'Indexes: Structure, Splits and Free Space Management Internals'. It's not something new, but it's still something that is not always well understood: how index space is managed, block splits, fragmentation, coalesce and rebuilds. Kyle Hailey has made a video of it available here.
For me, it is the occasion to share the script I use to see if an index is fragmented or not.

First, forget about those 'analyze index validate structure' commands, which lock the table, and about DEL_LF_ROWS, which counts only the deletion flags, and those are transient. The problem is not the amount of free space. The problem is where that free space is. Because if you insert again in the same range of values, that space will be reused. Wasted space occurs only when a lot of rows were deleted in a range where you will not insert again. For example, when you purge old ORDERS, the index on ORDER_DATE - or on an ORDER_ID coming from a sequence - will be affected. Note that the problem occurs only for sparse purges, because full blocks are reclaimed when needed and can get rows from another range of values.

I have a script that shows the number of rows per block, as well as used and free space per block, and aggregates that by range of values.

First, let's create a table with a date and an index on it:

SQL> create table DEMOTABLE as select sysdate-900+rownum/1000 order_date,decode(mod(rownum,100),0,'N','Y') delivered , dbms_random.string('U',16) cust_id
  2  from (select * from dual connect by level <= 1e6);

SQL> create index DEMOINDEX on DEMOTABLE(order_date);

My script shows 10 buckets with begin and end values, and for each of them the average number of rows per block and the free space:

SQL> @index_fragmentation 

ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
24-APR-12 -> 02-AUG-12        377        7163          11        266                        
03-AUG-12 -> 11-NOV-12        377        7163          11        266                        
11-NOV-12 -> 19-FEB-13        377        7163          11        266                        
19-FEB-13 -> 30-MAY-13        377        7163          11        265                        
30-MAY-13 -> 07-SEP-13        377        7163          11        265                        
07-SEP-13 -> 16-DEC-13        377        7163          11        265                        
16-DEC-13 -> 26-MAR-14        377        7163          11        265                        
26-MAR-14 -> 03-JUL-14        377        7163          11        265                        
04-JUL-14 -> 11-OCT-14        377        7163          11        265                        
12-OCT-14 -> 19-JAN-15        376        7150          11        265                        

Note that the script reads the whole table (it can do a sample, but here it is 100%). Well, not exactly the table, but only the index. It counts the index leaf blocks with the undocumented function sys_op_lbid(), which is used by Oracle to estimate the clustering factor.
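If you want to experiment with that function yourself, the basic pattern is sketched below. It is undocumented, so the usual caveats apply; 12345 stands for the OBJECT_ID of the index (from DBA_OBJECTS), and the IS NOT NULL predicate ensures every counted row has an index entry:

SQL> select sys_op_lbid(12345,'L',t.rowid) leaf_block, count(*) rows_per_leaf
  2  from DEMOTABLE t
  3  where order_date is not null
  4  group by sys_op_lbid(12345,'L',t.rowid)
  5  order by 1;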

So here I have no fragmentation. All blocks have about 377 rows and hardly any free space. This is because I inserted them in increasing order, so the so-called '90-10' block split occurred.

Let's see what I get if I delete most of the rows before the 01-JAN-2014:

SQL> delete from DEMOTABLE where order_date < date '2014-01-01';

SQL> @index_fragmentation 

ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                   
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
25-APR-12 -> 02-AUG-12          4          72          99        266 oooo                   
03-AUG-12 -> 11-NOV-12          4          72          99        266 oooo                   
11-NOV-12 -> 19-FEB-13          4          72          99        266 oooo                   
19-FEB-13 -> 30-MAY-13          4          72          99        265 oooo                   
30-MAY-13 -> 07-SEP-13          4          72          99        265 oooo                   
07-SEP-13 -> 16-DEC-13          4          72          99        265 oooo                   
16-DEC-13 -> 26-MAR-14          4          72          99        265 oooo                   
26-MAR-14 -> 03-JUL-14          4          72          99        265 oooo                   
04-JUL-14 -> 11-OCT-14         46         870          89        265 oooo                   
12-OCT-14 -> 19-JAN-15        376        7150          11        265                        

I have the same buckets, and same number of blocks. But blocks which are in the range below 01-JAN-2014 have only 4 rows and a lot of free space. This is exactly what I want to detect: I can check if that free space will be reused.

Here I know I will not enter any orders with a date in the past, so those blocks will never have an insert into them. I can reclaim that free space with a COALESCE:

SQL> alter index DEMOINDEX coalesce;

Index altered.
SQL> @index_fragmentation 

ORDER_DAT -> ORDER_DAT rows/block bytes/block %free space     blocks free                      
--------- -- --------- ---------- ----------- ----------- ---------- -----                     
25-APR-12 -> 03-OCT-14        358        6809          15         32                        
03-OCT-14 -> 15-OCT-14        377        7163          11         32                        
15-OCT-14 -> 27-OCT-14        377        7163          11         32                        
27-OCT-14 -> 08-NOV-14        377        7163          11         32                        
08-NOV-14 -> 20-NOV-14        377        7163          11         32                        
20-NOV-14 -> 02-DEC-14        377        7163          11         32                        
02-DEC-14 -> 14-DEC-14        377        7163          11         32                        
14-DEC-14 -> 26-DEC-14        377        7163          11         32                        
27-DEC-14 -> 07-JAN-15        377        7163          11         32                        
08-JAN-15 -> 19-JAN-15        371        7056          12         32                        

I still have 10 buckets because that is defined in my script, but each bucket now has fewer rows. I've defragmented the blocks and reclaimed the free blocks.

Time to share the script now. Here it is: index_fragmentation.zip

The script is quite ugly. It's SQL generated by PL/SQL. It's generated because it has to select the index columns. And as I don't want it to be too large, it is neither indented nor commented. However, if you run it with set serveroutput on, you will see the generated query.

How to use it? Just change the owner, table_name, and index name. It reads the whole index so if you have a very large index you may want to change the sample size.

Color Analysis of Flags – Patterns and symbols – Visualizations and Dashboards

Nilesh Jethwa - Mon, 2014-10-13 13:54

Recently, here at InfoCaptor we started a small research on the subject of flags. We wanted to answer certain questions like what are the most frequently used colors across all country flags, what are the different patterns etc.

Read more Color and Pattern analysis on Flags of Countries – Simple visualization but interesting data


Oracle Critical Patch Update October 2014 - Massive Patch

Just when you thought the Oracle Database world was getting safer, Oracle will be releasing fixes for 32 database security bugs on Tuesday, October 14th.  This is in stark contrast to the previous twenty-five quarters, where the high was 16 database bugs and the average per quarter was 8.2 database bugs.  For the previous two years, the most database bugs fixed in a single quarter was six.

In addition to the 32 database security bugs, there are a total of 155 security bugs fixed in 44 different products.

Here is a brief analysis of the pre-release announcement for the upcoming October 2014 Oracle Critical Patch Update (CPU).

Oracle Database

  • There are 32 database vulnerabilities; only one is remotely exploitable without authentication and 4 are applicable to client-side only installations.
  • Since at least one database vulnerability has a CVSS 2.0 metric of 9.0 (critical for a database vulnerability), this is a fairly important CPU due to the severity and volume of fixes.
  • The remotely exploitable without authentication bug is likely in Application Express (APEX).  Any organizations running APEX externally on the Internet should look to apply the relevant patches immediately (a quick check of the installed APEX version is sketched after this list).  To patch APEX, the newest version must be installed, which requires appropriate testing and upgrading of applications.
  • There are four client-side-only vulnerabilities, and likely most are in JDBC.
  • Core RDBMS and PL/SQL are listed as patched components, so most likely there are critical security vulnerabilities in all database implementations.
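As a quick first step ahead of patching, you can check from SQL*Plus whether APEX is installed in a given database and at which version (a minimal sketch; it assumes APEX is registered in DBA_REGISTRY under component id 'APEX', as it is in recent releases):

SQL> select comp_name, version, status from dba_registry where comp_id = 'APEX';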

Oracle Fusion Middleware

  • There are 17 new Oracle Fusion Middleware vulnerabilities, 13 of which are remotely exploitable without authentication, and the highest CVSS score is 7.5.
  • Various Fusion Middleware products are listed as vulnerable, so you should carefully review this CPU to determine the exact impact to your environment.
  • The core WebLogic Server is listed as a patched component, therefore, most likely all Fusion Middleware customers will have to apply the patch.

Oracle E-Business Suite 11i and R12

  • There are nine new Oracle E-Business Suite 11i and R12 vulnerabilities, seven of which are remotely exploitable without authentication.  Many of these are in core Oracle EBS components such as Oracle Applications Framework (OAF) and Application Object Library (AOL/FND).  Even though the maximum CVSS score is 5.0, most of these vulnerabilities should be considered high risk.
  • All DMZ implementations of Oracle EBS should carefully review the CPU to determine if their environment is vulnerable.  As all Oracle EBS CPU patches are now cumulative, the CPU patch should be prioritized, or mitigating controls, such as AppDefend, should be implemented.

Planning Impact

  • We anticipate this quarter's CPU will be higher risk than most and should be prioritized.  Based on the patched components, this may be a higher than average risk CPU for all Oracle database environments.
  • As with all previous CPUs, this quarter's security patches should be deemed critical and you should adhere to the established procedures and timing used for previous CPUs.
  • For Oracle E-Business Suite customers, DMZ implementations may have to apply this quarter's patch faster than previous quarters due to the number and severity of bugs.
Tags: Oracle Critical Patch Updates
Categories: APPS Blogs, Security Blogs

Here We Grow Again

Oracle AppsLab - Mon, 2014-10-13 12:18

Cheesy title aside, the AppsLab (@theappslab) is growing again, and this time, we’re branching out into new territory.

As part of the Oracle Applications User Experience (@usableapps) team, we regularly work with interaction designers, information architects and researchers, all of whom are pivotal to ensuring that what we build is what users want.

Makes sense, right?

So, we’re joining forces with the Emerging Interactions team within OAUX to formalize a collaboration that has been ongoing for a while now. In fact, if you read here, you’ll already recognize some of the voices, specifically John Cartan and Joyce Ohgi, who have authored posts for us.

For privacy reasons (read, because Jake is lazy), I won’t name the entire team, but I’m encouraging them to add their thoughts to this space, which could use a little variety. Semi-related, Noel (@noelportugal) was on a mission earlier this week to add content here and even rebrand this old blog. That seems to have run its course quickly.

One final note, another author has also joined the fold, Mark Vilrokx (@mvilrokx); Mark brings a long and decorated history of development experience with him.

So, welcome everyone to the AppsLab team.

Memory

Jonathan Lewis - Mon, 2014-10-13 10:24

On a client site recently, experimenting with a T5-2 – fortunately a test system – we decided to restart an instance with a larger SGA. It had been 100GB, but with 1TB of memory and 256 threads (2 sockets, 16 cores per socket, 8 threads per core) it seemed reasonable to crank this up to 400GB for the work we wanted to do.

It took about 15 minutes for the instance to start; worse, it took 10 to 15 seconds for a command-line call to SQL*Plus on the server to get a response; worse still, if I ran a simple "ps -ef" to check what processes were running, the output started to appear almost instantly but stopped after about 3 lines and hung for about 10 to 15 seconds before continuing. The fact that the first process name to show after the "hang" was one of the Oracle background processes was a bit of a hint, though.

Using truss on both the SQL*Plus call and on the ps call, I found that almost all the elapsed time was spent in a call to shmat() (shared memory attach); a quick check with "ipcs -ma" told me (as you might guess) that the chunk of shared memory identified by truss was one of the chunks allocated to Oracle's SGA. Using pmap on the pmon process to take a closer look at the memory, I found that it consisted of a few hundred pages sized at 256MB and a lot of pages sized at 8KB; this was a little strange since the alert log had told me that the instance was allocating about 1,600 memory pages of 256MB (i.e. 400GB) and 3 pages of 8KB - clearly a case of good intentions failing.

It wasn’t obvious what my next steps should be – so I bounced the case off the Oak Table … and got the advice to reboot the machine. (What! – it’s not my Windows PC, it’s a T5-2.) The suggestion worked: the instance came up in a few seconds, with shared memory consisting of a few 2GB pages, a fair number of 256MB pages, and a few pages of other sizes (including 8KB, 64KB and 2MB).

There was a little more to the advice than just rebooting, of course; and there was an explanation that fitted the circumstances. The machine was using ZFS and, in the absence of a set limit, the file system cache had at one point managed to acquire 896 GB of memory. In fact, when we first tried to start the instance with a 400GB SGA, Oracle couldn't start up at all until the system administrator had reduced the filesystem cache and freed up most of the memory; even then, so much of the memory had originally been allocated in 8KB pages that Oracle made a complete mess of building a 400GB memory map.

I hadn’t passed all these details to the Oak Table but the justification for the suggested course of action (which came from Tanel Poder) sounded like a perfect description of what had been happening up to that point. In total his advice was:

  • limit the ZFS ARC cache (with two possible strategies suggested)
  • use sga_target instead of memory_target, to avoid a similar problem on memory resize operations (see the sketch after this list)
  • start the instance immediately after the reboot
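For the second point, here is a minimal sketch of switching from memory_target (AMM) to an explicit sga_target/pga_aggregate_target (ASMM) before the restart; the sizes are illustrative only, not from the original case:

SQL> alter system set memory_target=0 scope=spfile;
SQL> alter system set memory_max_target=0 scope=spfile;
SQL> alter system set sga_target=400G scope=spfile;
SQL> alter system set pga_aggregate_target=50G scope=spfile;

The spfile changes take effect at the next startup, which per the third point should follow the reboot immediately.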

Maxim: Sometimes the solution you produce after careful analysis of the facts looks exactly like the solution you produce when you can’t think of anything else to do.


Digital Learning – LVC: It’s the attitude, stupid!

The Oracle Instructor - Mon, 2014-10-13 06:33

The single most important factor for successful digital learning is the attitude both of the instructor and of the attendees towards the course format. Delivery of countless Live Virtual Classes (LVCs) for Oracle University made me realize that. There are technical prerequisites, of course: a reliable and fast network connection and the use of a good headset are mandatory, else the participant is doomed from the start. Other prerequisites are the same as for traditional courses: good course material, a working lab environment for hands-on practices and, last but not least, knowledgeable instructors. For that part, notice that LVCs use the very same courseware, lab environments and instructors as our classroom courses at Oracle University education centers. The major difference is in your head :-)

Delivering my first couple of LVCs, I felt quite uncomfortable with that new format. Accordingly, my performance was not as good as usual. Meanwhile, I consider the LVC format totally adequate for my courses, and that attitude enables me to deliver them with the same performance as my classroom courses. Actually, they are even better in some respects: I always struggle to produce clean sketches with readable handwriting on the whiteboard. Now look at this MS Paint sketch from one of my Data Guard LVCs:

Data Guard Real-Time Apply

Attendees get all my sketches by email afterwards if they like.

In short: Because I’m happy delivering through LVC today, I’m now able to do it with high quality. The attitude defines the outcome.

Did you ever have a teacher in school that you just disliked for some reason? It was hard to learn anything from that teacher, right? Even if that person was competent.

So this is also true on the side of the attendee: The attitude defines the outcome. If you take an LVC thinking “This cannot work!”, chances are that you are right just because of your mindset. When you attend an LVC with an open mind – even after some initial trouble because you need to familiarize yourself with the learning platform and the way things are presented there – it is much more likely that you will benefit from it. You may even like it better than classroom courses because you can attend from home without the time and expenses it takes to travel :-)

Some common objections against LVC I have heard from customers and my usual responses:

An LVC doesn't deliver the same amount of interaction as a classroom course!

That is not necessarily so: you are in a small group (mostly fewer than 10) that is constantly in an audio conference. Unmute yourself and say anything you like, just like in a classroom. Additionally, you have a chatbox available. This is sometimes extremely helpful, especially with non-native speakers in the class :-) You can easily exchange email addresses using the chatbox as well and stay in touch even after the LVC.

I have no appropriate working place to attend an LVC!

You have no appropriate working place at all, then, for something that requires a certain amount of concentration. Talk to your manager about it – maybe there is something like a quiet room available during the LVC.

I cannot maintain attention when staring at the computer screen the whole day!

Of course not, that is why we have breaks and practices in between the lessons.

Finally, I would love to hear about your thoughts and experiences with online courses! What is your attitude towards Digital Learning?


Tagged: Digital Learning, LVC
Categories: DBA Blogs

PLS-00201 in stored procedures

Laurent Schneider - Mon, 2014-10-13 03:36

When you grant access through a role, you cannot use that role in a stored procedure or view.


create role r;

create user u1 identified by ***;
grant create procedure, create session to u1;

create user u2 identified by ***;
grant create procedure, create session, r to u2;

conn u1/***
create procedure u1.p1 is begin null; end; 
/

grant execute on u1.p1 to r;

conn u2/***

create procedure u2.p2 is begin u1.p1; end; 
/

sho err procedure u2.p2

Errors for PROCEDURE U2.P2:

L/COL ERROR
----- -------------------------------------------
1/26  PL/SQL: Statement ignored
1/26  PLS-201: identifier U1.P1 must be declared

However, if I run it in an anonymous block, it works:


declare
  procedure p2 is 
  begin 
    u1.p1; 
  end; 
begin
  p2; 
end; 
/

PL/SQL procedure successfully completed.

But this only works when my role is active. If my role is no longer active, then it obviously fails.


set role none;

declare 
  procedure p2 is 
  begin 
    u1.p1; 
  end; 
begin 
  p2; 
end; 
/
ERROR at line 1:
ORA-06550: line 4, column 5:
PLS-00201: identifier 'U1.P1' must be declared
ORA-06550: line 4, column 5:
PL/SQL: Statement ignored

It is all written in the doc,

All roles are disabled in any named PL/SQL block (stored procedure, function, or trigger) that executes with definer’s rights

I knew the behavior but not the reason behind it. Thanks to Bryn for bringing me so much knowledge on PL/SQL.
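The practical fix follows from that rule: grant the privilege directly to the user rather than through the role. (Invoker's-rights units do enable roles at run time, but static references are still resolved at compile time, so the direct grant remains the reliable fix.) A minimal sketch continuing the example above:

conn u1/***
grant execute on u1.p1 to u2;

conn u2/***
alter procedure u2.p2 compile;

sho err procedure u2.p2
No errors.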

#OOW14, #DB12c, #BikeB4OOW(!) at 14:45 today, #CON8269: "Doing PL/SQL from SQL: Correctness and Performance" http://t.co/A9BKqrKygv

— Bryn Llewellyn (@BrynLite) September 29, 2014

Context for Cloudera

DBMS2 - Mon, 2014-10-13 02:02

Hadoop World/Strata is this week, so of course my clients at Cloudera will have a bunch of announcements. Without front-running those, I think it might be interesting to review the current state of the Cloudera product line. Details may be found on the Cloudera product comparison page. Examining those details helps, I think, with understanding where Cloudera does and doesn’t place sales and marketing focus, which given Cloudera’s Hadoop market stature is in my opinion an interesting thing to analyze.

So far as I can tell (and there may be some errors in this, as Cloudera is not always accurate in explaining the fine details):

  • CDH (Cloudera Distribution … Hadoop) contains a lot of Apache open source code.
  • Cloudera has a much longer list of Apache projects that it thinks comprise “Core Hadoop” than, say, Hortonworks does.
    • Specifically, that list currently is: Hadoop, Flume, HCatalog, Hive, Hue, Mahout, Oozie, Pig, Sentry, Sqoop, Whirr, ZooKeeper.
    • In addition to those projects, CDH also includes HBase, Impala, Spark and Cloudera Search.
  • Cloudera Manager is closed-source code, much of which is free to use. (I.e., “free like beer” but not “free like speech”.)
  • Cloudera Navigator is closed-source code that you have to pay for (free trials and the like excepted).
  • Cloudera Express is Cloudera’s favorite free subscription offering. It combines CDH with the free part of Cloudera Manager. Note: Cloudera Express was previously called Cloudera Standard, and that terminology is still reflected in parts of Cloudera’s website.
  • Cloudera Enterprise is the umbrella name for Cloudera’s three favorite paid offerings.
  • Cloudera Enterprise Basic Edition contains:
    • All the code in CDH and Cloudera Manager, and I guess Accumulo code as well.
    • Commercial licenses for all that code.
    • A license key to use the entirety of Cloudera Manager, not just the free part.
    • Support for the “Core Hadoop” part of CDH.
    • Support for Cloudera Manager. Note: Cloudera is lazy about saying this explicitly, but it seems obvious.
    • The code for Cloudera Navigator, but that’s moot, as the corresponding license key for Cloudera Navigator is not part of the package.
  • Cloudera Enterprise Data Hub Edition contains:
    • Everything in Cloudera Basic Edition.
    • A license key for Cloudera Navigator.
    • Support for all of HBase, Accumulo, Impala, Spark, Cloudera Search and Cloudera Navigator.
  • Cloudera Enterprise Flex Edition contains everything in Cloudera Basic Edition, plus support for one of the extras in Data Hub Edition.

In analyzing all this, I’m focused on two particular aspects:

  • The “zero, one, many” system for defining the editions of Cloudera Enterprise.
  • The use of “Data Hub” as a general marketing term.

Given its role as a highly influential yet still small “platform” vendor in a competitive open source market, Cloudera even more than most vendors faces the dilemma:

  • Cloudera wants customers to adopt its views as to which Hadoop-related technologies they should use.
  • However, Cloudera doesn’t want to be in the position of trying to ram some particular unwanted package down a customer’s throat.

The Flex/Data Hub packaging fits great with that juggling act, because Cloudera — and hence also Cloudera salespeople — get paid exactly as much when customers pick 2 Flex options as when they use all 5-6. If you prefer Cassandra or MongoDB to HBase, Cloudera is fine with that. Ditto if you prefer CitusDB or Vertica or Teradata Hadapt to Impala. Thus Cloudera can avoid a lot of religious wars, even if it can’t entirely escape Hortonworks’ “More open source than thou” positioning.

Meanwhile, so far as I can tell, Cloudera currently bets on the “Enterprise Data Hub” as its core proposition, as evidenced by that term being baked into the name of Cloudera’s most comprehensive and expensive offering. Notes on the EDH start:

  • Cloudera also portrays “enterprise data hub” as an architectural/reference architecture concept.
  • “Enterprise data hub” doesn’t really mean anything very different from “data lake” + “data refinery”; Cloudera just thinks it sounds more important. Indeed, Cloudera claims that the other terms are dismissive or disparaging, at least in some usages.

Cloudera’s long-term dream is clearly to make Hadoop the central data platform for an enterprise, while RDBMS fill more niche (or of course also legacy) roles. I don’t think that will ever happen, because I don’t think there really will be one central data platform in the future, any more than there has been in the past. As I wrote last year on appliances, clusters and clouds,

Ceteris paribus, fewer clusters are better than more of them. But all things are not equal, and it’s not reasonable to try to reduce your clusters to one — not even if that one is administered with splendid efficiency by low-cost workers, in a low-cost building, drawing low-cost electric power, in a low-cost part of the world.

and earlier in the same post

… these are not persuasive reasons to put everything on a SINGLE cluster or cloud. They could as easily lead you to have your VMware cluster and your Exadata rack and your Hadoop cluster and your NoSQL cluster and your object storage OpenStack cluster — among others — all while participating in several different public clouds as well.

One system is not going to be optimal for all computing purposes.

Categories: Other

Workaround for ADF 12c Choice List Blank Selection Issue

Andrejus Baranovski - Sun, 2014-10-12 09:28
I would like to share a workaround for the Choice List component in ADF 12c. There is a specific issue related to blank selection: as soon as the user selects the blank item in the choice list, the value change listener for that list starts to be invoked every time any other element is selected in the table. This is quite annoying and could lead to unexpected results, especially if you depend on logic implemented in the value change listener.

The workaround is implemented in the sample application - ADFChoiceList12cApp.zip. Let's first take a look at how it works without the workaround applied. Imagine you have a table where one of the columns implements a choice list. Go and select the blank item for a couple of rows:


Navigate between rows and do a couple of clicks - you should notice information in the log about the value change listener being invoked. This is quite strange; it should not call the value change listener each time, only the first time after the selection was changed in the list:


The choice list UI component is set with AutoSubmit=true and assigned a value change listener method:


The choice list is configured with blank item selection in ADF BC:


To apply the workaround, go and uncheck blank item selection in ADF BC - make sure the checkbox for 'Include No Selection Item' is unchecked:


On the UI side, provide any value for the UnselectedLabel property of the choice list component; this will generate the property in the source (alternatively, you could just type it manually directly in the source):


Make sure the UnselectedLabel property is set to be blank; this will generate a blank item at runtime (it works much better than the one enabled from ADF BC):


The user can navigate through the table, and the value change listener will not be triggered anymore (except once, when the value is actually changed):


Exciting as usual - new features and new bugs :)