
Feed aggregator

July 2014 Critical Patch Update Released

Oracle Security Team - Tue, 2014-07-15 13:41

Hello, this is Eric Maurice.

Oracle today released the July 2014 Critical Patch Update. This Critical Patch Update provides 113 new security fixes across a wide range of product families including: Oracle Database, Oracle Fusion Middleware, Oracle Hyperion, Oracle Enterprise Manager Grid Control, Oracle E-Business Suite, Oracle PeopleSoft Enterprise, Oracle Siebel CRM, Oracle Industry Applications, Oracle Java SE, Oracle Linux and Virtualization, Oracle MySQL, and Oracle and Sun Systems Products Suite.

This Critical Patch Update provides 20 additional security fixes for Java SE. The highest CVSS Base Score for the Java vulnerabilities fixed in this Critical Patch Update is 10.0. This score applies to a single Java SE client vulnerability (CVE-2014-4227). 7 other Java SE client vulnerabilities receive a CVSS Base Score of 9.3 (denoting that a complete compromise of the targeted client is possible, but that the access complexity to exploit these vulnerabilities is “medium”). All in all, this Critical Patch Update provides fixes for 17 Java SE client vulnerabilities, 1 fix for a JSSE vulnerability affecting clients and servers, and 2 fixes for vulnerabilities affecting Java clients and servers. Oracle recommends that home users visit http://java.com/en/download/installed.jsp to ensure that they run the most recent version of Java. Oracle also recommends that Windows XP users upgrade to a currently-supported operating system. Running an unsupported operating system, particularly one as prevalent as Windows XP, creates a very significant risk for users of these systems, as vulnerabilities are widely known, exploit kits are routinely available, and security patches are no longer provided by the OS vendor.

This Critical Patch Update also includes 5 fixes for the Oracle Database. The highest CVSS Base Score for these database vulnerabilities is 9.0 (this score applies to vulnerability CVE-2013-3751).

Oracle Fusion Middleware receives 29 new security fixes with this Critical Patch Update. The most severe CVSS Base Score for these vulnerabilities is 7.5.

Oracle E-Business Suite receives 5 new security fixes with this Critical Patch Update. The most severe CVSS Base Score reported for these vulnerabilities is 6.8.

Oracle Sun Systems Products Suite receives 3 new security fixes with this Critical Patch Update, and one additional Oracle Enterprise Manager Grid Control fix is applicable to these deployments. Fixes that exist because of the dependency between individual Oracle product components are listed in italics in the Critical Patch Update Advisory. These bugs are listed in the risk matrices of the products in which they originate, as well as in the risk matrices of the products they are used with. The most severe CVSS Base Score for these Oracle Sun Systems Products Suite vulnerabilities is 6.9.

As a reminder, Critical Patch Update fixes are intended to address significant security vulnerabilities in Oracle products and also include code fixes that are prerequisites for the security fixes. As a result, Oracle recommends that this Critical Patch Update be applied as soon as possible by customers using the affected products.

For More Information:

The July 2014 Critical Patch Update Advisory is located at http://www.oracle.com/technetwork/topics/security/cpujul2014-1972956.html

The Oracle Software Security Assurance web site is located at http://www.oracle.com/us/support/assurance.

Java home users can detect if they are running obsolete versions of Java SE and install the most recent version of Java by visiting http://java.com/en/download/installed.jsp


Oracle OpenWorld and JavaOne SF 2014 - Early Bird Ends July 18th!

OTN TechBlog - Tue, 2014-07-15 13:34
Get the most. Save the most.

There are things to do at Oracle OpenWorld and JavaOne you can't do anywhere else. One of them is scoring Early Bird savings, which end on July 18, THIS FRIDAY!

Register for Oracle OpenWorld

Register for JavaOne

OTN will soon be posting its list of 'can't do anywhere else' activities that we will be hosting at Oracle OpenWorld and JavaOne.

RAC Commands : 2 -- Updating Configuration for Services

Hemant K Chitale - Tue, 2014-07-15 08:30
NOTE : This is in a Policy Managed configuration

Adding a database service

[root@node1 ~]# su - oracle
-sh-3.2$ srvctl add service -d RACDB -s NEW_SVC -g RACSP -c SINGLETON
-sh-3.2$ srvctl config service -d RACDB -s NEW_SVC
Service name: NEW_SVC
Service is enabled
Server pool: RACSP
Cardinality: SINGLETON
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$
-sh-3.2$ srvctl start service -d RACDB -s NEW_SVC
-sh-3.2$

Since this is a Policy Managed database in the RACSP Server Pool, I added a service with the appropriate parameters. The SINGLETON cardinality means that it will run on only one instance.  (See the previous post for the service MY_RAC_SVC with the cardinality UNIFORM).

Let's verify the alert.log entry.
-sh-3.2$ cd /u01/app/oracle/diag/rdbms/racdb/RACDB_1
-sh-3.2$ cd trace
-sh-3.2$ tail alert_RACDB_1.log
Begin automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Tue Jul 15 22:01:20 2014
End automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Tue Jul 15 22:06:45 2014
db_recovery_file_dest_size of 4000 MB is 22.25% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Tue Jul 15 22:11:10 2014
ALTER SYSTEM SET service_names='MY_RAC_SVC','NEW_SVC' SCOPE=MEMORY SID='RACDB_1';
-sh-3.2$

The SID specification of the ALTER SYSTEM limits the change to this instance only, and SCOPE=MEMORY means it is not persisted to the spfile. (Note : MY_RAC_SVC had already been added to RACDB_2 earlier).
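To quickly confirm which instance a SINGLETON service is currently running on, srvctl can report its status; a quick check (the exact output wording varies between versions, but it names the instance offering the service):

-sh-3.2$ srvctl status service -d RACDB -s NEW_SVC
-sh-3.2$ srvctl status service -d RACDB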


Removing a database service

Now, let's remove a database service

-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$ srvctl remove service -d RACDB -s MY_RAC_SVC
PRCR-1025 : Resource ora.racdb.my_rac_svc.svc is still running

I need to first stop the service.

-sh-3.2$ srvctl stop service -d RACDB -s MY_RAC_SVC
-sh-3.2$
-sh-3.2$ srvctl config service -d RACDB -s MY_RAC_SVC
Service name: MY_RAC_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$ tail alert_RACDB_1.log
End automatic SQL Tuning Advisor run for special tuning task "SYS_AUTO_SQL_TUNING_TASK"
Tue Jul 15 22:06:45 2014
db_recovery_file_dest_size of 4000 MB is 22.25% used. This is a
user-specified limit on the amount of space that will be used by this
database for recovery-related files, and does not reflect the amount of
space available in the underlying filesystem or ASM diskgroup.
Tue Jul 15 22:11:10 2014
ALTER SYSTEM SET service_names='MY_RAC_SVC','NEW_SVC' SCOPE=MEMORY SID='RACDB_1';
Tue Jul 15 22:17:48 2014
ALTER SYSTEM SET service_names='NEW_SVC' SCOPE=MEMORY SID='RACDB_1';
-sh-3.2$

I can now remove the service.

-sh-3.2$ srvctl remove service -d RACDB -s MY_RAC_SVC
-sh-3.2$ srvctl config service -d RACDB
Service name: NEW_SVC
Service is enabled
Server pool: RACSP
Cardinality: SINGLETON
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$

Now, only the new service is listed.

Modifying the cardinality of a service

-sh-3.2$ srvctl modify service -d RACDB -s NEW_SVC -c UNIFORM
-sh-3.2$ srvctl config service -d RACDB -s NEW_SVC
Service name: NEW_SVC
Service is enabled
Server pool: RACSP
Cardinality: UNIFORM
Disconnect: false
Service role: PRIMARY
Management policy: AUTOMATIC
DTP transaction: false
AQ HA notifications: false
Failover type: NONE
Failover method: NONE
TAF failover retries: 0
TAF failover delay: 0
Connection Load Balancing Goal: LONG
Runtime Load Balancing Goal: NONE
TAF policy specification: NONE
Edition:
Service is enabled on nodes:
Service is disabled on nodes:
-sh-3.2$
-sh-3.2$ srvctl start service -d RACDB -s NEW_SVC

The service has been modified from SINGLETON (single instance) to UNIFORM (all instances).
Verifying this on node 2.

-sh-3.2$ pwd
/u01/app/oracle/diag/rdbms/racdb/RACDB_2/trace
-sh-3.2$ tail -2 alert_RACDB_2.log
Tue Jul 15 22:27:36 2014
ALTER SYSTEM SET service_names='NEW_SVC' SCOPE=MEMORY SID='RACDB_2';
-sh-3.2$

The service has been enabled on RACDB_2 as well now.
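As a final check from the client side, a connection can be made through the new service name using EZConnect (the SCAN address below is hypothetical):

-sh-3.2$ sqlplus system@//rac-scan.example.com:1521/NEW_SVC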

.
.
.

Categories: DBA Blogs

Auditing Files in Linux

Pythian Group - Tue, 2014-07-15 08:25

The stat command in Linux can be used to display the status of a file or a file system.

I came across an issue in RHEL4 where a file's 'Change time' was far ahead of its 'Modification time' without any change in uid, gid or mode.

# stat /etc/php.ini
File: `/etc/php.ini'
Size: 45809 Blocks: 96 IO Block: 4096 regular file
Device: 6801h/26625d Inode: 704615 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2014-06-25 13:22:15.000000000 -0400
Modify: 2012-10-01 13:21:41.000000000 -0400
Change: 2014-06-01 20:06:35.000000000 -0400 

To explain why this can be considered unusual, I will start by explaining the time values associated with a file:

  • Access (atime) – Time the file was last accessed. This involves syscalls like open(). For example, running the cat command on the file would update this.
  • Modify (mtime) – Time the file content was last modified. For example, if a file is edited and some content is added, this value changes.
  • Change (ctime) – Updated when any of the file's inode attributes other than the access time changes – that is, the mode, uid, gid, size or modification time.

So a change in mtime (or in file size) also updates ctime. If a file's ctime differs from its mtime without any change in mode, uid or gid, the behaviour can be considered unexpected.
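A quick way to see the three values move independently is a scratch file (a sketch; on file systems mounted with relatime or noatime the atime update may not be visible):

echo "data" > testfile     # creation sets atime, mtime and ctime
stat testfile
cat testfile               # read only: may update atime, leaves mtime/ctime untouched
echo "more" >> testfile    # content change: updates mtime and, with it, ctime
chmod 600 testfile         # attribute change: updates ctime only
stat testfile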

On checking the stat upstream (coreutils) source, I came across a known issue. Running chmod on a file without changing the file permissions can still alter the inode's ctime and cause the same behaviour. It is documented in the TODO file of the coreutils upstream source.

Modify chmod so that it does not change an inode's st_ctime
when the selected operation would have no other effect.
First suggested by Hans Ecke  in

http://thread.gmane.org/gmane.comp.gnu.coreutils.bugs/2920

Discussed more recently on http://bugs.debian.org/497514.

This behaviour has not been fixed upstream.

So we can assume that a process or user ran a chmod command which did not actually change the permissions of php.ini. This would update the ctime but no other attributes.

I can reproduce the same behaviour in my Fedora system as well.

For example,

# stat test
File: ‘test’
Size: 0             Blocks: 0          IO Block: 4096   regular empty file
Device: 803h/2051d    Inode: 397606      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-07-14 16:26:10.996128678 +0530
Modify: 2014-07-14 16:26:10.996128678 +0530
Change: 2014-07-14 16:26:10.996128678 +0530
Birth: -
# chmod 644 test
# stat test
File: ‘test’
Size: 0             Blocks: 0          IO Block: 4096   regular empty file
Device: 803h/2051d    Inode: 397606      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-07-14 16:26:10.996128678 +0530
Modify: 2014-07-14 16:26:10.996128678 +0530
Change: 2014-07-14 16:26:41.444377623 +0530 
Birth: -

But this is just an assumption. To get a conclusive answer about what is causing this behaviour on this specific system, we need to find which process is touching the file.

auditd in Linux can be used to watch a file and capture audit records about it to /var/log/audit/.

To watch the file, I edited /etc/audit.rules and added the following.

-w /etc/php.ini
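On more recent audit versions the same watch is usually written with a permission filter and a search key, and it can be loaded into the running daemon with auditctl; a sketch (the key name is arbitrary):

-w /etc/php.ini -p wa -k phpini_watch            # rule form for the audit rules file
auditctl -w /etc/php.ini -p wa -k phpini_watch   # add the same watch to the running daemon
ausearch -k phpini_watch                         # later, pull only the records for this key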

Then I restarted auditd:

# service auditd start
Starting auditd:                                           [  OK  ]
# chkconfig auditd on

Running a cat command on the php.ini file produces the following log entries.

type=SYSCALL msg=audit(1404006436.500:12): arch=40000003 syscall=5 success=yes exit=3 a0=bff88c10 a1=8000 a2=0 a3=8000 items=1 pid=19905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0
egid=0 sgid=0 fsgid=0 comm="cat" exe="/bin/cat"
type=FS_WATCH msg=audit(1404006436.500:12): watch_inode=704615 watch="php.ini" filterkey= perm=0 perm_mask=4
type=FS_INODE msg=audit(1404006436.500:12): inode=704615 inode_uid=0 inode_gid=0 inode_dev=68:01 inode_rdev=00:00
type=CWD msg=audit(1404006436.500:12):  cwd="/root"
type=PATH msg=audit(1404006436.500:12): name="/etc/php.ini" flags=101 inode=704615 dev=68:01 mode=0100644 ouid=0 ogid=0 rdev=00:00

The ausearch command is available for searching through the audit logs. The following command displays the audit entries related to the /etc/php.ini file since 6th July.

# ausearch -ts 7/6/2014 -f /etc/php.ini | less

When I noticed the ctime change again, I ran ausearch and saw multiple events on the file. Most of the accesses are from syscall=5, which is the open system call.

The following entries seem to point to the culprit. You can see that the system call is 271.

type=SYSCALL msg=audit(1404691594.175:37405): arch=40000003 syscall=271 success=yes exit=0 a0=bff
09b00 a1=bff07b00 a2=7beff4 a3=bff0a1a0 items=1 pid=9830 auid=4294967295 uid=0 gid=0 euid=0 suid=
0 fsuid=0 egid=0 sgid=0 fsgid=0 comm="bpbkar" exe="/usr/openv/netbackup/bin/bpbkar"
type=FS_WATCH msg=audit(1404691594.175:37405): watch_inode=704615 watch="php.ini" filterkey= perm
=0 perm_mask=2
type=FS_INODE msg=audit(1404691594.175:37405): inode=704615 inode_uid=0 inode_gid=0 inode_dev=68:
01 inode_rdev=00:00
type=FS_WATCH msg=audit(1404691594.175:37405): watch_inode=704615 watch="php.ini" filterkey= perm
=0 perm_mask=2
type=FS_INODE msg=audit(1404691594.175:37405): inode=704615 inode_uid=0 inode_gid=0 inode_dev=68:
01 inode_rdev=00:00
type=CWD msg=audit(1404691594.175:37405):  cwd="/etc"
type=PATH msg=audit(1404691594.175:37405): name="/etc/php.ini" flags=1 inode=704615 dev=68:01 mod
e=0100644 ouid=0 ogid=0 rdev=00:00

ausearch can also search by system call number. You can see that there is only one record with system call number 271. Another advantage of ausearch is that it converts the timestamps to human-readable form.

# ausearch -ts 7/6/2014 -sc 271 -f /etc/php.ini 

The time appears at the start of each block of search output.

----
time->Sun Jul  6 20:06:34 2014
type=PATH msg=audit(1404691594.175:37405): name="/etc/php.ini" flags=1 inode=704615 dev=68:01 mod
e=0100644 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1404691594.175:37405):  cwd="/etc"
type=FS_INODE msg=audit(1404691594.175:37405): inode=704615 inode_uid=0 inode_gid=0 inode_dev=68:
01 inode_rdev=00:00
type=FS_WATCH msg=audit(1404691594.175:37405): watch_inode=704615 watch="php.ini" filterkey= perm
=0 perm_mask=2
type=FS_INODE msg=audit(1404691594.175:37405): inode=704615 inode_uid=0 inode_gid=0 inode_dev=68:
01 inode_rdev=00:00
type=FS_WATCH msg=audit(1404691594.175:37405): watch_inode=704615 watch="php.ini" filterkey= perm
=0 perm_mask=2
type=SYSCALL msg=audit(1404691594.175:37405): arch=40000003 syscall=271 success=yes exit=0 a0=bff
09b00 a1=bff07b00 a2=7beff4 a3=bff0a1a0 items=1 pid=9830 auid=4294967295 uid=0 gid=0 euid=0 suid=
0 fsuid=0 egid=0 sgid=0 fsgid=0 comm="bpbkar" exe="/usr/openv/netbackup/bin/bpbkar"

The timestamps match.

# stat /etc/php.ini
File: `/etc/php.ini'
Size: 45809         Blocks: 96         IO Block: 4096   regular file
Device: 6801h/26625d    Inode: 704615      Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2014-07-07 01:06:47.000000000 -0400
Modify: 2012-10-01 13:21:41.000000000 -0400
Change: 2014-07-06 20:06:34.000000000 -0400

From the RHEL4 kernel source code we can see that syscall 271 is utimes.

# cat ./include/asm-i386/unistd.h |grep 271
#define __NR_utimes        271
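If the audit userspace tools are installed, the same lookup can be done without the kernel source; the ausyscall utility (shipped with the audit package on most distributions) maps a syscall number to its name for a given architecture and should print utimes here.

# ausyscall i386 271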

utimes is a syscall that can change a file's last access and modification times (it is a newer variant of the older utime call).

The NetBackup process bpbkar is issuing a utimes syscall on the file, possibly resetting the mtime to its existing value, which still updates the ctime.

This example shows the power of the Linux Auditing System. Auditing is a kernel feature that provides an interface for daemons like auditd to capture and log events related to kernel and user-space processes.

Categories: DBA Blogs

The point of predicate pushdown

Curt Monash - Tue, 2014-07-15 07:52

Oracle is announcing today what it’s calling “Oracle Big Data SQL”. As usual, I haven’t been briefed, but highlights seem to include:

  • Oracle Big Data SQL is basically data federation using the External Tables capability of the Oracle DBMS.
  • Unlike independent products — e.g. Cirro — Oracle Big Data SQL federates SQL queries only across Oracle offerings, such as the Oracle DBMS, the Oracle NoSQL offering, or Oracle’s Cloudera-based Hadoop appliance.
  • Also unlike independent products, Oracle Big Data SQL is claimed to be compatible with Oracle’s usual security model and SQL dialect.
  • At least when it talks to Hadoop, Oracle Big Data SQL exploits predicate pushdown to reduce network traffic.

And by the way – Oracle Big Data SQL is NOT “SQL-on-Hadoop” as that term is commonly construed, unless the complete Oracle DBMS is running on every node of a Hadoop cluster.

Predicate pushdown is actually a simple concept:

  • If you issue a query in one place to run against a lot of data that’s in another place, you could spawn a lot of network traffic, which could be slow and costly. However …
  • … if you can “push down” parts of the query to where the data is stored, and thus filter out most of the data, then you can greatly reduce network traffic.

“Predicate pushdown” gets its name from the fact that portions of SQL statements, specifically ones that filter data, are properly referred to as predicates. They earn that name because predicates in mathematical logic and clauses in SQL are the same kind of thing — statements that, upon evaluation, can be TRUE or FALSE for different values of variables or data.

The most famous example of predicate pushdown is Oracle Exadata, with the story there being:

  • Oracle’s shared-everything architecture created a huge I/O bottleneck when querying large amounts of data, making Oracle inappropriate for very large data warehouses.
  • Oracle Exadata added a second tier of servers each tied to a subset of the overall storage; certain predicates are pushed down to that tier.
  • The I/O between Exadata’s two sets of servers is now tolerable, and so Oracle is now often competitive in the high-end data warehousing market.

Oracle evidently calls this “SmartScan”, and says Oracle Big Data SQL does something similar with predicate pushdown into Hadoop.

Oracle also hints at using predicate pushdown to do non-tabular operations on the non-relational systems, rather than shoehorning operations on multi-structured data into the Oracle DBMS, but my details on that are sparse.

Related link

Oracle DBA Training Options Are Changing

Oracle DBA Training Options Are Changing
Training options for Oracle Database DBAs are changing. Generally, I don't think the changes are for the better. Companies don't value Oracle Database Administrators like they used to, and it shows in the lack of investment in their professional development.

When I travel a long way from home, I tend to get very reflective about life, death and beyond. On my way home from teaching an onsite two-day Oracle performance tuning seminar coupled with a one-day predictive analysis (forecasting) class in Ghana (yes, Ghana as in AFRICA) I started thinking about how fortunate the Ghana DBAs I taught are. Clearly their management is willing to invest in their DBAs' future. This is very, very rare.

Today most Oracle DBAs receive what I call "Training By Google." You know, blog posts, YouTube videos and various syntax websites. While these are all valuable (I am a content creator myself with my blog posts and videos), they are no substitute for instructor-led training. Not even close! So what is happening that is forcing Oracle DBAs to change their training habits?

So Why The Change? Three Reasons
1. Training Budget. Over the past five years I have been disappointed (more like disturbed) that most companies do not provide the training DBAs need. They just won't do it. IT managers (not typically DBA managers) believe their staff can get by with "Training By Google." It's stupid and foolish. It tells DBAs they are worthless and leaves them unprepared to perform at their best. And, of course, that ends up hurting the company they work for. Stupid and foolish.

Are we then surprised with the results from poor performing systems, down production systems, massive security breaches, and DBAs hopping from one company to another?

2. Travel Budget. A nasty tactic many companies use is to provide a minimal training budget but without a travel budget. If you want specialized and advanced training, you'll probably have to travel to get to it. Maybe not hundreds or thousands of miles, but probably more than you want to commute each day.

Essentially the company is splitting the training cost with the DBA and ensuring the DBA really, really wants the training. OK, I can respect that. But, I think a company that does not truly provide training for its employees (human beings that spend a significant portion of their lives doing whatever it takes to get the job done) is cruel and frankly immoral.

3. More Training Options. The good news for Oracle DBAs is there is more information and training options available today than ever before. When the orapub.com website began in 1995, doing a "tail -f" on the web log was a lesson in world geography. It was amazing watching line after line stream by as DBAs from all over the world were looking for Oracle performance materials through the web. Now there is much more available. Training options for Oracle DBAs now include traditional instructor led training (ILT), web sites from content aggregators (people who pull together content for us), content creators (like myself), and online training. I'm very excited about online training and have made a significant investment in OraPub's Online Institute.

Summary
So there you have it. Because of economics, the devaluation of DBAs as human beings and the increase in training options, the Oracle DBA training landscape is changing. If you believe this, the next question is, "What is good content?" That will be the subject of my next posting!

Enjoy your work and thanks for reading!

Craig.

https://resources.orapub.com/OraPub_Online_Seminars_About_Oracle_Database_Performance_s/100.htm
You can watch seminar introductions (like above) for free on YouTube!
If you enjoy my blog, I suspect you'll get a lot out of my online or in-class training. For details go to www.orapub.com. I also offer on-site training and consulting services.

P.S. If you want me to respond to a comment or you have a question, please feel free to email me directly at craig@orapub .com. Another option is to send an email to OraPub's general email address, which is currently info@ orapub. com.

Categories: DBA Blogs

Oracle Midlands : Event #4 – Summary

Tim Hall - Tue, 2014-07-15 03:51

What a cracking Oracle Midlands event!

The evening started with a session on “Designing Efficient SQL” by Jonathan Lewis. The first few slides prompted this tweet.

[tweet screenshot]

When someone asks me a question about SQL tuning my heart sinks. It’s part of my job and I can do it, but I find it really hard to communicate what I’m doing. Jonathan’s explanation during this session was probably the best one I’ve ever heard. Rather than trying to explain a million and one optimizer features, it’s very much focussed on a “What are you actually trying to achieve?” approach. It should be mandatory viewing for all Oracle folks.

After the break, where I stuffed myself with samosas, it was on to the lightning talks (10 mins each).

  • Breaking Exadata - Jonathan Lewis, JL Computer Consultancy : This focused on a couple of situations where the horsepower of Exadata doesn’t come to the rescue, like large hash joins that flood to disk and decompressions in the storage cells being abandoned and the compressed blocks being sent back to the compute nodes to be decompressed. If I ever get to use an Exadata…
  • How to rename a 500gb schema in 10 minutes - Richard Harrison, EON : Why can’t we have a rename user/schema command? Richard showed a quick way to use transportable tablespaces to rename a schema. Neat!
  • Oracle Big Data Appliance – What’s in the box? - Salih Oztop, Business AnalytiX : The title says it all really. I thought it was a really good introduction to the BDA. I’ve been to 1 hour talks on this subject that didn’t convey as much information as he managed to fit into 10 minutes. Also, a hint at a cool new feature about to be announced…
  • Installing RAC: Things to sort out with your systems and network admins - Patrick Hurley, Scale Abilities : Patrick is a cool guy and he upped his cool rating further by brandishing a light sabre as a pointer during his talk! His session was a list of gotchas he’s encountered while installing RAC. Some of them I’ve encountered myself. Some not. Good stuff.
  • Is the optimiser too smart now? - Martin Widlake, ORA600 : I could hear a voice, but I couldn’t see anyone over the podium. :) The question was, has it got to a point where it is too complicated for normal folks and beginners to stand a chance at understanding it, or should we now be treating it like a black box? My own feeling is that 12c might be the turning point where I really have to say I don’t understand it any more. It feels a bit sad, but maybe it is inevitable…

I thought the lightning talks worked really well. It felt like a whole conference packed into one hour. :)

The event was free, thanks to the sponsorship by those kind people at Red Gate. The Oracle Press teddy bears made another appearance, but I didn’t win one. :(

Big thanks to Mike for organising it and to all the speakers for doing a great job. The next event will be up on the website soon. Please show your support! These things live or die based on your participation…

Cheers

Tim…


Presenting at UKOUG Tech14 Conference (Ian Fish, U K Heir)

Richard Foote - Tue, 2014-07-15 01:52
I’ve been lucky enough to present at various conferences, seminars and user group events over the years in some 20 odd countries. But somewhere I’ve never quite managed to present at before is the place of my birth, the UK. Well this year, I’ve decided to end my drought and submitted a number of papers for the UKOUG Tech14 Conference and […]
Categories: DBA Blogs

Create Physical Standby Database using RMAN Restore

Michael Dinh - Mon, 2014-07-14 21:19

Normally, when I create physical standby database, the configuration has the same directory structures and name values as production with the exception of db_unique_name.

But this time was not the case as shown below.

ANGEL:(SYS@xmenstby):PHYSICAL STANDBY> show parameter name

NAME                      TYPE        VALUE
------------------------- ----------- ----------------------------------------
cell_offloadgroup_name    string
db_file_name_convert      string      /oradata/xmenprod, /oradata/xmenstby
db_name                   string      xmenprod
db_unique_name            string      angel_xmenstby
global_names              boolean     FALSE
instance_name             string      xmenstby
lock_name_space           string
log_file_name_convert     string      /oradata/xmenprod, /oradata/xmenstby
processor_group_name      string
service_names             string      xmenstby
ANGEL:(SYS@xmenstby):PHYSICAL STANDBY>

I have not been accustomed to adding suffixes such as prod, stby, qa, dev, uat, etc… to database names.

Hopefully, when a connection is made to a QA server, it's for a QA database and not PROD.

Enough of the rant. The requirement is to create a physical standby with different directory structures, and the ORACLE_SID at the standby site is xmenstby.

The format I have been using is to prefix db_name with the airport code closest to the data center to create db_unique_name.

An alternative is to prefix it with the hostname.

Active database duplication is not an option because of concern that it may take a long time.

It was suggested to perform an RMAN backup on the primary, transfer the backup to the standby server using multiple scp sessions, and restore the database.

Here I go, and if you are interested in how this turned out, then please read more about it here.
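For reference, a minimal sketch of that approach (paths, formats and host names below are placeholders, not the actual ones used):

# On the primary: back up the database, the archived logs and a standby controlfile
rman target / <<'EOF'
BACKUP DATABASE FORMAT '/backup/xmen_db_%U';
BACKUP ARCHIVELOG ALL FORMAT '/backup/xmen_arc_%U';
BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/backup/xmen_stbyctl_%U';
EOF

# Ship the pieces to the standby host; several scp sessions can run in parallel
scp /backup/xmen_* oracle@angel:/backup/

# On the standby: STARTUP NOMOUNT, RESTORE STANDBY CONTROLFILE FROM '<piece>',
# ALTER DATABASE MOUNT, catalog the copied pieces, then RESTORE DATABASE
# (with SET NEWNAME or the name conversion parameters handling the new paths)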

UPDATE: July 18, 2014

If the intention is to know that the primary is now a standby (and vice versa) after a switchover, then naming the database with the environment will achieve this.

DGMGRL> show configuration

Configuration - dg_xmen

  Protection Mode: MaxPerformance
  Databases:
    xmenprod       - Primary database
    angel_xmenstby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> switchover to angel_xmenstby
Performing switchover NOW, please wait...
Operation requires a connection to instance "xmenstby" on database "angel_xmenstby"
Connecting to instance "xmenstby"...
Connected.
New primary database "angel_xmenstby" is opening...
Operation requires startup of instance "xmenprod" on database "xmenprod"
Starting instance "xmenprod"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "angel_xmenstby"
DGMGRL> show configuration verbose

Configuration - dg_xmen

  Protection Mode: MaxPerformance
  Databases:
    angel_xmenstby - Primary database
    xmenprod       - Physical standby database

  Properties:
    FastStartFailoverThreshold      = '30'
    OperationTimeout                = '30'
    FastStartFailoverLagLimit       = '30'
    CommunicationTimeout            = '180'
    ObserverReconnect               = '0'
    FastStartFailoverAutoReinstate  = 'TRUE'
    FastStartFailoverPmyShutdown    = 'TRUE'
    BystandersFollowRoleChange      = 'ALL'
    ObserverOverride                = 'FALSE'
    ExternalDestination1            = ''
    ExternalDestination2            = ''
    PrimaryLostWriteAction          = 'CONTINUE'

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL>

Perfmon does not start automatically

Yann Neuhaus - Mon, 2014-07-14 19:28

I have recently used perfmon (performance monitor) at a customer site. I created a Data Collector Set to monitor CPU, Memory, Disk, and Network during one day. Then, I ran the monitor and I received a "beautiful" error message…

PeopleTools 8.54 is GA!

Jim Marion - Mon, 2014-07-14 14:28

On Friday the PeopleTools blog announced that PeopleTools 8.54 is now Generally Available (GA). PeopleTools 8.54 brings several usability features, including responsive design for mobile devices, as well as development features such as the Mobile Application Platform (MAP). One of my favorite new features is component-specific branding. You can now attach stylesheets and JavaScript to components as described in the PeopleBooks entry Applying Branding to Other Objects. Another great branding enhancement is attribute-based branding. This is very similar to role-based branding, but more flexible and easier to administer. You can read about it in the PeopleBooks entry Administering System Branding.

To learn more read the PeopleTools announcement or visit the hosted 8.54 PeopleBooks. I can't wait for the new demo images!

MySQL on-premise to Amazon RDS migration tips

Kubilay Çilkara - Mon, 2014-07-14 13:27
Things to watch and do when migrating MySQL databases from ‘on-premise’ to Amazon AWS RDS
  • Not all versions of databases can be migrated to RDS, especially if you want to do a zero-downtime migration. Make sure you know which versions are possible; at the time of writing, Amazon announced that it will support any version of MySQL 5.1 and above.
  • In a zero-downtime migration to Amazon RDS you work with mysqldump or mydumper to import the baseline data, and then you use MySQL Replication and the binary log position to apply the additional records created during the import (the delta). That is, it is possible to create a MySQL slave in the Amazon AWS cloud! (See the sketch after this list.)
  • So when you have confirmed that the on-premise MySQL you have is compatible, you can use mysqldump with the --master-data parameter to export your data, including the binlog position coordinates at the time of the export. You can use mydumper with parallel streams if your database is big. You will use the coordinates and MySQL replication to catch up with the on-premise master database when creating the MySQL slave in RDS.
  • Use different database parameters for different databases.
  • As you load the RDS database using myloader or mysql, the operation might take a long time depending on the size of your database. If this is the case, disable backups (which stops binary logging) and try using one of the better-spec RDS instance classes and provisioned IOPS for the duration of the operation. You can always downsize the RDS instance after you have completed the initial load.
  • After you have completed the initial load, use Multi-AZ, which is a synchronous standby (in Oracle parlance), and schedule the backups immediately, before you open your applications to the database, as the initial backup requires a reboot.
  • Beware: there is no SSH access to RDS, which means you have no access to the file system.
  • Get the DB security groups right and make sure your applications can access the RDS instances.
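A rough sketch of the dump-load-replicate flow (host names, credentials and binlog coordinates are placeholders; the mysql.rds_* procedures are the replication procedures RDS provides, so check the AWS documentation for your engine version):

# 1. Dump the on-premise database, recording the binlog coordinates in the dump header
mysqldump --master-data=2 --single-transaction --routines --triggers \
    --databases mydb -u admin -p > baseline.sql

# 2. Load the baseline into the RDS instance
mysql -h myinstance.xxxxxx.us-east-1.rds.amazonaws.com -u admin -p < baseline.sql

# 3. Point RDS at the on-premise master with the recorded coordinates, then start replication
mysql -h myinstance.xxxxxx.us-east-1.rds.amazonaws.com -u admin -p <<'EOF'
CALL mysql.rds_set_external_master('onprem.example.com', 3306,
     'repl_user', 'repl_password', 'mysql-bin.000123', 4, 0);
CALL mysql.rds_start_replication;
EOF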
Categories: DBA Blogs

NPR and Missed (Course) Signals

Michael Feldstein - Mon, 2014-07-14 12:45

Anya Kamenetz has a piece up on NPR about learning analytics, highlighting Purdue’s Course Signals as its centerpiece. She does a good job of introducing the topic to a general audience and raising some relevant ethical questions. But she missed one of the biggest ethical questions surrounding Purdue’s product—namely, that some of its research claims are likely false. In particular, she repeats the following claim:

Course Signals…has been shown to increase the number of students earning A’s and B’s and lower the number of D’s and F’s, and it significantly raises the chances that students will stick with college for an additional year, from 83% to 97%. [Emphasis added.]

Based on the work of Mike Caulfield and Al Essa summarized in the link above, it looks like that latter claim is probably the result of selection bias rather than a real finding. So who is at fault for this questionable claim being repeated without challenge in a popular venue many months after it has been convincingly challenged?

For starters, Purdue is. They never responded to the criticism, despite confirmation that they are aware of it—for one thing, they got contacted by us and by Inside Higher Ed—and despite the fact that they apparently continue to make money off the sales of the product through a licensing deal with Ellucian. And the uncorrected paper is still available on their web site. This is unconscionable.

Anya clearly bears some responsibility too. Although it’s easy to assume from the way the article is written that the dubious claim was repeated to her in an interview by Purdue researcher Matt Pistilli, she confirmed for me via email that she took the claim from the previously published research paper and did not discuss it with Pistilli. Given that this is her central example of the potential of learning analytics, she should have interrogated this a little more, particularly since she had Matt on the phone. Mike Caulfield also commented to me that any claim of such a dramatic increase in year-to-year retention should automatically be subject to additional scrutiny.

I have to put some blame on the higher ed press as well. Inside Higher Ed covered the story (and, through them, the Times Higher Education). In fact, Carl Straumsheim actually advanced the story a bit by putting the question to researcher Matt Pistilli (who gave a non-answer). The Chronicle of Higher Education did not cover it, despite having run a puff piece on Purdue’s claims the same day that Mike Caulfield wrote his original piece challenging the results. It is very clear to Phil and me that we are read by the Chronicle staff, in part because they periodically publish stories that have been obviously influenced by our earlier coverage. Sometimes without attribution. I don’t care that much about the credit, but if they thought Purdue’s claims were newsworthy enough to cover in the first place then they should have done their own reporting on the fact that those claims have been called into question. If they had been more aggressive in their coverage, then the mainstream press reporters who find Course Signals would be more likely to find the other side(s) of the story as well. Outside of IHE, I’m having trouble finding any coverage, never mind any original reporting, in the higher ed or ed tech press.

I have a lot of respect for news reporters in general, and I think that most people grossly underestimate how hard the job is. I think highly of Anya as a professional. I like the reporters I interact with most at the Chronicle as well. Nor will I pretend that we are perfect here at e-Literate. We miss important angles and get details wrong our fair share. For example, I doubt that I would have caught the flaw in Purdue’s research if Mike hadn’t brought it to my attention. But collectively, we have to do a better job of providing critical coverage of topics like learning analytics, particularly at a time when so much money is being spent and our entire educational system is starting to be remade on the premise that this stuff will work. And there is absolutely no excuse whatsoever for a research university to not take responsibility for their published research on a topic that is so critical to the future of universities.

The post NPR and Missed (Course) Signals appeared first on e-Literate.

Contributions by Angela Golla, Infogram Deputy Editor

Oracle Infogram - Mon, 2014-07-14 11:13
Contributions by Angela Golla, Infogram Deputy Editor

Oracle Open World 2014, San Francisco, CA
Oracle Open World will be held September 28 - October 2, 2014 in San Francisco, CA.  Learn about the latest technology innovations from Oracle experts.  Register now, as spots fill up fast.  Go to the Open World home page to learn more about the available sessions and keynotes, and to register.

Take a Walk

Scott Spendolini - Mon, 2014-07-14 10:24
Steven Feuerstein (https://twitter.com/stevefeuerstein) just tweeted this:

Improve your programming with a daily regimen of situps (or anything you can do to strengthen abs), walks in the woods, and lots of water.
— Steven Feuerstein (@stevefeuerstein) July 14, 2014

Which, in turn, inspired me to quickly write this post.

The combination of being in IT and working from home leads to lots of hours logged in some sort of chair, whether it's in my home office, at a customer site or a coffee shop.  You don't need to be a doctor to realize that this is not particularly healthy behavior.

So for the past few months, I've incorporated something new into my daily routine: taking a walk.  It doesn't sound like much, and quite honestly, it really isn't.  But, I wish that I had started this years ago, because the benefits of it are huge.

First of all, it's nice to get outside during the day, especially when it's actually nice out.  Nothing can quite compare to it, no matter how many pixels they squeeze into a tablet.  Sometimes I just walk at a leisurely pace, other times I run.  I'm not training for any specific race, nor do I feel compelled to share my statistics over social media.  I just do what I want when I can.

Second of all, it gives me some time to either listen to a podcast, music or to just think.  I've really grown to like the podcasts that the folks at TWiT (http://www.twit.tv) produce, with This Week in Tech being one of my favorites.  Listening to something that interests you makes the time go by so much quicker that you may even be tempted to extend your distance to accommodate the extra content.

In fact, listening to them really puts me in a creative and inspired mood, which helps explain the third benefit: background processing.  I don't know much about neuroscience, but I do know a little bit how my brain works.  If I'm struggling with a difficult problem, I've learned over time that the best thing that I can do is to literally walk away from it.  Going on a walk or run or even a drive allows my brain to "background process" that problem while I focus on other things.  The "A-Ha!" moment that I have is my brain's way of alerting me once the problem has been solved.   Corny, I know, but that's how it works for me.

And lastly - and probably most importantly - I've been able to drop a few pounds because of my walks (combined with better eating habits).  I do use RunKeeper to log my walks and track my weight, because numbers simply don't lie.  It also serves as a source of inspiration if I can beat a personal record or cross a weight milestone.

Big Data doom mongers need to look outside of the marketing department

Steve Jones - Mon, 2014-07-14 09:10
In every change there are hype machines that overplay and sages who call doom.  Into the Big Data arena steps David Searls to proclaim, in an article over at ZDNet, that Big Data is a myth and simply hype which is set to burst. But big data, he said, is nothing more than the myth that collecting vast amounts of data can help companies know customers better than those customers even know
Categories: Fusion Middleware

Lock Timeout Error While Creating Database

Pythian Group - Mon, 2014-07-14 08:07

Recently I worked on an issue where a third-party application was failing during installation. Below is the error returned by the application.

Lock request time out period exceeded. (Microsoft SQL Server, Error: 1222)

The application was failing while trying to create a database. It seemed to have a default timeout of about 60 seconds, after which it failed because the command had not yet returned any results.

I tried creating a test database directly from SQL Server Management Studio and noticed that it was taking a long time as well. When I checked sys.sysprocesses, I found that the session creating the database was experiencing IO-related waits.

Some of the reasons why creating a database might take a long time are:

  • IO bottleneck on the drive where we are creating the database files
  • Large size of Model database
  • Instant File Initialization is not enabled

I verified the size of the model database files and found that the model data file was 5 GB and the log file was 1 GB. I reduced the data file to 1 GB and the log file to 512 MB, after which I was able to create the test database quickly.
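A rough sketch of that check and fix through sqlcmd (the server name is a placeholder; modeldev and modellog are the default logical file names of the model database, so verify them on your instance first):

# Current size of the model files, converted to MB (size is stored in 8 KB pages)
sqlcmd -S MYSERVER -Q "SELECT name, size*8/1024 AS size_mb FROM model.sys.database_files;"

# Shrink the data file to roughly 1 GB and the log file to roughly 512 MB (targets in MB)
sqlcmd -S MYSERVER -Q "USE model; DBCC SHRINKFILE (modeldev, 1024); DBCC SHRINKFILE (modellog, 512);"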

We then restarted the installation of the application, and it completed successfully.

Categories: DBA Blogs

Script to Get the Create Date of an Object from Multiple Databases

Pythian Group - Mon, 2014-07-14 08:04

As a DBA, it is common to get a request to run scripts against production databases. However, there can be environments with multiple databases on the same instance, all of which the script needs to be run against. I have seen an environment with 50+ databases that share the same schema but hold different data, each database serving a different customer.

When we get a request to run large scripts against many databases, it is possible, through oversight, to miss running the script against one or more of them. The requirement then is to verify that the script was executed against all the databases. One way to do this is to pick an object (stored procedure, table, view, function) that was created as part of the script execution, get the create date of that object, and verify that it shows the date and time when we ran the script. The challenge is to get the create date of a specific object from all databases at once with little effort.

Below is the code which helps fetch the create date of the specified object (stored procedure, table, view or function) from all user databases on an instance. Set the object name in @ObjName near the top of the script. Run the code, check the create date in the output, and make sure that the object was created at the time the script was run.

-- Script Originally Written By: Keerthi Deep | http://www.SQLServerF1.com/

Set NOCOUNT ON
Declare @ObjName nvarchar(300)
declare @dbn nvarchar(200)

Set @ObjName = 'Object_Name' -- Specify the name of the Stored Procedure/ Table/View/Function

create table #DatabaseList(dbname nvarchar(2000)) 

Insert into #DatabaseList select name from sys.sysdatabases
where name not in ('master', 'msdb', 'model','tempdb')
order by name asc 

--select * from #DatabaseList
Create table #Output_table (DB nvarchar(200), crdate datetime, ObjectName nvarchar(200))
declare c1 cursor for select dbname from #DatabaseList
open c1
Fetch next from c1 into @dbn
WHILE @@FETCH_STATUS = 0
BEGIN

declare @Query nvarchar(2048)
Set @Query = 'select ''' + @dbn + ''' as DBName, crdate, [name] from ' + @dbn + '.sys.sysobjects where name = ''' + @ObjName + ''''
--print @Query
Insert into #Output_table Exec sp_executesql @Query

FETCH NEXT FROM c1 into @dbn
END
CLOSE c1
DEALLOCATE c1

select * from #Output_table
Drop table #Output_table
Drop table #DatabaseList

Limitations:
This will work only if the object was created with a CREATE command; it will not reflect changes made with an ALTER command.
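If the object may have been altered rather than created, one possible workaround (a suggestion only, for SQL Server 2005 and later) is to read modify_date alongside create_date from sys.objects, since ALTER updates modify_date. Checked against a single database it looks like the line below, and the same columns can be plugged into the dynamic query in the cursor loop above (server and database names are placeholders):

sqlcmd -S MYSERVER -d SomeUserDB -Q "SELECT name, create_date, modify_date FROM sys.objects WHERE name = 'Object_Name';"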

Any suggestions are welcome.

Categories: DBA Blogs