
Feed aggregator

Reading System Logs on SQL Server

Pythian Group - Thu, 2015-07-09 12:54

Recently, while I was working on a backup failure issue, I found that the backup was failing for one particular database. When I ran the backup manually to a different folder it would complete successfully, but not in the folder the backup jobs were originally configured to use. This made me suspect hard disk corruption. In the end, I fixed the backup issue in the interim so that I would not get paged in the future, and to lower the risk of having no backup in place.

Reviewing the Windows Event logs confirmed my suspicion of a faulty hard drive. The log reported messages related to SCSI codes, in particular SCSI Sense Key 3, which indicates a Medium Error. Eventually, the client replaced the hard drive and the database was moved to another drive. In the past month I have had about three cases where serious storage-related messages were reported merely as Information. I have included one case here for your reference, which may help you if you see similar entries in your own logs.

CASE 1 – Here is what I found in the SQL Server error log:

  • Error: 18210, Severity: 16, State: 1
  • BackupIoRequest::WaitForIoCompletion: read failure on backup device ‘G:\MSSQL\Data\SomeDB.mdf’.
  • Msg 3271, Level 16, State 1, Line 1
  • A non-recoverable I/O error occurred on file “G:\MSSQL\Data\SomeDB.mdf:” 121 (The semaphore timeout period has expired.).
  • Msg 3013, Level 16, State 1, Line 1
  • BACKUP DATABASE is terminating abnormally.

When I ran the backup command manually, I found that it ran fine until a specific point (i.e. 55%) before failing again with the above error. I then decided to run DBCC CHECKDB, which reported that a particular table has a consistency error on a particular page. Here are the reported errors:

Msg 8966, Level 16, State 2, Line 1
Unable to read and latch page (1:157134) with latch type SH. 121(The semaphore timeout period has expired.) failed.
Msg 2533, Level 16, State 1, Line 1
Table error: page (1:157134) allocated to object ID 645577338, index ID 0, partition ID 72057594039304192, alloc unit ID 72057594043301888 (type In-row data) 
was not seen. The page may be invalid or may have an incorrect alloc unit ID in its header. The repair level on the DBCC statement caused this repair to be bypassed.

Of course, the repair options did not help, as I had anticipated, since the backup was also failing when it reached 55%. A SELECT statement also failed to complete when I queried object 645577338. The only option left was to create a new table and drop the original one. After this was done, the full backup succeeded. As soon as it completed, we moved the database to another drive.
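For what it is worth, the salvage path looked roughly like the following T-SQL sketch. The object names and the excluded key range are purely illustrative (the real range depends on which pages DBCC CHECKDB flags), so treat this as the shape of the fix rather than the exact commands:

-- Copy the readable rows into a new table, skipping the key range
-- that maps to the damaged pages (hypothetical values shown).
SELECT *
INTO   dbo.SomeTable_new
FROM   dbo.SomeTable
WHERE  id NOT BETWEEN 100000 AND 110000;

-- Swap the new table in for the damaged one.
DROP TABLE dbo.SomeTable;
EXEC sp_rename 'dbo.SomeTable_new', 'SomeTable';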

I was still curious about these errors, so I started looking at the Windows Event logs (System log), filtered to show only Errors and Warnings. That did not surface anything worth reading further, so I removed the filter and carefully reviewed the logs. To my surprise, they showed entries for a bad sector, but these were logged as Information in the System log of Windows Event Viewer.

Event Type: Information
Event Source: Server Administrator
Event Category: Storage Service
Event ID: 2095
Date: 6/10/2015
Time: 1:04:18 AM
User: N/A
Computer: SQLServer
Description: SCSI sense data Sense key: 3 Sense code:11 Sense qualifier: 0:  Physical Disk 0:2 Controller 0, Connector 0.

A different error, warning, or informational message may be printed on your server, depending on what the issue is. There is still much more that could be said to explain the various codes and descriptions.
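If you want to scan the SQL Server error log itself for such messages, the undocumented but widely used xp_readerrorlog procedure is handy; a minimal sketch (the search string is just an example):

-- Search the current error log (0) of the SQL Server instance (1)
-- for entries containing 'I/O'.
EXEC master.dbo.xp_readerrorlog 0, 1, N'I/O';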

You may have noticed that I have referred to this as CASE 1, which means, I will blog one or two more case(s) in the future. Stay tuned!


Learn more about our expertise in SQL Server.


Categories: DBA Blogs

Using a Parallel Gateway without a Merge in OBPM

Jan Kettenis - Thu, 2015-07-09 10:54
In this blog article I give a brief explanation of one aspect of the behavior of the parallel gateway in Oracle BPM. It was updated on September 15, 2015, to add the remark at the end regarding a Complex Merge (thanks to Martien van den Akker).

For the BPMN modelers among us, I have a small quiz.

Given a process model like this, what would be the behavior of Oracle BPM?



  1. It does not compile because OBPM thinks it is not valid BPMN
  2. The flows with Activity 1 and 2 are merged, the token moves to the End event of the process, and then the instance finishes.
  3. Activity 1 and 2 are executed, and then OBPM waits in the merge because, to continue, all tokens have to reach the merge.
  4. The flows with Activity 1 and 2 are merged, the token moves to the End event of the process, and in the meantime the process waits until the timer expires. It will not end before the token reaches the Terminate end event: because not all flows from the split are explicitly merged, the whole process itself serves as an implicit merge.

If this were a magazine, I would now tell you to go to the last page and turn it upside down to read the answer. Or wait until the next issue, in which I announce the prize winners.

Alas, no such thing here so let me give you the answer straight away, which is answer 4:



I must admit I was a bit surprised, as I seem to remember that some bundle patches or patch sets ago the answer would have been 1. But when you look at the BPMN specification, there is nothing that says a parallel gateway always has to have a merge. Strange, then, that OBPM does not let you draw a model without one, but at least it works with a merge with just one incoming flow.

As a matter of fact, to make the End event actually end the instance, you should change it into an Intermediate Message Throw event, and end the process with a Terminate End event as well. At run time that looks awkward, because even when your process ends successfully it has the state Terminated.

For this reason, and perhaps because your audience might just not understand this model, specifically when it concerns a larger one, the following alternative is perhaps easier to understand. You can now choose if and which flow you want to end with a Terminate End event.

To force the process to continue after the merge, a Complex Merge is used that aborts all other pending parallel flows when the timer expires.

WebCenter & BPM: Adaptive Case Management

WebCenter Team - Thu, 2015-07-09 10:47
By Mitchell Palski, Oracle WebCenter Sales Consultant 
We are happy to have Mitchell Palski joining us on the blog for a Q&A around strategies and best practices for how to deliver Adaptive Case Management with WebCenter and BPM.
Q. So to begin, can you describe for our listeners what case management is? A case is a collection of activities that support a specific business objective. Each case has a lifecycle, which is essentially a process that delivers a service to a user and includes:
  • Activities
  • Rules
  • Information, content, etc.
Case management defines specific interactions that a case may have with a system and with the actual users who are involved in a case’s lifecycle. In a self-service solution, these cases are typically:
  1. Initiated by a customer or citizen
  2. Routed through workflow based on specific business rules or employee intervention
  3. Resolved by evaluating data and content that is captured during the lifecycle

Q. Why is Case Management important? How does Case Management differ from Adaptive Case Management?
Case management is an important concept in today's technology because it is a primary means by which services are provided to our end users. Some examples might include:

  • Patient services
  • Traffic violation payments
  • Retirement benefit delivery
  • Building, health, or child safety inspections
  • Employee application, promotion, or incident tracking
Each of these examples ties unique combinations of data and documentation together to provide some meaningful service to an end user. Case management software gives organizations the means to standardize how those services are delivered so that they can be completed accurately, quickly and more efficiently.
Adaptive case management is a way of modeling flexible and data-intensive business processes. Typically, adaptive case management is needed when a case's lifecycle includes:
  • Complex interactions of people and policies
  • Complex decision making that requires subjective judgments to be made
  • Specific dependencies that may need to be overridden based on the combination of fluid circumstances

Adaptive case management allows your organization to employ the use of standard processes and policies, but also allows for flexibility and dynamic decision making when necessary.

Q. How do Oracle WebCenter and Oracle Business Process Management help to deliver Adaptive Case Management? The Oracle Business Process Management Suite includes:
  • Business user-friendly modeling and optimization tools
  • Tools for system integration
  • Business activity monitoring dashboards
  • Rich task and case management capabilities for end users
Oracle BPM Suite gives your organization the tools it needs to illustrate complex case management lifecycles, define and assign business rules, and integrate your processes into critical enterprise systems. From defining your data and process flows to implementing actual case interactions, BPM Suite has an intuitive web-based interface where everyone on your staff can collaborate and deliver the best possible solution for your customers.

WebCenter Portal serves as a secure and role-based presentation layer that can be used to optimize the way that users interact with the case management system. For case management lifecycles to be effective, they need to be easy and intuitive to access as well as provide meaningful contextual content to end users who are interacting with their case. WebCenter Content supports the document management aspect of a case by managing the complete lifecycle of documents that are associated with cases, organizational policies, or any web content that helps to educate end-users.

Q. Do you have any customer or real-world examples you could share with our listeners? The Spirit of Alaska is a retirement benefits program that faced:
  • Limited resources and funding
  • Outdated and undocumented processes
  • A drastic and immediate increase in the number of cases being processed
With the help of Oracle, the Alaska Department of Retirement Benefits was able to:
  • Automate and streamline their business processes
  • Reduce the frequency of data input errors
  • Improve customer service effectiveness
The end result was a solution that not only delivered retirement benefits to citizens more quickly and accurately, but also relieved the burden of the state's business challenges now and in the future.
Thank you, Mitchell for sharing your strategies and best practices on how to deliver Adaptive Case Management with WebCenter and BPM.  You can listen to a podcast on this topic here, and be sure to tune in to the Oracle WebCenter Café Best Practices Podcast Series for more information!

Unizin One Year Later: View of contract reveals . . . nothing of substance

Michael Feldstein - Thu, 2015-07-09 09:18

By Phil Hill

I’ve been meaning to write an update post on Unizin, since we broke the story here at e-Literate in May 2014 and Unizin went public a month later. One year on, it remains the most expensive method of getting the Canvas LMS. There are also plans for a Content Relay and an Analytics Relay, as seen in an ELI presentation, but the actual dates keep slipping.

Unizin Roadmap

e-Literate was able to obtain a copy of the Unizin contract, at least for the founding members, through a public records request. There is nothing to see here. Because there is nothing to see here. The essence of the contract is that a university pays $1.050 million to become a member. The member university then has the right (but not the obligation) to select and pay for actual services. Based on the contract, membership gets you . . . membership. Nothing else.

What is remarkable to me is the portion of the contract spelling out obligations. Section 3.1 calls out that “As a member of the Consortium, University agrees to the following:” and lists:

  • complying with Unizin bylaws and policies;
  • paying the $1.050 million; and
  • designating points of contact and representation on board.

Unizin agrees to nothing. There is literally no description of what Unizin provides beyond this description [emphasis added]:

This Agreement establishes the terms of University’s participation in the Consortium, an unincorporated member-owned association created to provide Consortium Members access to an evolving ecosystem of digitally enabled educational systems and collaborations.

What does access mean? For the past year the only service available has been Canvas as an LMS. When and if the Content Relay and Analytics Relay become available, member institutions will have the right to pay for those. Membership in Unizin gives a school input into defining those services as well.

As we described last year, paying a million dollars to join Unizin does not give a school any of the software. The school has to pay licensing & hosting fees for each service in addition to the initial investment.

The contract goes out of its way to point out that Unizin actually provides nothing. While this is contract legalese, it’s important to note this description in section 6.5 [original emphasized in ALL CAPS but shared here at lower volume].[1]

Consortium operator is not providing the Unizin services, or any other services, licenses, products, offerings or deliverables of any kind to University, and therefore makes no warranties, whether express or implied. Consortium Operator expressly disclaims all warranties in connection with the Unizin services and any other services, licenses, products, offerings or deliverables made available to University under or in connection with this agreement, both express and implied, …[snip]. Consortium Operator will not be liable for any data loss or corruption related to use of the Unizin services.

This contract appears to be at odds with the oft-stated goal of giving institutions control and ownership of their digital tools (also taken from the ELI presentation).

We have a vested interest in staying in control of our data, our students, our content, and our reputation/brand.

I had planned to piece together clues and speculate on what functionality the Content Relay will provide, but given the delays it is probably best to just wait and see. I’ve been told by Unizin insiders and heard publicly at conference presentations since February 2015 about the imminent release of Content Relay, and right now we just have slideware. I have asked for a better description of what functionality the Content Relay will provide, but this information is not yet available.

Unizin leadership and board members understand this quandary. As Bruce Maas, CIO at U Wisconsin, put it to me this spring, his job promoting and explaining Unizin will get a lot easier when there is more to offer than just Canvas as the LMS.

For now, here is the full agreement as signed by the University of Florida [I have removed the signature page and contact information page as I do not see the need to make these public].

Download (PDF, 587KB)

  1. Also note that Unizin is an unincorporated part of Internet2. Internet2 is the “Consortium Operator” and signer of this agreement.


VirtualBox 5.0

Tim Hall - Thu, 2015-07-09 09:02

Oracle VirtualBox 5.0 has been released. You can see the Oracle Virtualization Blog announcement here, which includes a link to the official announcement.

Downloads and changelog in the normal places.

I’m downloading… Now!

Cheers

Tim…

Update: Up and running on my Windows 7 PC at work. Will have to wait until tonight to do it on the Mac and Linux boxes at home… :)

Update 2: Running fine on Mac too. :)


Learn Oracle Apps DBA (R12) with us:Training Starts on 8th of August

Online Apps DBA - Thu, 2015-07-09 05:40
Everyone has a similar question in mind, whether they are freshers or have been in the same field and domain for years: which technology should I learn that is innovative, long-running, and has some sort of creative touch? My answer to all those tech geeks (or would-be tech geeks) is: Oracle technologies. Over the last few years Oracle has developed at such a fast pace that you cannot ignore it. Where there are lots of development activities, project go-lives, and testing, there is one crucial member of the company/team who (usually gets ignored ;-)) manages all the environments and provides an optimised environment to perform all those things: the Apps DBA. Apps DBA is a combination of Oracle DBA and Oracle Applications: double power. It is the first entry point into the big set of technologies that Oracle has developed. Oracle Applications licenses are increasing every year, and all these companies are looking for good Apps DBAs with understanding, knowledge, and, most importantly, an attitude of learning and experimenting (of course, not on PROD!!!). Who can learn Apps DBA? Logically, if I want to answer, here is the list:
  • All the Freshers, Newbies or may be who want to enter Oracle Applications Area.
  • Who is into Core DBA from years and want new technology to learn.
An Apps DBA needs not only conceptual knowledge but practical experience as well; the more you get your hands dirty, the more your learning grows. When I was at your stage, I always searched for an institute or training that offered more practical material and real-time scenarios, but I was not able to find one. Keeping that in mind, K21 Technologies is starting an R12 Apps DBA training on 8th August 2015: practically oriented, with a dedicated instance to play around with, mini projects, and support. Apps DBA is a gateway into Oracle technologies, and from there you can move on to many others such as Fusion Middleware, Fusion Applications, SOA, etc. What topics should I learn to become an Apps DBA? You should start with architecture, installation, patching, cloning, changing schema passwords, and backup & recovery. We cover all of this, including hands-on exercises where you do everything on our server using our step-by-step instructions. Whoever wants to learn, please enrol soon, as seats are limited. K21 focuses on quality training with a full money-back guarantee (if you are not happy after two sessions, you can ask for a full refund).

For further details check

http://k21technologies.com/oracle-apps-dba-training


Categories: APPS Blogs

PK Index

Jonathan Lewis - Wed, 2015-07-08 11:08

Here’s one of those little details that I might have known once, or maybe it wasn’t true in earlier versions of Oracle, or maybe I just never noticed it and it’s “always” been true; and it’s a detail I’ll probably have forgotten again a couple of years from now.  Consider the following two ways of creating a table with a primary key:


Option 1:

create table orders (
        order_id        number(10,0) not null,
        customer_id     number(10,0) not null,
        date_ordered    date         not null,
        other_bits      varchar2(250),
--      constraint ord_fk_cus foreign key(customer_id) references customers,
        constraint ord_pk primary key(order_id)
)
tablespace TS_ORD
;

Option 2:

create table orders (
        order_id        number(10,0) not null,
        customer_id     number(10,0) not null,
        date_ordered    date         not null,
        other_bits      varchar2(250)
)
tablespace TS_OP_DATA
;

alter table orders add constraint ord_pk primary key(order_id);

There’s a significant difference between the two strategies (at least in 11.2.0.4, I haven’t gone back to check earlier versions): in the first form the implicit primary key index is created in the tablespace of the table, in the second form it’s created in the default tablespace of the user. To avoid the risk of putting something in the wrong place you can always add the “using index” clause, for example:


alter table orders add constraint ord_pk primary key (order_id) using index tablespace TS_OP_INDX;
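To confirm where an index actually landed, a quick look at the data dictionary is enough; for example, for the orders table above:

select index_name, tablespace_name
from   user_indexes
where  table_name = 'ORDERS';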

Having noticed / reminded myself of this detail I now have on my todo list a task to check the equivalent behaviour when creating partitioned (or composite partitioned) tables – but that’s a task with a very low priority.


More Kscope15 Impressions

Oracle AppsLab - Wed, 2015-07-08 09:03

Kscope15 (#kscope15) was hosted at Diplomat resort along beautiful Hollywood Beach, and the Scavenger Hunt from OAUX AppsLab infused a hint of fun and excitement between the packed, busy, and serious sessions.


The Scavenger Hunt was quite a comprehensive system for people to win points in various ways, and keep track of events, points and a leaderboard. And of course, we had one Internet of Things (IoT) component that people could search for and tap to win points.

And here is the build, with a powerful battery connected to it, complete with an anti-theft feature, which is double-sided duct tape :) All together, it is a stand-alone, self-contained, and definitely mobile computer.

Isn’t it cool? I overheard on multiple occasions people say it was the coolest thing at the conference.


One of the bartenders at the Community Night reception wanted to trade me the “best” drink of the night for my Raspberry Pi.

I leased it to him for two hours, and he gave me the drink. The fact is that I would have put the Raspberry Pi on his table anyway for the community night event, and he would have given me the drink anyway if I had known how to order it.


On the serious side, APEX (Oracle Application Express) had a good showing with many sessions. Considering our Scavenger Hunt web admin was built on APEX, I am interested in learning it too. After two hands-on sessions, I do feel that I’d use it for quick web apps in the future.

On the database side, the most significant development is ORDS (Oracle REST Data Services) and the ability to call a web endpoint from within the database. This opens up the possibility of monitoring data/state changes at the data level and triggering events into a web server, which in turn can push client reactions via WebSocket.
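As a rough illustration of that idea, here is a minimal PL/SQL sketch; the events table and the /notify endpoint are hypothetical, and a real system would also need a network ACL for the target host:

create or replace trigger trg_events_notify
after insert on events
for each row
declare
  l_req  utl_http.req;
  l_resp utl_http.resp;
begin
  -- push the data change to a (hypothetical) web endpoint, which can
  -- then notify connected browser clients over WebSocket
  l_req  := utl_http.begin_request('http://localhost:8080/notify?id=' || :new.id);
  l_resp := utl_http.get_response(l_req);
  utl_http.end_response(l_resp);
exception
  when others then
    null;  -- never let a failed notification block the transaction
end;
/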

Again, Kscope15 was a very fruitful event for us, as we demonstrated the Scavenger Hunt game and generated lots of interest. It has some potential for large events and enterprise applications, so stay tuned while we give it a few twists in the future.

Editor’s note: Raymond (@yuhuaxie) forgot to mention how much fun he had at Kscope15. Pics because it happened:


ODTUG (@odtug) commissioned a short film, which was shot, edited and produced during the week that was Kscope15. It debuted during the Closing Session, and they have graciously shared it on YouTube. It’s 10 minutes, but very good at capturing what I like about Kscope so much.

Noel appears to talk about the Scavenger Hunt at 7:29. Watch it here.


RMAN -- 4b : Recovering from an Incomplete Restore with OMF Files

Hemant K Chitale - Wed, 2015-07-08 07:51
Following up on my previous post (which had the datafiles as non-OMF), here is a case with OMF files.

SQL> select file_name from dba_data_files
2 where tablespace_name = 'HEMANT';

FILE_NAME
--------------------------------------------------------------------------------
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst84r1w_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst850ts_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85312_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85njw_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85qsq_.dbf

SQL>
SQL> !rm /home/oracle/app/oracle/oradata/HEMANTDB/datafile/*hemant*dbf

SQL> shutdown immediate;
ORA-01116: error in opening database file 6
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst84r1w_.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
SQL> shutdown abort;
ORACLE instance shut down.
SQL>

I have removed the datafiles for a tablespace. Note that the datafiles are all OMF.  I then attempt to restore the tablespace.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:15:21 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 390073016 bytes
Database Buffers 58720256 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> select file# from v$datafile
2 where ts# = (select ts# from v$tablespace where name = 'HEMANT')
3 order by 1;

FILE#
----------
6
7
8
9
11

SQL>
SQL> alter database datafile 6 offline;

Database altered.

SQL> alter database datafile 7 offline;

Database altered.

SQL> alter database datafile 8 offline;

Database altered.

SQL> alter database datafile 9 offline;

Database altered.

SQL> alter database datafile 11 offline;

Database altered.

SQL> alter database open;

Database altered.

SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Wed Jul 8 21:22:02 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN>
RMAN> restore tablespace HEMANT;

Starting restore at 08-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=36 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00007 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szss_.dbf
channel ORA_DISK_1: restoring datafile 00009 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szxb_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58p_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58p_.bkp tag=TAG20150708T211100
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf
channel ORA_DISK_1: restoring datafile 00008 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf
channel ORA_DISK_1: restoring datafile 00011 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3615
Session ID: 29 Serial number: 21

[oracle@localhost ~]$

Once again, the database has crashed in the midst of the RESTORE. Let's check the datafile names.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:25:12 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> set pages60
SQL> select file#, name from v$datafile where file# in (6,7,8,9,11) order by 1;

FILE#
----------
NAME
--------------------------------------------------------------------------------
6
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf

7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf

9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf


SQL>
SQL> select file#, name from v$datafile_header where file# in (6,7,8,9,11) order by 1;

FILE#
----------
NAME
--------------------------------------------------------------------------------
6


7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8


9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11



SQL>

[To understand why I queried both V$DATAFILE and V$DATAFILE_HEADER, see my previous post "Datafiles not Restored  --  using V$DATAFILE and V$DATAFILE_HEADER".]

So, datafiles 7 and 9 have been restored. We can see that in the RESTORE log as well -- "backup piece 1" in the RESTORE had datafiles 7 and 9 and was the only one to complete. Note, though, that datafiles 7 and 9 are now named differently from before: earlier they were "%bst85%", now they are "%bst90%".

So, if we want to re-run the restore, we can use SET NEWNAME for datafiles 7 and 9 to allow Oracle to check that they are already restored.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Wed Jul 8 21:32:12 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> run
2> {set newname for datafile 7 to '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf';
3> set newname for datafile 9 to '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf';
4> restore tablespace HEMANT;}

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 08-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK

skipping datafile 7; already restored to file /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
skipping datafile 9; already restored to file /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf
channel ORA_DISK_1: restoring datafile 00008 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf
channel ORA_DISK_1: restoring datafile 00011 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp tag=TAG20150708T211100
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 08-JUL-15

RMAN>

YES ! Datafiles 7 and 9 were identified as "already restored".
Let's re-check the datafiles and then RECOVER them.

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:37:29 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>
SQL> select name from v$datafile
2 where ts#=(select ts# from v$tablespace where name = 'HEMANT')
3 minus
4 select name from v$datafile_header
5 where ts#=(select ts# from v$tablespace where name = 'HEMANT')
6 /

no rows selected

SQL>
SQL> select * from v$datafile_header where name is null;

no rows selected

SQL>
SQL> recover datafile 6;
Media recovery complete.
SQL> recover datafile 7;
Media recovery complete.
SQL> recover datafile 8;
Media recovery complete.
SQL> recover datafile 9;
Media recovery complete.
SQL> recover datafile 11;
Media recovery complete.
SQL> alter tablespace HEMANT online;
alter tablespace HEMANT online
*
ERROR at line 1:
ORA-01109: database not open


SQL> alter database open;

Database altered.

SQL> alter tablespace HEMANT online;

Tablespace altered.

SQL>
SQL> select owner, segment_name, bytes/1048576 from dba_segments where tablespace_name = 'HEMANT';

OWNER
------------------------------
SEGMENT_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
HEMANT
LARGE_TABLE
272


SQL> select count(*) from hemant.large_table;

COUNT(*)
----------
2404256

SQL>

Yes, I have been able to verify that all the datafiles have been restored.  I have been able to bring the tablespace online and query the data in it.

SQL> set pages60
SQL> select file_id, file_name from dba_data_files where tablespace_name = 'HEMANT';

FILE_ID
----------
FILE_NAME
--------------------------------------------------------------------------------
6
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf

7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf

9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf


SQL>

And, yes the datafile names (%bst90%) are different from what they were earlier (%bst84% and %bst85%).

(Reference :  See Oracle Support Note Doc ID 1621319.1)
.
.
.

Categories: DBA Blogs

12c: New SQL PLAN OPERATIONS and HINTS

XTended Oracle SQL - Wed, 2015-07-08 07:27

This post is just a compilation of links to other people’s articles and short descriptions of new SQL plan operations and hints, with a couple of little additions from me.



OPERATION_NAME and Description:

  • JSONTABLE EVALUATION: JSON_TABLE execution
  • XMLTABLE EVALUATION: This is the new name for “COLLECTION ITERATOR PICKLER FETCH [XQSEQUENCEFROMXMLTYPE]”. XPATH EVALUATION still exists.
  • MATCH RECOGNIZE: New feature “PATTERN MATCHING”
  • STATISTICS COLLECTOR: Optimizer statistics collector
  • OPTIMIZER STATISTICS GATHERING: Automatic optimizer statistics gathering during the following types of bulk loads:
      • CREATE TABLE … AS SELECT
      • INSERT INTO … SELECT into an empty table using a direct-path insert
  • CUBE JOIN: Joining Cubes to Tables and Views
  • EXPRESSION EVALUATION: Each parallel slave executes scalar correlated subqueries from the SELECT-list
  • parallel “FILTER”: Each parallel slave executes its own FILTER operation

Example
SQL> explain plan for
  2  select--+ parallel
  3      owner,object_name
  4  from xt_test l
  5  where exists(select/*+ no_unnest */ 0 from dual where dummy=object_name);

Explained.

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------
Plan hash value: 2189761709

-------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name      | Rows  | Bytes | Cost (%CPU)|   TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |           |     2 |    62 |   177K  (1)|       |      |            |
|   1 |  PX COORDINATOR          |           |       |       |            |       |      |            |
|   2 |   PX SEND QC (RANDOM)    | :TQ10000  | 91060 |  2756K|   113   (0)| Q1,00 | P->S | QC (RAND)  |
|*  3 |    FILTER                |           |       |       |            | Q1,00 | PCWC |            |
|   4 |     PX BLOCK ITERATOR    |           | 91060 |  2756K|   113   (0)| Q1,00 | PCWC |            |
|   5 |      INDEX FAST FULL SCAN| IX_TEST_1 | 91060 |  2756K|   113   (0)| Q1,00 | PCWP |            |
|*  6 |     TABLE ACCESS FULL    | DUAL      |     1 |     2 |     2   (0)|       |      |            |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "SYS"."DUAL" "DUAL" WHERE "DUMMY"=:B1))
   6 - filter("DUMMY"=:B1)
                


  • PX SELECTOR: Execution of serial plan parts in one of the parallel slaves
  • PX SEND 1 SLAVE: Execution of serial plan parts in one of the parallel slaves (single DFO tree)
  • PX TASK: Parallel access to fixed tables (x$) by each node in RAC
  • HYBRID HASH DISTRIBUTION: Adaptive parallel data distribution that does not decide the final distribution method (HASH, BROADCAST or SKEW) until execution time
  • PQ_DISTRIBUTE_WINDOW: In addition to “PX SEND” HASH distribution for WINDOW functions, “PX SEND RANGE” was added
Example
-- TESTPART - list-partitiioned table:
-------------------------------------------------------------------------------------------------
| Operation               | Name     | Rows  | Cost | Pstart| Pstop |   TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------
| SELECT STATEMENT        |          | 74384 |   102|       |       |       |      |            |
|  PX COORDINATOR         |          |       |      |       |       |       |      |            |
|   PX SEND QC (RANDOM)   | :TQ10001 | 74384 |   102|       |       | Q1,01 | P->S | QC (RAND)  |
|    WINDOW SORT          |          | 74384 |   102|       |       | Q1,01 | PCWP |            |
|     PX RECEIVE          |          | 74384 |   100|       |       | Q1,01 | PCWP |            |
|      PX SEND RANGE      | :TQ10000 | 74384 |   100|       |       | Q1,00 | P->P | RANGE      |
|       PX BLOCK ITERATOR |          | 74384 |   100|     1 |     3 | Q1,00 | PCWC |            |
|        TABLE ACCESS FULL| TESTPART | 74384 |   100|     1 |     3 | Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------
Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      PQ_DISTRIBUTE_WINDOW(@"SEL$1" 3)
      FULL(@"SEL$1" “TESTPART"@"SEL$1")
      OUTLINE_LEAF(@"SEL$1")
      ALL_ROWS
      DB_VERSION('12.1.0.2')
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

The corresponding hint is PQ_DISTRIBUTE_WINDOW(@Query_block N), where N=1 for hash, N=2 for range, N=3 for list.

  • VECTOR, KEY VECTOR: In-memory aggregation
  • RECURSIVE ITERATION: Unknown
  • WINDOW CONSOLIDATOR: WINDOW CONSOLIDATOR BUFFER for parallel execution of analytical WINDOW aggregation functions

Example
SQL> explain plan for select/*+ parallel(t 4) PQ_DISTRIBUTE_WINDOW(2) */ count(*) over(partition by owner) cnt,owner from xt_test t;
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------
Plan hash value: 3410952625
---------------------------------------------------------------------------------------------------
| Id | Operation                    |Name    |Rows |Cost |Pstart|Pstop|   TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |        |91060|  124|      |     |       |      |            |
|  1 |  PX COORDINATOR              |        |     |     |      |     |       |      |            |
|  2 |   PX SEND QC (RANDOM)        |:TQ10001|91060|  124|      |     | Q1,01 | P->S | QC (RAND)  |
|  3 |    WINDOW CONSOLIDATOR BUFFER|        |91060|  124|      |     | Q1,01 | PCWP |            |
|  4 |     PX RECEIVE               |        |91060|  124|      |     | Q1,01 | PCWP |            |
|  5 |      PX SEND HASH            |:TQ10000|91060|  124|      |     | Q1,00 | P->P | HASH       |
|  6 |       WINDOW SORT            |        |91060|  124|      |     | Q1,00 | PCWP |            |
|  7 |        PX BLOCK ITERATOR     |        |91060|  122|    1 |    4| Q1,00 | PCWC |            |
|  8 |         TABLE ACCESS FULL    |XT_TEST |91060|  122|    1 |    4| Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 4 because of table property
                

  • DETECT END: Unknown
  • DM EXP MAX AGGR: Unknown
  • DM EXP MAX PAR: Unknown
  • FAULT-TOLERANCE BUFFER: Fault tolerance for parallel statements.
Patent #US8572051: Making parallel execution of structured query language statements fault-tolerant

  • PX_FAULT_TOLERANCE / NO_PX_FAULT_TOLERANCE hints
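Since MATCH RECOGNIZE appears above as a new plan operation, here is a minimal sketch of the 12c pattern-matching syntax that produces it, based on the classic V-shape example; the ticker(symbol, tstamp, price) table is assumed:

select *
from ticker
match_recognize (
  partition by symbol
  order by tstamp
  measures strt.tstamp       as start_tstamp,
           last(down.tstamp) as bottom_tstamp,
           last(up.tstamp)   as end_tstamp
  one row per match
  after match skip to last up
  pattern (strt down+ up+)
  define
    down as down.price < prev(down.price),
    up   as up.price   > prev(up.price)
);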


See also:

  1. Randolf Geist “12c New Optimizer Features”
  2. Randolf Geist “Parallel Execution 12c New Features Overview”


HINTS:


PATH | HINT_CLASS | HINT_NAME | VERSION | VERSION_OUTLINE
ALL | WITH_PLSQL | WITH_PLSQL | 12.1.0.1 |
ALL -> ANSI_REARCH | ANSI_REARCH (1) | ANSI_REARCH / NO_ANSI_REARCH | 12.1.0.2 | 12.1.0.2
ALL -> EXECUTION | BATCH_TABLE_ACCESS_BY_ROWID (2) | BATCH_TABLE_ACCESS_BY_ROWID / NO_BATCH_TABLE_ACCESS_BY_ROWID | 12.1.0.1 | 12.1.0.1
ALL -> EXECUTION | INMEMORY | INMEMORY / NO_INMEMORY | 12.1.0.2 | 12.1.0.2
ALL -> EXECUTION | INMEMORY_PRUNING | INMEMORY_PRUNING / NO_INMEMORY_PRUNING | 12.1.0.2 | 12.1.0.2
ALL -> COMPILATION -> ZONEMAP | ZONEMAP | ZONEMAP / NO_ZONEMAP | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> DATA_SECURITY_REWRITE | DATA_SECURITY_REWRITE_LIMIT | DATA_SECURITY_REWRITE_LIMIT / NO_DATA_SECURITY_REWRITE | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO | CLUSTER_BY_ROWID | CLUSTER_BY_ROWID | 12.1.0.1 (11.2.0.4) | 12.1.0.1
ALL -> COMPILATION -> CBO -> ACCESS_PATH -> BITMAP_TREE | BITMAP_AND | BITMAP_AND | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> ADAPTIVE_PLAN | ADAPTIVE_PLAN | ADAPTIVE_PLAN / NO_ADAPTIVE_PLAN | 12.1.0.2 | 12.1.0.2
ALL -> COMPILATION -> CBO -> AUTO_REOPT | AUTO_REOPTIMIZE (2) | AUTO_REOPTIMIZE / NO_AUTO_REOPTIMIZE | 12.1.0.1 |
ALL -> COMPILATION -> CBO -> JOIN_METHOD | ANTIJOIN | CUBE_AJ | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> JOIN_METHOD | SEMIJOIN | CUBE_SJ | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> JOIN_METHOD -> USE_CUBE | JOIN | USE_CUBE / NO_USE_CUBE | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PARTIAL_JOIN | PARTIAL_JOIN | PARTIAL_JOIN / NO_PARTIAL_JOIN | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PARTITION | USE_HIDDEN_PARTITIONS | USE_HIDDEN_PARTITIONS | 12.1.0.1 |
ALL -> COMPILATION -> CBO -> PQ | PARTIAL_ROLLUP_PUSHDOWN | PARTIAL_ROLLUP_PUSHDOWN / NO_PARTIAL_ROLLUP_PUSHDOWN | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ | PQ_CONCURRENT_UNION | PQ_CONCURRENT_UNION / NO_PQ_CONCURRENT_UNION | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ | PQ_DISTRIBUTE_WINDOW | PQ_DISTRIBUTE_WINDOW | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ | PQ_FILTER | PQ_FILTER | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ | PQ_SKEW | PQ_SKEW / NO_PQ_SKEW | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ | PX_FAULT_TOLERANCE | PX_FAULT_TOLERANCE / NO_PX_FAULT_TOLERANCE | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ -> PQ_REPLICATE | PQ_REPLICATE | PQ_REPLICATE / NO_PQ_REPLICATE | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> STATS -> DBMS_STATS | GATHER_OPTIMIZER_STATISTICS | GATHER_OPTIMIZER_STATISTICS / NO_GATHER_OPTIMIZER_STATISTICS | 12.1.0.1 |
ALL -> COMPILATION -> TRANSFORMATION | ELIM_GROUPBY ? | ELIM_GROUPBY / NO_ELIM_GROUPBY | |
ALL -> COMPILATION -> CBO -> CBQT -> VECTOR_AGG and ALL -> COMPILATION -> TRANSFORMATION -> CBQT -> VECTOR_AGG | USE_VECTOR_AGGREGATION | USE_VECTOR_AGGREGATION / NO_USE_VECTOR_AGGREGATION | 12.1.0.2 | 12.1.0.2
(as above) | VECTOR_TRANSFORM | VECTOR_TRANSFORM / NO_VECTOR_TRANSFORM | 12.1.0.2 | 12.1.0.2
(as above) | VECTOR_TRANSFORM_DIMS | VECTOR_TRANSFORM_DIMS / NO_VECTOR_TRANSFORM_DIMS | 12.1.0.2 | 12.1.0.2
(as above) | VECTOR_TRANSFORM_FACT | VECTOR_TRANSFORM_FACT / NO_VECTOR_TRANSFORM_FACT | 12.1.0.2 | 12.1.0.2
ALL -> COMPILATION -> TRANSFORMATION -> HEURISTIC -> DECORRELATE | DECORRELATE | DECORRELATE / NO_DECORRELATE | 12.1.0.1 | 12.1.0.1

See also:
Fuyuncat(Wei Huang) – “Oracle 12c new SQL Hints”

Categories: Development

Oracle Midlands : Event #10 Summary

Tim Hall - Wed, 2015-07-08 07:26

Last night was Oracle Midlands Event #10 with Jonathan Lewis.

The first session was on “Five Hints for Optimizing SQL”. The emphasis was very much on “shaping the query plan” to help the optimizer make the right decisions, not trying to determine every single join and access structure etc.

In the past I’ve seen Jonathan do sessions on hints, which made me realise how badly I was using them. As a result of that I found myself a little scared by them and gravitating to this “shaping” approach, but my version was not anywhere near as well thought out and reasoned as Jonathan’s approach. It’s kind-of nice to see I was on the right path, even if my approach was the mildly pathetic, infantile version of it. :)

The break consisted of food, chatting and loads of prizes. It’s worth coming even if you don’t want to see the sessions, just to get a chance of winning some swag. :) Everyone also got to take home a Red Stack Tech mug, stress bulb and some sweets as well.

The second session was on “Creating Test Data to Model Production”. I sat there smugly thinking I knew what was coming, only to realise I had only considered a fraction of the issues. I think “eye opening” would be the phrase I would use for this one. Lots of lessons learned!

I must say, after nearly 20 years (19 years and 11 months) in the game, it’s rather disconcerting to feel like such a newbie. It seems to be happening quite a lot recently. :)

So that was another great event! Many thanks to Jonathan for taking the time to come and speak to us. Hopefully we’ll get another visit next year? Well done to Mike for keeping this train rolling. Wonderful job! Thanks to all the sponsors of the prize draw and of course, thanks to Red Stack Tech for their support, allowing the event to remain free! Big thanks to all the members of the Oracle Midlands family that came out to support the event. Without your asses on seats it wouldn’t happen!

The next event will be on the 1st September with Christian Antognini, so put it in your diary!

Cheers

Tim…


OAM PS3 - continued

Frank van Bortel - Wed, 2015-07-08 06:12
Allow auto start (production mode) for your scripts:

cd /oracle/user_projects/domains/oam_domain/servers
mkdir -p oam_server1/security
mkdir -p omsm_server1/security
mkdir -p oam_policy_mgr1/security
vi oam_server1/security/boot.properties
cp oam_server1/security/boot.properties omsm_server1/security/
cp oam_server1/security/boot.properties oam_policy_mgr1/security/

You can now use the command line.
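Initially each boot.properties just holds the admin credentials in clear text; WebLogic encrypts the values the first time the server starts. A minimal sketch (the values are placeholders):

username=weblogic
password=<your-admin-password>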

APEX 5.0 New Features Training September 2015

Denes Kubicek - Wed, 2015-07-08 06:12




Oracle Application Express 5.0 was released on April 15, 2015, and is available for download.

We have all been waiting more than two years for the new release ... and the wait has more than paid off ... the new capabilities will blow you away!

We have taken a close look at the new features and are thoroughly enthusiastic ourselves, because they simplify day-to-day work enormously. We have already written several new applications in APEX 5.0 and migrated some older ones to 5.0.

With the new Page Designer we are more productive than ever before, and the Universal Theme makes it easy to develop really elegant applications. This is a genuine breakthrough: configuration (via template options) and even color adjustments are child's play, and the integrated Theme Roller is simply brilliant!

Modal dialogs, multiple interactive reports per page, enhancements for mobile devices, completely new file handling and many new security features ... this release really is extensive!

Alongside these big features, more than 100 smaller and larger improvements have been implemented as well.

In our courses we have already taught more than 200 APEX fans the best approaches, tips and tricks. The hands-on exercises reinforce them so that you can put them to use immediately ... or look them up later ... when you need them ;). In any case, after the course you will know what is possible!

The two of us (Denes and Dietmar) have been developing APEX applications for our customers almost every day since 2006; we have tried out just about everything with APEX.

Take the shortcut and learn from the best what will matter most in practice.

To take advantage of the new release as quickly as possible, it is best to register immediately and secure your seat!
  • Click on the "Anmeldung zum Kurs" (course registration) link.
  • Enter your registration details and click the "Anmelden" button.
  • You will immediately receive a confirmation email.
  • As soon as you click the confirmation link in that email, your seat is secured and you are definitely in!


Anmeldung zum Kurs

P.S.: The full agenda and further information about the course can be found online in the course description.
Categories: Development

Become an #Oracle Certified Expert for Data Guard!

The Oracle Instructor - Wed, 2015-07-08 04:33

It is with great pride that I can announce a new certification being available – Oracle Database 12c: Data Guard Administration.

We wanted this for years and finally got it now, after having put much effort and expertise into the development of the exam. It is presently in beta and offered with a discount. Come and get it!


Tagged: Data Guard, Oracle Certification
Categories: DBA Blogs

OpenSSL and KeyTool commands

Darwin IT - Wed, 2015-07-08 03:29
Earlier I wrote an article about message transport security in Oracle B2B. It collects a few useful Java keytool and OpenSSL commands for converting and importing certificates.

Today I learned another (from co-worker Joris, thanks).

This is how to get a certificate from an external server.
openssl x509 -in <(openssl s_client -connect {remote-host}:443 -prexit 2>/dev/null) -out /tmp/certificate.crt 

This is useful because in some cases the remote host may be a virtual one, where the specific virtual host's certificate has to be requested by means of Server Name Indication (SNI), while the certificate of the physical host is presented by default. Note that WebLogic (and other JEE application servers such as JBoss, WebSphere, Glassfish, etc.) does not support SNI.
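If you do need the certificate of a specific virtual host from an SNI-enabled server, s_client can send the server name explicitly; a variant of the command above (host names are placeholders):

openssl x509 -in <(openssl s_client -connect {remote-host}:443 -servername {virtual-host} -prexit 2>/dev/null) -out /tmp/virtualhost.crt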

I think I should create a blog entry to collect these useful commands on one page. However, I've found these:


Twitter Oracle Security Open Chat Thursday 6th March

Pete Finnigan - Wed, 2015-07-08 02:50

I will be co-chairing/hosting a Twitter chat on Thursday 6th March at 7pm UK time with Confio. The details are here. The chat is done over Twitter, so it is a little like the Oracle security round table sessions. [Read More]

Posted by Pete On 05/03/14 At 10:17 AM

Categories: Security Blogs

UK Partner Event – Value of Support Investment, Best Practices & Resources

Chris Warticki - Wed, 2015-07-08 02:18
Thanks to the more than 40 Oracle Partner Delegates that joined today's in-person workshop held at our Oracle Reading, U.K., (TVP) office.
Below is the leave-behind list of all the links to the information discussed.

First – The #1 investment is the product itself, therefore be a student of the product
Continued Training & Education - Product, Product, Product

Oracle Partner Network
Oracle Partner Guided Learning Paths
Oracle Learning Library
My Oracle Support Monthly Advisor Webcasts
My Oracle Support Accreditation (Are you Oracle Support accredited?)
Oracle E-Business Suite Release 12.1 and 12.2 Transfer of Information (TOI) Online Training
Oracle University--Training On Demand

#2 – Remain In-the-Know from Oracle Support and Oracle Corporation
Proactive Alerts, Notifications & Subscriptions - Are you receiving the specific notifications you need?

Setup Hot-Topics emails from My Oracle Support
Subscribe to available Newsletters from major product lines and technologies
Events & Webcasts Schedule and Archives
Product Support Newsletters from Oracle Support teams

#3 – Personalize My Oracle Support
My Oracle Support - Efficiencies. 
Customize your Dashboard & use Powerviews

#4 – FIND it, the FIRST time, FAST!
Know what Support knows with 100% certainty whether you need to open a Service Request or not.
Use the Knowledge Browser in My Oracle Support
Catalog: Oracle Information Centers: All Products - Database - Exadata - EBS - JDE - Fusion - Middleware - GBUs - Siebel - Sun Systems - Peoplesoft - Enterprise Manager - Endeca

If you haven’t LEARNED ABOUT IT, haven’t READ ABOUT IT, and you can’t FIND IT – then absolutely log a Service Request.

Service Request Best Practices - What does a fully qualified SR contain?

#5 – Leverage ALL the available Diagnostics tools and Scripts
Proactive Support Portfolio - Categorical List of all Tools, Diagnotics, Scripts and Best Practices (by Product Family)
Service Request Profiles or Existing SR for expediency
TFA Collector - Tool for Enhanced Diagnostic Gathering - Includes and Integrates the following tools that were previously packaged as the "RAC and DB Support Tools Bundle" (MOS Version Only): ORAchk, OSWatcher, Procwatcher, SQLT, ORAtop and DARDA to ensure such data is available for reactive types of issues 

Install - Remote Diagnostic Agent (RDA) for Database, Server Tech & other Products
Over 25 built-in tools and tests. Over 80 seeded profiles
Ora-600/7445 Internal Errors Tool
Performance Diagnostics Guide and Tuning Diagnostics

Install - EBusiness Diagnostic Support Pack for Applications

PSFT – Troubleshooting Assistants for PeopleSoft Enterprise
PSFT – Diagnostic Framework Plugins
PSFT – Performance Diagnostic Tools
PSFT – Cobol Assistance
PSFT - Page, PeopleCode and Online SQL Troubleshooting
PSFT - analyzing *.tracesql files with TraceMagic
PSFT - Prevent and Resolve Performance Issues Advisor

JDE – Performance Workbench
JDE - Server Manager
JDE - Change Assistant
JDE - Configuration Assistant

Guardian Resource Center

Sun Systems Management and Diagnostic Tools
Auto Service Request
Oracle Services Tools Bundle with Sun Explorer Data Collector
Gathering Solaris Performance Data (GUDS)
Sun Gathering Debug Data (Sun GDD)
Oracle Solaris Crash Analysis Tool
Oracle Shared Shell
Oracle Hardware Service Request Automated Diagnosis
Oracle Validation Test Suite
Oracle Hardware Management Pack
PC-check
Oracle ASR Product Page
Oracle STB Product Page
Oracle Secure File Transport

#6 – Engage with Oracle Support

Fill-out Service Request Templates completely
Use all Diagnostics & Data Collectors (432.1)
Upload ALL reports if logging a Service Request
Leverage Oracle Collaborative Support (web conferencing)
Better Yet – Record your issue and upload it (why wait for a scheduled web conference?)
Request Management Attention as necessary

#7 – Expand your Circles of Influence

Oracle Community-wide Resources
My Oracle Support Communities
Oracle Technology Network Forums

Blogs.oracle.com

Oracle PartnerNetwork Strategy Blog

SOA & BPM Partner Community Blog
WebLogic Partner Community EMEA

Systems Technology Enablement for Partners (STEP)
Oracle Partner Enablement for Apps Partners
Oracle on YouTube

Twitter Directory

Oracle Partner Network @oraclepartners
Oracle Partner Biz @ORCLpartnerbiz

Oracle Podcasts

#8 – Understand Oracle Support Policies and Processes

All Technical Support Policies
Lifetime Support Policy
Oracle Support Technical Support Policies
Database, FMW, EM Grid Control and OCS Software Error Correction Policy
E-Business Suite Software Error Correction Policy

Thanks
- Chris Warticki
#Oracle News, Info & Support
@cwarticki


Essbase Studio checkModelInSyncExternal Error

Rittman Mead Consulting - Wed, 2015-07-08 02:12

This week I am back at one of my favourite clients to assist with some issues since the latest upgrade to 11.1.1.9. These guys are seriously technical and know their stuff, so when issues pop up I’m always ready for an interesting and challenging day in the office.

As with all the recent 11g versions, they had to decide between in-place and out-of-place upgrades, and here they opted for an in-place upgrade because they have a fairly settled configuration using Answers, Publisher and Essbase. They were trying to avoid the need to reconfigure components like:

  • SQL Group Providers
  • Customisations
  • Application Roles used in OBIEE and Essbase Filters

Plus, when reconfiguring the above you run the risk of missing something, and it could take a long time to track down what went wrong.

The Problem

In the end this was not a very complicated problem or solution, but since we couldn't find anything on the web or Oracle Support regarding this error, I thought it might be useful to share in case others run into the same issue.

After performing the upgrade from OBIEE 11.1.1.7.140114 to 11.1.1.9, the client was unable to view or edit the Essbase Model properties in Essbase Studio. In fact, they couldn't even see their Essbase Model at all. Only the Cube Schema was visible.

[Screenshot: only the Cube Schema is visible in Essbase Studio]

When we tried to select or edit the schema, the following message appeared:

Error in the external procedure 'checkModelInSyncExternal'. Line = 136.

[Screenshot: the checkModelInSyncExternal error message]

Oracle Support Came To The Party

After trying several different options to fix it, none of which made any difference, the client raised a P1 query with Oracle Support. After going through the initial standard questions and a few messages between the client and Oracle Support, they came back with a solution. All of this within 24 hours…

The Reason

After applying the patch, the catalog version is no longer synchronised with the new version of Essbase Studio Server.

The Solution

Even though we couldn't find any reference to this in the post-patching section of the documentation, the Oracle Essbase Studio 11.1.2.4.000 Readme does contain a section describing the problem and solution.

To fix the problem, we followed these simple steps:

  1. Navigate to
    <ORACLE_BI_HOME>\products\Essbase\EssbaseStudio\Server\scripts.template\
  2. Copy startCommandLineClient.bat.template to
    <ORACLE_BI_HOME>\products\Essbase\EssbaseStudio\Server\
  3. Rename the new startCommandLineClient.bat.template to startCommandLineClient.bat
  4. Edit startCommandLineClient.bat so it looks like this:
    NOTE – Update the paths according to your environment AND use full paths
    @echo off
    
    REM Point JAVA_HOME at the JRE that ships with OBIEE (use a full path)
    set JAVA_HOME=C:\oracle\Oracle_BI1\jdk\jre
    
    REM Launch the Essbase Studio command-line client
    "%JAVA_HOME%\bin\java" -Xmx128m %JAVA_OPTIONS% -jar "C:\oracle\Oracle_BI1\products\Essbase\EssbaseStudio\Server\client.jar"
    
    if %errorlevel% == 0 goto finish
    
    echo .
    echo Press any key to finish the client session
    REM nul is the Windows null device, so pause's own prompt is suppressed
    pause > nul
    
    :finish
  5. Open a CMD window and start startCommandLineClient.bat
  6. Enter the requested information (Host, Username and Password)
  7. Enter the following command and wait for it to complete
    reinit
  8. Type exit to leave the client and close the CMD window

You should now be able to open the Model Properties in Essbase Studio without any issues.

[Screenshot: the Essbase Model properties are visible again alongside the Cube Schema]

Categories: BI & Warehousing

Zoomdata and the Vs

DBMS2 - Tue, 2015-07-07 17:23

Let’s start with some terminology biases:

So when my clients at Zoomdata told me that they’re in the business of providing “the fastest visual analytics for big data”, I understood their choice, but rolled my eyes anyway. And then I immediately started to check how their strategy actually plays against the “big data” Vs.

It turns out that:

  • Zoomdata does its processing server-side, which allows for load-balancing and scale-out. Scale-out and claims of great query speed are relevant when data is of high volume.
  • Zoomdata depends heavily on Spark.
  • Zoomdata’s UI assumes data can be a mix of historical and streaming, and that if looking at streaming data you might want to also check history. This addresses velocity.
  • Zoomdata assumes data can be in a variety of data stores, including:
    • Relational (operational RDBMS, analytic RDBMS, or SQL-on-Hadoop).
    • Files (generic HDFS — Hadoop Distributed File System or S3).*
    • NoSQL (MongoDB and HBase were mentioned).
    • Search (Elasticsearch was mentioned among others).
  • Zoomdata also tries to detect data variability.
  • Zoomdata is OEM/embedding-friendly.

*The HDFS/S3 aspect seems to be a major part of Zoomdata’s current story.

Core aspects of Zoomdata’s technical strategy include: 

  • QlikView/Tableau-style navigation, at least up to a point. (I hope that vendors with a much longer track record have more nuances in their UIs.)
  • Suitable UI for wholly or partially “real-time” data. In particular:
    • Time is an easy dimension to get along the X-axis.
    • You can select current or historical regions from the same graph, aka “data rewind”.
  • Federated query with some predicate pushdown, aka “data fusion” (see the sketch after this list).
    • Data filtering and some GroupBys are pushed down to the underlying data stores — SQL or NoSQL — when it makes sense.*
    • Pushing down joins (assuming that both sides of the join are from the same data store) is a roadmap item.
  • Approximate query results, aka “data sharpening”. Zoomdata simulates high-speed query by first serving you approximate query results, ala Datameer.
  • Spark to finish up queries. Anything that isn’t pushed down to the underlying data store is probably happening in Spark DataFrames.
  • Spark for other kinds of calculations.

*Apparently it doesn’t make sense in some major operational/general-purpose — as opposed to analytic — RDBMS. From those systems, Zoomdata may actually extract and pre-cube data.
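
To make that pushdown split concrete, here is a minimal sketch of how a federation layer might decide what to ship to the underlying store and what to leave for the local engine. Every name here (plan_query, the parameters, the SQL shape) is a hypothetical illustration, not Zoomdata's actual API:

    # Hypothetical sketch of predicate/GroupBy pushdown in a federated
    # query layer. These names illustrate the idea only.

    def plan_query(store_supports_groupby, table, filters, group_by, measure):
        """Split work between the underlying data store and the local engine."""
        where = " AND ".join(filters) or "1=1"
        if store_supports_groupby:
            # Push filtering and aggregation down: the store returns one
            # row per group, so very little data crosses the wire.
            pushed = (f"SELECT {group_by}, SUM({measure}) AS total "
                      f"FROM {table} WHERE {where} GROUP BY {group_by}")
            local = None  # nothing left for the local engine
        else:
            # Push only the filter down; aggregate locally (e.g. in Spark).
            pushed = f"SELECT {group_by}, {measure} FROM {table} WHERE {where}"
            local = f"group by {group_by} and sum {measure} locally"
        return pushed, local

    pushed_sql, local_step = plan_query(
        store_supports_groupby=True,
        table="orders",
        filters=["order_date >= '2015-01-01'", "region = 'EMEA'"],
        group_by="region",
        measure="amount",
    )
    print(pushed_sql)   # the part shipped to the data store
    print(local_step)   # the part, if any, left for the local engine

The join-pushdown roadmap item would extend this kind of planner to ship joins down as well, when both sides of the join live in the same store.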

The technology story for “data sharpening” starts like this (a toy code sketch follows the list):

  • Zoomdata more-or-less samples the underlying data, and returns a result just for the sample. Since this is a small query, it resolves quickly.
  • More precisely, there’s a sequence of approximations, with results based on ever larger samples, until eventually the whole query is answered.
  • Zoomdata has a couple of roadmap items for making these approximations more accurate:
    • The integration of BlinkDB with Spark will hopefully result in actual error bars for the approximations.
    • Zoomdata is working itself on how to avoid sample skew.
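
As a toy illustration of that loop, here is a minimal Python sketch that answers an aggregate over ever larger uniform samples. It is an assumption-laden stand-in, not Zoomdata's implementation, and its naive uniform sampling is exactly where the sample-skew and error-bar work above would come in:

    import random

    def sharpened_mean(rows, fractions=(0.01, 0.1, 0.5, 1.0)):
        """Yield successively better approximations of the mean of rows.

        Each pass samples a larger uniform fraction of the data, so early
        answers arrive fast and the final pass is exact. A real system
        would reuse earlier work and attach error bars to each estimate.
        """
        n = len(rows)
        for frac in fractions:
            k = max(1, int(n * frac))
            sample = rows if frac >= 1.0 else random.sample(rows, k)
            yield frac, sum(sample) / len(sample)

    data = [random.gauss(100, 15) for _ in range(1000000)]
    for frac, estimate in sharpened_mean(data):
        print(f"{frac:>5.0%} of the data -> mean estimate {estimate:.2f}")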

The point of data sharpening, besides simply giving immediate gratification, is that hopefully the results for even a small sample will be enough for the user to determine:

  • Where in particular she wants to drill down.
  • Whether she asked the right query in the first place. :)

I like this early drilldown story for a couple of reasons:

  • I think it matches the way a lot of people work. First you get to the query of the right general structure; then you refine the parameters.
  • It’s good for exact-results performance too. Most of what otherwise might have been a long-running query may not need to happen at all.

Aka “Honey, I shrunk the query!”

Zoomdata’s query execution strategy depends heavily on doing lots of “micro-queries” and unioning their result sets. In particular:

  • Data sharpening relies on a bunch of data-subset queries of increasing size.
  • Streaming/”real-time” BI is built from a bunch of sub-queries restricted to small time slices each.

Even for not-so-micro queries, Zoomdata may find itself doing a lot of unioning, as data from different time periods may be in different stores.
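
A minimal sketch of that micro-query pattern, with a stub standing in for whatever each underlying store would actually execute (the routing function and result shape are invented for illustration):

    from datetime import datetime, timedelta

    def run_micro_query(store, start, end):
        """Stub for one micro-query against one store for one time slice."""
        # A real implementation would run something like:
        #   SELECT ... FROM events WHERE ts >= start AND ts < end
        return [{"store": store, "slice_start": start, "count": 42}]

    def sliced_union(store_for, start, end, slice_width):
        """Fan a query out as one micro-query per time slice, then union.

        store_for maps a slice's start time to the store holding that
        period, since recent and historical data may live in different
        stores.
        """
        results = []
        t = start
        while t < end:
            results.extend(run_micro_query(store_for(t), t, t + slice_width))
            t += slice_width
        return results  # the union of all micro-query result sets

    # Recent slices come from a fast store, older ones from an archive.
    cutoff = datetime(2015, 7, 1)
    route = lambda t: "elasticsearch" if t >= cutoff else "hdfs_parquet"
    rows = sliced_union(route, datetime(2015, 6, 28), datetime(2015, 7, 3),
                        timedelta(days=1))
    print(rows)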

Architectural choices in support of all this include:

  • Zoomdata ships with Spark, but can and probably in most cases should be pointed at an external Spark cluster instead. One point is that Zoomdata itself scales by user count, while the Spark cluster scales by data volume.
  • Zoomdata uses MongoDB off to the side as a metadata store. Except for what’s in that store, Zoomdata seems to be able to load balance rather statelessly. And Zoomdata doesn’t think that the MongoDB store is a bottleneck either.
  • Zoomdata uses Docker.
  • Zoomdata is starting to use Mesos.

When a young company has good ideas, it’s natural to wonder how established or mature this all is. Well:

  • Zoomdata has 86 employees.
  • Zoomdata has (production) customers, success stories, and so on, but can’t yet talk fluently about many production use cases.
  • If we recall that companies don’t always get to do (all) their own positioning, it’s fair to say that Zoomdata started out as “Cloudera’s cheap-option BI buddy”, but I don’t think that’s an accurate characterization at this point.
  • Zoomdata, like almost all young companies in the history of BI, favors a “land-and-expand” adoption strategy. Indeed …
  • … Zoomdata tells prospects it wants to be an additional BI provider to them, rather than rip-and-replacement.

As for technological maturity:

  • Zoomdata’s view of data seems essentially tabular, notwithstanding its facility with streams and NoSQL. It doesn’t seem to have tackled much in the way of event series analytics yet.
  • One of Zoomdata’s success stories is iPad-centric. (Salesperson visits prospect and shows her an informative chart; prospect opens wallet; ka-ching.) So I presume mobile BI is working.
  • Zoomdata is comfortable handling 10s of millions of rows of data, may be strained when handling 100s of millions of rows, and has been tested in-house up to 1 billion rows. But that’s data that lands in Spark. The underlying data being filtered can be much larger, and Zoomdata indeed cites one example of a >40 TB Impala database.
  • When I asked about concurrency, Zoomdata told me of in-house testing, not actual production users.
  • Zoomdata’s list when asked what they don’t do (except through partners, of which they have a bunch) was:
    • Data wrangling.
    • ETL (Extract/Transform/Load).
    • Data transformation. (In a market segment with a lot of Hadoop and Spark, that’s not really redundant with the previous bullet point.)
    • Data cataloguing, ala Alation or Tamr.
    • Machine learning.

Related link

  • I wrote about multiple kinds of approximate query result capabilities, Zoomdata-like or otherwise, back in July, 2012.
Categories: Other