
Feed aggregator

ACFS 12.1.0.2 on Oracle Linux 7.1

Yann Neuhaus - Fri, 2015-07-10 03:56
Recently we wanted to create an ACFS filesystem on a brand new 12.1.0.2 GI installation on Oracle Linux 7.1. According to the documentation this should not be an issue as "Oracle Linux 7 with the Unbreakable Enterprise kernel: 3.8.13-33.el7uek.x86_64 or later" is supported.

Oracle Priority Support Infogram for 09-JUL-2015

Oracle Infogram - Thu, 2015-07-09 15:05

Solaris
The blogosphere is humming with the new Solaris 11.3, and this posting from Oracle brings a bunch of those links together for you:
Here's Your Oracle Solaris 11.3 List of Blog Posts
A couple of general posts of interest. This one from Prakash Sangappa's Weblog: Post-Wait mechanism
And: Secure multi-threaded live migration for kernel zones, from The Zones Zone.
Oracle Support
New ORAchk Version 12.1.0.2.4 Released, from the Database Support Blog at Oracle Communities.
System Administration
Handy Space Monitoring, from Oracle Storage Ops.
OEM
Snap Cloning Databases on Exadata using Enterprise Manager, from the Oracle Enterprise Manager blog.
And from the same blog:
Understanding Plans, Profiles and Policies in Enterprise Manager Ops Center
SQL Developer
Top 10 Things You Might Be Overlooking in Oracle SQL Developer, from That Jeff Smith.
OBIEE
Download Demonstration VM for OBI 11g SampleApp v506 with Big Data, from BI & EPM Partner Community EMEA.
OVM
Oracle VM VirtualBox 5.0 Officially Released!, from Oracle’s Virtualization Blog.
ODI
ODI KMs for Business Intelligence Cloud Service, from the Data Integration blog.
Oracle Stream Explorer
Getting Started with Oracle Stream Explorer free online training at Oracle Learning Library, from SOA & BPM Partner Community Blog.
Java
Java ME 8 Tutorial Series, from The Java Source.
EBS
From the Oracle E-Business Suite Support blog:
Does the Approval Analyzer show "Authentication failed" in the Output?
New to the Procurement Accounting Space - Introducing the EBS iProcurement Change Request Analyzer!
Webcast: Subledger Accounting (SLA) Features within Cost Management
Need Help with PO Output for Communication Issues?

EB Tax Analyzer enhanced to capture Tax Reporting issues!!

Pillars of PowerShell: SQL Server – Part 1

Pythian Group - Thu, 2015-07-09 13:37
Introduction

This is the sixth and final post in the series on the Pillars of PowerShell (at least, part one of the final post). The previous posts in the series are:

  1. Interacting
  2. Commanding
  3. Debugging
  4. Profiling
  5. Windows OS

PowerShell + SQL Server is just cool! You will see folks talk about the ability to perform a task against multiple servers at a time, automate implementing a configuration or database change, or just obtain a bit of consistency when doing certain processes. I tend to use it just because I can, and it is fun to see what I can do. There are some instances where I have used it for a specific purpose and it saved me time, but overall I just chose to use it. On average there are going to be things you can do in PowerShell that could also be done in T-SQL; in those cases you use the tool that fits your needs.

Interacting with SQL Server PowerShell

There are three main ways to interact with SQL Server using PowerShell that I have seen:

  1. SQL Server PowerShell (SQLPS)
  2. SQL Server Management Objects (SMO)
  3. Native .NET coding

I am not going to touch on the third option in this series because it is not something I use enough to discuss. I will say it is not my first choice, but it does serve a purpose at times.

To provide enough information to introduce you to working with PowerShell and SQL Server, I broke this into two parts. In part one, we are going to look at SQL Server PowerShell (SQLPS) and using the SQL Server Provider (SQLSERVER:\). In part two we will go over SMO and what can be accomplished with it.

SQLPS, to me, offers quick access for the one-liner type tasks against SQL Server. Which option you go with is really just a preference, so if it works for you, use it. There are some situations where using the SQL Server Provider actually requires you to mix in SMO (e.g. creating a schema or database role; there is a sketch of this below). It also offers a few cmdlets that are added to (and improved upon) with each release of SQL Server.
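As an illustration of that mixing, here is a minimal sketch of creating a schema by fetching the SMO database object through the provider. It assumes the SQLPS module (covered in the next section) is already loaded, and the server, database, and schema names are placeholders:

# Grab the SMO Database object through the provider path (MYSERVER and MyDB are placeholders)
$db = Get-Item SQLSERVER:\SQL\MYSERVER\DEFAULT\Databases\MyDB

# The provider alone cannot create a schema; drop down to SMO for that part
$schema = New-Object Microsoft.SqlServer.Management.Smo.Schema($db, "Reporting")
$schema.Owner = "dbo"
$schema.Create()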

Loading/Importing

The first thing to understand is how to get the product module into your PowerShell session. As with most products, some portion of the software has to exist on the machine you are working on, or the machine your script is going to be executed on. SQL Server PowerShell and SMO are installed by default if you install the SQL Server Management Tools (aka SSMS and such) for SQL Server 2008 and higher. They can also be found in the SQL Server Feature Pack if you need a more “standalone” type setup on a remote machine.

One thing you should get in the habit of doing in your scripts is verifying the things that commonly cause errors; one of those is dealing with modules. If the module is not loaded when the script is run, your script is just going to spit out a ton of red text. If the prerequisites are not there to begin with, there is no point in continuing. You can verify that a version of the SQLPS module is installed on your machine by running the following command:

Get-Module -ListAvailable -Name SQL*

If you are running SQL Server 2012 or 2014 you will see something like this:

[screenshot: Get-Module output listing the SQLPS module]

This works in a similar fashion when you want to verify if the SQL Server 2008 snap-in is loaded:

[screenshot: verifying the SQL Server 2008 snap-in]

I generally do not want to have to remember or type out these commands all the time when I am doing things on the fly, so I will add this bit of code to my PowerShell Profile:

Push-Location
Import-Module SQLPS -DisableNameChecking -ErrorAction 'Stop'
Pop-Location

#Load SQL Server 2008 by uncommenting next line
#Add-PSSnapin *SQL* -ErrorAction 'Stop'

One cool thing that most cmdlets you use in PowerShell support is the -ErrorAction parameter. There are a few different values you can use for this parameter, and you can find those by checking the help on about_CommonParameters. If your script is one that is going to be interactive or run manually, I would use -ErrorAction 'Inquire' instead; try it out on a machine that does not have the module installed to see what happens. Once you have the module or snap-in loaded you will be able to access the SQL Server PowerShell Provider.
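For instance, the import line from the profile snippet above could be made interactive like this ('Inquire' prompts you for a decision instead of just failing):

Push-Location
# Prompts with Halt/Suspend/Continue options if the import fails
Import-Module SQLPS -DisableNameChecking -ErrorAction 'Inquire'
Pop-Location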

One side note: there is actually a “sqlps.exe” utility that is easily accessible in most cases via the right-click menu in SSMS (e.g. right-click on the “Databases” node in Object Explorer). If you open this, you are dropped into the SQLPS provider at the “directory” of the node you opened from in SSMS. However convenient that may seem, it is something that was added to the deprecation list with SQL Server 2012, so there’s not much point in talking about it. It has its own little quirks, so most folks steer clear of using it anymore.

Being Specific

The code I use in my profile is going to load the most current version of the module found on my system; at least it should, but it may not do what you expect every time. In some circumstances, when you are developing scripts on your own system, you may need to import only a specific version, especially if you are in a mixed-version environment for SQL Server. You can load a specific version of the module by using Get-Module to find your version and passing it to Import-Module.

Get-Module -ListAvailable -Name SQLPS | select name, path
#110 = SQL Server 2012, 120 = SQL Server 2014, 130 = SQL Server 2016
Push-Location
Get-Module -ListAvailable -Name SQLPS |
     where {$_.path -match "110"} | Import-Module
Pop-Location

# To show that it was indeed loaded
Get-Module -Name SQLPS | select name, path

#If you want to switch to another one, you need to remove it
Remove-Module SQLPS
Authentication

By default, when you browse the SQLPS provider (or most providers, actually), it is going to use the account that is running the PowerShell session: Windows Authentication. If you find yourself working with an instance that requires SQL Login authentication, don’t lose hope. You can connect to an instance via the SQL Server Provider with a SQL Login. There is an MSDN article that provides a complete function you can use to create a connection for that purpose. It does not show a version of the article for SQL Server 2008, but I tested this with SQL Server 2008 R2 and it worked fine.
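The same idea can be sketched directly with SMO; this is a minimal example, not the function from the article. The instance name and login are placeholders, and it assumes the SQLPS module (and therefore the SMO assemblies) is loaded:

# MYSERVER\SQL12 and sqlLogin are hypothetical names
$secPwd = Read-Host "Password for sqlLogin" -AsSecureString
$conn = New-Object Microsoft.SqlServer.Management.Common.ServerConnection("MYSERVER\SQL12", "sqlLogin", $secPwd)
$srv = New-Object Microsoft.SqlServer.Management.Smo.Server($conn)
$srv.Databases | Select-Object Name, Status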

[screenshot: connecting to the SQL Server provider with a SQL login]

One important note, which you can learn from the function in that article: the password is kept secure and is not stored or processed in plain text.

SQLPS Cmdlets

SQLPS, as noted previously, offers a handful of cmdlets for performing a few administrative tasks against SQL Server instances. The majority of the ones you will find with SQL Server 2012, for example, revolve around Availability Groups (e.g. disabling, creating, removing, etc.). The other notable ones are Backup-SqlDatabase and Restore-SqlDatabase; these do exactly what you think, but with a few limitations. The backup cmdlet can actually only perform a FULL, LOG, or FILE level backup (I was not sure why they did not offer support for differential backups; see the update below). They can still be useful, for example, for automating backups of production databases to “refresh” development or testing environments, as the backup cmdlet does support doing a copy-only backup. Also, if you deal with Express Edition you can use this cmdlet and a scheduled task to back up those databases.

Update 7/13/2015: One correction, which I should have checked previously: the backup cmdlet for 2012 and above does include an “-Incremental” parameter for performing differential backups.

The other main cmdlet you get with SQLPS is what most people consider the replacement for the sqlcmd utility: Invoke-Sqlcmd. The main thing you get from the cmdlet is smarter output, in the sense that PowerShell will more appropriately detect the data types coming out, compared to the utility that just treated everything as a string.
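A quick way to see that typing in action (the instance name is a placeholder):

$rows = Invoke-Sqlcmd -ServerInstance "MYSERVER\SQL12" -Query "SELECT name, create_date FROM sys.databases"
$rows[0].create_date.GetType().Name   # DateTime, not String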

SQLPS One-liners

Working with the SQL Server Provider, you traverse it as you would a drive on your computer. So you can use the cmdlet Get-ChildItem, or do as most folks and use the alias dir. The main thing to understand is the first few “directories” used to access a given SQL Server instance. There are actually multiple root directories under the provider, which you can see just by running “dir SQLSERVER:\“. You can tell by the description what each one is for; the one we are interested in is the “Database Engine”.

[screenshot: the root directories under SQLSERVER:\]

Once you get beyond the root directory it can require a bit of patience, as the provider is slow to respond or return information. If you want to dig into an instance of SQL Server you just need to understand the structure of the provider; it generally follows this syntax: <Provider>:\<root>\<hostname>\<instance name>\. The instance name will be “DEFAULT” if you are dealing with a SQL Server default instance. If you have a named instance you just add the name of the instance (minus the server name).
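So, for example, listing the databases on a default instance versus a named instance looks like this (hostname and instance name are placeholders):

# Default instance
dir SQLSERVER:\SQL\MYSERVER\DEFAULT\Databases

# Named instance MYSERVER\SQL12
dir SQLSERVER:\SQL\MYSERVER\SQL12\Databases | select Name, RecoveryModel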

To provide a real-world example: Avail Monitoring is the custom tool Pythian developed to monitor the SQL Server environments of our customers (or Oracle or MySQL… you get the point). One of the features it includes, among many, is monitoring for failed jobs. We customize the monitoring around the customer’s requirements, so some job failures page us immediately when they occur, while others may allow a few extra failures before we are notified to investigate. This is all done without any intervention required by the customer, and from the notification I know which job failed. Right off you are going to want to check the job history for that job to see what information shows up, and I can use the SQLPS Provider to do just that:

# To see the job history
dir SQLSERVER:\SQL\MANATARMS\SQL12\JobServer\Jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.EnumHistory()} | select message, rundate -first 5 | format-list
# if I needed to start the job again
$jobs = dir SQLSERVER:\SQL\MANATARMS\SQL12\JobServer\Jobs
$jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.Start()}

You might think that is a good bit of typing, but consider how long it would take me to do the same thing through SSMS… I can type much faster than I can click with a mouse.

Anyway, to close things out, I thought I would show one of the coolest things SQLPS can be used for: scripting out stuff. Just about every “directory” you go into with the provider offers a method named “Script()”.

$jobs | where {$_.name -eq "Test_MyFailedJob"} | foreach {$_.Script()}

This gets me the T-SQL equivalent of the job, just like SSMS provides; it can be used to document your jobs or when refreshing a development server.

Summary

I hope you got an idea of what SQLPS can do from the information above; one-liners are always fun to discover. The SQL Server Provider is not the most used tool out there by DBAs, but it can be a life-saver at times. In the next post we will dig into using SMO and the awesome power it offers.

 

Discover more about our expertise in SQL Server

The post Pillars of PowerShell: SQL Server – Part 1 appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Kscope15 - It's a Wrap, Part II

Chet Justice - Thu, 2015-07-09 13:04
Another fantastic Kscope in the can.

This was my final year in an official capacity, which was a lot more difficult to deal with than I had anticipated. Here's my record of service:
  • 2010 (2011, Long Beach) - I was on the database abstract review committee run by Lewis Cunningham. I ended up volunteering to help put together the Sunday Symposium and with the help of Dominic Delmolino, Cary Millsap and Kris Rice, I felt I did a pretty decent job.
  • 2011 (2012, San Antonio) - Database track lead. I believe this is the year that Oracle started running the Sunday Symposiums. Kris again led the charge, with some input from those other two from the year before (DevOps oriented).
  • 2012 (2013, New Orleans) Content co-chair for the traditional stuff (Database, APEX, ADF), Interview Monkey (Tom Kyte OMFG!), OOW/ODTUG Coordinator, etc.
  • 2013 (2014, Seattle) Content co-chair for the traditional stuff (Database, APEX, ADF), Interview Monkey, OOW/ODTUG Coordinator, etc.
  • 2014 (2015, Hollywood, FL) Content co-chair for the traditional stuff (Database, APEX, ADF)

This has been a wonderful time for me both professionally and, more importantly to me, personally. Obviously I had a big voice in the direction of content. Also, and maybe hard to believe, I actually presented for the first time. Slotted against Mr. Kyte. I reminded everyone of that too. Multiple times. It seemed to go well though. Only a few made fun of me.

I was constantly recruiting too. "Did you submit an abstract?" "No? Why not?" And I'd go into my own personal diatribe (ignoring my own lack of presenting) about why they should present. Sarah Craynon Zumbrum summed it up pretty well in a recent article.

But it was the connections I made, the people I met, the stories I shared (#ampm, #cupcakeshirt, etc.), and the friends that I made that have had the most impact on me. Kscope is unique in that way because of its size... at Collaborate or OOW, you'll be lucky to see someone more than once or twice; at Kscope you're running into everyone constantly.

How could I forget? #tadasforkate! This year was even more special. For those that don't know, Katezilla is my profoundly delayed but equally profoundly happy 10 y/o daughter. Just prior to the conference her physical therapist taught her "tada!" and Kate would hold her hands up high in the air and everyone around would yell, Tada! I got this crazy idea to ask others to do it and I would film it. Thirty or forty videos and hundreds of participants later...



So a gigantic thank you to everyone who made this possible for me.
Here's a short list of those that had a direct impact on me...
  • Lewis Cunningham - he asked me to be a reviewer which started all of this off.
  • Mike Riley - can't really say enough about Mike. After turning me away a long time ago (jerk), he was probably my biggest supporter over the years. (Remind me next year to tell you about "The Hug.") Mike, and his family, are very dear to me.
  • Monty Latiolais (rhymes with Frito Lay I would tell myself) - How can you not love this guy?
  • Natalie Delemar - Co-chair for EPM/BI and then boss as Conference Chair.
  • Opal Alapat - Co-chair for EPM/BI and one of my favorite humans ever invented. I aspire to be more organized, assertive, and bad-ass like Opal.
That list is by no means exhaustive. It doesn't even include the staff at YCC, like Crystal Walton, Lauren Prezby, and everyone else there. Nor does it include the very long list of Very Special People I've met. I consider myself very fortunate and incredibly grateful.

What's the future hold?
I have no idea. My people are in talks with Helen J. Sander's people to do one or more presentations next year, so there's that. Speaking of which... it's in Chicago. Abstract submissions start soon; I hope you plan on submitting. If you're not ready to submit, I hope you try to take part in shaping the content by joining one of the roughly 10 abstract review committees. Who knows where they may lead you?

Finally, here's the It's a Wrap video from Kscope15 (see Helen's story there). Here's Kscope16's site. Go sign up.

Categories: BI & Warehousing

Reading System Logs on SQL Server

Pythian Group - Thu, 2015-07-09 12:54

Recently, while I was working on a backup failure issue, I found that the backup was failing for a particular database. When I ran the backup manually to a different folder it would complete successfully, but not to the folder the backup jobs were originally configured to use. This made me suspect hard disk corruption. In the end, I fixed the backup issue in the interim so that I would not get paged in the future, and to lower the risk of having no backup in place.

Upon reviewing the Windows Event logs, it was revealed that I was right to suspect a faulty hard drive. The log reported some messages related to SCSI codes, especially SCSI Sense Key 3, which means the SCSI device had a Medium Error. Eventually, the hard drive was replaced by the client and the database was moved to another drive. In the past month, I have had about 3 cases where I observed serious storage-related messages reported as Information. I have included one case here for your reference, which may help you if you see such things in your own logs.

CASE 1 – Here is what I found in the SQL Server error log:

  • Error: 18210, Severity: 16, State: 1
  • BackupIoRequest::WaitForIoCompletion: read failure on backup device ‘G:\MSSQL\Data\SomeDB.mdf’.
  • Msg 3271, Level 16, State 1, Line 1
  • A non-recoverable I/O error occurred on file “G:\MSSQL\Data\SomeDB.mdf:” 121 (The semaphore timeout period has expired.).
  • Msg 3013, Level 16, State 1, Line 1
  • BACKUP DATABASE is terminating abnormally.

When I ran the backup command manually I found that it ran fine until a specific point (i.e. 55%), where it failed again with the above error. Next, I decided to run DBCC CHECKDB, which reported a consistency error for a particular table at a particular page. Here are the reported errors:

Msg 8966, Level 16, State 2, Line 1
Unable to read and latch page (1:157134) with latch type SH. 121(The semaphore timeout period has expired.) failed.
Msg 2533, Level 16, State 1, Line 1
Table error: page (1:157134) allocated to object ID 645577338, index ID 0, partition ID 72057594039304192, alloc unit ID 72057594043301888 (type In-row data) 
was not seen. The page may be invalid or may have an incorrect alloc unit ID in its header. The repair level on the DBCC statement caused this repair to be bypassed.

Of course, the repair options did not help, as I had anticipated initially, since the backup was also failing when it reached 55%. A SELECT statement also failed to complete when I queried object 645577338. The only option I was left with was to create a new table and drop the original table. After this had been done, the full backup succeeded. As soon as this was completed we moved the database to another drive.

I was still curious about these errors, so I started looking at the Windows Event Logs (System folder), filtering them to show only Errors and Warnings. However, this did not show me anything that warranted reading further. Thus, I removed the filter and carefully reviewed the logs. To my surprise, the logs showed entries for a bad sector, but in the Information section of the Windows Event Viewer System folder.

Event Type: Information
Event Source: Server Administrator
Event Category: Storage Service
Event ID: 2095
Date: 6/10/2015
Time: 1:04:18 AM
User: N/A
Computer: SQLServer
Description: SCSI sense data Sense key: 3 Sense code:11 Sense qualifier: 0:  Physical Disk 0:2 Controller 0, Connector 0.

There could be a different error, warning, or information message printed on your server depending on what the issue is; a fuller explanation of the codes and their descriptions is a topic for another day.

You may have noticed that I referred to this as CASE 1, which means I will blog about one or two more cases in the future. Stay tuned!

Photo credit: Hard Disk KO via photopin (license)

Learn more about our expertise in SQL Server.

The post Reading System Logs on SQL Server appeared first on Pythian - Data Experts Blog.

Categories: DBA Blogs

Using a Parallel Gateway without a Merge in OBPM

Jan Kettenis - Thu, 2015-07-09 10:54
In this blog article I give a brief explanation of some aspects of the behavior of the parallel gateway in Oracle BPM. This article was updated on September 15, 2015, to add the remark at the end regarding a Complex Merge (thanks to Martien van den Akker).

For the BPMN modelers among us, I have a small quiz.

Given a process model like this, what would be the behavior of Oracle BPM?



  1. It does not compile because OBPM thinks it is not valid BPMN
  2. The flows with Activity 1 and 2 are merged, the token moves to the End event of the process, and then the instance finishes.
  3. Activity 1 and 2 are executed, and then OBPM waits in the merge because to continue all tokens have to reach the merge.
  4. The flows with Activity 1 and 2 are merged, the token moves to the End event of the process, and in the meantime the process waits until the timer expires. It will not end before the token reaches the Terminate end event, because when not all flows from the split are explicitly merged, the whole process itself serves as an implicit merge.

If this would be some magazine, I would now tell you to go to the last page and turn it upside down to read the answer. Or wait until the next issue in which I announce the prize winners.

Alas, no such thing here so let me give you the answer straight away, which is answer 4:



I must admit I was a bit surprised, as I seem to remember that some bundle patches or patch sets ago it would have been answer 1. But when you look at the BPMN specification there is nothing that says a parallel gateway always has to have a merge. Strange, then, that OBPM does not let you draw a model without one, but at least it works with a merge with just one incoming flow.

As a matter of fact, to make the End event actually end the instance, you should change it into an Intermediate Message Throw event and end the process with a Terminate End event as well. At run time that looks awkward, because even when your process ends successfully it has the state Terminated.

For this reason, and perhaps because your audience might just not understand this model (specifically when it concerns a larger one), the following alternative is perhaps easier to understand. You can now choose if and which flow you want to end with a Terminate End event.

To force the process to continue after the merge, a Complex Merge is used that aborts all other pending parallel flows when the timer expires.

WebCenter & BPM: Adaptive Case Management

WebCenter Team - Thu, 2015-07-09 10:47
By Mitchell Palski, Oracle WebCenter Sales Consultant 
We are happy to have Mitchell Palski joining us on the blog for a Q&A around strategies and best practices for how to deliver Adaptive Case Management with WebCenter and BPM.
Q. So to begin, can you describe for our listeners what case management is?
A case is a collection of activities that support a specific business objective. Each case has a lifecycle, which is essentially a process that delivers a service to a user and includes:
  • Activities
  • Rules
  • Information, content, etc.
Case management defines specific interactions that a case may have with a system and with the actual users who are involved in a case’s lifecycle. In a self-service solution, these cases are typically:
  1. Initiated by a customer or citizen
  2. Routed through workflow based on specific business rules or employee intervention
  3. Resolved by evaluating data and content that is captured during the lifecycle

Q. Why is Case Management important? How does Case Management differ from Adaptive Case Management?
Case management is an important concept in today’s technology because it is a primary means of how services are provided to our end users. Some examples might include:

  • Patient services
  • Traffic violation payments
  • Retirement benefit delivery
  • Building, health, or child safety inspections
  • Employee application, promotion, or incident tracking
Each of these examples ties unique combinations of data and documentation together to provide some meaningful service to an end user. Case management software gives organizations the means to standardize how those services are delivered so that they can be completed accurately, quickly and more efficiently.
Adaptive case management is way of modeling flexible and data intensive business processes. Typically, adaptive case management is needed when a case’s lifecycle includes:
  • Complex interactions of people and policies
  • Complex decision making that require subjective judgments to be made
  • Specific dependencies that may need to be overridden based on the combination of fluid circumstances

Adaptive case management allows your organization to employ the use of standard processes and policies, but also allows for flexibility and dynamic decision making when necessary.

Q. How do Oracle WebCenter and Oracle Business Process Management help to deliver Adaptive Case Management?
The Oracle Business Process Management Suite includes:
  • Business user-friendly modeling and optimization tools
  • Tools for system integration
  • Business activity monitoring dashboards
  • Rich task and case management capabilities for end users
Oracle BPM Suite gives your organization the tools it needs to illustrate complex case management lifecycles, define and assign business rules, and integrate your processes with critical enterprise systems. From defining your data and process flows to implementing actual case interactions, BPM Suite has an intuitive web-based interface where everyone on your staff can collaborate and deliver the best possible solution for your customers.

WebCenter Portal serves as a secure and role-based presentation layer that can be used to optimize the way that users interact with the case management system. For case management lifecycles to be effective, they need to be easy and intuitive to access as well as provide meaningful contextual content to end users who are interacting with their case. WebCenter Content supports the document management aspect of a case by managing the complete lifecycle of documents that are associated with cases, organizational policies, or any web content that helps to educate end-users.

Q. Do you have any customer or real-world examples you could share with our listeners?
The Spirit of Alaska is a retirement benefits program that faced:
  • Limited resources and funding
  • Out-dated and undocumented processes
  • A drastic and immediate increase in the number of cases being processed
With the help of Oracle, the Alaska Department of Retirement Benefits was able to:
  • Automate and streamline their business processes
  • Reduce the frequency of data input errors
  • Improve customer service effectiveness
The end result was a solution that not only delivered retirement benefits to citizens more quickly and accurately, but also relieved the burden of the state’s business challenges now and in the future.
Thank you, Mitchell, for sharing your strategies and best practices on how to deliver Adaptive Case Management with WebCenter and BPM. You can listen to a podcast on this topic here, and be sure to tune in to the Oracle WebCenter Café Best Practices Podcast Series for more information!

Unizin One Year Later: View of contract reveals . . . nothing of substance

Michael Feldstein - Thu, 2015-07-09 09:18

By Phil HillMore Posts (343)

I’ve been meaning to write an update post on Unizin, as we broke the story here at e-Literate in May 2014 and Unizin went public a month later. One year later, we still have the most expensive method of getting the Canvas LMS. There are also plans for a Content Relay and an Analytics Relay, as seen in the ELI presentation, but the actual dates keep slipping.

Unizin Roadmap

e-Literate was able to obtain a copy of the Unizin contract, at least for the founding members, through a public records request. There is nothing to see here. Because there is nothing to see here. The essence of the contract is for a university to pay $1.050 million to become a member. The member university then has a right (but not an obligation) to then select and pay for actual services. Based on the contract, membership gets you . . . membership. Nothing else.

What is remarkable to me is the portion of the contract spelling out obligations. Section 3.1 calls out that “As a member of the Consortium, University agrees to the following:” and lists:

  • complying with Unizin bylaws and policies;
  • paying the $1.050 million; and
  • designating points of contact and representation on board.

Unizin agrees to nothing. There is literally no description of what Unizin provides beyond this description [emphasis added]:

This Agreement establishes the terms of University’s participation in the Consortium, an unincorporated member-owned association created to provide Consortium Members access to an evolving ecosystem of digitally enabled educational systems and collaborations.

What does access mean? For the past year the only service available has been Canvas as an LMS. When and if the Content Relay and Analytics Relay become available, member institutions will have the right to pay for those. Membership in Unizin gives a school input into defining those services as well.

As we described last year, paying a million dollars to join Unizin does not give a school any of the software. The school has to pay licensing & hosting fees for each service in addition to the initial investment.

The contract goes out of its way to point out that Unizin actually provides nothing. While this is contract legalese, it’s important to note this description in section 6.5 [original emphasized in ALL CAPS but shared here at lower volume].[1]

Consortium operator is not providing the Unizin services, or any other services, licenses, products, offerings or deliverables of any kind to University, and therefore makes no warranties, whether express or implied. Consortium Operator expressly disclaims all warranties in connection with the Unizin services and any other services, licenses, products, offerings or deliverables made available to University under or in connection with this agreement, both express and implied, …[snip]. Consortium Operator will not be liable for any data loss or corruption related to use of the Unizin services.

This contract appears to be at odds with the oft-stated goal of giving institutions control and ownership of their digital tools (also taken from ELI presentation).

We have a vested interest in staying in control of our data, our students, our content, and our reputation/brand.

I had planned to piece together clues and speculate on what functionality the Content Relay will provide, but given the delays it is probably best to just wait and see. Since February 2015 I have been told by Unizin insiders, and have heard publicly at conference presentations, about the imminent release of the Content Relay, and right now we just have slideware. I have asked for a better description of what functionality the Content Relay will provide, but this information is not yet available.

Unizin leadership and board members understand this quandary. As Bruce Maas, CIO at U Wisconsin, put it to me this spring, his job promoting and explaining Unizin will get a lot easier when there is more to offer than just Canvas as the LMS.

For now, here is the full agreement as signed by the University of Florida [I have removed the signature page and contact information page as I do not see the need to make these public].

Download (PDF, 587KB)

  1. Also note that Unizin is an unincorporated part of Internet2. Internet2 is the “Consortium Operator” and signer of this agreement.

The post Unizin One Year Later: View of contract reveals . . . nothing of substance appeared first on e-Literate.

VirtualBox 5.0

Tim Hall - Thu, 2015-07-09 09:02

Oracle VirtualBox 5.0 has been released. You can see the Oracle Virtualization Blog announcement here, which includes a link to the official announcement.

Downloads and changelog in the normal places.

I’m downloading… Now!

Cheers

Tim…

Update: Up and running on my Windows 7 PC at work. Will have to wait until tonight to do it on the Mac and Linux boxes at home… :)

Update 2: Running fine on Mac too. :)


Learn Oracle Apps DBA (R12) with us: Training Starts on 8th of August

Online Apps DBA - Thu, 2015-07-09 05:40
Everyone has a similar question in mind, whether they are freshers or have been in the same field and domain for years: which technology should we learn that is innovative, long-running, and has some sort of creative touch? My answer to all those tech geeks, or would-be tech geeks, is Oracle technologies. Over the last few years (will not go beyond that !!!) Oracle has developed at such a fast pace that you cannot ignore it. When there are lots of development activities going on, project go-lives, and testing, there is one crucial member of the company/team who (usually gets ignored ;-)) manages all the environments and provides an optimised environment to perform all those things: the Apps DBA. Apps DBA is a combination of Oracle DBA and Oracle Applications - double power. Apps DBA is the first entry towards the big set of technologies which Oracle has developed. Oracle Applications licenses are increasing every year, and all these companies are looking for good Apps DBAs who have understanding, knowledge and, most important of all, a learning attitude and willingness to experiment (of course not on PROD!!!). Who can learn Apps DBA? Here is the list:
  • Freshers and newbies, or anyone who wants to enter the Oracle Applications area.
  • Anyone who has been a core DBA for years and wants a new technology to learn.
The Apps DBA requirement is not only conceptual but practical as well: the more you get your hands dirty, the more your learning grows. When I was at your stage I always searched for an institute or training that offered more practical content and real-time scenarios, but I was not able to find one. Keeping that in mind, K21 Technologies is starting Apps DBA (R12) training from 8th Aug 2015: more practically oriented, with a dedicated instance to play around with, mini projects, and support. Apps DBA is a gateway to Oracle technologies, and you can move further with many feathers in your cap like Fusion Middleware, Fusion Applications, SOA, etc. What topics should I learn to become an Apps DBA? To start, you should learn architecture, installation, patching, cloning, changing schema passwords, and backup & recovery. We cover all of this, including hands-on work where you do all these using our step-by-step instructions on our server. Whoever wants to learn, please get enrolled soon as seats are limited. K21 focuses on quality training with a full money-back guarantee (if you are not happy after 2 sessions then you can ask for a full refund).

For further details check

http://k21technologies.com/oracle-apps-dba-training

The post Learn Oracle Apps DBA (R12) with us:Training Starts on 8th of August appeared first on Oracle : Design, Implement & Maintain.

Categories: APPS Blogs

PK Index

Jonathan Lewis - Wed, 2015-07-08 11:08

Here’s one of those little details that I might have known once, or maybe it wasn’t true in earlier versions of Oracle, or maybe I just never noticed it and it’s “always” been true; it’s also a detail I’ll probably have forgotten again a couple of years from now. Consider the following two ways of creating a table with a primary key:


Option 1:

create table orders (
        order_id        number(10,0) not null,
        customer_id     number(10,0) not null,
        date_ordered    date         not null,
        other_bits      varchar2(250),
--      constraint ord_fk_cus foreign key(customer_id) references customers,
        constraint ord_pk primary key(order_id)
)
tablespace TS_ORD
;

Option 2:

create table orders (
        order_id        number(10,0) not null,
        customer_id     number(10,0) not null,
        date_ordered    date         not null,
        other_bits      varchar2(250)
)
tablespace TS_OP_DATA
;

alter table orders add constraint ord_pk primary key(order_id);

There’s a significant difference between the two strategies (at least in 11.2.0.4; I haven’t gone back to check earlier versions): in the first form the implicit primary key index is created in the tablespace of the table; in the second form it’s created in the default tablespace of the user. To avoid the risk of putting something in the wrong place you can always add the “using index” clause, for example:


alter table orders add constraint ord_pk primary key (order_id) using index tablespace TS_OP_INDX;
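A quick way to confirm where the implicit index actually landed is to query the dictionary; a minimal check against the ORDERS table above:

select index_name, tablespace_name
from   user_indexes
where  table_name = 'ORDERS';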

Having noticed / reminded myself of this detail I now have on my todo list a task to check the equivalent behaviour when creating partitioned (or composite partitioned) tables – but that’s a task with a very low priority.


More Kscope15 Impressions

Oracle AppsLab - Wed, 2015-07-08 09:03

Kscope15 (#kscope15) was hosted at the Diplomat resort along beautiful Hollywood Beach, and the Scavenger Hunt from the OAUX AppsLab infused a hint of fun and excitement among the packed, busy, and serious sessions.


The Scavenger Hunt was quite a comprehensive system for people to win points in various ways, and keep track of events, points and a leaderboard. And of course, we had one Internet of Things (IoT) component that people could search for and tap to win points.

And here is the build, with a powerful battery connected to it, complete with an anti-theft feature, which is double-sided duct tape :) All together, it is a stand-alone, self-contained, and definitely mobile, computer.

Isn’t it cool? I overheard on multiple occasions people say it was the coolest thing at the conference.

[photo: the Raspberry Pi build]

One of the bartenders at the Community Night reception wanted to trade me the “best” drink of the night for my Raspberry Pi.

I leased it to him for two hours, and he gave me the drink. The fact is that I would have put the Raspberry Pi on his table anyway for the community night event, and he would have given me the drink anyway if I had known how to order it.


On the serious side, APEX (Oracle Application Express) had a good showing with many sessions. Considering our Scavenger Hunt web admin was built on APEX, I am interested in learning it too. After two hands-on sessions, I did feel that I’d use it for quick web apps in the future.

On the database side, the most significant development is ORDS (Oracle REST Data Services) and the ability to call a web endpoint from within the database. This opens up the possibility of monitoring data/state changes at the data level and triggering events into a web server, which in turn can trigger client reactions via WebSocket.

Again, Kscope15 was a very fruitful event for us, as we demonstrated the Scavenger Hunt game and it provoked lots of interest. It has some potential for large events and enterprise applications, so stay tuned while we put some twists on it in the future.

Editor’s note: Raymond (@yuhuaxie) forgot to mention how much fun he had at Kscope15. Pics because it happened:

[photo]

ODTUG (@odtug) commissioned a short film, which was shot, edited, and produced during the week that was Kscope15. It debuted during the Closing Session, and they have graciously shared it on YouTube. It’s 10 minutes, but very good at capturing what I like so much about Kscope.

Noel appears to talk about the Scavenger Hunt at 7:29. Watch it here.


RMAN -- 4b : Recovering from an Incomplete Restore with OMF Files

Hemant K Chitale - Wed, 2015-07-08 07:51
Following up on my previous post (which had the datafiles as non-OMF), here is a case with OMF files.

SQL> select file_name from dba_data_files
2 where tablespace_name = 'HEMANT';

FILE_NAME
--------------------------------------------------------------------------------
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst84r1w_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst850ts_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85312_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85njw_.dbf
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst85qsq_.dbf

SQL>
SQL> !rm /home/oracle/app/oracle/oradata/HEMANTDB/datafile/*hemant*dbf

SQL> shutdown immediate;
ORA-01116: error in opening database file 6
ORA-01110: data file 6: '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst84r1w_.dbf'
ORA-27041: unable to open file
Linux Error: 2: No such file or directory
Additional information: 3
SQL> shutdown abort;
ORACLE instance shut down.
SQL>

I have removed the datafiles for a tablespace. Note that the datafiles are all OMF.  I then attempt to restore the tablespace.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ sqlplus

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:15:21 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Enter user-name: / as sysdba
Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 390073016 bytes
Database Buffers 58720256 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> select file# from v$datafile
2 where ts# = (select ts# from v$tablespace where name = 'HEMANT')
3 order by 1;

FILE#
----------
6
7
8
9
11

SQL>
SQL> alter database datafile 6 offline;

Database altered.

SQL> alter database datafile 7 offline;

Database altered.

SQL> alter database datafile 8 offline;

Database altered.

SQL> alter database datafile 9 offline;

Database altered.

SQL> alter database datafile 11 offline;

Database altered.

SQL> alter database open;

Database altered.

SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Wed Jul 8 21:22:02 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655)

RMAN>
RMAN> restore tablespace HEMANT;

Starting restore at 08-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=36 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00007 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szss_.dbf
channel ORA_DISK_1: restoring datafile 00009 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szxb_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58p_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58p_.bkp tag=TAG20150708T211100
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf
channel ORA_DISK_1: restoring datafile 00008 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf
channel ORA_DISK_1: restoring datafile 00011 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00601: fatal error in recovery manager
RMAN-03004: fatal error during execution of command
ORA-01092: ORACLE instance terminated. Disconnection forced
ORACLE error from target database:
ORA-03135: connection lost contact
Process ID: 3615
Session ID: 29 Serial number: 21

[oracle@localhost ~]$

Once again, the database has crashed in the midst of the RESTORE. Let's check the datafile names.

[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:25:12 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 456146944 bytes
Fixed Size 1344840 bytes
Variable Size 394267320 bytes
Database Buffers 54525952 bytes
Redo Buffers 6008832 bytes
Database mounted.
SQL> set pages60
SQL> select file#, name from v$datafile where file# in (6,7,8,9,11) order by 1;

FILE#
----------
NAME
--------------------------------------------------------------------------------
6
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf

7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf

9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf


SQL>
SQL> select file#, name from v$datafile_header where file# in (6,7,8,9,11) order by 1;

FILE#
----------
NAME
--------------------------------------------------------------------------------
6


7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8


9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11



SQL>

[To understand why I queried both V$DATAFILE and V$DATAFILE_HEADER, see my previous post "Datafiles not Restored  --  using V$DATAFILE and V$DATAFILE_HEADER".]

So, datafiles 7 and 9 have been restored. We can see that in the RESTORE log as well: "backup piece 1" in the RESTORE had datafiles 7 and 9 and was the only one to complete. Note the datafile names: datafiles 7 and 9 are named differently from what they were earlier. Earlier they were "%bst85%"; now they are "%bst90%".

So, if we want to re-run the restore, we can use SET NEWNAME for datafiles 7 and 9 to allow Oracle to check that they are already restored.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Wed Jul 8 21:32:12 2015

Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCL (DBID=1229390655, not open)

RMAN> run
2> {set newname for datafile 7 to '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf';
3> set newname for datafile 9 to '/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf';
4> restore tablespace HEMANT;}

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 08-JUL-15
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=18 device type=DISK

skipping datafile 7; already restored to file /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf
skipping datafile 9; already restored to file /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00006 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szv5_.dbf
channel ORA_DISK_1: restoring datafile 00008 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8szwh_.dbf
channel ORA_DISK_1: restoring datafile 00011 to /home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst8t089_.dbf
channel ORA_DISK_1: reading from backup piece /NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp
channel ORA_DISK_1: piece handle=/NEW_FS/oracle/FRA/HEMANTDB/backupset/2015_07_08/o1_mf_nnndf_TAG20150708T211100_bst8c58n_.bkp tag=TAG20150708T211100
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 08-JUL-15

RMAN>

YES ! Datafiles 7 and 9 were identified as "already restored".
Let's re-check the datafiles and then RECOVER them.

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Wed Jul 8 21:37:29 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL>
SQL> select name from v$datafile
2 where ts#=(select ts# from v$tablespace where name = 'HEMANT')
3 minus
4 select name from v$datafile_header
5 where ts#=(select ts# from v$tablespace where name = 'HEMANT')
6 /

no rows selected

SQL>
SQL> select * from v$datafile_header where name is null;

no rows selected

SQL>
SQL> recover datafile 6;
Media recovery complete.
SQL> recover datafile 7;
Media recovery complete.
SQL> recover datafile 8;
Media recovery complete.
SQL> recover datafile 9;
Media recovery complete.
SQL> recover datafile 11;
Media recovery complete.
SQL> alter tablespace HEMANT online;
alter tablespace HEMANT online
*
ERROR at line 1:
ORA-01109: database not open


SQL> alter database open;

Database altered.

SQL> alter tablespace HEMANT online;

Tablespace altered.

SQL>
SQL> select owner, segment_name, bytes/1048576 from dba_segments where tablespace_name = 'HEMANT';

OWNER
------------------------------
SEGMENT_NAME
--------------------------------------------------------------------------------
BYTES/1048576
-------------
HEMANT
LARGE_TABLE
272


SQL> select count(*) from hemant.large_table;

COUNT(*)
----------
2404256

SQL>

Yes, I have been able to verify that all the datafiles have been restored.  I have been able to bring the tablespace online and query the data in it.

SQL> set pages60
SQL> select file_id, file_name from dba_data_files where tablespace_name = 'HEMANT';

FILE_ID
----------
FILE_NAME
--------------------------------------------------------------------------------
6
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4vt_.dbf

7
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jct_.dbf

8
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x0_.dbf

9
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst90jf1_.dbf

11
/home/oracle/app/oracle/oradata/HEMANTDB/datafile/o1_mf_hemant_bst9o4x5_.dbf


SQL>

And, yes the datafile names (%bst90%) are different from what they were earlier (%bst84% and %bst85%).

(Reference :  See Oracle Support Note Doc ID 1621319.1)
.
.
.

Categories: DBA Blogs

12c: New SQL PLAN OPERATIONS and HINTS

XTended Oracle SQL - Wed, 2015-07-08 07:27

This post is just a compilation of links to other people’s articles and short descriptions about new SQL PLAN OPERATIONS and HINTS, with a couple of little additions from me.



  • JSONTABLE EVALUATION: JSON_TABLE execution.
  • XMLTABLE EVALUATION: The new name for “COLLECTION ITERATOR PICKLER FETCH [XQSEQUENCEFROMXMLTYPE]”; XPATH EVALUATION still exists.
  • MATCH RECOGNIZE: The new “PATTERN MATCHING” feature.
  • STATISTICS COLLECTOR: Optimizer statistics collector.
  • OPTIMIZER STATISTICS GATHERING: Automatic optimizer statistics gathering during the following types of bulk loads:

  • CREATE TABLE … AS SELECT
  • INSERT INTO … SELECT into an empty table using a direct-path insert
  • CUBE JOIN: Joining cubes to tables and views.
  • EXPRESSION EVALUATION: Each parallel slave executes scalar correlated subqueries from the SELECT-list.
  • parallel “FILTER”: Each parallel slave executes its own FILTER operation (see the example below).

Example
SQL> explain plan for
  2  select--+ parallel
  3      owner,object_name
  4  from xt_test l
  5  where exists(select/*+ no_unnest */ 0 from dual where dummy=object_name);

Explained.

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------
Plan hash value: 2189761709

-------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name      | Rows  | Bytes | Cost (%CPU)|   TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |           |     2 |    62 |   177K  (1)|       |      |            |
|   1 |  PX COORDINATOR          |           |       |       |            |       |      |            |
|   2 |   PX SEND QC (RANDOM)    | :TQ10000  | 91060 |  2756K|   113   (0)| Q1,00 | P->S | QC (RAND)  |
|*  3 |    FILTER                |           |       |       |            | Q1,00 | PCWC |            |
|   4 |     PX BLOCK ITERATOR    |           | 91060 |  2756K|   113   (0)| Q1,00 | PCWC |            |
|   5 |      INDEX FAST FULL SCAN| IX_TEST_1 | 91060 |  2756K|   113   (0)| Q1,00 | PCWP |            |
|*  6 |     TABLE ACCESS FULL    | DUAL      |     1 |     2 |     2   (0)|       |      |            |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter( EXISTS (SELECT /*+ NO_UNNEST */ 0 FROM "SYS"."DUAL" "DUAL" WHERE "DUMMY"=:B1))
   6 - filter("DUMMY"=:B1)
                


  • PX SELECTOR: Execution of the serial plan parts in one of the parallel slaves.
  • PX SEND 1 SLAVE: Execution of the serial plan parts in one of the parallel slaves (single DFO tree).
  • PX TASK: Parallel access to fixed tables (x$) by each node in RAC.
  • HYBRID HASH DISTRIBUTION: Adaptive parallel data distribution that does not decide the final data distribution method (HASH, BROADCAST or SKEW) until execution time.
  • PQ_DISTRIBUTE_WINDOW: In addition to “PX SEND” HASH distribution for WINDOW functions, “PX SEND RANGE” was added (see the example below).
Example
-- TESTPART - list-partitioned table:
-------------------------------------------------------------------------------------------------
| Operation               | Name     | Rows  | Cost | Pstart| Pstop |   TQ  |IN-OUT| PQ Distrib |
-------------------------------------------------------------------------------------------------
| SELECT STATEMENT        |          | 74384 |   102|       |       |       |      |            |
|  PX COORDINATOR         |          |       |      |       |       |       |      |            |
|   PX SEND QC (RANDOM)   | :TQ10001 | 74384 |   102|       |       | Q1,01 | P->S | QC (RAND)  |
|    WINDOW SORT          |          | 74384 |   102|       |       | Q1,01 | PCWP |            |
|     PX RECEIVE          |          | 74384 |   100|       |       | Q1,01 | PCWP |            |
|      PX SEND RANGE      | :TQ10000 | 74384 |   100|       |       | Q1,00 | P->P | RANGE      |
|       PX BLOCK ITERATOR |          | 74384 |   100|     1 |     3 | Q1,00 | PCWC |            |
|        TABLE ACCESS FULL| TESTPART | 74384 |   100|     1 |     3 | Q1,00 | PCWP |            |
-------------------------------------------------------------------------------------------------
Outline Data
-------------
  /*+
      BEGIN_OUTLINE_DATA
      PQ_DISTRIBUTE_WINDOW(@"SEL$1" 3)
      FULL(@"SEL$1" "TESTPART"@"SEL$1")
      OUTLINE_LEAF(@"SEL$1")
      ALL_ROWS
      DB_VERSION('12.1.0.2')
      OPTIMIZER_FEATURES_ENABLE('12.1.0.2')
      IGNORE_OPTIM_EMBEDDED_HINTS
      END_OUTLINE_DATA
  */

Hint: PQ_DISTRIBUTE_WINDOW(@Query_block N), where N=1 for hash, N=2 for range, N=3 for list.

VECTOR / KEY VECTOR — In-memory aggregation (hedged sketch below).
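A hedged sketch of the kind of aggregation where these operations can appear (the hint and table names are illustrative; in-memory aggregation has prerequisites not shown here):

select /*+ vector_transform */ d.region, sum(f.amount)
from   sales_fact f
       join dim_customer d on d.cust_id = f.cust_id
group  by d.region;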

RECURSIVE ITERATION — Unknown.
WINDOW CONSOLIDATOR — WINDOW CONSOLIDATOR BUFFER for parallel execution of analytical WINDOW aggregation functions (example below).

Example
SQL> explain plan for select/*+ parallel(t 4) PQ_DISTRIBUTE_WINDOW(2) */ count(*) over(partition by owner) cnt,owner from xt_test t;
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------
Plan hash value: 3410952625
---------------------------------------------------------------------------------------------------
| Id | Operation                    |Name    |Rows |Cost |Pstart|Pstop|   TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT             |        |91060|  124|      |     |       |      |            |
|  1 |  PX COORDINATOR              |        |     |     |      |     |       |      |            |
|  2 |   PX SEND QC (RANDOM)        |:TQ10001|91060|  124|      |     | Q1,01 | P->S | QC (RAND)  |
|  3 |    WINDOW CONSOLIDATOR BUFFER|        |91060|  124|      |     | Q1,01 | PCWP |            |
|  4 |     PX RECEIVE               |        |91060|  124|      |     | Q1,01 | PCWP |            |
|  5 |      PX SEND HASH            |:TQ10000|91060|  124|      |     | Q1,00 | P->P | HASH       |
|  6 |       WINDOW SORT            |        |91060|  124|      |     | Q1,00 | PCWP |            |
|  7 |        PX BLOCK ITERATOR     |        |91060|  122|    1 |    4| Q1,00 | PCWC |            |
|  8 |         TABLE ACCESS FULL    |XT_TEST |91060|  122|    1 |    4| Q1,00 | PCWP |            |
---------------------------------------------------------------------------------------------------

Note
-----
   - Degree of Parallelism is 4 because of table property
                

DETECT END — Unknown.
DM EXP MAX AGGR — Unknown.
DM EXP MAX PAR — Unknown.
FAULT-TOLERANCE BUFFER — Fault tolerance for parallel statement execution.
Patent #US8572051: Making parallel execution of structured query language statements fault-tolerant

  • PX_FAULT_TOLERANCE / NO_PX_FAULT_TOLERANCE hints
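
A hedged sketch of requesting the fault-tolerant behaviour via the hint (the table name is hypothetical):

select /*+ parallel(8) px_fault_tolerance */ count(*)
from   big_table;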


See also:

  1. Randolf Geist “12c New Optimizer Features”
  2. Randolf Geist “Parallel Execution 12c New Features Overview”


HINTS:


PATH | HINT_CLASS | HINT_NAME | VERSION | VERSION_OUTLINE
ALL | WITH_PLSQL | WITH_PLSQL | 12.1.0.1 | -
ALL -> ANSI_REARCH | ANSI_REARCH | ANSI_REARCH, NO_ANSI_REARCH | 12.1.0.2 | 12.1.0.2
ALL -> EXECUTION | BATCH_TABLE_ACCESS_BY_ROWID | BATCH_TABLE_ACCESS_BY_ROWID, NO_BATCH_TABLE_ACCESS_BY_ROWID | 12.1.0.1 | 12.1.0.1
ALL -> EXECUTION | INMEMORY | INMEMORY, NO_INMEMORY | 12.1.0.2 | 12.1.0.2
ALL -> EXECUTION | INMEMORY_PRUNING | INMEMORY_PRUNING, NO_INMEMORY_PRUNING | 12.1.0.2 | 12.1.0.2
ALL -> COMPILATION -> ZONEMAP | ZONEMAP | ZONEMAP, NO_ZONEMAP | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> DATA_SECURITY_REWRITE | DATA_SECURITY_REWRITE_LIMIT | DATA_SECURITY_REWRITE_LIMIT, NO_DATA_SECURITY_REWRITE | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO | CLUSTER_BY_ROWID | CLUSTER_BY_ROWID | 12.1.0.1 (11.2.0.4) | 12.1.0.1
ALL -> COMPILATION -> CBO -> ACCESS_PATH -> BITMAP_TREE | BITMAP_AND | BITMAP_AND | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> ADAPTIVE_PLAN | ADAPTIVE_PLAN | ADAPTIVE_PLAN, NO_ADAPTIVE_PLAN | 12.1.0.2 | 12.1.0.2
ALL -> COMPILATION -> CBO -> AUTO_REOPT | AUTO_REOPTIMIZE | AUTO_REOPTIMIZE, NO_AUTO_REOPTIMIZE | 12.1.0.1 | -
ALL -> COMPILATION -> CBO -> JOIN_METHOD | ANTIJOIN | CUBE_AJ | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> JOIN_METHOD | SEMIJOIN | CUBE_SJ | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> JOIN_METHOD -> USE_CUBE | JOIN | USE_CUBE, NO_USE_CUBE | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PARTIAL_JOIN | PARTIAL_JOIN | PARTIAL_JOIN, NO_PARTIAL_JOIN | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PARTITION | USE_HIDDEN_PARTITIONS | USE_HIDDEN_PARTITIONS | 12.1.0.1 | -
ALL -> COMPILATION -> CBO -> PQ | PARTIAL_ROLLUP_PUSHDOWN | PARTIAL_ROLLUP_PUSHDOWN, NO_PARTIAL_ROLLUP_PUSHDOWN | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ | PQ_CONCURRENT_UNION | PQ_CONCURRENT_UNION, NO_PQ_CONCURRENT_UNION | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ | PQ_DISTRIBUTE_WINDOW | PQ_DISTRIBUTE_WINDOW | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ | PQ_FILTER | PQ_FILTER | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ | PQ_SKEW | PQ_SKEW, NO_PQ_SKEW | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ | PX_FAULT_TOLERANCE | PX_FAULT_TOLERANCE, NO_PX_FAULT_TOLERANCE | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> PQ -> PQ_REPLICATE | PQ_REPLICATE | PQ_REPLICATE, NO_PQ_REPLICATE | 12.1.0.1 | 12.1.0.1
ALL -> COMPILATION -> CBO -> STATS -> DBMS_STATS | GATHER_OPTIMIZER_STATISTICS | GATHER_OPTIMIZER_STATISTICS, NO_GATHER_OPTIMIZER_STATISTICS | 12.1.0.1 | -
ALL -> COMPILATION -> TRANSFORMATION | ELIM_GROUPBY (?) | ELIM_GROUPBY, NO_ELIM_GROUPBY | - | -
ALL -> COMPILATION -> CBO -> CBQT -> VECTOR_AGG and ALL -> COMPILATION -> TRANSFORMATION -> CBQT -> VECTOR_AGG | USE_VECTOR_AGGREGATION | USE_VECTOR_AGGREGATION, NO_USE_VECTOR_AGGREGATION | 12.1.0.2 | 12.1.0.2
(same paths) | VECTOR_TRANSFORM | VECTOR_TRANSFORM, NO_VECTOR_TRANSFORM | 12.1.0.2 | 12.1.0.2
(same paths) | VECTOR_TRANSFORM_DIMS | VECTOR_TRANSFORM_DIMS, NO_VECTOR_TRANSFORM_DIMS | 12.1.0.2 | 12.1.0.2
(same paths) | VECTOR_TRANSFORM_FACT | VECTOR_TRANSFORM_FACT, NO_VECTOR_TRANSFORM_FACT | 12.1.0.2 | 12.1.0.2
ALL -> COMPILATION -> TRANSFORMATION -> HEURISTIC -> DECORRELATE | DECORRELATE | DECORRELATE, NO_DECORRELATE | 12.1.0.1 | 12.1.0.1
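
As a quick illustration of one of the execution hints listed above, a hedged sketch (table, column and bind names are hypothetical):

select /*+ batch_table_access_by_rowid(t) */ *
from   orders t
where  customer_id = :cust;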

See also:
Fuyuncat (Wei Huang) – “Oracle 12c new SQL Hints”

Categories: Development

Oracle Midlands : Event #10 Summary

Tim Hall - Wed, 2015-07-08 07:26

Last night was Oracle Midlands Event #10 with Jonathan Lewis.

The first session was on “Five Hints for Optimizing SQL”. The emphasis was very much on “shaping the query plan” to help the optimizer make the right decisions, not trying to determine every single join and access structure etc.

In the past I’ve seen Jonathan do sessions on hints, which made me realise how badly I was using them. As a result of that I found myself a little scared by them and gravitating to this “shaping” approach, but my version was not anywhere near as well thought out and reasoned as Jonathan’s approach. It’s kind-of nice to see I was on the right path, even if my approach was the mildly pathetic, infantile version of it. :)

The break consisted of food, chatting and loads of prizes. It’s worth coming even if you don’t want to see the sessions, just for the chance of winning some swag. :) Everyone also got to take home a Red Stack Tech mug, a stress bulb and some sweets.

The second session was on “Creating Test Data to Model Production”. I sat there smugly thinking I knew what was coming, only to realise I had only considered a fraction of the issues. I think “eye opening” would be the phrase I would use for this one. Lots of lessons learned!

I must say, after nearly 20 years (19 years and 11 months) in the game, it’s rather disconcerting to feel like such a newbie. It seems to be happening quite a lot recently. :)

So that was another great event! Many thanks to Jonathan for taking the time to come and speak to us. Hopefully we’ll get another visit next year? Well done to Mike for keeping this train rolling. Wonderful job! Thanks to all the sponsors of the prize draw and of course, thanks to Red Stack Tech for their support, allowing the event to remain free! Big thanks to all the members of the Oracle Midlands family that came out to support the event. Without your asses on seats it wouldn’t happen!

The next event will be on the 1st September with Christian Antognini, so put it in your diary!

Cheers

Tim…


OAM PS3 - continued

Frank van Bortel - Wed, 2015-07-08 06:12
Allow auto start (production mode) for your scripts:

cd /oracle/user_projects/domains/oam_domain/servers
mkdir -p oam_server1/security
mkdir -p omsm_server1/security
mkdir -p oam_policy_mgr1/security
vi oam_server1/security/boot.properties
cp oam_server1/security/boot.properties omsm_server1/security/
cp oam_server1/security/boot.properties oam_policy_mgr1/security/

You can now use the command line.
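For reference, a minimal boot.properties contains just the two standard WebLogic credential keys (the values below are placeholders; WebLogic encrypts the file on the next server start):

username=weblogic
password=YourAdminPassword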

APEX 5.0 New Features Training September 2015

Denes Kubicek - Wed, 2015-07-08 06:12




Oracle Application Express 5.0 was released on 15.04.2015 and is available for download.

We have all waited more than two years for the new release ... and the wait was more than worth it ... the new capabilities will blow you away!

We have taken a close look at the new capabilities and are thoroughly impressed ourselves, because they make day-to-day work dramatically easier. We have already written several new applications in APEX 5.0 and migrated a number of older versions to 5.0.

With the new Page Designer we are more productive than ever, and the Universal Theme makes it easy to develop genuinely elegant applications. This is a real breakthrough: configuration (via template options) and even colour adjustments are child's play, and the integrated Theme Roller is simply brilliant!

Modal dialogs, multiple interactive reports per page, enhancements for mobile devices, completely reworked file handling and many new security features ... this release is truly extensive!

Alongside these major features, more than 100 smaller and larger improvements have been implemented.

In our courses we have already taught over 200 APEX fans the best approaches, tips and tricks. The hands-on exercises reinforce them so that you can apply them straight away ... or look them up when you need them later ;). Either way, after the course you will know what is possible!

The two of us (Denes and Dietmar) have been developing APEX applications for our customers almost every day since 2006; we have tried just about everything with APEX.

Take the shortcut and learn from the best what will matter most in practice.

To benefit from the new release as quickly as possible, register right away and secure your seat!
  • Click on the link "Anmeldung zum Kurs" (course registration).
  • Enter your registration details and click the "Anmelden" (register) button.
  • You will immediately receive a confirmation email.
  • As soon as you click the confirmation link in that email, your seat is secured and you are definitely in!


Anmeldung zum Kurs

P.S.: The full agenda and further information about the course are available online in the course description.
Categories: Development

Become an #Oracle Certified Expert for Data Guard!

The Oracle Instructor - Wed, 2015-07-08 04:33

It is with great pride that I can announce that a new certification is available – Oracle Database 12c: Data Guard Administration.

We wanted this for years and finally got it now, after having put much effort and expertise into the development of the exam. It is presently in beta and offered with a discount. Come and get it!


Tagged: Data Guard, Oracle Certification
Categories: DBA Blogs

OpenSSL and KeyTool commands

Darwin IT - Wed, 2015-07-08 03:29
Earlier I wrote an article about message transport security in Oracle B2B. It collects a few useful Java keytool and OpenSSL commands to convert and import certificates.

Today I learned another one (thanks to my co-worker Joris).

This is how to get a certificate from an external server.
openssl x509 -in <(openssl s_client -connect {remote-host}:443 -prexit 2>/dev/null) -out /tmp/certificate.crt 

This is useful because the remote host may be a virtual one: the specific virtual host's certificate has to be requested explicitly by means of Server Name Indication (SNI), while the certificate of the physical host is presented by default. Note that WebLogic (and other JEE application servers such as JBoss, WebSphere, GlassFish, etc.) does not support SNI.
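
For that SNI case, a hedged variant of the same command: the -servername option of openssl s_client selects the virtual host whose certificate should be presented.

openssl x509 -in <(openssl s_client -connect {remote-host}:443 -servername {virtual-host} -prexit 2>/dev/null) -out /tmp/certificate.crt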

I think I should create a blog entry collecting these useful commands on one page. However I've found these: