
Feed aggregator

2016 Oracle Utilities America’s Product Development Customer Advisory Board

Anthony Shorten - Thu, 2016-01-07 22:49

I will be attending the 2016 Oracle Utilities America’s Product Development Customer Advisory Board, which is being held in Phoenix, Arizona from February 29th to March 3rd. This year we are running a dedicated technical stream highlighting specific technical features, and also a Technical Q&A Panel to answer technical questions and discuss our directions.

The sessions will focus on technical aspects of the solution and will be a combination of topic presentations, live demonstrations and question-and-answer sessions with product experts. The planned sessions this year are:

 Security Features and Functions
In this session the new and improved security features of the Oracle Utilities Application Framework will be discussed, including integrations to various security technologies, to help you understand and take advantage of the advanced security solutions now available.
 Web Services Integration
In this session the new Inbound Web Services, Message Driven Beans and REST functionality of the Oracle Utilities Application Framework are highlighted so that you understand the integration capabilities available to implementations. This session will include integrations to SOA products.
 Information Lifecycle Management
In this session the new Information Lifecycle Management solution will be outlined and discussed to highlight the capabilities, implementation strategies and techniques for reducing storage costs whilst retaining data for business purposes.
 Managing your Utilities environment using Oracle Enterprise Manager
In this session the techniques and capabilities for reducing your IT management costs using Oracle Enterprise Manager are outlined, including using the base capabilities of the console and the various packs available to augment the solution, such as the Oracle Application Management Pack for Oracle Utilities.
 Technical Cloud Solutions
In this session the technical architecture of Oracle Cloud offerings for Software As A Service (SaaS) and Platform As A Service (PaaS) will be discussed. This session highlights all the technology used in the solution as well as the architecture of those solutions.
 Oracle Utilities Framework Roadmap
In this session the roadmap of the Oracle Utilities Application Framework will be outlined.
 Oracle Utilities Testing Solutions
In this session the new Oracle Application Testing Suite based testing accelerators for Oracle Utilities products will be outlined and demonstrated for quick adoption of automated testing. The solution includes Functional/Regression Testing, Performance/Load Testing and Testing Management.
 Mobile Framework Overview
In this session the planned Mobile Server integration architecture and technology will be highlighted to allow connected and disconnected mobile clients for Oracle Utilities products.
 Technology Strategy
In this session the short, medium and long term technology strategy will be discussed to outline the technology directions and integrations for the Oracle Utilities Application Framework in future releases. This session will also include a Q&A segment to discuss technology options.
 Technical Implementation Q&A Panel
This session will be a generic panel session where product managers and product developers are available for customer and partner questions and discussions on technical aspects of implementations.

I will be available for all these sessions with other product managers and will also be attending the Customer User Group meetings after the CAB has completed. These sessions are designed for technical personnel rather than business personnel.

I look forward to seeing you at the CAB. For those in APAC, I am also attending the APAC CAB in Melbourne (my home town) in mid-February 2016 with a subset of these sessions.


If you're new to the APEX community, here are some tips to get engaged

Joel Kallman - Thu, 2016-01-07 20:29
Last night (January 6, 2016) we had our first-in-2016 APEX Meetup meeting in Columbus, Ohio, USA.  For being on short notice, we had a nice turnout, and I was able to distribute the new apex.world stickers.  I was most impressed that a gentleman (by the name of Shannon) drove down from Cleveland, Ohio - almost 2 hours drive each way.  He's been using APEX for all of two weeks, was using it with PowerSchool, and wanted to see what this APEX was all about.

Today, I wrote on our Oracle APEX Columbus Meetup board a short summary of the information we reviewed last night.  For those people who've been doing APEX for years, none of this is going to be new.  But the information I posted may be especially helpful to those who are very new to APEX, or even curious about APEX.  I decided to simply share it again here, in the hopes that someone else just as new as Shannon will find this useful.

--

We discussed a few things last night and I wished to summarize them here:

1)  There are ways to remain connected to the APEX community via Social media:

Facebook:  https://www.facebook.com/orclapex
LinkedIn:  http://linkedin.com/groups/8263065
Twitter:  The hashtag for Oracle Application Express is #orclapex.  Most everyone who attended last night is on Twitter.  You can follow many of us.  I’m at @joelkallman.  The APEX news is at @oracleapexnews.  If you don't know anyone on twitter, just do a Twitter search for #orclapex.

I’ll be honest - almost everyone in the APEX community is heavily engaged on Twitter, a lot less on LinkedIn, and almost never on Facebook.

2)  You should get registered on https://apex.world

It’s the APEX Community site, written by others in the APEX community (outside of Oracle).  There are jobs, plug-ins, open source, twitter feeds, news, and more.  You should also get registered on Slack, because apex.world is also integrated with Slack.  Follow the instructions on apex.world to get a Slack invitation.  It’s worth it.

3)  I spoke of some upcoming conferences

a)  There is an upcoming conference in May in Cleveland, the Great Lakes Oracle Conference.  Not only will Jason Straub and I be there, doing a couple sessions (about what’s coming in APEX 5.1), but we’re also doing a pre-conference workshop.  There will be other non-Oracle people there presenting on APEX.  You should think about presenting at this conference, and you can submit your abstracts until February.  As I tried to convey to attendees last night, don’t think that you have to submit the most exotic, obtuse topic possible.  How you’re using APEX, the challenges you’ve encountered and how you worked around them, may be a very useful topic.  The conference committee wants to expand their APEX offerings, and I think those of us in Ohio should help them. https://www.neooug.org/gloc/

b)  In June, in Chicago, is the Oracle Development Tools User Group (ODTUG) annual Kscope conference.   This is the place to be on the planet if you do any APEX whatsoever.  Just in the APEX track alone, there will be 46 sessions over 5 days.  On the Sunday before the conference starts, there will be the Sunday Symposium, which will be exclusively from the Oracle APEX product development team.  From a global perspective, this is the place to be for APEX.  It’s highly technical, and attendees and speakers from around the world assemble here.  http://kscope16.com

4)  How to get started, especially for someone who is new.  I offered a couple suggestions:

a)  Go to https://apex.oracle.com, and scroll down to the "Learn More" section, where there are links to documentation, tutorials, videos, hands-on-labs, etc.
b)  An Oracle employee mentioned that he took the APEX training class on Udemy, and for 7 hours of training, he thought it was pretty good.  I can't vouch for the training, and this isn't an official recommendation, but he thought it was worth his time and money.  He also said that while it's priced at $25, they often run specials for as low as $10.  https://www.udemy.com/create-web-apps-with-apex-5/

5)  Lastly, I showed Oracle’s community site for APEX, https://apex.oracle.com/community

I showed the numerous customer quotes we’ve received, and I put another plea out to attendees that, if you’re using APEX, please consider going through your management chain to get approvals for a quote.  At least ask.  There is no huge legal process involved; approvals can all be done via email.  The hard part is taking time out of your day job and pursuing this at your employer (or customer).  It will be a huge benefit to the entire APEX community.

P.S. I never showed it last night, but ODTUG also has a nice community site for APEX, at http://odtug.com/apex

Customer User Administration in Cloud Portal

Joshua Solomin - Thu, 2016-01-07 17:01
New in My Oracle Support December 2015 Release

The December 2015 My Oracle Support Release added Customer User Administrator (CUA) functionality to the Oracle Cloud Support portal. The change makes it much easier for customers using the Cloud Portal (including many Oracle MICROS and Oracle Hospitality customers) to manage user access and Support Identifier (SI) tasks.

If you are a CUA, you will be notified of current administrative tasks in the notification area at the top of the Cloud Portal page. You can also click your name, then the Administration sub-link, and you'll be taken to the CUA information view in the Cloud Portal.

Video Training

Review the video found in "Cloud Support Portal - How to Use the CUA Functionality" (Document 2072499.1).


You can also review existing trainings on how to perform common CUA tasks by visiting "Customer User Administrator (CUA) Roles and Responsibilities" (Document 1544004.2).

The Cloud Portal allows you to verify user approval actions and review Support Identifier notices. However, Cloud Portal does not fully support managing Support Identifier assignments and renewals. To perform these actions you will need to continue using the standard My Oracle Support portal for the time being. Review the video training linked above for more details.

Tuning SQL with Javascript in SQLCL

Kris Rice - Thu, 2016-01-07 13:22
In case anyone missed it, #sqlcl has the ability to run JavaScript, which opens up a lot of options. Here's a simple example that shows how to use JavaScript to:

  • Open a new database connection
  • Collect stats on the base connection
  • Do work on the main connection
  • Collect stats again
  • Only print the stats that changed

In SQL Developer, the autotrace feature has for a long time selected the session

Enterprise Manager Cloud Control 13c Upgrade

Tim Hall - Thu, 2016-01-07 12:31

A couple of weeks ago I posted about doing a fresh installation of Enterprise Manager Cloud Control 13c (article, blog post). I’ve finally got around to doing an upgrade test from EM CC 12cR5 to 13cR1. You can see the result of that here.


Gokhan Atil did a post about this upgrade pretty much as soon as it was released, so I’m a little late to the party compared to him. :)

As you’ll see from the article, the upgrade process was similar to the patches that came before it. There are of course some extra prerequisites which you can read about in either my post, Gokhan’s or the docs…

Even though the upgrade tests were fine, after discussion with our system administrators, we are probably going to go for a clean installation and migrate the monitored hosts one at a time.

Why the slash and burn approach? I’ve made some mistakes with our installations in the past and they persist with every subsequent upgrade. It would be nice to take a step back and fix stuff. We are doing a similar thing with our WebLogic installations. I was learning new stuff all the time while I was installing our WebLogic 11g infrastructure. Rather than upgrading to WebLogic 12cR2, we are going to build a new infrastructure, migrate to it and throw the old one away.

This is relatively easy for us for a few reasons.

  1. We use virtualization for everything. We will provision the new VMs, set everything up. Start migrating stuff. When the migration is complete we will throw away the old VMs. No major hardware overhead.
  2. We are a pretty small operation. If we had a massive amount of infrastructure, a slash and burn approach would be very time consuming and as such, very costly.
  3. I am really anal about some things and I am willing to go the extra mile to get things right. I did the best I could at the time, but I’m happy to admit I made mistakes and I want to sort them out. This is not because I’m a company boy. It’s because those mistakes eat away at me and I want them eradicated so they will only haunt me in my memories, not in my day to day life.

If we had been going for the upgrade approach, I probably would have done it in the next couple of weeks. With clean slate approach, we’ll probably take a few more weeks to get ready for it. No point rushing in and making more mistakes. I would rather let the idea brew for a while before we start. :)

Cheers

Tim…


Becky’s BI Apps Corner: Installing on Windows Server 2012

Rittman Mead Consulting - Thu, 2016-01-07 08:33

Recently I was installing Oracle BI Apps on a VM for a custom training, as mentioned in a previous post. I setup the VM with Windows Server 2012 after verifying in the certification matrix that the Oracle BI Apps version (11.1.1.9.2) was certified to run on Windows Server 2012.

As I was going along, I was getting errors during some of the installs on the pre-requisite checks for the operating system (OS), as can be the case when the OS gets certified after the initial release of that version of software. The workaround is available for that error in MOS Doc ID 1577652.1, and involves adding the OS to an .xml file… Okay, I admit it. I didn’t actually update the .xml file. I just clicked Continue past the pre-requisite check. Since there didn’t seem to be any repercussions and the installs completed successfully, I continued on my merry way through the installation instructions.

When I was running OBIEE’s configuration (config.bat), it failed at 35% on the step Setting Up BI Publisher. The install_DATE_TIME.log file didn’t have enough information, so I dug into the install_DATE_TIME.out file. In it, there was the following error:

updateBIPConfigFiles: OSError: (0, 'Failed to execute command ([\'sh\', \'-c\', \'java -classpath C:\\\\Oracle\\\\Product\\\\BI_11.1\\\\Oracle_BI1\\\\clients\\\\bipublisher\\\\xdo-server.jar oracle.xdo.install.UpdateConfigFiles 9704 9703 9710 jdbc/mds/owsm C:\\\\Oracle\\\\Product\\\\BI_11.1\\\\user_projects\\\\domains\\\\bifoundation_domain\']): java.io.IOException: Cannot run program "sh": CreateProcess error=2, The system cannot find the file specified')

I left the config.bat open (it only takes one time to learn not to cancel during the config.bat or configApps.bat) and I took a trip over to my friendly neighborhood support.oracle.com, where I found MOS Doc ID 1580583.1.  Essentially, it says to dig into and update a jar file, and then directs you to Oracle’s Java Tutorials to learn how.  I decided to do just that and post the steps here.

Forays into Java

The MOS doc says I need to update a file called javashell.py by adding the string “Windows Server 2012” to the list of os.names. This javashell.py file is archived inside a jar file located at \common\wlst\modules\jython-modules.jar.

A jar file is simply a package of a bunch of files. To view content of a jar file, the command is:
jar tf jar-file

In my command window, I navigated to the folder where the jython-modules.jar was. For this particular jar, the number of files was bigger than my command window would show, so I sent it to a text file.

The command I used was:

C:\app\product\fmw\wlserver_10.3\common\wlst\modules>c:\java\jdk1.7.0_85\bin\jar
tf jython-modules.jar >log.txt

In the jython-modules.jar, there was a Lib\javashell.py. Now that I found my file, I needed to extract the file from the jar. To do that, the command is:
jar xf jar-file [archived-file(s)]

Since I was still in the same directory, I used the following command:

C:\app\product\fmw\wlserver_10.3\common\wlst\modules>c:\java\jdk1.7.0_85\bin\jar
xf jython-modules.jar Lib/javashell.py

The file is now extracted and can be edited. I opened the javashell.py file in Notepad++. You can open/edit it in your editor of choice. Not too far into the python script, I found the os.name and the list of other operating systems. I’m not particularly skilled in python, but the change seemed pretty straightforward. As you can see from the screenshot, I just added a comma, and the ‘Windows Server 2012’ at the end of the “nt” operating systems. I am certain this would work for other operating systems added to the certification matrix after the software was released.
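The edit, roughly, looks like this (a reconstruction rather than the exact jython source; the surrounding tuple contents vary by version, so treat everything except the added string as an assumption):

# Excerpt (reconstructed) from Lib/javashell.py inside jython-modules.jar.
# Each tuple maps a jython os type to the Java os.name values it covers;
# appending 'Windows Server 2012' to the "nt" entry is what stops the
# fallback to the "sh" shell that caused the config.bat failure.
_osTypeMap = (
    ("nt", ("Windows NT", "Windows 2000", "Windows 2003",
            "Windows XP", "Windows Vista", "Windows Server 2008",
            "Windows 7", "Windows Server 2012")),   # <-- added entry
    ("dos", ("MS-DOS", "Windows 95", "Windows 98", "Windows ME")),
    ("mac", ("MacOS", "Darwin")),
    ("None", ("None",)),
)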

Once I’ve saved my changes, I need to get this javashell.py file back into the jython-modules.jar.

The command to update a file inside of a jar is:
jar uf jar-file input-file(s)

I used:

C:\app\product\fmw\wlserver_10.3\common\wlst\modules>c:\java\jdk1.7.0_85\bin\jar
uf jython-modules.jar Lib/javashell.py

With everything in order, I went back to the config.bat window, which was still running. I scrolled to the top, checked the box next to the first line, clicked Retry.

The install completed without any further errors! We got to play with Java Jar files and a python script to resolve an error with the config.bat installer. These steps will allow you to update for any OS, which can come in handy for any that are certified after the initial release of a version of software. Let me know in the comments all the different Operating Systems where you have installed BI Apps. So far, I’ve installed on RHEL 5 and 6, Solaris 10, Windows Server 2008, and Windows Server 2012, each with interesting little quirks. Also, if you have training needs, check out our new trainings for 2016, and contact us! And of course, keep an eye out for more Becky’s BI Apps Corner coming soon.

The post Becky’s BI Apps Corner: Installing on Windows Server 2012 appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Oracle XE 12c?

Tim Hall - Thu, 2016-01-07 03:30

There was a post on Oracle-L asking about Oracle Express Edition (XE) 12c. I started to write a reply, but thought a blog post may be more appropriate.

Oracle XE 12c doesn’t exist yet, but people at OpenWorld 2015 confirmed they “plan” to have one. As always, no promises. So when will it arrive? Typically the XE version is put together based on the first major patchset of release 2 of a version. So the kind of thing you might expect is,

  • 12.2.0.1 : Released mid 2016 maybe.
  • 12.2.0.2 : Released mid 2017 maybe.
  • XE team brought together: Some time after 12.2.0.2 release.
  • XE Released: When it is built and stable.

Things to consider, based on stuff I’ve heard over the last few years.

  • There is no XE team. People are taken from their normal jobs to put this together and test it. There is no revenue generation directly associated with this product, so it’s not at the top of the priority list.
  • There are no patches for XE, so they wait until they have a stable release they can rely on for the 3-6 years before the next release 2 DB version.
  • Previous versions of XE have had bits of functionality missing/disabled, so it’s not just rolling out SE with some restrictions.
  • Other products in the Oracle stack are moving to more regular release cycles (3 months to cloud, with a yearly on-premise release). Comments from the database team suggest this is not the case for the database. As it gets more complicated, the testing takes longer, so the release cycles are getting longer. They originally said they wanted an 18 month release cycle for the database. 12cR1 took about 3 years to arrive. It looks like 12cR2 will arrive about 3 years after 12cR1. If that cycle continues, it would mean about a 6 year wait between XE releases, unless they change tack.

Of course, this is all just me thinking out loud. No facts have been presented here! :)

I think Oracle XE is a really important product for Oracle, even though it doesn’t directly make them money. Think of it as a gateway drug. XE makes it easy and cheap for people to try stuff with Oracle. If those projects grow, that could be additional licensing of SE2, EE or cloud subscriptions in the future. Without it, people will look elsewhere for their cheap starting point and may never make a move to Oracle later!

Cheers

Tim…


Legal Entity Document Sequencing in Receivables

OracleApps Epicenter - Thu, 2016-01-07 02:55
You need to consider these points when you are trying to set up Legal Entity Document Sequencing in Receivables. You can set up your primary ledger to allow document sequencing at the legal entity level instead of at the ledger level. This means if you have more than one legal entity assigned to the same ledger, you […]
Categories: APPS Blogs

Next Generation Outline Extractor 2.0.5.1073 released

Tim Tow - Wed, 2016-01-06 23:21


In the last week or so, we placed an updated version of the Next Generation Outline Extractor on our website. This version provides support for some updated Essbase versions, including 11.1.2.4.002, 11.1.2.4.003, and 11.1.2.4.005. More importantly, it addresses a bug where alias names were improperly associated with parent members when using the MaxL extraction source. This bug was reported to us by a number of users and we are glad we were able to address it. Here is a list of the issues that were addressed:

2015.11.23 - Issue 1401 - Resolved an issue where only one alias table is exported when using MaxL as the extract source.

2015.11.23 - Issue 1402 - Resolved an issue where extracts using MaxL input and having members specified with Unicode may print incorrect characters in the output.

2015.11.23 - Issue 1403 - Resolved an issue where aliases and udas may have been improperly placed on parent members.

Please contact our support team if you have any issues.

Categories: BI & Warehousing

ORA-01652: unable to extend temp segment even with Autoextend ON

Michael Dinh - Wed, 2016-01-06 18:22

I have encountered a very strange feature and am wondering if anyone has experienced this.

Tracing was enabled for “ORA-01652: unable to extend temp segment” using alter system set events ‘1652 trace name errorstack level 1’;

The tablespace is configured to autoextend by 64MB with max size 31GB and currently has 4MB free.

“create index x on abc(txt) tablespace users;” failed with ORA-01652: unable to extend temp segment by 128 in tablespace USERS.

I understand objects are first created as temporary segments in the USERS tablespace, and this does not occur if there is no tracing enabled for event 1652.

Any ideas or explanations?

ARROW:(MDINH@hawklas):PRIMARY> alter database datafile 4 autoextend on next 64m;

Database altered.

ARROW:(MDINH@hawklas):PRIMARY> @check_tbs.sql
ARROW:(MDINH@hawklas):PRIMARY> set lines 200 echo on
ARROW:(MDINH@hawklas):PRIMARY> col file_name for a55
ARROW:(MDINH@hawklas):PRIMARY> col segment_name for a30
ARROW:(MDINH@hawklas):PRIMARY> col partition_name for a30
ARROW:(MDINH@hawklas):PRIMARY> col segment_type for a18
ARROW:(MDINH@hawklas):PRIMARY> select extent_management, allocation_type, segment_space_management,
  2  initial_extent, next_extent, pct_increase
  3  from dba_tablespaces
  4  where tablespace_name=UPPER('&&tbs')
  5  ;

EXTENT_MAN ALLOCATIO SEGMEN INITIAL_EXTENT NEXT_EXTENT PCT_INCREASE
---------- --------- ------ -------------- ----------- ------------
LOCAL      SYSTEM    AUTO            65536

ARROW:(MDINH@hawklas):PRIMARY> select tablespace_name, file_id, file_name, bytes/1024/1024 mb, autoextensible, maxbytes/1024/1024/1024 max_gb
  2  from dba_data_files
  3  where tablespace_name=UPPER('&&tbs')
  4  ;

TABLESPACE_NAME                   FILE_ID FILE_NAME                                                       MB AUT     MAX_GB
------------------------------ ---------- ------------------------------------------------------- ---------- --- ----------
USERS                                   4 /oradata/HAWKLAS/datafile/o1_mf_users_c7120bbr_.dbf            160 YES 31.9999847

ARROW:(MDINH@hawklas):PRIMARY> select max(bytes)/1024/1024 max_free_mb, sum(bytes)/1024/1024 total_free_mb, count(*)
  2  from dba_free_space
  3  where tablespace_name=UPPER('&&tbs')
  4  ;

MAX_FREE_MB TOTAL_FREE_MB   COUNT(*)
----------- ------------- ----------
          4             6          7

ARROW:(MDINH@hawklas):PRIMARY> select segment_type, segment_subtype, max(initial_extent), max(next_extent)
  2  from dba_segments
  3  where tablespace_name=UPPER('&&tbs')
  4  group by segment_type, segment_subtype
  5  ;

SEGMENT_TYPE       SEGMENT_SU MAX(INITIAL_EXTENT) MAX(NEXT_EXTENT)
------------------ ---------- ------------------- ----------------
INDEX              ASSM                     65536          1048576
LOBSEGMENT         ASSM                     65536          1048576
LOBINDEX           ASSM                     65536          1048576
TABLE              ASSM                  94371840          1048576

ARROW:(MDINH@hawklas):PRIMARY> ALTER SESSION SET TRACEFILE_IDENTIFIER = "mdinh_test2";

Session altered.

ARROW:(MDINH@hawklas):PRIMARY> alter system set events '1652 trace name errorstack level 1';

System altered.

ARROW:(MDINH@hawklas):PRIMARY> create index x on abc(txt) tablespace users;
^Ccreate index x on abc(txt) tablespace users
                  *
ERROR at line 1:
ORA-01013: user requested cancel of current operation



ARROW:(MDINH@hawklas):PRIMARY> alter database datafile
  2   '/oradata/HAWKLAS/datafile/o1_mf_users_c7120bbr_.dbf' resize 160m;

Database altered.

ARROW:(MDINH@hawklas):PRIMARY>
oracle@arrow:hawklas:/u01/app/oracle/diag/rdbms/hawklas/hawklas/trace
$ tail -20  alert_hawklas.log
  Current log# 3 seq# 75 mem# 0: /oradata/HAWKLAS/onlinelog/o1_mf_3_c7129gc8_.log
  Current log# 3 seq# 75 mem# 1: /oradata/fra/HAWKLAS/onlinelog/o1_mf_3_c7129go4_.log
Wed Jan 06 15:27:16 2016
Archived Log entry 4037 added for thread 1 sequence 74 ID 0xbc2f7b69 dest 1:
Wed Jan 06 15:46:41 2016
OS Pid: 3999 executed alter system set events '1652 trace name errorstack level 1'
Wed Jan 06 15:47:08 2016
Errors in file /u01/app/oracle/diag/rdbms/hawklas/hawklas/trace/hawklas_ora_3999_mdinh_test2.trc:
ORA-01652: unable to extend temp segment by 128 in tablespace USERS
Wed Jan 06 15:50:09 2016
Thread 1 advanced to log sequence 76 (LGWR switch)
  Current log# 1 seq# 76 mem# 0: /oradata/HAWKLAS/onlinelog/o1_mf_1_c7128lk6_.log
  Current log# 1 seq# 76 mem# 1: /oradata/fra/HAWKLAS/onlinelog/o1_mf_1_c7128lvc_.log
Wed Jan 06 15:50:16 2016
Archived Log entry 4038 added for thread 1 sequence 75 ID 0xbc2f7b69 dest 1:
Wed Jan 06 15:51:01 2016
alter database datafile
 '/oradata/HAWKLAS/datafile/o1_mf_users_c7120bbr_.dbf' resize 160m
Completed: alter database datafile
 '/oradata/HAWKLAS/datafile/o1_mf_users_c7120bbr_.dbf' resize 160m
oracle@arrow:hawklas:/u01/app/oracle/diag/rdbms/hawklas/hawklas/trace
$

Updated Jan 07, 2016 based on comments.

There is sufficient space at the OS level.
The size of the table is 96MB and the size of the created index is 104MB.

From the alert log, the ORA-01652 did not occur if tracing is turned off for the event – alter system set events ‘1652 trace name errorstack off’;

ARROW:(MDINH@hawklas):PRIMARY> @check_tbs.sql
ARROW:(MDINH@hawklas):PRIMARY> col file_name for a55
ARROW:(MDINH@hawklas):PRIMARY> col segment_name for a30
ARROW:(MDINH@hawklas):PRIMARY> col partition_name for a30
ARROW:(MDINH@hawklas):PRIMARY> col segment_type for a18
ARROW:(MDINH@hawklas):PRIMARY> select extent_management, allocation_type, segment_space_management,
  2  initial_extent, next_extent, pct_increase
  3  from dba_tablespaces
  4  where tablespace_name=UPPER('&&tbs')
  5  ;
Enter value for tbs: users

EXTENT_MAN ALLOCATIO SEGMEN INITIAL_EXTENT NEXT_EXTENT PCT_INCREASE
---------- --------- ------ -------------- ----------- ------------
LOCAL      SYSTEM    AUTO            65536

ARROW:(MDINH@hawklas):PRIMARY> select tablespace_name, file_id, file_name, bytes/1024/1024 mb, autoextensible, maxbytes/1024/1024/1024 max_gb
  2  from dba_data_files
  3  where tablespace_name=UPPER('&&tbs')
  4  ;

TABLESPACE_NAME                   FILE_ID FILE_NAME                                                       MB AUT     MAX_GB
------------------------------ ---------- ------------------------------------------------------- ---------- --- ----------
USERS                                   4 /oradata/HAWKLAS/datafile/o1_mf_users_c7120bbr_.dbf            160 YES 31.9999847

ARROW:(MDINH@hawklas):PRIMARY> select max(bytes)/1024/1024 max_free_mb, sum(bytes)/1024/1024 total_free_mb, count(*)
  2  from dba_free_space
  3  where tablespace_name=UPPER('&&tbs')
  4  ;

MAX_FREE_MB TOTAL_FREE_MB   COUNT(*)
----------- ------------- ----------
          4             6          7

ARROW:(MDINH@hawklas):PRIMARY> select segment_type, segment_subtype, max(initial_extent), max(next_extent)
  2  from dba_segments
  3  where tablespace_name=UPPER('&&tbs')
  4  group by segment_type, segment_subtype
  5  ;

SEGMENT_TYPE       SEGMENT_SU MAX(INITIAL_EXTENT) MAX(NEXT_EXTENT)
------------------ ---------- ------------------- ----------------
INDEX              ASSM                     65536          1048576
LOBSEGMENT         ASSM                     65536          1048576
LOBINDEX           ASSM                     65536          1048576
TABLE              ASSM                  94371840          1048576

ARROW:(MDINH@hawklas):PRIMARY> select sum(bytes)/1024/1024 from dba_segments where segment_name='ABC';

SUM(BYTES)/1024/1024
--------------------
                  96

ARROW:(MDINH@hawklas):PRIMARY> !df -h /oradata
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vg01-LogVol01
                       59G   35G   22G  62% /

ARROW:(MDINH@hawklas):PRIMARY> alter system set events '1652 trace name errorstack off';

System altered.

ARROW:(MDINH@hawklas):PRIMARY> create index x on abc(txt) tablespace users;

Index created.

ARROW:(MDINH@hawklas):PRIMARY> @check_tbs.sql
ARROW:(MDINH@hawklas):PRIMARY> set lines 200 echo on
ARROW:(MDINH@hawklas):PRIMARY> col file_name for a55
ARROW:(MDINH@hawklas):PRIMARY> col segment_name for a30
ARROW:(MDINH@hawklas):PRIMARY> col partition_name for a30
ARROW:(MDINH@hawklas):PRIMARY> col segment_type for a18
ARROW:(MDINH@hawklas):PRIMARY> select extent_management, allocation_type, segment_space_management,
  2  initial_extent, next_extent, pct_increase
  3  from dba_tablespaces
  4  where tablespace_name=UPPER('&&tbs')
  5  ;

EXTENT_MAN ALLOCATIO SEGMEN INITIAL_EXTENT NEXT_EXTENT PCT_INCREASE
---------- --------- ------ -------------- ----------- ------------
LOCAL      SYSTEM    AUTO            65536

ARROW:(MDINH@hawklas):PRIMARY> select tablespace_name, file_id, file_name, bytes/1024/1024 mb, autoextensible, maxbytes/1024/1024/1024 max_gb
  2  from dba_data_files
  3  where tablespace_name=UPPER('&&tbs')
  4  ;

TABLESPACE_NAME                   FILE_ID FILE_NAME                                                       MB AUT     MAX_GB
------------------------------ ---------- ------------------------------------------------------- ---------- --- ----------
USERS                                   4 /oradata/HAWKLAS/datafile/o1_mf_users_c7120bbr_.dbf            288 YES 31.9999847

ARROW:(MDINH@hawklas):PRIMARY> select max(bytes)/1024/1024 max_free_mb, sum(bytes)/1024/1024 total_free_mb, count(*)
  2  from dba_free_space
  3  where tablespace_name=UPPER('&&tbs')
  4  ;

MAX_FREE_MB TOTAL_FREE_MB   COUNT(*)
----------- ------------- ----------
         29            30          5

ARROW:(MDINH@hawklas):PRIMARY> select segment_type, segment_subtype, max(initial_extent), max(next_extent)
  2  from dba_segments
  3  where tablespace_name=UPPER('&&tbs')
  4  group by segment_type, segment_subtype
  5  ;

SEGMENT_TYPE       SEGMENT_SU MAX(INITIAL_EXTENT) MAX(NEXT_EXTENT)
------------------ ---------- ------------------- ----------------
INDEX              ASSM                     65536          1048576
LOBSEGMENT         ASSM                     65536          1048576
LOBINDEX           ASSM                     65536          1048576
TABLE              ASSM                  94371840          1048576

ARROW:(MDINH@hawklas):PRIMARY> select sum(bytes)/1024/1024 from dba_segments where segment_name='ABC';

SUM(BYTES)/1024/1024
--------------------
                  96

ARROW:(MDINH@hawklas):PRIMARY> select sum(bytes)/1024/1024 from dba_segments where segment_name='X';

SUM(BYTES)/1024/1024
--------------------
                 104

ARROW:(MDINH@hawklas):PRIMARY>
oracle@arrow:hawklas:/u01/app/oracle/diag/rdbms/hawklas/hawklas/trace
$ tailf alert_hawklas.log
Thu Jan 07 13:34:50 2016
Starting background process CJQ0
Thu Jan 07 13:34:51 2016
CJQ0 started with pid=26, OS id=3088
Thu Jan 07 13:37:28 2016
OS Pid: 3097 executed alter system set events '1652 trace name errorstack off'
Thu Jan 07 13:37:59 2016
Starting background process SMCO
Thu Jan 07 13:37:59 2016
SMCO started with pid=24, OS id=3101
Thu Jan 07 13:41:03 2016
Thread 1 advanced to log sequence 78 (LGWR switch)
  Current log# 3 seq# 78 mem# 0: /oradata/HAWKLAS/onlinelog/o1_mf_3_c7129gc8_.log
  Current log# 3 seq# 78 mem# 1: /oradata/fra/HAWKLAS/onlinelog/o1_mf_3_c7129go4_.log
Thu Jan 07 13:41:10 2016
Archived Log entry 4040 added for thread 1 sequence 77 ID 0xbc2f7b69 dest 1:

Trying Python and Pyplot for Database Performance Graphs

Bobby Durrett's DBA Blog - Wed, 2016-01-06 16:16

In the past I have used Excel to graph things related to Oracle database performance. I am trying out Python and the Pyplot library as an alternative to Excel.  I took a graph that I had done in Excel and rewrote it in Python. The graph shows the CPU usage within the database by category.  For example, I labeled the database CPU used by a group of web servers “WEBFARM1” on the graph.

Here is an example graph:

[example graph: database CPU usage by category]

You can find most of this code in the Python section of my GitHub repository. Here is the code that I used to create the example graph above using some made up data (linked as a zip file in the original post).

To make this graph in Excel I was running a sqlplus script and cutting and pasting the output into a text file that I imported into Excel. Very manual. No doubt there are ways that I could have automated what I was doing in Excel. But I have studied Python as part of the edX classes I took so I thought I would give it a try.

Python let me write a program to run the graph from an icon on my desktop. I used the cx_Oracle package to pull the data from the database and Pyplot for the graph.
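Roughly, the shape of such a script looks like this (a minimal sketch, not the actual program; the connect string, query and the cpu_by_category view are placeholder assumptions):

# Minimal sketch: pull CPU-by-category samples with cx_Oracle and draw
# one line per category with pyplot. The credentials, DSN and the
# cpu_by_category view are hypothetical placeholders.
import cx_Oracle
import matplotlib.pyplot as plt

con = cx_Oracle.connect("perfstat", "secret", "MYDB")
cur = con.cursor()
cur.execute("""
    select sample_time, category, cpu_seconds
      from cpu_by_category
     order by sample_time""")

series = {}                      # category -> ([times], [values])
for sample_time, category, cpu_seconds in cur:
    t, v = series.setdefault(category, ([], []))
    t.append(sample_time)
    v.append(cpu_seconds)

for category, (t, v) in series.items():
    plt.plot(t, v, label=category)   # e.g. one line for WEBFARM1

plt.xlabel("Time")
plt.ylabel("Database CPU (seconds)")
plt.legend()
plt.show()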

I’m running the Windows 32 bit version of Canopy Express for my Python development environment. This environment comes with Pyplot so I just had to install cx_Oracle to have all the packages I needed to make the graph.

I think both Excel and Python/Pyplot still have value. Excel still seems easier for quick and dirty graphing. But I used Python to automate a report that I run every day with fewer manual steps.  Probably could have done the same thing in Excel but I have recently studied Python so I was able to apply what I learned in my classes without a lot more effort.

Bobby

 

 

Categories: DBA Blogs

Error when starting #GoldenGate Extract against MS SQL Server

DBASolved - Wed, 2016-01-06 16:10

If you work with Oracle GoldenGate long enough, you will eventually have to set it up against a Microsoft SQL Server. Being that GoldenGate is a heterogeneous application, this isn’t a problem; however, there are small differences. One such difference is how the extract/replicat will connect to the MS SQL database.

In an Oracle-to-Oracle configuration, you would just run one of the following from the command line:

GGSCI> dblogin useridalias [ alias name]
or
GGSCI> dblogin userid [ user name ] password [ password ]

In a MS SQL Server environment, you can still log in at the GGSCI command prompt with the following:

GGSCI> dblogin sourcedb [ dsn ]

You will notice the difference, which is the use of an ODBC DSN (data source name) entry. Although setting up the ODBC DSN is not the point of this post, just keep in mind that it is required when connecting to MS SQL Server with Oracle GoldenGate.

After setting up the ODBC DSN, you will need to add the following to the extract/replicat parameter file to enable the process to connect to the database.

sourcedb [ dsn ]

Note: I normally put my connection information in a macro to modularize my parameter files. Plus, it makes things easier if the connection ever needs to change.

MACRO #logon_settings
BEGIN
sourcedb [ dsn ]
END;
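The macro is then invoked from the extract/replicat parameter file, something like this (the include path and group name here are just examples):

-- Extract parameter file (excerpt); assumes the macro is saved in
-- dirprm/macros.prm and the extract group is called EXT1.
EXTRACT EXT1
INCLUDE dirprm/macros.prm
#logon_settings ()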

Now, when you go to start the extract/replicat, you may get the following error:

ERROR OGG-00551 Database operation failed: Couldn’t connect to [ dns ]. ODBC error: SQLSTATE 37000 native database error 4060. [Microsoft][SQL Server Native Client 11.0][SQL Server]Cannot open database “db_name” requested by the login. The login failed.

The error message is a little bit misleading. It tells you that the process cannot connect to the database, even though you were able to connect from the GGSCI command prompt with no issue. Why is this? The issue lies in the fact that the manager (MGR) process is running as a service and does not have the correct permissions to access the database from that service.

In searching MOS for this error, I found Note ID: 1633138.1. In this note, notice that this issue is known as of Oracle GoldenGate version 12.1.2.x.x. The note also provides a fix for the issue. In simple terms, since the manager process is running as a service, additional permissions have to be granted to manager.

To grant the SYSADMIN privilege for the manager process, follow the below sequence of steps (on Windows, after all):

1. Manager is installed as a service:
Open SQL Server Management Studio -> Security -> Logins -> select NT AUTHORITY\SYSTEM -> Right Click -> Properties -> Server Roles -> enable the sysadmin role

2. ggsci>stop mgr

3. ggsci>start mgr

4. ggsci>start extract <extract-name>

After granting the sysadmin role, the extract will start.
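If you prefer T-SQL to clicking through Management Studio, the equivalent grant should be something like the following (assuming SQL Server 2012 or later; older releases would use sp_addsrvrolemember instead):

-- Grant sysadmin to the account the GoldenGate manager service runs as
ALTER SERVER ROLE sysadmin ADD MEMBER [NT AUTHORITY\SYSTEM];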

Enjoy!

about.me: http://about.me/dbasolved


Filed under: Golden Gate
Categories: DBA Blogs

WordPress 4.4.1

Tim Hall - Wed, 2016-01-06 15:41

WordPress 4.4.1 has been released.

You can see the list of fixes here. Your site will probably auto-update soon if you can’t be bothered to give it a nudge before.

There is also an update to the Twenty Sixteen theme, which you will have to trigger manually, if you are using it of course. :)

Happy blogging.

Cheers

Tim…


Cloud Solutions for State Boards and Commissions

WebCenter Team - Wed, 2016-01-06 13:03
By Mitchell Palski – Oracle Fusion Middleware Specialist

Managing documents within state boards and commissions can be paper and process intensive. Oftentimes, these applications are initiated with paper forms, internal documents are tracked in spreadsheets, and approvals are sent through external workflows as faxes or email attachments. Some processes will be focused on certifying new applicants; others will just be for circulating internal documents (such as a Code of Conduct). These processes require notifications, auditing, appointment scheduling, data evaluations, document collaboration and management, application integration and more. Examples of state boards and commissions include (but are not limited to):
  • Accountancy
  • Architects
  • Barber Examiners
  • Chiropractic
  • Cosmetology
  • Dentistry
  • Massage Therapy
  • Nursing
  • Optometry
  • Osteopathic Medicine
  • Pharmacy
  • Physical Therapy
  • Real Estate
  • Veterinary
  • And more… 
What issues need to be addressed?

Applications:

  • Applicants have no awareness of their application status
  • High-volumes cause overhead that can’t be managed appropriately
  • Documents get lost and workflow gets delayed
  • Once accepted, users have to be given access to internal systems manually 
Internal documents:

  • Collaboration through email leads to multiple versions of the same document
  • No visibility into which employees have complied with policy and opened documents
  • Documents are hard to find and can only be accessed from desktops/laptops

Certifications:

  • Education requirements can’t be accurately monitored
  • Organization does not have high-level progress-tracking of all members
  • Difficulty storing, maintaining, and finding documentation

Flexible and Simple Applications with Oracle Cloud

All of these processes have common needs but differ in their requirements – in their process, their data, and their needs for documentation. Some boards will have a high enough volume of cases to justify the need for a full-featured COTS solution. In other cases, budget and IT restrictions will cause certain organizations to either continue to depend on manual paper-based processes or to search for cost-effective alternatives.

Oracle Documents Cloud Service (ODCS) can safely and securely store your electronic documents so that when paper forms are used for applications or as supplementary documents, they can be associated with the applicant’s request. Oracle Process Cloud Service integrates with ODCS to automate document workflow, including review and approval, automated routing, dynamic and manual task assignment, and process performance metrics.

Oracle provides an integrated suite of digital collaboration services that enables business users to collaborate efficiently, automate disparate business processes, coordinate tasks, and interact via mobile and social interfaces—all with complete security and control. Oracle’s cloud collaboration platform ensures faster turnaround of requests and decisions. Users can securely store content in the cloud, tie it to their source of record and collaborate online—anywhere, anytime, and from any device—through shared folders and conversations.

References:

http://www.oracle.com/us/solutions/cloud/sb-content-collaboration-cloud-2626191.pdf

Happy New Year: The Zode In The Road

Floyd Teter - Wed, 2016-01-06 10:31
Happy New Year!  No predictions from me for 2016...we've all seen how poorly that works out.  No resolutions either (although I may open a fitness gym called "Resolutions"...the place converts into a bar at the end of January).   Instead, I simply leave you with the wisdom of Dr. Seuss; something to consider as you kick off the new year.

The Zode In The Road 
Did I ever tell you about the young Zode,
Who came to two signs at the fork in the road?
One said to Place One, and the other, Place Two.
So the Zode had to make up his mind what to do.
Well...the Zode scratched his chin, and his head, and his pants.
And he said to himself, "I'll be taking a chance
If I go to Place One.  Now that place may be hot!
And so how do I know if I'll like it or not?
On the other hand though, I'll be sort of a fool
If I go to Place Two and I find it too cool.
In that case I may catch a chill and turn blue!
So, maybe Place One is the best, not Place Two,
But then again, what if Place One is too high?
I may catch a terrible earache and die.
So Place Two may be best.  On the other hand though...
What might happen to me if Place Two is too low?
I might get some very strange pain in my toe!
So Place One may be best", and he started to go.
Then he stopped, and he said, "On the other hand though...
On the other hand... On the other hand... On the other hand though...
And for 36 hours and one half that poor Zode
Made starts and stops at the fork in the road
Saying "Don't take a chance! No! You may not be right."
Then he got an idea that was wonderfully bright!
"Play safe!" cried the Zode. "I'll play safe, I'm no dunce!
I'll simply start out for both places at once!"
And that's how the Zode who would not take a chance
Got no place at all with a split in his pants.
So make this the year you take a few chances...

How an Oracle error can send you the wrong way…

Patrick Barel - Wed, 2016-01-06 08:53

At my current assignment I needed to create an update statement to copy data from one table to another. Quite a simple task, I would say, but an error (actually a constraint violation) sent me the wrong way in finding my solution.

Suppose I have two tables:

CREATE TABLE a
( mykey NUMBER
, thisvalue VARCHAR2(20) NOT NULL
)
/
CREATE TABLE b
( mykey NUMBER
, thatvalue VARCHAR2(20) NOT NULL
)
/

and some data in them:

INSERT INTO a (mykey,thisvalue) VALUES (1,'Larry');
INSERT INTO a (mykey,thisvalue) VALUES (2,'Bryn');
INSERT INTO a (mykey,thisvalue) VALUES (3,'Steven');
INSERT INTO a (mykey,thisvalue) VALUES (4,'Patrick');
INSERT INTO b (mykey,thatvalue) VALUES (1,'Larry Ellison');
INSERT INTO b (mykey,thatvalue) VALUES (2,'Bryn Llewellyn');
INSERT INTO b (mykey,thatvalue) VALUES (3,'Steven Feuerstein');
COMMIT
/ 

Now I want to update the values in table a with the values of table b. My first idea was to write a statement like this:

UPDATE a
   SET a.thisvalue = (select b.thatvalue
                        FROM b
                       WHERE b.mykey = a.mykey)
/

but this statement led to the following error:

ORA-01407: cannot update ("DEMO"."A"."THISVALUE") to NULL 


No problem, I thought, if the new value is somehow NULL, then just use the old value:

UPDATE a
   SET a.thisvalue = (select NVL(b.thatvalue, a.thisvalue)
                        FROM b
                       WHERE b.mykey = a.mykey)
/

but this still resulted in the same error:

ORA-01407: cannot update ("DEMO"."A"."THISVALUE") to NULL 

Then it dawned upon me. For mykey=4 there would be no match in table B, which resulted in no row returned, hence a NULL value.

The error given is absolutely correct, but it sent me the wrong way in finding a solution. If I had gotten a NO_DATA_FOUND error, I would have known right away what the problem was.

The solution was pretty easy, just update the rows that have a matching row in table B:

UPDATE a
   SET a.thisvalue = (select b.thatvalue
                        FROM b
                       WHERE b.mykey = a.mykey)
 WHERE EXISTS (select b.thatvalue
                 FROM b
                WHERE b.mykey = a.mykey)
/

Another solution might be using the MERGE statement:

MERGE INTO a
USING (select b.mykey, b.thatvalue
         FROM b) b
ON (a.mykey = b.mykey)
WHEN MATCHED THEN
  UPDATE
     SET a.thisvalue = b.thatvalue
/

If the subselect results in more than one row, you get an equivalent of the TOO_MANY_ROWS exception (ORA-01427), but if the subselect results in no rows you don’t get the NO_DATA_FOUND (or equivalent) exception.


NLS Mess

Jonathan Lewis - Wed, 2016-01-06 07:18

The Oracle database has all sorts of little details built into it to help it deal with multi-national companies, but since they’re not commonly used you can find all sorts of odd “buggy” bits of behaviour when you start to look closely. I have to put “buggy” in quotes because some of the reported oddities are the inevitable consequences of (for example) how multi-byte character sets have to work; but some of the oddities look as if they simply wouldn’t be there if the programmer writing the relevant bit of code had remembered that they also had to cater for some NLS feature.

Here’s an example of the type of unexpected behaviour that can appear. There probably are some bugs in the area I’m going to demonstrate but, at first glance, I thought I was looking at an acceptable limitation imposed by a generic requirement. The example came from AskTom, which is why the data set isn’t my usual “t1” generation (and the formatting and capitalisation isn’t according to my usual standards).

The problem involves Case Insensitive indexing.


ALTER session SET nls_sort=binary_ci;
ALTER session SET nls_comp=linguistic;

CREATE TABLE log_data(
  account_id NUMBER,
  log_type NUMBER,
  sys_name VARCHAR2(30),
  log_time TIMESTAMP,
  msg varchar2(4000)
)
nologging
;

insert /*+ append */ into log_data(
  account_id,
  log_type,
  sys_name,
  log_time,
  msg
)
select
        5,
        2,
        dbms_random.string('a',1),
        sysdate + dbms_random.value,
        rpad('x',200)
from
        dual
connect by
        level <= 26000
;


create index log_date on log_data (
        account_id, 
        log_type, 
--      sys_name,
        NLSSORT(sys_name,'NLS_SORT=BINARY_CI'),
        log_time
)
nologging
;
  
rem     ======================================================================
rem     Need to gather stats AFTER index creation because of the hidden column
rem     ======================================================================
  
begin
        dbms_stats.gather_table_stats(
                ownname          => user,
                tabname          =>'LOG_DATA',
                method_opt       => 'for all columns size 1'
        );
end;
/

And here’s the query I want to optimize:


SELECT 
        *
FROM
  (
    SELECT
        sys_name, log_time,  substr(msg,1,40) msg
    FROM log_data
    WHERE
      account_id=5
      AND log_type=2
      AND sys_name='a'
    ORDER BY
      log_time  desc
  )
WHERE
  rownum <= 10
;

The requirement of the query is that we see the ten most recent entries for a given combination of account_id, log_type and sys_name (ignoring case in sys_name). The original table has tens of millions of rows, of course, with many combinations, and some of the combinations have a very large number of entries, hence the desire to find an access path that gets just the 10 rows we want without getting all the rows for a combination and sorting them before returning the ten.

Normally we would just create an index that started with the 3 columns used in the equality and ended with the column in the order by clause, and that would be enough for the optimizer to see the option for a “sort order by nosort” operation to get the required data through an index range scan; so that’s the index the code sample creates, except that since we’ve enabled case insensitive sorting we need to use a function-based index to hold the case-insensitive version of sys_name.

Here’s the execution plan we would get if we DIDN’T use the nlssort() function in the index – I’ve run the query in 11.2.0.4 and pulled the plan from memory with rowsource execution stats enabled:


---------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |          |      1 |        |   605 (100)|     10 |00:00:00.02 |    1065 |       |       |          |
|*  1 |  COUNT STOPKEY                 |          |      1 |        |            |     10 |00:00:00.02 |    1065 |       |       |          |
|   2 |   VIEW                         |          |      1 |    500 |   605   (1)|     10 |00:00:00.02 |    1065 |       |       |          |
|*  3 |    SORT ORDER BY STOPKEY       |          |      1 |    500 |   605   (1)|     10 |00:00:00.02 |    1065 |  2048 |  2048 | 2048  (0)|
|   4 |     TABLE ACCESS BY INDEX ROWID| LOG_DATA |      1 |    500 |   603   (1)|    966 |00:00:00.01 |    1065 |       |       |          |
|*  5 |      INDEX RANGE SCAN          | LOG_DATE |      1 |    500 |   103   (3)|    966 |00:00:00.01 |     100 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=10)
   3 - filter(ROWNUM<=10)
   5 - access("ACCOUNT_ID"=5 AND "LOG_TYPE"=2)
       filter(NLSSORT("SYS_NAME",'nls_sort=''BINARY_CI''')=HEXTORAW('6100') )

Notice particularly the filter predicate at operation 5: that’s the thing we need to get into the index before we can avoid picking up excess data and sorting it. Notice also in the A-Rows column that we acquired 966 rows from the table before sorting and discarding all but 10 of them at operation 3.

Notice especially how important it is to look at the predicate section of an execution plan to gain a full understanding of what’s happening.

So here’s the execution plan we get by default with the function-based index in place:


----------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
----------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |          |      1 |        |    13 (100)|     10 |00:00:00.01 |     969 |       |       |          |
|*  1 |  COUNT STOPKEY                  |          |      1 |        |            |     10 |00:00:00.01 |     969 |       |       |          |
|   2 |   VIEW                          |          |      1 |     11 |    13   (0)|     10 |00:00:00.01 |     969 |       |       |          |
|*  3 |    SORT ORDER BY STOPKEY        |          |      1 |     11 |    13   (0)|     10 |00:00:00.01 |     969 |  2048 |  2048 | 2048  (0)|
|   4 |     TABLE ACCESS BY INDEX ROWID | LOG_DATA |      1 |   1000 |    13   (0)|    966 |00:00:00.01 |     969 |       |       |          |
|*  5 |      INDEX RANGE SCAN DESCENDING| LOG_DATE |      1 |     11 |     2   (0)|    966 |00:00:00.01 |       5 |       |       |          |
----------------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=10)
   3 - filter(ROWNUM<=10)
   5 - access("ACCOUNT_ID"=5 AND "LOG_TYPE"=2 AND "LOG_DATA"."SYS_NC00006$"=HEXTORAW('6100') )

It didn’t work! (Check the A-Rows at operations 4 and 5, and the sort that we didn’t want at operation 3 where the data is finally reduced to 10 rows.)

But there’s something odd going on here – look at the predicate section: our three predicates are all access predicates for the index range scan descending. We are doing exactly what we want to do with the index, but we’re not stopping after the 10 rows that we need, we’re getting all of them (in the order we want) and then doing a trivial sort and discard. Look at the Cost column – the cost at operation 4 is exactly what we might expect for the 10 rows we want to see, and the E-rows at line 5 is clearly based on our “first 10 rows” requirement.

This raises two questions:

  1. What’s gone wrong ?
  2. Can we work around the problem ?

The answer to (1) is, I think, that there’s a bug in the code. Looking at the 10053 trace file I can see the optimizer correctly handling the arithmetic of the virtual column (the sys_nc00006$) representing the function in the index and then getting to the point where it goes into a code section relating to “Recost for ORDER BY”, and brings back the original function as a filter predicate – I think that in the recosting it may be losing track of the fact that sys_nc00006$ and nlssort(sys_name, ‘nls_sort=binary_ci’) are the same thing and therefore can’t apply the rule about “Equality on 1st N columns, order by on the remainder”.

There are several answers to (2).

Workarounds

The honest hack

The first one is simply to fall back to the old (probably version 7, possibly version 8) requirement for getting the “sort order by nosort” operation – put all the index columns into the order by clause. Unfortunately the optimizer then did a tablescan rather than an index range scan because my data set was so small, so I had to hack the system stats temporarily to make the tablescan very expensive:


begin
        dbms_stats.set_system_stats('MBRC',2);         -- tiny multiblock read count
        dbms_stats.set_system_stats('MREADTIM',20);    -- expensive multiblock reads
        dbms_stats.set_system_stats('SREADTIM',5);     -- cheap single block reads
        dbms_stats.set_system_stats('CPUSPEED',1000);
end;
/

... order by account_id desc, log_type desc, sys_name desc, log_time desc

Unfortunately the optimizer still went wrong – it did an ASCENDING index range scan sorting all the data. I actually had to hint the code to use the index in descending order to get the following execution plan:
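The hinted query looked something like this – a sketch, with the table alias, hint placement and literal values being my reconstruction:

select  *
from    (
        select  /*+ index_desc(l log_date) */
                *
        from    log_data l
        where   account_id = 5
        and     log_type   = 2
        and     sys_name   = 'a'
        order by
                account_id desc, log_type desc, sys_name desc, log_time desc
        )
where   rownum <= 10;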


------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |          |      1 |        |  1215 (100)|     10 |00:00:00.01 |      13 |
|*  1 |  COUNT STOPKEY                 |          |      1 |        |            |     10 |00:00:00.01 |      13 |
|   2 |   VIEW                         |          |      1 |   1000 |  1215   (1)|     10 |00:00:00.01 |      13 |
|   3 |    TABLE ACCESS BY INDEX ROWID | LOG_DATA |      1 |   1000 |  1006   (1)|     10 |00:00:00.01 |      13 |
|*  4 |     INDEX RANGE SCAN DESCENDING| LOG_DATE |      1 |   1000 |     5   (0)|     10 |00:00:00.01 |       3 |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=10)
   4 - access("ACCOUNT_ID"=5 AND "LOG_TYPE"=2 AND "LOG_DATA"."SYS_NC00006$"=HEXTORAW('6100') )

The A-Rows tells us we’ve accessed the minimum data set, and the absence of the SORT ORDER BY STOPKEY operation tells us that we’ve avoided doing the sort. Notice, though, that the cost is the cost that would have been appropriate if we had accessed all 1,000 rows that matched the equality predicates. This is an example of a plan that you couldn’t really trust if all you had done was an “explain plan” rather than running the query and checking the rowsource execution stats. If you ignore the A-Rows it looks as if the plan WOULD get all the data in order and only eliminate the redundant rows at operation 1.
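A housekeeping note if you reproduce this test: the hacked system statistics should be reverted once you’ve finished. One option (you could equally set the original values back) is to delete them, which returns the optimizer to its default noworkload values:

begin
        dbms_stats.delete_system_stats;
end;
/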

The silly surprise

The original author of the problem came up with this one: put in two predicates which, between them, are equivalent to the original requirement:


where ...
and     sys_name >= 'a'
and     sys_name <= 'a'

Clearly this is totally silly – the optimizer can fold this pair of predicates into the single predicate “sys_name = ‘a’”, so it shouldn’t make any difference. But here’s the execution plan:

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |          |      1 |        |    13 (100)|     10 |00:00:00.01 |      13 |
|*  1 |  COUNT STOPKEY                 |          |      1 |        |            |     10 |00:00:00.01 |      13 |
|   2 |   VIEW                         |          |      1 |     11 |    13   (0)|     10 |00:00:00.01 |      13 |
|   3 |    TABLE ACCESS BY INDEX ROWID | LOG_DATA |      1 |   1000 |    13   (0)|     10 |00:00:00.01 |      13 |
|*  4 |     INDEX RANGE SCAN DESCENDING| LOG_DATE |      1 |     11 |     2   (0)|     10 |00:00:00.01 |       3 |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=10)
   4 - access("ACCOUNT_ID"=5 AND "LOG_TYPE"=2 AND "LOG_DATA"."SYS_NC00006$"=HEXTORAW('6100') )

Yes, it’s (structurally) exactly the same plan, with exactly the same predicate section, except that (a) it gets there without being hinted, (b) the Cost column looks appropriate all down the line, and (c) the E-Rows value for the VIEW operator would have helped us appreciate that the correct elimination was (probably) going to happen even if all we had done was an Explain Plan.

The dirty hack

I know the name of the hidden column that’s causing the problem, and I know how to generate the value it has to be – so let’s give Oracle exactly what it needs to see rather than allowing its internal transformation to rewrite the SQL:

...
AND sys_nc00006$ = nlssort('a','nls_sort=binary_ci')
...


------------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name     | Starts | E-Rows | Cost (%CPU)| A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |          |      1 |        |    13 (100)|     10 |00:00:00.01 |      13 |
|*  1 |  COUNT STOPKEY                 |          |      1 |        |            |     10 |00:00:00.01 |      13 |
|   2 |   VIEW                         |          |      1 |     11 |    13   (0)|     10 |00:00:00.01 |      13 |
|   3 |    TABLE ACCESS BY INDEX ROWID | LOG_DATA |      1 |   1000 |    13   (0)|     10 |00:00:00.01 |      13 |
|*  4 |     INDEX RANGE SCAN DESCENDING| LOG_DATE |      1 |     11 |     2   (0)|     10 |00:00:00.01 |       3 |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(ROWNUM<=10)
   4 - access("ACCOUNT_ID"=5 AND "LOG_TYPE"=2 AND "SYS_NC00006$"=HEXTORAW('6100') )

We get exactly the plan we need – and the silly thing about this example is that it’s a case where we get the plan we want by EXPLICITLY transforming the SQL to reproduce the transformation that Oracle had done IMPLICITLY and then messed up!
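As a final sanity check, the HEXTORAW('6100') that keeps appearing in the predicate sections is just the nlssort() value for ‘a’; you can verify this by evaluating the function directly, which should return the 6100 seen in the plans (0x61 for lower-case ‘a’ followed by a zero terminator byte):

select  nlssort('a', 'nls_sort=binary_ci') as sort_key
from    dual;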

Final Choice
Of the three options – the dirty hack is definitely a no-no in production; the “double the predicate” trick is undesirable because it may depend in some unexpected way on a particular optimizer bug or on some statistical detail that could change; so I’d choose the hinted path with the (nominally) redundant columns in the order by clause.

One final point about this solution: we actually needed to include only sys_name in the order by clause to get the descending range scan and early stop – which is basically another indication that it’s something about the function-based column that is breaking the normal code path.

Reference Script: nls_sort_anomaly.sql


The Time I Ran Out of Gas

Oracle AppsLab - Wed, 2016-01-06 02:08

Before Christmas, I ran out of gas for the first time.

All things considered, I was very lucky. It was just me in the car, and the engine died in a covered parking structure, in a remote corner with few cars. Plus, it was the middle of the day, during the week before Christmas, so not a lot of people were out and about anyway.

Could have been a lot worse.

The reason why I ran out of gas is more germane, and as a result of my mishap, I found another interesting experience.

I’ll start with the why. If you read here, you’ll know I’ve been researching the quantified self, which I understand roughly as the collection of data related to me and the comparison of these data sets to identify efficiencies.

As an example, I tracked fitness data with a variety of wearables for most of last year.

About a year ago, Ben posted his impressions of Automatic, a device you plug into your car’s OBD-II diagnostics port that will quantify and analyze your driving data and track your car’s overall health, including its fuel consumption and range.

I have since added Automatic to my car as another data set for my #QS research. I’ve found the data very useful, especially with respect to the real cost of driving.

Since Automatic knows when you fill the tank and can determine the price you paid, it can provide a very exact cost for each trip you make. This adds a new dimension to mundane tasks.

Suddenly, running out for a single item has a real cost, driving farther for a sale can be accurately evaluated, splitting the cost of a trip can be exact, etc.

One feature I appreciate is the range. From the wayback archives, in 2010, I discussed the experience and design challenges of a gas gauge. My Jeep warns of low fuel very early and won’t report the range once the low fuel indicator has been tripped.

However, Automatic always reports an estimated range, which I really like.

[Screenshot: Automatic’s estimated range display]

Maybe too much, given this is how I ran out of gas. The low fuel indicator had been on for a day, but I felt confident I could get to a gas station with plenty of time, based on the estimated range reported by Automatic.

Armed with my false confidence, I stopped to eat on the way to the gas station, and then promptly ran out of gas.

To be clear, this was my fault, not Automatic’s.

In my 2010 musings on the gas gauge, I said:

It does seem to be nigh impossible to drive a car completely out of gas, which seems to be a good thing, until you realize that people account for their experiences with the gauges when driving on E, stretching them to the dry point.

Yeah, I’m an idiot, but I did discover an unforeseen negative: over-reliance on data. I knew it was there, like in every other data vs. experience point-counterpoint. I know better than to rely on data alone, but I still failed.

Something I’ll need to consider more deeply as my #QS investigation continues.

Now for the interesting experience I found.

After calling roadside assistance for some gas, my insurance company texted me a link to monitor “the progress of my roadside event.” Interesting copy-writing.

That link took me to an experience you might recognize.

[Screenshot: real-time tracking page for the roadside-assistance truck]

Looks like Uber, doesn’t it? Because it’s a web page, the tow truck icon wasn’t animated like the Uber cars are, but overall, it’s the same real-time experience.

I like the use here because it gives a tangible sense that help is on the way, nicely done.

So, that’s my story. I hope to avoid repeating this in the future, both the running-out-of-gas and the over-reliance on data.

Find the comments and share your thoughts.

Automatic install of JDeveloper

Darwin IT - Wed, 2016-01-06 01:57
A few weeks ago I wrote a little script to do a silent install of SOA/BPM QuickStart.
Yesterday I enhanced the script to also do an automatic install of Java and to adapt the environment settings so that the JDeveloper-specific settings are placed in C:\Data\JDeveloper\SOA.

To run the script you should have the following files placed in one folder (for instance c:\temp\soabpm):
Download the jdk-8u65-windows-x64.exe from here. If you use another version, adapt the script below: change JAVA_HOME to the location where that version should be installed, and update the reference to the installer (jdk-8u65-windows-x64.exe) accordingly. The script below ignores any other installation of Java (whether another update of JDK 8 or an earlier version).

Using the 'setx' command (see docs), JAVA_HOME is set explicitly, as are, later on, the variables JDEV_USER_DIR_SOA and JDEV_USER_HOME_SOA, which point to C:\Data\JDeveloper\SOA. I picked the jdev.boot from an earlier install and adapted the ide.user.dir.var variable as follows:

#
# The ide.user.dir.var specifies the name of the environment variable
# that points to the root directory for user files. The system and
# mywork directories will be created there. If not defined, the IDE
# product will use its base directory as the user directory.
#
ide.user.dir.var = JDEV_USER_HOME_SOA,JDEV_USER_DIR_SOA

I placed a copy of this jdev.boot file in the same folder as the script. The script sets FMW_HOME to 'C:\oracle\JDeveloper\12210_BPMQS'. After the install of the JDeveloper SOA/BPM QuickStart, it copies this jdev.boot file to the right place in the FMW_HOME (%FMW_HOME%\jdeveloper\jdev\bin).
Make sure that you have the installer jar files (fmw_12.2.1.0.0_bpm_quickstart.jar and fmw_12.2.1.0.0_bpm_quickstart2.jar) unzipped in the same folder; you may have downloaded them as zip files. You can also use this script to do a complete silent install of JDeveloper Studio Edition, but then use the generic installer downloaded here (Oracle JDeveloper 12c 12.2.1.0.0 Generic/Others (2.1GB)).

The installBpmQS.bat script is as follows.
@echo off
set JAVA_HOME=c:\Program Files\Java\jdk1.8.0_65
echo setx -m JAVA_HOME "%JAVA_HOME%"
setx -m JAVA_HOME "%JAVA_HOME%"
echo JAVA_HOME=%JAVA_HOME%
rem Check Java
if exist "%JAVA_HOME%" goto :JAVA_HOME_EXISTS
echo Install %JAVA_HOME%
jdk-8u65-windows-x64.exe /s INSTALLDIR="%JAVA_HOME%"
:JAVA_HOME_EXISTS
echo %JAVA_HOME% exists
rem check BPM12.2 QS
set FMW_HOME=C:\oracle\JDeveloper\12210_BPMQS
if exist "%FMW_HOME%" goto :BPMQS_HOME_EXISTS
echo Install %FMW_HOME%
rem echo "Current dir2:" %~dp0
echo "Silent install BPM QuickStart, using response file:" %cd%\bpmqs1221_silentInstall.rsp
"%JAVA_HOME%\bin\java.exe" -jar fmw_12.2.1.0.0_bpm_quickstart.jar -silent -responseFile %cd%\bpmqs1221_silentInstall.rsp -nowait
:BPMQS_HOME_EXISTS
echo %FMW_HOME% exists
if not exist c:\Data mkdir c:\Data
if not exist c:\Data\JDeveloper mkdir c:\Data\JDeveloper
if not exist c:\Data\JDeveloper\SOA mkdir c:\Data\JDeveloper\SOA
rem set JDEV_USER_DIR_SOA and JDEV_USER_HOME_SOA as 'c:\Data\JDeveloper\SOA'
echo set JDEV_USER_DIR_SOA and JDEV_USER_HOME_SOA as c:\Data\JDeveloper\SOA
setx -m JDEV_USER_DIR_SOA c:\Data\JDeveloper\SOA
setx -m JDEV_USER_HOME_SOA c:\Data\JDeveloper\SOA
echo copy jdev.boot to "%FMW_HOME%\jdeveloper\jdev\bin"
copy jdev.boot "%FMW_HOME%\jdeveloper\jdev\bin" /Y
echo done

For completeness, the content of the response file is:
[ENGINE]

#DO NOT CHANGE THIS.
Response File Version=1.0.0.0.0

[GENERIC]

#Set this to true if you wish to skip software updates
DECLINE_AUTO_UPDATES=true

#
MOS_USERNAME=

#
MOS_PASSWORD=<secure value="">

#If the Software updates are already downloaded and available on your local system, then specify the path to the directory where these patches are available and set SPECIFY_DOWNLOAD_LOCATION to true
AUTO_UPDATES_LOCATION=

#
SOFTWARE_UPDATES_PROXY_SERVER=

#
SOFTWARE_UPDATES_PROXY_PORT=

#
SOFTWARE_UPDATES_PROXY_USER=

#
SOFTWARE_UPDATES_PROXY_PASSWORD=<secure value="">

#The oracle home location. This can be an existing Oracle Home or a new Oracle Home
ORACLE_HOME=C:\oracle\JDeveloper\12210_BPMQS
Save this as bpmqs1221_silentInstall.rsp. If you use this for the installation of JDeveloper Studio Edition or any other version of JDeveloper, then adapt the name, also in installBpmQS.bat above, and adapt the FMW_HOME/ORACLE_HOME in both installBpmQS.bat and the response-file.

I could have enhanced this script to put the likely points of change into variables. But I think with my elaboration you should get the point...