
Feed aggregator

WebLogic Server Hangs at Startup: Beware if your Disk is 100% full

Online Apps DBA - Tue, 2016-02-02 16:43
This entry is part 5 of 6 in the series WebLogic Server

We were recently implementing Oracle Fusion Middleware for one of our customers in the United Arab Emirates when we encountered an issue where WebLogic Server hangs at startup. (WebLogic is the heart of Fusion Middleware: it is now used in almost every Fusion Middleware product, and also in E-Business Suite R12.2 and PeopleSoft.) We describe the issue below.

Issue:

While starting the WebLogic Admin Server, the server would not come up: it hung at one point during startup without generating any further logs. This can happen for many reasons.

WebLogic Server 10.3.6.0 Tue Nov 15 08:52:36 PST 2011 1441050 >
<Jan 29, 2016 3:42:26 AM PDT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to STARTING>
<Jan 29, 2016 3:42:26 AM PDT> <Info> <WorkManager> <BEA-002900> <Initializing self-tuning thread pool>
<Jan 29, 2016 3:42:26 AM PDT> <Notice> <Log Management> <BEA-170019> <The server log file /mydomain/WLS/user_projects/domains/wls_mydomain/servers/AdminServer/logs/AdminServer.log is opened. All server side log events will be written to this file.>

Cause:

Earlier, we had been facing a disk space issue (the mount where the WebLogic domain directory was configured was 100% full) and had seen the message "No Disk Space Left on Device", after which the WebLogic Admin Server for BI Publisher went into a hung state. Even after freeing up some disk space on the server, it would not come up and remained stuck at the point shown above.
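To confirm the situation, a quick check of the mount hosting the domain looks like this (a sketch; the paths are the ones from the example above):

df -h /mydomain/WLS
du -sh /mydomain/WLS/user_projects/domains/wls_mydomain/servers/AdminServer/logs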

Fix:

After further investigation, we found the issue was with the data folder inside /mydomain/WLS/user_projects/domains/wls_mydomain/servers/AdminServer. To check whether it really was the data folder, we renamed/moved it to data_backup (as this was a test environment) and tried to start the server again. The server recreated the data folder by itself and started up fine without any hangs.
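For reference, the workaround itself is just a rename of the data folder while the server is down; the Admin Server recreates the folder on the next startup. A sketch based on the paths above (the start script location assumes a standard domain layout, and as noted, only try this on a test environment):

cd /mydomain/WLS/user_projects/domains/wls_mydomain/servers/AdminServer
mv data data_backup
cd /mydomain/WLS/user_projects/domains/wls_mydomain
./bin/startWebLogic.sh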

 

If you want to learn more, or wish to discuss challenges you are hitting in your Oracle WebLogic Server implementation, register for our Oracle WebLogic Administration Training (the next batch starts on 13th February 2016 – register before 6th Feb and get a discount of 100 USD; apply coupon code W100OFF).

We are so confident in the quality and value of our training that we provide a 100% money-back guarantee: in the unlikely case that you are not happy after two sessions, just drop us a mail before the third session and we'll refund the FULL amount.

We also provide a dedicated machine on the cloud to practice WebLogic implementation, including day-to-day tasks, and recordings of the live interactive trainings with lifetime access.

 

The post WebLogic Server Hangs at Startup: Beware if your Disk is 100% full appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Fluid Header and Navigation is the New Standard

PeopleSoft Technology Blog - Tue, 2016-02-02 16:15
Beginning with PeopleTools 8.55, PeopleSoft 9.2 applications will have a Fluid header on their classic pages that matches the fluid pages.  This unifies the user experience of classic pages with newer fluid pages and applications.  With the fluid user interface, user navigation is more seamless and intuitive.  Using fluid homepages, tiles, global search, related actions, and the new fluid Navigation Collection feature, users can more easily navigate to the information most important to them.  Refer to the PeopleSoft Fluid User Interface and Navigation Standards White Paper (Document ID 2063602.1) for more information on design best practices for navigation within PeopleSoft applications.
Part of this change that makes Fluid the default is the replacement of the drop down menu navigation.  In most cases, customers will want their users to simply use the Nav Bar in place of any classic menu navigation.  However, if there is a special circumstance where customers want to maintain the classic menus, they can do so.  There are two ways of displaying the classic menus:

 Method 1 – Switch back to default tangerine or Alt-Tang theme

1. Go to PeopleTools >> Portal >> Branding >> Branding System Options.
2. Change the system default theme back to default tangerine or alt-tang.
3. Sign out and sign in again to see the changes.

Method 2 – Unhide the drop down menu in default fluid theme

1. Go to PeopleTools >> Portal >> Branding >> Define Headers and Footers.
2. Search for and open the DEFAULT_HEADER_FLUID header definition.
3. Copy the following styles into the “Style Definitions” field at the bottom of the page, and then save:

.desktopFluidHdr .ptdropdownmenu {
    display: block;
}

4. Sign out and sign in again to see the changes.

We encourage customers to stick with Fluid navigation as the standard.  It's simply better and more intuitive. 

WordPress 4.4.2

Tim Hall - Tue, 2016-02-02 13:30

WordPress 4.4.2 has been released.

You can see the list of fixes here. Three of the five installations I maintain had already updated by the time I got to them, so by the time you read this you will probably already have it too. :)

Happy blogging.

Cheers

Tim…


Empowering Students in Open Research

Michael Feldstein - Tue, 2016-02-02 12:51

By Michael Feldstein

Phil and I will be writing a twice-monthly column for the Chronicle’s new Re:Learning section. In my inaugural column, “Muy Loco Parentis,” I write about how schools make data privacy decisions on behalf of the students that the students wouldn’t make for themselves, and that may even be net harmful for the students. In contrast to the ways in which other campus policies have evolved, there is still very much a default paternalistic position regarding data.

But the one example that I didn’t cover in my piece happens to be the one that inspired it in the first place. A few months back at the OpenEd conference, I heard a presentation from CMU’s Norm Bier about the challenges of getting different schools to submit OLI student data to a common database for academic research. Basically, every school that wants to do this has to go through its own IRB process, and every IRB is different. Since the faculty using the OLI products usually aren’t engaged in the research themselves, it generally isn’t worth the hassle to go through this process, so the data doesn’t get submitted and the research doesn’t get done. Note that Pearson and McGraw Hill do not have this problem; if they want to look at student performance in a learning application across various schools, they can. Easily. Something is wrong with this picture. I proposed in Norm’s session that maybe students could be given an option to openly publish their data. Maybe that would get around the restrictions. David Wiley, who does a lot more academic research than I do, seemed to think this wasn’t a crazy idea, so I’ve been gnawing on the problem since then.

I have talked to a bunch of researchers about the idea. The first reaction is often skepticism. IRB is not so easy to circumvent (for good reason). What generally changed their minds was the following thought experiment:

  • Suppose that, in some educational software program, there was a button labeled “Export.” Students could click the button and export their data in some suitably anonymized format. (Yes, yes, it is impossible to fully de-identify data, but let’s posit “reasonably anonymized” as assessed by a community of data scientists.) Would giving students the option to export their data to any server of their choosing trigger the requirement for IRB review? [Answer: No.]
  • Suppose the export button offered a choice to export to CMU’s research server. Would giving students that option trigger the requirement for IRB review? [Answer: Probably not.]

There are two shades of gray here that are complications. First, researchers worry about the data bias that comes from opt-in. And the further you lead students down the path toward encouraging them to share their data, such as making sharing the default, the more the uneasiness sets in. Second and relatedly, there is the issue of informed consent. There was a general feeling that, even if you get around IRB review, there is still a strong ethical obligation to do more than just pay lip service to informed consent. You need to really educate students on the potential consequences of sharing their data.

That’s all fair. I don’t claim that there is a silver bullet. But the thought experiment is revealing. Our intuitions, and therefore our policies, about student data privacy are strongly paternalistic in an academic context but shift pretty quickly once the institutional role fades and the student’s individual choice is foregrounded. I think this is an idea worth exploring further.

The post Empowering Students in Open Research appeared first on e-Literate.

BPM/SOA 12c: Symbolic Filebased MDS in Integrated Weblogic

Darwin IT - Tue, 2016-02-02 05:44
In BPM/SOA projects, we use the MDS all the time, for sharing xsd's and wsdl's between projects.

Since 12cR1 (12.1.3) we have the QuickStart installers for SOA and BPM, which allow you to create an Integrated Weblogic domain to use for SOA Suite and/or BPM Suite.

In most projects we have the contents of the MDS in subversion and of course a check out of that in a local svn working copy.

My whitepaper mentioned in this blog entry describes how you can use the MDS in a SOA Suite project from 11g onwards.

But how do you use the MDS in your integrated weblogic? I would expect that somehow, 'magically', the integrated weblogic would 'know' of the MDS references that I have in the adf-config.xml file in my SOA/BPM Application. But unfortunately it doesn't: that file is only used at design/compile time.
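For reference, the kind of MDS reference in adf-config.xml that I mean looks roughly like this (a sketch only: the metadata-path points at the SVN working copy used later in this post, and the exact attribute layout may differ per JDeveloper version):

<adf-mds-config xmlns="http://xmlns.oracle.com/adf/mds/config">
  <mds-config xmlns="http://xmlns.oracle.com/mds/config">
    <persistence-config>
      <metadata-namespaces>
        <!-- everything under oramds://apps resolves against this store -->
        <namespace path="/apps" metadata-store-usage="mstore-usage_1"/>
      </metadata-namespaces>
      <metadata-store-usages>
        <metadata-store-usage id="mstore-usage_1">
          <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
            <property name="metadata-path" value="y:\Generiek\MDS\trunk\SOA"/>
            <property name="partition-name" value="soa-infra"/>
          </metadata-store>
        </metadata-store-usage>
      </metadata-store-usages>
    </persistence-config>
  </mds-config>
</adf-mds-config>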

Now you could just deploy/sync your MDS to your integrated weblogic as you would to your test/production server, and as you did on 11g.

But I wouldn't write this blog-entry if I did not find a cool trick: symbolic links, even on Windows.

As denoted by the JDEV_USER_DIR variable (see also this blog entry), your DefaultDomain would be in 'c:\Data\JDeveloper\SOA\system12.2.1.0.42.151011.0031\DefaultDomain' or 'c:\Users\MAG\AppData\Roaming\JDeveloper\system12.2.1.0.42.151011.0031\DefaultDomain' (on Windows).

Within the Domain folder you'll find the following folder structure: 'store\gmds\mds-soa\soa-infra'. This is apparently the folder that is used for the MDS for SOA and BPM Suite. Within there you'll find the folders:
  • deployed-composites
  • soa
In there you can create a symbolic link (in Windows, a Junction) named 'apps', pointing to the folder in your SVN working copy that holds the 'oramds://apps'-related content. In Windows this is done like:
C:\...\DefaultDomain\store\gmds\mds-soa\soa-infra>mklink /J apps y:\Generiek\MDS\trunk\SOA\soa-infra\apps
The /J makes it a 'hard symbolic link' or a 'Junction'. Under Linux you would use 'ln -s ...'.
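For example, with an illustrative Linux path for the same working copy (using $DOMAIN_HOME as shorthand for the DefaultDomain folder):

cd $DOMAIN_HOME/store/gmds/mds-soa/soa-infra
ln -s /home/oracle/svn/Generiek/MDS/trunk/SOA/soa-infra/apps apps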

You'll get a response like:
C:\...\DefaultDomain\store\gmds\mds-soa\soa-infra>Junction created for apps <<===>> y:\Generiek\MDS\trunk\SOA\soa-infra\apps
When you perform a dir you'll see:

c:\Data\JDeveloper\SOA\system12.2.1.0.42.151011.0031\DefaultDomain\store\gmds\mds-soa\soa-infra>dir
Volume in drive C is System
Volume Serial Number is E257-B299

Directory of c:\Data\JDeveloper\SOA\system12.2.1.0.42.151011.0031\DefaultDomain\store\gmds\mds-soa\soa-infra

02-02-2016 12:06 <DIR> .
02-02-2016 12:06 <DIR> ..
02-02-2016 12:06 <JUNCTION> apps [y:\Generiek\MDS\trunk\SOA\soa-infra\apps]
02-02-2016 12:07 <DIR> deployed-composites
02-02-2016 11:23 <DIR> soa
0 File(s) 0 bytes
5 Dir(s) 18.475.872.256 bytes free
You can just CD to the apps folder and do a DIR there; it will then list the contents of the SVN working copy folder of your MDS, but from within your DefaultDomain.

Just refire your Integrated Domain's DefaultServer and you should be able to deploy your composites that depend on the MDS.

Partitioned Bitmap Join

Jonathan Lewis - Tue, 2016-02-02 02:32

If you don’t want to read the story, the summary for this article is:

If you create bitmap join indexes on a partitioned table and you use partition exchanges to load data into the table, then make sure you create the bitmap join indexes on the loading tables in exactly the same order as you created them on the partitioned table, or the exchange will fail with the (truthful but not quite complete) error: ORA-14098: index mismatch for tables in ALTER TABLE EXCHANGE PARTITION.

My story starts with this OTN posting from John Hall where he found after a year of successful batch loading one of his partition exchanges was raising error 14098. After an exchange of ideas, user rp0428 came up with a query against sys.jijoin$ (one of the tables behind bitmap join indexes) that allowed John Hall to see that the indexes on the exchange table had been created in a different order from that of the partitioned table. I did a quick test to see if this might be relevant (it shouldn’t be, it isn’t with “normal” indexes or function-based indexes, or virtual columns) and didn’t manage to reproduce the problem with two dimension tables and two bitmap join indexes.

Fortunately John didn’t take my word for it and tested the idea on a clone of the production system – and found that the order of creation did matter. His system, however, had 9 dimension tables and 33 bitmap join indexes – which shouldn’t have made any difference in principle, but maybe it was something to do with having several indexes on the same table, maybe it was something to do with having far more tables or far more indexes than I had. So I built a larger test case with 6 dimension tables and 6 indexes per table – and reproduced the problem.

Then I started cutting back to see where the problem appeared, and found that all it took was one dimension with two indexes, or two dimensions with one index each – whatever I had done in my “quick test” I had clearly done it too quickly and done something wrong. (Unfortunately I had overwritten most of the code from the original quick test while building the larger test, so I couldn’t go back and see where the error was.)

Here, then, is the minimal test case that I finally ran to demonstrate that switching the order of index creation on the exchange table causes the exchange to fail:


drop table pt_range purge;
drop table t1 purge;
drop table dim_1 purge;
drop table dim_2 purge;

prompt  =================
prompt  Partitioned table
prompt  =================

create table pt_range (
        id,
        grp1,
        grp2,
        padding
)
nologging
partition by range(id) (
        partition p2001 values less than (2001),
        partition p4001 values less than (4001),
        partition p6001 values less than (6001),
        partition p8001 values less than (8001)
)
as
select
        rownum                          id,
        trunc(rownum/100)               grp1,
        trunc(rownum/100)               grp2,
        rpad('x',100)                   padding
from
        all_objects
where 
        rownum <= 8000
;

prompt  ================================================
prompt  Exchange table - loaded to match partition p8001
prompt  ================================================

alter table pt_range 
add constraint pt_pk primary key (id) using index local;

create table t1 (
        id,
        grp1,
        grp2,
        padding
)
as 
select
        rownum + 6000                   id,
        trunc(rownum/100)               grp1,
        trunc(rownum/100)               grp2,
        rpad('x',100)                   padding
from
        all_objects
where 
        rownum <= 2000
;

alter table t1
add constraint t1_pk primary key (id);

execute dbms_stats.gather_table_stats(user,'pt_range')
execute dbms_stats.gather_table_stats(user,'t1')

prompt  ================
prompt  dimension tables
prompt  ================

create table dim_1 
as 
select distinct 
        grp1, 
        cast('A'||grp1 as varchar2(3)) agrp1,
        cast('B'||grp1 as varchar2(3)) bgrp1
from
        t1
;

create table dim_2 as select * from dim_1;

prompt  ===============================
prompt  Primary keys required for BMJIs
prompt  ===============================

alter table dim_1 add constraint d1_pk primary key (grp1);
alter table dim_2 add constraint d2_pk primary key (grp1);

execute dbms_stats.gather_table_stats(user,'dim_1')
execute dbms_stats.gather_table_stats(user,'dim_2')

prompt  ============================
prompt  Creating bitmap join indexes
prompt  ============================

create bitmap index pt_1a on pt_range(d1.agrp1) from pt_range pt, dim_1 d1 where d1.grp1 = pt.grp1 local ;
create bitmap index pt_2a on pt_range(d2.agrp1) from pt_range pt, dim_2 d2 where d2.grp1 = pt.grp2 local ;

prompt  ====================================================
prompt  Pick your index creation order on the exchange table
prompt  ====================================================

create bitmap index t1_1a on t1(d1.agrp1) from t1, dim_1 d1 where d1.grp1 = t1.grp1 ;
create bitmap index t1_2a on t1(d2.agrp1) from t1, dim_2 d2 where d2.grp1 = t1.grp2 ;
-- create bitmap index t1_1a on t1(d1.agrp1) from t1, dim_1 d1 where d1.grp1 = t1.grp1 ;

prompt  ==================
prompt  Exchanging (maybe)
prompt  ==================

alter table pt_range
        exchange partition p8001 with table t1
        including indexes
        without validation
;

I’ve got the same create statement twice for one of the bitmap join indexes – as it stands the indexes will be created in the right order and the exchange will work; if you comment out the first t1_1a create and uncomment the second the exchange will fail. (If you comment out the ‘including indexes’ then the exchange will succeed irrespective of the order of index creation, but that rather defeats the point of being able to exchange partitions.)
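As a side note: you don't need access to sys.jijoin$ to check the creation order on a system like this. Since object_id broadly reflects creation order, a minimal sanity check against the demo objects might look like this (a sketch, using the index names from the script above):

select  object_name, object_id, created
from    user_objects
where   object_type = 'INDEX'
and     object_name in ('PT_1A', 'PT_2A', 'T1_1A', 'T1_2A')
order by
        object_id
;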

I’ve reproduced the problem in 12.1.0.2, 11.2.0.4 and 10.2.0.5.

Footnote: running an extended trace didn’t help me work out how Oracle is detecting the mismatch, presumably it’s something that gets into the dictionary cache in a general “load the index definition” step; but it did show me that (in the “without validation” case) the code seems to check the correctness of the exchange table’s primary key data BEFORE checking whether the indexes match properly.


Pareto Rocks!

Floyd Teter - Mon, 2016-02-01 17:55
I'm a big fan of Vilfredo Pareto's work.  He observed the world around him and developed some very simple concepts to explain what he observed.  Pareto was ahead of his time.

Some of Dr. Pareto's work is based on the Pareto Principle:  the idea that 80% of effects come from 20% of causes.  In the real world, we continually see examples of the Pareto Principle.

I've been conducting one of my informal surveys lately...talking to lots of partners, customers and industry analysts about their experiences in implementing SaaS and the way it fits their business.  And I've found that, almost unanimously, the experience falls in line with the Pareto Principle.  Some sources vary the numbers a bit, but it generally plays out as follows:

  • Using the same SaaS footprint, 60% of any SaaS configuration is the same across all industries.  The configuration values and the data values may be different, but the overall scheme is the same.
  • Add another 20% for SaaS customers within the same vertical (healthcare, retail, higher education, public sector, etc.).
  • Only about 20% of the configuration, business processes, and reporting/business intelligence is unique for the same SaaS footprint in the same industry sector between one customer and another.
Many of the customers I've spoken to in this context immediately place the qualifier: "but our business is different."  And they're right. In fact, for the sake of profitability and survival, their business must be different.  Every business needs differentiators.  But it's different within the scope of that 20% mentioned above.  That other 80% is common with everyone in their business sector.  And, when questioned, most customers agree with that idea.

This is what makes the business processes baked into SaaS so important; any business wants to burn their calories of effort on the differentiators rather than the processes that simply represent "the cost of being in business."  SaaS offers the opportunity to standardize the common 80%, allowing customers to focus their efforts on the unique 20%.  Pareto had it right.






Putting SQL in the corner with Javascript in SQLCL

Kris Rice - Mon, 2016-02-01 16:22
Here's a pretty small javascript file that allows for running sql in the background of your current sqlcl session. This is a trivial example of a sql script that has a sleep in it to simulate something taking time. It also prints the SID to show it's a different connection than the base.

select 'DB SID ' ||sys_context('USERENV','SID') || ' is going to sleep' bye from dual;
begin

[Free Webinar] Learn Weblogic from Oracle ACE Atul Kumar & Oracle Expert Pavan

Online Apps DBA - Mon, 2016-02-01 11:14
This entry is part 4 of 6 in the series WebLogic Server


Nowadays, enterprises are using WebLogic Server as it provides all the essential features to build and support Java EE applications.

And for that reason, more people are getting attracted to learning WebLogic, but the main questions remain: where to go, and where to start?

And if you are one of those people who are still not sure what WebLogic really is, or why you should learn WebLogic, then we have good news for you.

On Saturday, February 6th at 10:00 PM IST (4:30 PM GMT, 8:30 AM PST), Oracle ACE Atul Kumar and Oracle expert Pavan will be discussing WebLogic, and this is where you can clear all of your doubts related to WebLogic. Grab this opportunity by clicking the button below to register for the webinar.

Click Here to Register For Free Webinar

We have a limited number of seats for a limited time, so grab yours before they're gone!

The session will also include a live question-and-answer section in which you can ask questions to your heart’s content.

And just for a quick start: WebLogic Server was first developed by BEA Systems, which was acquired by Oracle in 2008. WebLogic is middle-tier server software providing an Online Transaction Processing (OLTP) platform, and it is mandatory in EBS 12.2.

For those who are interested in learning WebLogic from scratch, we also provide WebLogic training, where you get a dedicated machine to practice and hone your skills, 24x7 technical support, and a 100% money-back guarantee if you are not satisfied.

Don’t forget to share this post if you think it could be useful to others, and subscribe to this blog for more such FREE webinars and useful content related to Oracle.

The post [Free Webinar] Learn Weblogic from Oracle ACE Atul Kumar & Oracle Expert Pavan appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Log Buffer #458: A Carnival of the Vanities for DBAs

Pythian Group - Mon, 2016-02-01 10:38

This Log Buffer Edition covers various useful tips and tricks from blogs for Oracle, SQL Server and MySQL.

Oracle:

  • pstack (or thread stack) for Windows to diagnose Firefox high CPU usage
  • With the ever-changing browser landscape, we needed to make some tough decisions as to which browsers and versions are going to be deemed “supported” for Oracle Application Express.  There isn’t enough time and money to support all browsers and all versions, each with different bugs and varying levels of support of standards.
  • Are you effectively using Java SE 8 streams for data processing? Introduced in Java 8, streams allow you to process data in a declarative way and leverage multi-core architectures without writing multithreaded code.
  • If you are upgrading but would like to increase or decrease the number of data sources you must do so when installing the latest version.
  • When talking about BPM security, you need to know about a certain set of information and where that information will come from.

SQL Server:

  • Having fun with PARSENAME (SQL Spackle)
  • Time and Space: How to Monitor Drive Space in SQL Server
  • Application Security with Azure Key Vault
  • Declarative SQL: Using CHECK() & DEFAULT
  • Microsoft SQL Server 2016 Public Preview Boosts Database Security

MySQL:

  • Have you heard the news? As of MySQL 5.7.8, MySQL includes a new JavaScript Object Notation (JSON) data type that enables more efficient access to JSON-encoded data.
  • Oracle MySQL 5.7 Database Nears General Availability
  • Most of you know, that it is possible to synchronize MySQL and MariaDB servers using replication. But with the latest releases, it is also possible to use more than just two servers as a multi-master setup.
  • Transport Layer Security (TLS, also often referred to as SSL) is an important component of a secure MySQL deployment, but the complexities of properly generating the necessary key material and configuring the server dissuaded many users from completing this task.
  • Managing MySQL Replication for High Availability

 

Learn more about Pythian’s expertise in Oracle, SQL Server & MySQL.

Categories: DBA Blogs

February 24, 2016: Solairus Aviation―Oracle HCM Cloud Customer Forum

Linda Fishman Hoyle - Mon, 2016-02-01 10:34
Join us for an Oracle HCM Cloud Customer Forum call on Wednesday, February 24, 2016.

You will hear Mark Dennen, CFO from Solairus Aviation, talk about the company’s need for an integrated set of applications that was both easy to use and powerful enough to tie HCM and ERP business functions together.

Linda Fishman, Senior Director, Oracle HCM Cloud, will host this call and discuss with Mr. Dennen why the company chose Oracle HCM Cloud, the details of its selection process for new HR software, its implementation experience with Oracle HCM Cloud, and the benefits of its new modern HR system.

Solairus Aviation is a US-based, full-service private aircraft management and charter company.

Register now to attend the live Forum on Wednesday, February 24, 2016, at 9:00 a.m. Pacific Time and learn more about Solairus Aviation’s experience with Oracle HCM Cloud.

LearningStudio and OpenClass End-Of-Life: Pearson is getting out of LMS market

Michael Feldstein - Mon, 2016-02-01 08:20

By Phil Hill

Pearson has notified customers that LearningStudio will be shut down as a standalone LMS over the next 2-3 years. Created from Pearson’s acquisitions of eCollege and Fronter, LearningStudio has been targeted primarily at fully-online programs and associated hybrid programs – not at simple augmentation of face-to-face classes. The customer base has mostly included for-profit institutions as well as not-for-profit programs that are often packaged with an online service provider model (e.g. Embanet customers). As of this year, LearningStudio has approximately 110 customers with 1.2 million unique student enrollments.

This decision is not one isolated to LearningStudio, as the end-of-life notification caps a series of moves by Pearson to get out of the LMS market in general.

Less than a year ago I wrote a post about Texas Christian University claiming that Pearson was “getting out of the LMS market”, although during research for that story the administrator requested a change in the campus newspaper.

“Pearson is out of the learning management system game,” Hughes said. “We need something to evolve with the Academy of Tomorrow and where we’re moving to at TCU.”

Hughes said Pearson withdrew from the LMS search process for TCU but remains an LMS provider.

From 2007 through 2012, Pearson aggressively moved into the LMS market. In 2007 the company acquired eCollege for $477 million, taking it private. In 2008 Pearson acquired the European LMS provider Fronter. In 2009 Pearson announced LearningStudio as the rebranded combination of eCollege and Fronter, predominantly from eCollege. Then the big PR move came in 2011 with the splashy announcement of OpenClass, a “completely free” and “amazing” LMS that dominated the discussion at EDUCAUSE that year, partially due to “misleading headlines” implying a partnership with Google.

In the past year, however, Pearson has reversed all of these strategic moves. Announced last September, OpenClass will no longer be available as of January 2018. In November Pearson sold Fronter to itsLearning. And now LearningStudio (and in effect eCollege) is being retired. To be more precise, LearningStudio is being retired as a standalone LMS. What is not publicized is that LearningStudio internally provides the infrastructure and platform support for Pearson’s MyLabs & Mastering courseware. That internal platform will remain, but the external product will go away.

For this story Michael and I interviewed Curtiss Barnes, Managing Director of Technology Products for Pearson Global Higher Education.[1] Barnes confirmed the story and said that all LearningStudio customers have been notified, and that there are no plans for a public announcement or press release. Barnes said the decision to get out of the LMS category was based on Pearson’s continuing efforts to reorganize and streamline the diversified company, and being competitive in the LMS market just doesn’t help meet corporate goals.

So what platforms and technology products do meet corporate goals? Barnes said that Pearson does courseware really well, with over 12 million students on these platforms overall and approximately 2 million per day. He sees large distinctions between content-agnostic LMS solutions and courseware. Courseware might require certain features that overlap LMS features, but the fundamentals of what’s being delivered go well beyond a content management store, calendaring, and other LMS basics to include instrumentation of content and science-based learning design. Barnes said that learning design is the key element they’re looking for as a company.

The front page for OpenClass now describes Pearson’s view on LMS and courseware markets.

On January 1, 2018, OpenClass will no longer be available to faculty, students, or administrators, and as of today, no new accounts will be created. You will be able to sign in and access OpenClass and we will maintain SLAs until January 1, 2018. We will also continue to provide Community Forum support and OpenClass Knowledge Base until this date.

At Pearson, we are relentlessly committed to driving learner outcomes and we see a bigger opportunity to provide value to our customers via programs such as MyLab & Mastering and REVEL, and through our professional services, such as curriculum design and online program management.

While the LMS will endure as an important piece of academic infrastructure, we believe our learning applications and services are truly “where the learning happens.” In short, withdrawing from the crowded LMS market allows us to concentrate on areas where we can make the biggest measurable impact on student learning outcomes.

Pearson has told customers that they still have engineers and operations teams to fully support continuing operations and mitigate bugs or issues affecting LearningStudio, but they are not developing new features. LearningStudio will remain available for customers through their existing contracts, but the earliest loss of support for any customer will be December 31, 2017 to allow customers whose contracts expire before then more time to select a different LMS and migrate their courses.

Michael and I pressed during the interview to see if Pearson is favoring one solution over another in their discussions with customers, but Barnes said that Pearson has decided to remain neutral. Customers are not being given recommendations on alternate solutions.

This move out of the LMS market by Pearson has a parallel with last year’s sale of PowerSchool, a Student Information System for the K-12 market. Pearson acquired PowerSchool from Apple in 2006, but it no longer made sense to try and be competitive in the SIS market.

Like the forced migration caused by WebCT and ANGEL end-of-life notices, there will now be more than 100 LMS changes triggered by this announcement. While the for-profit sector has taken big hits in enrollments over the past 3-4 years, there are still some very large online programs that now have to select a new LMS.

This has been an eventful year for the LMS market already, and it’s only one month old. Expect to see more movement and changes.

  1. Disclosure: Pearson is a client of MindWires Consulting on a separate project.

The post LearningStudio and OpenClass End-Of-Life: Pearson is getting out of LMS market appeared first on e-Literate.

Corporate Social Responsibility (Where Can We Serve?)

Rittman Mead Consulting - Mon, 2016-02-01 03:00

At Rittman Mead, we believe that people are more important than profit.
This manifests itself in two ways. First, we want to impact the world beyond data and analytics, and secondly, we want our employees to be able to contribute to organizations they believe are doing impactful work.

This year, we’ve put a Community Service requirement in place for all of our full-time employees.

We’ll each spend 40 hours this year serving with various nonprofits. Most of our team are already involved with some amazing organizations, and this “requirement” allows us to not only be involved after hours and on the weekends, but even during normal business hours.

We want to highlight a few team members and show how they’ve been using their Community Service hours for good.

Beth deSousa
Beth is our Finance Manager and she has been serving with Sawnee Women’s Club. Most of her work has been around getting sponsorship and donations for their annual silent auction. She’s also helped with upgrading a garden at the local high school, collecting toys and gift wrap for their Holiday House, and collecting prom dresses and accessories for girls in need.

Charles Elliott
Charles is the Managing Director of North America. He recently ran in the Dopey Challenge down at Disney World which means he ran a 5k, 10k, half marathon, and full marathon in 4 days. He did the run to raise funds for Autism Speaks. Charles was recognized as the third largest fundraiser for Autism Speaks at the Dopey Challenge!

David Huey
David is our U.S. Business Development rep. He recently served with the nonprofit Hungry For A Day for their Thanksgiving Outreach. He flew up to Detroit the week of Thanksgiving and helped serve over 8,000 Thanksgiving dinners to the homeless and needy in inner city Detroit.

Andy Rocha

Andy is our Consulting Manager. Andy is a regular volunteer and instructor with Vine City Code Crew. VC3 works with inner city youth in Atlanta to teach them about electronics and coding.

Pete Tamisin

Pete is a Principal Consultant. He is also involved as a volunteer and instructor with the aforementioned Code Crew. Pete has taught a course using Makey Makey electronic kits for VC3.

This is just a sample of what our team has done, but engaging in our local communities is something that Rittman Mead is striving to make an integral piece of our corporate DNA.
We can’t wait to show you how we’ve left our communities better in 2016!

The post Corporate Social Responsibility (Where Can We Serve?) appeared first on Rittman Mead Consulting.

Categories: BI & Warehousing

Trace Files -- 11 : Tracing the Optimization of an SQL Statement

Hemant K Chitale - Sun, 2016-01-31 07:53
So far, the previous examples have been on tracing the Execution of SQL statements and/or the Execution Plan used.

But what if you want to trace the Optimization of an SQL statement, i.e. identify how the Optimizer determined an "optimal" execution plan?

Note : Pre-11g methods involved Event 10053.   But as with Event 10046, I prefer to use methods where I don't have to use an Event Number but a Name.  So, here I am not demonstrating the Event 10053 method itself.

Let's assume that there is a particular SQL identified as SQL_ID='b086mzzp82x7w' for which we need to know not just the Execution Plan but also how Oracle arrived at the Execution Plan.

Here's one way :

SQL> alter session set events 'trace[rdbms.SQL_Optimizer.*][sql:b086mzzp82x7w]';

Session altered.

SQL> select 'abc' from dual;

'AB
---
abc

SQL> select count(*) from small_insert;

COUNT(*)
----------
4

SQL> select count(*) from all_objects_many_list
2 where created > sysdate-365;

COUNT(*)
----------
25548

SQL> select count(*) from all_objects_many_list;

COUNT(*)
----------
7254201

SQL> select value from v$diag_info where name = 'Default Trace File';

VALUE
------------------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_3102.trc

SQL>

I have multiple SQLs executed in the session but am interested in the Optimization of only 1 SQL.  Note how the specific SQL_ID is specified in the ALTER SESSION SET EVENTS command.

The resultant trace file is very long, with a listing of all the instance/session parameters (hidden and public), all the Bug Fixes, and the costing done for the SQL. The trace file captures only the SQL of interest; all the other SQLs in the same session are *not* in the trace file.

Here is an extract from the trace file :

Registered qb: SEL$1 0x2173aea0 (PARSER)
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=4 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

SPM: statement not found in SMB

**************************
Automatic degree of parallelism (ADOP)
**************************
Automatic degree of parallelism is disabled: Parameter.

PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
OPTIMIZER INFORMATION

******************************************
----- Current SQL Statement for this session (sql_id=b086mzzp82x7w) -----
select count(*) from all_objects_many_list
where created > sysdate-365
*******************************************
Legend
The following abbreviations are used by optimizer trace.
CBQT - cost-based query transformation
JPPD - join predicate push-down
OJPPD - old-style (non-cost-based) JPPD
FPD - filter push-down
PM - predicate move-around
CVM - complex view merging
SPJ - select-project-join
SJC - set join conversion
SU - subquery unnesting
OBYE - order by elimination
OST - old style star transformation
ST - new (cbqt) star transformation
CNT - count(col) to count(*) transformation
JE - Join Elimination
JF - join factorization
CBY - connect by
SLP - select list pruning
DP - distinct placement
qb - query block
LB - leaf blocks
DK - distinct keys
LB/K - average number of leaf blocks per key
DB/K - average number of data blocks per key
CLUF - clustering factor
NDV - number of distinct values
Resp - response cost
Card - cardinality
Resc - resource cost
NL - nested loops (join)
SM - sort merge (join)
HA - hash (join)
CPUSPEED - CPU Speed
IOTFRSPEED - I/O transfer speed
IOSEEKTIM - I/O seek time
SREADTIM - average single block read time
MREADTIM - average multiblock read time
MBRC - average multiblock read count
MAXTHR - maximum I/O system throughput
SLAVETHR - average slave I/O throughput
dmeth - distribution method
1: no partitioning required
2: value partitioned
4: right is random (round-robin)
128: left is random (round-robin)
8: broadcast right and partition left
16: broadcast left and partition right
32: partition left using partitioning of right
64: partition right using partitioning of left
256: run the join in serial
0: invalid distribution method
sel - selectivity
ptn - partition
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
*************************************
PARAMETERS WITH ALTERED VALUES
Compilation Environment Dump
Bug Fix Control Environment


*************************************
PARAMETERS WITH DEFAULT VALUES
******************************
Compilation Environment Dump
........... long list of parameters and their values .........
..............................................................
.. followed by ...
........... long list of Bug Fixes that are enabled ..........
..............................................................


***************************************
PARAMETERS IN OPT_PARAM HINT
****************************
***************************************
Column Usage Monitoring is ON: tracking level = 1
***************************************

Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for b086mzzp82x7w.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
OBYE: Considering Order-by Elimination from view SEL$1 (#0)
***************************
Order-by elimination (OBYE)
***************************
OBYE: OBYE bypassed: no order by to eliminate.
CVM: Considering view merge in query block SEL$1 (#0)
OJE: Begin: find best directive for query block SEL$1 (#0)
OJE: End: finding best directive for query block SEL$1 (#0)
CNT: Considering count(col) to count(*) on query block SEL$1 (#0)
*************************
Count(col) to Count(*) (CNT)
*************************
CNT: COUNT() to COUNT(*) not done.
query block SEL$1 (#0) unchanged
Considering Query Transformations on query block SEL$1 (#0)
**************************
Query transformations (QT)
**************************
JF: Checking validity of join factorization for query block SEL$1 (#0)
JF: Bypassed: not a UNION or UNION-ALL query block.
ST: not valid since star transformation parameter is FALSE
TE: Checking validity of table expansion for query block SEL$1 (#0)
TE: Bypassed: No partitioned table in query block.
CBQT bypassed for query block SEL$1 (#0): no complex view, sub-queries or UNION (ALL) queries.
CBQT: Validity checks failed for b086mzzp82x7w.
CSE: Considering common sub-expression elimination in query block SEL$1 (#0)
*************************
Common Subexpression elimination (CSE)
*************************
CSE: CSE not performed on query block SEL$1 (#0).
SU: Considering subquery unnesting in query block SEL$1 (#0)
********************
Subquery Unnest (SU)
********************
SJC: Considering set-join conversion in query block SEL$1 (#0)
*************************
Set-Join Conversion (SJC)
*************************
SJC: not performed
PM: Considering predicate move-around in query block SEL$1 (#0)
**************************
Predicate Move-Around (PM)
**************************
PM: PM bypassed: Outer query contains no views.
PM: PM bypassed: Outer query contains no views.
query block SEL$1 (#0) unchanged
FPD: Considering simple filter push in query block SEL$1 (#0)
"ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
try to generate transitive predicate from check constraints for query block SEL$1 (#0)
finally: "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365

apadrv-start sqlid=12691376846034531580
:
call(in-use=2008, alloc=16344), compile(in-use=56240, alloc=58632), execution(in-use=2504, alloc=4032)

*******************************************
Peeked values of the binds in SQL statement
*******************************************

Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(*)" FROM "HEMANT"."ALL_OBJECTS_MANY_LIST" "ALL_OBJECTS_MANY_LIST" WHERE "ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
kkoqbc: optimizing query block SEL$1 (#0)

:
call(in-use=2056, alloc=16344), compile(in-use=57184, alloc=58632), execution(in-use=2504, alloc=4032)

kkoqbc-subheap (create addr=0x7f5f216ff9d0)
****************
QUERY BLOCK TEXT
****************
select count(*) from all_objects_many_list
where created > sysdate-365
---------------------
QUERY BLOCK SIGNATURE
---------------------
signature (optimizer): qb_name=SEL$1 nbfros=1 flg=0
fro(0): flg=0 objn=35014 hint_alias="ALL_OBJECTS_MANY_LIST"@"SEL$1"

-----------------------------
SYSTEM STATISTICS INFORMATION
-----------------------------
Using NOWORKLOAD Stats
CPUSPEEDNW: 937 millions instructions/sec (default is 100)
IOTFRSPEED: 4096 bytes per millisecond (default is 4096)
IOSEEKTIM: 10 milliseconds (default is 10)
MBRC: NO VALUE blocks (default is 8)

***************************************
BASE STATISTICAL INFORMATION
***********************
Table Stats::
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
#Rows: 7197952 #Blks: 98279 AvgRowLen: 93.00 ChainCnt: 0.00
Index Stats::
Index: ALL_OBJ_M_L_CRTD_NDX Col#: 7
LVLS: 2 #LB: 19093 #DK: 1232 LB/K: 15.00 DB/K: 351.00 CLUF: 432893.00
Access path analysis for ALL_OBJECTS_MANY_LIST
***************************************
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]
Column (#7): CREATED(
AvgLen: 8 NDV: 1232 Nulls: 0 Density: 0.000812 Min: 2455803 Max: 2457343
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
Card: Original: 7197952.000000 Rounded: 1346076 Computed: 1346075.60 Non Adjusted: 1346075.60
Access Path: TableScan
Cost: 27174.11 Resp: 27174.11 Degree: 0
Cost_io: 26619.00 Cost_cpu: 6242311042
Resp_io: 26619.00 Resp_cpu: 6242311042
Access Path: index (index (FFS))
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 5173.00 resc_cpu: 4598699894
ix_sel: 0.000000 ix_sel_with_filters: 1.000000
Access Path: index (FFS)
Cost: 5581.95 Resp: 5581.95 Degree: 1
Cost_io: 5173.00 Cost_cpu: 4598699894
Resp_io: 5173.00 Resp_cpu: 4598699894
Access Path: index (IndexOnly)
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 3573.00 resc_cpu: 294660105
ix_sel: 0.187008 ix_sel_with_filters: 0.187008
Cost: 3599.20 Resp: 3599.20 Degree: 1
Best:: AccessPath: IndexRange
Index: ALL_OBJ_M_L_CRTD_NDX
Cost: 3599.20 Degree: 1 Resp: 3599.20 Card: 1346075.60 Bytes: 0

***************************************


OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]#0
***********************
Best so far: Table#: 0 cost: 3599.2033 card: 1346075.6041 bytes: 10768608
***********************
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
Enumerating distribution method (advanced)

Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
id=0 frofkks[i] (index start key) predicate="ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
Final cost for query block SEL$1 (#0) - All Rows Plan:
Best join order: 1
Cost: 3599.2033 Degree: 1 Card: 1346076.0000 Bytes: 10768608
Resc: 3599.2033 Resc_io: 3573.0000 Resc_cpu: 294660105
Resp: 3599.2033 Resp_io: 3573.0000 Resc_cpu: 294660105
kkoqbc-subheap (delete addr=0x7f5f216ff9d0, in-use=26384, alloc=32840)
kkoqbc-end:
:
call(in-use=8664, alloc=49288), compile(in-use=59704, alloc=62776), execution(in-use=2504, alloc=4032)

kkoqbc: finish optimizing query block SEL$1 (#0)
apadrv-end
:
call(in-use=8664, alloc=49288), compile(in-use=60616, alloc=62776), execution(in-use=2504, alloc=4032)


Starting SQL statement dump

user_id=87 user_name=HEMANT module=SQL*Plus action=
sql_id=b086mzzp82x7w plan_hash_value=1689651126 problem_type=3
----- Current SQL Statement for this session (sql_id=b086mzzp82x7w) -----
select count(*) from all_objects_many_list
where created > sysdate-365
sql_text_length=71
sql=select count(*) from all_objects_many_list
where created > sysdate-365
----- Explain Plan Dump -----
----- Plan Table -----

============
Plan Table
============
-------------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3599 | |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX RANGE SCAN | ALL_OBJ_M_L_CRTD_NDX| 1315K | 10M | 3599 | 00:00:44 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("CREATED">SYSDATE@!-365)

Content of other_xml column
===========================
db_version : 11.2.0.4
parse_schema : HEMANT
plan_hash : 1689651126
plan_hash_2 : 1742296710
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
DB_VERSION('11.2.0.4')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX(@"SEL$1" "ALL_OBJECTS_MANY_LIST"@"SEL$1" ("ALL_OBJECTS_MANY_LIST"."CREATED"))
END_OUTLINE_DATA
*/

Optimizer state dump:
Compilation Environment Dump
optimizer_mode_hinted = false
optimizer_features_hinted = 0.0.0
........... long list of parameters and their values .........
..............................................................
.. followed by ...
........... long list of Bug Fixes that are enabled ..........
..............................................................

Query Block Registry:
SEL$1 0x2173aea0 (PARSER) [FINAL]

:
call(in-use=11728, alloc=49288), compile(in-use=90576, alloc=152120), execution(in-use=6440, alloc=8088)

End of Optimizer State Dump
Dumping Hints
=============
====================== END SQL Statement Dump ======================

The trace file captured only the SQL of interest. It also shows all the instance/session parameters and Bug Fixes that are relevant (these are very long lists, so I have not reproduced them in their entirety).
Note : The listing of parameters and Bug Fixes is very important: if you get different execution plans in two different databases, you must verify the parameters and bug fixes and ensure that any differences between them are not relevant.

From the trace file, we can determine that this is the Execution Plan chosen :
-------------------------------------------------+-----------------------------------+
| Id | Operation | Name | Rows | Bytes | Cost | Time |
-------------------------------------------------+-----------------------------------+
| 0 | SELECT STATEMENT | | | | 3599 | |
| 1 | SORT AGGREGATE | | 1 | 8 | | |
| 2 | INDEX RANGE SCAN | ALL_OBJ_M_L_CRTD_NDX| 1315K | 10M | 3599 | 00:00:44 |
-------------------------------------------------+-----------------------------------+
Predicate Information:
----------------------
2 - access("CREATED">SYSDATE@!-365)

Content of other_xml column
===========================
db_version : 11.2.0.4
parse_schema : HEMANT
plan_hash : 1689651126
plan_hash_2 : 1742296710
Outline Data:
/*+
BEGIN_OUTLINE_DATA
IGNORE_OPTIM_EMBEDDED_HINTS
OPTIMIZER_FEATURES_ENABLE('11.2.0.4')
DB_VERSION('11.2.0.4')
ALL_ROWS
OUTLINE_LEAF(@"SEL$1")
INDEX(@"SEL$1" "ALL_OBJECTS_MANY_LIST"@"SEL$1" ("ALL_OBJECTS_MANY_LIST"."CREATED"))
END_OUTLINE_DATA
*/

The computation of Cost appears here :
SINGLE TABLE ACCESS PATH
Single Table Cardinality Estimation for ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]
Column (#7): CREATED(
AvgLen: 8 NDV: 1232 Nulls: 0 Density: 0.000812 Min: 2455803 Max: 2457343
Table: ALL_OBJECTS_MANY_LIST Alias: ALL_OBJECTS_MANY_LIST
Card: Original: 7197952.000000 Rounded: 1346076 Computed: 1346075.60 Non Adjusted: 1346075.60
Access Path: TableScan
Cost: 27174.11 Resp: 27174.11 Degree: 0
Cost_io: 26619.00 Cost_cpu: 6242311042
Resp_io: 26619.00 Resp_cpu: 6242311042
Access Path: index (index (FFS))
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 5173.00 resc_cpu: 4598699894
ix_sel: 0.000000 ix_sel_with_filters: 1.000000
Access Path: index (FFS)
Cost: 5581.95 Resp: 5581.95 Degree: 1
Cost_io: 5173.00 Cost_cpu: 4598699894
Resp_io: 5173.00 Resp_cpu: 4598699894
Access Path: index (IndexOnly)
Index: ALL_OBJ_M_L_CRTD_NDX
resc_io: 3573.00 resc_cpu: 294660105
ix_sel: 0.187008 ix_sel_with_filters: 0.187008
Cost: 3599.20 Resp: 3599.20 Degree: 1
Best:: AccessPath: IndexRange
Index: ALL_OBJ_M_L_CRTD_NDX
Cost: 3599.20 Degree: 1 Resp: 3599.20 Card: 1346075.60 Bytes: 0

Note how different Access Paths (Table Scan, Index FFS, IndexOnly, IndexRange) are all listed. The Best is shown as an IndexRange on the ALL_OBJ_M_L_CRTD_NDX index, with a Cost of 3599.20. More details appear here:
OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***************************************
Considering cardinality-based initial join order.
Permutations for Starting Table :0
Join order[1]: ALL_OBJECTS_MANY_LIST[ALL_OBJECTS_MANY_LIST]#0
***********************
Best so far: Table#: 0 cost: 3599.2033 card: 1346075.6041 bytes: 10768608
***********************
(newjo-stop-1) k:0, spcnt:0, perm:1, maxperm:2000

*********************************
Number of join permutations tried: 1
*********************************
Enumerating distribution method (advanced)

Trying or-Expansion on query block SEL$1 (#0)
Transfer Optimizer annotations for query block SEL$1 (#0)
id=0 frofkks[i] (index start key) predicate="ALL_OBJECTS_MANY_LIST"."CREATED">SYSDATE@!-365
Final cost for query block SEL$1 (#0) - All Rows Plan:
Best join order: 1
Cost: 3599.2033 Degree: 1 Card: 1346076.0000 Bytes: 10768608
Resc: 3599.2033 Resc_io: 3573.0000 Resc_cpu: 294660105
Resp: 3599.2033 Resp_io: 3573.0000 Resc_cpu: 294660105

This is a very detailed listing for an SQL query on a single Table (no joins) and a single candidate index.  Try running this with an SQL that joins two or more tables, each with more than one candidate Index, and see how complicated the Cost calculation becomes.


Note : To disable tracing in the session, I would run :

ALTER SESSION SET EVENTS 'trace[rdbms.SQL_Optimizer.*] off';


This sort of tracing can also be done with ALTER SYSTEM if you are not sure which session will be running the SQL_ID of interest and cannot interactively invoke the SQL from a private session.  
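For example (a sketch: the event specification is the same as in the session-level command earlier; remember to switch it off once the trace has been captured):

ALTER SYSTEM SET EVENTS 'trace[rdbms.SQL_Optimizer.*][sql:b086mzzp82x7w]';

-- ... and once the statement has been hard parsed and traced :

ALTER SYSTEM SET EVENTS 'trace[rdbms.SQL_Optimizer.*] off';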



Categories: DBA Blogs

Multisessioning with Python

Gary Myers - Sun, 2016-01-31 00:27
I'll admit that I pretty constantly have at least one window open, either into SQL*Plus or at the command line, ready to run a deployment script. But there are times when it is worth taking a step beyond.

One problem with the architecture of most SQL clients is they connect to a database, send off a SQL statement and do nothing until the database responds back with an answer. That's a great model when it takes no more than a second or two to get the response. It is cumbersome when the statement can take minutes to complete. Complex clients, like SQL Developer, allow the user to have multiple sessions open, even against a single schema if you use "unshared" worksheets. But they don't co-ordinate those sessions in any way.

Recently I needed to run a task in a number of schemas. Our code is all nicely packaged up, so all I needed to do was execute a procedure in each of the schemas, which we can do from a master schema with appropriate grants. However the task would take several minutes for each schema, and we had dozens of schemas to process. Running them consecutively in a single stream would have taken many hours, and we also didn't want to set them all off at once through the job scheduler because of the workload. Ideally we wanted a few running concurrently, with another starting whenever one finished. I haven't found an easy way to do that in the database scheduler.

Python, on the other hand, makes it so darn simple.
[Credit to Stackoverflow, of course]

proc connects to the database, executes the procedure (in this demo just setting the client info, with a delay so you can see it), and returns.
strs is a collection of parameters.
pool tells it how many concurrent operations to run. It then maps the strings to the pool, so A, B and C will start; then, as each finishes, D, E, F and G will be processed as threads become available.

In my case the collection was a list of the schema names, and the statement was more like 'begin ' + arg + '.task; end;' (see the sketch after the listing).

#!/usr/bin/python

import cx_Oracle
import time
from multiprocessing.dummy import Pool as ThreadPool

"""
Global variables
"""

db   = 'host:port/service'
user = 'scott'
pwd  = 'tiger'

def proc(arg):
   # Each worker thread opens its own connection
   con = cx_Oracle.connect(user + '/' + pwd + '@' + db)
   cur = con.cursor()
   # In this demo, just tag the session (visible in v$session.client_info)
   # and sleep so the concurrent sessions can be observed
   cur.execute('begin sys.dbms_application_info.set_client_info(:info); end;',
               {'info': arg})
   time.sleep(10)
   cur.close()
   con.close()
   return

strs = [
  'A',  'B',  'C',  'D',  'E',  'F',  'G'
  ]

# Make the pool of workers: at most three threads run concurrently
pool = ThreadPool(3)
# Pass the elements of the list to the procedure using the pool.
# In this case no values are returned, so results is a dummy.
results = pool.map(proc, strs)
# Close the pool and wait for the work to finish
pool.close()
pool.join()
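For the schema-task variant described above, only the parameter list and the executed statement change. A quick sketch (the schema names here are purely illustrative):

strs = ['HR_OWNER', 'SALES_OWNER', 'STOCK_OWNER']   # list of schema names (made up)

def proc(arg):
   con = cx_Oracle.connect(user + '/' + pwd + '@' + db)
   cur = con.cursor()
   # Execute the packaged task owned by the schema named in arg
   cur.execute('begin ' + arg + '.task; end;')
   cur.close()
   con.close()
   return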

PS. In this case, I used cx_Oracle as the glue between Python and the database.
The pyOraGeek blog is a good starting point for that.

If/when I get around to blogging again, I'll discuss jaydebeapi / jpype as an alternative. In short, cx_Oracle goes through the OCI client (eg Instant Client) and jaydebeapi takes the JVM / JDBC route.



Upgrading Oracle Apps (EBS) to 12.2 ? OPatch stopped with error “oracle.as.common.clone, 11.1.1.6.0, higher version 11.1.1.7.0 found”

Online Apps DBA - Sat, 2016-01-30 13:13
This entry is part 5 of 6 in the series Oracle EBS 12.2 Upgrade

This post is from our Oracle EBS Upgrade R12.2 training, where we cover the R12.2 architecture, an overview of R12.2 and the major features gained by upgrading, the different upgrade paths available to R12.2, best practices for an R12.2 upgrade, how to minimize downtime, and the difficulties/issues encountered while upgrading to R12.2.

One of the upgrade trainees from our previous batch hit the issue “oracle.as.common.clone, 11.1.1.6.0, higher version 11.1.1.7.0 found” while applying the latest AD and TXK patch ‘20642039’ in Oracle E-Business Suite 12.2.

Issue:

1. We applied the latest AD and TXK patch ‘20642039’ in Oracle E-Business Suite 12.2 as follows:

export ORACLE_HOME=/u01/oracle/PROD122/fs1/FMW_Home/oracle_common 

Note: Here /u01/oracle/PROD122 is the ORACLE_BASE where Oracle EBS 12.2 is installed; as the patch is for the oracle_common home in Fusion Middleware, we set ORACLE_HOME accordingly.

export PATH=/u01/oracle/PROD122/fs1/FMW_Home/oracle_common/OPatch:$PATH

cd $PATCH_TOP/20642039

opatch apply

OPatch then stopped with the messages below:

Applying interim patch ‘20642039’ to OH ‘/u01/oracle/PRD12238/fs1/FMW_Home/oracle_common’ 
Verifying environment and performing prerequisite checks… 
OPatch system modification phase did not start: 
Patch “20642039” is not needed since it has no fixes for this Oracle Home. Please see log file for details. 
Log file location: /u01/oracle/PRD12238/fs1/FMW_Home/oracle_common/cfgtoollogs/opatch/20642039_Mar_10_2010_17_22_28/apply2010-03-10_17-22-27PM_1.log

OPatch stopped on request.

2. We then looked into the log file at /u01/oracle/PRD12238/fs1/FMW_Home/oracle_common/cfgtoollogs/opatch/20642039_Mar_10_2010_17_22_28/apply2010-03-10_17-22-27PM_1.log, which showed the error messages below:

[Mar 13, 2010 1:09:00 AM]    ——————— Oracle Home discovery ———————
[Mar 13, 2010 1:09:00 AM]    OUI-67086:ApplySession applying interim patch ‘20642039’ to OH ‘/u01/oracle/PRD12238/fs1/FMW_Home/oracle_common’
[Mar 13, 2010 1:09:00 AM]    Applying interim patch ‘20642039’ to OH ‘/u01/oracle/PRD12238/fs1/FMW_Home/oracle_common’
[Mar 13, 2010 1:09:00 AM]    Starting to apply patch to local system at Sat Mar 13 01:09:00 GMT 2010
[Mar 13, 2010 1:09:00 AM]    Verifying environment and performing prerequisite checks…
[Mar 13, 2010 1:09:02 AM]    Start the Apply initScript at Sat Mar 13 01:09:02 GMT 2010
[Mar 13, 2010 1:09:02 AM]    Finish the Apply initScript at Sat Mar 13 01:09:02 GMT 2010
[Mar 13, 2010 1:09:02 AM]    ——————— Prerequisite for apply ———————
[Mar 13, 2010 1:09:02 AM]    Running prerequisite checks…
[Mar 13, 2010 1:09:02 AM]    Patch “20642039” is ignored as it is not a “Fusion Applications patch”.
[Mar 13, 2010 1:09:02 AM]    Check if patch “20642039” is a no-op patch.
[Mar 13, 2010 1:09:02 AM]    Found a higher component in OH inventory: oracle.as.common.clone, 11.1.1.6.0
[Mar 13, 2010 1:09:02 AM]    [ oracle.as.common.clone, 11.1.1.6.0, higher version 11.1.1.7.0 found. ]

Fix:

Since the version is already 11.1.1.7.0, this patch is not applicable. In this case there is another patch, mentioned in Doc ID 1903052.1: the latest AD-TXK codelevel has a dependency on Oracle Fusion Middleware.

So we applied patch 20756887, and it completed successfully.
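The steps were the same as for the failed patch, just with the new patch number. A sketch, assuming the same oracle_common ORACLE_HOME and OPatch settings shown earlier:

export ORACLE_HOME=/u01/oracle/PROD122/fs1/FMW_Home/oracle_common

export PATH=$ORACLE_HOME/OPatch:$PATH

cd $PATCH_TOP/20756887

opatch apply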

If you are applying Delta 6 to a system on a pre-Delta 5 AD-TXK codelevel, you must apply the Oracle Fusion Middleware patch before proceeding with the AD and TXK patches. If you do not apply this patch, application of the TXK patch will fail.

If you want to learn more about the Oracle EBS upgrade to R12.2, click the button below and register for our Oracle Upgrade 12.2 training (next batch starts on 20th February, 2016).

Note: We are so confident in our workshops that we provide a 100% money-back guarantee; in the unlikely case that you are not happy after the first session, just drop us a mail before the second session and we’ll refund the FULL amount (or ask any of the hundreds of happy trainees in our private Facebook Group).

Stay Tuned for more Information on Oracle Apps 12.2 Upgrade!!

Oracle E-Business Suite Upgrade to R12.2 Training

Live Instructor led Online sessions with Hands-on Lab Exercises, Dedicated Machines to Practice and Recorded sessions of the Training

Click here to learn more with limited time discounts

The post Upgrading Oracle Apps (EBS) to 12.2 ? OPatch stopped with error “oracle.as.common.clone, 11.1.1.6.0, higher version 11.1.1.7.0 found” appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Video: Database as a Service (DBaaS) on Oracle Cloud

Tim Hall - Sat, 2016-01-30 09:29

The latest video on my YouTube Channel is a run through of using the Database as a Service (DBaaS) offering on Oracle Cloud.

There have been a few minor changes in the interface since I last ran through capturing images, so the related article has been brought up to date.

I used my dad for the cameo in this video. Hopefully this will help him get a little more recognition, as he’s pretty much a nobody on the Oracle scene at the moment. With your help this could change!

Cheers

Tim…

Update: Almost as soon as I released this blog post the footage was out of date as Oracle released some minor changes to the interface. I rerecorded the video and re-uploaded it, so it is up to date as of now. All links from my website and this blog post point to the new video. If you have read this post via an RSS reader, you may still be seeing the old version of the post, and as a result see the link to the video as broken. But in that case, you won’t be able to read this either. :)


Oracle Database 12c Features Now Available on apex.oracle.com

Joel Kallman - Sat, 2016-01-30 06:42
As a lot of people know, apex.oracle.com is the customer evaluation instance of Oracle Application Express (APEX).  It's a place where anyone on the planet can sign up for a workspace and "kick the tires" of APEX.  After a brief signup process, in a matter of minutes you have access to a slice of an Oracle Database, Oracle REST Data Services, and Oracle Application Express, all easily accessed through your Web browser.

apex.oracle.com has been running Oracle Database 12c for a while now.  But a lot of the 12c-specific developer features weren't available, simply because the database initialization parameter COMPATIBLE wasn't set to 12.0.0.0.0 or higher.  If you've ever tried to use one of these features in SQL on apex.oracle.com, you may have run into the dreaded ORA-00406.  But as of today (January 30, 2016), that's changed.  You can now make full use of the 12c specific features on apex.oracle.com.  Even if you don't care about APEX, you can still sign up on apex.oracle.com and kick the tires of Oracle Database 12c.

What are some things you can do now on apex.oracle.com? You can use IDENTITY columns.  You can generate a default value from a sequence.  You can specify a default value for explicit NULL columns.  And much more.
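For example, all three of the features just mentioned now work. A quick sketch (the table, sequence and column names here are made up):

  CREATE SEQUENCE dept_seq;

  CREATE TABLE dept (
    id     NUMBER GENERATED ALWAYS AS IDENTITY,     -- 12c identity column
    alt_id NUMBER DEFAULT dept_seq.NEXTVAL,         -- default taken from a sequence
    name   VARCHAR2(50) DEFAULT ON NULL 'Unknown'   -- default applied even on an explicit NULL
  );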

You might wonder what's taken so long, and let's just say that sometimes it takes a while to move a change like this through the machinery that is Oracle.

P.S.  I've made the request to update MAX_STRING_SIZE to EXTENDED, so you can define column datatypes up to VARCHAR2(32767).  Until this is implemented, you're limited to VARCHAR2(4000).

node-oracledb 1.6.0 is on NPM (Node.js add-on for Oracle Database)

Christopher Jones - Sat, 2016-01-30 06:07
Node-oracledb 1.6.0, the Node.js add-on for Oracle Database, is on NPM.

In this release a comprehensive pull request by Dieter Oberkofler adds support for binding PL/SQL Collection Associative Array (Index-by) types. Strings and numbers can now be bound and passed to and from PL/SQL blocks. Dieter tells us that nowadays he only gets to code for a hobby - keep it up Dieter!

Using PL/SQL Associative Arrays can be a very efficient way of transferring data between an application and the database because it can reduce the number of 'round trips' between the two.

As an example, consider this table and PL/SQL package:

  CREATE TABLE mytab (numcol NUMBER);

  CREATE OR REPLACE PACKAGE mypkg IS
    TYPE numtype IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
    PROCEDURE myinproc(p IN numtype);
  END;
  /

  CREATE OR REPLACE PACKAGE BODY mypkg IS
    PROCEDURE myinproc(p IN numtype) IS
    BEGIN
      FORALL i IN INDICES OF p
	INSERT INTO mytab (numcol) VALUES (p(i));
    END;
  END;
  /

With this schema, the following JavaScript will result in mytab containing five rows:

  connection.execute(
    "BEGIN mypkg.myinproc(:bv); END;",
    {
      bv: { type : oracledb.NUMBER,
	    dir: oracledb.BIND_IN,
	    val: [1, 2, 23, 4, 10]
	  }
    },
    function (err) { . . . });

There is a fuller example in examples/plsqlarray.sql; also check out the documentation.
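Binding works in the other direction too. A hedged sketch (mypkg.myoutproc here is a hypothetical procedure with a single OUT parameter of type numtype; note that OUT binds of PL/SQL index-by arrays need maxArraySize so node-oracledb knows how much space to allocate):

  connection.execute(
    "BEGIN mypkg.myoutproc(:bv); END;",   // hypothetical OUT-parameter procedure
    {
      bv: { type : oracledb.NUMBER,
	    dir: oracledb.BIND_OUT,
	    maxArraySize: 10                // upper bound on elements returned
	  }
    },
    function (err, result) {
      if (err) { console.error(err); return; }
      console.log(result.outBinds.bv);    // e.g. [ 1, 2, 23, 4, 10 ]
    });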

Other changes in node-oracledb 1.6 are:

  • @KevinSheedy sent a GitHub Pull Request for the README to help the first time reader have the right pre-requisites and avoid the resulting pitfalls.

  • Fixed a LOB problem causing an uncaught error to be generated.

  • Removed the 'close' event that was being generated for LOB Writable Streams. The Node.js Streams doc specifies it only for Readable Streams.

  • Updated the LOB examples to show connection release.

  • Extended the OS X install section with a way to install on El Capitan that doesn't need root access for Instant Client 11.2. Thanks to @raymondfeng for pointing this out.

  • Added RPATH to the link line when building on OS X in preparation for future client.

TypeScript users will be happy to hear Richard Natal recently had a node-oracledb TypeScript type definition file added to the DefinitelyTyped project. This is not part of node-oracledb itself, but Richard later mentioned he found a way it could be incorporated. Hopefully he will submit a pull request so it makes it directly into the project and can be kept in sync.

Thanks to everyone who has worked on this release and kept the momentum going.

What's coming up for the next release? There is discussion about adding a JavaScript layer. This was kicked off by a pull request from Sagie Gur-Ari, which has led to some work by Oracle's Dan McGhan. See the discussion and let us know what you think. Having this layer could make it quicker and easier for JavaScript coders to contribute to node-oracledb and do things like reduce API inconsistency, make it easier to add a promise API in the future, and of course provide a place to directly add Sagie's streaming query result suggestion that started the whole thing.

I know a few contributors have recently submitted the Oracle Contributor Agreement ready to do big and small things - every bit counts. I look forward to being able to incorporate your work.

I've heard a couple of reports that Node LTS 4.2.6 on Windows is having some issues building native add-ons. 0.10, 0.12, 5.x and 4.2.5 don't have issues. Drop me a line if you encounter a problem.

Issues and questions about node-oracledb can be posted on GitHub. We value your input to help prioritize work on the add-on. Drop us a line!

node-oracledb installation instructions are here.

Node-oracledb documentation is here.