
Feed aggregator

Log Buffer #390, A Carnival of the Vanities for DBAs

Pythian Group - Fri, 2014-09-26 10:50

All eyes are on Oracle OpenWorld. With thousands of sessions, demos, and labs, topped up with countless networking opportunities, Moscone Center is the place to be.

Oracle:

Oracle OpenWorld 2014 – Bloggers Meetup.

Query to show sql_ids related to SQL Profiles.

Oracle OpenWorld is nearly upon us and the agenda is packed with interesting sessions. Prominent among these are several sessions by customers who are able to share their stories of success with Oracle Multitenant.

The biggest challenge for people coming to such huge events as Oracle OpenWorld is to navigate through all of the events that simultaneously happen at various locations.

Pythian at Oracle OpenWorld 2014.

SQL Server:

Monitoring Longest Running Transaction using SQL Server Agent Alerts.

Free PDF Booklet: 119 SQL Code Smells.

Level 1: What is "SQL Server AlwaysOn"?

10 things I learned about rapidly scaling websites with Azure.

Importing Excel Data into SQL Server Via SSIS: Questions You Were Too Shy to Ask

MySQL:

How to do reverse engineering with MySQL Workbench.

MySQL 5.5.40 Overview and Highlights.

MySQL 5.7.5: GROUP BY respects functional dependencies!

JSON UDF functions version 0.3.2 have been released.

Importing related MySQL tables into an Excel Data Model using MySQL for Excel.

Categories: DBA Blogs

ODAC 12c Release 3 Beta now available

Christian Shay - Fri, 2014-09-26 10:17

The Oracle Data Access Components (ODAC) 12c Release 3 Beta is now available! This beta introduces numerous new features for Entity Framework and managed ODP.NET.


Entity Framework 6 Code First

ODAC 12c R3 is the first ODP.NET release to be certified with Entity Framework (EF) 6 and EF Code First. EF Code First is a popular development model for .NET object-relational mapping. Developers define the application domain model using source code, rather than with a designer or an XML-based configuration file. An EF Code First model's classes are defined in code through Plain Old CLR Objects (POCOs).

This support includes interoperability with the two Entity Framework methods that control the resulting Oracle data type: Data Annotations and Code First Fluent API. Data Annotations permit you to explicitly mark a class property with one or more attributes, whereas the Code First Fluent API permits you to use code rather than attributes to achieve the same goal.

Code First developers can modify and create their database schema as their model changes via EF Code First Migrations. ODP.NET supports EF Code First Migrations through the Visual Studio Package Manager Console commands.

These features are all available in both managed and unmanaged ODP.NET.


New ODP.NET, Managed Driver Features

Several new managed ODP.NET features have been delivered with this beta. XML DB developers can now use all of the XML classes supported by ODP.NET, Unmanaged Driver, with the exception of the OracleXmlType.IsFragment and OracleCommand.XmlCommandType properties. This makes migrating unmanaged ODP.NET XML DB applications to the managed driver a simple process.

ODP.NET, Managed Driver supports the VARCHAR2, NVARCHAR2, and RAW data types up to 32 KB in size. No code changes are required to use the larger data types (which are a new Oracle Database 12c feature). By storing more data, developers can use these data types more frequently, providing programming flexibility. In addition, SQL Server to Oracle Database application migration is easier with these new data type sizes.
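On the database side, these larger sizes come from the Oracle Database 12c extended data types feature, enabled with MAX_STRING_SIZE = EXTENDED. A minimal sketch (the table and column names here are made up for illustration):

-- Assumes the database runs with MAX_STRING_SIZE = EXTENDED (a one-way change)
CREATE TABLE wide_text_demo
(
  id    NUMBER PRIMARY KEY,
  note  VARCHAR2(32767)   -- up to 32 KB with extended data types
);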

When using array binding to execute multiple DML statements, ODP.NET, Managed Driver can now provide an array that lists the number of rows affected for each input value from the bound array, rather than just the total number of rows affected. To retrieve the row counts, applications read the OracleCommand.ArrayBindRowsAffected property. You can use this information to better evaluate the DML's efficiency and whether the data changes were correctly applied.


.NET Framework 4.5.2 and Distributed Transactions

The ODP.NET managed and unmanaged beta drivers are certified with the new .NET Framework 4.5.2 release. .NET 4.5.2 introduces a new Microsoft Distributed Transaction Coordinator feature that allows ODP.NET, Managed Driver to support distributed transactions without requiring you to deploy Oracle.ManagedDataAccessDTC.dll with your application.


More to Come

Up next, the ODAC team plans NuGet-installable managed and unmanaged ODP.NET drivers. Stay tuned to @OracleDOTNET to learn when they are available. We hope to hear your feedback via the OTN discussion forums.

EPM and BI Meetup at Next Week’s Openworld (and details of our Oracle DI Speakeasy)

Rittman Mead Consulting - Fri, 2014-09-26 10:08

Just a short note to help publicise the Oracle Openworld 2014 EPM and BI Meetup that’s running next week, organised by Cameron Lackpour and Tim Tow from the ODTUG board.

This is an excellent opportunity for EPM and BI developers and customers to get together and network over drinks and food, and chat with members of the ODTUG board and maybe some of the EPM and BI product management team. It's running at Piattini, located at 2331 Mission St. (between 19th St & 20th St), San Francisco, CA 94110 from 7pm to late, and there are more details at this blog post by Cameron. The turnout should be pretty good, and if you're an EPM or BI developer looking to meet up with others in your area, this is a great opportunity to do so. Attendance is free and you just need to register using this form.

Similarly, if you’re into data warehousing and data integration you might be interested in our Rittman Mead / Oracle Data Integration’s Speakeasy event, running on the same evening (Tuesday September 30th 2014) from 7pm – 9pm at Local Edition, 691 Market St, San Francisco, CA. Aimed at ODI, OWB and data integration developers and customers and featuring members of the Rittman Mead team and Oracle’s Data Integration product team, again this is a great opportunity to meet with your peers and share stories and experiences. Registration is free and done through this registration form, with spaces still open at the time of posting.

Categories: BI & Warehousing

Oracle Openworld SF .NET and Windows Sessions

Christian Shay - Fri, 2014-09-26 10:04
Oracle Openworld San Francisco begins this Sunday, September 28!  Join the Oracle .NET development team and your fellow .NET developers and learn about new Oracle .NET features, including Entity Framework, Code First, Code First Migrations, NuGet packaging, support for new Oracle Database 12c features and much more. Meet Oracle and .NET experts at the .NET Demogrounds booth and at the "Ask the Experts" Q+A session. The .NET schedule listing is at this link. Register at the OpenWorld website. See you there!

On the Road with Luan

Pythian Group - Fri, 2014-09-26 08:59

For the months of September, October, and November, Microsoft SQL Server MVP Luan Moreno will be touring Brazil, Europe, and the USA for various speaking engagements. He'll be sharing his SQL Server knowledge and insights during SQL Saturdays, conferences, and virtual conferences.

“Pythian is a company that values and truly supports employees in sharing their knowledge at community events,” Luan says. “It’s a pleasure to be a part of this amazing company.”

 

Date       | Location               | Event              | Topic                                            | Notes
27-Sep-14  | São Paulo, Brazil      | SQL Saturday #325  | In-Memory OLTP a.k.a Hekaton                     | Speaking schedule
28-Sep-14  | São Paulo, Brazil      | SQL Saturday #325  | MythBusters – Caçadores de Mitos                 | Speaking schedule
6-Oct-14   | Online                 | MVP ShowCast 2014  | In-Memory OLTP – Cenários de Aplicação           | Register here
7-Oct-14   | Online                 | 24 Hours of PASS   | Troubleshooting SQL Server Extended Events       | Register here
7-Oct-14   | Online                 | 24 Hours of PASS   | In-Memory OLAP a.k.a ColumnStore Index Internals | Register here
17-Oct-14  | Porto Alegre, Brazil   | TDC 2014           | ORM e Consultas x Performance                    | Speaking schedule
25-Oct-14  | Rio de Janeiro, Brazil | SQL Saturday #329  | In-Memory OLAP a.k.a ColumnStore Index           | Speaking schedule
Oct 28-31  | Barcelona, Spain       | TechEd 2014 Europe | Subject Matter Expert (SME) – SQL Server         | Event schedule
Nov 3-6    | Redmond, WA            | MVP Summit 2014    | TBD                                              | Event schedule
Nov 4-7    | Seattle, WA            | PASS Summit 2014   | TBD                                              | Event schedule

 

Will you be attending any of these events? If so, Luan extends the invite to chat SQL Server over a coffee! Reach out to him at moreno@pythian.com or follow him on Twitter at @luansql.

Categories: DBA Blogs

Discovery: Enterprise Manager Workshop

Jean-Philippe Pinte - Fri, 2014-09-26 06:16
In early October, Oracle France is offering a workshop to discover the Oracle Enterprise Manager 12c Cloud Control solution through six hands-on labs.

Latest OAM certified against EBS

Frank van Bortel - Fri, 2014-09-26 03:22
OAM 11.1.2.2 certified with EBS

This blog entry shows that OAM 11g Release 2 PS2 (a.k.a. 11.1.2.2.0) is certified against Oracle E-Business Suite 11 and 12 as of February 2014. Just in case you missed it, like me.

Oracle E-Business and SSO using OAM

The blog entry also references a series of articles on how to do E-Business Suite SSO using OAM.

12c: Does PDB Have An SPfile?

Oracle in Action - Fri, 2014-09-26 01:39


In a multi-tenant container database there are many PDBs per CDB, so it is possible to set some parameters individually for each PDB. The SPFILE for the CDB stores parameter values associated with the root, which apply to the root and serve as default values for all other containers. Different values can be set in a PDB for those parameters where the column ISPDB_MODIFIABLE in V$PARAMETER is TRUE. Parameters set for a PDB are stored in the table PDB_SPFILE$, so they are remembered across PDB close/open and across restarts of the CDB.
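As a quick sketch of how to find such parameters, you can list everything that accepts a PDB-level value (this works from any container):

-- List parameters that can be set per PDB
select name, value, ispdb_modifiable
from   v$parameter
where  ispdb_modifiable = 'TRUE'
order  by name;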

– Currently I have a CDB called CDB1 with one PDB, PDB1.

SQL> sho con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> sho pdbs

CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED                       READ ONLY  NO
3 PDB1                           READ WRITE NO

– The table PDB_SPFILE$ exists in each of the containers (cdb$root and PDB1)

SQL>  select con_id,  table_name from cdb_tables  where table_name = 'PDB_SPFILE$';

CON_ID TABLE_NAME
---------- --------------------
3 PDB_SPFILE$
1 PDB_SPFILE$

– pdb_spfile$ contains only those parameters which have been explicitly set for a container; hence there are currently no records

SQL>   col container_name for a10
col parameter for a20
col value$ for a30

select container.name container_name, par.name PARAMETER,
par.value$
from pdb_spfile$ par, v$containers container
where par.pdb_uid = container.con_uid
and par.name = 'cursor_sharing';

no rows selected

– Let's explicitly set cursor_sharing = 'similar' in root and check if it is reflected in pdb_spfile$

SQL> alter system set cursor_sharing='similar';

col container_name for a10
col parameter for a20
col value$ for a30
select container.name container_name, par.name PARAMETER,
par.value$
from pdb_spfile$ par, v$containers container
where par.pdb_uid = container.con_uid
and par.name = 'cursor_sharing';

no rows selected

– It does not show any results, but v$spparameter has been updated, probably implying that the SPFILE for the root is maintained in the operating system only and that pdb_spfile$ does not contain info about parameters in cdb$root.

SQL> select name, value from v$spparameter where name='cursor_sharing';

NAME                           VALUE
------------------------------ -------
cursor_sharing                 similar

-- v$parameter shows the value of parameter for root

SQL> col name for a30
col value for a30

select con_id, name, value from v$parameter
where name = 'cursor_sharing';


CON_ID NAME                           VALUE
---------- ------------------------------ --------
1 cursor_sharing                 similar

– To see parameters for the CDB and all the PDBs (except PDB$SEED), v$system_parameter can be accessed. It can be seen that currently it shows only the value for the CDB, which will be inherited by all the PDBs.

SQL>select con_id, name, value from v$system_parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ---------
0 cursor_sharing                 similar

– Change container to PDB1 and verify that PDB has inherited the value from CDB

SQL> alter session set container=pdb1;

sho parameter cursor_sharing

NAME                                 TYPE        VALUE
------------------------------------ -------     ---------------
cursor_sharing                       string      similar

– Since the parameter has not been explicitly specified in the PDB, v$spparameter shows a record for con_id = 0 and NULL in the value column

SQL> select con_id, name, value from v$spparameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ----------
0 cursor_sharing

– Let’s check if the parameter can be modified for the PDB

SQL> col ispdb_modifiable for a17
select con_id, name, value, ispdb_modifiable

from v$parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE      ISPDB_MODIFIABLE
---------- ------------------------------ ---------- -----------------
3 cursor_sharing                 similar    TRUE

– Since the parameter can be modified in PDB, let us modify its value in PDB to ‘FORCE’

SQL> alter system set cursor_sharing = 'FORCE';

sho parameter cursor_sharing

NAME                                 TYPE        VALUE
------------------------------------ ----------- -----
cursor_sharing                       string      FORCE

SQL> select con_id, name, value from v$parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ----------
3 cursor_sharing                 FORCE

– v$spparameter shows the updated value, but con_id is still 0 (bug??)

SQL> select con_id, name, value from v$spparameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ----------
 0 cursor_sharing                 FORCE

– Current value of the parameter  for PDB can be viewed from root using v$system_parameter

SQL> alter session set container=cdb$root;
select con_id, name, value from v$system_parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ -------
0 cursor_sharing                 similar
3 cursor_sharing                 FORCE

– Current value in spfile of PDB can be seen from pdb_spfile$

SQL> col value$ for a30
select pdb.name PDB_NAME, par.name PARAMETER, par.value$
from pdb_spfile$ par, v$pdbs pdb
where par.pdb_uid = pdb.con_uid
and par.name = 'cursor_sharing';

PDB_NAME   PARAMETER            VALUE$
---------- -------------------- ------------------------------
PDB1       cursor_sharing       'FORCE'

– The parameter still has its earlier value of 'similar' for cdb$root

SQL> sho parameter cursor_sharing

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
cursor_sharing                       string      similar

SQL> col name for a30
col value for a30

select con_id, name, value from v$parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ --------
1 cursor_sharing                 similar

– Let’s check if modified value persists across closing / opening of the PDB

SQL> alter pluggable database pdb1 close;

– After the PDB is closed, the entry in its SPFILE is still visible, but the current value cannot be seen since the PDB is closed

SQL> col value$ for a30
select pdb.name PDB_NAME, par.name PARAMETER, par.value$
from pdb_spfile$ par, v$pdbs pdb
where par.pdb_uid = pdb.con_uid
and par.name = 'cursor_sharing';

PDB_NAME   PARAMETER            VALUE$
---------- -------------------- -------------
PDB1       cursor_sharing       'FORCE'

SQL> select con_id, name, value from v$system_parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ------------
0 cursor_sharing                 similar

– It can be seen that after the PDB is re-opened, the updated value still persists

SQL>  alter pluggable database pdb1 open;

select con_id, name, value from v$system_parameter
where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ -------------
0 cursor_sharing                 similar
3 cursor_sharing                 FORCE

– Let’s verify that parameter change persists across CDB shutdown

SQL> shu immediate;
     startup
     alter pluggable Database  pdb1 open;

     select con_id, name, value 
     from   v$system_parameter
     where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ -----------
0 cursor_sharing                 similar
3 cursor_sharing                 FORCE

SQL> col value$ for a30
     select pdb.name PDB_NAME, par.name 
            PARAMETER, par.value$
     from pdb_spfile$ par, v$pdbs pdb
     where par.pdb_uid = pdb.con_uid
      and par.name = 'cursor_sharing';

PDB_NAME   PARAMETER            VALUE$
---------- -------------------- ------------------------------
PDB1       cursor_sharing       'FORCE'

– Now we will change the parameter in PDB spfile only

SQL> alter session set container=pdb1;

     alter system set cursor_sharing = 'EXACT' scope=spfile;

– Current value still remains FORCE

sho parameter cursor_sharing

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------
cursor_sharing                       string      FORCE

– Value has been changed to EXACT in the SPFILE only

SQL> select con_id, name, value
     from     v$spparameter
     where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ -----------
0 cursor_sharing                 EXACT

– The above changes can be seen from root as well

SQL> alter session set container=cdb$root;

-- The current value is shown as FORCE

SQL> select con_id, name, value 
      from  v$system_parameter
     where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ----------
0 cursor_sharing                 similar
3 cursor_sharing                 FORCE

– The value in SPFILE is ‘EXACT’ as set

SQL> col value$ for a30
     select pdb.name PDB_NAME, par.name 
            PARAMETER, par.value$
     from pdb_spfile$ par, v$pdbs pdb
      where par.pdb_uid = pdb.con_uid
     and par.name = 'cursor_sharing';

PDB_NAME   PARAMETER            VALUE$
---------- -------------------- ------------------------------
PDB1       cursor_sharing       'EXACT'

– Let's close and re-open the PDB to verify that the value in the SPFILE is applied

SQL> alter pluggable database pdb1 close;

    alter pluggable database pdb1 open;
  
     select con_id, name, value 
     from   v$system_parameter
     where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ ---------------
0 cursor_sharing                 similar
3 cursor_sharing                 EXACT

– Since the value in the SPFILE is the same as the default, we can remove this entry by resetting the parameter.

SQL> alter session set container=pdb1;
     alter system reset cursor_sharing;
     sho parameter cursor_sharing

NAME                                 TYPE        VALUE
------------------------------------ ----------- ---------
cursor_sharing                       string      EXACT

– The entry has been deleted from spfile

SQL> select con_id, name, value 
     from v$spparameter
     where name = 'cursor_sharing';

CON_ID NAME                           VALUE
---------- ------------------------------ -------------
0 cursor_sharing

SQL> alter session set container=cdb$root;

     col value$ for a30
     select pdb.name PDB_NAME, par.name 
            PARAMETER, par.value$
     from pdb_spfile$ par, v$pdbs pdb
     where par.pdb_uid = pdb.con_uid
     and par.name = 'cursor_sharing';

no rows selected

I hope this post was useful. Your comments and suggestions are always welcome!!

References:

Oracle documentation

——————————————————————————————-

Related Links:

Home

Oracle 12c Index

 

 




Categories: DBA Blogs

Investigation of IPEDS Distance Education Data Highlights System Not Ready for Modern Trends

Michael Feldstein - Fri, 2014-09-26 00:00

This article is cross-posted to the WCET blog.

After billions of dollars spent on administrative computer systems and billions of dollars invested in ed tech companies, the U.S. higher education system is woefully out of date and unable to cope with major education trends such as online & hybrid education, flexible terms, and the expansion of continuing and extended education. Based on an investigation of the recently released distance education data for IPEDS, the primary national education database maintained by the National Center for Education Statistics (NCES), we have found significant confusion over basic definitions of terms, manual gathering of data outside of the computer systems designed to collect data, and, due to confusion over which students to include in IPEDS data, the systematic non-reporting of large numbers of degree-seeking students.

In Fall 2012, the IPEDS (Integrated Postsecondary Education Data System) data collection for the first time included distance education – primarily for online courses and programs. This data is important for policy makers and institutional enrollment management as well as for the companies serving the higher education market.

We first noticed the discrepancies based on feedback on analyses that we have both published at the e-Literate and WCET blogs. One of the most troubling calls came from a state university representative who said that the school has never reported any students who took credit-bearing courses through its self-supported continuing education program. Since these enrollments were not included in reporting to the state, they were not reported to IPEDS either. These were credits toward degrees and certificate programs offered by the university, and therefore should have been included in IPEDS reporting based on the following instructions.

Include all students enrolled for credit (courses or programs that can be applied towards the requirements for a postsecondary degree, diploma, certificate, or other formal award), regardless of whether or not they are seeking a degree or certificate.

Unfortunately, the instructions call out this confusing exclusion (one example out of four):

Exclude students who are not enrolled for credit. For example, exclude: Students enrolled exclusively in Continuing Education Units (CEUs).

How many schools have interpreted this continuing education exclusion to apply to all continuing education enrollments? To do an initial check, we contacted several campuses in the California State University system and were told that all IPEDS reporting was handled at the system level. With the introduction of the Fall 2012 distance education changes, Cal State re-evaluated whether to change its reporting policy. A system spokesman explained that:

I’ve spoken with our analytic studies staff and they’ve indicated that the standard practice for data reporting has been to share only data for state-supported enrollments. We have not been asked by IPEDS to do otherwise so when we report distance learning data next spring, we plan on once again sharing only state-supported students.

Within the Cal State system, this means that more than 50,000 students taking for-credit self-support courses will not be reported, and this student group has never been reported.

One of the reasons for the confusion, as well as the significance of this change, is that continuing education units have moved past their roots of offering CEUs and non-credit courses for the general public (hence the name continuing education) and taken up a new role of offering courses not funded by the state (hence self-support). Since these courses and programs are not state funded, they are not subject to the same oversight and restrictions as state-funded equivalents, such as maximum tuition per credit hour.

This situation allows continuing education units in public schools to become laboratories and innovators in online education. The flip side is that given the non-state-funded nature of these courses and programs, it appears that schools may not be reporting these for-credit enrollments through IPEDS, whether or not the students were in online courses. However, the changes in distance education reporting may actually trigger changes in reporting.

Do Other Colleges Also Omit Students from Their IPEDS Report?

Given what was learned from the California State University System, we were interested in learning if other colleges were having similar problems with reporting distance education enrollments to IPEDS. WCET conducted a non-scientific canvassing of colleges to get their feedback on what problems they may have encountered. Twenty-one institutions were selected through a non-scientific process of identifying colleges that reported enrollment figures that seemed incongruous with their size or distance education operations. See the “Appendix A: Methodology” for more details.

From early August to mid-September, we sought answers regarding whether the colleges reported all for-credit distance education and online enrollments for Fall 2012. If they did not, we asked about the size of the undercount and why some enrollments were not reported.

Typically, the response included some back-and-forth between the institutional research and distance education units at each college. Through these conversations, we quickly realized that we should have asked a question about the U.S. Department of Education’s definition of “distance education.”   Institutions were very unclear about what activities to include or exclude in their counts. Some used local definitions that varied from the federal expectations. As a result, we asked that question as often as we could.

The Responses

Twenty institutions provided usable responses. We agreed to keep responses confidential. Table 1 provides a very high-level summary of the responses to the following two questions:

  • Counts Correct? – Do the IPEDS data reported include all for-credit distance education and online enrollments for Fall 2012?
  • Problem with “Distance Education” Definition? – Although we did not specifically ask this question, several people volunteered that they had trouble applying the IPEDS definition.
Table 1: Counts for Institutional Responses

          | Counts Correct? | Problem with "Distance Education" Definition?
Yes       | 11              | 3
Maybe     | 5               | 5
No        | 4               | 12

Of those that assured us that they submitted the correct distance education counts, some also reported having used their own definitions or processes for distance education. This would make their reported counts incomparable to the vast majority of others reporting.

One institution declined to respond. Given that its website advertises many hundreds of online courses, the distance education counts it reported would lead us to believe that it either: a) under-reported, or b) averages one or two students per online class. The second scenario seems unlikely.

Findings

This analysis found several issues that call into question the usability of IPEDS distance education enrollment counts and, more broadly and more disturbingly, IPEDS statistics, in general.

There is a large undercount of distance education students

While only a few institutions reported an undercount, one was from the California State University System and another from a large university system in another populous state. Since the same procedures were used within each system, there are a few hundred thousand students who were not counted in just those two systems.

In California, they have never reported students enrolled in Continuing Education (self-support) units to IPEDS. A source of the problem may be in the survey instructions. Respondents are asked to exclude: "Students enrolled exclusively in Continuing Education Units (CEUs)." The intent of this statement is to exclude those taking only non-credit courses. It is conceivable that some might misinterpret this to mean excluding those in the campus's continuing education division. What was supposed to be reported was the number of students taking for-credit courses, regardless of which college or institutional unit was responsible for offering the course.

In the other large system, they do not report out-of-state students as they do not receive funding from the state coffers.

It is unclear what the numeric scope would be if we knew the actual numbers across all institutions. Given that the total number of “students enrolled exclusively in distance education courses” for Fall 2012 was 2,653,426, an undercount of a hundred thousand students just from these two systems would be a 4% error. That percentage is attention-getting on its own.

The IPEDS methodology does not work for innovative programs…and this will only get worse

One institutional respondent, whose institution uses as many as 28 start dates for courses, estimated that there was approximately a 40% undercount in its reported enrollments. A student completing a full complement of courses in a 15-week period might not be enrolled in all of those courses at the census date. With the increased use of competency-based programs, adaptive learning, and innovations still on the drawing board, it is conceivable that the census dates used by an institution (IPEDS gives some options) might not serve every type of educational offering.

The definition of ‘distance education’ is causing confusion

It is impossible to get an accurate count of anything if there is not a clear understanding of what should or should not be included in the count. The definition of a “distance education course” from the IPEDS Glossary is:

A course in which the instructional content is delivered exclusively via distance education.  Requirements for coming to campus for orientation, testing, or academic support services do not exclude a course from being classified as distance education.

Even with that definition, colleges faced problems with counting 'blended' or 'hybrid' courses. What percentage of a course needs to be offered at a distance to be counted in the federal report? Some colleges had their own standard (or one prescribed by the state), and the percentage required to label a course "distance education" varied greatly. One reported that it included all courses with more than 50% of the course being offered at a distance.

To clarify the federal definition, one college said they called the IPEDS help desk. After escalating the issue to a second line manager, they were still unclear on exactly how to apply the definition.

The Online Learning Consortium is updating their distance education definitions. Their current work could inform IPEDS on possible definitions, but probably contains too many categories for such widespread data gathering.

There is a large overcount of distance education students

Because many colleges used their own definition, there is a massive overcount of distance education. At least, it is an overcount relative to the current IPEDS definition. This raises the question: is the near-100% standard imposed by that definition useful in interpreting activity in this mode of instruction? Is it the correct standard, given that no one else seems to use it?

In addressing the anomalies, IPEDS reporting becomes burdensome or the problems are ignored

In decentralized institutions or in institutions with “self-support” units that operate independently from the rest of campus, their data systems are often not connected. They are also faced with simultaneously having to reconcile differing “distance education” definitions. One choice for institutional researchers is to knit together numbers from incompatible data systems and/or with differing definitions. Often by hand. To their credit, institutional researchers overcome many such obstacles. Whether it is through misunderstanding the requirements or not having the ability to perform the work, some colleges did not tackle this burdensome task.

Conclusions – We Don’t Know

While these analyses have shed light on the subject, we are still left with the feeling that we don't know what we don't know. That biggest finding brings to mind former Secretary of Defense Donald Rumsfeld's famous rambling:

There are known knowns. These are things we know that we know. We also know there are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are ones we don’t know we don’t know.

The net effect is not known

Some institutions reported accurately, some overcounted, some undercounted, some did both at the same time. What should the actual count be?

We don’t know.

The 2012 numbers are not a credible baseline

The distance education field looked forward to the 2012 Fall Enrollment statistics with distance education numbers as a welcome baseline for the size and growth of this mode of instruction. That is not possible, and the problems will persist with the 2013 Fall Enrollment report when those numbers are released. These problems can be fixed, but it will take work. When can we get a credible baseline?

We don’t know.

A large number of students have not been included on ANY IPEDS survey, EVER.

A bigger issue for the U.S. Department of Education goes well beyond the laser-focused issue of distance education enrollments. Our findings indicate that there are hundreds of thousands of students who have never been reported on any IPEDS survey that has ever been conducted. What is the impact on IPEDS? What is the impact on the states where they systematically underreported large numbers of students?

We don’t know.

Who is at fault?

Everybody and nobody.  IPEDS is faced with institutional practices that vary greatly and often change from year-to-year as innovations are introduced.  Institutional researchers are faced with reporting requirements that vary depending on the need, such as state oversight agencies, IPEDS, accrediting agencies, external surveys and ranking services, and internal pressures from the marketing and public relations staffs.  They do the best they can in a difficult situation. Meanwhile, we are in an environment in which innovations may no longer fit into classic definitional measurement boxes.

What to expect?

In the end, this expansion of data from NCES through the IPEDS database is a worthwhile effort in our opinion, and we should see greater usage of real data to support policy decisions and market decisions thanks to this effort. However, we recommend the following:

  • The data changes from the Fall 2012 to Fall 2013 reporting periods will include significant changes in methodology from participating institutions. Assuming that we get improved definitions over time, there will also be changes in reporting methodology at least through Fall 2015. Therefore we recommend analysts and policy-makers not put too much credence in year-over-year changes for the first two or three years.
  • The most immediate improvement available is for NCES to clarify and gain broader consensus on the distance education definitions. This process should include working with accrediting agencies, whose own definitions influence school reporting, as well as leading colleges and universities with extensive online experience.
Appendix: Methodology

The Process for Selecting Institutions to Survey

The selection process for institutions to survey was neither random nor scientific. A multi-step process of identifying institutions that might have had problems in reporting distance education enrollments was undertaken. The goal was to identify twenty institutions to be canvassed. The steps included:

  • A first cut was created by an “eyeball” analysis of the Fall 2012 IPEDS Fall Enrollment database to identify institutions that may have had problems in responding to the distance education enrollment question.
    • Colleges that reported distance education enrollments that did not appear to be in scope with the size of the institution (i.e., a large institution with very low distance education enrollments) or what we knew about their distance education operations were included.
    • Special attention was paid to land grant colleges as they are likely to have self-funded continuing or distance education units.
    • Institutions in the California State University system were excluded.
    • This resulted in a list of a little more than 100 institutions.
  • The second cut was based upon:
    • Including colleges across different regions of the country.
    • Including a private college and an HBCU as indicators as to whether this problem might be found in colleges from those institutional categories.
    • Twenty institutions were identified.
  • In side discussions with a distance education leader at a public university, they agreed to participate in the survey. This brought the total to twenty-one institutions.

Questions Asked in the Survey

  1. Do the IPEDS data reported include all for-credit distance education and online enrollments for Fall 2012?
  2. If the IPEDS data reported does not include all for-credit distance education and online enrollments for Fall 2012, approximately how many enrollments are under-counted?
  3. If the IPEDS data reported does not include all for-credit distance education and online enrollments for Fall 2012, why did you not report some enrollments?


Introduction to Oracle BI Cloud Service : Service Administration

Rittman Mead Consulting - Thu, 2014-09-25 20:17

Earlier in the week we looked at the developer features within Oracle BI Cloud Service (BICS), aimed at departmental users who want the power of OBIEE 11g without the need to stand up their own infrastructure. We looked at the process of uploading spreadsheets and other data to the Oracle Database Schema Service that accompanies BICS, how you create the BI Repository that translates the tables and columns you upload into measures, attributes and hierarchies, and then took a brief look at how dashboards and reports are created and then shared with other users in your department. If you're coming in late, here are the links to the previous posts in the series:

One of the design goals for BICS was to reduce the amount of administration work an end-user has to perform, and to simplify and consolidate any tasks that they do have to do. Behind the scenes BICS actually comprises a BI environment and a database environment, with most of the administration work being concerned with the BI one. Let's start by looking at the service administration page that you see when you first log into the BICS environment as an administrator, with the screenshot below showing the overview page for the overall service.

[Screenshot: overall service overview page]

Oracle BI Cloud Service is part of Oracle’s overall Oracle Platform-as-a-Service (PaaS) offering, with BICS being made up of a database service and a BI service. The screenshot above shows the overall availability of these two services over the past two weeks, and you click on either the database service or the BI service to drill into more detail. Let’s click on the BI service first.

[Screenshot: BI service dashboard page]

The BI service dashboard page shows the same availability statuses again, along with a few graphs to show usage over that period. Also on this page are details of the start and end date for the service contract, details of the SFTP user account you'll need for some import/archive operations, and a link to Presentation Services for this instance, to launch the OBIEE Home Page.

The OBIEE home page, as we saw in previous posts in this series, has menu items for model editing, data uploading and creating reports and dashboards. What it also has though is a Manage menu item, as shown in the screenshot below, that takes you through to an administration function that lets you set up application roles and backup/restore the system.

[Screenshot: OBIEE home page Manage menu]

Application roles are the way that OBIEE groups permissions and privileges and then assigns them to sets of users. With on-premise OBIEE the only way to manage application roles is through Enterprise Manager Fusion Middleware Control, but with BICS this functionality has been moved into OBIEE proper so that non-system administrators can perform this task. The users you work with are the ones defined for your service (tenancy), and using this tool you can assign them to existing application roles, create new ones, or group one set of roles within another. Users themselves are created as part of the instance creation process, with the minimum (license) number of users for an instance being 10.

[Screenshot: application role management]

The Snapshots tab on this same Service Console page provides access to a new, system-wide snapshot and restore function that gives you the means to version your system, restore it from a backup, and transport a dev/test environment to your production instance. As I mentioned in previous postings in the series, each tenant for BICS comes with two instances, one for dev/test and one for prod, and the snapshot facility gives you a means to copy everything from one environment into another for when you've completed development and testing and want to put your dashboards into production.

[Screenshot: Snapshots tab on the Service Console]

Taking a snapshot, as shown in the screenshot above, creates an archive file containing your RPD, the catalog and all the security settings, and you can store a number of snapshots within each environment, giving you a (very coarse-grained) versioning ability. What you can also do is download these snapshots as what are called "BI Archive" files, as shown in the screenshot below, and it's these archive files that you can then upload into your other instance to give you your code promotion process. Note, however, that applying an archive file overwrites everything that was there before, so you'll need to be careful doing this once users start creating reports in your production environment. Really, it's just a once-only code promotion facility followed by a way of backing up and restoring your environments.

[Screenshot: downloading a snapshot as a BI Archive file]

Note also that you'll separately need to back up and restore any database elements, as these aren't automatically included in the BI archive process. Backup and restoration of database elements is done via the separate database instance service page shown below, where you can export the whole schema or just parts of it, and then retrieve the export file via an SFTP transfer.

[Screenshot: database instance service page]

So that's it in terms of BICS administration, and for our initial look at the BI Cloud Service platform. Rittman Mead are of course offering services around BICS and cloud BI in general, so contact us if you'd like to give BICS a spin, and keep an eye on the blog over the next few weeks, where we'll take you through the example BICS application we built, reporting against Salesforce.com data using their REST API.

Categories: BI & Warehousing

Pivotal GemFire 8 - Starting a Locator / Server directly from IntelliJ 13.x

Pas Apicella - Thu, 2014-09-25 18:42
With the introduction of Pivotal GemFire 8, developers can easily incorporate starting/stopping GemFire Locators/Servers directly within Java code, allowing them to integrate GemFire management within their IDE. This means developers can develop, test, and run GemFire applications entirely within their IDE of choice, making them much more productive through very simple launcher APIs.

The locator is a Pivotal GemFire process that tells new, connecting members where running members are located and provides load balancing for server use. A GemFire server is a Pivotal GemFire process that runs as a long-lived, configurable member of a distributed system. The GemFire server is used primarily for hosting long-lived data regions and for running standard GemFire processes such as the server in a client/server configuration.

In this post I am going to show how we can use the following classes to launch a Pivotal GemFire locator / server from code directly within IntelliJ IDEA, allowing you to develop/test GemFire applications directly from your IDE of choice.

Note: In this post we use IntelliJ IDEA 13.x

com.gemstone.gemfire.distributed.LocatorLauncher API
com.gemstone.gemfire.distributed.ServerLauncher API

1. Add the GemFire 8 maven REPO to your project to ensure we pull the required JAR files.
  
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>gemfire-compression</groupId>
    <artifactId>gemfire-compression</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <gemfire.version>8.0.0</gemfire.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.gemstone.gemfire</groupId>
            <artifactId>gemfire</artifactId>
            <version>${gemfire.version}</version>
            <scope>compile</scope>
        </dependency>
    </dependencies>

    <repositories>
        <repository>
            <id>gemstone-release</id>
            <name>GemStone Maven RELEASE Repository</name>
            <url>http://dist.gemstone.com.s3.amazonaws.com/maven/release</url>
        </repository>
    </repositories>
</project>

2. Create a class as follows to start a locator
  
package pivotal.gemfire.compression;

import com.gemstone.gemfire.distributed.LocatorLauncher;

import java.util.concurrent.TimeUnit;

public class StartLocator
{
    public static void main(String[] args)
    {
        // Build a locator with an embedded JMX manager so gfsh/Pulse can connect
        LocatorLauncher locatorLauncher = new LocatorLauncher.Builder()
                .set("jmx-manager", "true")
                .set("jmx-manager-start", "true")
                .set("jmx-manager-http-port", "8083")
                .set("jmx-manager-ssl", "false")
                .setMemberName("locator")
                .setPort(10334)
                .setBindAddress("localhost")
                .build();

        System.out.println("Attempting to start Locator");

        locatorLauncher.start();

        // Block until the locator reports its status (up to 30 seconds, polling every 5)
        locatorLauncher.waitOnStatusResponse(30, 5, TimeUnit.SECONDS);

        System.out.println("Locator successfully started");
    }
}

3. Create a class as follows to start a single cache server; you could create as many of these as you need.
  
package pivotal.gemfire.compression;

import com.gemstone.gemfire.distributed.ServerLauncher;

public class StartMember
{
    public static void main(String[] args)
    {
        // Build a cache server that joins the cluster via the locator and loads cache.xml
        ServerLauncher serverLauncher = new ServerLauncher.Builder()
                .setMemberName("server1")
                .set("locators", "localhost[10334]")
                .set("cache-xml-file", "cache.xml")
                .set("log-level", "info")
                .build();

        System.out.println("Attempting to start cache server");

        serverLauncher.start();

        System.out.println("Cache server successfully started");
    }
}

4. Create a cache.xml with a dummy region
  
<!DOCTYPE cache PUBLIC
    "-//GemStone Systems, Inc.//GemFire Declarative Caching 8.0//EN"
    "http://www.gemstone.com/dtd/cache8_0.dtd">
<cache>
    <cache-server bind-address="localhost" port="0" hostname-for-clients="localhost"/>

    <region name="CompressedRegion">
        <region-attributes data-policy="partition">
            <key-constraint>java.lang.String</key-constraint>
            <value-constraint>java.lang.String</value-constraint>
            <partition-attributes redundant-copies="1" total-num-buckets="113"/>
            <eviction-attributes>
                <lru-heap-percentage action="overflow-to-disk"/>
            </eviction-attributes>
        </region-attributes>
    </region>

    <resource-manager critical-heap-percentage="75" eviction-heap-percentage="65"/>
</cache>

5. Edit the run configurations for StartLocator.java to include GEMFIRE env variable as shown below.


6. Run StartLocator.java as shown below.


7. Run StartMember.java as shown below.


8. Finally, from the IDE, run a script called verify.sh to view the cluster members/regions and ensure everything worked.

verify.sh
  
#!/bin/bash

. ./setup.sh

gfsh <<EOF
connect --locator=localhost[10334];
list members;
list regions;
exit;
EOF

Output



More Information

Pivotal GemFire Locator Processes
http://gemfire.docs.pivotal.io/latest/userguide/deploying/topics/running_the_locator.html

Pivotal GemFire Server Processes
http://gemfire.docs.pivotal.io/latest/userguide/deploying/topics/running_the_cacheserver.html

Categories: Fusion Middleware

Query to show sql_ids related to SQL Profiles

Bobby Durrett's DBA Blog - Thu, 2014-09-25 17:58

I have a number of SQL Profiles on a database I'm working on, and I wanted to know the sql_id of the SQL statements that they relate to. Here is what I came up with:

select distinct 
p.name sql_profile_name,
s.sql_id
from 
dba_sql_profiles p,
DBA_HIST_SQLSTAT s
where
p.name=s.sql_profile;

Here is the output on my system:

SQL_PROFILE_NAME               SQL_ID
------------------------------ -------------
coe_5up1944u1zb7r_1979920644   5up1944u1zb7r
coe_b9ad7h2rvtxwf_3569562598   b9ad7h2rvtxwf
coe_9f21udzrfcygh_2815756405   9f21udzrfcygh
coe_366brswtxxuct_10194574     366brswtxxuct
coe_2261y37rp45py_2815756405   2261y37rp45py

These are all profiles I created manually with the SQLT scripts, so the profile name includes the sql_id, but not all SQL Profiles are like this. I have more entries in dba_sql_profiles than these five, but these are the only rows that matched a row in dba_hist_sqlstat, so I guess this won't work for queries that are not used very often, or that are so fast that they don't get pulled into the tables behind dba_hist_sqlstat.
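As a sketch of one workaround (not from the original post), the SQL_PROFILE column of V$SQL can catch statements that are still in the shared pool even if they never made it into the AWR tables behind dba_hist_sqlstat:

select distinct 
p.name sql_profile_name,
s.sql_id
from 
dba_sql_profiles p,
v$sql s
where
p.name=s.sql_profile;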

– Bobby



Categories: DBA Blogs

Singapore

Doug Burns - Thu, 2014-09-25 16:31

Now, *this* is a post I should have written ages ago but somehow (as in most cases these days) Twitter overtook blogging because it is so much easier to write a bunch of tweets on a mobile device of some kind when living normal life than to sit down and write a blog post.

By now, most people I know know that I've moved to Singapore because I've either bombarded them with face-to-face chat leading up to the move, or they've similarly but asynchronously been bombarded by my 140-character diarrhea on Twitter. (If you care about that, I'm trying to do it from @DouglasIBurns.) But, to spare friends who have not had the joys of either experience who I might be seeing for the first time in a while at Openworld, here are a few facts that might make conversations a little less painful ....

- I now live and work in Singapore.

- It's my first job as an employee of a company other than my own for around 22 years. Those who know me well will understand that this fact is the most surprising and scary of all!

- Singapore is fantastic and don't believe a large majority of what you read about it on the Internet. There's usually some truth in there but it's exaggerated, both positively and negatively. If you're expecting some kind of pristine, well-oiled, boring Utopia, then you've got the wrong place. There are definitely elements of that, but I've been smoking my little head off whilst having a relatively cheap cold Tiger beer in less-than-salubrious surroundings. It might be slick and sterile by Asian standards, but it's lively and real enough for me.

- I have DBA privileges back for the first time in a long time and I'm enjoying that immensely. A senior role that gives me scope to actually *do stuff* is more than fine by me!

- I think our original intention in moving was to give it a try for the minimum 1 year that would be required, but probably 2. However, I keep running into people who thought the same and are here 4, 6 or 8 years later. I can understand why.

- Little known fact: I moved to Singapore when I was around 9 months old because my dad was posted here by the Royal Air Force, just as Singapore was becoming an independent country, and left when I was almost 4. So I grew up in Singapore, but remember nothing about it. However, my older family adored it and they're all looking forward to coming back to visit. I still haven't visited our old home yet. I wonder if it will provoke any memories? As my eldest sister pointed out, I celebrated my first birthday in Singapore and (hopefully) I'll celebrate my 50th one here too.

- Despite the move requiring a lot of effort, I feel re-energised by the place and so hopefully I can get back to more blogging again, particularly as I've had 3 months of interesting issues to contend with, although there will be limits on what I can blog about unless I can reproduce some non-specific test cases :-(

- I am *so* pleased that Morten Egan decided to come here too. He's not only a stellar worker, but a pretty top human being too and it's good to have someone around with similar humour and sensibilities. We'll conquer the world together! (actually, that last bit may be a joke)

It's safe to say that it was a great decision to move and thanks to @madsjt and @RCT_Enterprises for taking it on with me! Other than that, no more Singapore chat here. Follow this Twitter account if you care.

Oracle Priority Support Infogram for 25-SEP-2014

Oracle Infogram - Thu, 2014-09-25 15:22

OpenWorld
Focus On Oracle Advanced Customer Support
Oracle Support and Services at Oracle OpenWorld 2014
Architecting a Complete Access Solution for the Cloud Economy [CON7975]
Optimizing Oracle Exadata with Oracle Support Services: A Client View from KPN [CON7054]
OakTable always has a conference within the conference with some leading lights of the RDBMS world. From Kyle Hailey: Agenda for Oaktable World 2014, located at Creativity Museum
From Oracle Fusion Middleware: You are invited! See the Power of Innovation with Oracle Fusion Middleware
RDBMS
Import into UTF8 Database, from Laurent Schneider.
When to use the Oracle Database In-Memory option? from AMIS Technology Blog.
PL/SQL
From Oracle-Base: PL/SQL New Features and Enhancements in Oracle Database 12c Release 1 (12.1)
Linux
From the Oracle Linux Blog: Oracle Linux 5.11 is now available
Installing Oracle’s VirtualBox-4.3 in Fedora 20, from Smittix!
PeopleSoft
From Archbeat: Video: 2 Minute Tech Tip: Peoplesoft, MAA, and Exalogic
BI
From Rittman Mead: Introduction to Oracle BI Cloud Service : Building Dashboards & Reports
Big Data
From Technet: Download Oracle Big Data Lite Virtual Machine
From Pythian: Is X a Big Data product?
GlassFish
From The Aquarium: Spotlight on GlassFish 4.1: #11 Enable Remote Admin
Testing
From Annuitas: Email Best Practices – Why, When and How to Test.
Data Science
Didn't find yourself on this list? Total fail! 43 Data Science Thought Leaders, According to Berkeley University from Data Science Central.
EBS
From the Oracle E-Business Suite Support Blog:
North American Payroll Year End is Approaching!
Functional Impact Analysis Documents for Oracle Projects Key Patches
Don't Miss This! Leverage Endeca extension with iProcurement to Increase Operating Profit
From the Oracle E-Business Suite Technology Blog:
Second Recommended Patch Collection for EBS 12.1.3 Now Available
Updated Whitepaper: Using Application Express with EBS 12.2

Database 12.1.0.1 Certified with EBS 12.2

Announcement: MySQL & Enterprise Manager

Jean-Philippe Pinte - Thu, 2014-09-25 15:06
Oracle has announced the extension of Oracle Enterprise Manager's capabilities to monitor MySQL databases.

More information:

In-Memory OLTP – ORDER BY Behavior Clause Using Range Index

Pythian Group - Thu, 2014-09-25 14:43

Some clients are beginning to think about the possibility of migrating some disk-based tables to In-Memory tables, but this process is not so simple. Migrating tables requires a new mindset, as some things have changed in the SQLOS and SQL Server architecture.

It is extremely important to know about the differences, the scenarios that can be applied, and the non-supported aspects. First, take a look at the official link for the In-Memory Tables documentation – http://msdn.microsoft.com/en-us/library/dn133186%28v=sql.120%29.aspx

One of the things we need to know about In-Memory tables is that this feature comes with two new types of indexes, known as the HASH INDEX and the RANGE INDEX.

1. Hash Index – The hash index is ideal for equality predicates, for example searching for a specific number or string.

Figure 1 – Hash Index Structure

2. Range Index – The range index is ideal for range predicates, for example searching for a range of values, and is most commonly used with date types such as DATE, DATETIME, and DATETIME2.

Figure 2 – Range Index Structure

A very important aspect when creating a range index is the direction of the data sort (ASC or DESC). For in-disk (conventional) tables the ORDER BY direction is not normally a big concern, because the data pages are doubly linked; the difference only becomes significant when an index covers more than one column sorted in two different directions.
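As an illustration of that multi-column case (hypothetical table and column names), a disk-based composite index can bake two different directions into one b-tree, which is exactly where direction starts to matter even for conventional tables:

-- Illustrative sketch for a disk-based (conventional) table: a single-column
-- index can be scanned in either direction thanks to doubly linked pages, but
-- a composite index with mixed directions fixes one specific row ordering.
CREATE NONCLUSTERED INDEX ix_Orders_Customer_Date
    ON dbo.Orders (CustomerId ASC, OrderDate DESC);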

Another interesting aspect of the ORDER BY clause on in-disk tables is how index ordering interacts with parallelism; you can check this interesting behavior here: sqlmag.com/t-sql/descending-indexes

Now, let's analyze the ORDER BY clause on In-Memory tables using the RANGE INDEX, and check the behavior and the differences between ASC and DESC ordering in queries.

USE HktDB
GO

sp_help 'inmem_DadosAtendimentoClientes'

Figure 3 – Range Index = idxNCL_DataCadastro

Creating an In-Memory table with a NONCLUSTERED range index in ascending order:

CREATE TABLE [dbo].[inmem_DadosAtendimentoClientes]
(
    -- Column list abbreviated in the original post; an illustrative key column
    -- is added here so the statement runs (memory-optimized tables require a
    -- primary key when DURABILITY = SCHEMA_AND_DATA).
    [Id] INT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    [DataCadastro] DATETIME NOT NULL,
    INDEX [idxNCL_DataCadastro] NONCLUSTERED
    (
        [DataCadastro] ASC
    )
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Retrieving information about the RANGE INDEX:

SELECT SIS.name AS IndexName,
       SIS.type_desc AS IndexType,
       XIS.scans_started,
       XIS.rows_returned,
       XIS.rows_touched
FROM sys.dm_db_xtp_index_stats AS XIS
INNER JOIN sys.indexes AS SIS
        ON XIS.object_id = SIS.object_id
       AND XIS.index_id = SIS.index_id
-- 50099219 is the object_id of inmem_DadosAtendimentoClientes in this
-- environment; OBJECT_ID(N'dbo.inmem_DadosAtendimentoClientes') is the
-- portable alternative
WHERE XIS.object_id = 50099219
  AND SIS.type_desc = 'NONCLUSTERED'

Figure 4 – Range Index Information

Execution 1 – Ordering using the ORDER BY ASC

SELECT *
FROM inmem_DadosAtendimentoClientes
WHERE DataCadastro BETWEEN '2005-02-15 18:58:48.000' AND '2005-02-25 18:58:48.000'
ORDER BY DataCadastro ASC

[Figure – Execution plan for the ASC query]

Execution 2 – Ordering using the ORDER BY DESC

SELECT *
FROM inmem_DadosAtendimentoClientes
WHERE DataCadastro BETWEEN '2005-02-15 18:58:48.000' AND '2005-02-25 18:58:48.000'
ORDER BY DataCadastro DESC

[Figure – Execution plan for the DESC query]

The Sort operator accounts for 80% of the cost in this query. Why?

Analyzing the XML of the Execution Plan…

<OrderBy>
  <OrderByColumn Ascending="false">
    <ColumnReference Database="[HktDB]" Schema="[dbo]" Table="[inmem_DadosAtendimentoClientes]" Column="DataCadastro" />
  </OrderByColumn>
</OrderBy>

What happened?

When we run a query with an ORDER BY clause on a range index column, we need to check the direction in which the index was created (ASC or DESC). In this case I created the index on 'DataCadastro' in ascending order, so the data is ordered ascending, not descending. For In-Memory tables the direction MATTERS a lot: you only benefit from the index ordering if the direction you query matches the direction in which the index was created. The data is stored differently in memory, and this is a BY DESIGN consideration.

MSDN – “Nonclustered indexes (not hash indexes) support everything that hash indexes supports plus seek operations on inequality predicates such as greater than or less than, as well as sort order. Rows can be retrieved according to the order specified with index creation. If the sort order of the index matches the sort order required for a particular query, for example if the index key matches the ORDER BY clause, there is no need to sort the rows as part of query execution. Memory-optimized nonclustered indexes are unidirectional; they do not support retrieving rows in a sort order that is the reverse of the sort order of the index. For example, for an index specified as (c1 ASC), it is not possible to scan the index in reverse order, as (c1 DESC).”


Recommendation

Always create the RANGE index in the sort direction you intend to query. In most cases the common pattern is DESC, because you normally want to see the most recent data in your application, or to search for the last transactions on a specific date. Be careful with this choice: to change the direction you must DROP and re-CREATE the table, because In-Memory tables do not support the ALTER option.
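A minimal sketch of that recommendation, reusing the illustrative table definition from above (including the hypothetical [Id] key column): since the table cannot be altered, the only way to flip the index direction is to drop and re-create the whole table.

-- In-Memory tables do not support ALTER TABLE in SQL Server 2014, so changing
-- the index direction means dropping and re-creating the table.
DROP TABLE [dbo].[inmem_DadosAtendimentoClientes];

CREATE TABLE [dbo].[inmem_DadosAtendimentoClientes]
(
    [Id] INT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    [DataCadastro] DATETIME NOT NULL,
    -- Declared DESC this time, so "most recent first" queries avoid the Sort
    INDEX [idxNCL_DataCadastro] NONCLUSTERED ([DataCadastro] DESC)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

With the index declared DESC, an ORDER BY DataCadastro DESC query can stream rows straight from the index, and the expensive Sort operator disappears from the plan.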

You don't want to see this in your environment, ever!

[Figure – Plan comparison between the ASC and DESC queries]


Categories: DBA Blogs

Database Links

Tim Dexter - Thu, 2014-09-25 13:39

Yeah, it's been a while, moving on ...

I got a question a week back asking about how BI Publisher could handle db links. The customer currently has db links from DB1 to DB2 and uses them in their queries. Could BIP handle the syntax and pass it on to the database in its SQL, or could it handle the link another way?

select e1.emp_name
,      e1.emp_id
,      e2.manager
from   emps e1
,      emps@db2 e2
where  e1.manager_id = e2.id

Well, there is the obvious way to create the join in BIP. Just get rid of the db link altogether and create two separate database connections (db1 and db2). Write query A against db1 and query B against db2, then just create a join between the two queries. Simple.

But what if you wanted to use the db link? Well, BIP would choke on the @db2 you would have in the SQL; some silly security rules that, no, you cannot turn off even if you want to. But there are ways around it (the choking, not the security). Create an alias at the database level for emps@db2, so that BIP can parse the resulting query. Let's assume I create an alias in the db for my db-linked table called 'managers'. Now my query becomes:

select e1.emp_name
,      e1.emp_id
,      e2.manager
from   emps e1
,      managers e2
where  e1.manager_id = e2.id

 BIP will not choke, it will just pass the query through and the db can handle the linking for it.
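For illustration, that database-level alias would typically be a synonym (or, failing that, a view) over the db-linked table; a minimal sketch, assuming the link really is named db2:

-- Oracle: hide the db link behind a synonym so the query text contains no '@'
CREATE SYNONYM managers FOR emps@db2;

-- or, where a synonym is not an option, a simple view works just as well
CREATE OR REPLACE VIEW managers AS SELECT * FROM emps@db2;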

That's it, that's all I got on db links. See you in 6 months :)




Categories: BI & Warehousing

ODI 12c - Migrating from OWB to ODI - PLSQL Procedures

Antonio Romero - Thu, 2014-09-25 11:42


Check the blog post below for details on supporting PLSQL procedures within ODI, and how you can get performance improvements of several hundred percent:

https://blogs.oracle.com/dataintegration/entry/odi_12c_migrating_from_owb

Cheers
David 

OOW 2014: Beginnings

Doug Burns - Thu, 2014-09-25 11:15

Disclosure: I'm attending Openworld at the invitation of the OTN ACE Director program who are paying for my flights, hotel and conference fee. My employer has helpfully let me attend on work time, as well as sending other team mates because they recognise the educational value of attending. Despite that, all of the opinions expressed in these posts are, as usual, all my own.

As I'm completely free of any presentation responsibilities this year, I thought I might try a little blogging again and see how I get on. I'm loving my new Lenovo Yoga 2 Pro, which reminds me of a modernised, lighter version of my old Sony (the red one, rather than the monstrosity that I replaced it with) which is much easier to carry around and work with. As I said, I'll see how I get on.

The most obvious difference from a personal perspective this year is that I'm now a full-time employee of a company in Singapore (more on that in a separate post), which means a few changes :-

- A much longer trip out to San Francisco. It's not quite Australian lengths (kudos to all my Aussie mates for doing this time after time) but 24 hours or so door-to-door is still a little more challenging than my usual trip. I really can't complain much, though. The date-line thing and West-East jet lag on the way here is different, but the flights were fine, particularly after Cathay Pacific bumped me up to PE. (Thanks!) I feel great, to be honest, as a few people have noticed. Singapore is definitely good for me.

- I'm being paid while I'm here! Any independent consultant/contractor will know what a huge difference this is making to me, not having to take time off paid work. On the flip-side, I do need to pay a little more attention to presentations that are relevant to my employer. I've also started being invited to more of those jolly events that *real customers* are invited to. It's a bit difficult having to explain to the local sales guys that my dance card at OOW is already rammed as it is, so I'll probably need to at least show face at a couple of them.

The tail-end of the trip - arriving in the USA - was the most painful part, as seasoned visitors will be unsurprised to hear, and it meant I was up until around 2:30 local time. With having to tie up some loose ends for the office too, it kind of ruined the first day of the ACE D briefing for me. I've still had time for coffees and chat around the Oracle offices, though, and it's been great to see so many old friends' faces again.

Feeding off how refreshed I feel compared to when I'm usually here and the lack of frantic slide-polishing, I'm looking forward to a great conference!

(Man, I *love* this laptop ...) 

Drinks! Food! Fun! at the My Oracle Support Monday Mix Event at OpenWorld San Francisco

Chris Warticki - Thu, 2014-09-25 09:00
Join us for a fun and relaxing happy hour at the annual My Oracle Support Monday Mix. This year’s gathering is Monday, September 29th from 6:00 to 8:00 p.m. at the ThirstyBear Brewing Company – just a 3 minute walk from Moscone Center.

After a busy day at Oracle OpenWorld, take a break and unwind with your peers. Stop by and get to know the Oracle Support Engineers you depend on, as well as key Oracle Support executives and developers, over drinks and hors d'oeuvres.

Admission is free for Premier Support customers with your Oracle OpenWorld badge. Visit our website for more details. We’ll see you there!

Visit the Support and Services Oracle OpenWorld website to discover how you can take advantage of all Oracle OpenWorld has to offer.

