Feed aggregator

e-Literate TV Case Study Preview: Middlebury College

Michael Feldstein - Sun, 2015-02-15 10:48

By Michael Feldstein

As we get closer to the release of the new e-Literate TV series on personalized learning, Phil and I will be posting previews highlighting some of the more interesting segments from the series. Both our preview posts and the series itself start with Middlebury College. When we first talked about the series with its sponsors, the Bill & Melinda Gates Foundation, they agreed to give us the editorial independence to report what we find, whether it is good, bad, or indifferent. And as part of our effort to establish a more objective frame, we started the series by going not to a school that was a Gates Foundation grantee but to the kind of place that Americans probably think of first when they think of a high-quality personalized education outside the context of technology marketing. We decided to go to an elite New England liberal arts college. We wanted to use that ideal as the context for talking about personalizing learning through technology. At the same time, we were curious to find out how technology is changing these schools and their notion of what a personal education is.

We picked Middlebury because it fit the profile and because we had a good connection through our colleagues at IN THE TELLING.[1] We really weren’t sure what we would find once we arrived on campus with the cameras. Some of what we found there was not surprising. In a school with a student/teacher ratio of 8.6 to 1, we found strong student/teacher relationships and empowered, creative students. Understandably, we heard concerns that introducing technology into this environment would depersonalize education. But we also heard great dialogues between students and teachers about what “personalized” really means to students who have grown up with the internet. And, somewhat unexpectedly, we saw some signs that the future of educational technology at places like Middlebury College may not be as different from what we’re seeing at public colleges and universities as you might think, as you’ll see in the interview excerpt below.

Jeff Howarth is an Assistant Professor of Geography at Middlebury. He teaches a very popular survey-level course in Geographic Information Systems (GIS). But it’s really primarily a course about thinking about spaces. As Jeff pointed out to me, we typically provide little to no formal education on spatial reasoning in primary and secondary schooling. So the students walking into his class have a wide range of skills, based primarily on their natural ability to pick them up on their own. This broad heterogeneity is not so different from the wide spread of skills that we saw in the developmental math program at Essex County College in Newark, NJ. Furthermore, the difference between a novice and an expert within a knowledge domain is not just about how many competencies they have racked up. It’s also about how they acquire those competencies. Jeff did his own study of how students learn in his class, which confirmed broader educational research showing that novices in a domain tend to start with specific problems and generalize outward, while experts (like professors, but also like more advanced students) tend to start with general principles and apply them to the specific problem at hand. As Jeff pointed out to me, the very structure of the class schedule conspires against serving novice learners in the way that works best for them. Typically, students go to a lecture in which they are given general principles and then are sent to a lab to apply those principles. That order works for students who have enough domain experience to frame specific situations in terms of the general principles but not for the novices who are just beginning to learn what those general principles might even look like.

When Jeff thought about how to serve the needs of his students, the solution he came up with—partly still a proposal at this point—bears a striking resemblance to the basic design of commercial “personalized learning” courseware. I emphasize that he arrived at this conclusion through his own thought process rather than by imitating commercial offerings. Here’s an excerpt in which he describes deciding to flip his classroom before he had ever even heard of the term:

Click here to view the embedded video.

In the full ten-minute episode, we hear Jeff talk about his ideas for personalized courseware (although he never uses that term). And in the thirty-minute series, we have a great dialogue between students and faculty as well as some important context setting from the college leadership. The end result is that the Middlebury case study shows us that personalized learning software tools do not have to be merely inferior substitutes for the real thing, fit only for “other people’s children.” At the same time, it reminds us of what a real personal education looks like and what we must be careful not to lose as we bring more technology into the classroom.

  1. Full disclosure: Since filming the case study, Middlebury has become a client of MindWires Consulting, the company that Phil and I run together.

The post e-Literate TV Case Study Preview: Middlebury College appeared first on e-Literate.

Database Flashback -- 3

Hemant K Chitale - Sun, 2015-02-15 10:02
Continuing my series on Oracle Database Flashback


As I pointed out in my previous post, the ability to flashback is NOT strictly specified by db_flashback_retention_target.  The actual scope may be greater than or even less than the target.

[oracle@localhost Hemant]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.2.0 Production on Sun Feb 15 23:39:24 2015

Copyright (c) 1982, 2010, Oracle. All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SYS>select sysdate from dual;

SYSDATE
---------
15-FEB-15

SYS>show parameter db_flashback_retention_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_flashback_retention_target        integer     1440
SYS>select * from v$flashback_database_log;

OLDEST_FLASHBACK_SCN OLDEST_FL RETENTION_TARGET FLASHBACK_SIZE ESTIMATED_FLASHBACK_SIZE
-------------------- --------- ---------------- -------------- ------------------------
            14573520 15-FEB-15             1440       24576000                        0

SYS>
SYS>select to_char(oldest_flashback_time,'DD-MON HH24:MI')
2 from v$flashback_database_log
3 /

TO_CHAR(OLDEST_FLASHB
---------------------
15-FEB 23:31

SYS>

In my previous post, the OLDEST_FLASHBACK_TIME was a week ago. Now, it doesn't appear to be so !

SYS>select * from v$flash_recovery_area_usage;

FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES
-------------------- ------------------ ------------------------- ---------------
CONTROL FILE                          0                         0               0
REDO LOG                              0                         0               0
ARCHIVED LOG                        .95                       .94               5
BACKUP PIECE                      29.12                       .12               6
IMAGE COPY                            0                         0               0
FLASHBACK LOG                       .61                         0               3
FOREIGN ARCHIVED LOG                  0                         0               0

7 rows selected.

SYS>

After the blog of 08-Feb, my database had been SHUTDOWN. The instance was STARTED at 23:24 today and the database was OPENed at 23:25.

Sun Feb 08 23:08:21 2015
Shutting down instance (immediate)
....
....
Sun Feb 08 23:08:37 2015
ARCH shutting down
ARC0: Archival stopped
Thread 1 closed at log sequence 1
Successful close of redo thread 1
Completed: ALTER DATABASE CLOSE NORMAL
ALTER DATABASE DISMOUNT
Completed: ALTER DATABASE DISMOUNT
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
ARCH: Archival disabled due to shutdown: 1089
Shutting down archive processes
Archiving is disabled
Sun Feb 08 23:08:38 2015
Stopping background process VKTM
Sun Feb 08 23:08:40 2015
Instance shutdown complete
....
....
Sun Feb 15 23:24:46 2015
Starting ORACLE instance (normal)
....
....
Sun Feb 15 23:25:33 2015
QMNC started with pid=34, OS id=2449
Completed: ALTER DATABASE OPEN

So, it seems that, this time, Oracle says I cannot Flashback to 08-Feb, although on 08-Feb it did say that I could Flashback to 01-Feb. I strongly recommend periodically querying V$FLASH_RECOVERY_AREA_USAGE and V$FLASHBACK_DATABASE_LOG.
I have seen DBAs refer only to the parameter db_flashback_retention_target without querying these views.
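
For reference, a minimal health check along these lines could be run (or scheduled) regularly; it uses only the two views already shown above, so it should work on any 11g or later database with Flashback Database enabled:

-- How far back can we really flash back, versus the target (in minutes)?
select retention_target,
       oldest_flashback_scn,
       to_char(oldest_flashback_time,'DD-MON-YYYY HH24:MI') oldest_flashback_time,
       round(flashback_size/1024/1024) flashback_size_mb
from   v$flashback_database_log;

-- Is the Flash Recovery Area filling up, and with what?
select file_type, percent_space_used, percent_space_reclaimable, number_of_files
from   v$flash_recovery_area_usage
order  by percent_space_used desc;
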
.
.
.
Categories: DBA Blogs

SQL Server monitoring with nagios: installation

Yann Neuhaus - Sun, 2015-02-15 07:37

Nagios is an IT Infrastructure monitoring solution which is able to monitor SQL Server instances with a specific plugin.
I have installed this plugin to test its functionality, and I will explain here how to do it.

Prerequisites installation

FreeTDS installation

FreeTDS is a set of libraries for Unix and Linux that allows your programs to natively talk to Microsoft SQL Server and Sybase databases.
First we will download this set of libraries: go to the FreeTDS website and click on Stable Release:

 b2ap3_thumbnail_blog1_20150129-144605_1.jpg

The file freetds-stable.tgz will be downloaded.
When the file is downloaded, go to the nagios download directory and uncompress the tgz file:

b2ap3_thumbnail_script11.jpg

Go to the Freetds directory and list files:

b2ap3_thumbnail_script2.jpg

Prepare and check the libraries before compilation:

b2ap3_thumbnail_script3.jpg

Now, we will compile and install the Freetds libraries:

b2ap3_thumbnail_script4.jpg

Check that you don't have any errors.
If there are none, FreeTDS is now installed.

Perl-module DBD::Sybase installation

DBD::Sybase is a Perl module which works with the DBI module to provide access to Sybase databases.
To download this module go to http://search.cpan.org/~mewp/DBD-Sybase-1.15/Sybase.pm and click the download link:

b2ap3_thumbnail_blog2_20150129-144603_1.jpg

When the file DBD-Sybase-1.15.tar.gz is downloaded, go to the download directory, uncompress the file and go to the new directory:

b2ap3_thumbnail_script5.jpg

We will now compile and install this new module:

 b2ap3_thumbnail_script6.jpg

Now the module is installed. Check whether you received any errors.
Create a symbolic link:

b2ap3_thumbnail_script7.jpg

We are now able to edit the freetds.conf file to:

  • change the TDS version, due to the security advice (follow the link and find the chapter Security Advice)
  • add the SQL Server parameters: IP address, TCP/IP port, and instance name if it is not the default instance

b2ap3_thumbnail_Nagios_SQLServer_script81.jpg

The Perl module is installed.
Prerequisites are now installed and we have to install the SQL Server plugin.

Nagios SQL Server plugin Installation

First, we have to download the SQL Server plugin for Nagios. To do so, connect here and click the link below:

 b2ap3_thumbnail_Nagios_SQLServer_blog3.jpg

When the file is downloaded:

  • go to the download directory
  • list the files
  • uncompress the file
  • list the files again to see the new plugin directory
  • change to the new directory

b2ap3_thumbnail_Nagios_SQLServer_script8.jpg

The next command to run is configure, which checks that all dependencies are present and writes a Makefile containing the build instructions.

 b2ap3_thumbnail_Nagios_SQLServer_script9.jpg

Next, we run make, which performs the compilation (it can be run as a normal user):

  b2ap3_thumbnail_Nagios_SQLServer_script91.jpg

The next command to run is make install, which installs the plugin (it has to be run as root):

b2ap3_thumbnail_Nagios_SQLServer_script92.jpg

The SQL Server Nagios plugin is successfully installed.

Set environment

As user root, change the permissions on the freetds.conf file so that the server/instance entries can be added by the nagios user:

b2ap3_thumbnail_Nagios_SQLServer_script93.jpg

Add the library path /usr/local/freetds/lib to the file /etc/ld.so.conf:

 b2ap3_thumbnail_Nagios_SQLServer_script94.jpg

As user nagios, export the FreeTDS library path in the LD_LIBRARY_PATH environment variable:

 b2ap3_thumbnail_Nagios_SQLServer_script95.jpg

Test the connection:

 b2ap3_thumbnail_Nagios_SQLServer_script96.jpg

The connection failed due to a wrong PERLLIB path...
To resolve this error, unset the PERLLIB variable.
We force the check_mssql_health plugin to use the PERLLIB libraries of the OS instead of those of the Oracle client by unsetting the PERLLIB libraries of the check_oracle_health plugin:

 b2ap3_thumbnail_Nagios_SQLServer_script97.jpg

Verification of the connection:

 b2ap3_thumbnail_Nagios_SQLServer_script98.jpg

Verification succeeded.

The next steps will now be to test this plugin: checking which counters can be used, how to configure them, with which thresholds, and how to subscribe to alerts...
Those steps will be covered in a future blog post.

Next Windows Server edition postponed 'til 2016 [VIDEO]

Chris Foot - Fri, 2015-02-13 10:50

Transcript 

Hi, welcome to RDX! With all the talk about Windows 10, systems administrators are wondering what’s in store for the next version of Windows Server.

Apparently, interested parties will have to wait until 2016. ZDNet’s Mary Jo Foley noted Microsoft intends to further refine its flagship server operating system instead of releasing it at the same time as Windows 10.

Windows Server 2003 will cease to receive any support from Microsoft this July. As a result, it’s expected that companies will upgrade to Windows Server 2012 R2. In light of this prediction, some have speculated that a new version of Windows Server would not receive the adoption rates Microsoft would want to see.

Thanks for watching! Check in next time for more OS news.

The post Next Windows Server edition postponed 'til 2016 [VIDEO] appeared first on Remote DBA Experts.

EM 12C RACI and Maintenance Task

Arun Bavera - Fri, 2015-02-13 10:10
ENTERPRISE MANAGER 12C RACI
TASK | RESPONSIBLE | ACCOUNTABLE | CONSULTED | INFORMED
Define Monitoring Requirements | Target Owners, Infrastructure Teams, EM Admin | EM Admin
Installation planning and architecture | EM Admin | EM Admin | Target Owners, Infrastructure
Installation and configuration of EM | EM Admin | EM Admin
Defining Agent deployment and patching procedures and processes | EM Admin | EM Admin
Security and User Administration | EM Admin/Security Admin | EM Admin
Admin Group Creation | EM Admin | EM Admin | Target Owners
Agent Deployment (can be performed by target owners) | Target Owners | Target Owners | EM Admin
Agent Patching (can be performed by target owners) | Target Owners | Target Owners | EM Admin
Target Configuration and Availability | Target Owners | Target Owners
Agent Troubleshooting | Target Owners, EM Admin | EM Admin
Target Troubleshooting | Target Owners | Target Owners | EM Admin
Weekly/Monthly/Quarterly Maintenance | EM Admin | EM Admin | Target Owners
OMS Patching | EM Admin | EM Admin | Target Owners

RECOMMENDED MAINTENANCE TASKS

TASK | DAILY | BIWEEKLY | MONTHLY | QUARTERLY
Review critical EM component availability | X
Review events, incidents and problems for EM related infrastructure | X
Review overall health of the system including the job system, backlog, load, notifications and task performance | X
Review Agent issues for obvious problems (i.e. large percentage of agents with an unreachable status) | X
Review Agent issues (deeper/more detailed review of agents with consistent or continual problems) | X
Review metric trending for anything out of bounds | X
Evaluate database (performance, sizing, fragmentation) | X
Check for updates in Self Update (plug-ins, connectors, agents, etc.) Note that there is an out-of-box ruleset that will provide notification for the availability of new updates | X
Check for recommended patches | X

References:
https://blogs.oracle.com/EMMAA/entry/operational_considerations_and_troubleshooting_oracle
http://www.oracle.com/technetwork/database/availability/managing-em12c-1973055.pdf
Categories: Development

How to convert Excel file to csv

Laurent Schneider - Fri, 2015-02-13 10:02

#excel to #csv in #powershell (New-Object -ComObject Excel.Application).Workbooks.Open("c:\x.xlsx").SaveAs("c:\x.csv",6)

— laurentsch (@laurentsch) February 13, 2015

One-liner to convert Excel to CSV (or to and from any other format).
There is a bug (320369) if you have Excel in English and your locale is not American. Just change your settings to us_en before conversion.

Ephemeral Port Issue with Essbase Has Been Fixed!

Tim Tow - Fri, 2015-02-13 09:24
The issue that has plagued a number of Essbase customers over the years related to running out of available ports has finally been fixed!

This issue, which often manifested itself with errors in the Essbase error 10420xx range, was caused by how the Essbase Java API communicated with the server. In essence, whenever a piece of information was needed, the Essbase Java API grabbed a port from the pool of available ports, did its business, and then released the port back to the pool. That doesn’t sound bad, but the problem occurs due to how Windows handles this pool of ports. Windows will put the port into a timeout status for a period of time before it makes the port available for reuse and the default timeout in Windows is 4 minutes! Further, the size of the available pool of ports is only about 16,000 ports in the later versions of Windows. That may sound like a lot of ports, but the speed of modern computers makes it possible, and even likely, that certain operations, such as the outline APIs, that call Essbase many, many times to get information would be subject to this issue. Frankly, we see this issue quite often with both VB and the Java Essbase Outline Extractors.

We brought this issue to the attention of the Java API team and assisted them by testing a prerelease version of the Java API jars. I am happy to report the fix was released with Essbase 11.1.2.3.502. In addition, there is a new essbase.properties setting that allows you to turn the optimization on or off:

olap.system.socketoptimization=false

It is our understanding that this optimization is turned on by default. I also checked the default essbase.properties files shipped with both Essbase 11.1.2.3.502 and 11.1.2.4 and did not see that setting in those files. It may be one of those settings that is there in case it messes something else up. The work of our own Jay Zuercher in our labs and searching Oracle Support seems to have confirmed that thought. There is apparently an issue where EIS drill-through reports don't work in Smart View if socket optimization is turned on. It is documented in Oracle Support Doc ID 1959533.1.

There is also another undocumented essbase.properties setting:

olap.server.socketIdleTime

According to Oracle development, this value defaults to 300 ms but there should be little need to ever change it. The only reason it is there is to tune socket optimization in case more than 2 sockets are used per Java API session.

Jay also tested the 11.1.2.4 version in our labs with the Next Generation Outline Extractor. With the default settings, one large test outline we have, "BigBad", with about 120,000 members in it, extracted in 1 minute and 50 seconds.  With socket optimization turned off, the same outline was only about 25% complete after 2 hours.   In summary, this fix will be very useful for a lot of Oracle customers.
Categories: BI & Warehousing

Video: Oracle Platinum Partner Redstone Content Solutions

WebCenter Team - Fri, 2015-02-13 08:47

John Klein, Principal of Redstone Content Solutions, sits down with Sam Sharp, Senior Director at Oracle, to discuss Redstone’s expertise in integrating WebCenter products, and tying the entire WebCenter Suite together.

Health care needs and database service requirements

Chris Foot - Fri, 2015-02-13 08:03

Delivering affordable, quality care to patients is an objective every health care organization in the United States would like to achieve, but doing so necessitates the appropriate backend systems. 

From a database administration service standpoint, hospitals, care clinics and other institutions require solutions with strict authentication protocols and integrated analytics functions. In addition, these databases must be accessible by those who frequently view patient information. These needs demand a lot from DBAs, but sacrificing operability isn't an option. 

Assessing the health care arena 
Bruce Johnson, a contributor to Supply Chain Digital, outlined how U.S. health care providers are managing the industry's forced evolution. The implementation of the Affordable Care Act incited a wave of supply chain redevelopment, technology consolidation and electronic health record systems integration. It's an atmosphere that has administrators and upper management wearing more hats than they can fit on their heads. 

Consider the impact of EHR solutions on health care workflow. The primary reason their deployment is required by law is that they promise to allow professionals in the industry to share patient information more effectively. For example, if a person's primary care physician suspects that his or her patient may have a spinal problem, the PCP may refer that individual to a chiropractor. In order to provide the specialist with as much information as he or she needs, the PCP delivers the patient's EHR to the chiropractor.

What does this mean for databases? 
EHR software primarily handles structured information, containing data regarding an individual's ailments, past medications, height, weight and so forth. Therefore, it makes sense that these solutions would operate on top of databases using structured query language. 

One of the reasons why relational engines such as MySQL, SQL Server and Oracle 12c are necessary is the transaction protocol these solutions abide by: atomicity, consistency, isolation and durability (ACID).

According to TechTarget, what the ACID model does is ensure that data transfers or manipulations can be easily monitored and validated. The rule set also reduces the prevalence of unauthorized or invalid data transfers or changes. For example, the "consistency" component of ACID returns all information to its original state in the event of a transaction failure. 
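
To make that concrete, here is a minimal sketch of an ACID transaction in SQL Server T-SQL syntax (the table and column names are hypothetical, not from any specific EHR product):

BEGIN TRY
    BEGIN TRANSACTION;

    -- Both statements succeed together or not at all (atomicity).
    INSERT INTO patient_results (patient_id, test_code, result_value)
        SELECT patient_id, test_code, result_value
        FROM staging_results
        WHERE batch_id = 42;

    DELETE FROM staging_results
    WHERE batch_id = 42;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- On any failure, the data is returned to its original state.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH;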

With this protocol in mind, health care organizations using relational databases to support their EHR systems should consult a database monitoring service. As EHRs hold a wealth of sensitive information, hackers are likely to target these records whenever possible. 

The post Health care needs and database service requirements appeared first on Remote DBA Experts.

How to add a Cordova NFC plugin in MAF 2.1 application

You may have already noticed that the latest release of Oracle MAF 2.1 brings a major refresh to the MAF-Cordova story. In short, we have two big improvements here: 1. The version of Cordova has...

Categories: DBA Blogs

Fine Grained Auditing (FGA) and Protecting Oracle E-Business PII Data for Executives

With the recent news about yet another database breach of Personally Identifiable Information (PII), Integrigy had a discussion with a client about how to better protect the PII data of their executives.

The following Fine-Grained-Auditing (FGA) policy started the discussion. The policy below will conditionally log direct connections to the Oracle E-Business Suite database when the PII data of corporate executives is accessed. For example, it will ignore E-Business Suite end-user connections to the database, but will catch people directly connecting to the database from their laptop. However, it will only do so if PII data for executives is accessed:

BEGIN

DBMS_FGA.ADD_POLICY (
   object_schema     =>  'HR',
   object_name       =>  'PER_ALL_PEOPLE_F',
   policy_name       =>  'FGA_PPF_NOT_GUI_AND_OFFICER',
   audit_condition   =>  ' PER_ALL_PEOPLE_F.PERSON_ID IN (
         SELECT PAX.PERSON_ID
         FROM PER_ASSIGNMENTS_X PAX, PER_JOBS J, PER_JOB_DEFINITIONS JD
         WHERE PAX.JOB_ID = J.JOB_ID
         AND J.JOB_DEFINITION_ID = JD.JOB_DEFINITION_ID
         AND UPPER(JD.SEGMENT6) LIKE UPPER(''%EXECUTIVE%''))
         AND NOT (SYS_CONTEXT (''USERENV'',''IP_ADDRESS'') IN
         (''IP of your DB server'', ''IP of your CM server'',
           ''IP of your application server'')
        AND SYS_CONTEXT (''USERENV'',''CURRENT_USER'') = ''APPS'' ) ',
   audit_column      =>   NULL,
   handler_schema    =>   NULL,
   handler_module    =>   NULL,
   enable            =>   TRUE,
   statement_types   =>  'SELECT',
   audit_trail       =>   DBMS_FGA.DB,
   audit_column_opts =>   DBMS_FGA.ANY_COLUMNS);

END;

Here is an explanation of the policy above:

  • Audits only direct database activity and ignores database connections from the E-Business Suite user interface, the database server, the web and application servers, as well as the concurrent manager.
  • Audits SELECT activity against PER_ALL_PEOPLE_F or any view based on the table PER_ALL_PEOPLE_F. PII data exists outside of PER_ALL_PEOPLE_F, but this table is the central table within the E-Business Suite that defines a person and thus contains critical PII data such as name, birthdate and National Identifier.
  • Audits ALL columns in the table but could easily be restricted to only specific columns.
  • Audits ONLY those result sets that include current or ex-employees whose job title contains '%Executive%'. Note this policy was demonstrated using the Vision demo database. Your Job Key Flexfield definition will be different.
  • FGA comes standard with the Enterprise license of the Oracle database. If you own the Oracle E-Business Suite, you don't need an additional license to use FGA.
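
Once the policy is enabled, the captured activity can be reviewed from the standard FGA audit trail view. A minimal review query is shown below (note that SQL_TEXT is only populated if the policy is created with audit_trail => DBMS_FGA.DB_EXTENDED rather than DBMS_FGA.DB):

SELECT db_user,
       os_user,
       userhost,
       timestamp,
       object_schema,
       object_name
FROM   dba_fga_audit_trail
WHERE  policy_name = 'FGA_PPF_NOT_GUI_AND_OFFICER'
ORDER  BY timestamp DESC;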

The policy above would certainly strengthen an overall database security posture, but it does have several immediate drawbacks:

  • While it does address risks with direct database activity, including the use of the APPS account from a laptop, it will not guard against privileged database users such as DBAs.
  • Spoofing of USERENV attributes is possible, which precludes using any USERENV attribute other than the IP address and DB username.
  • Audit data needs to be securely stored and regularly purged. Privileged users may have access to FGA data and policies. Audit data also needs to be retained and purged per corporate policies.
  • Lastly, the performance impact of the policy above would need to be carefully measured. If the policy above were to be implemented, it would need to be seriously tested, especially if modules are to be used such as Oracle Advanced Benefits and/or Payroll.

As part of a database security program, Integrigy recommends that all clients implement defense in depth. No one tool or security feature will protect your data. Oracle Traditional Auditing (TA) as well as FGA policies similar to the above should be implemented, but both TA and FGA have limitations and trade-offs.

Integrigy recommends that both Oracle TA and FGA be used with database security solutions such as Oracle Audit Vault and Database Firewall (AVDF), Splunk, Imperva, and IBM Guardium. Database monitoring and alerting needs to be automated and should be done using a commercial tool. You also need to secure and monitor privileged users such as DBAs, and database security cannot come at the cost of overall application performance.

Our client conversation about the FGA policy above concluded that while the policy could work, given the variety of different database connections, a better solution would be to utilize a variation of the policy above along with Splunk, which they already own.

If you have questions about the sample FGA policy above or about database security, please contact us at info@integrigy.com.

References

Tags: Auditing, Sensitive Data, HIPAA, Oracle E-Business Suite
Categories: APPS Blogs, Security Blogs

Oracle Queryable Patch Interface

Pakistan's First Oracle Blog - Thu, 2015-02-12 18:39
Starting with Oracle 12c, patching information can be obtained from within SQL. A new package, DBMS_QOPATCH, offers some really nifty procedures to get the patch information. Some of that information is shared below:




To get patch information from the inventory:


SQL> select xmltransform(dbms_qopatch.get_opatch_install_info, dbms_qopatch.get_opatch_xslt) from dual;


Oracle Home      : /u01/app/oracle/product/12.1.0/db_1
Inventory      : /u01/app/oraInventory

The following is the equivalent of the opatch lsinventory command at the OS level:

SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;


Oracle Querayable Patch Interface 1.0
--------------------------------------------------------------------------------
Oracle Home      : /u01/app/oracle/product/12.1.0/db_1
Inventory      : /u01/app/oraInventory
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Database 12c                       12.1.0.1.0
Installed Products ( 131)

Oracle Database 12c                        12.1.0.1.0
Sun JDK                             1.6.0.37.0
oracle.swd.oui.core.min                     12.1.0.1.0
Installer SDK Component                     12.1.0.1.0
Oracle One-Off Patch Installer                    12.1.0.1.0
Oracle Universal Installer                    12.1.0.1.0
Oracle USM Deconfiguration                    12.1.0.1.0
Oracle Configuration Manager Deconfiguration            10.3.1.0.0
Oracle RAC Deconfiguration                    12.1.0.1.0
Oracle DBCA Deconfiguration                    12.1.0.1.0
Oracle Database Plugin for Oracle Virtual Assembly Builder  12.1.0.1.0
Oracle Configuration Manager Client                10.3.2.1.0
Oracle Net Services                        12.1.0.1.0
Oracle Database 12c                        12.1.0.1.0
Oracle OLAP                            12.1.0.1.0
Oracle Spatial and Graph                    12.1.0.1.0
Oracle Partitioning                        12.1.0.1.0
Enterprise Edition Options                    12.1.0.1.0


Interim patches:
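
The package offers more than the two calls shown above. For example, here is a sketch of listing the bugs fixed by installed patches and checking for one specific one-off patch (the patch number below is just a placeholder; the exact output will vary by environment):

SQL> select xmltransform(dbms_qopatch.get_opatch_bugs, dbms_qopatch.get_opatch_xslt) from dual;

SQL> select xmltransform(dbms_qopatch.is_patch_installed('1234567'), dbms_qopatch.get_opatch_xslt) from dual;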

Categories: DBA Blogs

Multiple Utilities Pack versions on Self Update

Anthony Shorten - Thu, 2015-02-12 15:31

Customers of the Oracle Application Management Pack for Oracle Utilities will notice that there are multiple entries for the pack within Self Update. Let me clarify:

  • The plugin marked below, named Oracle Utilities Application, is the correct pack to install and use. This is the new add-on version of the pack with the latest functionality.
  • The plugin named Oracle Utilities is the original version of the pack. Whilst its version number is higher than the new pack's, it is only listed so that customers of that version have access to the software.

Self Update

In the future, the Oracle Utilities entries will disappear and only the Oracle Utilities Application entries will remain.

In short, use the Oracle Utilities Application plugin not the Oracle Utilities plugins.

How do I ...

Tim Dexter - Thu, 2015-02-12 15:23

An email came in this morning to an internal mailing list,

We have an Essbase customer with some reporting requirements and we are evaluating BI Publisher as the potential solution. I'd like to ask for your help with any document, blog or white paper with guidelines about using BI Publisher with Essbase as the main data source.

Is there any tutorial showing how to use BI Publisher with Essbase as the main data source?

There is not one to my knowledge but trying to be helpful I came up with the following response

I'll refer to the docs ...
First set up your connection to Essbase
http://docs.oracle.com/cd/E28280_01/bi.1111/e22255/data_sources.htm#BIPAD294
Then create your data model using that Essbase connection
http://docs.oracle.com/cd/E28280_01/bi.1111/e22258/create_data_sets.htm#BIPDM404
Use the MDX query builder to create the query or write it yourself (lots of fun :)
http://docs.oracle.com/cd/E28280_01/bi.1111/e22258/create_data_sets.htm#BIPDM431
Add parameters (optional)
http://docs.oracle.com/cd/E28280_01/bi.1111/e22258/add_params_lovs.htm#BIPDM306
Then build layouts for your Essbase query
http://docs.oracle.com/cd/E28280_01/bi.1111/e22254/toc.htm
annnnd you're done :)

Simple, right? Well simple in its format but it required me to know the basic steps to build said report and then where to find the appropriate pages in the doc for the links. Leslie saw my reply and commented on how straightforward it was and how our docs are more like reference books than 'how to's.' This got us thinking. I have noticed that the new 'cloud' docs have How do I ... sections where a drop down will then show maybe 10 tasks associated with the page I'm on right now in the application.

Getting that help functionality into BIP is going to take a while. We thought, in the meantime, we could carve out a section on the blog for just such content. Here's where you guys come in. What do you want to know how to do? Suggestions in the comments pleeeease!

Categories: BI & Warehousing

Considerations about SQL Server database files placement, Netapp storage and SnapManager

Yann Neuhaus - Thu, 2015-02-12 14:06

When you install SQL Server, you have to consider how to place the database files. At this point you will probably turn to the well-known best practices and guidelines provided by Microsoft, but are you really aware of the storage vendor's guidelines? Let's talk about it in this blog post.

A couple of weeks ago, I had the opportunity to implement a new SQL Server architecture on a NetApp FAS 3140 storage array. The particularity here is that my customer wants to manage his backups with NetApp SnapManager for SQL Server. This detail is very important because in this case we have to meet some requirements about the storage layout. After reading the NetApp storage documents, I was ready to begin placing my database files.

First of all, we should know how SnapManager works. SnapManager performs SQL Server backups by using either a snapshot of the database files or by issuing streaming backups. Streaming backups concern only the system database files and the transaction log files. To achieve this task, SnapManager coordinates several components, such as SnapDrive for Windows to control the storage LUNs and SQL Server to freeze IO by using the VSS framework. It is important to note that performing snapshots from a pure storage perspective, without SnapManager, is still possible, but in this case the storage layer is not aware of the database file placement. In such a situation we may find that a database is not consistent. Here is a simplified scheme of what happens during the SnapManager backup process:

 

SnapManager (1) --> VDI (2) - - backup database .. with snapshot --> VSS (3) - database files IO freeze  --> SnapDrive (3) - snapshot creation

 

So why do we have to take care about database file placement with NetApp storage? Well, according to our first explanation above, if we perform snapshot backups the unit of work is in fact the volume (FlexVol is probably the better term here. FlexVol is a kind of virtual storage pool to manage physical storage efficiently and can include one or more LUNs). In other words, when a snapshot is performed by SnapManager, all related LUNs are considered together and all concerned database files in a single volume are frozen when the snapshot is created. Notice that if a database is spread across several volumes, IO is frozen for all concerned volumes at the same time, but snapshots are taken serially.

This concept is very important, and as database administrators we now also have to deal with these additional constraints (which I will describe below) when placing our database files with regard to the storage layout and the RPO / RTO needs.

1- System database files must be placed on a dedicated volume including their respective transaction log files. Only streaming backups are performed for system databases

2- Tempdb database files must also be placed on a dedicated volume that will be excluded from SnapManager backup operations.

3- Otherwise, the placement of user databases depends on several factors, such as their size or the storage layout architecture in place. For example, we may encounter cases with small databases that share all their database files on the same volume. We may also encounter cases where we have to deal with the placement of large database files. Indeed, large databases often include different filegroups with several database files. In this case, spreading database files over different LUNs may be a performance best practice, and we have to design the storage with different LUNs that share the same dedicated volume. Also note that the database transaction log files have to be on a separate volume (remember that transaction log files are concerned by streaming backups).

4- In consolidated scenario, we may have several SQL Server instances on the same server with database files on the same volume or we may have dedicated volumes for each instance.

Regarding these constraints, keep in mind that your placement strategy may present some advantages but also some drawbacks:

For example, restoring a database on a dedicated volume from a snapshot will be quicker than using streaming backups, but at the same time we may quickly reach the maximum number of drive letters we can use if we dedicate one or several volumes per database. Imagine that you have 5 databases, each spread across 4 volumes (system, tempdb, data and logs)... I'll let you do the math. A possible solution is to replace the letter-oriented volume identification with mount points.

At the same time, sharing volumes between several databases makes the in-place restore of a single database more difficult, because in this case we will have to copy data from a mounted snapshot back into the active file system. And increasing the number of shared files on a single volume may cause timeout issues during the snapshot operation. According to the NetApp Knowledge Base, the maximum recommended number of databases on a single volume is 35. If you want to implement a policy that verifies that the number of database files stays below this threshold, please see the blog post by my colleague Stéphane Haby. The number of files concerned by a snapshot creation, the size of the volumes and the storage performance are all factors to consider when placing database files.
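
As a quick sanity check against such thresholds, the current file placement can be reviewed from each instance with a query along the following lines (a sketch only; the LEFT(physical_name, 3) grouping assumes drive-letter volumes and would need adjusting for mount points):

select left(mf.physical_name, 3)      as volume,
       count(distinct mf.database_id) as databases_on_volume,
       count(*)                       as database_files,
       sum(mf.size) * 8 / 1024        as total_size_mb   -- size is in 8 KB pages
from   sys.master_files as mf
group  by left(mf.physical_name, 3)
order  by databases_on_volume desc;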

After digesting all these considerations, let me briefly present my context: 3 SQL Server instances on a consolidated server with the following storage configuration. We designed the storage layout as follows:

 

blog_30_1_storage_layout_architecture

 

One volume for the system databases, one volume for tempdb, one volume for user database data files, one volume for user database log files and finally one volume dedicated to the SnapInfo. The SnapInfo volume is used by SnapManager to store streaming backups and some metadata.

But according to what I said earlier, my configuration has some caveats. Restoring only one database in place is still possible, but because we are actually sharing database files between several databases and several SQL Server instances, we will force SnapManager to copy data from a mounted snapshot back into the active file system. We may also face the timeout threshold issue (10 seconds, hard-coded into the VSS framework) during the database IO freeze step. Finally, we cannot exclude issues caused by the limit of 35 databases on a single volume, because I am in a consolidated environment scenario. Fortunately, my customer is aware of all these limitations and is waiting for the upgrade of its physical storage layout. We plan to redesign the final storage layout at that time. Until then, the number of databases should not be a problem.

See you!

California Community College OEI Selects LMS Vendor

Michael Feldstein - Thu, 2015-02-12 13:53

By Phil Hill

The Online Education Initiative (OEI) for California’s Community College System has just announced its vendor selection for a Common Course Management System (CCMS)[1]. For various reasons I cannot provide any commentary on this process, so I would prefer to simply direct people to the OEI blog site. Update: To answer some questions, the reason I cannot comment is that CCC is a MindWires client, and I facilitated the meetings. Based on this relationship we have a non-disclosure agreement with OEI.

Here is the full announcement.

The California Community Colleges (CCC) Online Education Initiative (OEI) announced its intent to award Instructure Inc. the contract to provide an online course management system and related services to community colleges statewide.

Support for Instructure’s Canvas system was nearly unanimous among the OEI’s Common Course Management System (CCMS) Committee members, with overwhelming support from student participants, officials said. Canvas is a course management platform that is currently being used by more than 1,000 colleges, universities and school districts across the country.

“Both the students and faculty members involved believed that students would be most successful using the Canvas system,” said OEI Statewide Program Director Steve Klein. “The student success element was a consistent focus throughout.”

The announcement includes some information on the process as well.

A 55-member selection committee participated in the RFP review that utilized an extensive scoring rubric. The decision-making process was guided by and included the active involvement of the CCMS Committee, which is composed of the CCMS Workgroup of the OEI Steering Committee, the members of OEI’s Management Team, and representatives from the eight Full Launch Pilot Colleges, which will be the first colleges to test and deploy the CCMS tool.

The recommendation culminated an extremely thorough decision-making process that included input from multiple sources statewide, and began with the OEI’s formation of a CCMS selection process in early 2014. The selection process was designed to ensure that a partner would be chosen to address the initiative’s vision for the future.

  1. Note that this is an Intent to Award, not yet a contract.

The post California Community College OEI Selects LMS Vendor appeared first on e-Literate.

MongoDB 3.0

DBMS2 - Thu, 2015-02-12 13:44

Old joke:

  • Question: Why do policemen work in pairs?
  • Answer: One to read and one to write.

A lot has happened in MongoDB technology over the past year. For starters:

  • The big news in MongoDB 3.0* is the WiredTiger storage engine. The top-level claims for that are that one should “typically” expect (individual cases can of course vary greatly):
    • 7-10X improvement in write performance.
    • No change in read performance (which however was boosted in MongoDB 2.6).
    • ~70% reduction in data size due to compression (disk only).
    • ~50% reduction in index size due to compression (disk and memory both).
  • MongoDB has been adding administration modules.
    • A remote/cloud version came out with, if I understand correctly, MongoDB 2.6.
    • An on-premise version came out with 3.0.
    • They have similar features, but are expected to grow apart from each other over time. They have different names.

*Newly-released MongoDB 3.0 is what was previously going to be MongoDB 2.8. My clients at MongoDB finally decided to give a “bigger” release a new first-digit version number.

To forestall confusion, let me quickly add:

  • MongoDB acquired the WiredTiger product and company, and continues to sell the product on a standalone basis, as well as bundling a version into MongoDB. This could cause confusion because …
  • … the standalone version of WiredTiger has numerous capabilities that are not in the bundled MongoDB storage engine.
  • There’s some ambiguity as to when MongoDB first “ships” a feature, in that …
  • … code goes to open source with an earlier version number than it goes into the packaged product.

I should also clarify that the addition of WiredTiger is really two different events:

  • MongoDB added the ability to have multiple plug-compatible storage engines. Depending on how one counts, MongoDB now ships two or three engines:
    • Its legacy engine, now called MMAP v1 (for “Memory Map”). MMAP continues to be enhanced.
    • The WiredTiger engine.
    • A “please don’t put this immature thing into production yet” memory-only engine.
  • WiredTiger is now the particular storage engine MongoDB recommends for most use cases.

I’m not aware of any other storage engines using this architecture at this time. In particular, last I heard TokuMX was not an example. (Edit: Actually, see Tim Callaghan’s comment below.)

Most of the issues in MongoDB write performance have revolved around locking, the story on which is approximately:

  • Until MongoDB 2.2, locks were held at the process level. (One MongoDB process can control multiple databases.)
  • As of MongoDB 2.2, locks were held at the database level, and some sanity was added as to how long they would last.
  • As of MongoDB 3.0, MMAP locks are held at the collection level.
  • WiredTiger locks are held at the document level. Thus MongoDB 3.0 with WiredTiger breaks what was previously a huge write performance bottleneck.

In understanding that, I found it helpful to do a partial review of what “documents” and so on in MongoDB really are.

  • A MongoDB document is somewhat like a record, except that it can be more like what in a relational database would be all the records that define a business object, across dozens or hundreds of tables.*
  • A MongoDB collection is somewhat like a table, although the documents that comprise it do not need to each have the same structure.
  • MongoDB documents want to be capped at 16 MB in size. If you need one bigger, there’s a special capability called GridFS to break it into lots of little pieces (default chunk size = 255 KB) while treating it as a single document logically.

*One consequence — MongoDB’s single-document ACID guarantees aren’t quite as lame as single-record ACID guarantees would be in an RDBMS.

By the way:

  • Row-level locking was a hugely important feature in RDBMS about 20 years ago. Sybase’s lack of it is a big part of what doomed them to second-tier status.
  • Going forward, MongoDB has made the unsurprising marketing decision to talk about “locks” as little as possible, relying instead on alternate terms such as “concurrency control”.

Since its replication mechanism is transparent to the storage engine, MongoDB allows one to use different storage engines for different replicas of data. Reasons one might want to do this include:

  • Fastest persistent writes (WiredTiger engine).
  • Fastest reads (wholly in-memory engine).
  • Migration from one engine to another.
  • Integration with some other data store. (Imagine, for example, a future storage engine that works over HDFS. It probably wouldn’t have top performance, but it might make Hadoop integration easier.)

In theory one can even do a bit of information lifecycle management (ILM), by using different storage engines for different subsets of database, by:

  • Pinning specific shards of data to specific servers.
  • Using different storage engines on those different servers.

That said, similar stories have long been told about MySQL, and I’m not aware of many users who run multiple storage engines side by side.

The MongoDB WiredTiger option is shipping with a couple of options for block-level compression (plus prefix compression that is being used for indexes only). The full WiredTiger product also has some forms of columnar compression for data.

One other feature in MongoDB 3.0 is the ability to have 50 replicas of data (the previous figure was 12). MongoDB can’t think of a great reason to have more than 3 replicas per data center or more than 2 replicas per metropolitan area, but some customers want to replicate data to numerous locations around the world.

Related link

Categories: Other

Oracle Priority Support Infogram for 12-FEB-2015

Oracle Infogram - Thu, 2015-02-12 13:34

RDBMS
Management database in 12c release 12.1.0.2, from the Oracle Database Know How blog.
ZFS
From The Wonders of ZFS Storage: Thin Cloning of PDBs is a Snap with Oracle Storage
MAF
How-to open and close a popup dialog from Java in MAF, from the The Oracle Mobile Platform Blog.
WebLogic
Dynamic UOO - Oracle Service Bus, from Gita’s Blog.
Suppressing ADF LOV Like Operator Filtering V2, from the WebLogic Partner Community EMEA blog.
And from the same blog: JDeveloper Debugging Tips (Watchpoints & Beep)
OVM
From the Simon Cotter Blog: Oracle VM 3.3.2 Officialy released!
OUD
A couple of good OUD items from Sylvain Duloutre’s Weblog:
How to lock every account in a LDAP subtree with OUD
Sudden SSLv3-related errors in OUD explained
SOA
SOA Suite 12c New Features - Coherence Adapter, from the SOA & BPM Partner Community Blog.
And SOA/API developer tool tips, from the same blog.
OBIEE
OBIEE 11g: Troubleshooting Stuck Threads or Hangs, from Business Analytics - Proactive Support.
And from the same blog: Hyperion Financial Management and Oracle EPM system New Version 11.1.2.4
BI
Extracting Useful Information From Human Language Text, from BI & EPM Partner Community EMEA.
Fusion
Is Your IT Private PaaS Ready? Take This 10-minute Assessment to Find Out, from Oracle Fusion Middleware.
Retail
Announcing Oracle Retail 14.1 Product Overview and TOI Training Sessions as well as the Solutions Enablement Enterprise Version 14.1 – Guide to Enablement, from Oracle Retail Documentation.
EBS
From the Oracle E-Business Suite Support Blog:
Webcast: Demystifying Service Items Selling in Order Management
Webcast: Everything You Need To Know About Receivables Aging Reports in Release 12
Cycle Count No Data Found? Find Your Data!
Are you Running the Approval Analyzer and Getting an Error?
From the Oracle E-Business Suite Technology blog:
Database 12.1.0.2 Certified with E-Business Suite 11i
EBS 12.2 Certified on Microsoft Windows Server 2012 R2
Internet Explorer 11 Certified with E-Business Suite 11.5.10.2
Reminder: Upgrade Database 10.2.0.5 Before July 2015
…And Finally
Just a personal pet peeve here. I hate dongles, exclusivity marketing (i.e. monopolies), DRM, etc., so Keurig’s difficulties after releasing a coffee maker that is a step down the evolutionary ladder please me. I just bought a version 1.0 Keurig, my second maker. My next maker will almost certainly come from another company. Who knows, maybe the Keurig 2.0 will be such a failure it becomes a collector’s item like Ford Edsels and the new Coke. Probably not, though. People just want to drink coffee.

Keurig's attempt to 'DRM' its coffee cups totally backfired

Essbase VB API is Officially Dead

Tim Tow - Thu, 2015-02-12 11:06
It is with a sad heart that I bring you the news that, as of Essbase 11.1.2.4, the Essbase VB API is officially dead.  I cut my teeth in Essbase working with that API way back in the mid-1990's, but the writing had been on the wall for some time.  Microsoft stopped supporting VB years ago, so it was only a matter of time before this day would come.  That being said, I haven't used the VB API for any new Essbase work since about 2001; the Essbase Java API is alive and growing, so my efforts have been there.

Here is the official notification:













You can read the notification yourself in the Essbase 11.1.2.4 Readme.
Categories: BI & Warehousing

Customer2Cloud Program removes barriers to Cloud Adoption

Modern cloud applications can help lower the cost of IT ownership, accelerate the pace of innovation, and vastly improve users’ experiences. Business leaders acknowledge this, but the journey to...

Categories: DBA Blogs