
Feed aggregator

Notes on HBase

DBMS2 - Tue, 2015-03-10 12:24

I talked with a couple of Cloudera folks about HBase last week. Let me frame things by saying:

  • The closest thing to an HBase company, a la MongoDB/MongoDB or DataStax/Cassandra, is Cloudera.
  • Cloudera still uses a figure of 20% of its customers being HBase-centric.
  • HBaseCon and so on notwithstanding, that figure isn’t really reflected in Cloudera’s marketing efforts. Cloudera’s marketing commitment to HBase has never risen to nearly the level of MongoDB’s or DataStax’s push behind their respective core products.
  • With Cloudera’s move to “zero/one/many” pricing, Cloudera salespeople have little incentive to push HBase hard to accounts other than HBase-first buyers.

Also:

  • Cloudera no longer dominates HBase development, if it ever did.
    • Cloudera is the single biggest contributor to HBase, by its count, but doesn’t make a majority of the contributions on its own.
    • Cloudera sees Hortonworks as having become a strong HBase contributor.
    • Intel is also a strong contributor, as are end user organizations such as Chinese telcos. Not coincidentally, Intel was a major Hadoop provider in China before the Intel/Cloudera deal.
  • As far as Cloudera is concerned, HBase is just one data storage technology of several, focused on high-volume, high-concurrency, low-latency short-request processing. Cloudera thinks this is OK because of HBase’s strong integration with the rest of the Hadoop stack.
  • Others who may be inclined to disagree are in several cases doing projects on top of HBase to extend its reach. (In particular, please see the discussion below about Apache Phoenix and Trafodion, both of which want to offer relational-like functionality.)

Cloudera’s views on HBase history — in response to the priorities I brought to the conversation — include:

  • HBase initially favored consistency over performance/availability, while Cassandra initially favored the opposite choice. Both products, however, have subsequently become more tunable in those tradeoffs.
  • Cloudera’s initial contributions to HBase focused on replication, disaster recovery and so on. I guess that could be summarized as “scaling”.
  • Hortonworks’ early HBase contributions included (but were not necessarily limited to):
    • Making recovery much faster (10s of seconds or less, rather than minutes or more).
    • Some of that consistency vs. availability tuning.
  • “Coprocessors” were added to HBase ~3 years ago, to add extensibility, with the first use being in security/permissions.
  • With more typical marketing-oriented version numbers:
    • HBase .90, the first release that did a good job on durability, could have been 1.0.
    • HBase .92 and .94, which introduced coprocessors, could have been Version 2.
    • HBase .96 and .98 could have been Version 3.
    • The recent HBase 1.0 could have been 4.0.

The HBase roadmap includes:

  • A kind of BLOB/CLOB (Binary/Character Large OBject) support.
    • Intel is heavily involved in this feature.
    • The initial limit is 10 megabytes or so, due to some limitations in the API (I didn’t ask why that made sense). This happens to be all the motivating Chinese customer needs for the traffic photographs it wants to store.
  • Various kinds of “multi-tenancy” support (multi-tenancy is one of those terms whose meaning is getting stretched beyond recognition), including:
    • Mixed workload support (short-request and analytic) on the same nodes.
    • Mixed workload support on different nodes in the same cluster.
    • Security between different apps in the same cluster.
  • (Still in the design phase) Bottleneck Whack-A-Mole, with goals including but not limited to:
    • Scale-out beyond the current assumed limit of ~1200 nodes.
    • More predictable performance, based on smaller partition sizes.
  • (Possibly) Multi-data-center fail-over.

Not on the HBase roadmap per se are global/secondary indexes. Rather, we talked about projects on top of HBase which are meant to provide those. One is Apache Phoenix, which supposedly:

  • Makes it simple to manage compound keys (e.g., City/State/ZipCode); a brief DDL sketch follows this list.
  • Provides global secondary indexes (but not in a fully ACID way).
  • Offers some very basic JOIN support.
  • Provides a JDBC interface.
  • Offers efficiencies in storage utilization, scan optimizations, and aggregate calculations.
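
To make the compound-key and secondary-index points above concrete, here is a minimal sketch of Phoenix DDL and a query. The table, columns, and index name are hypothetical, and this is generic Phoenix SQL rather than anything discussed in the conversation:

-- Hypothetical table whose HBase row key is the compound key (state, city, zip_code)
CREATE TABLE address (
    state      CHAR(2) NOT NULL,
    city       VARCHAR NOT NULL,
    zip_code   CHAR(5) NOT NULL,
    population BIGINT
    CONSTRAINT pk PRIMARY KEY (state, city, zip_code));

-- Global secondary index on zip_code, covering population so the query below
-- can be answered from the index table alone
CREATE INDEX idx_address_zip ON address (zip_code) INCLUDE (population);

-- Issued through Phoenix's JDBC driver like any other SQL statement
SELECT city, population FROM address WHERE zip_code = '94065';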

Another such project is Trafodion — supposedly the Welsh word for “transaction” — open sourced by HP. This seems to be based on NonStop SQL and Neoview code, which counter-intuitively have always been joined at the hip.

There was a lot more to the conversation, but I’ll stop here for two reasons:

  • This post is pretty long already.
  • I’m reserving some of the discussion until after I’ve chatted with vendors of other NoSQL systems.

Related link

  • My July 2011 post on HBase offers context, as do the comments on it.
Categories: Other

Some stuff on my mind, March 10, 2015

DBMS2 - Tue, 2015-03-10 10:27

I found yesterday’s news quite unpleasant.

  • A guy I knew and had a brief rivalry with in high school died of colon cancer, a disease that I’m at high risk for myself.
  • GigaOm, in my opinion the best tech publication — at least for my interests — shut down.
  • The sex discrimination trial around Kleiner Perkins is undermining some people I thought well of.

And by the way, a guy died a few days ago snorkeling at the same resort I like to go to, evidently doing less risky things than I on occasion have.

So I want to unclutter my mind a bit. Here goes.

1. There are a couple of stories involving Sam Simon and me that are too juvenile to tell on myself, even now. But I’ll say that I ran for senior class president, in a high school where the main way to campaign was via a single large poster, against a guy with enough cartoon-drawing talent to be one of the creators of the Simpsons. Oops.

2. If one suffers from ulcerative colitis as my mother did, one is at high risk of getting colon cancer, as she also did. Mine isn’t as bad as hers was, due to better tolerance for medication controlling the disease. Still, I’ve already had a double-digit number of colonoscopies in my life. They’re not fun. I need another one soon; in fact, I canceled one due to the blizzards.

Pro-tip — never, ever have a colonoscopy without some kind of anesthesia or sedation. Besides the unpleasantness, the lack of meds increases the risk that the colonoscopy will tear you open and make things worse. I learned that the hard way in New York in the early 1980s.

3. Five years ago I wrote optimistically about the evolution of the information ecosystem, specifically using the example of the IT sector. One could argue that I was right. After all: 

  • Gartner still seems to be going strong.
  • O’Reilly, Gartner and vendors probably combine to produce enough good conferences.
  • A few traditional journalists still do good work (in the areas covered by this blog, Doug Henschen comes to mind).
  • A few vendor folks are talented and responsible enough to add to the discussion. A few small-operation folks — e.g. me — are still around.

Still, the GigaOm news is not encouraging.

4. As TechCrunch and Pando reported, plaintiff Ellen Pao took the stand and sounded convincing in her sexual harassment suit against Kleiner Perkins (but of course she hadn’t been cross-examined yet). Apparently there was a major men-only party hosted by partner Al Gore, a candidate I first supported in 1988. And partner Ray Lane, somebody who at Oracle showed tremendous management effectiveness, evidently didn’t do much to deal with Pao’s situation.

Blech.

At some point I want to write about a few women who were prominent in my part of the tech industry in the 1980s — at least Ann Winblad, Esther Dyson, and Sandy Kurtzig, maybe analyst/investment banker folks Cristina Morgan and Ruthann Quindlen as well. We’ve come a long way since those days (when, in particular, I could briefly list a significant fraction of the important women in the industry). There seems to be a lot further yet to go.

5. All that said — I’m indeed working on some cool stuff. Some is evident from recent posts. Others may be reflected in an upcoming set of posts that focus on NoSQL, business intelligence, and — I hope — the intersection of the two areas.

6. Speaking of recent posts, I did one on marketing for young companies that brings a lot of advice and tips together. I think it’s close to being a must-read.

Categories: Other

Loading CSV files with special characters in Oracle DB

Dimitri Gielis - Tue, 2015-03-10 10:08
I often need to load the data of Excel or CSV files into the Oracle Database.

Ever got those annoying question marks when you try to load the data? Or, instead of question marks, do you just get empty blanks when the file uses special characters? Here's an example:


My database character set is UTF-8, so ideally you want to load your data UTF-8 encoded.
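
If you're not sure what your database expects, the character set is easy to check; a quick query against the standard NLS data dictionary view (nothing specific to this example):

select parameter, value
  from nls_database_parameters
 where parameter in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
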
With Excel I've not found an easy way to specify UTF-8 encoding when saving to a CSV file. Although in Excel (OSX) - Preferences - General - Web Options - Encoding I specified UTF-8, it still saves the file as Western (Mac OS Roman).
I have two workarounds to get around the issue. The first is to open the file in a text editor, e.g. BBEdit, click the encoding option and select UTF-8.

Another way is to open Terminal and use the iconv command line tool to convert the file:

iconv -t UTF8 -f MACROMAN < file.csv > file-utf8.csv

If you get a CSV file and you want to import it in Excel first, the best way I found is to create a new Workbook and import the CSV file (instead of opening it directly). You can import either by using File - Import or Data - Get External Data - Import Text File. During the import you can specify the File origin and you can see which data format works for you.


After the manipulations in Excel you can save it again as CSV as outlined above to make sure your resulting CSV file is UTF-8 encoded.
Finally to import the data you can use APEX, SQL Developer or SQLcl to load your CSV file into your table.
Categories: Development

PeopleTools 8.54: Performance Monitor Enhancements

David Kurtz - Tue, 2015-03-10 04:09
This is part of a series of articles about new features and differences in PeopleTools 8.54 that will be of interest to the Oracle DBA.
Transaction History Search Component
There are a number of changes:
  • You can specify multiple system identifiers.  For example, you might be monitoring Portal, HR and CRM.  Now you can search across all of them in a single search.
    • It has always been the case that when you drill into the Performance Monitoring Unit (PMU), by clicking on the tree icon, you would see the whole of a PMU that invoked services from different systems.
  • You can also specify multiple transaction types, rather than have to search each transaction type individually.
This is a useful enhancement when searching for a specific transaction or a small number of transactions.  However, I do not think it will save you from having to query the underlying transactions table.
PPM Archive Process
The PPM archive process (PSPM_ARCHIVE) has been significantly rewritten in PeopleTools 8.54.  In many places, it still uses this expression to identify rows to be archived or purged:
%DateTimeDiff(X.PM_MON_STRT_DTTM, %CurrentDateTimeIn) >= (PM_MAX_HIST_AGE * 24 * 60)
This expands to
ROUND((CAST(( CAST(SYSTIMESTAMP AS TIMESTAMP)) AS DATE) - CAST((X.PM_MON_STRT_DTTM) AS DATE)) * 1440, 0)
   >= (PM_MAX_HIST_AGE * 24 *  60)
which has no chance of using an index.  This used to cause performance problems when the archive process had not been run for a while and the high water marks on the history tables had built up.

Now the archive process works hour by hour, and this will use the index on the timestamp column:
"... AND X.PM_MON_STRT_DTTM <= SYSDATE - PM_MAX_HIST_AGE 
and (PM_MON_STRT_DTTM) >= %Datetimein('" | DateTimeValue(&StTime) | "')
and (PM_MON_STRT_DTTM) <= %DateTimeIn('" | DateTimeValue(&EndTime) | "')"
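If you want to confirm that this range predicate can use an index on the timestamp column in your own environment, an EXPLAIN PLAN is enough.  A minimal sketch, assuming the transaction history table PSPMTRANSHIST and an arbitrary one-hour window (adjust the names to your system):
EXPLAIN PLAN FOR
SELECT COUNT(*)
  FROM pspmtranshist
 WHERE pm_mon_strt_dttm >= TO_DATE('2015-03-09 00:00', 'YYYY-MM-DD HH24:MI')
   AND pm_mon_strt_dttm <  TO_DATE('2015-03-09 01:00', 'YYYY-MM-DD HH24:MI');

SELECT * FROM TABLE(dbms_xplan.display);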
Tuxedo Queuing
Since Performance Monitor was first introduced, event 301 has never reported the length of the inbound message queues in Tuxedo.  The reported queue length was always zero.  This may have been fixed in PeopleTools 8.53, but I have only just noticed it.
Java Management Extensions (JMX) Support
There have been some additions to Performance Monitor that suggest that it will be possible to extract performance metrics using JMX.  The implication is that the Oracle Enterprise Manager Application Management Pack for PeopleSoft will be able to do this.  However, so far I haven't found any documentation. The new component is not mentioned in the PeopleTools 8.54: Performance Monitor documentation.
  • New Table
    • PS_PTPMJMXUSER - keyed on PM_AGENTID
  • New Columns
    • PSPMSYSDEFAULTS - PTPHONYKEY.  So far I have only seen it set to 0.
    • PSPMAGENT - PM_JMX_RMI_PORT.  So far I have only seen it set to 1.
  • New Component

    ©David Kurtz, Go-Faster Consultancy Ltd.

    Log Buffer #413, A Carnival of the Vanities for DBAs

    Pythian Group - Mon, 2015-03-09 21:15

    This Log Buffer Edition scours the Internet and brings you some of the freshest blog posts from Oracle, SQL Server and MySQL.

    Oracle:

    Most of Kyle's servers tend to be Linux VMs on VMware ESX without any graphical desktop set up, so it can be disconcerting trying to install Oracle with its graphical "runInstaller" being the gateway we have to cross to achieve installation.

    Working around heartbeat issues caused by tracing or by regexp

    APEX 5 EA Impressions: Custom jQuery / jQuery UI implementations

    Introduction to the REST Service Editor, Generation (PART 2)

    Due to recent enhancements and importance within Oracle’s storage portfolio, StorageTek Storage Archive Manager 5.4 (SAM-QFS) has been renamed to Oracle Hierarchical Storage Manager (Oracle HSM) 6.0.

    SQL Server:

    There are different techniques to optimize the performance of SQL Server queries, but wouldn't it be great if we had some recommendations before we started planning or optimizing queries so that we didn't have to start from scratch every time? This is where you can use the Database Engine Tuning Advisor utility to get recommendations based on your workload.

    Data Mining Part 25: Microsoft Visio Add-Ins

    Stairway to Database Source Control Level 3: Working With Others (Centralized Repository)

    SQL Server Hardware will provide the fundamental knowledge and resources you need to make intelligent decisions about choice, and optimal installation and configuration, of SQL Server hardware, operating system and the SQL Server RDBMS.

    Questions About SQL Server Transaction Log You Were Too Shy To Ask

    MySQL:

    The post shows how you can easily read the VCAP_SERVICES PostgreSQL credentials within your Java code using the Maven repo. This assumes you're using the ElephantSQL PostgreSQL service. A single connection won't be ideal, but for demo purposes it might just be all you need.

    MariaDB 5.5.42 Overview and Highlights

    How to test if CVE-2015-0204 FREAK SSL security flaw affects you

    Using master-master for MySQL? To be frank, we need to get rid of that architecture. We skip the active-active setup and show why master-master, even for failover reasons, is the wrong decision.

    Resources for Highly Available Database Clusters: ClusterControl Release Webinar, Support for Postgres, New Website and More

    Categories: DBA Blogs

    Recovering an Oracle Database with Missing Redo

    Pythian Group - Mon, 2015-03-09 21:14
    Background

    I ran into a situation where we needed to recover from an old online backup which (due to some issues with the RMAN "KEEP" command) was missing the archived redo log backups/files needed to make the backup consistent.  The client wasn't concerned about data that changed during the backup; they were interested in checking some very old data from long before this online backup had started.

    Visualizing the scenario using a timeline (not to scale):

      |-------|------------------|---------|------------------|
      t0      t1                 t2        t3                 t4
              Data is added                                   Present
    

    The client thought that some data had become corrupted and wasn't sure when, but knew that it wasn't recent, so the flashback technologies were not an option.  Hence they wanted a restore of the database onto a new temporary server as of time t1, which was in the distant past.

    An online (hot) backup was taken between t2 and t3 and was considered to be old enough or close enough to t1; however, the problem was that all archived redo log backups were missing. The client was certain that the particular data they were interested in would not have changed during the online backup.

    Hence the question is: without the necessary redo data to make the online backup consistent (between times t2 and t3) can we still open the database to extract data from prior to when the online backup began?  The official answer is “no” – the database must be made consistent to be opened.  And with an online backup the redo stream is critical to making the backed up datafiles consistent.  So without the redo vectors in the redo stream, the files cannot be made consistent with each other and hence the database cannot be opened.  However the unofficial, unsupported answer is that it can be done.

    This article covers the unsupported and unofficial methods for opening a database with consistency corruption so that certain data can be extracted.

    Other scenarios can lead to the same situation.  Basically this technique can be used to open the Oracle database any time the datafiles cannot be made consistent.

     

    Demo Setup

    To illustrate the necessary steps I've set up a test 12c non-container database called NONCDB.  And to simulate user transactions against it I ran a light workload using the Swingbench Order Entry (SOE) benchmark from another computer in the background.

    Before beginning any backups or recoveries I added two simple tables to the SCOTT schema and some rows to represent the “old” data (with the words “OLD DATA” in the C2 column):

    SQL> create table scott.parent (c1 int, c2 varchar2(16), constraint parent_pk primary key (c1)) tablespace users;
    
    Table created.
    
    SQL> create table scott.child (c1 int, c2 varchar2(16), foreign key (c1) references scott.parent(c1)) tablespace soe;
    
    Table created.
    
    SQL> insert into scott.parent values(1, 'OLD DATA 001');
    
    1 row created.
    
    SQL> insert into scott.parent values(2, 'OLD DATA 002');
    
    1 row created.
    
    SQL> insert into scott.child  values(1, 'OLD DETAILS A');
    
    1 row created.
    
    SQL> insert into scott.child  values(1, 'OLD DETAILS B');
    
    1 row created.
    
    SQL> insert into scott.child  values(1, 'OLD DETAILS C');
    
    1 row created.
    
    SQL> insert into scott.child  values(2, 'OLD DETAILS D');
    
    1 row created.
    
    SQL> commit;
    
    Commit complete.
    
    SQL>
    

     

    Notice that I added a PK-FK referential integrity constraint and placed each table in a different tablespace so they could be backed up at different times.

    These first entries represent my “old data” from time t1.

     

    The Online Backup

    The next step is to perform the online backup.  For simulation purposes I’m adjusting the steps a little bit to try to represent a real life situation where the data in my tables is being modified while the backup is running.  Hence my steps are:

    • Run an online backup of all datafiles except for the USERS tablespace.
    • Add some more data to my test tables (hence data going into the CHILD table is after the SOE tablespace backup and the data into the PARENT table is before the USERS tablespace backup).
    • Record the current archived redo log and then delete it to simulate the lost redo data.
    • Backup the USERS tablespace.
    • Add some post backup data to the test tables.

    The actual commands executed in RMAN are:

    $ rman
    
    Recovery Manager: Release 12.1.0.2.0 - Production on Thu Feb 26 15:59:36 2015
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    RMAN> connect target
    
    connected to target database: NONCDB (DBID=1677380280)
    
    RMAN> backup datafile 1,2,3,5;
    
    Starting backup at 26-FEB-15
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=46 device type=DISK
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00005 name=/u01/app/oracle/oradata/NONCDB/datafile/SOE.dbf
    input datafile file number=00001 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_b2k8dsno_.dbf
    input datafile file number=00002 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_sysaux_b2k8f3d4_.dbf
    input datafile file number=00003 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_undotbs1_b2k8fcdm_.dbf
    channel ORA_DISK_1: starting piece 1 at 26-FEB-15
    channel ORA_DISK_1: finished piece 1 at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp tag=TAG20150226T155942 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:11:16
    Finished backup at 26-FEB-15
    
    Starting Control File and SPFILE Autobackup at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/autobackup/2015_02_26/o1_mf_s_872698259_bgzb0647_.bkp comment=NONE
    Finished Control File and SPFILE Autobackup at 26-FEB-15
    
    RMAN> alter system switch logfile;
    
    Statement processed
    
    RMAN> commit;
    
    Statement processed
    
    RMAN> alter system switch logfile;
    
    Statement processed
    
    RMAN> insert into scott.parent values (3, 'NEW DATA 003');
    
    Statement processed
    
    RMAN> insert into scott.child  values (3, 'NEW DETAILS E');
    
    Statement processed
    
    RMAN> commit;
    
    Statement processed
    
    RMAN> select sequence# from v$log where status='CURRENT';
    
     SEQUENCE#
    ----------
            68
    
    RMAN> alter system switch logfile;
    
    Statement processed
    
    RMAN> alter database backup controlfile to '/tmp/controlfile_backup.bkp';
    
    Statement processed
    
    RMAN> backup datafile 4;
    
    Starting backup at 26-FEB-15
    using channel ORA_DISK_1
    channel ORA_DISK_1: starting full datafile backup set
    channel ORA_DISK_1: specifying datafile(s) in backup set
    input datafile file number=00004 name=/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_users_b2k8gf7d_.dbf
    channel ORA_DISK_1: starting piece 1 at 26-FEB-15
    channel ORA_DISK_1: finished piece 1 at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp tag=TAG20150226T165814 comment=NONE
    channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
    Finished backup at 26-FEB-15
    
    Starting Control File and SPFILE Autobackup at 26-FEB-15
    piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/autobackup/2015_02_26/o1_mf_s_872701095_bgzdrrrh_.bkp comment=NONE
    Finished Control File and SPFILE Autobackup at 26-FEB-15
    
    RMAN> alter database backup controlfile to '/tmp/controlfile_backup.bkp';
    
    Statement processed
    
    RMAN> insert into scott.parent values (4, 'NEW DATA 004');
    
    Statement processed
    
    RMAN> insert into scott.child  values (4, 'NEW DETAILS F');
    
    Statement processed
    
    RMAN> commit;
    
    Statement processed
    
    RMAN> exit
    
    
    Recovery Manager complete.
    $
    

     

    Notice in the above steps that, since I'm using Oracle Database 12c, I'm able to execute normal SQL commands from RMAN – this is an RMAN 12c new feature.

     

    Corrupting the Backup

    Now I’m going to corrupt my backup by removing one of the archived redo logs needed to make the backup consistent:

    SQL> set pages 999 lines 120 trims on tab off
    SQL> select 'rm '||name stmt from v$archived_log where sequence#=68;
    
    STMT
    ------------------------------------------------------------------------------------------------------------------------
    rm /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_68_bgzcnv04_.arc
    
    SQL> !rm /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_68_bgzcnv04_.arc
    
    SQL>
    

     

    Finally I’ll remove the OLD data to simulate the data loss (representing t4):

    SQL> select * from scott.parent order by 1;
    
            C1 C2
    ---------- ----------------
             1 OLD DATA 001
             2 OLD DATA 002
             3 NEW DATA 003
             4 NEW DATA 004
    
    SQL> select * from scott.child order by 1;
    
            C1 C2
    ---------- ----------------
             1 OLD DETAILS A
             1 OLD DETAILS B
             1 OLD DETAILS C
             2 OLD DETAILS D
             3 NEW DETAILS E
             4 NEW DETAILS F
    
    6 rows selected.
    
    SQL> delete from scott.child where c2 like 'OLD%';
    
    4 rows deleted.
    
    SQL> delete from scott.parent where c2 like 'OLD%';
    
    2 rows deleted.
    
    SQL> commit;
    
    Commit complete.
    
    SQL>
    

     

    Attempting a Restore and Recovery

    Now let’s try to recover from our backup on a secondary system so we can see if we can extract that old data.

    After copying over all of the files, the first thing to do is to try a restore as per normal:

    $ rman target=/
    
    Recovery Manager: Release 12.1.0.2.0 - Production on Mon Mar 2 08:40:12 2015
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    connected to target database (not started)
    
    RMAN> startup nomount;
    
    Oracle instance started
    
    Total System Global Area    1577058304 bytes
    
    Fixed Size                     2924832 bytes
    Variable Size                503320288 bytes
    Database Buffers            1056964608 bytes
    Redo Buffers                  13848576 bytes
    
    RMAN> restore controlfile from '/tmp/controlfile_backup.bkp';
    
    Starting restore at 02-MAR-15
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=12 device type=DISK
    
    channel ORA_DISK_1: copied control file copy
    output file name=/u01/app/oracle/oradata/NONCDB/controlfile/o1_mf_b2k8d9nq_.ctl
    output file name=/u01/app/oracle/fast_recovery_area/NONCDB/controlfile/o1_mf_b2k8d9v5_.ctl
    Finished restore at 02-MAR-15
    
    RMAN> alter database mount;
    
    Statement processed
    released channel: ORA_DISK_1
    
    RMAN> restore database;
    
    Starting restore at 02-MAR-15
    Starting implicit crosscheck backup at 02-MAR-15
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=12 device type=DISK
    Crosschecked 4 objects
    Finished implicit crosscheck backup at 02-MAR-15
    
    Starting implicit crosscheck copy at 02-MAR-15
    using channel ORA_DISK_1
    Crosschecked 2 objects
    Finished implicit crosscheck copy at 02-MAR-15
    
    searching for all files in the recovery area
    cataloging files...
    cataloging done
    
    using channel ORA_DISK_1
    
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_b2k8dsno_.dbf
    channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_sysaux_b2k8f3d4_.dbf
    channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_undotbs1_b2k8fcdm_.dbf
    channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/NONCDB/datafile/SOE.dbf
    channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp
    channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T155942_bgz9ol3g_.bkp tag=TAG20150226T155942
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:01:46
    channel ORA_DISK_1: starting datafile backup set restore
    channel ORA_DISK_1: specifying datafile(s) to restore from backup set
    channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/NONCDB/datafile/o1_mf_users_b2k8gf7d_.dbf
    channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp
    channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/NONCDB/backupset/2015_02_26/o1_mf_nnndf_TAG20150226T165814_bgzdrpmk_.bkp tag=TAG20150226T165814
    channel ORA_DISK_1: restored backup piece 1
    channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
    Finished restore at 02-MAR-15
    
    RMAN>
    

     

    Notice that it did restore the datafiles from both the SOE and USERS tablespaces; however, we know that those are inconsistent with each other.

    Attempting to do the recovery should give us an error due to the missing redo required for consistency:

    RMAN> recover database;
    
    Starting recover at 02-MAR-15
    using channel ORA_DISK_1
    
    starting media recovery
    
    archived log for thread 1 with sequence 67 is already on disk as file /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_67_bgzcn05f_.arc
    archived log for thread 1 with sequence 69 is already on disk as file /u01/app/oracle/fast_recovery_area/NONCDB/archivelog/2015_02_26/o1_mf_1_69_bgzdqo9n_.arc
    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_bh914cx2_.dbf'
    
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 03/02/2015 08:44:21
    RMAN-06053: unable to perform media recovery because of missing log
    RMAN-06025: no backup of archived log for thread 1 with sequence 68 and starting SCN of 624986 found to restore
    
    RMAN>
    

     

    As expected we got the dreaded ORA-01547, ORA-01194, ORA-01110 errors meaning that we don’t have enough redo to make the recovery successful.

     

    Attempting a Recovery

    Now the crux of the situation. We’re stuck with the common inconsistency error which most seasoned DBAs should be familiar with:

    Oracle Error:
    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    ORA-01110: data file 1: '/u01/app/oracle/oradata/NONCDB/datafile/o1_mf_system_bh914cx2_.dbf'
    
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of recover command at 03/02/2015 08:44:21
    RMAN-06053: unable to perform media recovery because of missing log
    RMAN-06025: no backup of archived log for thread 1 with sequence 68 and starting SCN of 624986 found to restore

     

    And of course we need to be absolutely positive that we don't have the missing redo somewhere: for example, in an RMAN backup piece on disk or on tape from an archived log backup that can be restored, or possibly still in one of the current online redo logs.  DBAs should explore all possible options for retrieving the missing redo vectors in some form or another before proceeding.
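
    On the source system, a couple of quick checks against the controlfile and the RMAN repository will show whether any copy of the missing sequence (68 in this demo) survives anywhere; a minimal sketch:

    SQL> select sequence#, name, status, deleted
      2    from v$archived_log
      3   where sequence# = 68;

    RMAN> list backup of archivelog sequence 68;
    RMAN> list copy of archivelog sequence 68;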

    However, if we’re absolutely certain of the following we can continue:

    1. We definitely can’t find the missing redo anywhere.
    2. We absolutely need to extract data from prior to the start of the online backup.
    3. Our data definitely wasn’t modified during the online backup.

     

    The natural thing to check first when trying to open the database after an incomplete recovery is the fuzziness and PIT (Point In Time) of the datafiles from SQL*Plus:

    SQL> select fuzzy, status, checkpoint_change#,
      2         to_char(checkpoint_time, 'DD-MON-YYYY HH24:MI:SS') as checkpoint_time,
      3         count(*)
      4    from v$datafile_header
      5   group by fuzzy, status, checkpoint_change#, checkpoint_time
      6   order by fuzzy, status, checkpoint_change#, checkpoint_time;
    
    FUZZY STATUS  CHECKPOINT_CHANGE# CHECKPOINT_TIME        COUNT(*)
    ----- ------- ------------------ -------------------- ----------
    NO    ONLINE              647929 26-FEB-2015 16:58:14          1
    YES   ONLINE              551709 26-FEB-2015 15:59:43          4
    
    SQL>
    

     

    The fact that there are two rows returned and that not all files have FUZZY=NO indicates that we have a problem and that more redo is required before the database can be opened with the RESETLOGS option.

    But our problem is that we don’t have that redo and we’re desperate to open our database anyway.

     

    Recovering without Consistency

    Again, recovering without consistency is not supported and should only be attempted as a last resort.

    Opening the database with the data in an inconsistent state is actually pretty simple.  We simply need to set the “_allow_resetlogs_corruption” hidden initialization parameter and set the undo management to “manual” temporarily:

    SQL> alter system set "_allow_resetlogs_corruption"=true scope=spfile;
    
    System altered.
    
    SQL> alter system set undo_management='MANUAL' scope=spfile;
    
    System altered.
    
    SQL> shutdown abort;
    ORACLE instance shut down.
    SQL> startup mount;
    ORACLE instance started.
    
    Total System Global Area 1577058304 bytes
    Fixed Size                  2924832 bytes
    Variable Size             503320288 bytes
    Database Buffers         1056964608 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
    SQL>
    

     

    Now, will the database open? The answer is still: “probably not”.  Giving it a try we get:

    SQL> alter database open resetlogs;
    alter database open resetlogs
    *
    ERROR at line 1:
    ORA-01092: ORACLE instance terminated. Disconnection forced
    ORA-00600: internal error code, arguments: [2663], [0], [551715], [0], [562781], [], [], [], [], [], [], []
    Process ID: 4538
    Session ID: 237 Serial number: 5621
    
    
    SQL>
    

     

    Doesn’t look good, right?  Actually the situation is not that bad.

    To put it simply, this ORA-00600 error means that a datafile has a recorded SCN that's ahead of the database SCN.  The current database SCN is shown as the 3rd argument (in this case 551715) and the datafile SCN is shown as the 5th argument (in this case 562781).  Hence a difference of:

    562781 - 551715 = 11066

    In this example, that’s not too large of a gap.  But in a real system, the difference may be more significant.  Also if multiple datafiles are ahead of the current SCN you should expect to see multiple ORA-00600 errors.

    The solution to this problem is quite simple: roll forward the current SCN until it exceeds the datafile SCN.  The database automatically generates a number of internal transactions on each startup, hence the way to roll the database SCN forward is simply to perform repeated shutdowns and startups.  Depending on how big the gap is, it may be necessary to repeatedly shutdown abort and startup – the gap between the 5th and 3rd parameter to the ORA-00600 will decrease each time.  However, eventually the gap will reduce to zero and the database will open:

    SQL> connect / as sysdba
    Connected to an idle instance.
    SQL> shutdown abort
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    
    Total System Global Area 1577058304 bytes
    Fixed Size                  2924832 bytes
    Variable Size             503320288 bytes
    Database Buffers         1056964608 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
    Database opened.
    SQL>
    

     

    Now presumably we want to query or export the old data so the first thing we should do is switch back to automatic undo management using a new undo tablespace:

    SQL> create undo tablespace UNDOTBS2 datafile size 50M;
    
    Tablespace created.
    
    SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile;
    
    System altered.
    
    SQL> alter system set undo_management='AUTO' scope=spfile;
    
    System altered.
    
    SQL> shutdown abort
    ORACLE instance shut down.
    SQL> startup
    ORACLE instance started.
    
    Total System Global Area 1577058304 bytes
    Fixed Size                  2924832 bytes
    Variable Size             503320288 bytes
    Database Buffers         1056964608 bytes
    Redo Buffers               13848576 bytes
    Database mounted.
    Database opened.
    SQL>
    

     

    Finally the database is opened (although the data is inconsistent) and the “old” data can be queried:

    SQL> select * from scott.parent;
    
            C1 C2
    ---------- ----------------
             1 OLD DATA 001
             2 OLD DATA 002
             3 NEW DATA 003
    
    SQL> select * from scott.child;
    
            C1 C2
    ---------- ----------------
             1 OLD DETAILS A
             1 OLD DETAILS B
             1 OLD DETAILS C
             2 OLD DETAILS D
    
    SQL>
    

     

    As we can see, all of the "old" data (rows that begin with "OLD") from before the backup began (before t2) is available.  And, as would be expected, only part of the data inserted during the backup (rows where C1=3) is there – that's our data inconsistency.

    We’ve already seen that we can SELECT the “old” data.  We can also export it:

    $ expdp scott/tiger dumpfile=DATA_PUMP_DIR:OLD_DATA.dmp nologfile=y
    
    Export: Release 12.1.0.2.0 - Production on Mon Mar 2 09:39:11 2015
    
    Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
    Starting "SCOTT"."SYS_EXPORT_SCHEMA_02":  scott/******** dumpfile=DATA_PUMP_DIR:OLD_DATA.dmp nologfile=y
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 640 KB
    Processing object type SCHEMA_EXPORT/USER
    Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
    Processing object type SCHEMA_EXPORT/ROLE_GRANT
    Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
    Processing object type SCHEMA_EXPORT/STATISTICS/MARKER
    . . exported "SCOTT"."CHILD"                             5.570 KB       4 rows
    . . exported "SCOTT"."PARENT"                            5.546 KB       3 rows
    Master table "SCOTT"."SYS_EXPORT_SCHEMA_02" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for SCOTT.SYS_EXPORT_SCHEMA_02 is:
      /u01/app/oracle/admin/NONCDB/dpdump/OLD_DATA.dmp
    Job "SCOTT"."SYS_EXPORT_SCHEMA_02" successfully completed at Mon Mar 2 09:39:46 2015 elapsed 0 00:00:34
    
    $
    

     

    At this point we’ve either queried or extracted that critical old data which was the point of the exercise and we should immediately discard the restored database.  Remember it has data inconsistency which may include in internal tables an hence shouldn’t be used for anything beyond querying or extracting that “old” data.  Frequent crashes or other bizarre behavior of this restored database should be expected.  So get in, get the data, get out, and get rid of it!

     

    Conclusion

    If "desperate times call for desperate measures" and you're in the situation described in detail above where you need the data, are missing the necessary redo vectors, and are not concerned about the relevant data being modified during the backup, then there are options.

    The “more redo needed for consistency” error stack should be familiar to most DBAs:

    ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
    ORA-01194: file 1 needs more recovery to be consistent
    

    And they may also be somewhat familiar with the “_allow_resetlogs_corruption” hidden initialization parameter.  However don’t let the resulting ORA-00600 error make the recovery attempt seem unsuccessful:

    ORA-00600: internal error code, arguments: [2663], [0], [551715], [0], [562781], [], [], [], [], [], [], []
    

    This error can be overcome, and the database can likely still be opened so the necessary data can be queried or extracted.

    Note: this process has been tested with Oracle Database 10g, Oracle Database 11g, and Oracle Database 12c.

    Categories: DBA Blogs

    Oracle Database Tools updated - check out SQLcl

    Dimitri Gielis - Mon, 2015-03-09 16:31
    Today Oracle released new versions of:

    Also Oracle REST Data Services 3 got a new EA2 version. You may want to check Kris Rice's blog for new features.
    I already blogged about all of the tools before, but not yet about SQLcl. This is a command line tool; I call it "SQL*Plus on steroids" (or, as Jeff calls it, SQL Developer meets SQL*Plus). It's particularly useful when you're on your server and quickly need to run some queries. Or if you're a command line guy/girl all the time, this tool is for you.
    Here's a screenshot of how to connect to your database with SQLcl from Linux.

    Typing help will show you a list of quick shortcuts.
    For example, if you type APEX you get a list of your APEX applications.

    What I really like about SQLcl is that it formats the output so nicely. With SQL*Plus you had to set column widths, page sizes etc. Not with SQLcl; it's smart and formats the output nicely.
    Next to that, you can quickly output your query results in JSON format by typing "set sqlformat json":
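    For example, a minimal sketch (assuming the classic SCOTT/EMP demo schema; the JSON output itself is omitted here):

    SQL> set sqlformat json
    SQL> select empno, ename, sal from emp where rownum <= 3;
    SQL> set sqlformat ansiconsole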

    There are many more features - a good starting point is this presentation and video by Jeff Smith.
    Categories: Development

    Development on Windows 8.1 Phone and Tablet

    Oracle AppsLab - Mon, 2015-03-09 14:59

    This is a follow up to my previous post ("Where are the Mobile Windows Devices?"), in which I gave my initial impressions of mobile Windows devices.  As part of our assessment of these devices we also developed a few apps, and this post details how that went.

    Getting Started

    Windows Phone 8.1 applications have to be developed on Windows 8.1.  I am using a Mac so I installed Windows 8.1 Enterprise Trial (90-day Free Trial) in a Parallels VM.  In order to run the Phone Emulator (which is also a VM and so I was running a VM in a VM), I had to enable Nested Virtualization in Parallels.

    Development is done in Visual Studio; I don't think you can use any other IDE. You can download a version of Visual Studio Express for free.

    Finally, you’ll need a developer license to develop and test a Windows Store app before the Store can certify it. When you run Visual Studio for the first time, it prompts you to obtain a developer license. Read the license terms and then click I Accept if you agree. In the User Account Control (UAC) dialog box, click Yes if you want to continue. It was $19 for a developer license.

    Development

    There are 2 distinct ways to develop applications on the Windows Platform.

    Using the Windows Runtime (WinRT)

    Applications built with WinRT are called "Windows Runtime apps"; again, there are 2 types of these:

    • “Windows Phone Store apps” are WinRT apps that run on the Windows Phone.
    • “Windows Store apps” that run on a Windows device such as a PC or tablet.

    What's really cool is that Visual Studio provides a universal Windows app template that lets you create a Windows Store app (for PCs, tablets, and laptops) and a Windows Phone Store app in the same project. When your work is finished, you can produce app packages for the Windows Store and Windows Phone Store with a single action to get your app out to customers on any Windows device. These applications can share a lot of their code, both business logic and presentation layer.

    Even better, you can create Windows Runtime apps using the programming languages you’re most familiar with, like JavaScript, C#, Visual Basic, or C++. You can even write components in one language and use them in an app that’s written in another language.  Windows Runtime apps can use the Windows Runtime, a native API built into the operating system. This API is implemented in C++ and bindings (called “projections”) are created for  JavaScript, C#, Visual Basic, and C++ in a way that feels natural for each language.

    Note that this is very different from the Phonegap/Cordova approach that also lets you write apps in JavaScript. Universal Windows Apps do not run in a UIWebView/WebView; they are native applications for which (some of) the application logic gets run through the JavaScript engine. This means that they do not suffer from the challenges we face with Phonegap/Cordova (inability to use cutting-edge features, performance issues, etc.), yet you still get the benefit of using the language you are already familiar with.

    This also allows you to use existing JavaScript libraries and CSS templates, no porting required. You can even write one app using multiple languages, leveraging the dynamic nature of JavaScript for app logic while leveraging languages like C# and C++ for more computationally intensive tasks.

    Traditional (Not using the WinRT)

    Applications that do not use the WinRT are called Windows desktop apps and are executables or browser plug-ins that run in the Windows desktop environment. These apps are typically written in Win32 and COM, .NET, WPF, or Direct3D APIs. There are also Windows Phone Silverlight apps, which are Windows Phone apps that use the Windows Phone Silverlight UI Framework instead of the Windows Runtime and can be sold in the Windows Phone Store.

    Deployment

    To deploy to my device I had to first “developer unlock” my phone (instructions).

    Deployment is a breeze from Visual Studio: just hook up your phone, select your device and hit deploy. The application gets saved to your phone and it opens. It appears in the apps list like all other apps.  You can also "side-load" applications to other Windows machines for testing purposes: just package your application up in Visual Studio, put it on a USB stick, stick it in the other tablet/PC and run the install script created by the packaging process.

    I created 2 simple applications: one was a C# Universal Application and one was a JavaScript/CSS3/HTML5 Universal Application. I was able to deploy and run both on a tablet, desktop and phone without any problem. They were very simple applications, but I could not see any performance difference between the C# application and the JS application.

    Additional Findings

    For the best user experience when developing Universal Apps using JS/HTML5/CSS3 you should develop Single Page Applications (SPA).  This ensures there are no weird "page loads" in the middle of your app running.  Users will not expect this from their application; remember, these are universal apps and could be run by a user on their desktop.

    State can be easily shared between devices by automatically roaming app settings and state, along with Windows settings, between trusted devices on which the user is logged in with the same Microsoft account.

    Applications on the Windows App Store come with built-in crash analytics: this is one of the valuable services you get in exchange for your annual registration with the Store, no need to build it yourself.
    Conclusion

    As a JavaScript developer myself I am extremely excited by the fact that I can develop native applications on the Windows Platform using tools that I am already familiar with.  Furthermore, with Windows 10 it seems that Microsoft is doubling down on Universal Apps, and with that OS upgrade my JavaScript apps can soon also be deployed to the HoloLens, Surface Hub, and IoT devices like the Raspberry Pi 2!

    Thoughts On Student Engagement

    Floyd Teter - Mon, 2015-03-09 13:55
    For those of us in the Higher Education portion of Oracle’s ecosystem, the big conference of the year - Alliance - is less than one week away.  But I already have a suspicion about the hot topic this year.  I’m betting on the subject of student engagement.

    There was a time when student engagement was all about educational institutions reaching out to students and potential students.  But there were only a few ways to get that done:  advertising, public relations, events.  And the schools controlled the discussion.  Because it wasn’t really a discussion as much as a series of one-way broadcasts from the universities to the students.
    But things have changed as new technologies have taken root in higher education. Social media, chat apps, mobile…now, not only can the students and potential students talk back to the universities, but they can also talk to each other.  So the schools no longer have control of the discussion.  While there are significant upsides to this turning of the tables, there’s also a downside…the schools, to a very great degree, are in the dark about the tastes, preferences, and habits of their students and potential students.  This is especially true in talking about “digital body language”.  What technology do those students and potential students use? What are their technology habits?  How can they be reached?  How can we learn more about what is important to them?  The real crux of successful student engagement is hidden in distracting complexities.
    A real challenge in all this comes from a distraction over platforms.  There are lots of social and communication platforms out there coming and going:  Facebook, Twitter, Snapchat, Instagram, Webchat…you get the idea.  Platforms come and go, and nobody has any idea of the next big thing.  But you can’t ignore them, because your students and potential students are already there.
    Another clear shift is that the days of individual and isolated decision-making are gone.  People want to check in with the groups that are important to them and know what other people are doing before making a decision.  So we have different people, all with different needs and hot buttons, all interacting with each other in a variety of networks to influence individual decisions and choices.  So decision making is much more complex.
    These complexities distract from the real point of student engagement - schools learning about and adapting to their constituencies by talking with and listening to students and potential students.
    To eliminate the complexities and efficiently get to the crux of student engagement in today’s environment, schools need more analysis in order to get the planning, design, and execution of the education process matched with the needs and wants of their students and potential students.  In other words, you have to learn about digital body language without getting wrapped around the axle about platforms and social networks.  You have to be able to engage in the discussions with your students and potential students where they are, when they are there…while not getting bogged down by the platforms and networks yourself.  It’s a challenge.  I’m sure we’ll hear more at Alliance.

    Oracle Fusion Middleware Partner Community Award for Outstanding ACM/BPM Contribution 2015

    Andrejus Baranovski - Mon, 2015-03-09 11:38
    We have received the award for Outstanding ACM/BPM Contribution 2015. This award was given to me and my colleague, Danilo Schmiedel, for successful work in joint ACM/BPM and ADF projects, by the Oracle Fusion Middleware Partner Community. Thanks, and I'm really proud to be a part of the community!


    I'm looking forward to sharing interesting ideas about ADF/ACM/BPM/Mobile through this blog in the future.

    Webcast: Expanding Your Digital Marketing Experience

    WebCenter Team - Mon, 2015-03-09 09:29
    Expanding Your Digital Marketing Experience: Accelerating Digital Business & Marketing Transformation

    Successful digital marketing starts with optimizing lead conversion by empowering marketers to manage the online experience with a minimum of IT involvement. Research has shown the correlation between user experience across channels and higher conversion rates: the more consistent, continuous and unified the customer's digital experience is, the higher the conversion rates. In fact, many companies have seen as much as a 70% increase through personalized web experiences.

    The key to delivering a continuous, personalized, unified digital experience across channels and systems is to combine the use of Web Experience Management (WEM) and marketing automation systems allowing marketers to leverage the web and email, without IT involvement. This powerful combination:
    • Creates a unified engagement platform that gives marketers a robust suite for customer acquisition
    • Speeds time-to-market with the ability to quickly and easily publish content and sites without IT involvement
    • Separates and enables marketing agility from IT stability
    Register now for this webcast.

    Register Now
    Live Webcast: March 12, 2015
    10:00 a.m. PT / 1:00 p.m. ET
    Featured Speaker:

    Joshua Duhl
    Senior Principal Product Manager, Oracle

    Where are the Mobile Windows Devices?

    Oracle AppsLab - Mon, 2015-03-09 09:27

    That was one of the questions one of Oracle's executives asked when we presented our new Cloud UX Lab.  The short answer was that there were none.  As far as I am aware, we never did any testing of any of our prototypes and applications on Windows Phones or tablets because, frankly, we thought it didn't matter.  Windows Phones (and tablets) are a distant third to the 2 behemoths in this space, Android and iOS, and even lost market share in the year just wrapped up (2.7%) compared to 2013 (3.3%), according to IDC.  However, they are predicted to do better in the years ahead (although these predictions have been widely off in the past) and it seems that there is some pressure from our Enterprise Apps customers to look at the Windows Mobile platform, hence the question.  Never afraid of a challenge, we ordered a Surface Pro 3 and a Nokia Lumia 1520, used them for a few weeks, ran some tests, wrote some apps and jotted down our findings, leading to this blog post.

    Initial impressions

    Surface Pro 3

    I’m going to be short about the Surface Pro 3, it’s basically a PC without a physical keyboard (although you can get one if you want) but with a touch screen and a stylus.  It even runs the same version of Windows 8.1 as your PC.  I must admit that the Tiles seem more practical on the tablet than on a PC, but I could do without the constant reminders to “upgrade Windows” and “upgrade Defender,” complete with mandatory reboots, just like on your PC.  The most infuriating part about this is that the virtual keyboard does not automatically pop up when you tap on an input field, just like on your PC that doesn’t have the concept of a Virtual Keyboard.  Instead you have to explicitly open it to be able to type anything.

    Fortunately, there are some advantages too, e.g. anything that runs on your Windows PC probably will run fine on the Windows tablet, confirmed by our tests.  It has a USB 3.0 port that works just like … a USB port.  Plug in a USB Drive and you can instantly access it, just like on your PC, quite handy for when you have to side-load applications (more on that in a later post).

    The whole package is also quite pricey, similar to a premium laptop.  It's more of a competitor for the likes of Apple's MacBook Air than the iPad, I think.  People who try to use their iPads as little laptops are probably better off with this.

    Lumia 1520

    The phone, on the other hand, is a different beast.  The Windows 8.1 Phone OS, unlike the tablet version, is a smartphone OS.  As such, it has none of the drawbacks that the tablet displayed.  My first impression of the phone was that it is absolutely huge.  It measures 6 inches across and dwarfs my iPhone 6, which I already thought was big.  It's even bigger than the iPhone 6+ and the Samsung Galaxy Note 4.  My thumb can reach less than 50% of the screen; this is not a phone you can handle with one hand.

    iPhone 4S vs iPhone 6 vs Lumia 1520

    Initial setup was relatively quick.  It comes "preinstalled" with a bunch of apps, although they are not really installed on the phone yet; they get installed on first boot.  It took about 10-15 minutes for all the "preinstalled" phone apps to be installed.

    The screen is absolutely gorgeous, with bright colors and supremely fine detail, courtesy of a 367ppi AMOLED ClearBlack screen.  It also performs very well outside, in bright light.  It has an FM radio which uses your headphone cable as the antenna (no headphones, no radio), a USB port and a microSD slot.  It also has a dedicated, two-stage camera shutter button.  There's no physical mute button though.  The Tiles work really well on the phone.  They are much easier to tap than the app icons on either Android or iOS, and you can resize them.

    I tried installing the same apps as I have on my iPhone, but this was unfortunately where I hit my first giant snag.  I knew the ecosystem was underdeveloped compared to Android and iOS, but I didn’t know it was this bad.  Staples on my iPhone like Feedly, Flickr, VLC, Instapaper and Pocket don’t exist on the Windows Phone platform.  You also won’t find a dedicated app to listen to your Amazon Prime music or watch your movies.  If you want to watch the latest exploits of the Lannisters, you are also going to have to do it on another device, no HBO Go or XFinity on the Windows Phone.  There is also no version of Cisco VPN, which means it’s a non-starter for Oracle employees as that is the only way to access our intranet.  Weirder still, there is no Chrome or Firefox available on Windows Phones, which means I had to do all my testing on the version of IE that came with the phone (gulp!).

    Impressions after a week of usage

    I used the Lumia as my main phone for a week (poor pockets); I just popped the micro SIM card from my iPhone into the Lumia and it worked.  I really got hooked on the constantly updating Live Tiles.  News, stock prices, weather, calendar notifications, Facebook notifications, etc. get pushed straight to my main screen without my having to open any apps.  I can glance and drill down if I want to, or just ignore them.  They are a little bit of a distraction with their constant flipping motion, but overall very cool.

    The other thing that was very noticeable was that the top notification bar is actually transparent, so it doesn't seem like you lose that part of your screen; I liked that.

    The Windows Store has a try-before-you-buy feature, something that would be a godsend on the iPhone: my kids love to buy games and then drop them within a day never to be used again.  You can also connect the Windows Phone to your XBox One and use it as an input device/remote control.

    Another feature that I highly appreciated, especially as a newbie to the Windows Phone, was the smart learning notifications (not sure if that is the official name).  Rather than dumping all the help information on you when you open the app for the first time, the phone seems to monitor what you do and how you do it.  If there is a better/easier way of doing that task, after repeated use it will let you know, in a completely non-condescending way, that "You are doing it wrong."  This seems to be a much better approach, because if you tell me the first time I use the app how to use all its features, I will forget by the time I actually want to use that feature; or worse, I might never use that feature, so you have wasted my time telling me about it.

    As for overall performance, there was some noticeable "jank" in the phone's animations; it just didn't feel as buttery smooth as the iPhone 6.

    The camera

    The camera really deserves its own chapter.  The 1520 is the sister phone of the Lumia 1020, which has a whopping 41 megapixel image sensor.  The 1520 has to make do with 20 megapixels, but that is still at least double what you find in most smartphones.  Megapixel count isn't everything, but the camera does produce some wonderful pictures.  One of the reasons that Nokia went with these large sensors is that they wanted to support better zooming.  Because you can't optically zoom with a phone camera (you would need a much bigger lens for that), a phone does digital zooming, which typically leads to a pixelated mess when you zoom in.  Unless of course you start with a very high resolution image, which is what Nokia did.

    One of the interesting features of the photo app is that it supports "lenses."  These are plugins you can install in the photo app that add features not available out-of-the-box.  There are dozens of these lenses (it's basically an app store within an app), adding features like Instagram-style filters, 360 shots, panoramic pictures, etc.  One lens promises to make you look better in selfies (it didn't work on me).  One really neat lens is Nokia's "Refocus" lens that brings a Lytro-like variable depth of field to your phone, and it works great too.

    Refocus

    In the same lens app you can also filter out all colors except for the object you click on, called “color pop,” so you get this effect:

    color pop

    Color pop in action

    In the app, you can keep clicking on other objects (e.g. the table) to pop their color.

    Besides the 20 megapixel sensor, the phone is also equipped with a top-notch Carl Zeiss lens.  The phone has a physical, dedicated, two-stage shutter button: half-press for focus and full press to take the picture.  It also has a larger-than-usual degree of manual control. You'll find the usual settings for flash mode, ISO, white balance and exposure compensation, but also parameters for shutter speed and focus. The latter two are not usually available on mobile phones.  The camera also performs really well in low light conditions.

    Summary

    I like the phone and its OS, and I really like the camera. The Tiles also work really well on a phone. I dislike the performance, the size and the lack of applications; the latter is a deal-breaker for me. I had some trepidation about going cold turkey on Windows Phone for the week, but it turned out alright. However, I was happy to switch back to my iPhone 6 at the end of the week.
    I'm a bit more on the fence about the tablet. If you get the physical keyboard, it might work out better, but then you basically have a laptop, so I'm not sure what the point is. The fact that it runs Windows has its advantages (everything runs just as on Windows) and disadvantages (keyboard issues).

    I can’t wait to get my hands on Windows 10 and a HoloLens :-)

    Happy Coding!

    Mark.

    Flashback logging

    Jonathan Lewis - Mon, 2015-03-09 08:44

    When database flashback first appeared many years ago I commented (somewhere, but don’t ask me where) that it seemed like a very nice idea for full-scale test databases if you wanted to test the impact of changes to batch code, but I couldn’t really see it being a good idea for live production systems because of the overheads.

    Features and circumstances change, of course, and someone recently pointed out that if your production system is multi-terabyte and you're running with a Data Guard standby, and some minor catastrophe forces you to switch to the standby, then you don't really want to be running without a standby for the time it would take to restore and recover an old backup to create a new one; there may be cases where you could flash back the original primary to a point before the catastrophe and turn it into the standby from that point onward. Sounds like a reasonable argument to me – but you might still need to think very carefully about how to minimise the impact of enabling database flashback, especially if your database is a data warehouse, DSS, or mixed system.
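
    For reference, a minimal sketch of that reinstate-as-standby idea – the SCN is a placeholder and the exact sequence will vary (a Data Guard broker reinstate would wrap most of these steps for you):

    -- on the failed (old) primary, once it can be mounted again
    shutdown immediate
    startup mount

    -- rewind the database to a point before the failover
    flashback database to scn 1234567;

    -- turn it into a physical standby of the new primary
    alter database convert to physical standby;

    shutdown immediate
    startup mount

    -- resume managed recovery so it catches up with the new primary
    alter database recover managed standby database
      using current logfile disconnect from session;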

    Imagine you have a batch process that revolves around loading data into an empty table with a couple of indexes – it's a production system so you're running with archivelog mode enabled, and then you're told to switch on database flashback. How much impact will that have on your current loading strategies? Here's a little bit of code to help you on your way – I create an empty table as a clone of the view all_objects, and create one index, then I insert 1.6M rows into it. I've generated 4 different sets of results: flashback on or off, then either maintaining the index during loading or marking it unusable then rebuilding it after the load. Here's the minimum code:

    
    create table t1 segment creation immediate tablespace test_8k
    as
    select * from all_objects
    where   rownum < 1
    ;
    
    create index t1_i1 on t1(object_name, object_id) tablespace test_8k_assm_auto;
    -- alter index t1_i1 unusable;
    
    insert /*+ append */ into t1
    with object_data as (
            select --+ materialize
                    *
            from
                    all_objects
            where
                    rownum <= 50000
    ),
    counter as (
            select  --+ materialize
                    rownum id
            from dual
            connect by
                    level <= 32
    )
    select
            /*+ leading (ctr obj) use_nl(obj) */
            obj.*
    from
            counter         ctr,
            object_data     obj
    ;
    
    -- alter index t1_i1 rebuild;
    
    

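    The flashback toggle between test runs isn't shown above; assuming a fast recovery area is already configured, it would be something along these lines:

    -- check the current state
    select flashback_on from v$database;

    -- enable / disable database flashback between test runs
    alter database flashback on;
    alter database flashback off;
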
    Here’s a quick summary of the timing I got  before I talk about the effects (running 11.2.0.4):

    Flashback off:
    Maintain index in real time: 138 seconds
    Rebuild index at end: 66 seconds

    Flashback on:
    Maintain index in real time: 214 seconds
    Rebuild index at end: 112 seconds

    It is very important to note that these timings do not allow you to draw any generic conclusions about optimum strategies for your systems. The only interpretation you can put on them is that different circumstances may lead to very different timings, so it’s worth looking at what you could do with your own systems to find good strategies for different cases.

    Most significant, probably, is the big difference between the two options where flashback is enabled – if you've got to use it, how do you limit the damage? Here are some key figures, namely the file I/O stats and some instance activity stats, I/O stats first:

    
    "Real-time" maintenance
    ---------------------------------
    Tempfile Stats - 09-Mar 11:41:57
    ---------------------------------
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs    Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------    ------      ------   ---------    -------------------
        1       1,088      22,454      20.638        .063         296        .000     1,011      22,455        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_938s5v4n_.tmp
    
    ---------------------------------
    Datafile Stats - 09-Mar 11:41:58
    ---------------------------------
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs         Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------     -------   ---------         ------      ------   ---------    -------------------
        3      24,802      24,802       1.000        .315      24,802        .315           0        .000          2,386      20,379        .239    /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
        5         718      22,805      31.762        .001           5        .000         713        .002            725      22,814        .002    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
        6       8,485       8,485       1.000        .317       8,485        .317           0        .000            785       6,938        .348    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf
    
    Mark Unusable and Rebuild
    ---------------------------------
    Tempfile Stats - 09-Mar 11:53:04
    ---------------------------------
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs    Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------    ------      ------   ---------    -------------------
        1       1,461      10,508       7.192        .100           1        .017       407      10,508        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_temp_938s5v4n_.tmp
    
    ---------------------------------
    Datafile Stats - 09-Mar 11:53:05
    ---------------------------------
    file#       Reads      Blocks    Avg Size   Avg Csecs     S_Reads   Avg Csecs     M_Reads   Avg Csecs         Writes      Blocks   Avg Csecs    File name
    -----       -----      ------    --------   ---------     -------   ---------     -------   ---------         ------      ------   ---------    -------------------
        3          17          17       1.000       5.830          17       5.830           0        .000             28          49       1.636    /u01/app/oracle/oradata/TEST/datafile/o1_mf_undotbs1_938s5n46_.dbf
        5         894      45,602      51.009        .001           2        .002         892        .001            721      22,811        .026    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k_bcdy0y3h_.dbf
        6       2,586       9,356       3.618        .313         264       3.064       2,322        .001          2,443       9,214        .000    /u01/app/oracle/oradata/TEST/datafile/o1_mf_test_8k__bfqsmt60_.dbf
    
    

    There are all sorts of interesting differences in these results due to the different way in which Oracle handles the index. For the "real-time" maintenance the session accumulates the key values and rowids as it writes the table, then sorts them, then does a cache-based bulk update to the index. For the "rebuild" strategy Oracle simply scans the table after it has been loaded, sorts the key values and rowids, then writes the index to disc using direct path writes; you might expect the total work done to be roughly the same in both cases – but it's not.

    I’ve shown 4 files: the temporary tablespace, the undo tablespace, the tablespace holding the table and the tablespace holding the index; and the first obvious difference is the number of blocks written and read and the change in average read size on the temporary tablespace. Both sessions had to spill to disc for the sort, and both did a “one-pass” sort; the difference in the number of blocks written and read appears because the “real-time” session wrote the sorted data set back to the temporary tablespace one more time than it really needed to – it merged the sorted data in a single pass but wrote the data back to the temporary tablespace before reading it again and applying it to the index (for a couple of points on tracing sorts, see this posting). I don’t know why Oracle chose to use a much smaller read slot size in the second case, though.
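
    As an aside, if you want to trace the sort activity yourself, a minimal sketch is to enable the sort-trace events for the session – I'm assuming the usual event numbers (10032 for sort statistics, 10033 for sort run details), so verify them on your own version:

    alter session set events '10032 trace name context forever, level 10';
    alter session set events '10033 trace name context forever, level 10';

    -- ... run the insert ...

    alter session set events '10032 trace name context off';
    alter session set events '10033 trace name context off';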

    The next most dramatic thing we see is that real-time maintenance introduced 24,800 single block reads with 20,000 blocks written to the undo tablespace (with a few thousand more that would eventually be written by dbwr – I should have included a "flush buffer_cache" in my tests), compared to virtually no activity in the "rebuild" case. The rebuild generates no undo; real-time maintenance (even starting with an empty index) generates undo because (in theory) someone might look at the index and need to see a read-consistent image of it. So it's not surprising that we see a lot of writes to the undo tablespace – but where did the reads come from? I'll answer that question later.

    It's probably not a surprise to see the difference in the number of blocks read from the table's tablespace. When we rebuild the index we have to do a tablescan to acquire the data; but, again, we can ask why we saw 22,800 blocks read from the table's tablespace when we were doing the insert with real-time maintenance. On a positive note those reads were multiblock reads, but what caused them? Again, I'll postpone the answer.

    Finally we see that the number of blocks read (reason again postponed) and written to the index's tablespace are roughly similar. The writes differ because the rebuild is doing direct path writes, while the real-time maintenance is done in the buffer cache, so there are some outstanding index blocks to be written. The reads are similar, though one test is exclusively single block reads and the other is doing (small) multiblock reads – which is just a little bit more efficient.  The difference in the number of reads is because the rebuild was at the default pctfree=10 while the index maintenance was a massive "insert in order" which would have packed the index leaf blocks at 100%.

    To start the explanation – here are the most significant activity stats – some for the session, a couple for the instance:

    
    "Real-time" maintenance
    -----------------------
    Name                                                                     Value
    ----                                                                     -----
    physical reads for flashback new                                        33,263
    redo entries                                                           118,290
    redo size                                                          466,628,852
    redo size for direct writes                                        187,616,044
    undo change vector size                                            134,282,356
    flashback log write bytes                                          441,032,704
    
    Rebuild
    -------
    Name                                                                     Value
    ----                                                                     -----
    physical reads for flashback new                                           156
    redo entries                                                            35,055
    redo size                                                          263,801,792
    redo size for direct writes                                        263,407,628
    undo change vector size                                                122,156
    flashback log write bytes                                          278,036,480
    
    

    The big clue is the "physical reads for flashback new". When you modify a block, if it hasn't been dumped into the flashback log recently (as defined by the hidden _flashback_barrier_interval parameter) then the original version of the block has to be written to the flashback log before the change can be applied; moreover, if a block is being "newed" (Oracle-speak for being reformatted for a new use) it will also be written to the flashback log. Given the way that the undo tablespace works it's not surprising if virtually every block you modify in the undo tablespace has to be written to the flashback log before you use it. The 33,263 blocks read for "flashback new" consist of the 24,800 blocks read from the undo tablespace when we were maintaining the index in real-time plus a further 8,460 from "somewhere" – which, probably not coincidentally, matches the number of blocks read from the index tablespace as we create the index. The odd thing is that we don't see the 22,800 reads on the table's tablespace (which don't occur when flashback is off) reported as "physical reads for flashback new"; this looks like a reporting error to me.

    So the volume of undo requires us to generate a lot of flashback log as well as the usual increase in the amount of redo. As a little side note, we get confirmation from these stats that the index was rebuilt using direct path writes – there’s an extra 75MB of redo for direct writes.
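
    If you want to check the equivalent figures on your own system, a minimal sketch is to query the instance activity statistics before and after the load and take the difference:

    select name, value
    from   v$sysstat
    where  name in (
             'physical reads for flashback new',
             'redo entries',
             'redo size',
             'redo size for direct writes',
             'undo change vector size',
             'flashback log write bytes'
           )
    order by name;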

    Summary

    If you are running with flashback enabled in a system that's doing high volume data loading, remember that the "physical reads for flashback new" could be a major expense. This is particularly expensive on index maintenance, which can result in a large number of single block reads of the undo tablespace. The undo costs you three times – once for the basic cost of undo (and associated redo), once for the extra reads, and once for writing the flashback log. Although you have to do tablescans to rebuild indexes, the cost of an (efficient, possibly direct path) tablescan may be much less than the penalty of the work relating to flashback.

    Footnote: since you can’t (officially) load data into a table with an unusable unique index or constraint, you may want to experiment with using non-unique indexes to support unique/PK constraints and disabling the constraints while loading.
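
    A minimal sketch of that footnote idea, with hypothetical table and index names – a non-unique index supports the primary key so the constraint can be disabled around the load without losing the index:

    -- non-unique index that will support the PK
    create index load_t_pk_nu on load_t(id);

    alter table load_t add constraint load_t_pk
            primary key (id)
            using index load_t_pk_nu;

    -- before the load: disable the constraint, keeping the index
    alter table load_t disable constraint load_t_pk keep index;

    -- ... load the data (e.g. insert /*+ append */ ...) ...

    -- after the load: re-enable (validating the data as it does so)
    alter table load_t enable constraint load_t_pk;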


    Announcing the Special Guest Speakers for Brighton & Atlanta BI Forum 2015

    Rittman Mead Consulting - Mon, 2015-03-09 08:13

    As well as a great line-up of speakers and sessions at each of the Brighton & Atlanta Rittman Mead BI Forum 2015 events in May, I’m very pleased to announce our two guest speakers who’ll give the second keynotes, on the Thursday evening of the two events just before we leave for the restaurant and the appreciation events. This year our special guest speaker in Atlanta is John Foreman, Chief Data Scientist at MailChimp and author of the book “Data Smart: Using Data Science to Transform Information into Insight”; and in Brighton we’re delighted to have Reiner Zimmerman, Senior Director of Product Management at Oracle US and the person behind the Oracle DW & Big Data Global Leaders program.


    I first came across John Foreman when somebody recommended his book to me, "Data Smart", a year or so ago. At that time Rittman Mead were getting more and more requests from our customers asking us to help with their advanced analytics and predictive modelling needs, and I was looking around for resources to help myself and the team get to grips with some of the more advanced modelling and statistical techniques Oracle's tools now support – techniques such as clustering and pattern matching, linear regression and genetic algorithms.

    One of the challenges when learning these sorts of techniques is not getting too caught up in the tools and technology – R was our favoured technology at the time, and there's lots to it – so John's book was particularly well-timed as it goes through these types of "data science" techniques but focuses on Microsoft Excel as the analysis tool, with simple examples and a very readable style.

    Back in his day job, John is Chief Data Scientist at MailChimp and has become a particularly in-demand speaker following the success of his book, and I was very excited to hear from Charles Elliott, our Practice Manager for Rittman Mead America, that he lived near John in Atlanta and had arranged for him to keynote at our Atlanta BI Forum event. His Keynote will be entitled “How Mailchimp used qualitative and quantitative analysis to build their next product” and we’re very much looking forward to meeting him at our event in Atlanta on May 13th-15th 2015.


    Our second keynote speaker at the Brighton Rittman Mead BI Forum 2015 event is none other than Reiner Zimmerman, best known in EMEA for organising the Oracle DW Global Leaders Program. We've known Reiner for several years now as Rittman Mead are one of the associate sponsors for the program, which aims to bring together the leading organizations building data warehouse and big data systems on the Oracle Engineered Systems platform.

    A bit like the BI Forum (but even more exclusive), the DW Global Leaders program holds meetings in the US, EMEA and AsiaPac over the year and is a fantastic networking and knowledge-sharing group for an exclusive set of customers putting together the most cutting-edge DW and big data systems on the latest Oracle technology. Reiner’s also an excellent speaker and a past visitor to the BI Forum, and his session entitled “Hadoop and Oracle BDA customer cases from around the world” will be a look at what customers are really doing, and the value they’re getting, from building big data systems on the Oracle platform.

    Registration is now open for both the Brighton and Atlanta BI Forum 2015 events, with full details including the speaker line-up and how to register on the event website. Keep an eye on the blog for more details of both events later this week including more on the masterclass by myself and Jordan Meyer, and a data visualisation “bake-off” we’re going to run on the second day of each event. Watch this space…!

    Categories: BI & Warehousing

    Version Control for PL/SQL Webinar

    Gerger Consulting - Mon, 2015-03-09 05:43

    Thanks to everyone who attended the "Introduction to Gitora, the free version control tool for PL/SQL" webinar. You can watch a recording of the webinar below.



    Introducing Gitora, the free version control tool for PL/SQL from Yalim K. Gerger on Vimeo.

    You can also view the slides of the webinar below:


    Introducing Gitora, free version control tool for PL/SQL from Yalım K. Gerger

    Categories: Development

    Blueprint for a Post-LMS, Part 5

    Michael Feldstein - Sun, 2015-03-08 17:38

    By Michael FeldsteinMore Posts (1021)

    In parts 1, 2, 3, and 4 of this series, I laid out a model for a learning platform that is designed to support discussion-centric courses. I emphasized how learning design and platform design have to co-evolve, which means, in part, that a new platform isn’t going to change much if it is not accompanied by pedagogy that fits well with the strengths and limitations of the platform. I also argued that we won’t see widespread changes in pedagogy until we can change faculty relationships with pedagogy (and course ownership), and I proposed a combination of platform, course design, and professional development that might begin to chip away at that problem. All of these ideas are based heavily on lessons learned from social software  and from cMOOCs.

    In this final post in the series, I’m going to give a few examples of how this model could be extended to other assessment types and related pedagogical approaches, and then I’ll finish up by talking about what it would take to make the peer grading system described in part 2 be (potentially) accepted by students as at least a component of a grading system in a for-credit class.

    Competency-Based Education

    I started out the series talking about Habitable Worlds, a course out of ASU that I’ve written about before and that we feature in the forthcoming e-Literate TV series on personalized learning. It’s an interesting hybrid design. It has strong elements of competency-based education (CBE) and mastery learning, but the core of it is problem-based learning (PBL). The competency elements are really just building blocks that students need in the service of solving the big problem of the course. Here’s course co-designer and teacher Ariel Anbar talking about the motivation behind the course:

    Click here to view the embedded video.

    It’s clear that the students are focused on the overarching problem rather than the competencies:

    Click here to view the embedded video.

    And, as I pointed out in the first post in the series, they end up using the discussion board for the course very much like professionals might use a work-related online community of practice to help them work through their problems when they get stuck:

    Click here to view the embedded video.

    This is exactly the kind of behavior that we want to see and that the analytics I designed in part 3 are designed to measure. You could attach a grade to the students’ online discussion behaviors. But it’s really superfluous. Students get their grade from solving the problem of the course. That said, it would be helpful to the students if productive behaviors were highlighted by the system in order to make them easier to learn. And by “learn,” I don’t mean “here are the 11 discussion competencies that you need to display.” I mean, rather, that there are different patterns of productive behavior in a high-functioning group. It would be good for students to see not only the atomic behaviors but different patterns and even how different patterns complement each other within a group. Furthermore, I could imagine that some employers might be interested in knowing the collaboration style that a potential employee would bring to the mix. This would be a good fit for badges. Notice that, in this model, badges, competencies, and course grades serve distinct purposes. They are not interchangeable. Competencies and badges are closer to each other than either is to a grade. They both indicate that the student has mastered some skill or knowledge that is necessary to the central problem. But they are different from each other in ways that I haven’t entirely teased out in my own head yet. And they are not sufficient for a good course grade. To get that, the student must integrate and apply them toward generating a novel solution to a complex problem.

    The one aspect of Habitable Worlds that might not fit with the model I've outlined in this series is the degree to which it has a mandatory sequence. I don't know the course well enough to have a clear sense, but I suspect that the lessons are pretty tightly scripted, due in part to the fact that the overarching structure of the course is based on an equation. You can't really drop out one of the variables or change the order willy-nilly in an equation. There's nothing wrong with that in and of itself, but in order to take full advantage of the system I've proposed here, the course design must have a certain amount of play in it for faculty teaching their individual classes to contribute additions and modifications back. It's possible to use the discussion analytics elements without the social learning design elements, but then you don't get the potential the system offers for faculty buy-in "lift."

    Adding Assignment Types

    I've written this entire series talking about "discussion-based courses" as if that were a thing, but it's vastly more common to have discussion and writing courses. One interesting consequence of the work that we did abstracting out the Discourse trust levels is that we created a basic (and somewhat unconventional) generalized peer review system in the process. As long as conversation is the metric, we can measure the conversational aspects generated by any student-created artifact. For example, we could create a facility in OAE for students to claim the RSS feeds from their blogs. Remember, any integration represents a potential opportunity to make additional inferences. Once a post is syndicated into the system and associated with the student, it can generate a Discourse thread just like any other document. That discussion can be included in the same trust analytics. With a little more work, you could have students apply direct ratings such as "likes" to the documents themselves. Making the assessment work for these different types isn't quite as straightforward as I'm making it sound, either from a user experience design perspective or from a technology perspective. But the foundation is there to build on.

    One of the commenters on part 1 of the series provided another interesting use case:

    I’m the product manager for Wiki Education Foundation, a nonprofit that helps professors run Wikipedia assignments, in which the students write Wikipedia articles in place of traditional term papers. We’re building a system for managing these assignments, from building a week-by-week assignment plan that follows best practices, to keeping track of student activity on Wikipedia, to pulling in view data for the articles students work on, to finding automated ways of helping students work through or avoid the typical stumbling blocks for new Wikipedia editors.

    Wikipedia is its own rich medium for conversation and interaction. I could imagine taking that abstracted peer review system and just hooking it up directly to student activity within Wikipedia itself. Once we start down this path, we really need to start talking about IMS Caliper and federated analytics. This has been a real bottom-up analysis, but we quickly reach the point where we want to start abstracting out the particular systems or even system types, and start looking at a general architecture for sharing learning data (safely). I’m not going to elaborate on it here—even I have to stop at some point—but again, if you made it this far, you might find it useful to go back and reread my original post on the IMS Caliper draft standard and the comments I made on its federated nature in my most recent walled garden post. Much of what I have proposed here from an architectural perspective is designed specifically with a Caliper implementation in mind.

    Formal Grading

    I suppose my favorite model so far for incorporating the discussion trust system into a graded, for-credit class is the model I described above where the analytics act as more of a coach to help students learn productive discussion behavior, while the class grade actually comes from their solution to the central problem, project, or riddle of the course. But if we wanted to integrate the trust analytics as part of the formal grading system, we'd have to get over the "Wikipedia objection," meaning the belief that vetting by a single expert somehow more reliably generates accurate results than crowdsourcing. Some students will want grades from their teachers and will tend to think that the trust levels are bogus as a grade. (Some teachers will agree.) To address their concerns, we need three things. First, we need objectivity, by which I mean that the scoring criteria themselves are being applied the same to everyone. "Objectivity" is often about as real in student evaluation as it is in journalism (which is to say, it isn't), but people do want some sense of fairness, which is probably a better goal. Clear ratings criteria applied to everyone equally give some sense of fairness. Second, the trust scores themselves must be transparent, by which I mean that students should be able to see how they earned their trust scores. They should also be able to see various paths to improving their scores. And finally, there should be auditability, by which I mean that, in the event that a student is given a score by her peers that her teacher genuinely disagrees with (e.g., a group ganging up to give one student thumbs-downs, or a lot of conversation being generated around something that is essentially not helpful to the problem-solving effort), there is an ability for the faculty member to override that score. This last piece can be a rabbit hole, both in terms of user interface design and in terms of eroding the very sense you're trying to build of a trust network, but it is probably necessary to get buy-in. The best thing to do is to pilot the trust system (and the course design that is supposed to inspire ranking-worthy conversation) and refine it to the point where it inspires a high degree of confidence before you start using it for formal grading.

    That’s All

    No, really. Even I run out of gas. Eventually.

    For a while.

    The post Blueprint for a Post-LMS, Part 5 appeared first on e-Literate.

    Partner Webcast – Oracle Private Cloud: Database as a Service (DBaaS) using Oracle Enterprise Manager 12c

    Large enterprises today have hundreds and thousands of databases of various versions, configurations and patch levels. Another challenge is around time to provision new databases. When an end...

    We share our skills to maximize your revenue!
    Categories: DBA Blogs

    Analyzing easily the blocked process report

    Yann Neuhaus - Sun, 2015-03-08 13:06

    Which DBA has not yet faced a performance problem caused by several blocked processes? In reality, I'm sure very few of them. Troubleshooting a blocking scenario is not always easy and may require some useful tools to simplify this hard task. A couple of months ago, I had to deal with this scenario at one of my customers. During some specific periods in the business day, they noticed that their application slowed down, and they asked me how to solve this issue.

    Fortunately, SQL Server provides a useful feature to catch blocked processes: we just have to configure the "blocked process threshold (s)" server option. There are plenty of blogs that explain how to play with this parameter, so I'll let you perform your own investigation using your favourite search engine.
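
    For reference, a minimal configuration sketch (the 5-second threshold is just an example, and you still need a trace or Extended Events session that captures the blocked process report event):

    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'blocked process threshold (s)', 5;
    RECONFIGURE;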

    Having a blocked process report is useful, but often in such a situation there are a lot of processes that block each other, and we have to find the real culprit among this can of worms. So my main concern was the following: how to extract information from the blocked process report and how to correlate all the blocked processes together. After some investigation I found a useful script written by Michael J S Swart here. Usually I prefer to write my own script, but I didn't have the time and I have to admit this script met my needs perfectly. The original version provides the blocked hierarchy and the XML view of the issue. That's not bad, because we have all the information needed to troubleshoot the issue. However, my modification consists of complementing this XML view with useful information in tabular format to make the final result easier to read. Here is the modified version of the script:

     

    CREATE PROCEDURE [dbo].[sp_blocked_process_report_viewer_dbi] (        @Trace nvarchar(max),        @Type varchar(10) = 'FILE' )   AS   SET NOCOUNT ON   -- Validate @Type IF (@Type NOT IN('FILE', 'TABLE', 'XMLFILE'))        RAISERROR ('The @Type parameter must be ''FILE'', ''TABLE'' or ''XMLFILE''', 11, 1)   IF (@Trace LIKE '%.trc' AND @Type &lt;&gt; 'FILE')        RAISERROR ('Warning: You specified a .trc trace. You should also specify @Type = ''FILE''', 10, 1)   IF (@Trace LIKE '%.xml' AND @Type &lt;&gt; 'XMLFILE')        RAISERROR ('Warning: You specified a .xml trace. You should also specify @Type = ''XMLFILE''', 10, 1)          CREATE TABLE #ReportsXML (        monitorloop nvarchar(100) NOT NULL,        endTime datetime NULL,        blocking_spid INT NOT NULL,        blocking_ecid INT NOT NULL,        blocking_bfinput NVARCHAR(MAX),        blocked_spid INT NOT NULL,        blocked_ecid INT NOT NULL,        blocked_bfinput NVARCHAR(MAX),        blocked_waitime BIGINT,        blocked_hierarchy_string as CAST(blocked_spid as varchar(20)) + '.' + CAST(blocked_ecid as varchar(20)) + '/',        blocking_hierarchy_string as CAST(blocking_spid as varchar(20)) + '.' + CAST(blocking_ecid as varchar(20)) + '/',        bpReportXml xml not null,        primary key clustered (monitorloop, blocked_spid, blocked_ecid),        unique nonclustered (monitorloop, blocking_spid, blocking_ecid, blocked_spid, blocked_ecid) )   DECLARE @SQL NVARCHAR(max); DECLARE @TableSource nvarchar(max);   -- define source for table IF (@Type = 'TABLE') BEGIN        -- everything input by users get quoted        SET @TableSource = ISNULL(QUOTENAME(PARSENAME(@Trace,4)) + N'.', '')              + ISNULL(QUOTENAME(PARSENAME(@Trace,3)) + N'.', '')              + ISNULL(QUOTENAME(PARSENAME(@Trace,2)) + N'.', '')              + QUOTENAME(PARSENAME(@Trace,1)); END   -- define source for trc file IF (@Type = 'FILE') BEGIN         SET @TableSource = N'sys.fn_trace_gettable(N' + QUOTENAME(@Trace, '''') + ', -1)'; END   -- load table or file IF (@Type IN('TABLE', 'FILE')) BEGIN        SET @SQL = N'                    INSERT #ReportsXML(blocked_ecid, blocked_spid, blocked_bfinput , blocking_ecid, blocking_spid,                                 blocking_bfinput, blocked_waitime, monitorloop, bpReportXml,endTime)              SELECT                     blocked_ecid,                     blocked_spid,                     blocked_inputbuffer,                     blocking_ecid,                     blocking_spid,                     blocking_inputbuffer,                 blocked_waitime,                     COALESCE(monitorloop, CONVERT(nvarchar(100), endTime, 120), ''unknown''),                     bpReportXml,                     EndTime              FROM ' + @TableSource + N'              CROSS APPLY (                     SELECT CAST(TextData as xml)                     ) AS bpReports(bpReportXml)              CROSS APPLY (                     SELECT monitorloop = bpReportXml.value(''(//@monitorLoop)[1]'', ''nvarchar(100)''), blocked_spid = bpReportXml.value(''(/blocked-process-report/blocked-process/process/@spid)[1]'', ''int''), blocked_ecid = bpReportXml.value(''(/blocked-process-report/blocked-process/process/@ecid)[1]'', ''int''),                            blocked_inputbuffer = bpReportXml.value(''(/blocked-process-report/blocked-process/process/inputbuf/text())[1]'', ''nvarchar(max)''), blocking_spid = bpReportXml.value(''(/blocked-process-report/blocking-process/process/@spid)[1]'', ''int''), blocking_ecid = 
bpReportXml.value(''(/blocked-process-report/blocking-process/process/@ecid)[1]'', ''int''),                            blocking_inputbuffer = bpReportXml.value(''(/blocked-process-report/blocking-process/process/inputbuf/text())[1]'', ''nvarchar(max)''), blocked_waitime = bpReportXml.value(''(/blocked-process-report/blocked-process/process/@waittime)[1]'', ''bigint'')                     ) AS bpShredded              WHERE EventClass = 137';                     EXEC (@SQL); END   IF (@Type = 'XMLFILE') BEGIN        CREATE TABLE #TraceXML(              id int identity primary key,              ReportXML xml NOT NULL            )               SET @SQL = N'              INSERT #TraceXML(ReportXML)              SELECT col FROM OPENROWSET (                            BULK ' + QUOTENAME(@Trace, '''') + N', SINGLE_BLOB                     ) as xmldata(col)';          EXEC (@SQL);               CREATE PRIMARY XML INDEX PXML_TraceXML ON #TraceXML(ReportXML);          WITH XMLNAMESPACES        (              'http://tempuri.org/TracePersistence.xsd' AS MY        ),        ShreddedWheat AS        (              SELECT                     bpShredded.blocked_ecid,                     bpShredded.blocked_spid,                     bpShredded.blocked_inputbuffer,                     bpShredded.blocked_waitime,                     bpShredded.blocking_ecid,                     bpShredded.blocking_spid,                     bpShredded.blocking_inputbuffer,                     bpShredded.monitorloop,                     bpReports.bpReportXml,                     bpReports.bpReportEndTime              FROM #TraceXML              CROSS APPLY                     ReportXML.nodes('/MY:TraceData/MY:Events/MY:Event[@name="Blocked process report"]')                     AS eventNodes(eventNode)              CROSS APPLY                     eventNode.nodes('./MY:Column[@name="EndTime"]')                     AS endTimeNodes(endTimeNode)              CROSS APPLY                     eventNode.nodes('./MY:Column[@name="TextData"]')                     AS bpNodes(bpNode)              CROSS APPLY(                     SELECT CAST(bpNode.value('(./text())[1]', 'nvarchar(max)') as xml),                            CAST(LEFT(endTimeNode.value('(./text())[1]', 'varchar(max)'), 19) as datetime)              ) AS bpReports(bpReportXml, bpReportEndTime)              CROSS APPLY(                     SELECT                            monitorloop = bpReportXml.value('(//@monitorLoop)[1]', 'nvarchar(100)'),                            blocked_spid = bpReportXml.value('(/blocked-process-report/blocked-process/process/@spid)[1]', 'int'),                            blocked_ecid = bpReportXml.value('(/blocked-process-report/blocked-process/process/@ecid)[1]', 'int'),                            blocked_inputbuffer = bpReportXml.value('(/blocked-process-report/blocked-process/process/inputbuf/text())[1]', 'nvarchar(max)'),                            blocking_spid = bpReportXml.value('(/blocked-process-report/blocking-process/process/@spid)[1]', 'int'),                            blocking_ecid = bpReportXml.value('(/blocked-process-report/blocking-process/process/@ecid)[1]', 'int'),                            blocking_inputbuffer = bpReportXml.value('(/blocked-process-report/blocking-process/process/inputbuf/text())[1]', 'nvarchar(max)'),                            blocked_waitime = bpReportXml.value('(/blocked-process-report/blocked-process/process/@waittime)[1]', 'bigint')              ) AS bpShredded        )        INSERT 
#ReportsXML(blocked_ecid,blocked_spid,blocking_ecid,blocking_spid,              monitorloop,bpReportXml,endTime)        SELECT blocked_ecid,blocked_spid,blocking_ecid,blocking_spid,              COALESCE(monitorloop, CONVERT(nvarchar(100), bpReportEndTime, 120), 'unknown'),              bpReportXml,bpReportEndTime        FROM ShreddedWheat;               DROP TABLE #TraceXML   END   -- Organize and select blocked process reports ;WITH Blockheads AS (        SELECT blocking_spid, blocking_ecid, monitorloop, blocking_hierarchy_string        FROM #ReportsXML        EXCEPT        SELECT blocked_spid, blocked_ecid, monitorloop, blocked_hierarchy_string        FROM #ReportsXML ), Hierarchy AS (        SELECT monitorloop, blocking_spid as spid, blocking_ecid as ecid,              cast('/' + blocking_hierarchy_string as varchar(max)) as chain,              0 as level        FROM Blockheads               UNION ALL               SELECT irx.monitorloop, irx.blocked_spid, irx.blocked_ecid,              cast(h.chain + irx.blocked_hierarchy_string as varchar(max)),              h.level+1        FROM #ReportsXML irx        JOIN Hierarchy h              ON irx.monitorloop = h.monitorloop              AND irx.blocking_spid = h.spid              AND irx.blocking_ecid = h.ecid ) SELECT        ISNULL(CONVERT(nvarchar(30), irx.endTime, 120),              'Lead') as traceTime,        SPACE(4 * h.level)              + CAST(h.spid as varchar(20))              + CASE h.ecid                     WHEN 0 THEN ''                     ELSE '(' + CAST(h.ecid as varchar(20)) + ')'              END AS blockingTree,        irx.blocked_waitime,        bdp.last_trans_started as blocked_last_trans_started,        bdp.wait_resource AS blocked_wait_resource,        bgp.wait_resource AS blocking_wait_resource,        bgp.[status] AS blocked_status,        bdp.[status] AS blocking_status,        bdp.lock_mode AS blocked_lock_mode,        bdp.isolation_level as blocked_isolation_level,        bgp.isolation_level as blocking_isolation_level,        bdp.app AS blocked_app,        DB_NAME(bdp.current_db) AS blocked_db,        '-----> blocked statement' AS blocked_section,        CAST('' + irx.blocked_bfinput + '' AS XML) AS blocked_input_buffer,        CASE              WHEN bdp.frame_blocked_process_xml IS NULL THEN CAST('' + irx.blocked_bfinput + '' AS XML)              ELSE bdp.frame_blocked_process_xml        END AS frame_blocked_process_xml,        DB_NAME(bgp.current_db) AS blocking_db,        bgp.app AS blocking_app,        'blocking statement ----->' AS blocking_section,        CAST('' + irx.blocking_bfinput + '' AS XML) AS blocking_input_buffer,        CASE              WHEN bgp.frame_blocking_process_xml IS NULL THEN CAST('' + irx.blocking_bfinput + '' AS XML)              ELSE bgp.frame_blocking_process_xml        END AS frame_blocking_process_xml,        irx.bpReportXml from Hierarchy h left join #ReportsXML irx        on irx.monitorloop = h.monitorloop        and irx.blocked_spid = h.spid        and irx.blocked_ecid = h.ecid outer apply (        select              T.x.value('(./process/@waitresource)[1]', 'nvarchar(256)') AS wait_resource,              T.x.value('(./process/@lasttranstarted)[1]', 'datetime') as last_trans_started,              T.x.value('(./process/@lockMode)[1]', 'nvarchar(60)') as lock_mode,              T.x.value('(./process/@status)[1]', 'nvarchar(60)') as [status],              T.x.value('(./process/@isolationlevel)[1]', 'nvarchar(60)') as isolation_level,              
T.x.value('(./process/@currentdb)[1]', 'int') as current_db,              T.x.value('(./process/@clientapp)[1]', 'nvarchar(200)') as app,              cast(              (select SUBSTRING(txt.text,(ISNULL(T.x.value('./@stmtstart', 'int'), 0) / 2) + 1,                            ((CASE ISNULL(T.x.value('./@stmtend', 'int'), -1)                                   WHEN -1 THEN DATALENGTH(txt.text)                                   ELSE T.x.value('./@stmtend', 'int')                               END - ISNULL(T.x.value('./@stmtstart', 'int'), 0)) / 2) + 1) + CHAR(13) AS statement_txt                        from bpReportXml.nodes('//blocked-process/process/executionStack/frame') AS T(x)                        cross apply sys.dm_exec_sql_text(T.x.value('xs:hexBinary(substring((./@sqlhandle), 3))', 'varbinary(max)')) AS txt                        for XML path('')) as xml) AS frame_blocked_process_xml          from bpReportXml.nodes('//blocked-process') AS T(x) ) AS bdp outer apply (        select              T.x.value('(./process/@waitresource)[1]', 'nvarchar(256)') AS wait_resource,              T.x.value('(./process/@status)[1]', 'nvarchar(60)') as [status],              T.x.value('(./process/@isolationlevel)[1]', 'nvarchar(60)') as isolation_level,              T.x.value('(./process/@currentdb)[1]', 'int') as current_db,              T.x.value('(./process/@clientapp)[1]', 'nvarchar(200)') as app,              cast(              (select SUBSTRING(txt.text,(ISNULL(T.x.value('./@stmtstart', 'int'), 0) / 2) + 1,                            ((CASE ISNULL(T.x.value('./@stmtend', 'int'), -1)                                   WHEN -1 THEN DATALENGTH(txt.text)                                   ELSE T.x.value('./@stmtend', 'int')                               END - ISNULL(T.x.value('./@stmtstart', 'int'), 0)) / 2) + 1) + CHAR(13) AS statement_txt                        from bpReportXml.nodes('//blocking-process/process/executionStack/frame') AS T(x)                        cross apply sys.dm_exec_sql_text(T.x.value('xs:hexBinary(substring((./@sqlhandle), 3))', 'varbinary(max)')) AS txt                        for XML path('')) as xml) AS frame_blocking_process_xml               from bpReportXml.nodes('//blocking-process') AS T(x) ) AS bgp order by h.monitorloop, h.chain   DROP TABLE #ReportsXML

     

    Unfortunately I can't show my customer's context, so I will show only a sample from my own test to explain how we can use this script. The generated result set is split into three main sections.

    First section: Hierarchy blocked tree, lock resources and transaction isolation level

     

    blog_33_-_1_-_result_lock_section

     

    Let's begin with the first section. You can see here the hierarchy tree and the blocking interactions that exist between the different processes. The above picture shows the process id = 72 blocking the process id = 73. In turn, the process id = 73 is blocking other sessions (with id = 75, 77). Furthermore, the process id = 74 is at the same level as the process id = 73 and is also blocked by the process id = 72. Finally, the process id = 76 is blocked by the process id = 74. A real can of worms, isn't it?

    Displaying the blocking hierarchy tree is very useful in this case. In addition, I added the transaction isolation level used by each process, the status of the processes, and the locks and resources related to the issue. As a reminder, this information is already in the blocked process report; my task consisted of extracting it in tabular format. We will use all of it later in this blog post. For the moment, let's focus on the first hierarchy branch: 72 -> 73 -> 75 -> 77, and the resource that all the processes concerned are hitting:

    KEY: 6:72057594045595648 (089241b7b846), which we can split into three main parts:

    6 : Database id = 6 => AdventureWorks2012

    72057594045595648 : The hobt id of the partition container, which gives us the schema, table and index as follows:

    select
        s.name as [schema_name],
        o.name as table_name,
        i.name as index_name
    from sys.partitions as p
    join sys.objects as o
        on p.object_id = o.object_id
    join sys.indexes as i
        on i.object_id = p.object_id
        and i.index_id = p.index_id
    join sys.schemas as s
        on s.schema_id = o.schema_id
    where p.hobt_id = 72057594045595648

     

    blog_33_-_2_-_partition

     

    Person.Person.PK_Person_BusinessEntityID is a clustered index that includes the BusinessEntityID column.

     

    (089241b7b846) :

    This is the lock resource value that identifies the index key in the table Person.Person locked by the process id = 72. We can use the undocumented function %%lockres%% to locate the corresponding row in the table as follows:

     

    select
        BusinessEntityID
    from Person.Person
    where %%lockres%% = '(089241b7b846)'

     

    blog_33_-_3_-_lockres

     

    At this point we know that the blocking process has started a transaction in the repeatable read transaction isolation level and has not yet released the lock on the index key with value 14. This is why the session id = 73 is still pending: it is attempting to access the same resource by requesting an S lock.
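
    As an aside, if you are looking at a live occurrence rather than a trace, a minimal sketch for cross-checking the pending lock requests directly is:

    SELECT tl.request_session_id,
           tl.resource_type,
           tl.resource_description,
           tl.request_mode,
           tl.request_status,
           wt.blocking_session_id,
           wt.wait_duration_ms
    FROM sys.dm_tran_locks AS tl
    LEFT JOIN sys.dm_os_waiting_tasks AS wt
           ON wt.resource_address = tl.lock_owner_address
    WHERE tl.request_status = 'WAIT';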

    Let’s continue with the next sections of the result set:

     

    Second section: blocking and blocked input buffers and their related frames

    This second part provides detailed information about the blocked statements, including the application and database concerned.

     

    blog_33_-_4_-_blocked_session_section

     

    Likewise, the last part provides the same kind of information but for the blocking statement(s):

     

    blog_33_-_5_-_blocking_session_section

     

    We will correlate the information from the above sections. For example, if we take a look directly at the blocking input buffer of the process id = 72, we will discover the culprit, which is the following stored procedure:

     

    <blockingInputBuffer>
    EXEC TestUpdatePersonNameStyle @NameStyle
    </blockingInputBuffer>

     

    Next, the blocking frame identifies exactly the portion of code inside the stored procedure where the blocking issue has occurred:


    WAITFOR DELAY '00:02:00';

     

    OK, it seems that the stored procedure has started an explicit transaction with the repeatable read transaction isolation level and includes a WAITFOR DELAY command with a duration of 2 minutes. During this time, the different resources are still held by the transaction because there is no commit or rollback and we are in the repeatable read transaction isolation level. Let's take a look at the stored procedure code:

     

    ALTER PROCEDURE [dbo].[TestUpdatePersonNameStyle]
    (
        @NameStyle BIT,
        @BusinessEntityID INT
    )
    AS

    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

    BEGIN TRAN

    SELECT PhoneNumber
    FROM Person.PersonPhone
    WHERE BusinessEntityID = @BusinessEntityID;

    UPDATE Person.Person
    SET NameStyle = @NameStyle
    WHERE BusinessEntityID = @BusinessEntityID + 100;

    WAITFOR DELAY '00:02:00';

    ROLLBACK TRAN;

     

    This confirms what we found in the first section: the repeatable read transaction isolation level used by the blocking session. In fact, it seems that we have two different resources held by the above transaction: the first (index key = 14) and the second (index key = 14 + 100).

Now let’s switch to the blocked statement part. A quick look at the input buffer tells us that the session id = 73 is trying to access the same resource as the UPDATE part of the blocking process. This confirms what we saw in the first section: the process id = 73 is in a suspended state because it is trying to acquire an S lock on the concerned resource, which is not compatible with the X lock taken by the UPDATE statement of the process id = 72.


    SELECT * FROM Person.Person WHERE BusinessEntityID = 114;  

     

I will not repeat the same demonstration for every line in the result set, but let’s finish with the process id = 74. Going back to the first section, we can see that session id = 74 is trying to acquire an X lock on the following resource:

    KEY: 6:72057594045726720 (58e9f9de4ab6)

Let’s apply the same method as earlier; this time we easily find the corresponding index key in the table Person.PersonPhone.
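Concretely, the lookup is the same %%lockres%% trick as before, just pointed at the other table (a sketch; the hash value is the one taken from the lock resource above):

-- Locate the row of Person.PersonPhone behind the lock resource hash (sketch)
select
       BusinessEntityID
from Person.PersonPhone
where %%lockres%% = '(58e9f9de4ab6)'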

     

[Figure: result of the %%lockres%% query against Person.PersonPhone]

     

Now let’s move on to the next sections and take a look at the blocking frame:

     


    WAITFOR DELAY '00:02:00';

     

The same thing as in the first case… Finally, let’s take a look at the blocking input buffer:

     

BEGIN TRAN;

IF EXISTS(SELECT 1 FROM Person.Person WHERE BusinessEntityID = 14)
       DELETE FROM Person.PersonPhone WHERE BusinessEntityID = 14;

ROLLBACK TRAN;

     

This time it concerns an explicit transaction but with a different isolation level: read committed. You can correlate this with the first section by yourself. The blocking point concerns only the second part of the above query, as indicated by the blocked_lock column in the first section: the process id = 74 is trying to acquire an X lock on a resource that is still held by the process id = 72 (the SELECT statement run in the repeatable read isolation level).
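As a final cross-check, the isolation level reported by SQL Server for each session involved can be read directly from sys.dm_exec_sessions (a sketch for this scenario):

-- Isolation level of each session in the blocking chain (sketch)
-- transaction_isolation_level: 1 = read uncommitted, 2 = read committed,
-- 3 = repeatable read, 4 = serializable, 5 = snapshot
select
       session_id,
       transaction_isolation_level
from sys.dm_exec_sessions
where session_id in (72, 73, 74);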

The issue I faced with my customer was quite similar. You just have to replace the WAITFOR DELAY command with a series of other pieces of code that drastically deferred the transaction commit. In that case, having a precise view of the blocking tree, with the other information readable directly in a tabular format, saved us a lot of time in resolving the issue.

    Happy troubleshooting!

    UKOUG Systems Event and Exadata Content

    Jason Arneil - Sun, 2015-03-08 09:55

    I’ve been involved in organising a couple of upcoming UKOUG events.

I will be involved with the engineered systems stream for the annual UKOUG conference, which returns to Birmingham after an absence of a couple of years.

While the planning for this is at a very early stage, Martin Widlake will be giving you the inside scoop.

    The event I really want to talk about though is an event that is much more immediate:

The UKOUG Systems Event: a one-day, multi-stream event being held in London on May 20th.

    This event will feature at least 1 and possibly 2 Exadata streams. I am sure we will have a really good range of speakers with a wealth of Exadata experience.

    In addition to Exadata there will be a focus on other engineered systems platforms as well as Linux/Solaris and virtualisation. So a wide range of topics being covered in a number of different streams. If you feel you have a presentation that might be of interest, either submit a paper, or feel free to get in touch with me to discuss further.

    Note the submission deadline is 18th March.

The really big news, though, is that the event is likely to feature some serious deep dive material from Roger Macnicol. Roger is one of the people within Oracle actually responsible for writing the smart scan code.

    If you want to understand Exadata smart scans you will not be able to get this information anywhere else in the whole of Europe.

I had the privilege of seeing Roger present at E4 last year, and the information he can provide is so good that even super smart people like Tanel Poder were scribbling down a lot of what he was saying.

So to repeat: if you are interested in knowing how smart scan works, we are hoping to provide a talk with a level of detail that is only possible when one of the people inside Oracle responsible for smart scan comes to give it. In addition, he will be presenting on BDA.

    If all that was not enough, there should be a nice relaxed social event at the end of the conference where you will be able to chat over any questions you may still have!


    Blueprint for a post-LMS, Part 4

    Michael Feldstein - Sat, 2015-03-07 18:17

By Michael Feldstein

    In part 1 of this series, I talked about some design goals for a conversation-based learning platform, including lowering the barriers and raising the incentives for faculty to share course designs and experiment with pedagogies that are well suited for conversation-based courses. Part 2 described a use case of a multi-school faculty professional development course which would give faculty an opportunity to try out these affordances in a low-stakes environment. In part 3, I discussed some analytics capabilities that could be added to a discussion forum—I used the open source Discourse as the example—which would lead to richer and more organic assessments in conversation-based courses. But we haven’t really gotten to the hard part yet. The hard part is encouraging experimentation and cross-fertilization among faculty. The problem is that faculty are mostly not trained, not compensated, and otherwise not rewarded for their teaching excellence. Becoming a better teacher requires time, effort, and thought, just as becoming a better scholar does. But even faculty at many so-called “teaching schools” are given precious little in the way of time or resources to practice their craft properly, never mind improving it.

The main solution to this problem that the market has offered so far is “courseware,” which you can think of as a kind of course-in-a-box. In other words, it’s an attempt to move as much of the “course” as possible into the “ware”, or the product. The learning design, the readings, the slides, and the assessments are all created by the product maker. Increasingly, the students are even graded by the product.


This approach as popularly implemented in the market has a number of significant and fairly obvious shortcomings, but the one I want to focus on for this post is that these packages are still going to be used by faculty whose main experience is the lecture/test paradigm.[1] Which means that, whatever the courseware learning design originally was, it will tend to be crammed into a lecture/test paradigm. In the worst case, the result is that we get neither the benefit of engaged, experienced faculty who feel ownership of the course nor the benefit of an advanced learning design, because the faculty member has not learned how to implement it.

    One of the reasons that this works from a commercial perspective is that it relies on the secret shame that many faculty members feel. Professors were never taught to teach, nor are they generally given the time, money, and opportunities necessary to learn and improve, but somehow they have been made to feel that they should already know how. To admit otherwise is to admit one’s incompetence. Courseware enables faculty to keep their “shame” secret by letting the publishers do the driving. What happens in the classroom stays in the classroom. In a weird way, the other side of the shame coin is “ownership.” Most faculty are certainly smart enough to know that neither they nor anybody else is going to get rich off their lecture notes. Rather, the driver of “ownership” is fear of having the thing I know how to do in my classroom taken away from me as “mine” (and maybe exposing the fact that I’m not very good at this teaching thing in the process). So many instructors hold onto the privacy of their classrooms and the “ownership” of their course materials for dear life.

    Obviously, if we really want to solve this problem at its root, we have to change faculty compensation and training. Failing that, the next best thing is to try to lower the barriers and increase the rewards for sharing. This is hard to do, but there are lessons we can learn from social media. In this post, I’m going to try to show how learning design and platform design in a faculty professional development course might come together toward this end.

You may recall from part 2 of this series that the use case I have chosen is a faculty professional development “course,” using our forthcoming e-Literate TV series about personalized learning as a concrete example. The specific content isn’t that important except to make the thought experiment a little more concrete. The salient details are as follows:

    1. The course is low-stakes; nobody is going to get mad if our grading scheme is a little off. To the contrary, because it’s a group of faculty engaged in professional development about working with technology-enabled pedagogy, the participants will hopefully bring a sense of curiosity to the endeavor.
    2. The course has one central, course-long problem or question: What, if anything, do we (as individual faculty, as a campus, and as a broader community of teachers) want to do with so-called “personalized learning” tools and approaches? Again, the specific question doesn’t matter so much as the fact that there is an overarching question where the answer is going to be specific to the people involved rather than objective and canned. That said, the fact that the course is generally about technology-enabled pedagogy does some work for us.
    3. Multiple schools or campuses will participate in the course simultaneously (though not in lock-step, as I will discuss in more detail later in this post). Each campus cohort will have a local facilitator who will lead some local discussions and customize the course design for local needs. That said, participants will also be able (and encouraged) to have discussions across campus cohorts.
    4. The overarching question naturally lends itself to discussion among different subgroups of the larger inter-campus group, e.g., teachers of the same discipline, people on the same campus, among peer schools, etc.

That last one is critical. There are natural reasons for participants to want to discuss different aspects of the overarching question of the course with different peer groups within the course. Our goal in both course and platform design is to make those discussions as easy and immediately rewarding as possible. We are also going to take advantage of the electronic medium to blur the distinction between contributing a comment, or discussion “post,” and longer contributions such as documents or even course designs.

We’ll need a component for sharing and customizing the course materials, or “design” and “curriculum,” for the local cohorts. Again, I will choose a specific piece of software in order to make the thought experiment more concrete, but as with Discourse in part 3 of this series, my choice of example is in no way intended to suggest that it is the only or best implementation. In this case, I’m going to use the open source Apereo OAE for this component in the thought experiment.

When multiple people teach their own courses using the same existing curricular materials (like a textbook, for example), there is almost always a lot of customization that goes on at the local level. Professor A skips chapters 2 and 3. Professor B uses her own homework assignments instead of the end-of-chapter problems. Professor C adds in special readings for chapter 7. And so on. With paper-based books, we really have no way of knowing what gets used and reused, what gets customized (and how it gets customized), and what gets thrown out. Recent digital platforms, particularly from the textbook publishers, are moving in the direction of being able to track those things. But academia hasn’t really internalized this notion that courses are more often customized than built from scratch, never mind the idea that their customizations could (and should) be shared for the sake of collective improvement. What we want is a platform that makes the potential for this virtuous cycle visible and easy to take advantage of without forcing participants to sacrifice any local control (including the control to take part or all of their local course private if that’s what they want to do).

OAE allows a user to create content that can be published into groups. But published doesn’t mean copied. It means linked. We could have the canonical copy of the ETV personalized learning MOOC (for example), which includes all the episodes from all the case studies plus any supplemental materials we think are useful. The educational technology director at Some State University (SSU) could create a group space for faculty and other stakeholders from her campus. She could choose to pull some, but not all, of the materials from the canonical course into her space. She could rearrange the order. You may recall from part 3 that Discourse can integrate with WordPress, spawning a discussion for every new blog post. We could easily imagine the same kind of integration with OAE. Since anything the campus facilitator pulls from the canonical course copy will be surfaced in her course space rather than copied into it, we would still have analytics on use of the curricular materials across the cohorts, and any discussions in Discourse that are related to the original content items would maintain their linkage (including the ability to automatically publish the “best” comments from the thread back into SSU’s course space). The facilitator could also add her own content, make her space private (from the default of public), and spawn private cohort-specific conversations. In other words, she could make it her own course.

I slipped the first bit of magic into that last sentence. Did you catch it? When the campus facilitator creates a new document, the system can automatically spawn a new discussion thread in Discourse. By default, new documents from the local cohort become available for discussion to all cohorts. And with any luck, some of that discussion will be interesting and rewarding to the person creating the document. The cheap thrill of any successful social media platform is having the (ideally instant) gratification of seeing somebody respond positively to something you say or do. That’s the feeling we’re trying to create. Furthermore, because of the way OAE shares documents across groups, if the facilitator in another cohort were to pull your document into her course design, it wouldn’t have to be invisible to you the way creating a copy is. We could create instant and continuously updated feedback on the impact of your sharing. Some documents (and discussions) in some cohorts might need to be private, and OAE supports that, but the goal is to make private, cohort- (or class-)internal sharing feel something like direct messaging on Twitter. There is a place for it, but it’s not what makes the experience rewarding.

    To that end, we could even feed sharing behavior from OAE into the trust analytics I described in part 3 of this post series. One of the benefits of abstracting the trust levels from Discourse into an external system that has open APIs is that it can take inputs from different systems. It would be possible, for example, to make having your document shared into another cohort on OAE or having a lot of conversation generated from your document count toward your trust level. I don’t love the term “gamification,” but I do love the underlying idea that a well-designed system should make desired behaviors feel good. That’s also a good principle for course design.

    I’m going to take a little detour into some learning design elements here, because they are critical success factors for the platform experience. First, the Problem-based Learning (PBL)-like design of the course is what makes it possible for individual cohorts to proceed at their own pace, in their own order, and with their own shortcuts or added excursions and still enable rich and productive discussions across cohorts. A course design that requires that units be released to the participants one week at a time will not work, because discussions will get out of sync as different cohorts proceed differently, and synchronization matters to the course design. If, on the other hand, synchronization across cohorts doesn’t matter because participants are going to the discussion authentically as needed to work out problems (the way they do all the time in online communities but much less often in typical online courses), then discussions will naturally wax and wane with participant needs and there will be no need to orchestrate them. Second, the design is friendly to participation through local cohorts but doesn’t require it. If you want to participate in the course as a “free agent” and have a more traditional MOOC-like experience, you could simply work off the canonical copy of the course materials and follow the links to the discussions.

End of detour. There’s one more technology piece I’d like to add to finish off the platform design for our use case. Suppose that all the participants could log into the system with their university credentials through an identity management scheme like InCommon. This may seem like a trivial implementation detail that’s important mainly for participant convenience, but it actually adds the next little bit of magic to the design. In part 3, I commented that integrating the discussion forum with a content source enables us to make new inferences because we now know that a discussion is “about” the linked content in some sense, and because content creators often have stronger motivations than discussion participants to add metadata like tags or learning objectives that tell us more about the semantics. One general principle that is always worth keeping in mind when designing learning technologies these days is that any integration presents a potential opportunity for new inferences. In the case of single sign-on, we can go to a data source like IPEDS to learn a lot about the participants’ home institutions and therefore about their potential affinities. Affinities are the fuel that provides any social platform with its power. In our use case, participants might be particularly interested in seeing comments from their peer institutions. If we know where they are coming from, then we can do that automatically rather than forcing them to enter information or find each other manually. In a course environment, faculty might want to prioritize the trust signals from students at similar institutions over those from very different institutions. We could even generate separate conversation threads based on these cohorts. Alternatively, people might want to find people with high trust levels who are geographically near them in order to form meetups or study groups.

    And that’s it, really. The platform consists of a discussion board, a content system, and a federated identity management system that have been integrated in particular ways and used in concert with particular course design elements. There is nothing especially new about either the technology or the pedagogy. The main innovation here, to the degree that there is one, is combining them in a way that creates the right incentives for the participants. When I take a step back and really look at it, it seems too simple and too much like other things I’ve seen and too little like other things I’ve seen and too demanding of participants to possibly work. Then again, I said the same thing about blogs, Facebook, Twitter, and Instagram. They all seemed stupid to me before I tried them. Facebook still seems stupid to me, and I haven’t tried Instagram, but the point remains that these platforms succeeded not because of any obvious feat of technical originality but because they got the incentive structures right in lots of little ways that added up to something big. What I’m trying to do here with this design proposal is essentially to turn the concept of courseware inside out, changing the incentive structures in lots of little ways that hopefully add up to something bigger. Rather than cramming as much of the “course” as possible into the “ware,” reinforcing the isolation of the classroom in the process, I’m trying to make the “ware” generated by the living, human-animated course, making learning and curriculum design inherently social processes and hopefully thereby circumventing the shame reflex. And I’m trying to do that in the context of a platform and learning design that attempt to both reward and quantify social problem solving competencies in the class itself.

    I don’t know if it will fly, but it might. Stranger things have happened.[2]

    In the last post in this series, I will discuss some extensions that would probably have to be made in order to use this approach in a for-credit class as well as various miscellaneous considerations. Hey, if you’ve made it this far, you might as well read the last one and find out who dunnit.

     

    1. Of course, I recognize that some disciplines don’t do a lot of lecture/test (although they may do lecture/essay). These are precisely the disciplines in which courseware has been the least commercially successful.
    2. My wife agreeing to marry me, for instance.

    The post Blueprint for a post-LMS, Part 4 appeared first on e-Literate.