
Feed aggregator

Indexing and Transparent Data Encryption Part III (You Can’t Do That)

Richard Foote - Tue, 2015-06-16 00:28
In Part II of this series, we looked at how we can create a B-Tree index on an encrypted column, provided we do not apply salt during encryption. However, this is not the only restriction with regard to indexing an encrypted column using column-based encryption. If we attempt to create an index that is not a […]
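
For context, a hedged sketch of the kind of setup the series discusses (not taken from the post itself; table, column and algorithm names are illustrative, and an open TDE keystore is assumed):

-- column-based TDE encryption without salt, which still allows a B-Tree index
create table customers (
  id      number primary key,
  card_no varchar2(19) encrypt using 'AES256' no salt
);

-- this succeeds because the encrypted column was declared NO SALT
create index customers_card_idx on customers (card_no);
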
Categories: DBA Blogs

Dynamic Sampling

Jonathan Lewis - Mon, 2015-06-15 14:41

Following on from an OTN posting about dynamic sampling difficulties I had planned to write a blog post about the difference between “not sampling when hinted” and “ignoring the sample” – but Mohamed Houri got there before me.

It’s just worth highlighting a little detail that is often overlooked, though: there are two versions of the dynamic_sampling() hint, the cursor level and the table level, and the number of blocks sampled at a particular level depends on which version you are using. Level 4 at the cursor level, for example, will sample 64 blocks if and only if a certain condition is met, but at the table level it will sample 256 blocks unconditionally.

So try to be a little more specific when you say “I told the optimizer to use dynamic sampling …”; it’s either:

“I told the optimizer to use cursor level dynamic sampling at level X …”

or

“I told the optimizer to use table level dynamic sampling at level Y for table A and …”
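
Expressed as hints, the two forms look like this (a minimal sketch, using the t1 table that is created in the addendum below):

-- cursor-level dynamic sampling at level 4
select /*+ dynamic_sampling(4) */ count(*)
from   t1
where  owner = 'SYS' and object_type = 'SYNONYM';

-- table-level dynamic sampling at level 4, specifically for t1
select /*+ dynamic_sampling(t1 4) */ count(*)
from   t1
where  owner = 'SYS' and object_type = 'SYNONYM';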

Note – apart from the changes to dynamic sampling that allow for a level 11, there’s also a change introduced (I think) in 10g for the sample() clause applied to the table during sampling – it’s the addition of a seed() clause which ensures that when you repeat the same level you generate the same set of random rows.
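
For reference, the seed() clause mentioned here belongs to the regular sample() syntax; a minimal sketch (the percentage and seed values are arbitrary):

-- block sampling with a fixed seed: repeating the same seed value
-- returns the same set of sampled blocks
select count(*) from t1 sample block (5) seed (42);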

Addendum

Here’s a little code I wrote some time ago to check the effect of the two options at different levels. I started by creating a (nologging) table from the first 50,000 rows of all_objects, then doubled it up a few times to 400,000 rows in total, and ensured that there were no stats on the table. I then executed in turn each variant of the following anonymous pl/sql block (note that I have the execute privilege on the dbms_system package):
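
A hedged sketch of that table setup (the exact details may have differed; the anonymous block itself follows below):

-- nologging copy of the first 50,000 rows of all_objects
create table t1 nologging
as
select * from all_objects where rownum <= 50000;

-- double up three times: 50,000 -> 100,000 -> 200,000 -> 400,000 rows
insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;
insert /*+ append */ into t1 select * from t1;
commit;

-- make sure the table has no statistics
exec dbms_stats.delete_table_stats(user, 'T1')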


declare
	m_ct number;
begin
	execute immediate 'alter session set events ''10053 trace name context forever''';
	for i in 1..10 loop
		sys.dbms_system.ksdwrt(1,'=============');
		sys.dbms_system.ksdwrt(1,'Level ' || i);
		sys.dbms_system.ksdwrt(1,'=============');

		execute immediate 
			'select /*+ dynamic_sampling('    || i || ') */ count(*) from t1 ' ||
--			'select /*+ dynamic_sampling(t1 ' || i || ') */ count(*) from t1 ' ||
			'where owner = ''SYS'' and object_type = ''SYNONYM'''
			into m_ct;
	end loop;
end;
/

Obviously I could examine the resulting trace file to pick out bits of each optimisation, but for a quick check a simple grep for “sample block cnt” is almost all I need to do – with the following (slightly decorated) results from 11.2.0.4:


Table level
===========
Level 1
    max. sample block cnt. : 32
    sample block cnt. : 31
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 128
    sample block cnt. : 127
    max. sample block cnt. : 256
    sample block cnt. : 255
    max. sample block cnt. : 512
    sample block cnt. : 511
    max. sample block cnt. : 1024
    sample block cnt. : 1023
    max. sample block cnt. : 2048
    sample block cnt. : 2047
    max. sample block cnt. : 4096
    sample block cnt. : 4095
    max. sample block cnt. : 8192
    sample block cnt. : 8191
Level 10
    max. sample block cnt. : 4294967295
    sample block cnt. : 11565

Cursor level
============
No sampling at level 1
Level 2
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 64
    sample block cnt. : 63
    max. sample block cnt. : 128
    sample block cnt. : 127
    max. sample block cnt. : 256
    sample block cnt. : 255
    max. sample block cnt. : 1024
    sample block cnt. : 1023
    max. sample block cnt. : 4096
    sample block cnt. : 4095
Level 10
    max. sample block cnt. : 4294967295
    sample block cnt. : 11565


You’ll notice that the cursor level example didn’t do any sampling at level 1. Although the manual doesn’t quite make it clear, sampling will only occur if three conditions are met:

  • The table has no statistics
  • The table has no indexes
  • The table is involved in a join so that a sample could affect the join order and method

If only the first two conditions are met then the execution path will be a full tablescan whatever the sample looks like and the number of rows returned has no further impact as far as the optimizer is concerned – hence the third requirement (which doesn’t get mentioned explicitly in the manuals). If you do have a query that meets all three requirements then the sample size is 32 (31) blocks.
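
A minimal sketch of a statement that could satisfy all three conditions (assuming t1 and t2 are two unanalyzed, unindexed copies of the table described above):

-- a join between tables with no statistics and no indexes: cursor-level
-- sampling at level 1 can now influence the join order and join method
select /*+ dynamic_sampling(1) */ count(*)
from   t1, t2
where  t1.object_id = t2.object_id
and    t1.owner = 'SYS';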

 


CBO Series

Jonathan Lewis - Mon, 2015-06-15 14:19

About a year ago I came across a couple of useful articles from Stefan Koehler, which is when I added his name to my blog roll. As an introduction for other readers I’ve compiled an index for a series of articles he wrote about the CBO viewed, largely, from the perspective of using Oracle to run SAP. Today I realised I hadn’t got around to publishing it, and there have been a couple of additions since I first started to compile the list.

 


CRS-4995: The command ‘Modify resource’ is invalid in crsctl. Use srvctl for this command.

Oracle in Action - Mon, 2015-06-15 09:40


Today, in my 12.1.0.2 cluster, I encountered the above error message while trying to modify the ACL of an ASM cluster file system created on volume VOL1 in the DATA diskgroup:

[root@host01 ~]# crsctl modify resource ora.data.vol1.acfs -attr "ACL='owner:root:rwx,pgrp:dba:rwx,other::r--'"

CRS-4995: The command 'Modify resource' is invalid in crsctl. Use srvctl for this command.

I resolved the problem by adding the -unsupported flag:

[root@host01 ~]# crsctl modify resource ora.data.vol1.acfs -attr "ACL='owner:root:rwx,pgrp:dba:rwx,other::r--'" -unsupported

 

Hope it helps!!

References:
Oracle Issue running 12.1.0.2 clusterware with 11.2.0.2 database



 




The post CRS-4995: The command ‘Modify resource’ is invalid in crsctl. Use srvctl for this command. appeared first on ORACLE IN ACTION.

Categories: DBA Blogs

What is more efficient: arrays or single column values? - Oracle

Yann Neuhaus - Mon, 2015-06-15 06:38
In the last post on this topic it turned out that using an array as a column type needs more space than using a column per value in PostgreSQL. Now I'll do the same test case in Oracle.

What is more efficient: arrays or single column values?

Yann Neuhaus - Mon, 2015-06-15 05:59
In PostgreSQL (as well as in other RDBMS) you can define columns as arrays. What I wondered is: what is more efficient when it comes to space, creating several columns or just creating one column as an array? The result, at least for me, is rather surprising.
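
A minimal sketch of the two designs being compared (PostgreSQL syntax; table and column names are illustrative):

-- one column per value
create table t_columns (v1 integer, v2 integer, v3 integer);

-- a single array column holding the same values
create table t_array (v integer[]);

insert into t_columns values (1, 2, 3);
insert into t_array   values (array[1, 2, 3]);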

refhost.xml kludge is fixed

Frank van Bortel - Mon, 2015-06-15 05:50
No more missing packages: I wrote several times about manually editing refhost.xml. There's no need for it; just apply Patch 18231786.

SQL Interpolation with psql

Yann Neuhaus - Mon, 2015-06-15 04:45

The PostgreSQL psql utility provides some really nice features. One of these features is SQL interpolation, which allows us to do interesting things, e.g. reading files and analyzing the results directly in the database. This post will show how to use this by reading and analyzing sar files of a Linux server.
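
A minimal sketch of the feature itself (psql meta-commands; the file name is illustrative):

-- define a psql variable and interpolate it into SQL as a quoted literal
\set sarfile 'sa15.txt'
select :'sarfile' as file_to_load;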

“Chilling effects” revisited

DBMS2 - Sun, 2015-06-14 18:55

In which I observe that Tim Cook and the EFF, while thankfully on the right track, haven’t gone nearly far enough.

Traditionally, the term “chilling effect” referred specifically to inhibitions on what in the US are regarded as First Amendment rights — the freedoms of speech, the press, and in some cases public assembly. Similarly, when the term “chilling effect” is used in a surveillance/privacy context, it usually refers to the fear that what you write or post online can later be held against you. This concern has been expressed by, among others, Tim Cook of Apple, Laura Poitras, and the Electronic Frontier Foundation, and several research studies have supported the point.

But that’s only part of the story. As I wrote in July, 2013,

… with the new data collection and analytic technologies, pretty much ANY action could have legal or financial consequences. And so, unless something is done, “big data” privacy-invading technologies can have a chilling effect on almost anything you want to do in life.

The reason, in simplest terms, is that your interests could be held against you. For example, models can estimate your future health, your propensity for risky hobbies, or your likelihood of changing your residence, career, or spouse. Any of these insights could be useful to employers or financial services firms, and not in a way that redounds to your benefit. And if you think enterprises (or governments) would never go that far, please consider an argument from the sequel to my first “chilling effects” post:

What makes these dangers so great is the confluence of two sets of factors:

  • Some basic facts of human nature and organizational behavior — policies and procedures are biased against risk of “bad” outcomes, because people (and organizations) fear (being caught) making mistakes.
  • Technological developments that make ever more precise judgments as to what constitutes risk, or deviation from “proven-safe” profiles.

A few people have figured at least some of these dangers out. ACLU policy analyst Jay Stanley got there before I did, as did a pair of European Law and Economics researchers. Natasha Lomas of TechCrunch seems to get it. But overall, the chilling effects discussion — although I’m thrilled that it’s gotten even this far — remains much too narrow.

In a tough economy, will the day come that people organize their whole lives to appear as prudent and risk-averse as possible? As extreme as it sounds, that danger should not be overlooked. Plenty of societies have been conformist with much weaker mechanisms for surveillance (i.e., little beyond the eyes and ears of nosy neighbors).

And so I return yet again to my privacy mantra — we need to regulate information use, not just information collection and retention. To quote a third post from that July, 2013 flurry:

  • Governmental use of private information needs to be carefully circumscribed, including in most aspects of law enforcement.
  • Business discrimination based on private information needs in most cases to be proscribed as well.

As for exactly what those regulations should be — that, of course, is a complex subject in itself.

Categories: Other

Sticker Shock

Floyd Teter - Sun, 2015-06-14 17:37
A little off-subject, but still felt this experience was worth sharing.  I'll get back on track next post.

My smart phone is an iPhone 5.  My carrier is ATT.  It's been a great relationship since the iPhone first came out.  Sadly, I think it's coming to an end.

My two-year contract expired on June 1st...ATT informed me immediately that I was eligible for a phone upgrade and a new contract.  Exciting news, as I've had my eye on an iPhone 6-Plus.

Last week, I decided it was time to grab that new iPhone and renew my contract.  My wife came with...we're on the same plan and she is also upgrade eligible right now.  The local AT&T store is right around the corner, so we dropped by on the way out to lunch, figuring this would be about a 20-minute deal: cool new phones, new contract, and then off to lunch.

The experience did not quite work out as planned.  As you're probably aware, subsidized phones and 2-year contracts are on the way out with the wireless phone carriers.  More on that here.  So we were hit square between the eyes with sticker shock.  We were told we could buy new phones outright or pay in installments (including about $100 in fees...aka interest...over a two-year period).    Pretty significant bump in hardware out-of-pocket costs.

To add insult to injury, the price of carrier service has gone up around 33% for the same service levels as our old contract.  Regardless of whether we opted for a "Next" plan or not, the monthly outlay came out to the same amount.  And I thought data access was getting less expensive...

We left ATT in pretty short order and decided to try the Verizon store next door.  No help there...the numbers played out exactly the same.  The only difference was the branding of "Edge" rather than "Next".

Sprint tried to do better...they actually matched the level of service and price of our old plan.  For the first year.  Then they recovered that cost in the 2nd year.  So, over a 24-month period, the three major providers came out with the same price for service.  Contract, no contract, "Edge", "Next", or whatever...prices up 33% regardless.

We checked out Best Buy's $1 phone deal too.  I won't bore you with the details, other than to mention that the deal would not have saved us a dime on hardware or service over a two-year period.

Where I live, we have two more options.  T-Mobile and Boost Mobile.  Having checked T-Mobile, I've learned that they're leading this trend in higher service costs.  And their coverage map for my area is really spotty.  Boost Mobile, on the other hand, is offering substantially better pricing on service...which leaves me wondering how they do that, considering that they're leasing infrastructure and air time from Sprint in order to provide those services?

So, to sum up, I've learned two important things:  1) subsidized phones are now a thing of the past, and smart phones (especially Apple smart phones) are expensive; 2) carrier providers are raising costs pretty substantially.  I suppose this is the cost to the consumer of finally converting perceptions of smart phones from a "cool new thing" to a necessity of modern life.

A few days have passed and I've now managed to talk my wallet down from jumping off a ledge.  We've made the decision in our house to hold the line on cell phone costs.

The sticker shock has made my old iPhone 5 look much, much better in my eyes.  I may just keep it until it dies. Or perhaps switch to the much lower-priced 'Droid-based OnePlus?

As far as increased provider costs, I imagine I'll be lowering my data plan and depending on the ever-increasing availability of free wifi.  Once that becomes less practical, I'll have to consider options...maybe switch back to a "dumb phone" and reconsider carrying a wifi-enabled tablet?  Yuk, that even sounds ugly :(

I suppose I've known for years that cell phone sticker shock was coming...but that doesn't make it any easier to deal with now that it's here.

The EDUCAUSE NGDLE and an API of One’s Own

Michael Feldstein - Sun, 2015-06-14 14:04

By Michael Feldstein

I have been meaning for some time to get around to blogging about the EDUCAUSE Learning Initiative’s (ELI’s) paper on a Next-Generation Digital Learning Environment (NGDLE) and Tony Bates’ thoughtful response to it. The core concepts behind the NGDLE are that a next-generation digital learning environment should have the following characteristics:

  • Interoperability and Integration
  • Personalization
  • Analytics, Advising, and Learning Assessment
  • Collaboration
  • Accessibility and Universal Design

The paper also suggests that the system should be modular. They draw heavily on an analogy to LEGOs and make a call for more robust standards. In response, Bates raises three concerns:

  1. He is suspicious of a potentially heavy and bureaucratic standards-making process that is vulnerable to undue corporate influence.
  2. He worries that LEGO is a poor metaphor that suggests an industrialized model.
  3. He is concerned that, taken together, the ELI requirements for an NGDLE will push us further in the direction of computer-driven rather than human-driven classes.

As it happens, ELI’s vision for NGDLE bears a significant resemblance to a vision that some colleagues and I came up with ten years ago when we were trying to help the SUNY system find an LMS that would fit the needs of all 64 campuses,[1] ranging from small, rural community colleges to R1 universities to medical and ophthalmology schools to a school of fashion. We got pretty deep into thinking about the implementation details, so it’s been on my mind to write my own personal perspective on the answers to Tony’s questions, based in large part on that previous experience. In the meantime, Jim Groom, who has made a transition from working at a university to working full-time at Reclaim Hosting, has written a series of really provocative and, to me, exciting posts on the future of the digital learning environment from his own perspective. Jim shares the starting assumption of the ELI and SUNY that a learning environment should be “learner-centric,” but he has a much more fully developed (and more radical) idea of what that really means, based on his previous work with A Domain of One’s Own. He also, in contrast to the ELI and SUNY teams, does not start from the assumption that “next-generation” means evolving the LMS. Rather, the questions he seems to be asking are “What is the minimum amount of technical infrastructure required to create a rich digital learning environment?” and “Of that minimal amount of infrastructure we need, what is the minimal amount that needs to be owned by the institution rather than the learner?” I see these trains of thought emerging in his posts on a university API, a personal API, and a syndication bus. What’s exciting to me about these posts is that, even though Jim is starting from a very different set of assumptions, he is also converging on something like the vision we had for SUNY.

In this post, I’m going to try to respond to both Tony and Jim. One of the challenges of this sort of conversation is that the relationship between the technical architecture and the possibilities it creates for the learners is complex. It’s easy to oversimplify or even conflate the two if we’re not very careful. So one of the things that I’m going to try to do here is untangle the technical talk from the functional talk.

I’ll start with Tony Bates’ concerns.

The Unbearable Heaviness of Standards

This is the most industry-talky part of the post, but it’s important for the later stuff. So if talk of Blackboard and Pearson sitting around a technical standards development table turns you off, please bear with me.

Bates writes,

First, this seems to be much too much of a top-down approach to developing technology-based learning environments for my taste. Standards are all very well, but who will set these standards? Just look at the ways standards are set in technology: international committees taking many years, with often powerful lobby groups and ‘rogue’ corporations trying to impose new or different standards.

Is that what we want in education? Or will EDUCAUSE go it alone, with the rest of the world outside the USA scrambling to keep up, or worse, trying to develop alternative standards or systems? (Just watch the European Commission on this one.) Attempts to standardize learning objects through meta-data have not had much success in education, for many good reasons, but EDUCAUSE is planning something much more ambitious than this.

Let me start by acknowledging, as somebody who has been involved in the sausage-making, that the technical standards development process is inherently difficult and fraught and that, because it is designed to produce a compromise that everybody can live with, it rarely produces a specification that anybody is thrilled with. Technical standards-making sucks, and its output often sucks as well. In fact, both process and output generally suck so badly that they collectively beg the question: Why would anyone ever do it? The answer is simple: Standards are usually created when the pain of not having a standard exceeds the pain of creating and living with one.

One of the biggest pains driving technical standards-making in educational technology has been the pain of vendor lock-in. Back in the days when Blackboard owned the LMS market and the LMS product category pretty much was the educational technology market, it was hard to get anyone developing digital learning tools or digital content to integrate with any other platform. Because there were no integration standards, anyone who wanted to integrate with both Blackboard and Moodle would have to develop that integration twice. Add in D2L and Sakai—this was pre-Canvas—and you had four times the effort. This is a problem in any field, but it’s particularly a problem in education because neither students nor courses are widgets. This means that we need a ton of specialized functionality, down to a very fine level. For example, both art historians and oncologists need image annotation tools to teach their classes digitally, but they use those tools very differently and therefore need different features. Ed tech is full of tiny (but important) niches, which means that there are needs for many tools that will make nobody rich. You’re not going to see a startup go to IPO with their wicked good art history image annotation tool. And so, inevitably, the team that develops such a tool will start small and stay small, whether they are building a product for sale, an open source project, or some internal project for a university or for their own classes. Having to develop for multiple platforms is just not feasible for a small team, which means the vast majority of teaching functionality will be available only on the most widely adopted platform. Which, in turn, makes that platform very hard to leave, because you’d also have to give up all those other great niche capabilities developed by third parties.

But there was a chicken-and-egg problem. To Tony’s point about the standards process being prone to manipulation, Blackboard had nothing to gain and a lot to lose from interoperability standards back when they dominated the market. They had a lot to gain from talking about standards, but nothing to gain (and a lot to lose) by actually implementing good standards. In those days, the kindest interpretation of their behavior in the IMS (which is the main technical standards body for ed tech) is that standards-making was not a priority for them. A more suspicious mind might suspect that there were times when they actively sabotaged those efforts. And they could, because a standard that wasn’t implemented by the platform used by 70% of the market was not one that would be adopted by those small tool makers. They would still have to build at least two integrations—one for Blackboard and one for everyone else. Thankfully, two big changes in the market disrupted this dynamic. First, Blackboard lost its dominance, thanks in part to the backlash among customers against just such anti-competitive behavior. It is no coincidence that then-CEO Michael Chasen chose to retain Ray Henderson, who was known for his long-standing commitment to open standards (and…um…actually caring about customer needs) right at the point when Blackboard backlash was at its worst and the company faced the probability of a mass exodus as they killed off WebCT. Second, content-centric platforms became increasingly sophisticated with consequent increasingly sophisticated needs for integrating other tools. This was driven by the collapse of the textbook publishers’ business model and their need to find some other way to justify their existence, but it was a welcome development for standards both because it brought more players to the table and because the world desperately needed (and still needs) alternative visions to the LMS for a digital learning environment, and the textbook publishers have the muscle to actually implement and drive adoption of their own visions. It doesn’t matter so much whether you like those visions or the players who are pushing them (although, honestly, almost anything would be a welcome change from the bento box that was and, to a large degree, still is the traditional LMS experience). What mattered from the standards-making perspective is that there were more players who had something to prove in the market and whose ideas about how niche functionality should integrate with the larger learning experience that their platform affords was not all the same. As a result, we are getting substantially richer and more polished ed tech integration standards more quickly from the IMS than we were getting a decade ago.

Unfortunately, the change in the market only helps with one of the hard problems of technical standards-making in ed tech. Another one, which Bates alludes to with his comment about failed efforts to standardize metadata for learning objects, is finding the right level of abstraction. There are a lot of reasons why learning objects have failed to gain the traction that advocates had hoped, but one good one is that there is no such thing as a learning object. At least, not one that we can define generically. What is it that syllabi, quizzes, individual quiz questions, readings, videos, simulations, week-long collections of all these things (or “modules”), and 15-week collections of these things (or “courses”) have in common? It is tempting to pretend that all of these things are alike in some fundamental way so that we can easily reuse them and build new things with them. You know…like LEGOs. If they were, then it would make sense to have one metadata standard to describe them all, because it would mean that the main challenge of building a new course out of old pieces would be finding the right pieces, and a metadata standard can help with that.

Alas.

Folks who are non-technical tend to think of software as a direct implementation of their functional needs, and their understanding of technical standards flows from that view of the world. As a result, it’s easy to overgeneralize the lesson of the learning object metadata standards failures. But the history of computing is one of building up successive layers of abstraction. For example, TCP/IP is a low-level technical standard that enables internet servers to connect to and communicate with each other, whether that communication takes the form of sending email, transferring a file, or looking up the address of a web site. Most of us don’t know about or care about what sorts of connections TCP/IP allows or doesn’t allow. At our level, it is indistinguishable from LEGOs in the sense that we see these pieces fitting together generically and we don’t see a need for them to do anything else. But the programmers who built TCP/IP implemented it on top of the C programming language (which was standard in the informal sense that eventually became a Standard(TM) in the formal sense), which compiled to a number of different machine languages for different computer chips, making those chips more like LEGOs. Then other programmers created HTML and Javascript as abstraction layers on top of TCP/IP, making web pages like LEGOs in the sense that any web server can serve any standards-conformant web page and any browser can read any such web page. From here, higher layers of abstraction get dicier, which is probably why we don’t have many higher-level Standards(TM). Instead, we start getting into things called “libraries” and “frameworks”. These are bits of code that are re-usable by enough developers that they are worth sharing and adopting, but not so much that they are worth going through the pain of formal standards development or become universal through some other means. And then, of course, there is just a vast amount of development on the web that is individual to the project and cannot be standardized, whether formally or informally. If you try to standardize that which is not standard, chances are that your “standard” will remain pretty non-standard.

So there is a generic danger that if we try to build a standard at the wrong level of abstraction, we will fail. But in education there is also the danger that we will try to build at the wrong level of abstraction and succeed. What I mean by this is we will enshrine a limited or even stunted vision of what kinds of teaching and learning a digital learning environment should support into the fundamental building blocks that we use to create new learning environments and learning experiences.

In What Sense Like LEGOs?

To wit, Bates writes:

A next generation digital learning environment where all the bits fit nicely together seems far too restrictive for the kinds of learning environments we need in the future. What about teaching activities and types of learning that don’t fit so nicely?

We need actually to move away from the standardization of learning environments. We have inherited a largely industrial and highly standardized system of education from the 19th century designed around bricks and mortar, and just as we are able to start breaking way from rigid standardization EDUCAUSE wants to provide a digital educational environment based on standards.

I have much more faith in the ability of learners, and less so but still a faith in teachers and instructors, to be able to combine a wide range of technologies in the ways that they decide makes most sense for teaching and learning than a bunch of computer specialists setting technical standards (even in consultation with educators).

Audrey Watters captured the subtlety of this challenge beautifully in her piece on the history of LEGO Mindstorms:

In some ways, the educational version of Mindstorms faces a similar problem as it struggles to balance imagination with instructions. As the product have become more popular in schools, Lego Education has added new features that make Mindstorms more amenable to the classroom, easier for teachers to use: portfolios, curriculum, data-logging and troubleshooting features for teachers, and so on.

“Little by little, the subversive features of the computer were eroded away. Instead of cutting across and challenging the very idea of subject boundaries, the computer now defined a new subject; instead of changing the emphasis from impersonal curriculum to excited live exploration by students, the computer was now used to reinforce School’s ways. What had started as a subversive instrument of change was neutralized by the system and converted into an instrument of consolidation.” – Seymour Papert, The Children’s Machine

That constructionist element is still there, of course – in Lego the toy and in Lego Mindstorms. Children of all ages continue to build amazing things. Yet as Mindstorms has become a more powerful platform – in terms of its engineering capabilities and its retail and educational success – it has paradoxically perhaps also become a less playful one.

There is a fundamental tension between making something more easily adoptable for a broad audience and making it challenging in the way that education should be challenging, i.e., that it is generative and encourages creativity (a quality that Amy Collier, Jen Ross, and George Veletsianos have started calling “not-yetness“). I don’t know about you, but when I was a kid, my LEGO kits didn’t look like this:

If I wanted to build the Millennium Falcon, I would have to figure out how to build it from scratch, which meant I was more likely to decide that it was too hard and that I couldn’t do it. But it also meant I was much more likely to build my own idea of a space ship rather than reproducing George Lucas’ idea. This is a fundamental and inescapable tension of educational technology (as well as the broad reuse or mass production of curricular materials), and it increases exponentially when teachers and administrators and parents are added as stakeholders in the mix of end users. But notice that, even with the real, analog-world LEGO kits, there are layers of abstraction and standardization. Standardizing the pin size on the LEGO blocks is generative because it suggests more possibilities for building new stuff out of the LEGOs. Standardizing the pieces to build one specialized model is reductive because it suggests fewer possibilities for building new stuff out of the LEGOs. To find ed tech interoperability standards that are generative rather than reductive, we need to first find the right level of abstraction.

What Does Your Space Ship Look Like?

This brings us to Tony Bates’ third concern:

I am becoming increasingly disturbed by the tendency of software engineers to force humans to fit technology systems rather than the other way round (try flying with Easyjet or Ryanair for instance). There may be economic reasons to do this in business enterprises, but we need in education, at least, for the technology to empower learners and teachers, rather than restrict their behaviour to fit complex technology systems. The great thing about social media, and the many software applications that result from it, is its flexibility and its ability to be incorporated and adapted to a variety of needs, despite or maybe even because of its lack of common standards.

When I look at EDUCAUSE’s specifications for its ‘NGDLE-conformant standards’, each on its own makes sense, but when combined they become a monster of parts. Do I want teaching decisions influenced by student key strokes or time spent on a particular learning object, for instance? Behind each of these activities will be a growing complexity of algorithms and decision-trees that will take teachers and instructors further away from knowing their individual students and making intuitive and inductive decisions about them. Although humans make many mistakes, they are also able to do things that computers can’t. We need technology to support that kind of behaviour, not try to replace it.

I read two interrelated concerns here. One is that, generically speaking, humans have a tendency to move too far in the direction of standardizing that which should not be standardized in an effort to achieve scalability or efficiency or one of those other words that would have impressed the steel and railroad magnates of a hundred years ago. This results in systems that are user-unfriendly at best and inhumane at worst. The second, more education-specific concern I’m hearing is that NGDLE as ELI envisions it would feed the beast that is our cultural mythology that education can and should be largely automated, which is pretty much where you arrive if you follow the road of standardization ad absurdum. So again, it comes down to standardizing the right things at the right levels of abstraction so that the standards are generative rather than reductive.

I’ll give an example of a level of ed tech interoperability that achieves a good level of LEGOicity.[2] Whatever digital learning environment you choose, whether it’s next-generation, this-generation, last-generation, or whatever-generation, there’s a good chance that you are going to want it to have some sense of “class-ness”, by which I mean that you will probably want to define a group of people who are in a class. This isn’t always true, but it is often true. And once you decide that you need that, you then need to specify who is in the class. That means, for every single class section that needs a sense of group, you need to register those users in the new system. If the system supports multiple classes that the students might be in (like an LMS, for example), then you’ll need unique identifiers for the class groups so that the system doesn’t get them mixed up, and you will also need human-readable identifiers (which may or may not be unique) so that the humans don’t get them mixed up and get lost in the system. Depending on the system, you may also want it to know when the class starts and ends, when it meets, who the teacher is, and so on. Again, not all digital learning environments require this information, but many do, including many that work very differently from each other. Furthermore, trying to move this information manually by, for example, asking your students to register themselves and then join a group themselves is…challenging. It makes sense to create a machine-to-machine method for sharing this information (a.k.a. an application programming interface, or API) so that the humans don’t have to do the tedious and error-prone manual work, and it makes sense to have this API be standard so that anybody developing a digital learning environment or learning tool anywhere can write one set of integration code and get this information from the relevant university system that has it, regardless of the particular brand or version of the system that the particular university is using. The IMS actually has two different standards—LIS and LTI—that do subsets of this sort of thing in different ways. Each one is useful for a particular and different set of situations, so it’s rare that you would be in a position of having to pick between the two. In most cases, one will be obviously better for you than the other. The existence and adoption of these standards are generative, because more people can build their own tools, or next-generation digital learning environments, or whatever, and easily make them work well for teachers and students by saving them from that tedious and frustrating registration and group creation workflow.

Notice the level of abstraction we are at. We are not standardizing the learning environment itself. We are standardizing the tools necessary for developers to build a learning environment. But even here, there are layers. Think about your mobile phone. It takes a lot of people with a lot of technical expertise a lot of time to build a mobile phone operating system. It takes a single 12-year-old a day to build a simple mobile phone app. This is one reason why there are only a few mobile phone operating systems which all tend to be similar while there are many, many, mobile apps that are very different from each other. Up until now, building digital learning environments has been more like building operating systems than like building mobile apps. When my colleagues and I were thinking about SUNY’s digital learning environment needs back in 2005, we wanted to create something we called a Learning Management Operating System (LMOS), but not because we thought that either learning management or operating systems were particularly sexy. To the contrary, we wanted to standardize the unsexy but essential foundations upon which a million billion sexy learning apps could be built by others. Try to remember what your smart phone was like before you installed any apps on it. Pretty boring, right? But it was just the right kind of standardized boring stuff that enabled such miracles of modern life as Angry Birds and Instagram. That’s what we wanted, but for teaching and learning.

Toward University APIs

Let’s break this down some more. Have you ever seen one of these sorts of prompts on your smart phone?

I bet that you have. This is one of those incredibly unsexy layers of standardization that makes incredibly sexy things happen. It enables my LinkedIn Connected app to know who I just met with and offer to make a connection with them. It lets any new social service I join know who I already know and therefore who I might want to connect with on that service. It lets the taxicab I’m ordering know where to pick me up. It lets my hotel membership apps find the nearest hotel for me. And so on. But there’s something weird going on in this screen grab. Fantastical, which is a calendar app, is asking permission to access my calendar. What’s up with that?

Apple provides a standard Calendar app that is…well…not terribly impressive. But that’s not what this dialog box is referring to. Apple also has an underlying calendaring API and data store, which is confusingly also named Calendar. It is this latter piece of unsexy but essential infrastructure that Fantastical is asking to access. It is also the unsexy piece of infrastructure that makes all the scheduling-related sexiness happen across apps. It’s the lingua franca for scheduling.

Now imagine a similar distinction between a rather unimpressive Discussions app within an LMS and a theoretical Discussions API in an LMOS. Most discussion apps have certain things in common. There are posts by authors. There are subjects and bodies and dates and times to those posts. Sometimes there are attachments. There are replies which form threads. Sometimes those threads branch. Imagine that you have all of that abstracted into an API or service. You could do a lot of things with it. For starters, you could build a different or better discussion board, the way Fantastical has done on top of Apple’s Calendar API. It could be a big thing that has all kinds of cool extra features, or it could be a little thing that, for example, just lets you attach a discussion thread anywhere on any page. Maybe you’re building an art history image annotation app and want to be able to hang a discussion thread off of particular spots on the image. Wouldn’t it be cool if you didn’t have to build all that discussion stuff yourself, but could just focus on the parts that are specific to your app? Maybe you’re not building something that needs a discussion thread at all but rather something that could use the data from the discussions app. Maybe you want to build a “Find a Study Buddy” app, and you want that app to suggest people in your class that you have interacted with frequently in class discussions. Or maybe you’re building an analytics app that looks at how often and how well students are using the class discussions. There’s a lot you could do if this infrastructure were standardized and accessible via an API. An LMOS is really a university API for teaching- and learning-relevant data and functionality, with a set of sample apps built on top of that API.

What’s valuable about this approach is that it can support and enable many different kinds of digital learning environments. If you want to build a super-duper adaptive-personalized-watching-every-click thing, an LMOS should make that easier to do. If you want to build a post-edupunk-open-ed-only-nominally-institutional thing, then an LMOS should make it easier to do that too. You can build whatever you need more quickly and easily, which means that you are more likely to build it. Done right, an LMOS should also support the five attributes that ELI is calling for:

  • Interoperability and Integration
  • Personalization
  • Analytics, Advising, and Learning Assessment
  • Collaboration
  • Accessibility and Universal Design

An LMOS-like infrastructure doesn’t require any of these things. It doesn’t require you to build analytics, for example. But by making the learning apps programmatically accessible via APIs, it makes analytics feasible if analytics are what you want. It is roughly the right level of abstraction.

It is also roughly where we are headed, at least from a technical perspective. Returning to the earlier question of “at what price standards,” I believe that we have most or all of the essential technical interoperability standards we need to build an LMOS right now. Yes, there are a couple of interesting standards-in-development that may add further value, and yes, we will likely discover further holes that need to be filled here and there, but I think we have all the basic parts that we need. This is in part due to the fact that, with IMS’s new Caliper standard, we have yet another level of abstraction that makes it very flexible. Building on the previous discussion service example, Caliper lets you define a profile for a discussion, which is really just a formalization of all the pieces that you want to share—subject, body, author, time stamp, reply, thread, etc. You can also define a profile for, say, a note-taking app that re-uses the same Caliper infrastructure. If you come up with a new kind of digitally mediated learning interaction in a new app, you can develop a new Caliper profile for it. You might start by developing it just for your own use and then eventually submit it to the IMS for ratification as an official standard when there is enough demand to justify it. This also dramatically reduces the size of the negotiation that has to happen at the standards-making table and therefore improves both speed and quality of the output.

Toward a Personal API

I hope that I have addressed Tony Bates’ concerns, but I’m pretty sure that I haven’t gotten to the core of Jim Groom’s yet. Jim wants students to own their learning infrastructure, content, and identity as much as possible. And by “own,” he means that quite literally. He wants them to have their own web domains where the substantial majority of their digital learning lives resides permanently. To that end, he has started thinking about what he calls a Personal API:

[W]hat if one’s personal domain becomes the space where students can make their own calls to the University API? What if they have a personal API that enables them to decide what they share, with whom, and for how long. For example, what if you had a Portfolio site with a robust API (which was the use case we were discussing) that was installed on student’s personal domain at portfolio.mydomain.com, and enabled them to do a few basic things via API:

  • It called the University API and populated the students classes for that semester.
  • It enabled them to pull in their assignments from a variety of sources (and even version them).
  • it also let them “submit” those assignment to the campus LMS.
  • This would effectively be enabling the instructor to access and provide feedback that the student would now have as metadata on that assignment in their portfolio.
  • It can also track course events, discussions, etc.

This is very consistent with the example I gave in my 2005 blog post about how a student’s personal blog could connect bi-directionally with an LMOS:

Suppose that, in addition to having students publish information into the course, the service broker also let the course publish information out to the student’s personal data store (read “portfolio”). Imagine that for every content item that the student creates and owns in her personal area–blog posts, assignment drafts in her online file storage, etc.–there is also a data store to which courses could publish metadata. For example, the grade book, having recorded a grade and a comment about the student’s blog post, could push that information (along with the post’s URL as an identifier) back out to the student’s data store. Now the student has her professor’s grade and comment (in read-only format, of course), traveling with her long after the system administrator closed and archived the Psych 101 course. She can publish that information to her public e-portfolio, or not, as she pleases.

Fortuitously, this vision is also highly consistent with the fundamental structure that underlies IMS Caliper. Caliper is federated. That is, it assumes that there are going to be different sources of authority for different (but related) types of content, and that there will be different sharing models. So it is very friendly to a world in which students own some data and universities own other data and could provide the “facade” necessary for the communication between the two worlds. So again, we have roughly the right level of abstraction to be generative rather than reductive. Caliper can support both a highly scaffolded and data-driven adaptive environment and a highly decentralized and extra-institutional environment. And, perhaps best of all, it lets us get to either incrementally by growing an ecosystem piece by piece rather than engineering a massive and monolithic platform.

Nifty, huh?

Believe it or not, none of this is the hard part. The hard part is the cultural and institutional barriers that prevent people from demanding the change that is very feasible from a technical perspective. But that’s another blog post (or fifty) for another time.

  1. I understand that SUNY has since added a 65th campus
  2. Yes, that is a word.

The post The EDUCAUSE NGDLE and an API of One’s Own appeared first on e-Literate.

12c Parallel Execution New Features: PX SELECTOR

Randolf Geist - Sun, 2015-06-14 13:49
Continuing my series on new 12c Parallel Execution features: I've already mentioned the new PX SELECTOR operator as part of the new Concurrent UNION ALL feature, where it plays a key role. However, starting from 12c this new operator will in general be used when it comes to executing a serial part of the execution plan, like a full scan of an object not marked parallel, or an index-based operation that can't be parallelized.

In pre-12c such serial parts get executed by the Query Coordinator itself, and the new PX SELECTOR changes that so that one of the PX slaves of a PX slave set is selected to execute that serial part.

There is not much left to say about that functionality, except that it doesn't always get used - there are still plan shapes possible in 12c, depending on the SQL constructs used and combined, that show the pre-12c plan shape where the Query Coordinator executes the serial part.

Let's have a look at a simple example to see in more detail what difference the new operator makes to the overall plan shape and runtime behaviour:

create table t1 as select * from dba_objects;

exec dbms_stats.gather_table_stats(null, 't1')

alter table t1 parallel;

create table t2 as select * from dba_objects;

exec dbms_stats.gather_table_stats(null, 't2')

create index t2_idx on t2 (object_name);

select /*+ optimizer_features_enable('11.2.0.4') */
*
from
t1
, t2
where
t1.object_id = t2.object_id
and t2.object_name like 'BLUB%'
;

-- 11.2.0.4 plan shape
-----------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
-----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | Q1,01 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | Q1,01 | PCWP | |
| 4 | BUFFER SORT | | Q1,01 | PCWC | |
| 5 | PX RECEIVE | | Q1,01 | PCWP | |
| 6 | PX SEND BROADCAST | :TQ10000 | | S->P | BROADCAST |
| 7 | TABLE ACCESS BY INDEX ROWID| T2 | | | |
|* 8 | INDEX RANGE SCAN | T2_IDX | | | |
| 9 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
|* 10 | TABLE ACCESS FULL | T1 | Q1,01 | PCWP | |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")
8 - access("T2"."OBJECT_NAME" LIKE 'BLUB%')
filter("T2"."OBJECT_NAME" LIKE 'BLUB%')
10 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"T1"."OBJECT_ID"))

-- 12.1.0.2 plan shape
--------------------------------------------------------------------------------------------
| Id | Operation | Name | TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | |
| 1 | PX COORDINATOR | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10001 | Q1,01 | P->S | QC (RAND) |
|* 3 | HASH JOIN | | Q1,01 | PCWP | |
| 4 | JOIN FILTER CREATE | :BF0000 | Q1,01 | PCWP | |
| 5 | PX RECEIVE | | Q1,01 | PCWP | |
| 6 | PX SEND BROADCAST | :TQ10000 | Q1,00 | S->P | BROADCAST |
| 7 | PX SELECTOR | | Q1,00 | SCWC | |
| 8 | TABLE ACCESS BY INDEX ROWID BATCHED| T2 | Q1,00 | SCWC | |
|* 9 | INDEX RANGE SCAN | T2_IDX | Q1,00 | SCWP | |
| 10 | JOIN FILTER USE | :BF0000 | Q1,01 | PCWP | |
| 11 | PX BLOCK ITERATOR | | Q1,01 | PCWC | |
|* 12 | TABLE ACCESS FULL | T1 | Q1,01 | PCWP | |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access("T1"."OBJECT_ID"="T2"."OBJECT_ID")
9 - access("T2"."OBJECT_NAME" LIKE 'BLUB%')
filter("T2"."OBJECT_NAME" LIKE 'BLUB%')
12 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"T1"."OBJECT_ID"))
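
As a side note, plan shapes like the ones above can be pulled from the cursor cache right after running the query; a minimal sketch, assuming the statement was just executed in the same session:

-- display the execution plan of the last statement executed in this session
select * from table(dbms_xplan.display_cursor(format => 'TYPICAL'));
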
The pre-12c plan shape here shows two significant things that I want to emphasize:

First, this plan shape only requires a single PX slave set since the Query Coordinator takes over the part that needs to be re-distributed, so although we have a plan shape that requires re-distribution there's only a single PX slave set involved. In case there is at least one operation that gets executed in parallel and requires re-distribution there always will be two PX slave sets.

Second, the plan shape demonstrates that parts of a Parallel Execution plan that get executed serially by the Query Coordinator require an additional BUFFER SORT operation. The HASH JOIN operation itself is blocking while it is consuming the left row source for building the hash table, so there is no true requirement to add another BUFFER SORT after the PX RECEIVE operation, but it looks like a pretty strict rule that any serial activity that involves the Query Coordinator adds a BUFFER SORT operation after re-distribution - I assume the reasoning for this is that the Query Coordinator isn't available for "coordinating" the PX slaves as long as it is actively involved in executing serial operations, hence the need to block any other parallel activity.

This normally shouldn't be too relevant to performance since you should only execute operations serially that are tiny and not worth running in parallel, so buffering them shouldn't add much overhead, but it's just another reason why you see additional BUFFER SORT operations in parallel plans that are not there in serial-only plans.

The 12c plan shape shows the new PX SELECTOR operator that now executes the serial part of the execution plan instead of the Query Coordinator. This also adds new decorators in the IN-OUT column called "SCWC" and "SCWP" respectively, which you won't find in pre-12c plans - they are probably meant to read "Serial Combined With Child/Parent", similar to "PCWC/PCWP".

The good thing about the new PX SELECTOR is that the need for an additional BUFFER SORT operator is now gone.

However, one side-effect of the new operator for this particular plan shape here is that now a second PX slave set is allocated, although only one PX slave actually will get used at runtime. Note that for other plan shapes that need two PX slave sets anyway this doesn't matter.

Another good thing about the new PX SELECTOR operator is that it avoids an odd bug that sometimes happens with Serial->Parallel redistributions when the Query Coordinator is involved. This bug causes some delay to the overall execution that usually isn't too relevant since it only adds approx 1-2 seconds delay (but it can occur several times per execution so these seconds can add up) and therefore is rarely noticed when a Parallel Execution might take several seconds / minutes typically. I might cover this bug in a separate blog post.

Unrelated to the PX SELECTOR operator, the 12c plan shape also demonstrates that in 12c the way Bloom filters are shown in the plan has been improved. The 11.2.0.4 version includes the same Bloom filter, as you can see from the "Predicate Information" section of the plan, but doesn't make it that obvious from the plan shape that it is there (and sometimes in pre-12c it doesn't even show up in the "Predicate Information" section but is still used).

Next Generation Outline Extractor 2.0.4.887 Released

Tim Tow - Sun, 2015-06-14 08:36
We recently released an updated version of the Next Generation Outline Extractor. This new version, 2.0.4.887, addresses three issues:


  • Fixed an issue where the username and password passed via the command line were improperly logged
  • Fixed an issue reading MaxL XML data sources when the alias or UDA contained XML-encoded characters such as the ampersand (&) character.
  • Updated labels on the Input Source tab of the user interface to clarify their purpose.
Here is a screenshot showing the updated labeling.


Due to the architecture of the Oracle Essbase APIs, it is generally much faster to use the MaxL Outline XML extracts when processing an Essbase Outline extract.  The Next Generation Outline Extractor still uses the Essbase Java API during this extract, but it is able to minimize the number of calls.  The second option shown above, Extract and Process MaxL Outline XML, will automatically extract the Outline XML from the cube during the processing.  The third option shown, Use Previously Extracted MaxL Outline XML, uses (obviously) an Outline XML file that has already been extracted.
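
For reference, the Outline XML that the second and third options rely on can also be produced manually with a MaxL statement along the following lines - the application/database name is just an example and the exact clause may vary by Essbase release, so treat this as a sketch and check the MaxL reference for your version:

export outline Sample.Basic all dimensions to xml_file 'sample_basic_outline.xml';

The resulting file is what you would point the "Use Previously Extracted MaxL Outline XML" option at.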
Thank you to everyone who reported issues or made suggestions as you help make this utility better!
Categories: BI & Warehousing

A free PostgreSQL cloud database?

Yann Neuhaus - Sun, 2015-06-14 04:10
Recently I was looking for a free PostgreSQL cloud database service for testing. Why? Because I'd like to use such a cloud instance for testing no matter which workstation or OS I am currently working on. Another reason is that I could prepare some demos at home and use the same demos at work without needing to worry about taking the database with me each time.

APEX Meetup Frankfurt

Denes Kubicek - Sun, 2015-06-14 03:47
On 26.06.2015 at 17:00 we will meet in Frankfurt am Main for another APEX Meetup. Thanks to Sabine Heimsath and Moritz Klein for organizing it. I will show there how to configure a local XE database together with ORDS and Glassfish. After that, any version of APEX can easily be installed on top. The advantage is that I can use all the features (RESTful Services, XLS upload), and exchanging the images and the configuration for a new version of APEX is no longer a problem.

The address is:

Ericsson Telekommunikation GmbH
Herriotstr. 1
Frankfurt

Our next meetup is in Frankfurt on the 26th of June. Thanks to Sabine Heimsath and Moritz Klein, we will meet at Ericsson Telekommunikation GmbH, Herriotstr. 1, Frankfurt. I will demonstrate how to install XE with ORDS and Glassfish and how to upgrade to APEX 5.0 on a local virtual machine.

Categories: Development

RMAN -- 2 : ArchiveLog Deletion Policy

Hemant K Chitale - Sat, 2015-06-13 08:54
Most Internet references about defining the ArchiveLog Deletion Policy relate to the necessity to preserve ArchiveLogs for Standby databases.

For example, the configuration here prevents deletion unless an ArchiveLog has been applied on a Standby:

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/snapcf_orcl.f'; # default

RMAN>

But it is also possible to configure it differently. For example, for a database without a Standby, I can configure it to prevent deletion unless a backup of the ArchiveLog has been made (to disk in this case):

RMAN> configure archivelog deletion policy to backed up 1 times to device type disk;

old RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO APPLIED ON ALL STANDBY;
new RMAN configuration parameters:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
new RMAN configuration parameters are successfully stored

RMAN> show all;

RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO BACKUPSET;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
CONFIGURE COMPRESSION ALGORITHM 'BASIC' AS OF RELEASE 'DEFAULT' OPTIMIZE FOR LOAD TRUE ; # default
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 1 TIMES TO DISK;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/snapcf_orcl.f'; # default

RMAN>

Let's see how this plays out.
RMAN> sql 'alter system archive log current ';

sql statement: alter system archive log current

RMAN> delete archivelog all;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=52 device type=DISK
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_623_bqrjp5gx_.arc thread=1 sequence=623
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_624_bqrjpsb3_.arc thread=1 sequence=624
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_625_bqrjq8kj_.arc thread=1 sequence=625
RMAN-08138: WARNING: archived log not deleted - must create more backups
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_626_bqrjqfdq_.arc thread=1 sequence=626

RMAN>

RMAN raised a WARNING indicating that deletion of the ArchiveLog is not permitted until a Backup has been taken.  Thus, you can protect your ArchiveLogs from deletion by RMAN commands if they have not been backed up.
NOTE : This does NOT prevent non-RMAN commands (e.g. cron jobs with shell scripts) from deleting ArchiveLogs !

Let me back up and then delete the ArchiveLogs.

RMAN> backup as compressed backupset archivelog all;

Starting backup at 13-JUN-15
current log archived
using channel ORA_DISK_1
using channel ORA_DISK_2
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=623 RECID=9 STAMP=882312517
channel ORA_DISK_1: starting piece 1 at 13-JUN-15
channel ORA_DISK_2: starting compressed archived log backup set
channel ORA_DISK_2: specifying archived log(s) in backup set
input archived log thread=1 sequence=624 RECID=10 STAMP=882312537
input archived log thread=1 sequence=625 RECID=11 STAMP=882312552
input archived log thread=1 sequence=626 RECID=12 STAMP=882312557
channel ORA_DISK_2: starting piece 1 at 13-JUN-15
channel ORA_DISK_1: finished piece 1 at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_13/o1_mf_annnn_TAG20150613T225210_bqrjwtfd_.bkp tag=TAG20150613T225210 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting compressed archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=627 RECID=13 STAMP=882312730
channel ORA_DISK_1: starting piece 1 at 13-JUN-15
channel ORA_DISK_2: finished piece 1 at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_13/o1_mf_annnn_TAG20150613T225210_bqrjwtg3_.bkp tag=TAG20150613T225210 comment=NONE
channel ORA_DISK_2: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: finished piece 1 at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/backupset/2015_06_13/o1_mf_annnn_TAG20150613T225210_bqrjwvp1_.bkp tag=TAG20150613T225210 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 13-JUN-15

Starting Control File and SPFILE Autobackup at 13-JUN-15
piece handle=/NEW_FS/oracle/FRA/ORCL/autobackup/2015_06_13/o1_mf_s_882312732_bqrjwwsc_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 13-JUN-15

RMAN> delete archivelog all;

released channel: ORA_DISK_1
released channel: ORA_DISK_2
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=35 device type=DISK
allocated channel: ORA_DISK_2
channel ORA_DISK_2: SID=52 device type=DISK
List of Archived Log Copies for database with db_unique_name ORCL
=====================================================================

Key Thrd Seq S Low Time
------- ---- ------- - ---------
9 1 623 A 07-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_623_bqrjp5gx_.arc

10 1 624 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_624_bqrjpsb3_.arc

11 1 625 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_625_bqrjq8kj_.arc

12 1 626 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_626_bqrjqfdq_.arc

13 1 627 A 13-JUN-15
Name: /NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_627_bqrjwt3k_.arc


Do you really want to delete the above objects (enter YES or NO)? YES
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_623_bqrjp5gx_.arc RECID=9 STAMP=882312517
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_624_bqrjpsb3_.arc RECID=10 STAMP=882312537
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_625_bqrjq8kj_.arc RECID=11 STAMP=882312552
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_626_bqrjqfdq_.arc RECID=12 STAMP=882312557
deleted archived log
archived log file name=/NEW_FS/oracle/FRA/ORCL/archivelog/2015_06_13/o1_mf_1_627_bqrjwt3k_.arc RECID=13 STAMP=882312730
Deleted 5 objects


RMAN>

Now, I am able to delete the ArchiveLogs as I have at least 1 backup (on disk) of each.
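
As a side note - and only a sketch against the same configuration as above, not part of the original demo - the backup and the deletion can also be combined in a single command, or the deletion can be restricted to logs that satisfy the policy without prompting:

RMAN> backup as compressed backupset archivelog all delete input;

RMAN> delete noprompt archivelog all backed up 1 times to device type disk;

(The FORCE keyword on DELETE would override the deletion policy altogether, so it should be used with care.)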


Categories: DBA Blogs

Personalized Learning Changes: Effect on instructors and coaches

Michael Feldstein - Fri, 2015-06-12 17:03

By Phil Hill

Kate Bowles left an interesting comment at my previous post about an ASU episode on e-Literate TV, where I argued that there is a profound change in the instructor role. Her comment:

Phil, I’m interested to know if you found anything out about the pay rates for coaches v TAs. I’m also interested in what coaches were actually paid to do — how the parameters of their employable hours fit what they ended up doing. Academics are rarely encouraged to think of their work in terms of billable increments, because this would sink the ship. But still I’m curious. Did ASU really just hike up their staffing costs in moving to personalised learning, or was there some other cost efficiency here? If the overall increase in students paid off, how did this happen? I’m grappling with how this worked for ASU in budgetary terms, as the pedagogical gain is so clear.

This comment happened to coincide with my participation in WCET’s Leadership Summit on Adaptive Learning, where similar subjects were being discussed. For the purposes of this blog post, we’ll use the “personalized learning” language, which includes use of adaptive software as a subset. Let’s first address the ASU-specific questions.

ASU

The instructor in the ASU episode was Sue McClure, who was kind enough to help answer these questions by email. Sue is a lecturer at ASU Online, which is a full-time salaried position with a teaching load of four courses per semester. Typical loads include 350 – 400 students over those four courses, and the MAT 110 personalized learning course (using Khan Academy) did not change this ratio. Sue added these observations:

During the Fall Semester of 2014 we offered our first MAT 110 courses using Khan. There was a great deal of work in the planning of the course, managing the work, working with students, hiring and managing the coaches, tracking student progress, and more. Of course, our main responsibility to help our students to be successful in our course overshadowed all of this. The work load during the first semester of our pilot was very much increased compared to previous semesters teaching MAT 110.

By the time that we reached Spring Semester of 2015 we had learned much more about methods that work best for student success, our coaches were more experienced, and our technology to track student progress and work was improved. During the second semester my work load was very much more in line with teaching MAT 110 before the pilot was begun.

The TAs (coaches) also had the same contracts as before the personalized learning approach, but they are paid on an hourly basis. I do not know if they ended up working more hours than expected in this course, but I did already note that there were many more coaches in the new course than is typical. Unfortunately, I cannot answer Kate's follow-up question about TA / coach hourly pay issues in more detail, at least for now.

Putting it together, ASU is clearly investing in personalized learning – including investing in instructional resources – rather than trying to find cost efficiencies up front. Adrian Sannier in episode 1 described the “payoff” or goal for ASU.

Adrian Sannier: So, we very much view our mission as helping those students to find their way past the pastiche of holes that they might have and then to be able to realize their potential.

So, take math as an example. Math is, I think, a very easy place for most people to understand because I think almost everybody in the country has math deficits that they’re unaware of because you get a B in third-grade math. What that means is there were a couple of things you didn’t understand. Nobody tells you what those things are—you don’t have a very clear idea—but for the rest of your life, all the things that depend on those things that you missed you will have a rocky understanding of.

So, year over year you accumulate these holes. Then finally, somebody in an admissions exam or on the SAT or the ACT faces you with a comprehensive survey of your math knowledge, and you suddenly realize, “Wow, I’m under-prepared. I might even have gotten pretty good grades, but there are places where I have holes.”

We very much view our mission as trying to figure how it is that we can serve the student body. Even though our standards haven’t changed, our students certainly have because the demographics of the country have changed, the character of the country has changed, and the things we’re preparing students for have changed.

We heard several times in episode 1 that ASU wants to scale the number of students served (with same standards) without increasing faculty at the same rate, and to do this they need to help more of today’s students succeed in math. The payoff is retention, which is how the budget will work if they succeed (remember this is a new program).

WCET Adaptive Learning Summit

The WCET summit allowed for a more generalized response. In one panel moderated by Tom Cavanaugh from University of Central Florida (UCF), panelists were asked about the Return on Investment (ROI) of personalized learning[1]. Some annoying person in the audience[2] further pressed the panel during Q&A time to more directly address the issue raised by Kate. All the panelists view personalized / adaptive learning as an investment, where the human costs in instructors / faculty / TAs / coaches actually go up, at least in early years. They do not see this as cost efficiency, at least for the foreseeable future.

Santa Fe Rainbow

(My photos from inside the conference stunk, so I’ll use a better one from dinner instead.)

David Pinkus from Western Governors University answered that the return was three words: retention, retention, retention. Tom Cavanaugh added that UCF invested in additional staff for their personalized / adaptive learning program, specifically as a method to reduce the “friction” of faculty time investment.

I should point out that e-Literate TV case studies are not exhaustive. As Michael and I described:

We did look for schools that were being thoughtful about what they were trying to do and worked with them cooperatively, so it was not the kind of journalism that was likely to result in an exposé. We went in search of the current state of the art as practiced in real classrooms, whatever that turned out to be and however well it is working.

Furthermore, the panelists at the WCET Summit tended to be from schools that were leading the pack in thoughtful personalized learning implementations. In other words, the perspective I’m sharing in this post is for generally well-run programs that consciously considered student and faculty support as the key drivers.[3] When these programs have developed enough to allow independent reviews of effectiveness, student retention – both with the course and ideally within a program – should be one of the key metrics to evaluate.

Investment vs. Sustainability

There is another side to this coin, however, as pointed out by someone at the WCET Summit[4]. With so many personalized learning programs funded by foundations and even institutional investments above normal operations, there is a question of sustainability. It’s all well and good to demonstrate that a school is investing in new programs, including investments in faculty and TA support, but I do not think that many programs have considered the sustainability of these initiatives. If the TA quoted in the previous blog is accurate, ASU went from 2 to 11 TAs for the MAT 110 course. Essex County College invested $1.2 million in an emporium remedial math program. Even if the payoff is “retention”, will there be enough improvement in retention to justify an ongoing expenditure to support a program? Sustainability should be another key metric as groups evaluate the effectiveness of personalized learning approaches.

  1. specifically adaptive learning
  2. OK, me
  3. There will be programs that do seek to use personalized / adaptive learning as a cost-cutting measure or as primarily technology-driven. But I would be willing to bet that those programs will not succeed in the long run.
  4. I apologize for forgetting who this was.

The post Personalized Learning Changes: Effect on instructors and coaches appeared first on e-Literate.

Selling SaaS

Floyd Teter - Fri, 2015-06-12 14:51
Read a great article on the vendor-customer SaaS sales dynamic - Mike Vizard wrote the article at "Talkin' Cloud".  Anthony Anzevino, Director of America Sales for AWS, describes selling to cloud customers.  And Mr. Anzevino nails it.  Rather than summarize, I'll just quote the gist of it:
Speaking this week at a Marketplace LIVE event sponsored by Telx, a provider of hosting services, Anthony Anzevino, director of America sales for AWS, says the cloud giant focuses its own inside sales efforts on some 3,500 named accounts, which is then supplemented by some 5,000 systems integrators.

Most of those customers, says Anzevino, are looking to find a more agile way to deliver IT services using a cloud service provider that is committed to innovate across a wide depth of product offerings. After all those factors are considered it's only then that the conversation turns to cost and pricing models, said Anzevino.

The single biggest deterrent to making that sale, said Anzevino, is the internal IT organization. Unless there is some plan in place regarding how the skills of the internal IT staff will be reapplied to add value to the business Anzevino said most internal IT organizations will go to significant lengths to prevent application workloads from moving out to the cloud.

Counterbalancing that influence, says Anzevino, is usually a chief financial officer that wants to outsource everything that is not core to the business. More often than not the CFO does not see IT as being strategic to the business and the real costs of delivering IT to any line of business inside that organization are generally poorly defined.

For the most part AWS became the largest cloud provider by targeting independent software vendors (ISVs) that didn’t want to invest in IT infrastructure to deliver a software-as-a-service (SaaS) application. To fuel continued growth AWS is now targeting traditional enterprise IT organizations, many of whom are eager to at least move application development activities into the cloud. The battle comes when it’s time to move those applications into production environments. More often than not because of security, performance, compliance and total cost of ownership issues the internal IT organization will make a strong case for deploying production applications inside a private cloud or in a managed hosting environment.

Yup.  See this very situation play out every day in the "whatever as a service" market.  I could not have described the situation better myself...

You can read the entire article for yourself here.

A Framework for Wearables, Glance

Oracle AppsLab - Fri, 2015-06-12 12:27

Not long ago, Ultan (@ultan) wrote about our framework for wearables and other devices. We’re calling it Glance to reflect the OAUX glance-scan-commit design philosophy.

Noel (@noelportugal) produced a video highlighting Glance on several smartwatches as well as in the car, on Android Auto.

It’s pretty sweet. Check it out:

Glance has been in the works for more than a year now, and it arose out of our collective frustration with the effort involved in developing for multiple device SDKs.

The goal of Glance is to do 75-80% of the overlapping work: calling Oracle Cloud Applications APIs, working with required cloud services like Apple Push Notifications and Google Cloud Messaging, deploying a companion mobile application, built in Oracle’s Mobile Application Framework, of course.

With all that done, we can build for and plug in new devices (ahem, Pebble Time) much more easily and with much less effort. Initially, we built Glance to support the original Pebble and Android Wear smartwatches, and the Apple Watch was our first proof-point for it.


We’re happy with the results so far, and Glance has made it much easier for us to build prototypes on new devices. Now, if only we could get access to CarPlay.