
Michael Feldstein

What We Are Learning About Online Learning...Online

WCET14 Student Panel: What do students think of online education?

Fri, 2014-11-21 15:37

Yesterday at the WCET14 conference in Portland I had the opportunity along with Pat James to moderate a student panel.[1] I have been trying to encourage conference organizers to include more opportunities to let students speak for themselves – becoming real people with real stories rather than nameless aggregations of assumptions. WCET stepped up with this session. And my new favorite tweet[2]:

@kuriousmind @lukedowden @wcet_info @PhilOnEdTech Best. Panel. Ever. Massively insightful experience.

— Matthew L Prineas (@mprineas) November 20, 2014

As I called out in my introduction, we talk about students, we characterize students, we listen to others talk about students, but we don’t do a good job in edtech talking with students.  There is no way that a student panel can be representative of all students, even for a single program or campus[3]. We’re not looking for statistical answers, but we can hear stories and gain understanding.

These four students were working adults (and I’m including a stay-at-home mom in this category) taking undergraduate online programs. They were quite well-spoken and self-aware, which made for a great conversation, including comments on the potential for faculty-student interaction that might surprise some:

A very surprising (to me) comment on class size:

And specific feedback on what doesn’t work well in online courses:

To help with viewing of the panel, here are the primary questions / topics of discussion:

The whole student panel is available on the Mediasite platform:

Thanks to the help of the Mediasite folks, I have also uploaded a YouTube video of the full panel.


  1. Pat is the executive director of the California Community College Online Education Initiative (OEI) – see her blog here for program updates.
  2. I’m not above #shameless.
  3. As can be seen from this monochromatic panel, which might make sense for Portland demographics but not from a nationwide perspective.


In Which I (Partially) Disagree with Richard Stallman on Kuali’s AGPL Usage

Wed, 2014-11-19 18:32

Since Michael is making this ‘follow-up blog post’ week, I guess I should jump in.

In my latest post on Kuali and the usage of the AGPL license, the central argument is that this license choice is key to understanding the Kuali 2.0 strategy – protecting KualiCo as a new for-profit entity in its future work to develop multi-tenant cloud-hosting code.

What I have found interesting is that in most of my conversations with Kuali community people, even those who are disillusioned seem to think the KualiCo creation makes some sense. The real frustration and pushback has been on how decisions are made, how decisions have been communicated, and how the AGPL license choice will affect the community.

In the comments, Richard Stallman chimed in.

As the author of the GNU General Public License and the GNU Affero General Public License, and the inventor of copyleft, I would like to clear up a possible misunderstanding that could come from the following sentence:

“Any school or Kuali vendor, however, that develops its own multi-tenant cloud-hosting code would have to relicense and share this code publicly as open source.”

First of all, thinking about “open source” will give you the wrong idea about the reasons why the GNU AGPL and the GNU GPL work as they do. To see the logic, you should think of them as free software licenses; more specifically, as free software licenses with copyleft.

The idea of free software is that users of software deserve freedom. A nonfree program takes freedom away from its users, so if you want to be free, you need to avoid it. The aim of our copyleft licenses is to make sure all users of our code get freedom, and encourage release of improvements as free software. (Nonfree improvements may as well be discouraged since we’d need to avoid them anyway.) See http://gnu.org/philosophy/free-software-even-more-important.html.

I don’t use the term “open source”, since it rejects these ethical ideas. (http://gnu.org/philosophy/open-source-misses-the-point.html.) Thus I would say that the AGPL requires servers running modified versions of the code to make the source for the running version available, under the AGPL, to their users.

The license of the modifications themselves is a different question, though related. The author of the modifications could release the modifications under the AGPL itself, or under any AGPL-compatible free software license. This includes free licenses which are pushovers, such as the Apache 2.0 license, the X11 license, and the modified BSD license (but not the original BSD license — see
http://gnu.org/licenses/license-list.html).

Once the modifications are released, Kuali will be able to get them and use them under whatever license they carry. If it is a pushover license, Kuali will be able to incorporate those modifications even into proprietary software. (That’s what makes them pushover licenses.)

However, if the modifications carry the AGPL, and Kuali incorporates them into a version of its software, Kuali will be bound by the AGPL. If it distributes that version, it will be required to do so under the AGPL. If it installs that version on a server, it will be required by the AGPL to make the whole of the source code for that version available to the users of that server.

To avoid these requirements, Kuali would have to limit itself to Kuali’s own code, others’ code released under pushover licenses, plus code for which it gets special permission. Thus, Kuali will not have as much of a special position as some might think.

See also http://gnu.org/philosophy/assigning-copyright.html
and http://gnu.org/philosophy/selling-exceptions.html.

Dr Richard Stallman
President, Free Software Foundation (gnu.org, fsf.org)
Internet Hall-of-Famer (internethalloffame.org)
MacArthur Fellow

I appreciate this clarification and Stallman’s participation here at e-Literate, and it is useful to understand the rationale and ethics behind AGPL. However, I disagree with the statement “Thus, Kuali will not have as much of a special position as some might think”. I do not think he is wrong, per se, but the combination of both the AGPL license and the Contributor’s License Agreement (CLA) in my view does ensure that KualiCo has a special position. In fact, that is the core of the Kuali 2.0 strategy, and their approach would not be possible without the AGPL usage.

Note: I have had several private conversations that have helped me clarify my thinking on this subject. Besides Michael with his comment to the blog, Patrick Masson and three other people have been very helpful. I also interviewed Chris Coppola from KualiCo to understand and confirm the points below. Any mistakes in this post, however, are my own.

It is important to understand two different methods of licensing at play – distributing code through an AGPL license and contributing code to KualiCo through a CLA (Kuali has a separate CLA for partner institutions and a Corporate CLA for companies).

  • Distribution – Anyone can download the Kuali 2.0 code from KualiCo and make modifications as desired. If the code is used privately, there is no requirement for distributing the modified code. If, however, a server runs the modified code, the reciprocal requirements of AGPL kick in and the code must be distributed (made available publicly) with the AGPL license or a pushover license. This situation is governed by the AGPL license.
  • Contribution – Anyone who modifies the Kuali 2.0 code and contributes it to KualiCo for inclusion into future releases of the main code grants a license with special permission to KualiCo to do with the code as they see fit. This situation is governed by the CLA and not AGPL.

I am assuming that the future KualiCo multi-tenant cloud-hosting code is not separable from the Kuali code. In other words, the Kuali code would need modifications to allow multi-tenancy.

A partner institution’s work is governed by the CLA. For a company, however, the choice on whether to contribute code is mutual between that company and KualiCo, in that both would have to agree to sign a CLA. A company may choose to do this to ensure that bug fixes or Kuali enhancements get into the main code and do not have to be reimplemented with each new release.

For any contributed code, KualiCo can still keep their multi-tenant code proprietary as their special sauce. For distributed code under AGPL that is not contributed under the CLA, the code would be publicly available and it would be up to KualiCo whether to incorporate any such code. If KualiCo incorporated any of this modified code into the main code base, they would have to share all of the modified code as well as their multi-tenant code. For this reason, KualiCo will likely never accept any code that is not under the CLA – they do not want to share their special sauce. Chris Coppola confirmed this assumption.

This setup strongly discourages any company from directly competing with KualiCo (vendor protection) and is indeed a special situation.


A Weird but True Fact about Textbook Publishers and OER

Wed, 2014-11-19 13:44

As I was perusing David Kernohan’s notes on Larry Lessig’s keynote at the OpenEd conference, one statement leapt out at me:

Could the department of labour require that new education content commissioned ($100m) be CC-BY? There was a clause (124) that suggested that the government should check that no commercial content should exist in these spaces. Was argued down. But we were “Not important” enough to be defeated.

It is absolutely true that textbook publishers do not currently see OER as a major threat. But here’s a weird thing that is also true:

These days, many textbook publishers like OER.

Let me start with the full disclosure. For 18 months, I was an employee of Cengage Learning, one of the Big Three textbook publishers in US higher education. Since then, I have consulted for textbook publishers on and off. Pearson is a current client, and there have been others. Make of that what you will in terms of my objectivity on this subject, but I have been in the belly of the beast. I have had many conversations with textbook publisher employees at all levels about OER, and many of them truly, honestly like it. They really, really like it. As a rule, they don’t understand it. But some of them actually see it as a way out of the hole that they’re in.

This is a relatively recent thing. Not so very long ago, you’d get one of two reactions from employees at these companies, depending on the role of the person you were talking to. Editors would tend to dismiss OER immediately because they had trouble imagining that content that didn’t go through their traditional editorial vetting process could be good (fairly similarly to the way academics would dismiss Wikipedia as something that couldn’t be trusted without traditional peer review). There were occasional exceptions to this, but always for very granular content. Videos, for example. Sometimes editors saw (or still see) OER as extra bits—or “ancillary materials,” in their vernacular—that could be bundled with their professionally edited product. That’s the most that editors typically thought about OER. At the executive level, every so often they would trot out OER on their competitive threat list, look at it for a bit, and decide that no, they don’t see evidence that they are losing significant sales to OER. Then they would forget about it for another six months or so. Publishers might occasionally fight OER at a local level, or even at a state level in places like Washington or California where there was legislation. But in those cases the fight was typically driven by the sales divisions that stood to lose commissions, and they were treated like any other local or regional competition (such as home-grown content development). It wasn’t viewed as anything more than that. For the most part, OER was just not something publishers thought a lot about.

That has changed in US higher education as it has become clear that textbook profits are collapsing as students find more ways to avoid buying the new books. The traditional textbook business is clearly not viable in the long term, at least in that market, at least at the scale and margins that the bigger publishers are used to making. So these companies want to get out of the textbook business. A few of them will say that publicly, but many of them say it among themselves. They don’t want to be out of business. They just want to be out of the textbook business. They want to sell software and services that are related to educational content, like homework platforms or course redesign consulting services. But they know that somebody has to make the core curricular content in order for them to “add value” around that content. As David Wiley puts it, content is infrastructure. Increasingly, textbook publishers are starting to think that maybe OER can be their infrastructure. This is why, for example, it makes sense for Wiley (the publisher, not the dude) to strike a licensing deal with OpenStax. They’re OK with not making a lot of money on the books as long as they can sell their WileyPlus software. Which, in turn, is why I think that Wiley (the dude, not the publisher) is not crazy at all when he predicts that “80% of all US general education courses will be using OER instead of publisher materials by 2018.” I won’t be as bold as he is to pick a number, but I think he could very well be directionally correct. I think many of the larger publishers hope to be winding down their traditional textbook businesses by 2018.

How particular OER advocates view this development will depend on why they are OER advocates. If your goal is to decrease curricular materials costs and increase the amount of open, collaboratively authored content, then the news is relatively good. Many more faculty and students are likely to be exposed to OER over the next four or five years. The textbook companies will still be looking to make their money, but they will have to do so by selling something else, and they will have to justify the value of that something else. It will no longer be the case that students buy closed textbooks because it never occurs to faculty that there is another viable option. On the other hand, if you are an OER advocate because you want big corporations to stay away from education, then Larry Lessig is right. You don’t currently register as a significant threat to them.

Whatever your own position might be on OER, George Siemens is right to argue that the significance of this coming shift demands more research. There’s a ton that we don’t know yet, even about basic attitudes of faculty, which is why the recent Babson survey that everybody has been talking about is so important. And there’s a funny thing about that survey which few people seem to have noticed:

It was sponsored by Pearson.


Better Ed Tech Conversations

Tue, 2014-11-18 09:44

This is another follow-up to the comments thread on my recent LMS rant. As usual, Kate Bowles has insightful and empathetic comments:

…From my experience inside two RFPs, I think faculty can often seem like pretty raucous bus passengers (especially at vendor demo time) but in reality the bus is driven by whoever’s managing the RFP, to a managed timetable, and it’s pretty tightly regulated. These constraints are really poorly understood and lead to exactly the predictable and conservative outcomes you observe. Nothing about the process favours rethinking what we do.

Take your focus on the gradebook, which I think is spot on: the key is how simply I can pull grades in, and from where. The LMS we use is the one with the truly awful, awful gradebook. Awful user experience, awful design issues, huge faculty development curve even to use it to a level of basic competence.

The result across the institution is hostility to making online submission of assignments the default setting, as overworked faculty look at this gradebook and think: nope.

So beyond the choosing practice, we have the implementation process. And nothing about this changes the mind of actual user colleagues. So then the institutional business owner group notices underuse of particular features—oh hey, like online submission of assignments—and they say to themselves: well, we need a policy to make them do it. Awfulness is now compounding.

But then a thing happens. Over the next few years, faculty surreptitiously develop a workable relationship with their new LMS, including its mandated must-use features. They learn how to do stuff, how to tweak and stretch and actually enjoy a bit. And that’s why when checklist time comes around again, they plead to have their favourite corner left alone. They only just figured it out, truly.

If institutions really want to do good things online, they need to fund their investigative and staff development processes properly and continuously, so that when faculty finally meet vendors, all can have a serious conversation together about purpose, before looking at fit.

This comment stimulated a fair bit of conversation, some of which continued on the comments thread of Jonathan Rees’ reply to my post.

The bottom line is that there is a vicious cycle. Faculty, who are already stretched to the limit (and beyond) with their workloads, are brought into a technology selection process that tends to be very tactical and time-constrained. Their response, understandably, tends to be to ask for things that will require less time from them (like an easier grade book, for example). When administrators see that they are not getting deep and broad adoption, they tend to mandate technology use. Which makes the problem worse rather than better because now faculty are forced to use features that take up more of their time without providing value, leaving them with less time to investigate alternatives that might actually add value. Round and round it goes. Nobody stops and asks, “Hey, do we really need this thing? What is it that we do need, and what is the most sensible way of meeting our needs?”

The only way out of this is cultural change. Faculty and administrators alike have to work together toward establishing some first principles around which problems the technology is supposed to help them solve and what a good solution would look like. This entails investing time and university money in faculty professional development, so that they can learn what their options are and what they can ask for. It entails rewarding faculty for their participation in the scholarship of teaching. And it entails faculty seeing educational technology selection and policy as something that is directly connected to their core concerns as both educational professionals and workers.

Sucky technology won’t fix itself, and vendors won’t offer better solutions if customers can’t define “better” for them. Nor will open source projects fare better. Educational technology only improves to the extent that educators develop a working consensus regarding what they want. The technology is a second-order effect of the community. And by “community,” I mean the group that collectively has input on technology adoption decisions. I mean the campus community.


Walled Gardens, #GamerGate, and Open Education

Sat, 2014-11-15 08:41

There were a number of interesting responses to my recent LMS rant. I’m going to address a couple of them in short posts, starting with this comment:

…The training wheels aren’t just for the faculty, they’re for the students, as well. The idea that the internet is a place for free and open discourse is nice, of course, but anyone who pays attention knows that to be a polite fiction. The public internet is a relatively safe place for straight, white, American males, but freedom of discourse is a privilege that only a small minority of our students (and faculty, for that matter) truly enjoy. If people didn’t understand that before, #notallmen/#yesallmen and GamerGate should certainly have driven that home.

As faculty and administrators we have an obligation–legal, and more importantly moral–to help our students understand the mechanisms, and unfortunately, often the consequences, of public discourse, including online communications. This is particularly true for the teenagers who make up the bulk of the undergrad population. Part of transformative teaching is giving people a safe space to become vulnerable and open to change. For those of us who still think of the “‘net” in terms of its early manifestations that were substantially open and inclusive research networks and BBS of largely like-minded people (someone else mentioned The Well, although The Well, of course, has always been a walled garden), open access seems tempting. But today’s internet is rarely that safe space for growth and learning. Just because students can put everything on the internet (YikYak, anyone?) doesn’t mean that they should.

In many, if not most, situations, a default stance of walled garden with easy-to-implement open access options for chosen and curated content makes a great deal of sense….

There are lots of legitimate reasons why students might not want to post on the public internet. A few years back, when I was helping my wife with a summer program that exposed ESL high schoolers to college and encouraged them to feel like it could be something for them, we had a couple of students who did not want to blog. We didn’t put them on the spot by asking why, but we suspected that their families were undocumented and that they were afraid of getting in trouble.

This certainly doesn’t mean that everybody has to use an LMS or lock everything behind a login, but it does mean that faculty teaching open courses need to think about how to accommodate students who won’t or can’t work on the open web. I don’t think this sort of accommodation in any way compromises the ethic of open education. To the contrary, ensuring access for everyone is part of what open education is all about.


APLU Panel: Effects of digital education trends on teaching faculty

Wed, 2014-11-12 18:47

Last week I spoke on a panel at the Association of Public and Land-grant Universities (APLU) annual conference. Below are the slides and abridged notes on the talk.

It is useful to look across many of the technology-driven trends affecting higher education and ask what they tell us about the faculty of the future. Distance education (DE) is of course not new – the first DE course, teaching shorthand by correspondence from London, dates to the mid-1800s. These distance, or correspondence, courses have expanded over time, and with the rise of the Internet, online education (today’s version of DE) has been accelerating over the past 20 years to become quite common in our higher education system.

For the first time, IPEDS has been collecting data on DE, starting with Fall 2012 data. We finally have some real data to show us what is happening state-by-state and by different measures. We’re talking numbers from 20 – 40+% of students taking at least one online course at public 4-year institutions. This is no longer just a fringe condition for our students – it’s hitting the mainstream.

[Slide 18: Hill APLU Slides Nov 2014]

[Slide 19: Hill APLU Slides Nov 2014]

We’re now in an era where online courses are becoming a standard part of our students’ educational experience. The student demographics and experience are changing. Much of this is driven by working adults, people coming back into college to get a degree, and what used to be called non-traditional students. What we know, of course, is that non-traditional students are now in the majority – we need new terminology.

The numbers we’re discussing with distance education really understate the change. There is no longer a simple dualism of traditional vs. online education. We’re seeing an emerging landscape of educational delivery models. What does this landscape look like? I have categorized the models not just in terms of modality—ranging from face-to-face to fully online—but also in terms of the method of course design. These two dimensions allow a richer understanding of the new landscape of educational delivery models. Within this landscape, the following models have emerged: ad hoc online courses and programs, fully online programs, School-as-a-Service or Online Service Providers, competency-based education, blended/hybrid courses and the flipped classroom, and MOOCs.

[Slide 23: Hill APLU Slides Nov 2014]

The vertical axis of course design gets at the core assumption that underlies much of the higher education system – the one-to-one relationship between a faculty member and a course. With many of the new models, we’re getting into multi-disciplinary faculty team designs and even team-based course designs that include faculty, subject matter experts, instructional designers, and multimedia experts. These models raise a lot of questions over ownership of content and the ability or permission to duplicate course sections.

These new models change the assumptions of who owns the course, and they lead to different processes for designing, delivering, and updating courses – processes that just don’t exist in traditional education. The implications of this approach are significant. These differences create a barrier that very few institutions can cross.

It is culturally difficult to cross the barrier into the area of team-based course design, and yet this is what many of the new technology-enabled models involve.

There is another case of seeing the Course as a Product. Previously we had three separate domains: content (typically provided by publishers), platforms (typically provided by LMS vendors), and course and curriculum design (typically provided by faculty and academic departments). What we’re seeing more recently is the breakdown, or merging, of these domains, with various products and services overlapping. Digital content includes both content and platform. Courseware, however, takes this to the next level and organizes the content and delivery around learning outcomes. In other words, Courseware actually overlaps into the domain of course and curriculum design.

[Slide 24: Hill APLU Slides Nov 2014]

From an organizational change perspective, however, we are just now starting to see how digital education is affecting the mainstream of higher education. We’re not just dealing with niche programs but also having to grapple with how these changes are affecting our institutions as a whole.

Another way of viewing this situation is that we had been used to people experimenting with digital education as a group quietly playing in the corner.

[Slide 21: Hill APLU Slides Nov 2014]

But these people are contained no more and are loose in the house, often causing chaos but also having fun.

[Slide 22: Hill APLU Slides Nov 2014]

These moves raise many questions that need to be addressed at a policy and faculty governance level.

  • How broadly are we applying these initiatives? There are big questions in figuring out which pilot programs to start, and whether and when to expand the new models beyond an isolated program.
  • Who owns the course when a team works on the design from start to finish?
  • Who needs to give permission to take a master course and duplicate into multiple shells, or course sections, taught by others?
  • How should faculty be credited for team-based course design and how should professional development opportunities adjust?

The late family therapist Virginia Satir created a model that can describe many of the changes arising from technology-based innovations. The model shows how social systems or cultures react to a transformative event through various stages (see Steve Smith’s post for more information).

The issue for our discussion is that a foreign element – the change or innovation – is the key event that triggers the move away from the late status quo. This change typically leads to resistance, and eventually to a period of chaos. During these two phases, the performance of the social system fluctuates to a large degree and actually is often worse than during the status quo phase, as the social system wrestles with how to integrate the change in a manner that produces benefits. The second key event is the transforming idea, when people determine how to integrate the innovation into the core of the social system. This integration phase leads to real performance improvements as well as less fluctuation. As the innovation reaches a critical mass, a new status quo develops.

[Chart: Satir change model stages]

It is not a given that the innovation actually takes hold; there are cases where the social system does not benefit from the innovation.

Some of the implications for faculty during these times of change:

  • With all of these changes, it’s not just that change will be difficult, but also that performance will fluctuate wildly and our outcomes will often get worse as the system adapts to an innovation.
  • The foreign element that dismantles the status quo is not necessarily the basis of the technology adoption that ultimately takes hold. The transforming idea is typically related to the foreign element, but it is not equivalent. Faculty ideally will have the time and opportunity to help “find” the transforming idea.
  • It would be a mistake to add accountability measures prematurely, when the system has not had a chance to figure out how to successfully improve outcomes.

Many of these digital education models also raise the question of whether faculty members need to be on campus, and if not, what support structures should be in place to help these distance faculty. What about professional development opportunities? Beyond that, how do you include distance faculty in governance processes?

Other changes, such as competency-based education, can move beyond seat time as a core design element. But how does this change faculty compensation and faculty workload?

We also see assumptions about faculty age being challenged. What I’m seeing lately is more and more evidence that the common assumption – that older faculty are more resistant to change than younger faculty – is incorrect, and this could have implications for ed tech initiatives struggling to get faculty buy-in.

In a recent post here at 20MM, I pointed out an interesting finding from a recent survey on the use of Open Educational Resources (OER) by the Babson Survey Research Group.

It has been hypothesized that it is the youngest faculty that are the most digitally aware, and have had the most exposure to and comfort in work with digital resources. Older faculty are sometimes assumed to be less willing to adopt the newest technology or digital resources. However, when the level of OER awareness is examined by age group, it is the oldest faculty (aged 55+) that have the greatest degree of awareness, while the youngest age group (under 35) trail behind. The youngest faculty do show the greatest proportion claiming to be “very aware” (6.7%), but have lower proportions reporting that they are “aware” or “somewhat aware.”

[Slide 30: Hill APLU Slides Nov 2014]

Combine this finding with one from another recent survey by Gallup, sponsored and reported by Inside Higher Education.

The doubt extends across age groups and most academic disciplines. Tenured faculty members may be the most critical of online courses, with an outright majority (52 percent) saying online courses produce results inferior to in-person courses, but that does not necessarily mean opposition rises steadily with age. Faculty respondents younger than 40, for example, are more critical of online courses (38 percent) than are those between the ages of 50 and 59 (34 percent).

These findings challenge the predominant assumption about older faculty being more resistant to change, but I would not consider it proof of the reverse. For now, I think the safest assumption is to stop assuming that age is a determining factor for ed tech and pedagogical changes from faculty members. What are the implications?

  • I have heard informal comments at schools about instituting change by waiting it out – letting the resistant older faculty retire over time and allowing innovative younger faculty to change the culture. This approach and assumption could be a mistake.
  • Everett Rogers has found that opinion leaders play a crucial role in the change process. There could be key advantages in actively reaching out to older faculty who might be established opinion leaders to include them directly in change initiatives.
  • We should not assume that older faculty would not want additional support and professional development. These ‘senior’ faculty members may need additional opportunities to learn new technologies, but you might be surprised to find they are more receptive to experimentation and participation in change initiatives.

I would not presume to be able to answer these questions for you, but I think it is important to highlight how technology changes will have faculty support and management implications that go well beyond niche programs and could change the faculty of the future. These innovations are having a broader effect.

Q (audience). Another dimension is that we’re seeing more need for interaction, seeking greater impact with students. We need more meaningful interactions between faculty and students. How do these changes apply to interaction?

A. One of the most encouraging aspects of the Inside Higher Ed faculty survey mentioned earlier is that the biggest factor defining quality in online learning (and hopefully f2f learning) is mentorship. The quality of an online course or program depends on the design and implementation. There are a lot of bad online courses with poor engagement. But at the same time there are many well-designed online courses with more interaction between faculty and student than is even possible in traditional face-to-face courses. For example, online tools can increase the ability to reach out to introverts and bring them in to group discussions. Well-designed learning analytics can act as a teacher’s eyes and ears to see more directly how different students are doing in the class. Moving forward, this is one of the biggest opportunities to enhance interaction. You raise a good point, though – it’s a challenge and cannot just be solved automatically.

If faculty or an institution fall back on traditional course design just being placed online,  there will be problems. Some of the best-designed courses, however, go beyond the official LMS tools and use social media, blogs, and various interactive tools to enhance creativity and interaction. Long and short – it’s not a matter of if a course or program is put online, it’s a matter of how the course is designed, the faculty role in actively creating opportunities for interaction, and adequate support for students and faculty.


Dammit, the LMS

Mon, 2014-11-10 16:07

Count De Monet: I have come on the most urgent of business. It is said that the people are revolting!

King Louis: You said it; they stink on ice.

- History of the World, Part I

Jonathan Rees discovered a post I wrote about the LMS in 2006 and, in doing so, discovered that I was writing about LMSs in 2006. I used to write about the future of the LMS quite a bit. I hardly ever do anymore, mostly because I find the topic to be equal parts boring and depressing. My views on the LMS haven’t really changed in the last decade. And sadly, LMSs themselves haven’t changed all that much either. At least not in the ways that I care about most. At first I thought the problem was that the technology wasn’t there to do what I wanted to do gracefully and cost-effectively. That excuse doesn’t exist anymore. Then, once the technology arrived as Web 2.0 blossomed[1], I thought the problem was that there was little competition in the LMS market and therefore little reason for LMS providers to change their platforms. That’s not true anymore either. And yet the pace of change is still glacial. I have reluctantly come to the conclusion that the LMS is the way it is because a critical mass of faculty want it to be that way.

Jonathan seems to think that the LMS will go away soon because faculty can find everything they need on the naked internet. I don’t see that happening any time soon. But the reasons why seem to get lost in the perennial conversations about how the LMS is going to die any day now. As near as I can remember, the LMS has been about to die any day now since at least 2004, which was roughly when I started paying attention to such things.

And so it comes to pass that, with great reluctance, I take up my pen once more to write about the most dismal of topics: the future of the LMS.

In an Ideal World…

I have been complaining about the LMS on the internet for almost as long as there have been people complaining about the LMS on the internet. Here’s something I wrote in 2004:

The analogy I often make with Blackboard is to a classroom where all the seats are bolted to the floor. How the room is arranged matters. If students are going to be having a class discussion, maybe you put the chairs in a circle. If they will be doing groupwork, maybe you put them in groups. If they are doing lab work, you put them around lab tables. A good room set-up can’t make a class succeed by itself, but a bad room set-up can make it fail. If there’s a loud fan drowning out conversation or if the room is so hot that it’s hard to concentrate, you will lose students.

I am a first- or, at most, second-generation internet LMS whiner. And that early post captures an important aspect of my philosophy on all things LMS and LMS-like. I believe that the spaces we create for fostering learning experiences matter, and that one size cannot fit all. Therefore, teachers and students should have a great deal of control in shaping their learning environments. To the degree that it is possible, technology platforms should get out of the way and avoid dictating choices. This is a really hard thing to do well in software, but it is a critical guiding principle for virtual learning environments. It’s also the thread that ran through the 2006 blog post that Jonathan quoted:

Teaching is about trust. If you want your students to take risks, you have to create an environment that is safe for them to do so. A student may be willing to share a poem or a controversial position or an off-the-wall hypothesis with a small group of trusted classmates that s/he wouldn’t feel comfortable sharing with the entire internet-browsing population and having indexed by Google. Forever. Are there times when encouraging students to take risks out in the open is good? Of course! But the tools shouldn’t dictate the choice. The teacher should decide. It’s about academic freedom to choose best practices. A good learning environment should enable faculty to password-protect course content but not require it. Further, it should not favor password-protection, encouraging teachers to explore the spectrum between public and private learning experiences.

Jonathan seems to think that I was supporting the notion of a “walled garden” in that post—probably because the title of the post is “In Defense of Walled Gardens”—but actually I was advocating for the opposite at the platform level. A platform that is a walled garden is one that forces particular settings related to access and privacy on faculty and students. Saying that faculty and students have a right to have private educational conversations when they think those are best for the situation is not at all the same as saying that it’s OK for the platform to dictate decisions about privacy (or, for that matter, that educational conversations should always be private). What I have been trying to say, there and everywhere, is that our technology needs to support and enable the choices that humans need to make for themselves regarding the best conditions for their personal educational needs and contexts.

Regarding the question of whether this end should be accomplished through an “LMS,” I am both agnostic and utilitarian on this front. I can imagine a platform we might call an “LMS” that would have quite a bit of educational value in a broad range of circumstances. It would bear no resemblance to the LMS of 2004 and only passing resemblance to the LMS of 2014. In the Twitfight between Jonathan and Instructure co-founder Brian Whitmer that followed Jonathan’s post, Brian talked about the idea of an LMS as a “hub” or an “aggregator.” These terms are compatible with what my former SUNY colleagues and I were imagining in 2005 and 2006, although we didn’t think of it in those terms. We thought of the heart of it as a “service broker” and referred to the whole thing in which it would live as a “Learning Management Operating System (LMOS).” You can think of the broker as the aggregator and the user-facing portions of the LMOS as the hub that organized the aggregated content and activity for ease-of-use purposes.

By the way, if you leave off requirements that such a thing should be “institution-hosted” and “enterprise,” the notion that an aggregator or hub would be useful in virtual learning environments is not remotely contentious. Jim Groom’s ds106 uses a WordPress-based aggregation system, the current generation of which was built by Alan Levine. Stephen Downes built gRSShopper ages ago. Both of these systems are RSS aggregators at heart. That second post of mine on the LMOS service broker, which gives a concrete example of how such a thing would work, mainly focuses on how much you could do by fully exploiting the rich metadata in an RSS feed and how much more you could do with it if you just added a couple of simple supplemental APIs. And maybe a couple of specialized record types (like iCal, for example) that could be syndicated in feeds similarly to RSS. While my colleagues and I were thinking about the LMOS as an institution-hosted enterprise application, there’s nothing about the service broker that requires it to be so. In fact, if you add some extra bits to support federation, it could just as easily form the backbone of a distributed network of personal learning environments. And that, in fact, is a pretty good description of the IMS standard in development called Caliper, which is why I am so interested in it. In my recent post about walled gardens from the series that Jonathan mentions in his own post, I tried to spell out how Caliper could enable either a better LMS, a better world without an LMS, or both simultaneously.
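To make the hub/aggregator idea a little more concrete, here is a minimal sketch of that kind of feed-driven broker, written in Python with the feedparser library. The class feed URLs and the due-date filter are hypothetical, purely for illustration; a real LMOS-style broker would layer richer metadata, permissions, and supplemental APIs on top of this kind of plumbing.

```python
# Minimal sketch of an RSS "service broker" for a class hub (illustrative only).
# Assumes the feedparser library is installed; the feed URLs below are hypothetical.
from datetime import datetime, timezone

import feedparser

# Feeds registered for a hypothetical class: student blogs, a shared calendar, etc.
CLASS_FEEDS = [
    "https://student-one.example.edu/blog/feed",
    "https://student-two.example.edu/blog/feed",
]


def collect_entries(feed_urls):
    """Pull every registered feed and flatten the entries into one list."""
    entries = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            published = entry.get("published_parsed")
            entries.append({
                "source": parsed.feed.get("title", url),
                "title": entry.get("title", "(untitled)"),
                "link": entry.get("link", ""),
                "published": (datetime(*published[:6], tzinfo=timezone.utc)
                              if published else None),
            })
    return entries


def activity_before(entries, due):
    """E.g. collect blog assignments posted before a due date, newest first."""
    on_time = [e for e in entries if e["published"] and e["published"] <= due]
    return sorted(on_time, key=lambda e: e["published"], reverse=True)


if __name__ == "__main__":
    due_date = datetime(2014, 11, 21, tzinfo=timezone.utc)
    for item in activity_before(collect_entries(CLASS_FEEDS), due_date):
        print(f"{item['published']:%Y-%m-%d}  {item['source']}: {item['title']}")
```

The point of the sketch is simply that most of the aggregating and correlating described here falls out of metadata that feeds already carry; the harder parts are the supplemental APIs, permissions, and federation.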

Setting aside all the technical gobbledygook, here’s what all this hub/aggregator/broker stuff amounts to:

  • Jonathan wants to “have it all,” by which he means full access to the wide world of resources on the internet. Great! Easily done.
  • The internet has lots of great stuff but is not organized to make that stuff easy to find or reduce the number of clicks it takes you to see a whole bunch of related stuff. So it would be nice to have the option of organizing the subset of stuff that I need to look at for a class in ways that are convenient for me and make minimal demands on me in terms of forcing me to go out and proactively look to see what has changed in the various places where there might be activity for my class.
  • Sometimes the stuff happening in one place on the internet is related to stuff happening in another place in ways that are relevant to my class. For example, if students are writing assignments on their blogs, I might want to see who has gotten the assignment done by the due date and collect all those assignments in one place that’s convenient for me to comment on them and grade them. It would be nice if I had options of not only aggregating but also integrating and correlating course-related information.
  • Sometimes I may need special capabilities for teaching my class that are not available on the general internet. For example, I might want to model molecules for chemistry or have a special image viewer with social commenting capabilities for art history. It would be nice if there were easy but relatively rich ways to add custom “apps” that can feed into my aggregator.
  • Sometimes it may be appropriate and useful (or even essential) to have private educational conversations and activities. It would be nice to be able to do that when it’s called for and still have access to whole public internet, including the option to hold classes mostly “in public.”

In an ideal world, every class would have its own unique mix of these capabilities based on what’s appropriate for the students, teacher, and subject. Not every class needs all of these capabilities. In fact, there are plenty of teachers who find that their classes don’t need any of them. They do just fine with WordPress. Or a wiki. Or a listserv. Or a rock and a stick. And these are precisely the folks who complain the loudest about what a useless waste the LMS is. It’s a little like an English professor walking into a chemistry lab and grousing, “Who the hell designed this place? You have these giant tables which are bolted to the floor in the middle of the room, making it impossible to have a decent class conversation. And for goodness sake, the tables have gas jets on them. Gas jets! Of all the pointless, useless, preposterous, dangerous things to have in a classroom…! And I don’t even want to know how much money the college wasted on installing this garbage.”

Of course, today’s LMS doesn’t look much like what I described in the bullet points above (although I do think the science lab analogy is a reasonable one even for today’s LMS). It’s fair to ask why that is the case. Some of us have been talking about this alternative vision for something that may or may not be called an “LMS” for a decade or longer now. And there are folks like Brian Whitmer at LMS companies (and LMS open source projects) saying that they buy into this idea. Why don’t our mainstream platforms look like this yet?

Why We Can’t Have Nice Things

Let’s imagine another world for a moment. Let’s imagine a world in which universities, not vendors, designed and built our online learning environments. Where students and teachers put their heads together to design the perfect system. What wonders would they come up with? What would they build?

Why, they would build an LMS. They did build an LMS. Blackboard started as a system designed by a professor and a TA at Cornell University. Desire2Learn (a.k.a. Brightspace) was designed by a student at the University of Waterloo. Moodle was the project of a graduate student at Curtin University in Australia. Sakai was built by a consortium of universities. WebCT was started at the University of British Columbia. ANGEL at Indiana University.

OK, those are all ancient history. Suppose that now, after the consumer web revolution, you were to get a couple of super-bright young graduate students who hate their school’s LMS to go on a road trip, talk to a whole bunch of teachers and students at different schools, and design a modern learning platform from the ground up using Agile and Lean methodologies. What would they build?

They would build Instructure Canvas. They did build Instructure Canvas. Presumably because that’s what the people they spoke to asked them to build.

In fairness, Canvas isn’t only a traditional LMS with a better user experience. It has a few twists. For example, from the very beginning, you could make your course 100% open in Canvas. If you want to teach out on the internet, undisguised and naked, making your Canvas course site just one class resource of many on the open web, you can. And we all know what happened because of that. Faculty everywhere began opening up their classes. It was sunlight and fresh air for everyone! No more walled gardens for us, no sirree Bob.

That is how it went, isn’t it?

Isn’t it?

I asked Brian Whitmer the percentage of courses on Canvas that faculty have made completely open. He didn’t have an exact number handy but said that it’s “really low.” Apparently, lots of faculty still like their gardens walled. Today, in 2014.

Canvas was a runaway hit from the start, but not because of its openness. Do you know what did it? Do you know what single set of capabilities, more than any other, catapulted it to the top of the charts, enabling it to surpass D2L in market share in just a few years? Do you know what the feature set was that had faculty from Albany to Anaheim falling to their knees, tears of joy streaming down their faces, and proclaiming with cracking, emotion-laden voices, “Finally, an LMS company that understands me!”?

It was Speed Grader. Ask anyone who has been involved in an LMS selection process, particularly during those first few years of Canvas sales.

Here’s the hard truth: While Jonathan wants to think of the LMS as “training wheels” for the internet (like AOL was), there is overwhelming evidence that lots of faculty want those training wheels. They ask for them. And when given a chance to take the training wheels off, they usually don’t.

Let’s take another example: roles and permissions.[2] Audrey Watters recently called out inflexible roles in educational software (including but not limited to LMSs) as problematic:

Ed-tech works like this: you sign up for a service and you’re flagged as either “teacher” or “student” or “admin.” Depending on that role, you have different “privileges” — that’s an important word, because it doesn’t simply imply what you can and cannot do with the software. It’s a nod to political power, social power as well.

Access privileges in software are designed to enforce particular ways of working together, which can be good if and only if everybody agrees that the ways of working together that the access privileges are enforcing are the best and most productive for the tasks at hand. There is no such thing as “everybody agrees” on something like the one single best way for people to work together in all classes. If the access privileges (a.k.a. “roles and permissions”) are not adaptable to the local needs, if there is no rational and self-evident reason for them to be structured the way they are, then they end up just reinforcing the crudest caricatures of classroom power relationships rather than facilitating productive cooperation. Therefore, standard roles and permissions often do more harm than good in educational software. I complained about this problem in 2005 when writing about the LMOS and again in 2006 when reviewing an open source LMS from the UK called Bodington. (At the time, Stephen Downes mocked me for thinking that this was an important aspect of LMS design to consider.)

Bodington had radically open permissions structures. You could attach any permissions (read, write, etc.) to any object in the system, making individual documents, discussions, folders, and what have you totally public, totally private, or somewhere in between. You could collect sets of permissions and define them as any roles that you wanted. Bodington also, by the way, had no notion of a “course.” It used a geographical metaphor. You would have a “building” or a “floor” that could house a course, a club, a working group, or anything else. In this way, it was significantly more flexible than any LMS I had seen before.
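To illustrate how different this is from hard-coded “teacher/student/admin” roles, here is a small, purely hypothetical Python sketch in the spirit of that design (not Bodington’s actual data model): permissions attach to individual objects, and roles are just locally defined bundles of permissions.

```python
# Hypothetical sketch of object-level access control with locally defined roles,
# in the spirit of (but not copied from) Bodington's permissions model.
from dataclasses import dataclass, field


@dataclass
class Role:
    """A role is just a named bundle of permissions, defined by each site as needed."""
    name: str
    permissions: set = field(default_factory=set)


@dataclass
class Resource:
    """Any object in the system (document, discussion, folder) carries its own ACL."""
    name: str
    acl: dict = field(default_factory=dict)               # role name -> permissions on this object
    public_permissions: set = field(default_factory=set)  # granted to everyone

    def allows(self, role, action):
        if action in self.public_permissions:
            return True
        return action in self.acl.get(role.name, set())


# A course (or club, or working group) defines whatever roles it wants -- nothing is hard-coded.
peer_reviewer = Role("peer_reviewer", {"read", "comment"})
seminar_guest = Role("seminar_guest", {"read"})

draft_poem = Resource("draft_poem.txt", acl={"peer_reviewer": {"read", "comment"}})
syllabus = Resource("syllabus.html", public_permissions={"read"})

print(draft_poem.allows(peer_reviewer, "comment"))  # True: shared with a trusted group
print(draft_poem.allows(seminar_guest, "read"))     # False: private at the object level
print(syllabus.allows(seminar_guest, "read"))       # True: fully public
```

The design point is that “walled” versus “open” becomes a per-object choice made by teachers and students, not a property baked into the platform.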

Of course, I’m sure you’ve all heard of Bodington, its enormous success in the market, and how influential it’s been on LMS design.[3]

What’s that? You haven’t?

Huh.

OK, but surely you’re aware of D2L’s major improvements in the same area. If you recall your LMS patent infringement history, then you’ll remember that roles and permissions were exactly the thing that Blackboard sued D2L over. The essence of the patent was this: Blackboard claimed to have invented a system where the same person could be given the role of “instructor” in one course site and the role of “student” in another. That’s it. And while Blackboard eventually lost that fight, there was a court ruling in the middle in which D2L was found to have infringed on the patent. In order to get around it, the company ripped out its predefined roles, making it possible (and necessary) for every school to create its own. As many as they want. Defined however they want. I remember Ken Chapman telling me that, even though it was the patent suit that pushed him to think this way, in the end he felt that the new way was a significant improvement over the old way of doing things.

And the rest, as you know, was history. The Chronicle and Inside Higher Ed wrote pieces describing the revolution on campuses as masses of faculty demanded flexible roles and permissions. Soon it caught the attention of Thomas Friedman, who proclaimed it to be more evidence that the world is indeed flat. And the LMS market has never been the same since.

That is what happened…right?

No?

Do you want to know why the LMS has barely evolved at all over the last twenty years and will probably barely evolve at all over the next twenty years? It’s not because the terrible, horrible, no-good LMS vendors are trying to suck the blood out of the poor universities. It’s not because the terrible, horrible, no-good university administrators are trying to build a panopticon in which they can oppress the faculty. The reason that we get more of the same year after year is that, year after year, when faculty are given an opportunity to ask for what they want, they ask for more of the same. It’s because every LMS review process I have ever seen goes something like this:

  • Professor John proclaims that he spent the last five years figuring out how to get his Blackboard course the way he likes it and, dammit, he is not moving to another LMS unless it works exactly the same as Blackboard.
  • Professor Jane says that she hates Blackboard, would never use it, runs her own Moodle installation for her classes off her computer at home, and will not move to another LMS unless it works exactly the same as Moodle.
  • Professor Pat doesn’t have strong opinions about any one LMS over the others except that there are three features in Canvas that must be in whatever platform they choose.
  • The selection committee declares that whatever LMS the university chooses next must work exactly like Blackboard and exactly like Moodle while having all the features of Canvas. Oh, and it must be “innovative” and “next-generation” too, because we’re sick of LMSs that all look and work the same.

Nobody comes to the table with an affirmative vision of what an online learning environment should look like or how it should work. Instead, they come with this year’s checklists, which are derived from last year’s checklists. Rather than coming with ideas of what they could have, they come with their fears of what they might lose. When LMS vendors or open source projects invent some innovative new feature, that feature gets added to next year’s checklist if it avoids disrupting the rest of the way the system works and mostly gets ignored or rejected to the degree that it enables (or, heaven forbid, requires) substantial change in current classroom practices.

This is why we can’t have nice things. I understand that it is more emotionally satisfying to rail against the Powers That Be and ascribe the things that we don’t like about ed tech to capitalism and authoritarianism and other nasty isms. And in some cases there is merit to those accusations. But if we were really honest with ourselves and looked at the details of what’s actually happening, we’d be forced to admit that the “ism” most immediately responsible for crappy, harmful ed tech products is consumerism. It’s what we ask for and how we ask for it. As with our democracy, we get the ed tech that we deserve.

In fairness to faculty, they don’t always get an opportunity to ask good questions. For example, at Colorado State University, where Jonathan works, the administrators, in their infinite wisdom, have decided that the best course of action is to choose their next LMS for their faculty by joining the Unizin coalition. But that is not the norm. In most places, faculty do have input but don’t insist on a process that leads to a more thoughtful discussion than compiling a long list of feature demands. If you want agitate for better ed tech, then changing the process by which your campus evaluates educational technology is the best place to start.

There. I did it. I wrote the damned “future of the LMS” post. And I did it mostly by copying and pasting from posts I wrote 10 years ago. I am now going to go pour myself a drink. Somebody please wake me again in another decade.

  1. Remember that term?
  2. Actually, it’s more of an extension of the previous example. Roles and permissions are what make a garden walled or not, which is another reason why they are so important.
  3. The Bodington project community migrated to Sakai, where some, but not all, of its innovations were transplanted to the Sakai platform.

The post Dammit, the LMS appeared first on e-Literate.

Kuali, Ariah and Apereo: Emerging ed tech debate on open source license types

Mon, 2014-11-10 08:13

With the annual Kuali conference – Kuali Days – starting today in Indianapolis, the big topic should be the August decision to move from a community source to a professional open source model, moving key development to a commercial entity, the newly-formed KualiCo. Now there will be two new announcements for the community to discuss, both centering on an esoteric license choice that could have far-reaching implications. Both the announcement of the Ariah Group as a new organization to support Kuali products and the statement from the Apereo Foundation center on the difference between Apache-style and AGPL licenses.

AGPL and Vendor Protection

Kuali previously licensed its open source code under the Educational Community License (ECL), a derivative of the standard Apache license that is designed to be permissive, allowing organizations to contribute modified open source code and to mix it with code under different licenses – including proprietary ones. This license is ‘permissive’ in the sense that the derived, remixed code may be licensed in different manners. It is generally thought that this license type gives the most flexibility for developing a community of contributors.

With the August pivot to Kuali 2.0 / KualiCo, the decision was made to fork and relicense any Kuali code that moves to KualiCo under the Affero General Public License (AGPL), a derivative of the original GPL license and a form of “copyleft” licensing that allows derivative works but requires the derivatives to use the same license. The idea is to ensure that open source code remains open: no commercial entity can create derivative works and license them under different terms.

The problem is when you have asymmetric AGPL licensing – where the copyright holder, such as KualiCo, does not have the same restrictions as all other users or developers of the code. Kuali has already announced that the multi-tenant cloud-hosting code to be developed by KualiCo will be proprietary and not open source. As the copyright holder, this is their right. Any school or Kuali vendor, however, that develops its own multi-tenant cloud-hosting code would have to relicense and share this code publicly as open source. If you want to understand how this choice might create vendor lock-in, even using an open source license, go read Charles Severance’s post. Update: fixed wording about sharing requirements.

To their credit, the Kuali Foundation and KualiCo have been very open about the intention of this license change, as described in an Inside Higher Ed article from a month ago.

[Barry] Walsh, who has been dubbed the “father of Kuali,” issued that proclamation after a back-and-forth with higher education consultant Phil Hill, who during an early morning session asked the Kuali leadership to clarify which parts of the company’s software would remain open source.

The short answer: everything — from the student information system to library management software — but the one thing institutions that download the software for free won’t be able to do is provide multi-tenant support (in other words, one instance of the software accessed by multiple groups of users, a feature large university systems may find attractive). To unlock that feature, colleges and universities need to pay KualiCo to host the software in the cloud, which is one way the company intends to make money.

“I’ll be very blunt here,” Walsh said. “It’s a commercial protection — that’s all it is.”

My post clarifying this interaction can be found here.
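
Since the “commercial protection” centers on multi-tenancy, it may help to be concrete about the term. The sketch below is a minimal, hypothetical Python illustration of the general idea – one running application serving many institutions, with every data access scoped to a tenant resolved from the request – and not a description of KualiCo’s actual design; all names and data in it are invented.

# Minimal multi-tenancy sketch (hypothetical): one deployment, many institutions.
TENANTS = {
    "registrar.example-a.edu": "tenant_a",
    "registrar.example-b.edu": "tenant_b",
}

# Stand-in for a shared data store partitioned by tenant.
COURSES = {
    "tenant_a": [{"code": "BIO 101", "title": "Introductory Biology"}],
    "tenant_b": [{"code": "HIS 210", "title": "Modern European History"}],
}

def resolve_tenant(hostname):
    """Map the hostname on an incoming request to a tenant identifier."""
    if hostname not in TENANTS:
        raise ValueError("Unknown tenant: " + hostname)
    return TENANTS[hostname]

def list_courses(hostname):
    """Return only the courses belonging to the requesting institution."""
    return COURSES[resolve_tenant(hostname)]

if __name__ == "__main__":
    # Each institution sees only its own records from the shared deployment.
    print(list_courses("registrar.example-a.edu"))

A single-tenant deployment, by contrast, runs a separate copy of the application (and its database) for each institution, which is the architecture the existing open source Kuali code assumes; the code that makes one shared deployment possible is the piece KualiCo plans to keep proprietary.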

Enter Ariah Group

On Friday of last week, the newly formed Ariah Group sent out an email announcing a new support option for Kuali products.

Strong interest has been expressed in continuing to provide open source support for Kuali® products therefore The Ariah Group, a new nonprofit entity, has been established for those who wish to continue and enhance that original open source vision.

We invite you to join us. The community is open to participants of all kinds with a focus on making open source more accessible. The goal will be to deliver quality open source products for Finance, Human Resources, Student, Library, Research, and Continuity Planning. The Ariah Group will collaborate to offer innovative new products to enhance the suite and support the community. All products will remain open source and use the Apache License, Version 2.0 (http://opensource.org/licenses/Apache-2.0) for new contributions. A number of institutions and commercial vendors will be announcing their support in the coming days and weeks.

To join or learn more visit The Ariah Group at http://ariahgroup.org/

Who is the Ariah Group? While details are scarce, this new organization seems to be built around 2 – 3 current and former Kuali vendors. As can be seen from their incomplete website, the details have not been worked out. Based on an email exchange I had with the organization, the group has identified an Executive Director.

The only vendor that I can confirm is part of Ariah is Moderas, the former Kuali Commercial Affiliate that was removed as an official vendor in September (left or was kicked out, depending on which side you believe; I’d say it was a mutual decision). I talked to Chris Thompson, co-founder of Moderas, who said that he understood the business rationale for the move to the Professional Open Source model but had a problem with the community aspects. The Kuali Foundation made a business decision to adopt AGPL and shift development to KualiCo, which makes sense in his telling, but the decision did not include real involvement from the Kuali community. Chris sees the situation as having changed Kuali from a collaborative environment to a competitive one, with KualiCo holding most of the cards.

This is the type of thinking behind the Ariah Group announcement – going back to the future. As described on the website:

We’ve been asked if we’re “branching the code” as we’ve discussed founding Ariah and our response has been that we feel that in fact the Kuali Foundation is branching with their new structure that includes a commercial entity who will set the development priorities and code standards that may deviate from the current Java technology stack in use. At Ariah our members will set the priorities as it was and as it should be in any truly open source environment. Java will always be our technology stack as we understand the burden that changing could cause a massive impact to our members.

This is an attempt to maintain some of the previous Kuali model including an Apache license (very close to ECL) and the same technology stack. But this approach raises two questions: How serious is this group (including whether they are planning to raise investment capital)? And why would Ariah expect to succeed when Kuali was unable to deliver on this model?

While this move by Ariah would have to be considered high risk, at least in its current form without funding secured or details worked out, it adds a new set of risks for Kuali itself as the Kuali Days conference begins. Kuali is in a critical period where the Foundation is seeking to get partner institutions to sign agreements to support KualiCo, contributing both cash and project staff. Based on input from multiple sources, only the University of Maryland has already signed a Memo of Understanding and agreed to this move for the Kuali Student project. Will the Ariah Group announcement cause schools to reconsider upcoming decisions, or even just delay them? Will the Kuali project functional councils be influenced by this announcement when deciding whether to move to the AGPL license?

I contacted Brad Wheeler, chair of the Kuali Foundation board, who added this comment:

Unlike many proprietary software models, Kuali was established with and continues with a software model that has always enabled institutional prerogative. Nothing new here.

Apereo Statement

In a separate but related announcement, this morning the Apereo Foundation (parent organization for Sakai, uPortal and other educational open source projects) released a statement on open source licenses.

Apereo supports the general ideas behind “copyleft” and believes that free software should stay free. However, Apereo is more interested in promoting widespread adoption and collaboration around its projects, and copyleft licenses can be a barrier to this. Specifically, the required reciprocity of copyleft licenses (like the GPL and AGPL) is viewed negatively by many potential adopters and contributors. Apereo also has a number of symbiotic relationships with other open source communities and projects with Apache-style licensing that would be hurt by copyleft licensing.

Apereo strongly encourages anyone who improves upon an Apereo project to contribute those changes back to the community. Contributing is mutually beneficial since the community gets a better project and the contributor does not have to maintain a diverging codebase. Apereo project governance bodies that feel licensing under the GPL or AGPL is necessary in their context can request permission from the Licensing & Intellectual Property Committee and the Apereo Foundation Board of Directors to select this copyleft approach to outbound licensing.

Apereo believes that the reciprocity in a copyleft open source software project should be symmetrical for everyone, specifically that all individuals and organizations involved should share any derivative works as defined in the selected outbound license. Apereo sponsored projects that adopt a copyleft approach to outbound licensing will be required to maintain fully symmetric reciprocity for all parties, including Apereo itself.

Those seeking further information on copyleft licensing, including potential pitfalls of asymmetric application, should read chapter 11 of the “Copyleft and the GNU General Public License: A Comprehensive Tutorial and Guide – Integrating the GPL into Business Practices”. This can be found at –

http://www.copyleft.org/guide/comprehensive-gpl-guidech12.html#x15-10400011.2

While Kuali would appear to be one of the triggers for this statement, there are other educational changes to consider, such as the Open edX change from AGPL to Apache (the reverse of Kuali’s move) for its XBlock code. From the edX blog post describing this change:

The XBlock API will only succeed to the extent that it is widely adopted, and we are committed to encouraging broad adoption by anyone interested in using it. For that reason, we’re changing the license on the XBlock API from AGPL to Apache 2.0.

The Apache license is permissive: it lets adopters and extenders do what they want with their changes. They can release them under a copyleft license like AGPL, or a permissive license like Apache, or even keep them closed-source.

Methods Matter

I’ll be interested to see any news or outcomes from the Kuali Days conference, and these two announcements should affect the license discussions there. What I have found interesting is that in most of my conversations with Kuali community people, even those who are disillusioned, they seem to think the KualiCo creation makes some sense. The real frustration and pushback has been on how decisions are made, how decisions have been communicated, and how the AGPL license choice will affect the community.

It’s too early to tell if the Ariah Group will have any significant impact on the Kuali community or not, but the issue of license types should have a growing importance in educational technology discussions moving forward.

The post Kuali, Ariah and Apereo: Emerging ed tech debate on open source license types appeared first on e-Literate.

A New e-Literate TV Series is in Production

Sat, 2014-11-08 13:09

We have been quiet about e-Literate TV lately, but it doesn’t mean that we have been idle. In fact, we’ve been hard at work filming our next series. In addition to working with our old friends at IN THE TELLING—naturally—we’re also collaborating with EDUCAUSE Learning Initiative (ELI) and getting funding and support from the Bill & Melinda Gates Foundation.

As we have discussed both here and elsewhere, we think the term “personalized learning” carries a lot of baggage with it that needs to be unpacked, as does the related concept of “adaptive learning.” The field in general is grappling with these broad concepts and approaches; an exploration of specific examples and implementations should sharpen our collective understanding of their promise and risks. The Gates Foundation has funded the development of an ETV series and given us a free editorial hand to explore the topics of personalization and adaptive learning.

The heart of the series will be a set of case studies at a wide range of different schools. Some of these schools will be Gates Foundation grantees, piloting and studying the use of “personalized learning” technologies or products, while others will not. (For more info about some of the pilots that Gates is funding in adaptive learning, including which schools are participating and the evaluation process the foundation has set up to ensure an objective review of the results, see Phil’s post about the ALMAP program.) Each ETV case study will start by looking at who the students are at a particular school, what they’re trying to accomplish for themselves, and what they need. In other words, who are the “persons” for whom we are trying to “personalize” learning? Hearing from the students directly through video interviews will be a central part of this series. We then take a look at how each school is using technology to support the needs of those particular students. We’re not trying to take a pro- or anti- position on any of these approaches. Rather, we’re trying to understand what personalization means to the people in these different contexts and how they are using tools to help them grapple with it.

Because many Americans have an idealized notion of what a personalized education means that may or may not resemble what “personalized learning” technologies deliver, we wanted to start the series by looking at that ideal. We filmed our first case study at Middlebury College, an elite New England liberal arts college that has an 8-to-1 student/teacher ratio. They do not use the term “personalized learning” at Middlebury, and some stakeholders there expressed the concern that technology, if introduced carelessly, could depersonalize education for Middlebury students. That said, we heard both students and teachers talk about ways in which even an eight-person seminar can support more powerful and personalized learning through the use of technology.

The second school on our list was Essex County College in Newark, New Jersey, where we are filming as of this writing (but will be finished by publication time). As you might imagine, the students, their needs, and their goals and aspirations are different, and the school’s approach to supporting them is different. Here again, we’ll be asking students and teachers for their stories and their views rather than imposing ours. We intend to visit a diverse handful of schools, with the goal of releasing a few finished case studies by the end of this year and more early next year.

With the help of the good folks at ELI, we will also be bringing together a panel at the ELI 2015 conference, consisting of the people from the various case studies to have a conversation about what we can learn about the idea of personalized learning by looking across these various contexts and experiences. This will be a “flipped” panel in the sense that the panelists (and the audience) will have had the opportunity to watch the case study videos before sitting down and talking to each other. The discussion will be filmed and included in the ETV series.

We’re pretty excited about the series and grateful, as always, for the support of our various partners. We’ll have more to say—and show—soon.

Stay tuned.

The post A New e-Literate TV Series is in Production appeared first on e-Literate.

Michael’s Keynote at Sakai Virtual Conference

Thu, 2014-11-06 18:51

Michael is giving the keynote address at tomorrow’s (Friday, Nov 7) Sakai Virtual Conference #SakaiVC14. Registration for the virtual conference is only $50, with more information and the registration link here. The schedule at a glance is available as a PDF here.

Michael’s keynote is at 10:00am EDT, titled “Re-Imagining Educational Content for a Digital World”. At 4:30pm, there will be a Q&A session based on the keynote.

The post Michael’s Keynote at Sakai Virtual Conference appeared first on e-Literate.

Flipped Classrooms: Annotated list of resources

Thu, 2014-10-30 17:03

I was recently asked by a colleague if I knew of a useful article or two on flipped classrooms – what they are, what they aren’t, and when they started. I was not looking for simple advocacy or rejection posts, but for explainers that allow other people to understand the subject and make up their own minds on the value of flipping.

While I had a few in mind, I put out a bleg on Google+ and got some great responses from Laura Gibbs, George Station, and Bryan Alexander. Once mentioned, Robert Talbert and Michelle Pacansky-Brock jumped into the conversation with additional material. It seemed like a useful exercise to compile the results and share a list here at e-Literate. This list is not meant to be comprehensive, but a top level of the articles that I have found useful.

There are other useful articles out there, but this list is a good starting place for balanced, non-hyped descriptions of the flipped classroom concept.[1] Let me know in the comments if there are others to include in this list.

  1. I did not include any directly commercial sites or articles in the list above. Michelle’s book was included as the introduction is freely available.

The post Flipped Classrooms: Annotated list of resources appeared first on e-Literate.

Significant Milestone: First national study of OER adoption

Tue, 2014-10-28 22:02

For years we have heard anecdotes and case studies about OER adoption based on one (or a handful) of institutions. There is much we think we know, but we have lacked hard data on the adoption process to back up assumptions that have significant policy and ed tech market implications.

The Babson Survey Research Group (BSRG) – the same one that administers the annual Survey of Online Learning – has released a survey of faculty titled “Opening the Curriculum” on the decision process and criteria for choosing teaching resources, with an emphasis on Open Educational Resources (OER). While their funding from the Hewlett Foundation and from Pearson[1] is for the current survey only, there are proposals to continue the Faculty OER surveys annually to get the same type of longitudinal study that they provide for online learning.

While there will be other posts (including my own) that will cover the immediate findings of this survey, I think it would be worthwhile to first provide context on why this is a significant milestone. Most of the following background and author findings are based on my interview with Dr. Jeff Seaman, one of the two lead researchers and authors of the report (the other is Dr. I. Elaine Allen).

Background

Three years ago, when the Survey of Online Learning was in its 9th iteration, the Hewlett Foundation approached BSRG about creating reports on OER adoption. Jeff did a meta study to see what data was already available and was disappointed with the results, so the group started to compile surveys and augment their own survey questionnaires.

The first effort, titled Growing the Curriculum and published two years ago, was a combination of results derived from four separate studies. The section on Chief Academic Officers was “taken from the data collected for the 2011 Babson Survey Research Group’s online learning report”. This report was really a test of the survey methodology and types of questions that needed to be asked.

The Hewlett Foundation is planning to develop an OER adoption dashboard, and there has been internal debate on what to measure and how. This process took some time, but once the groups came to agreement, the current survey was commissioned.

Pearson came in as a sponsor later in the process and provided additional resources to expand the scope of the survey, augment the questions to be asked, and help with infographics, marketing, and distribution.

A key issue on OER adoption is that the primary decision-makers are faculty members. Thus the current study is based on responses from teaching faculty “(defined as having at least one course code associated with their records)”.

A total of 2,144 faculty responded to the survey, representing the full range of higher education institutions (two-year, four-year, all Carnegie classifications, and public, private nonprofit, and for-profit) and the complete range of faculty (full- and part-time, tenured or not, and all disciplines). Almost three-quarters of the respondents report that they are full-time faculty members. Just under one-quarter teach online, and they are evenly split between male and female, and 28% have been teaching for 20 years or more.

Internal Lessons

I asked Jeff what his biggest lessons have been while analyzing the results. He replied that the key meta findings are the following:

  • We have had a lot of assumptions in place (e.g. faculty are primary decision-makers on OER adoption, cost is not a major driver of the decision), but we have not had hard data to back up these assumptions, at least beyond several case studies.
  • The decision process for faculty is not about OER – it is about selecting teaching resources. The focus of studies should be on this general resource selection process with OER as one of the key components rather than just asking about OER selection.

Thus the best way to view this report is not to look for earth-shaking findings or to be disappointed if there are no surprises, but rather to see data-backed answers on the teaching resource adoption process.

Most Surprising Finding

Given this context, I pressed Jeff to answer what findings may have surprised him based on prior assumptions. The two answers are encouraging from an OER perspective.

  • Once you present OER to faculty, there’s a real affinity and alignment of OER with faculty values. Jeff found more potential in OER than he had expected going in. Unlike other technology-based subjects of BSRG studies, there is almost no suspicion of OER. Everything else BSRG has measured has had strong minority views from faculty against the topic (online learning in particular), with incredible resentment detected. This resistance or resentment is just not there with OER. It is interesting that OER, with no organized marketing plan per se, faces no natural barriers in faculty perceptions.[2]
  • In the fundamental components of OER adoption – such as perceptions of quality, discoverability, and currency – there is no significant difference between publisher-provided content and OER.

Notes on Survey

This is a valuable survey, and I would hope that BSRG succeeds in getting funding (hint, hint, Hewlett and Pearson) to make this into an annual report with longitudinal data. Ideally the base demographics will increase in scope so that we get a better understanding of differences between institution types and program types. Currently the report separates 2-year and 4-year institutions, but it would be useful to compare 4-year public vs. private institutions and even program types (e.g. competency-based programs vs. gen ed vs. fully online traditional programs).

There is much to commend in the appendices of this report – with basic data tables, survey methodology, and even the questionnaire itself. Too many survey reports neglect to include these basics.

You can download the full report here or read below. I’ll have more in terms of analysis of the specific findings in an upcoming post or two.

Download (PDF, 1.89MB)

  1. Disclosure: Pearson is a client of MindWires Consulting – see this post for more details.
  2. It’s no bed of roses for OER, however, as the report documents issues such as lack of faculty awareness and the low priority placed on cost as a criterion in selecting teaching resources.

The post Significant Milestone: First national study of OER adoption appeared first on e-Literate.

LISTedTECH: New wiki site and great visualizations

Thu, 2014-10-23 18:06

Last year I wrote about a relatively new site offering very interesting data and visualizations in the ed tech world. LISTedTECH was created by Justin Menard, who is a Business Intelligence Senior Analyst at the University of Ottawa. First of all, the site is broader in scope than just the LMS – there is a rich source of data & visualizations on MOOCs, university rankings, and IPEDS data. Most of the visualizations are presented in Tableau and are therefore interactive, allowing the user to filter data, zoom in on geographic data, etc. Since e-Literate is not set up for full-page visualizations, I have included screenshots below, but clicking on an image will take you to the appropriate LISTedTECH page.

[Screenshot: Top Learning Management System (LMS) by State or Province – LISTedTECH]

Justin created the LISTedTECH site out of frustration with trying to get valuable market information while working on an ERP project at the University of Ottawa. After taking a year-long travel sabbatical, he added a programmer to his team this past summer. Justin does not have immediate plans to monetize the site beyond hoping to pay for server time.

LISTedTECH is a wiki. Anyone can sign up and contribute data on institutions and products. Justin gets notifications of data added and verifies data.[1] One of the key benefits of a wiki model is the ability to get user-defined data and even user ideas on useful data to include. Another benefit is the ability to scale. One of the key downsides of the wiki model is the need to clean out bad data, which can grow over time. Another downside is the selective sampling in data coverage.

LISTedTECH puts a priority on North America, and currently all ~140 Canadian schools are included. Justin and his team are currently working to get complete, or near complete, US coverage. The map below could be titled If Ed Tech Were a Game of Risk, Moodle Wins.

[Screenshot: World Map of Learning Management Systems, 08/2013 – LISTedTECH]

As of today the site includes:

  • Companies (511)
  • Products (1,326)
  • Institutions (27,595)
  • Listed products used by institutions (over 18,000)
  • Product Categories (36)
  • Countries (235)
  • World Rankings (9)

The biggest change since I wrote last year is that LISTedTech has moved to a new site.

We have (finally) launched our new website wiki.listedtech.com. As you might remember, our old Drupal based site had been wikified to try and make contributions easier and try to build a community around HigherEd tech data. Even if it was possible to edit and share information, it was difficult to get all the steps down, and in the right order.

With the new version of the site, we knew that we needed a better tool. The obvious choice was to use the Mediawiki platform. To attain our goal of better data, we souped it up with semantic extensions. This helps by structuring the data on the pages so that they can be queried like a database.
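
To make “queried like a database” concrete: Semantic MediaWiki typically exposes its structured data through an “ask” action on the standard MediaWiki API. The Python sketch below shows what such a query could look like; the endpoint path, category, and property names are assumptions for illustration, not the actual LISTedTECH schema.

import requests

# Hypothetical Semantic MediaWiki "ask" query against the wiki's API.
API_URL = "https://wiki.listedtech.com/api.php"  # assumed endpoint path

# Condition(s) first, then printouts, separated by pipes; names here are invented.
ASK_QUERY = "[[Category:Institution]][[Product::Canvas]]|?Country|limit=10"

response = requests.get(
    API_URL,
    params={"action": "ask", "query": ASK_QUERY, "format": "json"},
    timeout=30,
)
response.raise_for_status()

# Results come back keyed by page title, with requested properties under "printouts".
results = response.json().get("query", {}).get("results", {})
for page_title, page_data in results.items():
    countries = page_data.get("printouts", {}).get("Country", [])
    print(page_title, countries)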

Another example shows the history of commercial MOOCs based on the number of partner institutions:

[Screenshot: MOOCs, a short history – LISTedTECH]

I’m a sucker for great visualizations, and there is a lot to see at the site. One example is on blended learning and student retention, using official IPEDS data in the US. “Blended” in this case means that the institution offers a mix of face-to-face and online courses.

[Screenshot: Blended Learning and Student Retention – LISTedTECH]

This is interesting – for 4-year institutions there is a negative correlation between student retention and the percentage of courses available online, while for 2-year institutions the story is very different. That data invites additional questions and exploration.
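
For readers who want to poke at this relationship themselves, here is a minimal sketch of the kind of check involved: correlating retention with the share of courses offered online, separately by sector. The numbers and column names below are invented stand-ins for IPEDS-derived fields, not LISTedTECH’s data.

import pandas as pd

# Toy data; substitute real IPEDS-derived values to reproduce the analysis.
df = pd.DataFrame(
    {
        "sector": ["4-year", "4-year", "4-year", "2-year", "2-year", "2-year"],
        "pct_courses_online": [10, 35, 60, 10, 35, 60],
        "retention_rate": [88, 80, 72, 55, 58, 61],
    }
)

# Pearson correlation between online-course share and retention, per sector.
for sector, group in df.groupby("sector"):
    r = group["pct_courses_online"].corr(group["retention_rate"])
    print(f"{sector}: r = {r:.2f}")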

All of the data for the website is available for download as XML files.

  1. He asks for people to include a link to source data to help in the QA process.

The post LISTedTECH: New wiki site and great visualizations appeared first on e-Literate.

What Faculty Should Know About Competency-Based Education

Thu, 2014-10-23 16:26

I loved the title of Phil’s recent post, “Competency-Based Education: Not just a drinking game” because it acknowledges that, whatever else CBE is, it is also a drinking game. The hype is huge and still growing. I have been thinking a lot lately about Gartner’s hype cycle and how it plays out in academia. In a way, it was really at the heart of the Duke keynote speech I posted the other day. There are a lot of factors that amplify it and make it more pernicious in the academic ecosystem than it is elsewhere. But it’s a tough beast to tackle.

I got some good responses to the “what faculty should know…” format that I used for a post about adaptive learning, so I’m going to try it again here in somewhat modified form. Let me know what you think of the format.

What Competency-Based Education (CBE) Is

The basic idea behind CBE is that what a student learns to pass a course (or program) should be fixed while the time it takes to do so should be variable. In our current education system, a student might have 15 weeks to master the material covered in a course and will receive a grade based on how much of the material she has mastered. CBE takes the position that the student should be able to take either more or less time than 15 weeks but should only be certified for completing the course when she has mastered all the elements. When a student registers for a course, she is in it until she passes the assessments for the course. If she comes in already knowing a lot and can pass the assessments in a few weeks—or even immediately—then she gets out quickly. If she is not ready to pass the assessments at the end of 15 weeks, she keeps working until she is ready.

Unfortunately, the term “CBE” is used very loosely and may have different connotations in different contexts. First, when the term “competency-based education” was coined, it was positioned explicitly against similar approaches (like “outcomes-based education” and “mastery learning”) in that CBE was intended to be vocationally oriented. In other words, one of the things that CBE was intended to accomplish by specifying competencies was to ensure that what the students are learning is relevant to job skills. CBE has lost that explicit meaning in popular usage, but a vocational focus is often (but not always) present in the subtext.

Also, competencies increasingly feature prominently even in classes that do not have variable time. This is particularly true with commercial courseware. Vendors are grouping machine-graded assessment questions into “learning objectives” or competencies that are explicitly tied to instructional readings, videos, and so on. Rather than reporting that the student got quiz questions 23 through 26 wrong, the software is reporting that the student is not able to answer questions on calculating angular momentum, which was covered in the second section of Chapter 3. Building on this helpful but relatively modest innovation, courseware products are providing increasingly sophisticated support to both students and teachers on areas of the course (or “competencies”) where students are getting stuck. This really isn’t CBE in the way the term was originally intended but is often lumped together with CBE.
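
To illustrate the reporting shift described above, here is a minimal, hypothetical sketch of rolling item-level quiz results up to the learning objectives they are tagged with; the objectives, item numbers, and the 70% threshold are invented for the example.

from collections import defaultdict

# Hypothetical mapping from quiz item numbers to the objective each item assesses.
ITEM_TO_OBJECTIVE = {
    23: "Calculating angular momentum",
    24: "Calculating angular momentum",
    25: "Calculating angular momentum",
    26: "Calculating angular momentum",
    27: "Conservation of energy",
}

# One student's attempt: (item number, answered correctly).
responses = [(23, False), (24, False), (25, True), (26, False), (27, True)]

tallies = defaultdict(lambda: {"correct": 0, "attempted": 0})
for item, correct in responses:
    objective = ITEM_TO_OBJECTIVE[item]
    tallies[objective]["attempted"] += 1
    tallies[objective]["correct"] += int(correct)

# Report by objective instead of by question number.
for objective, t in tallies.items():
    share = t["correct"] / t["attempted"]
    status = "needs review" if share < 0.7 else "on track"
    print(f"{objective}: {t['correct']}/{t['attempted']} correct ({status})")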

What It’s Good For

Because the term “CBE” is used for very different approaches, it is important to distinguish among them in terms of their upsides and downsides. Applying machine-driven competency-based assessments within a standard, time-based class is useful and helpful largely to the extent that machine-based assessment is useful and helpful. If you already are comfortable using software to quiz your students, then you will probably find competency-based assessments to be an improvement in that they provide improved feedback. This is especially true for skills that build on each other. If a student doesn’t master the first skill in such a sequence, she is unlikely to master the later skills that depend on it. A competency-based assessment system can help identify this sort of problem early so that the student doesn’t suffer increasing frustration and failure throughout the course just because she needs a little more help on one concept.

Thinking about your (time-based) course in terms of competencies, whether they are assessed by a machine or by a teacher, is also useful for helping you as a teacher shift your thinking from what it is you want to teach to what it is you want your students to learn—and how you will know that they have learned it. Part of defining a competency is defining how you will know when a student has achieved it. Thinking about your courses this way can not only help you design your courses better but also help when it is time to talk to your colleagues about program-level or even college-level goals. In fact, many faculty encounter the word “competency” for the first time in their professional context when discussing core competencies on a college-wide basis as part of the general education program. If you have participated in these sorts of conversations, then you may well have found them simultaneously enlightening and incredibly frustrating. Defining competencies well is hard, and defining them so that they make sense across disciplines is even harder. But if faculty are engaged in thinking about competencies on a regular basis, both as individual teachers and as part of a college or disciplinary community, then they will begin to help each other articulate and develop their competencies around working with competencies.

Assuming that the competencies and assessments are defined well, moving from a traditional time- or term-based structure to full go-at-your-own-pace CBE can help students by enabling those who are especially bright or who come in with prior knowledge and experience to advance quickly, while giving students who just need a little more time the chance they need to succeed. Both of these aspects are particularly important for non-traditional students[1] who come into college with life experience but also need help making school work with their work and life schedules—and who may very well have dropped out of college previously because they got stuck on a concept here or there and never got help to get past it.

What To Watch Out For

All that said, there are considerable risks attached to CBE. As with just about anything else in educational technology, one of the biggest has more to do with the tendency of technology products to get hyped than it does with the underlying ideas or technologies themselves. Schools and vendors alike, seeing a huge potential market of non-traditional students, are increasingly talking about CBE as a silver bullet. It is touted as more “personalized” than traditional courses in the sense that students can go at their own pace, and it “scales”—if the assessments are largely machine graded. This last piece is where CBE goes off the tracks pretty quickly. Along with the drive to serve a large number of students at lower cost comes a strong temptation to dumb down competencies to the point where they can be entirely machine graded. Again, while this probably doesn’t do much damage to traditional courses or programs that are already machine graded, it can do considerable damage in cases where the courses are not. And because CBE programs are typically aimed at working-class students who can’t afford to go full-time, CBE runs the risk of making what is already a weaker educational experience in many cases (relative to expensive liberal arts colleges with small class sizes) worse by watering down standards for success and reducing the human support, all while advertising itself as “personalized.”

A second potential problem is that, even if the competencies are not watered down, creating a go-at-your-own-pace program makes social learning more of a challenge. If students are not all working on the same material at the same time, then they may have more difficulty finding peers they can work with. This is by no means an insurmountable design problem, but it is one that some existing CBE programs have failed to surmount.

Third, there are profound labor implications for moving from a time-based structure to CBE, starting with the fact that most contracts are negotiated around the number of credit hours faculty are expected to teach in a term. Negotiating a move from a time-based program to full CBE is far from straightforward.

Recommendations

CBE offers the potential to do a lot of good where it is implemented well and a lot of harm where it is implemented poorly. There are steps faculty can take to increase the chances of a positive outcome.

First, experiment with machine-graded competency-based programs in your traditional, time-based classes if and only if you are persuaded that the machine is capable of assessing the students well at what it is supposed to assess. My advice here is very similar to the advice I gave regarding adaptive learning, which is to think about the software as a tutor and to use, supervise, and assess its effectiveness accordingly. If you think that a particular software product can provide your students with accurate guidance regarding which concepts they are getting and which ones they are not getting within a meaningful subset of what you are teaching, then it may be worth trying. But there is nothing magical about the word “competency.” If you don’t think that software can assess the skills that you want to assess, then competency-based software will be just as bad at it.

Second, try to spend a little time as you prepare for a new semester to think about your course in terms of competencies and refine your design at least a bit with each iteration. What are you trying to get students to know? What skills do you want them to have? How will you know if they have succeeded in acquiring that knowledge and those skills? How are your assessments connected to your goals? How are your lectures and course materials connected to them? To what degree are the connections clear and explicit?

Third, familiarize yourself with CBE efforts that are relevant to your institution and discipline, particularly if they are driven by organizations that you respect. For example, the Association of American Colleges and Universities (AAC&U) has created a list of competencies called the Degree Qualifications Profile (DQP) and a set of assessment rubrics called Valid Assessment of Learning in Undergraduate Education (VALUE). While these programs are consistent with and supportive of designing a CBE program, they focus on defining competencies students should receive from a high-quality liberal arts education and emphasize the use of rubrics applied by expert faculty for assessment over machine grading.

And finally, if your institution moves in the direction of developing a full CBE program, ask the hard questions, particularly about quality. What are the standards for competencies and assessments? Are they intended to be the same as for the school’s traditional time-based program? If so, then how will we know that they have succeeded in upholding those standards? If not, then what will the standards be, and why are they appropriate for the students who will be served by the program?

 

  1. The term “non-traditional” is really out of date, since at many schools students who are working full-time while going to school are the rule rather than the exception. However, since I don’t know of a better term, I’m sticking with non-traditional for now.

The post What Faculty Should Know About Competency-Based Education appeared first on e-Literate.

Keynote: The Year After the Year of the MOOC

Wed, 2014-10-22 15:23

Here’s a talk I gave recently at the CIT event in Duke. In addition to being very nice and gracious, the Duke folks impressed me with how many faculty they have who are not only committed to improving their teaching practices but dedicating significant time to it as a core professional activity.

Anyway, for what it’s worth, here’s the talk:

Click here to view the embedded video.

The post Keynote: The Year After the Year of the MOOC appeared first on e-Literate.

State of the US Higher Education LMS Market: 2014 Edition

Wed, 2014-10-22 06:18

I shared the most recent graphic summarizing the LMS market in November 2013, and thanks to new data sources it’s time for an update. As with all previous versions, the 2005 – 2009 data points are based on the Campus Computing Project, and therefore reflect US adoption at non-profit institutions. This set of longitudinal data provides an anchor for the summary.

The primary data source for 2013 – 2014 is Edutechnica, which not only does a more direct measurement of a larger number of schools (viewing all schools in the IPEDS database with more than 800 FTE enrollments), but also allows scaling based on enrollment per institution. This means that the latter years now more accurately represent how many students use a particular LMS.
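
To see why enrollment weighting matters, here is a minimal sketch contrasting market share by institution count with share weighted by FTE enrollment; the institutions and figures below are invented for illustration, not Edutechnica’s data.

# Invented example data: each record is one institution.
institutions = [
    {"lms": "Blackboard Learn", "fte": 30000},
    {"lms": "Blackboard Learn", "fte": 4000},
    {"lms": "Canvas", "fte": 25000},
    {"lms": "Moodle", "fte": 2000},
]

total_schools = len(institutions)
total_fte = sum(inst["fte"] for inst in institutions)

# Aggregate both ways: by count of institutions and by enrollment.
by_lms = {}
for inst in institutions:
    entry = by_lms.setdefault(inst["lms"], {"schools": 0, "fte": 0})
    entry["schools"] += 1
    entry["fte"] += inst["fte"]

for lms, entry in by_lms.items():
    print(
        f"{lms}: {entry['schools'] / total_schools:.0%} of institutions, "
        f"{entry['fte'] / total_fte:.0%} of enrollments"
    )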

A few items to note:

  • Despite the addition of the new data source and its inclusion of enrollment measures, the basic shape and story of the graphic have not changed. My confidence has gone up in the past few years, but the heuristics were not far off.
  • The 2013 inclusion of Anglosphere (US, UK, Canada, Australia) numbers caused more confusion and questions than clarity, so this version goes back to being US only.
  • The Desire2Learn branding has been changed to Brightspace by D2L.
  • The eCollege branding has been changed to Pearson LearningStudio.
  • There is a growing area of “Alternative Learning Platforms” that includes University of Phoenix, Coursera, edX and OpenEdX, 2U, Helix and Motivis (the newly commercialized learning platform from College for America).
  • While the data is more solid than 2012 and prior years, keep in mind that you should treat the graphic as telling a story of the market rather than being a chart of exact data.

[Graphic: LMS Market Share, 2014-10-21]

Some observations of the new data taken from the post on Edutechnica from September:

  • Blackboard’s BbLearn and ANGEL continue to lose market share in the US[1] – Using the 2013 to 2014 tables (> 2000 enrollments), BbLearn has dropped from 848 to 817 institutions and ANGEL has dropped from 162 to 123. Using the revised methodology, Blackboard market share for > 800 enrollments now stands at 33.5% of institutions and 43.5% of total enrollments.
  • Moodle, D2L, and Sakai show essentially no changes in the US – Using the 2013 to 2014 tables (> 2000 enrollments), D2L has added only 2 schools, Moodle none, and Sakai 2 schools.
  • Canvas is the fastest-growing LMS and has overtaken D2L – Using the 2013 to 2014 tables (> 2000 enrollments), Canvas grew ~40% in one year (from 166 to 232 institutions). For the first time, Canvas appears to have a larger US market share than D2L (13.7% to 12.2% of total enrollments using the table above).

The post State of the US Higher Education LMS Market: 2014 Edition appeared first on e-Literate.

Competency-Based Education: Not just a drinking game

Thu, 2014-10-16 13:48

Ray Henderson captured the changing trend of the past two EDUCAUSE conferences quite well.

The #Edu14 drinking game: sure inebriation in 13 from vendor claims of "mooc" "cloud" or "disrupting edu". In 2014: "competency based."

— Ray Henderson (@readmeray) October 3, 2014


Two years ago, the best-known competency-based education (CBE) initiatives were at Western Governors University (WGU), Southern New Hampshire University’s College for America (CfA), and Excelsior College. In an article this past summer describing the US Department of Education’s focus on CBE, Paul Fain noted [emphasis added]:

The U.S. Department of Education will give its blessing — and grant federal aid eligibility — to colleges’ experimentation with competency-based education and prior learning assessment.

On Tuesday the department announced a new round of its “experimental sites” initiative, which waives certain rules for federal aid programs so institutions can test new approaches without losing their aid eligibility. Many colleges may ramp up their experiments with competency-based programs — and sources said more than 350 institutions currently offer or are seeking to create such degree tracks.

One issue I’ve noticed, however, is that many schools are looking to duplicate the solution of CBE without understanding the problems and context that allowed WGU, CfA and Excelsior to thrive. Looking at the three main CBE initiatives, there are at least three lessons that have been significant factors in their success to date – lessons that are readily available but perhaps not well understood.

Lesson 1: CBE as means to address specific student population

None of the main CBE programs were designed to target a general student population or to offer just another modality. In all three cases, their first consideration was how to provide education to working adults looking to finish a degree, change a career, or advance a career.

As described by WGU’s website:

Western Governors University is specifically designed to help adult learners like you fit college into your already busy lives. Returning to college is a challenge. Yet, tens of thousands of working adults are doing it. There’s no reason you can’t be one of them.

As described by College for America’s website:

We are a nonprofit college that partners with employers nationwide to make a college degree possible for their employees. We help employers develop their workforce by offering frontline workers a competency-based degree program built on project-based learning that is uniquely applicable in the workplace, flexibly scheduled to fit in busy lives, and extraordinarily affordable.

As described by Excelsior’s website:

Excelsior’s famously-flexible online degree programs are created for working adults.

SNHU’s ubiquitous president Paul LeBlanc described the challenge of not understanding the target for CBE at last year’s WCET conference (from my conference notes):

One of the things that muddies our own internal debates and policy maker debates is that we say things about higher education as if it’s monolithic. We say that ‘competency-based education is going to ruin the experience of 18-year-olds’. Well, that’s a different higher ed than the people we serve in College for America. There are multiple types of higher ed with different missions.

The one CfA is interested in is the world of working adults – this represents the majority of college students today. Working adults need credentials that are useful in the workplace, they need low cost, they need short completion times, and they need convenience. Education has to compete with work and family requirements.

CfA targets the bottom 10% of wage earners in large companies – these are the people not earning sustainable wages. They need stability and advancement opportunities.

CfA has two primary customers – the students and the employers who want to develop their people. In fact, CfA does not have a retail offering, and they directly work with employers to help employees get their degrees.

Lesson 2: Separate organizations to run CBE

In all three cases the use of CBE to serve working adults necessitated entirely new organizations that were designed to provide the proper support and structure based on this model.

WGU was conceived as a separate non-profit organization in 1995 and incorporated in 1997 specifically to design and enable the new programs. College for America was spun out of SNHU in 2012. Excelsior College started 40 years ago as Regents College, focused on both mastery and competency-based programs. The CBE nursing program was founded in 1975.

CBE has some unique characteristics that do not fit well within traditional educational organizations. From a CBE primer I wrote in 2012 and updated in 2013:

I would add that the integration of self-paced programs not tied to credit hours into existing higher education models presents an enormous challenge. Colleges and universities have built up large bureaucracies – expensive administrative systems, complex business processes, large departments – to address financial aid and accreditation compliance, all based on fixed academic terms and credit hours. Registration systems, and even state funding models, are tied to the fixed semester, quarter or academic year – largely defined by numbers of credit hours.

It is not an easy task to allow transfer credits coming from a self-paced program, especially if a student is taking both CBE courses and credit-hour courses at the same time. The systems and processes often cannot handle this dichotomy.

Beyond the self-paced student-centered scheduling issues, there are also different mentoring roles required to support students, and these roles are not typically understood or available at traditional institutions. Consider the mentoring roles at WGU as described in EvoLLLutions:

Faculty mentors (each of whom have at least a master’s degree) are assigned a student caseload and their full-time role is to provide student support. They may use a variety of communication methods that, depending on student preferences, include calling — but also Skype, email and even snail mail for encouraging notes.

Course mentors are the second type of WGU mentor. These full-time faculty members hold their Ph.D. and serve as content experts. They are also assigned a student caseload. Responsibilities of course mentors include creating a social community among students currently enrolled in their courses and teaching webinars focused specifically on competencies students typically find difficult. Finally, they support students one-on-one based on requests from the student or referral from the student’s faculty mentor.

Lesson 3: Competency is not the same as mastery

John Ebersole, the president of Excelsior College, called out the distinction between competency and mastery in an essay this summer at Inside Higher Ed.

On close examination, one might ask if competency-based education (or CBE) programs are really about “competency,” or are they concerned with something else? Perhaps what is being measured is more closely akin to subject matter “mastery.” The latter can be determined in a relatively straightforward manner, using various forms of examinations, projects and other forms of assessment.

However, an understanding of theories, concepts and terms tells us little about an individual’s ability to apply any of these in practice, let alone doing so with the skill and proficiency which would be associated with competence.

Deeming someone competent, in a professional sense, is a task that few competency-based education programs address. While doing an excellent job, in many instances, of determining mastery of a body of knowledge, most fall short in the assessment of true competence.

Ebersole goes on to describe the need for measuring true competency, and to make an observation that I share about programs confusing the two concepts.

A focus on learning independent of time, while welcome, is not the only consideration here. We also need to be more precise in our terminology. The appropriateness of the word competency is questioned when there is no assessment of the use of the learning achieved through a CBE program. Western Governors University, Southern New Hampshire, and Excelsior offer programs that do assess true competency.

Unfortunately, the vast majority of the newly created CBE programs do not. This conflation of terms needs to be addressed if employers are to see value in what is being sold. A determination of “competency” that does not include an assessment of one’s ability to apply theories and concepts cannot be considered a “competency-based” program.

Whither the Bandwagon

I don’t think that the potential of CBE is limited only to the existing models, nor do I think WGU, CfA, and Excelsior are automatically the best initiatives. But an aphorism variously attributed to Pablo Picasso, Dalai Lama XIV or bassist Jeff Berlin might provide guidance to the new programs:

Know the rules well, so you can break them effectively

How many new CBE programs are being attempted that target the same student population as the parent institutions? How many new CBE programs are being attempted within the same organizational structure? And how many new CBE programs are actually based on testing only of masteries and not competencies?

Judging by media reports and observations at EDUCAUSE, I think there are far too many programs treating this new educational model of CBE as a silver bullet. They are moving beyond the model and lessons from WGU, College for America and Excelsior without first understanding why those initiatives have been successful. I don’t intend to name names here but just to note that the 350 new programs cited in Paul Fain’s article would do well to ground themselves in a solid foundation that understands and builds on successful models.

The post Competency-Based Education: Not just a drinking game appeared first on e-Literate.

Kuali Student Sunsetting $40 million project, moving to KualiCo

Thu, 2014-10-09 13:00

The changes with Kuali are accelerating, and there are some big updates on the strategy.

Earlier this week the Kuali Foundation distributed an Information Update, obtained by e-Literate, covering many of the details of the transition to Kuali 2.0 and the addition of the for-profit KualiCo. Some of the key clarifications:

  • KualiCo will be an independent C Corporation with a board of directors. KualiCo will not be a subsidiary of Kuali Foundation. Capital structure, equity allocations, and business plans are confidential and will not be shared publicly for the same reasons these things are rarely shared by private companies. The board of directors will start out with three members and will move to five or seven over time. Directors will include the CEO and an equal number of educational administrators and outside directors. One of the educational administrators will be appointed by the Kuali Foundation. Outside directors will be compensated with equity. Educational administrators will not be compensated in any way and could only serve as a director with the explicit permission of their university administration with attention to all relevant institutional policies.
  • KualiCo’s only initial equity investor is the Kuali Foundation. The Kuali Foundation will invest up to $2M from the Foundation’s cash reserves. [snip] For its equity investment, the Kuali Foundation will have the right to designate a director on the KualiCo Board of Directors. The Kuali Foundation, through its director, will have an exceptional veto right to block the sale of the company, an IPO of the company or a change to the open source license. This helps ensure that KualiCo will stay focused on marketplace-winning products and services rather than on flipping the company on Wall Street.
  • The Kuali Foundation is not licensing the Kuali software code for Kuali products to KualiCo as Kuali software is already fully open source and could be used by anyone for any purpose — as is already being done today. No license transfer or grant is needed by KualiCo or anyone else.
  • The copyright for the AGPL3 software will be copyright KualiCo for the open source distribution that is available to everyone. It would very quickly become untenable to even try to manage multiple copyright lines as various sections of code evolve through the natural enhancement processes of an open source community.

One key point the document describes at length is the lack of financial interest from individuals in the Kuali Foundation and KualiCo, including the uncompensated director position, the lack of equity held by individuals outside of KualiCo, etc.

Two other key points that are particularly relevant to yesterday’s news:

  • Each project board will decide if, when, to what extent, and for what term to engage with KualiCo. Project boards could decide to continue on as they currently do, to engage KualiCo in a limited way, or to allow KualiCo to help drive substantial change to the software approach to that product. If a project chooses not to engage KualiCo, KualiCo will have less initial funding to invest in enhancing the product, but will slowly build up those funds over time by hosting the product and enhancing the product for its customers. Choosing to engage with KualiCo in any fashion requires code to be reissued under the AGPL3 license (see Open Source section).
  • KualiCo will be working with the Kuali community to make improvements to current Kuali products. In addition to enhancing the current codebase, KualiCo is beginning the re-write of Kuali products with a modern technology stack. The initial focus will be on Kuali Student and then HR. Complete rewrites of KFS and KC will likely not begin for 3-5 years.
Kuali Student Changes

With this in mind, the Kuali Student (KS) Project Board met yesterday and made the decision to sunset its current project and to transition development to KualiCo. Bob Cook, CIO at the University of Toronto and chair of the KS Project Board, confirmed the decision by email.

I can say that the Board adopted its resolution because it is excited about the opportunity that KualiCo presents for accelerating the delivery of high quality administrative services for use in higher education, and is eager to understand how to best align our knowledgeable project efforts to achieve that goal. [snip]

In recognition of the opportunity presented by the establishment of KualiCo as a new facet in the Kuali community, the Kuali Student Board has struck a working group to develop a plan for transitioning future development of Kuali Student by the KualiCo. The plan will be presented to the Board for consideration.

While Bob did not confirm the additional details I asked about (“It would be premature to anticipate specific outcomes from a planning process that has not commenced”), my understanding is that it is safe to assume:

  • Kuali Student will transition to AGPL license with KualiCo holding copyright;
  • KualiCo will develop a new product roadmap based on recoding / additions for multi-tenant framework; and
  • Some or all of the current KS development efforts will be shut down over the next month or two.

KS Project Director Rajiv Kaushik sent a note to the full KS team with more details:

KS Board met today and continued discussions on a transition to Kuali 2.0. That thread is still very active with most current investors moving in the Kuali 2.0 direction. In the meantime, UMD announced its intent to invest in Kuali 2.0 and to withdraw in 2 weeks from the current KS effort. Since this impacts all product streams, Sean, Mike and I are planning work over the next 2 weeks while we still have UMD on board. More to come on that tomorrow at the Sprint demo meeting.

I will update or correct this information as needed.

Kuali Student (KS) is the centerpiece of Kuali – it is the largest and most complex project and the one of most central value to higher education. KS was conceived in 2007. Unlike KFS, Coeus and Rice, Kuali Student was designed from the ground up. The full suite of modules within Kuali Student had been scheduled for release between 2012 and 2015 in a single-tenant architecture. With the transition, there will be a new roadmap for redeveloping the product on a multi-tenant architecture and an updated technology stack.

Just how large has this project been? According to a financial analysis of 2009-2013 performed by instructional media + magic inc.,[1] Kuali Student had $30 million in expenditures in that 5-year span. Neither the 2014 records nor the 2007-8 records are available yet, but an educated guess is that the total is closer to $40 million.[2]

Kuali Project Finances 2009-13

 

I mention this to show the scope of Kuali Student to date as well as its size relative to other Kuali projects. I wrote a post on cloud computing around the LMS that might be relevant to the future KualiCo development, calling out how cloud technologies and services are driving down the cost and time of product development. In the case of the LMS, the difference has been close to an order of magnitude compared to the first generation:

Think about the implications – largely due to cloud technologies such as Amazon web services (which underpins Lore as well as Instructure and LoudCloud), a new learning platform can be designed in less than a year for a few million dollars. The current generation of enterprise LMS solutions often cost tens of millions of dollars (for example, WebCT raised $30M prior to 2000 to create its original LMS and scale to a solid market position, and raised a further $95M in 2000 alone), or product redesigns take many years to be released (for example, Sakai OAE took 3 years to go from concept to release 1.0). It no longer takes such large investments or extended timeframes to create a learning platform.

Cloud technologies are enabling a rapid escalation in the pace of innovation, and they are lowering the barriers to entry for markets such as learning platforms. Lore’s redesign in such a short timeframe gives a concrete example of how quickly systems can now be developed.

How will these dynamics apply to student information systems? Given the strong emphasis on workflow and detailed user functionality, I suspect the differences will be smaller than for the LMS, but still significant. In other words, I would not expect the redevelopment of Kuali Student to take anywhere close to $40 million or seven years, but I will be interested to see the new roadmap when it comes out.

This decision – moving Kuali Student to KualiCo – along with the foundation’s ability to hold on to the current community members (both institutions and commercial affiliates) will be the make-or-break bets that the Kuali Foundation has made with the move to Kuali 2.0. Stay tuned for more updates before the Kuali Days conference in November.

Say what you will about the move away from Community Source – Kuali is definitely not resting on its laurels or being cautious. This redevelopment of Kuali Student under a new structure is bold and high-risk.

  1. Disclosure: Jim Farmer from im+m has been a guest blogger at e-Literate for many years.
  2. It’s probably more than that, but let’s use a conservative estimate to set general scope.

The post Kuali Student Sunsetting $40 million project, moving to KualiCo appeared first on e-Literate.

LinkedIn Releases College Ranking Service

Fri, 2014-10-03 09:57

I have long thought that LinkedIn has the potential to be one of the most transformative companies in ed tech for one simple reason: They have far more cross-institutional longitudinal outcomes data than anybody else—including government agencies. Just about anybody else who wants access to career path information of graduates across universities would face major privacy and data gathering hurdles. But LinkedIn has somehow convinced hundreds of millions of users to voluntarily enter that information and make it available for public consumption. The company clearly knows this and has been working behind the scenes to make use of this advantage. I have been waiting to see what they will come up with.

I have to say that I’m disappointed that their first foray is a college ranking system. While I wouldn’t go so far as to say that these sorts of things have zero utility, they suffer from two big and unavoidable problems. First, like any standardized test—and I mean this explicitly in the academic meaning of the term “test”—they are prone to abuse through oversimplification of their meaning and overemphasis on their significance. (It’s not obvious to me that they would be subject to manipulation by colleges the way other surveys are, given LinkedIn’s ranking method, so at least there’s that.) Second and more importantly, they are not very useful even when designed well and interpreted properly. Many students change their majors and career goals between when they choose their college and when they graduate. According to the National Center for Education Statistics, 80% of undergraduates change their majors at least once, and the average student changes majors three times. Therefore, telling high school students applying to college which school is ranked best for, say, a career in accounting has less potential impact on the students’ long-term success and happiness than one might think.

It would be more interesting and useful to have LinkedIn tackle cross-institutional questions that could help students make better decisions once they are in a particular college. What are the top majors for any given career? For example, if I want to be a bond trader on Wall Street, do I have to major in finance? (My guess is that the answer to this question is “no,” but I would love to see real data on it.) Or how about the other way around: What are the top careers for people in my major? My guess is that LinkedIn wanted to start off with something that (a) they had a lot of data on (which means something coarse-grained) and (b) was relatively simple to correlate. The questions I’m suggesting here would fit that bill while being more useful than a college ranking system (and less likely to generate institutional blow-back).
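
To illustrate why questions like these should be relatively simple to correlate once the profile data exists, here is a minimal sketch in Python. It is purely hypothetical — the records, field names, and function are mine for illustration and have nothing to do with LinkedIn’s actual data model or API — but it shows how coarse-grained the required aggregation is.

```python
from collections import Counter

# Hypothetical profile records -- not LinkedIn's data model.
profiles = [
    {"major": "Finance", "career": "Bond Trader"},
    {"major": "Mathematics", "career": "Bond Trader"},
    {"major": "Finance", "career": "Financial Analyst"},
    {"major": "Physics", "career": "Bond Trader"},
    {"major": "Finance", "career": "Bond Trader"},
]

def top_majors_for_career(records, career, n=3):
    """Count which majors appear most often among people in a given career."""
    counts = Counter(r["major"] for r in records if r["career"] == career)
    return counts.most_common(n)

print(top_majors_for_career(profiles, "Bond Trader"))
# [('Finance', 2), ('Mathematics', 1), ('Physics', 1)]
```

The same counting logic, run the other way (careers grouped by major), would answer the second question; neither requires anything finer-grained than the data LinkedIn already displays publicly.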

The post LinkedIn Releases College Ranking Service appeared first on e-Literate.

Kuali Foundation: Clarification on future proprietary code

Thu, 2014-10-02 08:35

Well that was an interesting session at Educause as described at Inside Higher Ed:

It took the Kuali leadership 20 minutes to address the elephant in the conference center meeting room.

“Change is ugly, and change is difficult, and the only difference here is you’re going to see all the ugliness as we go through the change because we’re completely transparent,” said John F. (Barry) Walsh, a strategic adviser for the Kuali Foundation. “We’re not going to hide any difficulty that we run into. That’s the way we operate. It’s definitely a rich environment for people who want to chuck hand grenades. Hey, have a shot — we’re wide open.” [snip]

Walsh, who has been dubbed the “father of Kuali,” issued that proclamation after a back-and-forth with higher education consultant Phil Hill, who during an early morning session asked the Kuali leadership to clarify which parts of the company’s software would remain open source.

While the article describes the communication and pushback issues with Kuali’s creation of a for-profit entity quite well (go read the whole article), I think it’s worth digging into what Carl generously describes as a “back-and-forth”. What happened was that there was a slide describing the relicensing of Kuali code as AGPL, and the last bullet caught my attention:

  • AGPL > GPL & ECL for SaaS
  • Full versions always downloadable by customers
  • Only feature “held back” is multi-tenant framework

If you need a read on the change of open source licenses and why this issue is leading to some of the pushback, go read Chuck Severance’s blog post.

Does ‘held back’ mean that the multi-tenant framework to enable cloud hosting partially existed but is not moving to AGPL, or does it mean that the framework would be AGPL but not downloadable by customers, or does it mean that the framework is not open source? That was the basis of my question.

Several Kuali Foundation representatives attempted to indirectly answer the question without addressing the license.

“I’ll be very blunt here,” Walsh said. “It’s a commercial protection — that’s all it is.”

The back-and-forth involved trying to get a clear answer, and the answer is that the multi-tenant framework to be developed / owned by KualiCo will not be open source – it will be proprietary code. I asked Joel Dehlin for additional context after the session, and he explained that all Kuali functionality will be open source, but the infrastructure to allow cloud hosting is not open source.

This is a significant clarification of the future model. While Kuali has always supported an ecosystem of commercial partners that can offer proprietary code, this is the first time that Kuali itself will offer proprietary, non-open-source code.[1]

What is not clear is whether any of the “multi-tenant framework” already exists and will be converted to a proprietary license or if all of this code will be created by KualiCo from the ground up. If anyone knows the answer, let me know in the comments.
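
For readers unfamiliar with the term, here is a minimal, purely illustrative sketch of what a “multi-tenant framework” layer can look like in practice. This is not Kuali or KualiCo code, and every name in it is hypothetical; the point is the separation it shows — the application functionality (the part that remains open source) is shared, while a hosting layer maps each institution’s requests onto its own data.

```python
# Purely illustrative -- not Kuali or KualiCo code. Shared application
# logic sits behind a thin hosting layer that routes each request to
# the right institution's data. All names here are hypothetical.

TENANTS = {
    "universitya.example.edu": {"db": "ks_university_a"},
    "universityb.example.edu": {"db": "ks_university_b"},
}

def resolve_tenant(hostname):
    """Map an incoming hostname to that institution's configuration."""
    config = TENANTS.get(hostname)
    if config is None:
        raise LookupError(f"Unknown tenant: {hostname}")
    return config

def handle_request(hostname, query):
    """The shared (open source) application logic would run here;
    only the tenant-routing wrapper is the 'framework' piece."""
    config = resolve_tenant(hostname)
    return f"Running {query!r} against database {config['db']}"

print(handle_request("universitya.example.edu", "list enrolled students"))
```

Under that kind of split, holding back only the tenant-routing layer lets the hosting vendor keep a commercial advantage while the functionality institutions actually interact with stays open — which is consistent with Walsh’s description of it as “a commercial protection.”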

From IHE:

“Unfortunately some of what we’re hearing is out of a misunderstanding or miscommunication on our part,” said Eric Denna, vice president of IT and chief information officer at the University of Maryland at College Park. “Brad [Wheeler, chair of the foundation’s board of directors,] and I routinely are on the phone saying, ‘You know, we have day jobs.’ We weren’t hired to be communications officers.”

Suggestion: simple answers such as “What ‘held back’ means is that the framework will be owned by KualiCo, will not be open source, and therefore will not be downloadable” would avoid some of the perceived need for communications officers.

  1. Kuali Foundation is partial owner and investor in KualiCo.

The post Kuali Foundation: Clarification on future proprietary code appeared first on e-Literate.