Yesterday, 58 faculty members from the Faculty of Arts and Sciences at Harvard wrote an open letter to the dean requesting faculty oversight of HarvardX. When schools sign up for edX, their implementations tend to be called SchoolX; HarvardX thus refers specifically to Harvard's use of the MOOC platform, not to the overall edX organization. This distinction is important, given Harvard's founding role in creating the edX organization and its $30 million pledge of support.
The letter is short, so I’ll quote it in full (the signatures are much longer than the letter itself).
As the university marks the first anniversary of edX and HarvardX, some faculty are tremendously excited about the potential of HarvardX; others are deeply concerned about the program’s costs and consequences. We appreciate the meetings, town halls, and other arenas in which faculty have been able to discuss HarvardX. But we believe that many critical questions about the relationship of the FAS to HarvardX, and to edX, have not yet been addressed. These questions (which fall outside the remit of the two existing HarvardX faculty committees, most of whose members are not from FAS) range from faculty oversight of HarvardX to the impact online courses will have on the higher education system as a whole.
The Faculty of Arts and Sciences is directly responsible for the teaching of Harvard undergraduates and Ph.D. students. It is our responsibility to ensure that HarvardX is consistent with our commitment to our students on campus, and with our academic mission. Given the rapid pace of development of HarvardX, we believe it is essential to have a formal, sustained, and structured faculty discussion on these issues as soon as possible. We write to request that you appoint a committee of FAS ladder faculty to draft a set of ethical and educational principles that will govern FAS involvement in HarvardX, to be brought before the FAS for a vote in the coming academic year.
Note that they request FAS ladder faculty, which means tenured and tenure-track faculty, specifically excluding adjuncts and lecturers. It is possible, however, that the requested committee of ladder faculty could choose to involve adjuncts in the process.
In Michael’s recent post on the San Jose State University open letter regarding edX, he called out the missed opportunity for faculty involvement in the future of MOOCs.
By ignoring the scholarship of teaching, the department missed an opportunity to engage the MOOC question in a different way. Rather than thinking of MOOCs as products to be bought or rejected, they could have approached them as experiments in teaching methods that can be validated, refuted, or refined through the collective efforts of a scholarly community. Researchers collaborate across university boundaries all the time. The same can be true in the scholarship of teaching. The faculty could have demanded access to the edX data and the freedom to adjust the course design. The letter authors seem deeply invested in positioning the edX course as something that is locked down from a third-party commercial vendor. But in reality, the edX course is developed by a faculty member and provided by a university-based non-profit entity. Perhaps the department felt that there wasn’t sufficient opportunity in this particular course design to make a request to have a collaboration worthwhile. But their rhetoric gives no indication that there is any room for such exploration under any circumstances, or indeed that the department has anything to learn about use of educational technology that could lead to either improved outcomes or lower costs.
The Harvard letter, in my opinion, takes this more reasoned approach of viewing MOOCs as experiments in "teaching methods that can be validated, refuted, or refined through the collective efforts of a scholarly community". Let's hope that media coverage of the Harvard letter keeps this balanced view in mind rather than treating it as another example pitting faculty members against the big three MOOC providers.
Update (5/24): In an afternoon article, Steve Kolowich at the Chronicle describes more of the faculty's motivation for writing the letter, as well as the prospects for the requested committee.
That letter was on the minds of Harvard’s FAS professors when they convened to discuss MOOCs at a meeting this month, said Peter J. Burgard, a professor of German at Harvard. In their letter to Dean Smith, the Harvard professors allude to “many critical questions,” as yet unanswered, about “the impact online courses will have on the higher-education system as a whole.”
But, perhaps more immediately, the professors were irked that Harvard had become so deeply involved in MOOCs before consulting with them, said Mr. Burgard. [snip]
But the 58 signatories of the letter, out of the hundreds of professors in the FAS, might not get their way. In a written statement to The Chronicle, a spokesman for the dean suggested that a new committee, consisting solely of FAS professors, was not in the cards.
As we announced the other day, Phil and I have written a report sponsored by the 20 Million Minds Foundation responding to California SB 520, a.k.a. the "MOOC bill," and making some recommendations for the governor and legislature to consider as they attempt to tackle the bottleneck course problem in the current budget discussions. You can see more about the report on the 20 Million Minds website here. Ry Rivard of Inside Higher Ed has a good write-up of the paper here.
Phil and I will be doing a CrowdHall event Tuesday through Friday of next week. CrowdHall basically lets anyone ask questions (asynchronously) and then has participants vote up the best questions for responses by the "speakers." I have no idea how well it will work, but I'm interested in trying it. We will be serializing the paper here on e-Literate during those days, posting a new section and some commentary each day to try to stimulate discussion. The event starts at 9 AM PST and ends at 2 PM PST each day (although how much those times mean during an asynchronous event is not clear to me).
Just over a week ago I had the opportunity to participate in a radio interview for the University of Delaware’s local station WVUD, with the Campus Voices interview airing on May 17th. The interview was in advance of Delaware’s summer faculty institute, where I will be speaking in just over a week. I really enjoyed the interview, and this is an area that needs more attention – local educational technology support for faculty innovation, with an emphasis on faculty sharing best practices. The summer institute is May 28th – 31st.
I was interviewed by Richard Gordon and Paul Hyde. Some of the key topics we explored:
- Not everyone is a reader of the Chronicle of Higher Education – what the heck is a MOOC?
- How do MOOCs affect faculty teaching in a bricks-and-mortar university?
- What are the completion rates of MOOCs and what are the student types?
- Are there applications beyond higher education?
- Why is there such significant pushback against MOOCs lately?
- What disciplines beyond science and engineering are using MOOCs?
Here is a link to the U Delaware radio interview (audio only). It's about a half hour in length, but with some cool NPR-sounding music to kick it off.
I have also added some graphics and created a video of the interview.
We are pleased to announce the publication of our white paper on California’s bottleneck course issue. Many thanks to the paper’s sponsor, the 20 Million Minds Foundation, for giving us the support and freedom to write exactly what we believe. If there is anything that you find wrong or objectionable in the paper, then blame us.
The central idea in the paper is that California should adopt the principle that students have a right to educational access. There is a fundamental difference between saying that we should do whatever we can to give students access and saying that we have an obligation to enable students to exercise their right to access. And that change of frame is critical to solving the problem of bottleneck courses.
The current incarnation of SB 520, which we have written about here repeatedly, has been accused by its detractors of being a potential vehicle for gutting and privatizing California's public higher education. We believe that concern is legitimate. However, in the context of a larger bill supporting students' right to access, it could be not only positive but essential as a path of last resort. As part of supporting every citizen's right to due process when accused of a crime, the government is required to provide access to a public defender. But few people who have financial means are likely to choose a public defender over a private attorney because private attorneys, by and large, have access to resources (including time for individual attention) that public defenders do not. Likewise, we believe that access to third-party online courses disconnected from a student's home institution is a poor solution to the student's access problem. The only worse solution is not to have one at all, which is the current situation. If Californians believe that students should have a right to access, then they must provide a means of last resort for students to exercise that right.
But the best solution would be to eliminate bottleneck courses altogether, which is why much of our proposal centers on providing mechanisms and funding to empower faculty members, campuses, and systems to solve these problems within the California public education system, where students have the benefit of the campus support network and expertise of local faculty. Even the main funding for the third-party course provisions, which we characterize as the “safety valve” of the plan, would go toward developing infrastructure that would be equally useful to support students taking courses from other campuses within the California systems. If the faculty and administrators will lead an effort to solve the bottleneck course problem organically, with appropriate support from the state, then the actual use of the safety valve option by students could become a rarity.
We acknowledge that technology is not the only possible solution to the bottleneck course problem; nor do we assume that the underlying budget challenges should be accepted at face value. We have written about technology as one avenue to solve the problem because educational technology is what we know about and what we were asked to write about. None of what we suggest precludes discussions about allocation of funding in college budgets, levels of state funding support, allocation of faculty time to lower-division courses, or other relevant questions.
We believe strongly that students should have a right to educational access and that technology can be one useful tool in enabling them to exercise that right. We also believe that the educators in California’s public college and university system are still critical enablers of that right and have a central role to play in making that ideal a reality. And we think there is real value in bringing together educators across the state to focus on sensible application of technology to solve a real educational problem. The culture and collaboration, knowledge and infrastructure that could be created to solve the access problem could also be applied to problems such as improving completion rates, improving course quality, and lowering tuition costs.
You can read the white paper here.
Last month, I wrote this narrow defense of automated essay grading, hoping to clear the air on a new and controversial technology. In that post’s prolific comments section, Laura Gibbs made a comment echoing what I’ve heard from every teacher I speak to.
I am waiting for someone to show me a real example of this “useful supplement” provided by the computer that is responding to natural human language use – I understand what you want it to be, but I would contend that natural human language use is so complex (complex for a computer to apprehend) that trying to give writing mechanics feedback on spontaneously generated student writing will lead only to confusion for the students.
When we talk about machine learning being used to automatically grade writing, most people don’t know what that looks like. Because they don’t know the technology, they make it up. As far as I can tell, this is based on a combination of decades-old technology like Microsoft Word’s green grammar squiggles, clever new applications like Apple’s Siri personal assistant, and downright fiction, like Tony Stark’s snarky talking suits. What you get from this cross is a weird and incompetent artificial intelligence pointing out commas and giving students high grades for hiding the word “defenestration” in an essay.
My cofounder at LightSIDE Labs, David Adamson, taught in a high school for six years. If we were endeavoring to build something that was this unhelpful for teachers, he would have walked out a long time ago. In fact, though, David is a researcher in his own right. David's Ph.D. research isn't as focused on machine learning and algorithms as my own; instead, his work brings him into Pittsburgh public schools, talking with students and teachers, and putting technology where it can make a difference. In this post, rather than focus on essay evaluation and helping students with writing – which will be the subject of future posts – I'm going to explore the things he's already doing in classrooms.

Building computers that talk to students
David builds conversational agents. These agents are computer programs that sit in chatrooms for small-group discussion in class projects, looking by all appearances like a moderator or TA logged in elsewhere. They’re not human, however – they’re totally automated. They have a small library of lines that they can inject into the discussion, which can be automatically modified slightly in context. They use language technology, including machine learning as well as simpler techniques, to process what students are saying as they work together. The agent has to decide what to say and when.
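To make that architecture concrete, here is a minimal sketch of what such an agent's decision loop might look like. Everything below (the names, the stand-in classifier, the confidence threshold) is hypothetical, illustrating the shape of the system described above rather than LightSIDE's actual code.

```python
import random

# Hypothetical prompt library: pre-scripted Accountable Talk moves with
# slots that get filled in from the chat context. (Illustrative only.)
PROMPTS = {
    "revoice": "So, let me get this right. You're saying: {claim}",
    "agree_disagree": (
        "{other}, can you repeat what {speaker} just said, in your own "
        "words? Do you agree or disagree? Why?"
    ),
}

def classify(message):
    """Stand-in for the trained model. A real agent would use a
    classifier trained on labeled chat transcripts; this crude
    heuristic just keeps the sketch runnable."""
    is_claim = len(message.split()) > 6 and not message.endswith("?")
    label = "substantive_claim" if is_claim else "other"
    return label, (0.9 if is_claim else 0.3)

class DiscussionAgent:
    """Sits in a small-group chatroom and decides what to say and when."""

    def __init__(self, students):
        self.students = students

    def respond(self, speaker, message):
        label, confidence = classify(message)
        # Stay silent unless this looks like a productive, substantial
        # claim worth building discussion around.
        if label != "substantive_claim" or confidence < 0.8:
            return None
        other = random.choice([s for s in self.students if s != speaker])
        return PROMPTS["agree_disagree"].format(other=other, speaker=speaker)

agent = DiscussionAgent(["Student 1", "Student 2", "Student 3"])
print(agent.respond("Student 1", "boiling point rises because the molecules have more surface area"))
```

The point of the sketch is the division of labor: the hard natural language work is confined to the classifier, while the pedagogy lives entirely in the small, hand-written prompt library.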
Those pre-scripted lines aren’t thrown in arbitrarily. In fact, they’re descended from decades of research into education and getting classroom discussion right. This line of research is called Accountable Talk, and in fact there’s an entire course coming up on Coursera about how to use this theory productively. The whole thing is built on fairly basic principles:
First, students should be accountable to each other in a conversation. If you're only sharing your own ideas and not building off of the ideas of others, then it's just a bunch of people thinking alone who happen to be in a chatroom together. You don't get anything out of the discussion. Next, your thought process should be built on connecting the dots, making logical conclusions, and reasoning about the connections between facts. Finally, the facts you're basing your reasoning on should be explicit. They should come from identifiable sources, and you should be able to point to them in your argument for why your beliefs are correct.
David's agents are framed around Accountable Talk, doing what teachers know leads to a good discussion. Instead of giving students instructions or trying to evaluate whether they are right or wrong, the agents merely ask good questions at the right times. They were trained to look for places where students made a productive, substantial claim – the type of jumping-off point that Accountable Talk encourages. David never had them try to correct those claims, though, or even evaluate whether they were right or wrong; they were simply looking for the chance to make a difference in the discussion.
He used those automated predictions as a springboard for collaborative discussion. Agents were programmed to try to match student statements to existing facts about a specific chemistry topic. "So, let me get this right. You're saying…" More often than not, he also programmed the agents to lean on other students for help. "[Student 2], can you repeat what [Student 1] just said, in your own words? Do you agree or disagree? Why?" Automated prompts like this leave the deep thinking to the students. Instead of following computer instructions by rote, students were pushed into deeper discussions. The agents cede authority to the students, asking them to lead rather than taking on the role of a teacher looming over them.

Sometimes computers fail
In the real world, intervention to help students requires confidence that you’re giving good advice. If David’s agents always spout unhelpful nonsense, students will learn to ignore them. Perhaps worst of all, if the agent tries to reward students for information it thinks is correct, a wrong judgment means students get literally the opposite of helpful teaching. With all of this opportunity for downside, reliability seems like it would be the top priority. How can you build a system that’s useful for intervening in small groups if it makes big mistakes?
This is mostly accounted for by crafting the right feedback, designing agents that are tailored to the technology's strengths while avoiding its weaknesses. In large part this comes down to avoiding advice that's so clear-cut that big mistakes are possible. Grammar checking and evaluations of accuracy within a sentence are doomed to fail almost from the start. If your goal with a machine learning system is to correct every mistake that every student makes, you're going to need to be very confident, and because this is a statistics game we're playing, that kind of technology is going to disappoint. Moreover, even when you get it right, what has a student gained by being told to fix a run-on sentence? At best, an improvement at small-scale grammar understanding. This is not going to sweep anyone off their feet.
By basing his conversational agents on the tenets of a good discussion, David was able to gain a lot of ground with what is, frankly, pretty run-of-the-mill machine learning. Whiz-bang technology is secondary to technology that does something that helps. When the system works, it skips the grammar lessons. Instead, it jumps into the conversation at just the right time to encourage students to think for themselves.
Sometimes, though, the agent misfires. When using machine learning, this is something you just have to accept. What we care about is that this doesn’t hurt students or start teaching wrong ideas. So let’s think about the cases where an agent can make a wrong decision: first, where the agent could have given feedback but didn’t, and second, where the agent gives the wrong feedback at the wrong time.
First, the easy case. Sometimes a student will say something brilliant and the agent will fail to catch it. Here, the balance of authority between agent and student matters. If students get used to the idea that the agent is a teacher, they’ll be looking for it to tell them they got every answer right. This is a danger zone for us – with statistical machine learning, we don’t want to back ourselves into a corner where the system has to make judgments every single time. Sometimes, we’ll be uncertain. If an agent misses its cue and the students expect a teacher’s authoritative judgment of correctness, the group will flounder when the agent misses a beat, or take the discussion into their own hands and leave the agent in the dust.
Let’s see what that looks like in practice. Here’s a transcript from an earlier study, when conversational agents weren’t as heavily rooted in the theory of Accountable Talk:
Tutor: You should now move on to discussing Condition C.
Tutor: [Student A], now would be a good time to ask [Student B] to build on what [Student C] is saying.
[Student B]: I’m so confused!
[Student A]: [Student B], would you like to build onto what [Student C] is saying? And me too!
Tutor: When you agree, write down your predictions on your worksheet.
This is dysfunctional; the tutor isn’t asking discussion-generating questions, and it’s pushing forward regardless of context. Focusing on the task single-mindedly doesn’t give students room to think.
By taking on the role of facilitator, though, the agent’s comments aren’t expected every time. We can use the system to chime in with an Accountable Talk question when we’re highly confident that the machine learning system is making the right prediction, but if the system stays silent, the students won’t even notice it’s missing; the agent is purely there to help scaffold learning, rather than judge correctness. When you’re augmenting teaching, rather than replacing humans entirely, you can afford to be cautious. Look at how the agent interacts when we start asking questions that are better suited to what we know we can do:
[Student D]: ok, boiling pt will go up and vdW will go up for all of them consecutively… right?
Tutor: Do you concur with [Student D]? Why, or why not?
[Student E]: hmm not necessarily
[Student F]: area goes up for each
[Student E]: would it?
[Student E]: im not sure
[Student D]: yea for sure area goes up
[Student F]: dipole increases first one
In this example excerpt, the tutor didn't give an instruction or evaluate anything in the first quoted student line. It simply asked a basic question in response to the machine learning system identifying that spot as a good opening. The comments from these new agents use Accountable Talk principles, and get student groups discussing ideas.
Of course, these systems aren’t perfect. What we’re finding out, though, is that we can frame the discussion right for automated assessment by not trying to make our automated system the perfect arbiter of truth. What I’m describing isn’t a dire portrait of machines taking over the education system. It’s agents contributing meaningfully to learning by cautiously intervening when appropriate, using machine learning for educated guessing about when it’s time to get students to think more deeply. These agents are tireless and can be placed into every discussion in every online small group at all times – something a single teacher in a large class will never be able to do.
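That caution can be made concrete. One standard way to tune such a system (a sketch under assumed data, not a description of the actual LightSIDE pipeline) is to pick the agent's decision threshold on held-out transcripts for precision rather than recall, since a missed cue costs almost nothing while a wrong intervention erodes students' trust:

```python
import numpy as np

def pick_threshold(confidences, labels, min_precision=0.9):
    """Sketch: choose the lowest confidence cutoff whose precision on
    held-out examples meets the target, so the agent speaks as often
    as possible while keeping wrong interventions rare. `labels` are
    1 where a human judged the moment a good opening, 0 otherwise."""
    confidences = np.asarray(confidences, dtype=float)
    labels = np.asarray(labels)
    for t in np.sort(np.unique(confidences)):
        flagged = confidences >= t
        if labels[flagged].mean() >= min_precision:
            return t
    return float("inf")  # never intervene; target precision unreachable

# Toy held-out data: the agent should only speak when it is quite sure.
print(pick_threshold([0.2, 0.6, 0.7, 0.9, 0.95], [0, 0, 1, 1, 1]))  # 0.7
```

Tuning this way trades recall for precision deliberately: the agent misses some good openings, but when it does speak, it is almost always at the right moment.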
The results with these agents were clear: students learned significantly more than students who didn't get the support. Moreover, when students were singled out and targeted by agent questioning, they participated more and led a more engaged, more assertive conversation with the other students. The agent didn't have to give students remedial grammar instructions to be valuable; the data showed that the students took their own initiative, with the agents merely pushing them in the right direction. Machine learning didn't have to be perfect. Instead, machine learning figured out the right places to ask questions, and worked towards making students think for themselves. This is how machine learning can help students.

For helping students, automated feedback works.
We should be exercising caution with machine learning. Skeptics are right to second-guess interventions from technologists who aren't working with students. The goal is often to replace teachers, not help them, especially with the promise of tantalizingly quick cost savings. Yes – if you want to make standardized testing cheaper, machine learning works. I don't want to dismiss this entirely – we can, in fact, save schools and states a lot of money on existing standardized tests – but if that's as far as your imagination takes you, you're missing the point. What's important isn't that we can test students more, and more quickly, with less money. Focus on this: we can actually help students.
Not every student is going to get one-on-one time daily with a trained writing tutor. Many are never going to see a writing tutor individually in their entire education. For these students, machine learning is stepping in, with instant help. These systems aren't going to make the right decision every time in every sentence. We need to know that, and we need to work with it. Rather than toss out technology promising the moon, look carefully at what it can do. Shift expectations as necessary. In David's case, the shift was about authority. He empowered students to take up their own education, and his system chimed in when it saw an opportunity, positioning the automated system as guide rather than dictator.
This goes way beyond grading, and way beyond grammar checking. Machine learning helps students when teachers aren't there. Getting automated feedback right leads to students thinking, discussing ideas, and learning more – and that's what matters. In my next post, I'd like to launch off from here and talk about what these lessons mean not just for discussion, but for writing. Stay tuned.

A last note
The work I described from David is part of an extended series of more than 20 papers and journal articles from my advisor at Carnegie Mellon, Carolyn Rosé, and her students. While I won’t give a bibliography for a decade of research, some of the newest work is published as:
- “Intensification of Group Knowledge Exchange with Academically Productive Talk Agents,” in this year’s CSCL conference.
- “Enhancing Scientific Reasoning and Explanation Skills with Conversational Agents,” submitted to IEEE Transactions on Learning Technologies.
- “Towards Academically Productive Talk Supported by Conversational Agents,” in the 2012 conference on Intelligent Tutoring Systems.
I’ve asked David to watch this post’s comments section, and I’m sure he’ll be happy to directly answer any questions you have.
This is going to be a more personal blog post than I typically make here at e-Literate.
The open letter from San José State University’s philosophy department in protest of the edX JusticeX course being taught at SJSU is getting a lot of attention, as is the follow-up statement from the SJSU faculty senate. I have some concerns with both of these letters—particularly the one from the philosophy department—but before I get into them, I’d like to emphasize my points of agreement and solidarity with the department:
- As a former philosophy major and a former teacher of philosophy courses to seventh and eighth graders, I strongly believe that a course in social justice is critical to every American’s education.
- I also strongly agree that, in order for such a course to be effective, it must be up-to-date, relevant to the students, and involve in-depth facilitated discussion.
- I agree that there is a bit of a bait-and-switch going on, possibly unintentionally, in the rhetoric that touts MOOCs as providing superior pedagogy to lecture classes (which is probably somewhat true) and then moves to swap out discussion classes for MOOCs instead.
- I agree that some MOOC fans (though by no means all of them) have simplistic notions of how MOOCs can make university education cheaper without thinking through the consequences either to the quality of education or the fiscal health of the colleges and universities that still provide tremendous value to our nation and our culture.
- I agree that intellectual diversity is very important, particularly when discussing complex issues that are essential to a functioning democracy, and that the potential for an intellectual monoculture is a concern worth taking very seriously.
- While I have no knowledge of the negotiations between edX and SJSU, I strongly agree that such partnerships should be conceived and implemented with active consultation and collaboration with faculty unless there is exceptionally strong justification to do otherwise.
Despite all this common ground on values that are dear to me, I find aspects of the department’s letter to be deeply problematic.
To begin with, there is this:
Good quality online courses and blended courses (to which we have no objections) do not save money, but pre-packaged ones do, and a lot.
That statement is demonstrably false. Good quality online courses and blended courses can, in fact, save money. How do we know? For starters, the National Center for Academic Transformation has a long list of course redesign projects they have been doing in collaboration with colleges and universities since 1999, many of which have achieved substantial cost savings. And some of them actually achieved substantial improvement in outcomes while achieving substantial cost savings. Nor is NCAT alone. There is a growing body of empirically backed academic literature showing that we can teach more students more effectively for less money across a variety of subjects. Some subjects are easier to redesign than others. But cost savings in high-quality courses is possible as a general proposition (and does not require open content licensing, by the way). The SJSU philosophy department's blanket denial of this possibility is not credible.
As a result, the authors of the letter are also less credible when they write,
In addition to providing students with an opportunity to engage with active scholars, expertise in the physical classroom, sensitivity to its diversity, and familiarity with one’s own students is just not available in a one-size-fits-all blended course produced by an outside vendor….When a university such as ours purchases a course from an outside vendor, the faculty cannot control the design or content of the course; therefore we cannot develop and teach content that fits with our overall curriculum and is based on both our own highly developed and continuously renewed competence and our direct experience of our students’ abilities and needs.
There appears to be a significant disconnect here. On the one hand, the department argues (correctly, in my view) that philosophy students gain great benefit from “the opportunity to engage with active scholars.” On the other hand, they assert that the philosophy department has “expertise in the physical classroom” and a “highly developed and continuously renewed competence” despite the overwhelming likelihood that most of the faculty have not had significant opportunities to engage with active scholars in pedagogy-related fields.
They could have made their case just as effectively without foreclosing the possibility of improving on what they already do. As the letter from the SJSU Faculty Association notes in response to the improved completion rates of the edX course,
The pedagogical infrastructure and work that has gone into the preparation of the edX material could easily be replicated if SJSU made a commitment to pedagogy and made training in pedagogy central to all faculty.
This is a defensible argument that the philosophy department could have made. But it didn’t. Instead, it implicitly denied the existence of the scholarship of teaching and explicitly blamed the university’s financial issues on “industry” for “demanding that public universities devote their resources to providing ready-made employees, while at the same time…resisting paying the taxes that support public education.” The collective effect of these rhetorical moves is to absolve the department of all responsibility for addressing the real problems the university is facing.
By ignoring the scholarship of teaching, the department missed an opportunity to engage the MOOC question in a different way. Rather than thinking of MOOCs as products to be bought or rejected, they could have approached them as experiments in teaching methods that can be validated, refuted, or refined through the collective efforts of a scholarly community. Researchers collaborate across university boundaries all the time. The same can be true in the scholarship of teaching. The faculty could have demanded access to the edX data and the freedom to adjust the course design. The letter authors seem deeply invested in positioning the edX course as something that is locked down from a third-party commercial vendor. But in reality, the edX course is developed by a faculty member and provided by a university-based non-profit entity. Perhaps the department felt that there wasn’t sufficient opportunity in this particular course design to make a request to have a collaboration worthwhile. But their rhetoric gives no indication that there is any room for such exploration under any circumstances, or indeed that the department has anything to learn about use of educational technology that could lead to either improved outcomes or lower costs.
Equally disturbing is the tendency in both letters to dismiss the fiscal crisis as something caused solely by greedy capitalists. It’s worth requoting the earlier referenced comment from the philosophy department letter here:
Industry is demanding that public universities devote their resources to providing ready-made employees, while at the same time they are resisting paying the taxes that support public education.
To begin with, “industry” isn’t alone in demanding that public universities devote their resources to producing employable graduates. Students and their parents are asking for it too, as are individual human taxpayers. On this last point, I am not a Californian, but I understand that individual human taxpayers have an unusually direct say regarding tax rates in the state of California. The purpose of education as a public good is a serious and complicated question that deserves more careful treatment from people who should know better.
Nor are taxes the only issue. While it is true that there has been progressive defunding of public colleges and universities in the United States, it is also true that tuition costs have been rising dramatically across the country in private as well as public schools. And it is true that the public colleges and universities in California in particular are struggling with unanticipated swelling enrollments as they strive to meet the as-yet-unfulfilled moral imperative of universal access to education. Given all of this, it is not a morally defensible position to simply point the finger at the rich guys and say, “It’s their fault. Make them fix it.” To the degree that course redesign can positively impact student access to education, faculty have a moral obligation to be leading the charge. And from a strategic perspective, they are more likely to prevent dumb ideas—such as gutting quality residential education in favor of least-common-denominator, video-driven xMOOCs—from taking hold.
But perhaps the worst aspect of the simplistic finger pointing is the way in which it pollutes the civic discourse. It encourages individual stakeholders to harden into an "us vs. them" position that reduces the likelihood of citizens coming together to solve real, hard problems that are deeply intertwined with issues of social justice. Here's an example of a comment made on this blog in response to a post about the California SB 520 bill:
Remember that when the Nazis led the people into the gas chamber they told them that it was a refreshing shower after a long train ride. Do not be fooled! This sweet sounding bill is the gas chamber of good education in California. Once we are in the questions will be pointless. As the pellets drop we will realize we should have questioned things sooner.
Setting aside the fact that the only justifiable use of genocide as an analogy is when talking about another genocide, this kind of rhetoric is enormously damaging to the possibility of a productive dialectic regarding how to solve the very real and complicated problems that our system of higher education faces, including both the need to increase access and the complexities of funding that imperative. And, sadly, this comment was written by a member of the SJSU philosophy department.
So MindTap just won a CODiE award for “Best Post-secondary Personalized Learning Solution.” In and of itself, this isn’t a big deal. No offense intended to current or prior winners, but the CODiEs often feel like awards for “Best Instant Coffee” or “Best New Technology Product by an Important Sponsor of Our Awards Program.” They’re not exactly signals of breakthrough educational product design. But I’m glad that the award was given in this case because I think MindTap does represent an important innovation that addresses some of the trends that we’ve been blogging about here at e-Literate (which was one of the reasons that I was enticed to work on MindTap at Cengage for a while).
MindTap is not a “personalized learning solution.” While it does allow students to do things like integrate their Evernote accounts and choose whether they want to read or listen to texts, the level of personalization for the learners is not terribly different from other products on the market. (And it certainly is nowhere near as radical as the vision for a Personalized Learning Environment which came from the UK’s JISC and elsewhere, and from which terms like “personalized learning solution” and “personalized learning experience” have been bastardized). Nor are there adaptive analytics or other sorts of machine-driven personalization in the product at this time. Rather, the key differentiator in the current incarnation of MindTap is the way in which it creates a more refined and complete learning experience out of the box while still enabling faculty to customize those experiences to the needs of their students in pretty significant and, in some cases, new ways. This is exactly where the textbook, LMS, and MOOC markets are all headed, and MindTap got there first.
The Problem to be Solved
In order to understand the value of a product like MindTap, it's important to understand where textbook publishers do and do not compete. You're not going to see a lot of MindTap-style products for courses like "Advanced Topics in International Trade Policy," "Research in Genetics," "Greek Film," or "Intermediate Killer Shark Genre." These smaller courses are relatively uninteresting to textbook publishers because they don't have the scale necessary to generate significant revenues, and they are also better suited to hand-crafted course designs that play to the strengths of the particular professor doing the teaching and can be highly tailored to the needs and interests of the students in the class. Rather, the courses in question are more like "Introduction to Psychology," "General Biology I," "Microeconomics," or "Survey of Western Civilization." (English Composition is an anomaly in this categorization because of the way it is taught.) These courses are generally taught in large lecture halls with little or no writing—and when there is writing, it is often graded quickly on a narrow range of criteria by overworked graduate students—and with relatively generic syllabi (particularly in non-elite institutions).
A lot of the heated debate over whether college is "broken" revolves around these sorts of classes without ever explicitly defining the scope of the problem. Those who say school is broken and needs to be disrupted tend to argue as if all college courses are giant, boring lecture courses. Those who argue against the "school is broken" meme tend to characterize these large lecture-centric courses as exceptions. Neither characterization is entirely accurate. On one hand, there are huge swaths of courses in just about any college catalog that are not large lecture courses. On the other hand, because the large lecture courses are concentrated in core curriculum and core major classes, most students have to take a handful of these courses in order to graduate.
Regardless of how pervasive or rare you think these courses are, everybody seems to agree that they are not terribly effective. But what should be done about the problem? Shrinking the class size is simply not going to happen, given both budget realities and the moral imperative to increase access to education. And yet, the current situation is bad not only for the students but also for the instructors. Keep in mind that the people teaching these survey courses are disproportionately either junior faculty who are doing all kinds of other duties to earn tenure or adjuncts who are working unreasonable course loads just to make ends meet. They generally don't have a lot of time to either carefully craft a course or give students a lot of (or any) individual attention. They often have little choice but to take what the publisher is giving them as their course outline and run with it.

In and of itself, the direct adoption of a publisher's curriculum isn't necessarily bad for many of these courses. The whole idea of a core course is that it helps all students getting a particular degree or a particular major to master certain competencies that they should have. There is a strong argument for consistency of curriculum across core courses. But the current situation neither guarantees consistency of curriculum nor saves the instructor time for either thoughtful customization of the curriculum or any other purpose. There is still a lot of hand assembly required to pull together reading assignments, assessments, slides and lecture notes, and so on. It is generally not a creative process because there is little time for creativity, but it is nevertheless a labor-intensive process and one that is prone to introduce variation in hitting those core competencies without any checks or even necessarily a lot of reflection on it.

A Better Compromise
If instructors are going to adopt a third-party course curriculum anyway, then we should at least use technology to remove the hand assembly. Why not provide the readings, multimedia, assignments and assessments, neatly integrated with a basic syllabus, into one ready-to-use digital package for the students? At its most basic, this is what “courseware” is and what MindTap does. It provides students and instructors with a ready-to-go complete course structure with all the materials and assessments placed in a logical and easily navigable order. Joel Spolsky once defined poor user interface design as forcing users to make choices that they don’t care about. That is also an apt description for 80% of the pre-semester course preparation process that instructors go through with these big survey courses. Pre-assembling the elements of the vanilla version of the course frees up the instructors’ time to focus on the customizations that they actually do care about. To begin with, the course structure is already assembled and visible, which makes it easier for the instructor to think about its total shape. Removing unwanted content or changing content order is trivially easy, making the roughing in of the course structure very quick.
But things get really interesting when you start looking at adding to the learning path structure in MindTap rather than just moving or deleting things. In ed tech discussions, we tend to talk about APIs as if the main differentiation is having them versus not having them. Can you or can you not integrate Google Docs into a course? But in reality, the specifics of the integration can make an enormous difference in how practically useful the added functionality is to teachers and students. Do you want to make a folder of your documents (like your syllabus) available to the students at all times in the course with one or two clicks, or do you want to insert your own supplemental document right into the course reading, zero clicks away for the student and on their default navigation path? These two types of integration serve fundamentally different purposes in the course. In MindTap, you can do both and more. And importantly, making these different customizations is intuitive and almost trivially easy.

Radical customization of the course structure is very much possible. But both because there is far less instructor time wasted with hand assembly of course elements and because customizations are visible and visualizable in the learning path structure, the percentage of time spent on meaningful instructional activities, whether that's course customization or student interaction, is likely to be higher. For this reason, the MindApp model and the learning path structure are MindTap's crown jewels.

Table Stakes
Of course, MindTap doesn't have a monopoly on useful courseware platform design. For example, WileyPLUS enables instructors to see which course materials and assessments are associated with which learning objectives. This helps instructors to align what they're teaching and assessing on to what they think the student should be learning.

More importantly, none of these innovations from any of the platforms are going to magically change poor large lecture classes into great educational experiences. The key to solving that problem is not the technology by itself but the learning design that it enables. The classroom flipping craze is a craze precisely because it is a learning design that can improve the pedagogical impact of these large survey classes. But anyone who has actually tried to flip their class will tell you that it's not easy to do well. Faculty need pedagogical models other than the ones that they learned from their own professors, including the practical tips and support necessary to make those models work in the real world. They need course designs based on learning science and collected experience of innovators, and supported by technology. The MindTap platform doesn't provide that. No technology platform does. And as far as I can tell, Cengage is not yet designing courseware for MindTap that even attempts to do this.

But in order to accomplish the bigger goal, we first need to strike a new balance regarding course design customization. It's not a question of "more" versus "less." There will always be times when it is wise to allow a skilled instructor to tune a course. But there needs to be more of a sophisticated collaboration between the individual instructor, a curriculum design team (whether that team works for a textbook publisher or a university), and the other instructors teaching the course at the same institution in order to arrive at better pedagogical approaches that can be adopted and adapted to best effect by individual teachers. In order to accomplish that, you need to start with a combination of platform and content that makes meaningless course assembly unnecessary and meaningful course customization both easy and visible. This is what we mean at e-Literate when we write about "courseware." And at the moment, MindTap is the best example I know of what a next-generation courseware platform will look like.
Knewton CEO Jose Ferreira has an interesting and revealing blog post up about "the coming adaptive world." In part, it is a response to a report on adaptive learning by Education Growth Advisors. Jose writes, "Despite our constant protestations to the contrary, observers often confuse Knewton with the many adaptive learning app makers who are now popping up. Or they confuse app makers with platforms. Or they think we're all competitors." It's a bit of a red herring, since the report does distinguish between platform and publisher business models. That said, the meaning of the distinction between these two categories isn't drawn terribly clearly, and it's fair for Knewton to try to clarify its market positioning. But in doing so, Jose reveals what appears to be a shift in their thinking about the market for a platform like theirs, which tells us something important about the ed tech market in general.
Knewton has always been a platform play. They don't design educational products. They provide an analytics engine that can be used to make educational products. So they are business-to-business. They sell to other education companies. The value proposition they offer is that they have invested in data science talent and infrastructure that is more powerful and sophisticated than most education companies can manage. It's a bit like Amazon saying, "Hey, you're never going to have even a tiny fraction of the experience that we have running unbelievably massive systems that can never go down. Why don't you just leave that part of things to us by using Amazon Web Services and focus your attention on building the parts of your product that are specific to you?" This is a reasonable pitch for a company like Knewton to make, in my view. The issue that I have had with the company's public marketing is that there has been a little too much "WHEEEEEEE!" in it.
I think there is a certain ethical responsibility to demystify these technologies in order to help educators and students alike understand when and how they can be helpful. I also think that demystification makes good business sense from Knewton’s perspective. The company simply isn’t going to get good results (and therefore repeat engagements) by hooking up random customer content to their analytics engine. They need content that has been designed for analytics in some real sense in order to produce meaningful insights. They need customers to come to them having some idea of what capabilities they want to design into the product from the beginning.
And that’s where Jose’s post gets interesting. He writes,
Sure — it’s straightforward enough to wire up a simple, self-contained adaptive app, based on a pre-determined, limited decision-tree. But how much better would that app be if it contained an effectively unlimited amount of back-end content? If all of its assessment items had been algorithmically “normed” so that they resulted in exact concept proficiency data for each student? Or if the app pre-acted to the learning modalities of each student? Or if it “started hot” so that from Day 1 of a student taking a new course, all her prior concept proficiencies and learning styles had been preloaded?
Knewton makes possible all these things and more. Today, Knewton functionality includes pinpoint student proficiency measurement, content efficacy measurement (yes, we can tell you how effective your content is), student engagement optimization, atomic-concept adaptive learning, and concept-level analytics. Next year we’re adding “adaptive tutoring,” which combines the wisdom of crowds with Knewton’s network to find the perfect people online right now to give you real-time help.
Hmm. Assessment items being "algorithmically 'normed' so that they resulted in exact concept proficiency data for each student?" "Pinpoint student proficiency measurement?" Gee, that sounds suspiciously like Item Response Theory. And if you can find your way past Knewton's marketing to their tech blog, you will find out that, in fact, Item Response Theory is exactly what Knewton uses for this. Still missing is a straightforward explanation of what IRT can and cannot do well, as well as the kind of content design investment that Knewton's customers would have to make to take advantage of this capability. It's not as simple as sprinkling a little machine learning fairy dust on your content. Customers that come to Knewton without that understanding of the investment they will need to make are going to end up spending a lot more time and money than they anticipated. But the larger point is that framing specific capabilities that Knewton customers can think about in advance is a start toward positioning themselves as a real infrastructure platform company. Likewise, "adaptive tutoring," which appears to be a whizzy name for expertise recommendation, is a specific function that app designers can think about when they are building out new services, whether it is math tutoring or college counseling or career counseling. This positioning begins to enable app developers to think about what they can do with learning analytics services. Jose writes, "Until recently, only large learning companies and university systems could use the Knewton platform. But now our enterprise API is flexible enough for a much wider audience. We're happy to partner with anybody — even so-called 'competitors.' We can't quite say "yes" to everyone who wants to work with us yet, but our capacity is growing by leaps and bounds every day."
And there is the pivot. Up until now, Knewton has been focusing on the big publishers—particularly Pearson, with whom it has a big partnership deal. One reason for that certainly could be that their APIs were not ready for smaller players before now. But I suspect another driver is the huge growth in ed tech startups in general and companies claiming to have some sort of adaptive learning products in particular. Arguably, a market exists today where one didn’t exist a couple of years ago. I say “arguably” because it remains to be seen whether this onslaught of small companies is just the result of an investment bubble or a sustainable trend. Most of these companies are never destined for IPO, and it’s not clear what the long-term appetite is for acquisition in this sort of volume or, lacking that appetite, how many of these companies are geared up to be small but self-sustaining businesses for the rest of their natural lives. (The fact that so many of them are looking for venture money is not a good sign.) In any event, an analytics infrastructure like Knewton absolutely could make many of these small companies potentially interesting and sustainable on significantly less startup cash by providing them with infrastructure, in the same way that AWS makes it easier and cheaper for all sorts of internet startups to form. But in order to become that sort of trusted backbone, they have to stop talking like magicians and start talking like infrastructure partners.
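As a footnote on the demystification point: the core of Item Response Theory is not magic but a simple logistic model. Below is a minimal sketch of the standard two-parameter (2PL) form from the psychometrics literature; it is illustrative only and says nothing about Knewton's proprietary implementation.

```python
import math

def p_correct(theta, a, b):
    """Textbook 2PL Item Response Theory (not Knewton's code):
    probability that a student of ability `theta` answers an item
    with discrimination `a` and difficulty `b` correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A student one ability unit above an item's difficulty, on a
# reasonably discriminating item, is predicted to succeed ~82% of the time.
print(round(p_correct(theta=1.0, a=1.5, b=0.0), 2))  # 0.82
```

The catch is calibration: estimating a and b for every item requires large, well-designed response samples, which is exactly the content design investment that customers need to understand before they sign up.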
Blackboard Inc. has been a company in transition long before CEO Michael Chasen announced his upcoming departure.
In fact, the presence of Blackboard’s longtime chief has been its most visible constant. So the big question coming out of Monday’s news is this: What will the ed-tech behemoth look like, sans Chasen? [snip]
If anything, it’s surprising that Chasen stayed on so long after the private equity buyout. He and Providence had “mutually sat down and worked out the right time frame for there to be a transition,” Chasen said in an interview Monday afternoon. “I’ve been here 15 years,” he said. “While I love Blackboard, and I think there is huge opportunity in front of us, I’ve been doing this since I was 25 years old and looking for there to be a good time for me to phase out.”
Today we are starting to get more insight into the changes, based on Bill’s interview with Jay Bhatt, Blackboard’s new CEO, and coverage of management changes at the company.
What hasn’t changed are the external pressures. Competitors in the core learning management system (LMS) market like Instructure and Desire2Learn are eating into Blackboard’s once-dominant market share. Open source is challenging the traditional licensed software model, just as mobile and cloud-based services challenge native desktop software. Blackboard is an internationally recognizable brand, with a broad array of products and a powerful private equity player behind it. It is, however, still seen as the legacy player.
In his first in-depth interview since joining Blackboard, Bhatt laid out his vision for navigating those challenges. And with less than half a year in the role, he’s marked off two key areas of investment: the online program management market, and international.
Bhatt emphasized the need to grow top-line revenue.
Bhatt was equally emphatic about what his mission at Blackboard isn’t.
“Make no mistake, our goal is to be a top-line growth company,” he said. “Obviously, we want to be profitable, and we want to generate the returns that our investors want. But we need to grow the top line. Software companies that grow the top line effectively are really adding value to their industry, they’re not just monetizing their industry.”
According to the article, there have been some significant management changes at Blackboard as well. Tim Hill (president of global marketing), Siegfried Behrens (president of global sales, recently hired from Microsoft) and David Mills (VP of R&D, formerly of ANGEL and MoodleRooms, key visionary behind xpLor) have all left the company in the past two weeks. Kayvon Beykpour (general manager of Blackboard Mobile and co-founder of TerriblyClever) is also on a leave of absence. Matthew Small has been promoted to president of international, and Jim Kelly from McGraw-Hill has been hired as VP of business development.
Bill indicates that he will have more information and insight on Blackboard’s future direction coming out soon. Read the whole article here.
The other week I had the pleasure of attending the annual GSV Advisors Education Innovation Summit in Scottsdale. For those who aren’t aware, the main purpose of the event is to help ed tech startups and investors find each other. After last year’s summit, I wrote a post called “What Are Ed Tech Entrepreneurs Good For?“, the main lessons of which could equally apply to this year:
- The main stage speakers generally tilted to the right, ideologically (although there was a visible effort to achieve more balance this year). But by and large, that didn’t matter, since the important work of the conference was generally driven by what ed tech startups were available for funding.
- Many of the ed tech entrepreneurs have good intentions but varying degrees of knowledge about the theory and pragmatics of education.
- The degree to which ed tech entrepreneurs need that knowledge varies widely depending on the specifics of what problems they are trying to solve with their offerings.
- The success or failure of these startups is often heavily influenced by market conditions, which means that we should be thinking about things like improving institutional purchasing practices if we want more from our vendors.

The main difference this year was that the conference was roughly twice the size of last year’s.
I’ll have more to say about the dynamism and/or frothiness of the ed tech startup market in a future post. For now, I want to focus on that continuing disconnect between what happens on the main stage of the conference and what happens in the rest of it, because I think there is a lost opportunity.
The very first night of the conference, one of the keynote speakers trotted out that old Ronald Reagan chestnut:
The nine most terrifying words in the English language are: “I’m from the government and I’m here to help.”
The obvious implication of the quote, of course, is that the government is so out of touch with what people actually need that they consistently do serious damage, even when they are trying to help. The irony here is that most of the main stage conversation for the rest of the conference was money guys interviewing other money guys about what education needs. The prevailing attitude in the Valley seems to be, “Hey, we built the internet. How hard could education be?”1 To be fair, there were a few voices representing people with real experience in education on the stage. But only a few.
To be clear, I’m not suggesting that the conference needs to be turned into a therapy session where educators and investors all hold hands and sing “Kumbaya.” Rather, I’m talking about providing investors with the context they need (and generally don’t have as part of their professional backgrounds) to judge which investments are going to be impactful and successful. As one conference participant put it to me, “It’s odd that there is almost no discussion of what are the kinds of problems there are in education that money can help.”
Some of this should be about the customer perspective. For example, it was very helpful to have Antioch University Chancellor Felice Nudelman participate in the MOOC panel, in part because she could comment on how administration of at least one university views MOOCs and how they impact their range of strategic choices. The participants could have benefited from a lot more of that. A lot of these investors are most familiar with direct-to-consumer products and have a lot to learn about the complex realities of our school systems. Nor can they just wave the magic “disruption” wand and make those complexities go away. They need to understand them if they are going to invest in companies that actually have a chance of building value.
Some of the main stage discussion should also be from a theoretical perspective. For example, just about every second or third company pitch I heard mentioned something about “adaptive learning.” But I’m pretty sure that not more than 10% of the investors there had any idea what adaptive learning is or when or why it is useful. When I was corresponding with Bill Jerome about what turned out to be his first (and outstanding) e-Literate post, I suggested the following visualization as one possible way to approach the post:
Imagine you’re at the Ed Innovation conference. You’ve just heard a pitch from a startup. Basically, they wrap whatever content customers are using in formative and summative assessments and then use “the big data” to identify which content is most effective and then recommend it to students, modified by personalization algorithms based on student preferences that they’ve deduced by watching their behavior in the LMS. “We want to be the Netflix of learning content.” Now you’re sitting at a bar with a venture capitalist, a university provost, and a member of a state legislature. They all think the pitch from the startup was awesome.
Your job is to explain to these three why the pitch they heard was not as spectacular as they thought it was, and what a genuinely good big data pitch would have to look like.
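To make that bar-stool explanation concrete, here is a minimal sketch of the recommender the hypothetical pitch actually describes, with invented data; the comments carry the critique.

```python
# A minimal sketch of the naive "Netflix of learning content" recommender
# from the hypothetical pitch above; all data and names are invented.
from collections import defaultdict

# (student_id, content_id, passed_followup_quiz) event log
events = [
    ("s1", "video_a", True), ("s2", "video_a", True),
    ("s1", "video_b", False), ("s3", "video_b", True),
]

def effectiveness_scores(events):
    """Fraction of students who passed the follow-up quiz per item."""
    passed, seen = defaultdict(int), defaultdict(int)
    for _, content_id, ok in events:
        seen[content_id] += 1
        passed[content_id] += ok
    return {c: passed[c] / seen[c] for c in seen}

def recommend(events, k=1):
    # Recommend whatever content has the highest raw pass rate.
    scores = effectiveness_scores(events)
    return sorted(scores, key=scores.get, reverse=True)[:k]

# The weakness the bar-stool explanation should surface: a raw pass rate
# confounds content quality with who chose to view it (stronger students
# self-select), and it carries no model of prerequisites, mastery, or why
# an item worked. "Big data" without a learning model is just counting.
print(recommend(events))
```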
This is exactly the kind of thing that the investors at the conference need to hear—only before they hear the startup pitches, not afterward, so they can ask smarter questions during the pitch sessions. If you want to give that insight to the investors, then you don’t put Chris Whittle on stage. You put Bill Jerome up there.
To be clear, I think the conference serves a valuable function, and I hope that Deborah Quazzo and Michael Moe continue to run it well into the future. (Given the size of the event this year, I think that is a safe bet.) But as somebody who is interested in ed tech entrepreneurship and investing mainly insofar as they help actually improve education—and as somebody who has faith that companies capable of actually improving education will tend to be profitable investments—I would like to see the conference be as effective as possible at lining up real problems with real money funding real solutions.
1. It is hardly a new phenomenon in American history for the latest captains of industry to think this way. A hundred and fifty years ago, it was “Hey, we built the railroads. How hard could education be?”
We are in the midst of the first anniversaries of the creation of the big 3 MOOC providers (Udacity, Coursera, and edX).
- Sebastian Thrun announced the creation of Udacity on January 23, 2012 as described by Reuters.
- Daphne Koller and Andrew Ng announced the creation of Coursera on April 18, 2012 in this NY Times article.
- MIT and Harvard announced the creation of edX on May 2, 2012 in this MIT article.
And what a year it’s been, especially in terms of the engagement of national media, university presidents and boards on the topic of MOOCs, online education and the future of higher education in general. I doubt there is a higher education conference this year that doesn’t mention MOOCs in one form or another.
But one year is a seriously short time period for higher education and educational technology, which are just not used to changes on this time scale. To give some perspective, consider the following:
- There are quite a few ed tech products that have been or will have been in beta or introductory mode for at least a year when fully released.
- This time one year ago, few people outside of Virginia knew what a Rector was, or how a board of them might force a university president to resign partially based on discussions about MOOCs.
- Many LMS and almost all student information system / ERP selection processes (not even including implementation) take more than a year.
Update 4/25 and bumped due to changes: Thanks to Greg Ketcham and Robert Knipe, I have replaced the 2009 interim proposal document with the updated advisory team report. This changes the intro blurb, description of 9 inter-dependent components, and list of contributions below.
I have been surprised at how little interest the Open SUNY announcement last week generated in educational media and blog discussions. Perhaps the MOOC portion of the story, which was prominent in several headlines, caused people to assume this was just another school trying to jump on the bandwagon. What is significant, however, is that one of the largest statewide systems in the country is taking a multi-pronged approach to reduce time-to-graduation and therefore lower student costs.
In brief, Open SUNY is part of the system’s agenda to expand access to public higher education by leveraging existing programs or experiments already in place at member campuses or at the system level, and it has strong ties to Open Educational Resources (OER) concepts. The concept for the strategic plan originated in 2009, eventually leading to the report Getting Down to Business: Interim Report of the Chancellor’s Online Education Advisory Team released in December 2012 [updated].
The Advisory Team recommends “Open SUNY” be officially adopted as the name of SUNY’s new online learning initiative. The term Open SUNY represents an opening up of the educational opportunities that SUNY can provide through the enhancement of existing—and development of new—online education resources, courses and degree programs.
Open SUNY has the clear potential to establish SUNY as the preeminent and most extensive online learning environment in the nation by providing affordable, high quality, convenient, innovative, and flexible online education opportunities for the citizens of the State of New York and beyond. As a collaborative online educational network, the Open SUNY Online Consortium (SUNY campuses and SUNY system offices) will draw on the Power of SUNY to connect students with faculty and peers from across the state and throughout the world, and link them to the best in research-based online teaching and learning environments, practices, and resources. Dedicated to providing access to open and online learning opportunities, Open SUNY will connect learner and community needs and will allow the State University of New York to bring this concept to scale like no other college, university, or system in the United States.
What is Open SUNY?
Open SUNY is a set of 9 interdependent components, as described by the advisory team report [updated]:
1. Open SUNY Online Consortium - Comprised of courses from SUNY campuses across the system taught by SUNY faculty, the Open SUNY Online Consortium will collectively offer the most extensive array of online courses and degree programs in the country. This unified approach to online education will provide learners with cost effective options to compete with the rising costs of higher education and enable students taking courses across multiple SUNY institutions to receive financial aid from their home institution.
2. Open SUNY Degree - The term Open SUNY degree refers to functional coordination of policies and practices that “systemness” will allow for, not the actual degree conferrals that are the role of the campuses. The Office of the Provost will seek out campuses to offer new, high needs, online degree programs that will not necessarily require the host campus to develop or provide all the necessary courses to meet credit requirements to confer a degree.
3. Open SUNY Complete - Open SUNY will lead a SUNY-wide project to support degree completion for students who seek to return to college after a significant absence (commonly referred to as “stopped out”). The Open SUNY Complete program will identify and support former students who wish to return to SUNY to earn and complete a degree. This will occur through use of market analyses and outreach to students who are now considered beyond the normal reach of the originating enrolling college, using a variety of cooperative strategies between SUNY institutions.
4. Open SUNY Resources - Open SUNY Resources will build on existing digital repositories, making vast amounts of high quality, credible material available to faculty and learners, while simultaneously staking ground as a world leader in creating new resources by leveraging the vast expertise available across SUNY disciplines.
5. Open SUNY PLA (Prior Learning Assessment) - Increasingly, people acquire and assimilate knowledge both internal and external to the academy. Recognition of the latter, when applied toward college level learning, provides greater access to higher education, decreased time to degree completion, increased retention and completion rates, and significantly lower costs to students. Open SUNY PLA will provide services to campuses that do not wish to establish their own prior learning assessment processes.
6. Open SUNY Workforce - A SUNY-wide strategy for the use of online learning in support of workforce development and adult/continuing education can strengthen SUNY’s role as an economic driver throughout NYS and provide access to SUNY higher education specifically for potential employees, employees and employers statewide (and nationally, who will be attracted to all that SUNY and New York have to offer).
7. Open SUNY International - Open SUNY International will provide a network for learning by linking faculty and students from around the world, demonstrating SUNY’s commitment to international education. In partnership with the Office of Global Affairs, Open SUNY International will provide new opportunities for SUNY students to engage in international and intercultural learning.
8. Open SUNY Research - Open SUNY Research will continue a long tradition of scholarship related to innovation, student access, and learning in open and online environments. Previous support from the Office of the Provost has fostered an active and ongoing research and development agenda with more than 150 conference papers, book chapters, peer-reviewed journal publications, monographs, and presentations directly related to SUNY Learning Network and online education initiatives. Open SUNY Research expands this work and will be supported by a combination of SUNY-wide innovation grants, external funding, formal initiatives, advisory group efforts, and campus-based research activities.
9. Open SUNY Learning Commons - The Open SUNY Learning Commons will be a set of technology applications and online environments to support all Open SUNY services and components. Facilitating communication across campuses, the Learning Commons will bring the user-friendliness of social media applications to the SUNY community. It will leverage advanced open source and commercially available online learning tools, while building communities of practice for students and faculty.
Open SUNY funding comes from an $18.6m appropriation in the NY2020 legislation, and operations are estimated to eventually cost $3.35m per year.
The plan was announced during the SUNY Chancellor’s State of the University address on January 15, 2013. One of the goals of Open SUNY, according to the Chancellor, is to expand access to public higher education:
Launch of Open SUNY in 2014, including 10 online bachelor’s degree programs that meet high-need workforce demands, three of which will be piloted in the fall. Open SUNY will leverage online degree offerings at every SUNY campus, making them available to students system-wide using a common set of online tools, including a financial aid consortium so that credits and aid can be received by students across campuses. Chancellor Zimpher said Open SUNY enrollment will reach 100,000 students within three years, making it the largest online education presence of any public institution in the nation.
On March 19, 2013, the Board of Trustees endorsed the plan. One of the motivations for this move was to coordinate campus efforts and gain system-wide synergies, as described by Ry Rivard at Inside Higher Ed. One of the key targets for the online expansion will be non-traditional adult learners.
SUNY Chancellor Nancy Zimpher wants to consolidate online course offerings after nearly 20 years of institutional independence.
“I think the problems the country is trying to solve simply cannot be solved one institution at a time,” Zimpher said in a recent interview. [snip]
SUNY began its online efforts in 1994 at Empire State College. Now, there are 150 online degree programs scattered across all its campuses. SUNY’s extensive offerings are, as it has said in documents related to its new effort, “fragmented” – the source of “countless unexplored opportunities for collaboration, economies of scale and innovation.”
Zimpher ultimately wants to enroll 100,000 new online students in the next several years while also adding new degree programs to train New Yorkers for industries with job openings. To reduce costs to students, she is also trying to speed degree completion times in online degrees to three years.
The chancellor said the whole online effort will target adults.
“We have all these adults who have some education but not enough,” she said. “We’re really trying to grow a major enrollment in an underserved population.”
Ry Rivard’s article also highlights potential pushback from the faculty unions.
A spokesman for the union that represents SUNY academics and instructors said the union had not been consulted about the push.
“SUNY hasn’t brought us into the conversation, hasn’t consulted us,” said Don Feldstein, spokesman for United University Professions, which represents about 32,000 SUNY employees.
SUNY spokesman David Doyle said the system had consulted with faculty by appointing some of them to a task force and by talking to faculty through the “appropriate governance channels,” such as the faculty senate.
How Will We Know?
The part of innovation that I don’t see mentioned enough, at least in the proposal and press releases, is a structured method of determining what works and what doesn’t work. The proposal does mention the metrics that should improve if Open SUNY is successful, but these are all at the initiative level, and not at the individual innovation level [updated].
The impact of Open SUNY will be measured by its contributions to:
- Enhancing and supporting academic excellence of faculty and students;
- Reducing the time required for degree completion;
- Reducing the overall cost of obtaining a SUNY degree;
- Meeting workforce and societal needs;
- Increasing SUNY completion rates;
- Increasing the number of online learners;
- Enhancing the profile of SUNY as an innovative leader in teaching and learning;
- Continuing to reduce a collective carbon footprint; and
- Increasing student and faculty international engagement through online interaction.
Some of these are laudable goals (reducing time to degree and overall cost, increasing completion rates), but some are ill-defined (“enhancing academic excellence”) and some are questionable (an increased number of online learners as a goal rather than a means to a goal, and enhancing the profile of SUNY).
But a deeper problem is the lack of discussion about how to determine which innovations to diffuse and which to keep from diffusing. Perhaps there are plans for evaluating courses and programs, but there are no details available that I can find.
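For illustration only (nothing like this appears in the Open SUNY documents), innovation-level evaluation could be as simple as comparing completion rates between sections piloting an innovation and comparison sections, with a standard two-proportion z-test; all numbers below are invented.

```python
# Sketch of innovation-level (not initiative-level) evaluation: compare
# completion rates with and without a given innovation. Invented data.
from math import sqrt, erf

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p) for the difference between two rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p

# e.g., 410/500 completions in pilot sections vs. 370/500 in comparison
z, p = two_proportion_z(410, 500, 370, 500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Something this simple, applied per innovation rather than per initiative, is what would let a system decide which experiments deserve to spread.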
Focus on Spreading Innovations, not Creating Innovations
SUNY, of course, is not the first place to develop MOOCs, online courses, OER, open courseware or PLAs, so what is important about this announcement? I think the significance lies in SUNY’s scale and SUNY’s approach. SUNY appears to view the Open SUNY program as a method to spread educational innovations throughout one of the largest systems in the country rather than creating a new pilot program or experiment. SUNY has 468,000 students and plans to add 100,000 more. Rather than trying to create a new innovation, the role of the system is to foster innovation and then take the best ideas and make them available to all.
Although it’s not getting enough attention, Open SUNY will have an outsized impact on the future of online education in the US. State-wide initiatives, whether driven by the systems or the state government, are becoming one of the biggest factors in how higher education is changing in the US. I suspect that other states will be watching SUNY and adopting this model in part or in whole.
Pay attention to Open SUNY – it will matter.
Further reading in chronological order:
- SUNY Strategic Plan, “The Power of SUNY”, 2010
- Associated Press, “SUNY seeks to establish a ‘cradle to career’ future for its graduates”, April 13, 2010
- Empire State College, “Open SUNY Final Proposal” from 2012
- CNY Central, “SUNY Chancellor reveals ambitious agenda”, Jan 15, 2013
- USA Today, “State University of New York pushing online classes”, Jan 15, 2013
- Education News, “Open SUNY Will Mark New York’s Push into Online Education”, Jan 22, 2013
- Open SUNY Press Release, “SUNY Board Outlines Implementation of Open SUNY”, March 19, 2013
- Buffalo Business First, “Online courses to be available across SUNY system”, March 20, 2013
- Chronicle of Higher Education, “SUNY Signals Major Push Toward MOOCs and Other New Educational Models”, March 20, 2013
- Online Colleges, “State University of New York Embraces Online Learning with Open SUNY Initiative”, March 22, 2013
- e-Literate, “SUNY and the Expansion of Prior Learning Assessments”, March 26, 2013
- Inside Higher Education, “Economies of Online Scale”, March 27, 2013
Update 4/02: Fixed editing mistake to say “SUNY, of course, is not the first place to develop . . . “
Last week California SB520 – the bill aiming to create a pool of 50 high-demand, lower-division online courses for which the public systems would have to award credit – was amended based on ongoing discussions and negotiations. The fact that the bill has been amended is not surprising, as amendment is the intent of the legislative process.
The themes of the amendments are to:
- shift the approval of the pool of online courses from the California Open Education Resources Council (COERC) to the administration and faculty senates of the three systems (University of California, California State University, and California Community Colleges);
- tie the administration of the program to the California Virtual Campus;
- restrict each course to matriculated California public higher education and qualifying K-12 students;
- tie the provisions of the bill to funding in the Annual Budget Act; and
- remove any tie to American Council on Education recommendations.
Amended Bill Language
Below are some of the key changes to the bill, with markups shown as ~~strikethrough~~ for deleted text and **bold** for added text.
This bill would establish the California Online Student Access Platform under the administration of ~~the California Open Education Resources Council~~ **President of the University of California, the Chancellor of the California State University, and the Chancellor of the California Community Colleges, jointly, with the academic senates of the respective segments**. The bill would require the platform, among other things, to provide an efficient statewide mechanism for online course providers to offer transferable courses for credit and to create a pool of these online courses. The bill would require ~~the council, among other things,~~ **President of the University of California, the Chancellor of the California State University, and the Chancellor of the California Community Colleges, jointly, with the academic senates of the respective segments,** to develop a list of the 50 most impacted lower division courses, as defined, at the University of California, the California State University, and the California Community Colleges that are deemed necessary for program completion or fulfilling transfer requirements, or deemed satisfactory for meeting general education requirements **in areas defined as high-demand transferable lower division courses under the Intersegmental General Education Transfer Curriculum** and, for each of those 50 courses, to promote the availability of multiple high-quality online course options, as specified.
The bill would establish the California Student Access Pool, through which students could access online courses, and would require the online courses approved by ~~the council~~ **President of the University of California, the Chancellor of the California State University, and the Chancellor of the California Community Colleges, jointly, with the academic senates of the respective segments,** under the bill to be placed in ~~this pool~~ **the California Virtual Campus**. The bill would require that **matriculated** students ~~taking~~ **of campuses of the University of California, California State University, or California Community Colleges, and California high school pupils, who complete** online courses ~~available in the pool and achieving~~ **developed through the platform and achieve** a passing score on corresponding course examinations, be awarded full academic credit for ~~the comparable~~ **an equivalent** course at the University of California, the California State University, or the California Community ~~Colleges. Because~~ **Colleges, as applicable. The bill would provide that funding for the implementation of this provision would be provided in the annual Budget Act, and express the intent of the Legislature that the receipt of funding by the University of California for the implementation of this provision be contingent on its compliance with its requirements.** Because this provision would require community colleges to award academic credit under these circumstances, it would constitute a state-mandated local program.
Section 1 is the findings and declarations portion of the bill, and changes include a focus on faculty partnership.
(e) California could significantly benefit from a statutorily enacted, quality-first, faculty-led framework **that increases partnerships between faculty and online course technology providers** aimed at allowing students ~~in online courses~~ in strategically selected lower division majors and general education ~~fields to be awarded~~ **areas to take online courses for credit** at the UC, CSU, and CCC systems. While providing easy access to these courses, these systems could also continually assess the value of the courses and the rates of student success in utilizing these alternative online pathways.
Section 2 is the major addition to California legislation if enacted, adding section 66409.3 to the Education Code. The phrase “in partnership with faculty members of the University of California, the California State University, and the California Community Colleges” has been added in several places. Some key section changes:
(c) For purposes of accomplishing all of the objectives of the platform as specified in subdivision (b), ~~the California Open Education Resources Council~~ **President of the University of California, the Chancellor of the California State University, and the Chancellor of the California Community Colleges, jointly, with the academic senates of the respective segments,** shall do all of the following:
(1) (A) Develop a list of the 50 most impacted lower division courses at the University of California, the California State University, and the California Community Colleges that are deemed necessary for program completion or fulfilling transfer requirements, or deemed satisfactory for meeting general education ~~requirements.~~ **requirements, in areas defined as high-demand transferable lower division courses under the Intersegmental General Education Transfer Curriculum.**
(B) For purposes of this paragraph, “impacted lower division course” means a course in which, during most academic terms, the number of students seeking to enroll in the course exceeds the number of spaces available in the course.
(2) (A) For each of the 50 courses identified under paragraph (1), solicit and promote appropriate partnerships between online course technology providers and faculty of the University of California, California State University, and California Community Colleges which, by the fall term of the 2014–15 academic year, shall result in the availability of multiple high-quality online course options in which students may enroll in that term.
(B) An online course developed pursuant to this paragraph shall be deemed to meet the lower division transfer and degree requirements for the University of California, the California State University, and the California Community Colleges.
The amendments stipulate that faculty must be associated with each course, and enrollment is limited to matriculated California students.
(3) Create and administer a standardized review and approval process for online courses ~~in which most or all course instruction is delivered online and is open to any interested person. When reviewing~~ **for matriculated students of the University of California, California State University, and California Community Colleges, or for California high school pupils. No course shall be approved for purposes of this section unless the course has associated with it a faculty sponsor who is a member of the faculty of the University of California, the California State University, or the California Community Colleges.**
In a significant change, all references to recommendations coming from the American Council on Education have been removed.
~~(G) Includes content that has been reviewed and recommended by the American Council on Education.~~
Courses will be listed in the California Virtual Campus, and funding will be provided through the annual Budget Act.
(d) Online courses approved ~~by the California Open Education Resources Council~~ **through the platform** pursuant to this section shall be placed in the California ~~Student Access Course Pool, which is hereby created~~ **Virtual Campus**, through which students may access the courses. ~~Students taking~~ **A matriculated student of a campus of the University of California, California State University, or California Community Colleges, or a California high school pupil, who completes** an online course ~~available in the California Student Access Course Pool and achieving~~ **developed through the platform and achieves** a passing score on the corresponding course examination shall be awarded full academic credit for ~~the comparable~~ **an equivalent** course at the University of California, the California State University, or the California Community Colleges**, as applicable**.
(e) Funding for the implementation of this section shall be provided in the annual Budget Act. It is the intent of the Legislature that, notwithstanding Section 67400, the receipt of funding by the University of California for the implementation of this section be contingent on its compliance with the requirements of this section.
Response to Faculty Senate Pushback
Many of these changes appear to be in response to faculty senate pushback. The CSU faculty senate “voted unanimously to take a formal position of oppose unless amended with regard to SB 520”. The biggest concern from faculty centered on the involvement of the California Open Education Resources Council (COERC), which they felt removed faculty authority over curricula and bypassed existing quality measures in the three systems.
The part of the faculty senate pushback that goes to the intent of the bill, and therefore was not included in any amendments, is their concern over the applicability of online education to lower-division courses.
More specifically, the ASCSU has serious concerns about increasing access to California’s higher education system for lower division students through the use of online courses of study. CSU is a leader in online course delivery for upper division and graduate students. However, research has shown that online courses are not as effective for lower division students, underprepared students, or lower income students. Targeting lower division courses for online delivery puts these very students at greater risk for failure rather than facilitating their access to academic success.
These changes are very recent, so it remains to be seen what effect they will have on faculty support or resistance.
For any online program in the US that enrolls students from more than one state, the Department of Education’s proposed State Authorization regulations are a major issue. WCET has played a leading role in raising awareness of the issue as well as pushing for a solution. From their summary page (read the whole page for a summary of the timeline, pushback, state regulations, etc.):
On October 29, 2010, the U.S. Department of Education (USDOE) released new “program integrity” regulations. One of the regulations focused on the need for institutions offering distance or correspondence education to acquire authorization from any state in which it “operates.” This authorization is required to maintain eligibility for students of that state to receive federal financial aid. Institutions have until July 1, 2014, to have obtained the appropriate approvals. Meanwhile, institutions are required to demonstrate a ‘good faith’ effort to comply in each state in which it serves students. While the regulation has been ‘vacated’ by court order, we believe it will be reinstated.
To give an idea of the issues, consider that Missouri charges institutions fees of $5,000 – $25,000 to register in the state, along with a burdensome process. While not all states are as expensive as Missouri, the costs and overhead add up quickly, and there are conflicting and inconsistent requirements from state to state. According to a survey from UPCEA, WCET and Sloan-C, one third of online programs have not applied to any states outside their home state, despite serving students in a median of 37 states. Furthermore, the State Authorization rules would stifle online education programs and are already causing many programs to reject students in certain states.
Despite losing in court (the ruling was vacated), the Department of Education still plans to push forward and revive State Authorization.
The most promising approach to dealing with this situation is the State Authorization Reciprocity Agreement (SARA).
The backbone of the Commission’s recommendations is a system of interstate reciprocity based on the voluntary participation of states and institutions to govern the regulation of distance education programs. Participating states will agree on a uniform set of standards for state authorization that ensure that institutions can easily operate distance education programs in multiple states as long as they meet certain criteria relating to institutional quality, consumer protection, and institutional financial responsibility (further described below). Participating institutions must be authorized by their “home state” (which is, presumptively, the institution’s state of legal domicile). Once designated, the home state should have responsibility for authorizing the institution for purposes of interstate reciprocity and be the default forum for consumer complaints.
WCET has a summary post up by Russ Poulin that describes the latest report and commission meeting on SARA.
A national meeting on next steps in state reciprocity was held in Indianapolis on April 16 and 17. The purpose of the event was to serve as an initial introduction to representatives from each state about next steps in reciprocity.
The session focused on the report: Advancing Access through Regulatory Reform: Findings, Principles, and Recommendations for the State Authorization Reciprocity Agreement (SARA) that was recently released by the Commission on the Regulation of Postsecondary Distance Education. The Commission, which is a committee formed by APLU (the land-grant universities) and the State Higher Education Executive Officers, built upon the work of previous efforts of the Presidents’ Forum/Council of State Governments and the regional higher education compacts. You can see a short history of state authorization and the reciprocity efforts on our web page.
Russ goes on to describe support from ACE and even Hal Plotkin from the Department of Education:
While the Department of Education cannot formally endorse the work, he brought a two-word message from the Secretary Arne Duncan and Under Secretary Martha Kanter: “thank you.”
There is also a summary of the key questions being considered, including accreditation effects, fees for institutions participating in SARA, determination of the Home State, and the impact of the 25% rule (more on that one in a future post).
In short – this is an important issue to track, and WCET has some excellent resources to help online programs stay up-to-date.
Instructure took another step this past week to establish Canvas as a true learning platform, moving beyond the traditional bounds of an LMS. The company announced the upcoming release of the Canvas App Center, scheduled for availability at the same time as their annual users conference in June, which will allow end-user (read: faculty and students) integration of third-party apps.
I wrote about the trend of the market moving towards learning platforms last year.
In my opinion, when we look back on market changes, 2011 will stand out as the year when the LMS market passed the point of no return and changed forever. What we are now seeing are some real signs of what the future market will look like, and the actual definition of the market is changing. We are going from an enterprise LMS market to a learning platform market.
What I mean by ‘enterprise LMS’ is the legacy model of the LMS as a smaller, academically-facing version of the ERP. This model was based on monolithic, full-featured software systems that could be hosted on-site or by a managed hosting provider. A ‘learning platform’, by contrast, does not contain all the features in itself and is based on cloud computing – multi-tenant, software as a service (SaaS). [emphasis added]
The key idea is that the platform is built to easily add and support multiple applications. The apps themselves will come from edu-apps.org, a website that launched this past week. There are already more than 100 apps available, all built on top of the Learning Tools Interoperability (LTI) specification from the IMS Global Learning Consortium. There are educational apps available (e.g. Khan Academy, CourseSmart, Piazza, the big publishers, Merlot) as well as general-purpose tools (e.g. YouTube, Dropbox, WordPress, Wikipedia).
The apps themselves are wrappers that pre-integrate and give structured access to each of these tools. Since LTI is the most far-reaching ed tech specification, most of the apps should work on other LMSs. The concept is that other LMS vendors will also sign on to the edu-apps site, truly making the apps interoperable. Whether that happens in reality remains to be seen.
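For readers curious what “built on LTI” means at the wire level, here is a rough sketch of an LTI 1.x basic launch, which is just an OAuth 1.0a-signed form POST from the platform to the tool. This is not Instructure’s code; the tool URL, key, secret, and IDs are placeholders, and the signing uses the oauthlib library.

```python
# Sketch of an LTI 1.x "basic launch": the platform signs a form POST
# with OAuth 1.0a and the tool verifies it with the shared secret.
# Tool URL, consumer key, secret, and IDs below are placeholders.
from urllib.parse import urlencode
from oauthlib.oauth1 import Client, SIGNATURE_TYPE_BODY

params = {
    # Required LTI 1.x launch parameters
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "course-101-module-3",
    # Context about the launching user
    "user_id": "student-42",
    "roles": "Learner",
}

client = Client("consumer_key", client_secret="shared_secret",
                signature_type=SIGNATURE_TYPE_BODY)

# sign() folds the oauth_* parameters (nonce, timestamp, signature) into
# the form body, which is what the tool provider validates on its end.
uri, headers, body = client.sign(
    "https://tool.example.com/lti/launch",
    http_method="POST",
    body=urlencode(params),
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(body)  # the signed payload the browser would auto-POST to the tool
```

Because the handshake is this generic, any LTI-aware platform can launch any LTI-aware tool, which is exactly what makes a cross-vendor app directory plausible.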
What the App Center will bring once it is released is the simple ability for Canvas end users to add the apps themselves. If a faculty member adds an app, it will be available for their courses, independent of whether any other faculty use that setup. The same applies for students who might, for example, prefer to use Dropbox to organize and share files rather than the native LMS capabilities.
Not a New Idea, Just Taking Concept to Application
The idea of being able to easily integrate multiple applications into a learning environment is not new. SUNY Learning Network (SLN) was working on the Learning Management Operating System (LMOS) concept back in the mid 2000s (Michael was one of the key drivers behind that initiative), but the LMOS implementation did not pan out. Patrick Masson, another key player in the initiative, went on to UMassOnline after SLN and has been instrumental in the creation of the Needs Identification Framework for Technology Innovation (NIFTI) to enable local adoption of learning tools. The general desire to support easy integration of apps also led to the LTI specification.
What has not been available, however, is the empowerment of end users to make these decisions without going through the IT department or LMS system administrators.
IMS Global is also talking about the need for an educational app store, as described in Rob Abel’s blog last week.
For those of us that have been attending Learning Impact the last several years (and, yes, don’t forget to sign up right now for this year’s because space is getting short!), we already know what the future of the “LMS” is (and that the term LMS is a bad name for what it has been or what it will be). We also know what the general roadmap for digital learning resources is and how this evolution is intertwined with the evolution of the LMS. That’s because the LMS is evolving into a disaggregation of features and resources that come together easily and seamlessly for the needs of teachers and students.
The post also announced IMS’s plans to support development of an app store to be available in a few years.
Can universities and school districts control their own online “store” of educational content and applications for easy access and use by students and faculty? Yes they can – and they will in only a few short years. Will such an “app store” be based on Apple, Google or Amazon? No it will not.
The “take it or leave it” proprietary vertical integration strategies of consumer-oriented providers of digital books and applications, that maximizes their ability to create revenues from sales of such resources, have left educational institutions with a conundrum. Do we dare dictate to our students and teachers a “preferred platform?” Of course, the answer to that question needs to be “no.”
What is not apparent, however, is whether the Canvas App Center will be seen as friend or foe of the IMS effort. The Canvas effort will be ready years before the proposed IMS effort, it is offered for free, the apps are built on LTI, and the API for the apps is itself open source. But . . . it will be run by a vendor.
Update: Clarification provided by Rob Abel here in the comments. Short answer – IMS does not see Canvas App Center as a threat but as a very positive development; there is concern over language of “LTI compliant” apps that are not cross-platform compatible.
Who’s In Control?
The closest vendor-based effort to the Canvas App Center is probably xpLor from Blackboard, which Michael described in this post. This cloud-based platform is not technically an app store model, but it does enable standards-based content and applications to be shared with the core LMS from a cloud-based platform. xpLor appears to be focused more on packages of content, grouped learning material and communities of interest. Despite some of the similarities, xpLor focuses more on institutional decision-making and system administrator control, whereas the Canvas App Center focuses more on easy access to consumer-based tools for faculty, students or system administrators.
From the press release:
“We want to tear down the walled garden that has plagued the LMS market,” Instructure co-founder and CPO Brian Whitmer said. “Third party integrations have existed, but they’ve required the IT department to make them work. With Canvas App Center, we want to let anyone install an app with one click and begin personalizing their learning experience with these tools.”
Tired of Waiting
While the core concept is not new and, as seen in the IMS plans, is not unique, the significance of the Canvas App Center and the corresponding edu-apps site is in making the idea much more of a reality. Brian Whitmer created a slideshare with audio that gives more detail on the announcement, including a description of Instructure’s frustration that educational technology is still not an ecosystem. I recommend the slideshare to people wanting a more UI-based explanation of the concept.
This attitude exhibited by Instructure – a focus on consumer-based tools and a desire to implement basic concepts quickly – matches their pedigree as a venture-capital-backed company with a startup mentality.
I believe that the App Center will significantly push forward the adoption and importance of LTI, but it is not clear whether the benefits will only affect Canvas customers or actually push the LMS field further into a learning platform market. As with all pre-announcements, a great deal of the impact will depend on the actual implementation of the new software.
One other factor to watch will be whether Canvas institutions can (or should) adjust to the paradigm shift of enabling faculty and student adoption of pre-integrated tools. Concerns over data security, standardization and loss of control could cause some schools to take a cautious stance towards the app center.
And now for this week’s version of “do you notice which publications are not covering this story”:
- PR Newswire (official press release), “Instructure Announces Canvas App Center”
- TechCrunch, “Instructure Launches App Center To Let Teachers, Students Install Third-Party Apps Across Learning Platforms”
- CampusTechnology, “Canvas App Center Brings 1-Click Access to LMS Add-ons”
- InformationWeek, “Canvas LMS Maker Launches Open Education Apps Directory”
- PandoDaily, “Instructure launches open Canvas App Store to turn education into an ecosystem”
Editor’s Note: I am pleased to announce that Bill has agreed to continue contributing blog posts from time to time. Therefore, he is now officially a “Featured Blogger” rather than a “Guest Blogger.”
Last week, I had the privilege of speaking at a workshop on online graduate education. At that workshop, Carnegie Mellon University Provost and Executive Vice President Dr. Mark Kamlet used the words “Learning Engineering” in his keynote, which I built upon in my talk. In my previous post I referenced the need for semantic data and algorithms to support learning engineers in creating and iteratively improving courses and courseware (among other things). I felt it was worth taking a little time to describe just what I believe that means.
For over 10 years, the Open Learning Initiative has been bringing together teams to develop online course materials. Carnegie Mellon is an ideal place to cultivate this work due to its multi-disciplinary programs and culture, in addition to its expertise in the related fields. During that time we’ve built a team of experts who are critical to building learning environments that are informed by research, capable of recording data for iterative improvement, and able to create dynamic reports for stakeholders.

Discovering Learning Engineering
At OLI, we have followed a path that was outlined by CMU professor and Nobel Laureate Herb Simon:
“Improvement in post secondary education will require converting teaching from a solo sport to a community based research activity.”
If you’ve seen someone from OLI speak more than once, you’ve seen this quote and might be tempted to gloss right over it. But it’s worth considering closely, particularly in this context. We have found that the best way to build effective learning environments is to regularly convene faculty, software engineers, usability specialists, learning scientists, and others.
What does it take, then, to be someone who can sit at the center of this kind of diverse group and produce an online learning environment with successful outcomes? We’ve admittedly struggled with this question as we’ve grown as a project. It turns out that part of what we were missing was that we were trying to shoehorn people with existing skill sets into a role that is really its own discipline: what we’ve come to call the learning engineer.

Engineering Learning? You Bet.
Starting with the source of all knowledge, I look to how Wikipedia defines engineering:
Engineering is the application of scientific, economic, social, and practical knowledge, in order to design, build, and maintain structures, machines, devices, systems, materials and processes. It may encompass using insights to conceive, model and scale an appropriate solution to a problem or objective. The discipline of engineering is extremely broad, and encompasses a range of more specialized fields of engineering, each with a more specific emphasis on particular areas of technology and types of application.
I can’t think of a better way to describe what it is we ask our learning engineers to do. But I work with them every day. So let me draw a rudimentary comparison: Imagine a more “traditional” engineer hired to design a bridge. They don’t revisit first principles to design a new bridge. They don’t investigate gravity, nor do they ignore the lessons learned from previous bridge-building efforts (both the successes and the failures). They know about many designs and how they apply to the current bridge they’ve been asked to design. They are drawing upon understandings of many disciplines in order to design the new bridge and, if needed, can identify where the current knowledge doesn’t account for the problem at hand and know what particular deeper expertise is needed. They can then inquire about this new problem and incorporate a solution.
In this way, a learning engineer applies learning science, pedagogy, and what is known about other relevant disciplines (user experience, for example) to the problems of developing learning environments. When designing for platforms that collect semantic data, they understand the requirements of the materials they are creating and can ensure that the data collection will provide actionable results. This does not mean a learning engineer has to understand the intricacies of the algorithms that operate on the data, but they need to have a sufficient understanding of the needs of that data collection.
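As an invented example of what that semantic requirement means in practice (this is my own sketch, not OLI’s actual schema): if every logged attempt carries the learning objective it assesses, the resulting data rolls up into a report a learning engineer can act on, rather than a pile of page views.

```python
# Illustrative sketch of "semantic" data collection: each logged attempt
# carries the learning objective it assesses, so results can be rolled up
# per objective instead of per click. Names and data are invented.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Attempt:
    student: str
    item: str
    objective: str   # the skill this item is designed to assess
    correct: bool

log = [
    Attempt("s1", "q1", "solve-linear-equation", True),
    Attempt("s1", "q2", "solve-linear-equation", False),
    Attempt("s2", "q3", "graph-a-line", True),
]

def objective_report(log):
    """Success rate per learning objective: the actionable view a
    learning engineer needs to decide what to redesign."""
    right, total = defaultdict(int), defaultdict(int)
    for a in log:
        total[a.objective] += 1
        right[a.objective] += a.correct
    return {obj: right[obj] / total[obj] for obj in total}

print(objective_report(log))
```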
In one way, this type of engineering is more rapid and responsive than “traditional” engineering. We can learn from the delivery of the “built bridge” just which parts are effective and which parts need improvement. (Discerning this requires semantic data.) In the comparison I’ve made, one doesn’t usually go back and make a bridge better unless something terribly wrong comes to light. Here we can monitor and continually improve our previous work as well as apply those lessons forward to new developments.
That addresses lessons learned “in the field” (practice informing sciences). In the other direction (sciences informing practice), the comparison is harder to make. If some critical flaw is discovered one might go back and “patch” a bridge. For a learning engineer, revisiting work is not a rare occurrence but an expected iterative improvement process. Thus, a learning engineer must be aware of ongoing research in related fields and stay current with our understanding of how to teach effectively. We’ve only begun to understand teaching and learning in scientific ways and cannot rest on what we know so far. Learning engineering then, as a field, is really about developing processes and methodologies to support this work.
One good point made to me by a workshop attendee after my talk: if a bridge falls down, you know about it. In the world of online education, where rich evaluation is rare, we don’t even know if our bridges are falling down.

Something We’ve Needed All Along?
Although the work to advance online education has been the spark that made obvious the need for collaborative efforts and for individuals who can work in those highly interdisciplinary teams, I refer back to the quote at the opening. Simon wasn’t saying that online education required the conversion of how we teach. It just so happens that the need is now obvious. If we’re truly honest with ourselves, not all experts make the best teachers. This is not to say that top-tier institutions with high-caliber faculty aren’t offering a great opportunity to students by providing access to leading researchers. (“Minds rubbing against minds,” as it were.) But those leading researchers are not guaranteed to be the best teachers, especially when they are often handed a course to teach as a secondary requirement of their role, one they may not be interested in.
Some shared experiences of undergraduates everywhere:
- I thought I understood the lecture, but I don’t know where to start on this homework!
- That midterm came out of nowhere – I didn’t understand it.
- I read the chapter as told but then the lecture made no sense to me.
These are the results of poor alignment among objectives, practice, and assessment, an alignment already known to be important. This is the kind of insight and experience that the most brilliant minds can benefit from when it comes to teaching the novice. (See also the expert blind spot.)
A learning engineer works with content experts, guides their work, and brings in other points of view as needed in order to best develop learning experiences – it just so happens that we now need them even more for the online experience.

How to Find a Learning Engineer
The reality is that right now individuals with such skill sets are hard to find “in the wild,” and it will be some time before that changes dramatically. What is required is to find talented people who are interested in the work and already have some of the skills needed. It could be someone with a strong learning science background who is interested in seeing immediate practical application of their work, or someone with a strong instructional design background interested in learning how to apply learning science and data analytics to what they do. That model does provide a way to find candidates, and it acknowledges that some effort has to be made to develop the skill sets of a learning engineer upon hiring.
I do not believe this is a case of looking for what the software world would call a unicorn. It really is vital to all of us in education to develop a workforce of people who understand how the creation of learning materials happens, as well as how to apply ongoing developments in our understanding of how to effectively develop and test those materials.

Aren’t Learning Engineering and Instructional Design the Same?
This reminds me of when I started my career as a programmer. When I started programming, I was a software developer and not a software engineer. I knew how to write code, but I wasn’t ready to architect it or account for other disciplines in my work. A similar comparison applies here. The role of a learning engineer is not a support role, but a full contributor and participant in the process of developing an online learning environment. I asked one of our learning engineers how she viewed her role, to which she said “We want to learn about learning – what makes rich, deep, meaningful and lasting impact.” She builds environments that report data so her work can be evaluated, not to ask if she did a good job, but to learn how we might improve upon what we know to better the environment.
A learning engineer is part of the process that improves or expands the technologies they work with. An instructional designer is often handed a suite of available technologies and content and told to make something of it. A learning engineer works both pedagogically and technologically to improve, create, and make a whole experience, and then evaluates its effectiveness with data.

An Essential Field
Learning engineering is part of what drives the success of OLI, and it is going to drive the development of well-informed online environments wherever such work is done in the future. We believe this is an important area to define and then expand.
With that in mind, I leave you with a work-in-progress statement attempting to capture the key aspects of this field. (I already know it’s not easy to read, especially out loud in a talk without stopping to catch your breath!) But I’m interested in hearing what others think of the content of this sentence. It doesn’t get into some of the practical implications I outline above, but hopefully it captures the essence of the idea.
Learning Engineering: The development, evaluation and improvement of the processes, methodologies, and educational technologies that lead to predictable, repeatable development and improvement of learning environments which leverage learning science and the affordances of technology to address instructional challenges and create conditions that enable robust learning and effective instruction.
As Phil mentioned in his last post, he and I had the privilege of participating in a two-day ELI webinar on MOOCs. A majority of the speakers had been involved in implementing MOOCs at their institutions in one way or another. And an interesting thing happened. Over the course of the two days, almost none of the presenters—with the exception of the ACE representative, who has a vested interest—expressed the belief that MOOCs provide equivalent learning experiences to traditional college courses. Keep in mind, these folks were believers. They were enthusiastic about MOOCs in general. But they tended to describe the value of MOOCs as reaching a different audience than the traditional matriculated college student and providing a different kind of value. They talked about MOOCs extending the university mission. By and large, they did not talk about MOOCs as being an improvement on, or even equal to, a traditional class. Now, there were well over 400 participants, so it wouldn’t be fair of me to say that there was unanimity about this point or any other. But the level of agreement was remarkable.
On the other hand, there was widespread enthusiasm for using MOOCs as essentially substitutions for textbooks in classes that included instructors from the local campus. Vanderbilt created what they called a course “wrapper” around a Coursera MOOC on machine learning. Folks from Stanford talked about the notion of a “distributed flip,” i.e., a group of flipped classrooms participating together in a MOOC. And SJSU talked about using an edX course in a blended course environment on one hand, and a Udacity course with Udacity-provided “course mentors” on the other.
The obvious conclusion is that MOOCs are more of a threat to textbook companies than they are to universities. I think that's true, but I also think it's an oversimplification. There is a deeper (and older) trend to boil down a course into a set of digital artifacts that can be "played" by the student at will. It's worth taking a deeper look at that trend, where it's going, what's useful about it, and what's pernicious about it.

The Course as an Artifact: A Brief History
Course artifacts, in and of themselves, are hardly new. In fact, the textbook as a collection of catechisms (or questions and answers designed to facilitate memorization) goes back to at least the 4th Century A.D. Basically, the catechisms were the course. We tend to think of these being used in what we would call primary school today, but in fact, this sort of text-as-course was used at all levels of education. For example, check out the Catechism of the Steam Engine, published in the 1850s.
In the modern higher education context, there is a strong sense among many teaching faculty of themselves as craftspeople. In this view, they teach their courses their own way and use their unique strengths and knowledge to benefit their students. The degree to which this rhetoric matches reality varies wildly depending on the individual instructor, the level and subject of the course, and the school at which the course is being taught. There is a tendency among instructors of lower-division courses to follow the textbook pretty closely, including the homework and quizzes, and to decorate that pre-packaged curriculum with lectures—particularly in courses that are easily machine-graded and tend to have very large enrollments.
This is not to say that the instructors and TAs in these classes add zero value over the textbook content. One of the most important but least valorized functions that an instructor serves in the class is providing support to students when they are stuck—answering questions, modeling good problem-solving skills, providing mentoring about study skills, and so on. Likewise, the curation that these faculty do in terms of picking the books, selecting the problem sets within the book, and so on, provides real value. (And this is a spectrum, rather than a binary distinction between faculty who just follow the book completely and faculty who make up their own curricula completely.) But the point is that much of what we refer to as the "course" is often packaged up in a set of artifacts that come from the textbook publishers and are augmented by pre-packaged performances of lectures by the professors. The degree to which this sort of thing happens is just hidden from view because it happens behind the closed doors of individual classrooms.
When the LMS first came onto the scene in the late 1990s, the one artifact that every professor would put online if they were putting up just one would be the syllabus. Then they might add lecture notes, and then possibly some readings. None of that really changed anything, since it was still happening behind the virtual closed doors of the LMS course logins. But in 2002, when MIT announced their OpenCourseWare initiative, the conversation began to change. Even though the process of adopting OpenCourseWare wasn’t essentially different from one of adopting a textbook publisher’s book and ancillaries, MIT’s brand imprimatur carried with it a sense of superiority in some quarters. Why would you, a professor at Unremarkable College, think that your course design is better than the famous MIT professor’s? On the one hand, it felt to me at the time like there was a strong undercurrent of elitism in these conversations. On the other hand, it raised the useful question of when the instructor is crafting the course curriculum to meet the particular needs of the students in the room versus when she is crafting it in order to satisfy her own creative needs as a craftsperson. But even here, OCW ultimately didn’t do much to disrupt the Order of Things. At most, OCW courses are recipes that can be adopted either in whole or in part by the instructors, and how they are adopted is still mostly kept behind closed doors.
Meanwhile, the textbook publishers were combining their textbooks—now online—with their ancillary materials and their homework platforms into a kind of higher-end courseware that goes a few steps beyond what you can get from a typical OCW package. Whether we are talking about Cengage MindTap, Pearson CourseConnect, or WileyPLUS, these product packages basically provide the curriculum, the course materials, the assessments and, in some cases, the analytics to track student progress and make study suggestions. Yet still, these are adopted mostly behind the closed door of the classroom.

Enter the MOOC
In some ways, the xMOOC in its current form is this trend to turn the course into an artifact taken to its logical conclusion (possibly ad absurdum). Course lectures are now artifacts in the form of videos. Assignment and assessment functions are packaged into machine-graded tools. Certification of knowledge is provided by the machines as well. Yes, there are still class discussions, and yes, the course instructors do participate sometimes, but they appear to be rather secondary in most of the xMOOC course designs I have looked at. In general, xMOOCs tend to explore the degree to which the pedagogical function can be fulfilled by artifacts.1
One critical difference is that, by raising the question of whether this package is worthy of being offered for credit, the MOOC is also forcing us to begin to articulate the value instructors add—both what they can add in principle and what they are adding in practice today in large survey courses under the conditions in which they are often taught. This is a big and complex question, far too big to address fully in one post. But I think the conversations that happen in places like the ELI webinar, about what MOOC instructors think is missing from MOOCs that keeps them from being credit-worthy, are an interesting first approximation of an answer. The sentiment articulated by some of the ELI webinar participants, which was echoed by a presentation at this week's MOOC colloquium at RPI, is that xMOOCs don't tend to be able to get at deep skill acquisition because students have limited opportunities to either see those skills modeled for them or to practice them. As Jim Hendler put it during the RPI colloquium, "I don't hear a lot of talk about using MOOCs to give students PhDs." To be clear, I don't believe that it is impossible to provide that kind of deep skill learning in an online context; nor do I think that today's giant lower-division survey courses do a whole lot to teach deep skills, by and large. But I do think that the gut reactions that folks in the MOOC conversations seem to be having are revealing in terms of the limits of what we think we can achieve at the moment with the course as a product—whether that product is instantiated through a MOOC or through an instructor "teaching" a traditional survey class and going through the motions as described to him by his textbook vendor. To the degree that a graduate seminar as a MOOC seems like a strange idea to you, ask yourself what would be missing and whether that missing element also belongs in our undergraduate survey courses.

The "Distributed Flip" and Other Amazing Feats
Equally revealing, in my view, is the significantly higher level of enthusiasm among MOOC veterans for using MOOCs as course materials for blended learning. But not just any blended learning. Two themes have been coming up repeatedly: flipping the classroom and collaboration between professors teaching the same class. You can get a clear sense of what’s going on from this guest column on The Chronicle’s “Professor Hacker” blog by Douglas Fisher of Vanderbilt University, who used a MOOC as the basis for his flipped class:
I now view MOOCs, and the assessment and discussion infrastructure that comes with them, as invaluable resources that I embrace and to which I add value. I, and I am guessing many others, are short steps away from full-blown customizations of individual courses and even entire curricula, drawing upon resources from around the world and contributing back to those resources.
The implications of MOOCs for community between faculty and students, as well as the relationships within and between local and global learning communities, interest and excite me. In fact, it is a nuance on the theme of community that I think is most responsible for my excitement as I embrace online educational content. For the first time in 25 years of teaching, I feel as though I am in a scholarly-like community with my fellow educators. I have long regarded scholarship as the noblest aspect of academia – the scholar's tenacity in identifying, acknowledging, addressing and building on the intellectual contributions of others. I have not experienced the same profound sense of community among my colleagues in the education realm, however – I have largely been a lone wolf. Now there has been a profound shift in my mindset – I use and build on the educational production of others; I do it openly on public sites, of which I am proud rather than embarrassed; I contribute back, and my students see and learn from this practice of scholarly appreciation, and are even encouraged to contribute to it through their own content creation and sharing. This opportunity for "scholarship" in educational practice is what, as an educator and scholar, I find most exciting about this nascent and exploding online education movement.
I think the point about the missing community around teaching is particularly critical. The aforementioned RPI professor Jim Hendler, who was recognized by Playboy Magazine as “one of the nation’s most influential and imaginative college professors” who are “reinventing the classroom,”2 talked about how he struggled to flip his classroom in a way that his students would embrace and lamented that he had no training in pedagogy. Later in his presentation, he talked about how university libraries and computer labs, which used to be places where students would go and solve problems together, are largely empty now. I wondered about how college education would be different if professors had shared problem-solving spaces for their teaching, like the study carrels and computer centers of yore, and if there were no stigma attached to sharing.
Meanwhile this week, San José State University announced the creation of the Center for Excellence in Adaptive and Blended Learning, the first project of which will be to teach faculty at 11 other CSU campuses how to use an edX course on circuits and electronics as the basis for a flipped class. It's a short step from training faculty on how to flip a class using the MOOC to a "distributed flip," where those faculty members share best practices with each other as they teach the same class using the same materials, and have their students interact with each other on the MOOC discussion board. This is promising.
It also raises questions about the MOOC course designs. At RPI, I was able to ask edX's Howard Lurie whether the course design for the blended classes in the SJSU project will be the same as the fully online one. He acknowledged that there would have to be a variant. We're going to see more of that. To the degree that MOOCs are going to be used in this way, they need to (1) have the curricular wrap-around that scaffolds the classroom use, and (2) be designed to be modular so that faculty using them in their own classrooms can customize them to the local needs of their students. In other words, we need to be able to draw different and more flexible lines between where the course-as-artifact ends and the human-directed course experience begins. Which, by the way, is essentially what I think Adrian Sannier was saying in his interview with me a while back when he positioned OpenClass courses in contrast to MOOCs:
“Somebody will make a math class with 6 million students around the world. But it will be offered locally with teachers at a scale of between 1 to 20 and 1 to 50. Because teachers matter.”
And this is where we get to the part about MOOCs competing with the textbook vendors. Both MOOC producers and textbook vendors are beginning to converge on a product model of courseware that is more of a complete curriculum than a traditional textbook but less of a stand-alone, autopilot course than a current-generation xMOOC. Both groups have a lot to learn about creating flexible designs that make the right compromises between completeness and ease of localization, as well as about facilitating communities of practice among teaching faculty. But it's clear that's where we're headed.
- This is in contrast with cMOOCs, which tend to explore the degree to which the pedagogical function can be fulfilled through crowdsourcing among the students.
- No, there wasn’t a photo spread.
Today Asahi Net International acquired the Sakai Division of rSmart. rSmart CEO Chris Coppola will join the Asahi Net International board, creating interlocking boards. The financial terms were not disclosed.
rSmart is a well known contributor to Apereo Inc.’s Sakai learning management system and to the Kuali suite of administrative software applications. rSmart has enhanced, implemented, and supported Sakai. It has also implemented the Kuali Financial System for colleges and universities.
This reorganization of effort may reflect broader changes in higher education: the relentless promotion of online learning and the demand for more productive administrative systems are being advocated as solutions to the rising cost of higher education in developed countries.
"Worldwide over 350 educational organizations [have confirmed use of] Sakai as a learning management system, research collaboration system and ePortfolio solution." The actual number is much higher, since Sakai users are not required to register their use and the Sakai software does not automatically report its use to a central site. A new version of the Sakai software, Sakai OAE, is expected soon; it passed intense performance testing late last year and can now serve a large number of students. The market for Sakai support and for Sakai as a "cloud" application is accelerating as colleges and universities continue to expand online education.
Asahi Net International's parent company, Japan-based Asahi Net Inc., developed a learning management system called manaba beginning in 2007. The company offers manaba cloud-based learning services to 190 colleges and universities.
Georgetown University's East Asia National Resource Center and Harvard College's Japan Initiative are manaba users. Manaba support is led by Tomoka Higuchi McElwain, M.Ed., a Stanford-trained educator. Presentations on the use of manaba were recently made at Educause 2012, the 20th International Conference on Computers in Education (ICCE 2012), and the Association of International Education Administrators 2013 Conference.
In April 2011, Asahi Net International, Inc. was established as a New York company. It was founded to support the growing international use of manaba.
In August 2012, parent Asahi Net, GSV Capital, and others invested $10.75 million in rSmart, with GSV investment advisor Michael Moe advising. At the time, Moe expressed confidence in the firm, saying, "rSmart is helping universities realize lower total cost of ownership and higher-quality products that are easy to use." Moe was a principal investment advisor for the last wave of funding in higher education: private for-profit colleges and universities.
Takashi "Take" Takekawa is the President and CEO of Asahi Net International, Inc., the company behind manaba, and he remains on the board of directors of rSmart. "Take" received an MBA from Harvard Business School.
The Kuali Foundation developed the community source model where colleges and universities would cooperatively develop administrative software and make it open source so higher education as a whole could benefit from that investment. rSmart CEO Chris Coppola was an early and vigorous supporter of that community model.
rSmart Board Chair John Robinson has a long and successful history founding and developing companies providing administrative software to colleges and universities. He founded Information Associates, later acquired by Sungard SCT. Robinson describes his commitment: “A large part of my job is spreading the word, helping open source become the business model for the development and distribution of software in the higher-education marketplace. The most gratifying aspect of this work is seeing the open-source community grow in education and collaborating with so many exceptional people.” Kuali has benefited from his “working with community leaders in education.”
Currently, the Kuali Foundation has four open-source software product lines: finance, research administration, IT infrastructure—called Rice, and student systems. rSmart is a Kuali commercial affiliate supporting all four software systems. The number of installations is shown in the Figure.
The Kuali Foundation has also developed and supports research administration software called Coeus, a higher education-specific application based on the Massachusetts Institute of Technology system of the same name. This system is becoming mission-critical for research universities as federal funding shifts emphasis, and licensing is expected to be a growing source of revenue for research universities. rSmart can be expected to benefit from this immediate need.
The two organizations combine talented staff with long commitments to higher education. They have a deep reservoir of experience and knowledge, as shown by their contributions to open-source software products designed specifically for higher education. Both companies have been, and should continue to be, successful without yielding control to organizations that have different values. Higher education should be pleased with today's announcement.
Edited by Paul Heald, Sigma Systems Inc.
Correction: Data received today reported that the manaba network serves 190 institutions globally; this correction has been made above. Combined, Asahi Net International will be supporting 230 academic institutions serving 550,000 students.
The post Asahi Net International Acquires the Sakai Division of rSmart appeared first on e-Literate.
Last week, edX made a splashy spectacle of an announcement about automated essay grading, leaving educators fuming. Let’s rethink their claims.
"Give professors a break," the New York Times suggested in its story on the joint announcement from edX, Harvard, and MIT. The breathless story weaves a tale of robo-professors taking over the grading process, leaving professors free to put their feet up and take a nap, and subsequently inviting universities, ever focused on the bottom line, to fire all the professors. If I had set out to write an article intentionally provoking fear, uncertainty, and doubt in the minds of teachers and writers, I don't think I could have done any better than this piece.
Anyone who has seen their own work covered in science journalism knows that the popular claims bear only the foggiest resemblance to the academic results. It's unclear to me whether the misunderstanding is due to edX intentionally overselling their product for publicity, or whether something got lost in translation while writing the story. Whatever the cause, the story was cocksure and forceful about auto-scoring's role in shaping the future of education.
I was a participant in last year's ASAP competition, which served as a benchmark for the industry. The primary result of this, aside from convincing me to found LightSIDE Labs, is that I get email. A lot of email. I've been told that automated essay grading is both the salvation of education and the downfall of modern society. Naturally, I have strong opinions about that, based on my experience developing the technology and participating in the contest, as well as on the conversations I've had since then.
Before we resign ourselves to burning the AI researchers at the stake, let's step back for a minute and think about what the technology actually does. Below, I've tried to correct the most common fallacies I've seen, coming both from articles like the edX piece and from the incendiary commentary that such articles provoke.

Myth #6: Automated essay grading is reading essays
Nothing will ever puzzle me like the way journalists require machine learning to behave like a human. When we talk about machine learning “reading” essays, we’re already on the losing side of an argument. If science journalists continue to conjure images of robots in coffee shops poring over a stack of papers, it will seem laughable, and rightly so.
To read an essay well, we have learned over our entire lives, you need to appreciate all of the subtleties of language. A good teacher reading through an essay will hear the author's voice, look for a cadence or rhythm in the writing, and appreciate the poetry in good responses to even the most banal of essay prompts.
LightSIDE doesn't read essays – it describes them. A machine learning system does pore over every text it receives, but it does what machines do best: compile lists and tabulate them. Robotically and mechanically, it pulls out every feature of a text that it can find – every word, every syntactic structure, and every phrase.
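To make that concrete, here is a minimal sketch in Python of what "describing" a text can look like. The tiny tokenizer and the feature set are my own illustration, not LightSIDE's actual feature extractors:

```python
from collections import Counter

# A toy version of "describing" a text: every word and every adjacent
# word pair becomes a countable feature. Real systems add syntactic
# features on top of these.
def describe(text):
    words = text.lower().split()
    features = Counter(words)               # every word
    features.update(zip(words, words[1:]))  # every adjacent word pair
    return features

print(describe("the duck stood on the grass"))
# Counter({'the': 2, 'duck': 1, ('the', 'duck'): 1, ...})
```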
If I asked whether a computer can grade an essay, many readers would compulsively respond that of course it can't. If I asked whether that same computer could compile a list of every word, phrase, and element of syntax that shows up in a text, I think many people would nod along happily, and few would be signing petitions denouncing the practice as immoral and impossible.

Myth #5: Automated grading is "grading" essays at all
Take a more blatantly obvious task. If I gave you two pictures, one of a house and one of a duck, and asked you to find the duck, would you be able to tell the two apart?
Let's be even more realistic. I give you two stacks of photographs. One is a stack of 1,000 pictures of the same duck, and one is a stack of 1,000 pictures of the same house. However, they're not all good pictures. Some are zoomed out and fuzzy; others are way too small, and you only get a picture of a feather or a door handle. Occasionally, you'll just get a picture of grass, which might be either a front lawn or the ground the duck is standing on. Do you think that you could tell me, after poring over each stack of photographs, which one was the pile of ducks? Would you believe the process could be put through an assembly line and automated?
Automated grading isn't doing any more than this. Each of the photographs in those stacks is a feature. After poring over hundreds or thousands of features, we're asking machine learning to put an essay in a pile. To a computer, whether this is a pile of ducks and a pile of houses, or a pile of A essays and a pile of C essays, makes no difference. The computer is going to comb through hundreds of features, some of them helpful and some of them useless, and it's going to put a label on a text. If it quacks like a duck, it will rightly be labeled a duck.

Myth #4: Automated grading punishes creativity (any more than people do)
This is the assumption everyone makes about automated grading. Computers can’t feel and express; they can only robotically process data. This inevitably must lead to stamping out any hint of humanity from human graders, right?
Well, no. Luckily, this isn't a claim that the edX team is making. However, by not addressing it head-on, they left themselves (and, by proxy, me and everyone else who cares about the topic) open to this criticism, and haven't done much to assuage people's concerns. I'll do them a favor and address it on their behalf.

An Extended Metaphor
Go back to our ducks and houses. As obvious as this task might be to a human, we need to remember, once again, that machines aren't humans. Presented with this task with no further explanation, not only would a computer do poorly at it; it wouldn't be able to do it at all. What is a duck? What is a house?
Machine learning starts at nothing – it needs to be built from the ground up, and the only way to learn is by being shown examples. Let’s say we start with a single example duck and its associated pile of photographs. There will be some pictures of webbed feet, an eye, perhaps a photograph of some grass. Next, a single example house; its photographs will have crown molding, a staircase; but it’ll also have some pictures of grass, and some photographs might be so zoomed in that you can’t tell whether you’re looking at a feather or just some wallpaper.
Now, let's find many more ducks and give them the same glamour treatment, and do the same for one hundred houses. The machine learning algorithm can now start making generalizations. Somewhere in every duck's pile, it sees a webbed foot, but it never saw a webbed foot in any of the pictures of houses. On the other hand, many of the ducks are standing in grass, and there's a lot of grass in most houses' front lawns. It learns from these examples – label a set of photographs as a duck if there's a webbed foot, but don't bother learning a rule about grass, because grass is a bad clue for this problem.
This problem gets to be easy rather quickly. Let’s make it harder and now say that we’re trying to label something as either a house or an apartment. Again, every time we get an example, the machine learning model is given a large stack of photographs, but this time, it has to learn more subtle nuances. All of a sudden, grass is a pretty good indicator. Maybe 90% of the houses have a front lawn photographed at one point or another, but since most of the apartments are in urban locations or large complexes, only one out of every five has a lawn. While it’s not a perfect indicator, that feature suddenly gets new weight in this more specific problem.
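To put the metaphor in code, here is a minimal sketch, with toy piles of my own invention, of how a feature earns or loses weight depending on whether it separates the two categories:

```python
# Each "stack of photographs" is a set of features (toy data of my own).
duck_piles = [{"webbed_foot", "grass", "feather"}, {"webbed_foot", "eye"}]
house_piles = [{"crown_molding", "grass"}, {"crown_molding", "staircase"}]

# A feature is useful to the extent that its rate differs between piles.
def usefulness(feature):
    duck_rate = sum(feature in pile for pile in duck_piles) / len(duck_piles)
    house_rate = sum(feature in pile for pile in house_piles) / len(house_piles)
    return duck_rate - house_rate

print(usefulness("webbed_foot"))  # 1.0  -> strong duck evidence
print(usefulness("grass"))        # 0.0  -> appears in both piles; a bad clue
```

Real systems estimate all the weights jointly rather than one feature at a time, but the intuition carries over: grass earns no weight because it never separates the piles.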
What does this have to do with creativity? Let’s say that we’ve trained our house vs. apartment machine learning system. However, sometimes there are weird cases. My apartment in Pittsburgh is the first floor of a duplex house. How is the machine learning algorithm supposed to know about that one specific new case?
Well, it doesn't have to have matched up this exact example before. Every feature that it sees, whether it's crown molding or a picket fence, will have a lot of evidence backing it up from those training examples. Machine learning isn't a magic wand, where a one-word incantation magically produces a result. Instead, all of the evidence will be weighed and a decision will be made. Sometimes it'll get the label wrong, and sometimes, even when it makes the "right" decision, there'll be room for disagreement. But unlike with most humans, with a machine learning system we can point to exactly the features being used and recognize why it made that decision. That's more than can be said about a lot of subjective labeling done by humans.

Back to Essay Grading
All of the same things that apply to ducks, houses, and apartments apply to essays that deserve an A, a B, or a C. If a machine grading system is being asked to label essays with those categories, then machine learning will start out with no notion of what that means. However, after many hundreds or thousands of essays are exhaustively examined for features, it’ll know what features are common in the writing that teachers graded in the A pile, in the B pile, and in the C pile.
When a special case arrives, an essay that doesn't fit neatly into the A pile or the B pile, we'd have no problem admitting that a teacher has to make a judgment call by weighing multiple sources of evidence from the text itself. Machine learning learns to mimic this behavior from teachers. For every feature of a text – conceptually no different from poring over a stack of photographs of ducks – the model checks whether it has observed similar features from human graders before, and if so, what grade the teacher gave. All of this evidence will be weighed and a final grade will be given. What matters, though, might not be the final grade – instead, what matters is the text itself, and the characteristics that made it look like it deserved an A, or a C, or an F. What matters is that the evidence used is tied back to human behaviors, based on all the evidence that the model has been given.

Myth #3: Automated grading disproportionately rewards a big vocabulary
Every time I talk to a curious fan of automated scoring, I'm asked, "What are the features of good writing? What evidence ought to be used?" The question flows naturally, but the easy answers are thoughtless ones, because the question is built on a bad premise. Yes, there are going to be some features that signal good writing almost everywhere, such as connective vocabulary and transition words at the start of paragraphs. These are like webbed feet in photos of ducks – we know they'll always be a good sign. Almost always, though, the weight of any one feature depends on the question being asked.
When I work with educators, I recommend not just that they collect several hundred essays. I ask that they collect several hundred essays, graded thoroughly by trained and reliable humans, for every single essay question they intend to assign. This unique set allows the machine learning algorithm to learn not just what makes “good writing” but what human graders were using to label answers as an A essay or a C essay in that specific, very targeted domain.
This means that we don’t need to learn a list of the most impressive-sounding words and call it good writing; instead, we simply need to let the machine learning algorithm observe what humans did when grading those hundreds of answers to a single prompt.
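As a rough sketch of that per-prompt workflow, here is what training might look like, with scikit-learn standing in for LightSIDE's machinery. The six toy answers and their A/C grades are my own invention, standing in for the hundreds of human-graded responses you would actually need:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In practice this would be several hundred graded answers to ONE prompt;
# these six toy answers and grades are my own invention.
essays = [
    "bird and bat wings are homologous structures with shared ancestry",
    "the wings correspond because the species share an ancestor",
    "homologous wings show common descent despite different functions",
    "birds and bats both fly so their wings are the same",
    "wings are the same thing in every animal that flies",
    "both animals have wings and that is why they are the same",
]
grades = ["A", "A", "A", "C", "C", "C"]  # labels from trained human graders

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(essays, grades)

print(model.predict(["the forelimbs are homologous, inherited from a common ancestor"]))
# ['A']: the model mimics this prompt's graders, not "good writing" in general
```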
Take, as an example, the word "homologous." Is an essay better if it uses this word instead of the word "same"? In the general case, no; I dare anyone to collect a random sampling of 1,000 essays and show me a statistical pattern indicating that human graders were more likely to give an essay a higher grade after that swap. It's simply not how human teachers behave, it won't show up in the statistics, and machine learning won't learn that behavior.
On the other hand, let’s say an essay is asking a specific, targeted question about the wing structure of birds, and the essay is being used in a college freshman-level course on biology. In this domain, if we were to collect 1,000 essays that have been graded by professors, a pattern is likely to emerge. The word “homologous” will likely occur more often in A papers than C papers, based on the professors’ own grades. Students who use the word “homologous” in place of the word “same” have not singularly demonstrated, with their mastery of vocabulary, that they understand the field; however, it’s one piece of evidence in a larger picture, and it should be weighted accordingly. So, too, will features of syntax and phrasing, all of which will be used as features by a machine learning algorithm. These features will only be given weight in machine learning’s decision-making to the extent that they matched the behavior of human graders. By this specialized process of learning from very targeted datasets, machine learning can emulate human grading behavior.
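Continuing the toy model from the sketch above, we can ask directly which words earned weight for this one prompt (again a sketch; real feature inspection is far noisier):

```python
# Peek inside the toy model trained above: one learned weight per word.
vec = model.named_steps["tfidfvectorizer"]
clf = model.named_steps["logisticregression"]
weights = dict(zip(vec.get_feature_names_out(), clf.coef_[0]))

for word in ("homologous", "ancestor", "same", "wings"):
    print(word, round(weights[word], 3))
# "homologous" and "ancestor" pull toward the A pile and "same" toward
# the C pile, while "wings", which appears in every answer (like the
# grass in the duck photos), earns almost no weight.
```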
However, this leads into the biggest problem with the edX story.

Myth #2: Automated grading only requires 100 training examples
Machine learning is hard. Getting it right takes a lot more help at the start than you might think. I don't contact individual teachers about using machine learning in their courses, and when a teacher contacts me, I start my reply by telling them they're about to be disappointed.
The only time it benefits you to grade hundreds of examples by hand to train an automated scoring system is when you're going to have to grade many hundreds more. Machine learning makes no sense in a creative writing context. It makes no sense in a seminar-style course with a handful of students working directly with teachers. However, machine learning has the opportunity to make massive inroads in large-scale learning: in lecture hall courses where the same assignment goes out to 500 students at a time, for digital media producers who will be giving the same homework to students across the country and internationally, and so on.
It's dangerous and irresponsible for edX to be claiming that 100 hand-graded examples are all that's needed for high-performance machine learning. It's wrong to claim that a single teacher in a classroom might be able to automate their curriculum with no outside help. That's not only untrue; it will also lead to poor performance, and a bad first impression is going to turn a lot of people off the entire field.

Myth #1: Automated grading gives professors a break
Look at what I’ve just described. Machine learning gives us a computer program that can be given an essay and, with fairly high confidence, make a solid guess at labeling the essay on a predefined scale. That label is based on its observation of hundreds of training examples that were hand-graded by humans, and you can point to specific, concrete features that it used for its decision, like seeing webbed feet in a picture and calling it a duck.
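For a taste of what "pointing to the features" can look like, here is one more sketch built on the toy model from earlier, reusing its vec and clf. It illustrates the idea, not LightSIDE's actual output:

```python
# Score one new answer and list which of its own words contributed most.
answer = "the wings are the same in both animals"
x = vec.transform([answer]).toarray()[0]

contributions = {
    word: x[i] * clf.coef_[0][i]   # this essay's tf-idf value times the weight
    for i, word in enumerate(vec.get_feature_names_out())
    if x[i] != 0
}
for word, score in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{word}: {score:+.3f}")
# "same" carries most of the evidence here, which is the kind of concrete,
# inspectable justification a student or a teacher can argue with.
```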
Let’s also say that you can get that level of educated estimation instantly – less than a second – and the cost is the same to an institution whether the system grades your essay once or continues to give a student feedback through ten drafts. How many drafts can a teacher read to help in revision and editing? I assure you, fewer than a tireless and always-available machine learning system.
We shouldn’t be thinking about this technology as replacing teachers. Instead, we should be thinking of all the places where students can use this information before it gets to the point of a final grade. How many teachers only assign essays on tests? How many students get no chance to write in earlier homework, because of how much time it would take to grade; how many are therefore confronted with something they don’t know how to do and haven’t practiced when it comes time to take an exam that matters?
Machine learning is evidence-based assessment. It's not just producing a label of A, B, or F on an essay; it's making a refined statistical estimation of every single feature that it pulls out of those texts. If this technology is to be used, then it shouldn't be treated as a monolithic source of all knowledge; it should be forced to defend its decisions by making its assessment process transparent and informative to students. This technology isn't replacing teachers; it's enabling them to give students help, practice, and experience with writing that the education field has never seen before and, without machine learning technology, never will.

Wrapping Up
"Can machine learning grade essays?" is a bad question. We know, statistically, that the algorithms we've trained work just as well as teachers for churning out a score on a 5-point scale. We know that occasionally they'll make mistakes; however, more often than not, what the algorithms learn to do is reproduce the already questionable behavior of humans. If we're relying on machine learning solely to automate the process of grading, to make it faster and cheaper and to expand access, then sure. We can do that.
But think about this. Machine learning can assess students’ work instantly. The output of the system isn’t just a grade; it’s a comprehensive, statistical judgment of every single word, phrase, and sentence in a text. This isn’t an opaque judgment from an overworked TA; this is the result of specific analysis at a fine-grained level of detail that teachers with a red pen on a piece of paper would never be able to give. What if, instead of thinking about how this technology makes education cheaper, we think about how it can make education better? What if we lived in a world where students could get scaffolded, detailed feedback to every sentence that they write, as they’re writing it, and it doesn’t require any additional time from a teacher or a TA?
That’s the world that automated assessment is unlocking. edX made some aggressive claims about expanding accessibility because edX is an aggressive organization focused on expanding accessibility. To think that’s the only thing that this technology is capable of is a mistake. To write the technology off for the audacity of those claims is a mistake.
In my next few blog posts, I’ll be walking through more of how machine learning works, what it can be used for, and what it might look like in a real application. If you think there are specific things that ought to be elaborated on, say so! I’ll happily adjust what I write about to match the curiosities of the people reading.
The post Six Ways the edX Announcement Gets Automated Essay Grading Wrong appeared first on e-Literate.
When the story first broke a while back about the Kaggle contest for robo-grading essays that could be “similar to” human graders, I got interested. So after doing a little reading, I ended up contacting a guy by the name of Elijah Mayfield, a PhD student at Carnegie Mellon University and one of the winners of the contest. The net result of our conversation is that I ended up writing a blog post called “What Is Machine Learning Good For?” Fast-forward a bit. Elijah now has a start-up called LightSIDE Labs based on the same technology, and The New York Times is writing puff pieces on how edX is going to change the world with this technology. In the meantime, I have been talking to Elijah for a while about getting him to write at e-Literate.
I’m happy to say that Elijah will be doing a series of posts for us on machine learning in education, starting today. Please welcome him.