Michael and I have been at the MOOC Research Initiative conference in Arlington, TX (#mri13) for the past three days. Actually, thanks to the ice storm it turns out MRI is the Hotel California of conferences.
credit: Bailey Carter assignment for Laura Gibbs’ class
While I’m waiting to find out which fine Texas hotel dinner I might enjoy tonight, I thought it would be worthwhile to share more information from the University of Pennsylvania research that seems to be the focus of media reports on the conference (see Chronicle, Inside Higher Ed, and eCampusNews, for example). Penn has tracked approximately one million students through their 17 first-generation MOOCs on Coursera, which provided the foundation for this research.
“Emerging data … show that massive open online courses (MOOCs) have relatively few active users, that user ‘engagement’ falls off dramatically especially after the first 1-2 weeks of a course, and that few users persist to the course end,” a summary of the study reads.
For anyone who has paid even the slightest bit of attention to the MOOC space over the past year, those conclusions hardly qualify as revelations. Yet some presenters said they felt the first day of the conference served as an opportunity to confirm some of those commonly held beliefs about MOOCs.
While it is accurate that these basic observations have been made in the past, there was some additional information from U Penn worth considering. The following slide images are courtesy of Laura Perna, a member of the research team.
For such a large initiative, it turns out that only two of the courses studied were targeted at college students (Single-variable Calculus and Principles of Microeconomics). There were seven courses targeted at “occupational” students (Cardiac Arrest, Gamification, Networked Life, Intro to Ops Management, Fundamentals of Pharmacology, Scarce Medical Resources and Vaccines) and eight for “enrichment” (ADHD, Artifacts in Society, Health Policy and ACA, Genome Science, Modern American Poetry, Greek and Roman Mythology, Listening to World Music, and Growing Old).
As the Chronicle pointed out, there was a wide variation in these courses.
The courses varied widely in topic, length, intended audience, amount of work expected, and other details. The largest, “Introduction to Operations Management,” enrolled more than 110,000 students, of whom about 2 percent completed the course. The course with the highest completion rate, “Cardiac Arrest, Resuscitation Science, and Hypothermia,” enrolled just over 40,000 students, of whom 13 percent stuck with it to the end.
This variation included the use of teaching assistants.
The research tracked several characteristics of the student population:
- Users – these are all students who registered for the course, regardless of time frame.
- Registrants – these are the subset of Users who registered anytime from before the course started through the last week of the course. The difference is interesting, as there were quite a few Users who registered well after the course was over, essentially opting for a self-paced experience. We have seen very little analysis of this difference.
- Starters – these are the students who logged into the course and had some basic course activity.
- Active Users – these are the students who watched at least one video (I’m not 100% sure if this is accurate, but it is close).
- Persisters – these are the students who were still active within the last week of the course.
Given their categories, the Penn team showed percentages across all the courses in question. The completion rate (% of Registrants who were Persisters) varied from 2% to 13%. More useful, in my opinion, was the view of all categories across all courses.
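To make the funnel concrete, here is a minimal sketch of how these categories nest, in the spirit of the Penn definitions above. The field names and dates are hypothetical, not the team’s actual schema or pipeline:

```python
# Hypothetical registration/activity records; field names and dates are
# illustrative, not the Penn team's actual schema.
from datetime import datetime

course_end = datetime(2013, 3, 1)    # last day of the course
last_week = datetime(2013, 2, 22)    # start of the final week

def categorize(students):
    """Bucket students into the nested categories used in the Penn study."""
    users = students  # everyone who ever registered, regardless of time frame
    registrants = [s for s in users if s["registered_at"] <= course_end]
    starters = [s for s in registrants if s["logged_in"]]
    active_users = [s for s in starters if s["videos_watched"] >= 1]
    persisters = [s for s in active_users if s["last_activity"] >= last_week]
    return users, registrants, starters, active_users, persisters

def completion_rate(registrants, persisters):
    # Completion rate as defined above: % of Registrants who were Persisters.
    return 100.0 * len(persisters) / len(registrants)
```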
And finally, they showed the pattern of MOOC activity over time, as shown by this view of quizzes in one course. The general pattern is a steep drop-off in week one, followed by a slower decrease. A few observations on this research:
1) Which Categories - I think the team missed an opportunity to build on the work of the Stanford team, which identified different student patterns with more precision (see Stanford report here and my graphical mash-up here).
2) Self-Paced - As mentioned before, the separation between students who registered during the course’s official time frame (Registrants) and those who registered after the course was over is interesting. This latter group ranged from 2% to 23%, which is significant. Thousands and even tens of thousands of students are choosing to register and access course material when the course is not even “running”. They would have access to open material, quizzes and presumably assignments on a self-paced basis, but likely have no interactions with other students or the faculty.
3) Learner Goals - As was discussed frequently at the conference (but not in news articles about the conference), when you open a course up in terms of enrollment, one result is that you get a variety of student types with different goals. Not everyone desires to “complete” a course, and it is a mistake to solely focus on “course completion” when referring to MOOCs. For future research, I would hope that U Penn and others would find a way to determine learner goals near the beginning of the course then measure whether students met their learning goals either when finishing or dropping out.
Full disclosure: Coursera has been a client of MindWires Consulting.
Two weeks ago I attended the WCET Conference in Denver. While much smaller than EDUCAUSE and some others, I find these conferences to be great learning experiences, especially as WCET supports open dialogue between academic leaders (provosts, deans, etc), academic IT and edtech support, and industry leaders. The combination of mindsets – especially the combination of academic and technology – leads to very strategic discussions.
This year the keynote was presented by Dr. Paul LeBlanc, the president of Southern New Hampshire University, which has made quite a name for itself with the College for America (CfA) program. College for America is probably the second best-known example of competency-based education (CBE) after Western Governors University (WGU), and in fact the CfA program was the first to gain Department of Education approval using the “direct assessment” rule that completely avoids seat time. See my previous post for a primer on CBE.
While you can see the entire keynote here (using the mediasite player), I wanted to highlight four key points that help illuminate the reality of competency-based education today. As CBE becomes more hotly debated, it is useful to have real examples to evaluate. I have very roughly paraphrased and taken notes on what I heard in the keynote on these points, not as a defense or endorsement of CfA, but as a real-world early example of CBE.
1) Competency-based education is typically targeted at working adults (14:11 – 20:40)
One of the things that muddies our own internal debates and policy maker debates is that we say things about higher education as if it’s monolithic. We say that ‘competency-based education is going to ruin the experience of 18-year-olds’. Well, that’s a different higher ed than the people we serve in College for America. There are multiple types of higher ed with different missions.
The one CfA is interested in is the world of working adults – this represents the majority of college students today. Working adults need credentials that are useful in the workplace, they need low cost, they need short completion times, and they need convenience. Education has to compete with work and family requirements.
CfA targets the bottom 10% of wage earners in large companies – these are the people not earning sustainable wages. They need stability and advancement opportunities.
CfA has two primary customers – the students and the employers who want to develop their people. In fact, CfA does not have a retail offering, and they directly work with employers to help employees get their degrees.
2) Competency-based education can require the unbundling of instruction (25:56 – 32:14)
One of the goals of CfA is to use technology to rethink its own business processes, which leads to disaggregation or unbundling. Higher ed does have experience with unbundling (food services, marketing services, etc.), but typically higher ed has resisted unbundling the core of what we do – instruction. Traditional faculty can fiercely hold on to these functions. This unbundling causes you to rethink the processes around course design, instruction, advising, and assessment, and there are a growing number of sources for these services (see slide below).
The most important change is to rethink the credit hour, which is the Higgs-Boson – the god particle – of higher ed. Amy Laitinen has a great article explaining the history of the credit hour. It’s great at telling you how long a student has sat down, but not very good at telling you what the student knows. We know that employers trust higher ed less and less. This leads to the core concept of competency-based education – focus on competencies rather than seat times. Jobs for students are not the only goal for higher education, but employers don’t think we do a very good job.
credit: Paul LeBlanc slides at WCET13
3) Competency-based assessment does not equal testing (41:14 – 45:05)
Competencies are can-do statements, they’re measurable, they’re observable, and they are harder for some disciplines than for others.
CfA doesn’t do tests – they instead rely on project-based learning. Competencies never exist in isolation, as they end up in workplace situations. When students select which cluster of competencies to work on, they select an appropriate role to take and work through the projects.
The projects lead to filled-out rubrics that are evaluated by trained faculty, typically with a 48-hour turn-around time.
4) Competency-based education often requires custom IT systems such as the LMS (45:08 – 46:19)
Most higher ed IT systems have been designed with the traditional credit hour in mind, with defined start and end dates. CfA started using Blackboard but abandoned that effort right away. They then went to Canvas, but Canvas is still very credit-hour based. This is not a knock on those LMSs, since they were built to solve a different problem.
In the end, CfA developed their own LMS based on the Salesforce.com platform. The first iteration of the platform was a kludgy mashup, and they had to do a lot of work to simplify the user interface.
Full disclosure: Western Governors University has been a client of MindWires Consulting.
Long-time readers know that I have had a close affiliation with the Sakai Foundation at times and served on the Board of Directors relatively recently. This year, Sakai merged with the Jasig Foundation to form the Apereo Foundation. The purpose of the new organization is to become a sort of Apache Foundation of higher education in the sense that it is an umbrella community where members of different open source projects can share best practices and find fellow travelers for higher ed-relevant open source software projects. In this merger, the whole is greater than the sum of its parts. For example, last week the foundation announced that the board of the Opencast project, which supports the Matterhorn lecture capture and video management software, voted to effectively merge with the Apereo Foundation. (The decision is subject to discussion and feedback by the Opencast community.) On the one hand, Opencast probably wouldn’t have joined the Sakai Foundation because they want to interoperate with all LMSs and wouldn’t want to be perceived as favoring one over another. On the other hand, had they joined Jasig prior to the merger, they would not have had access to the rich community of education-focused technologists that Sakai has to offer. (Jasig has historically focused mostly on tech-oriented solutions like portals and identity management solutions). The fact that the Opencast community is interested in joining Apereo is a strong indicator that the new foundation is achieving its goal of establishing an ecumenical brand.
And in that context, I am pleased to tell you that I will be involved with the foundation in a more ecumenical role. Specifically, I will be facilitating an advisory council.

About the Council
The goal of the council is to provide the Apereo Foundation Board and community with perspective. When you are running an open source software project, it’s easy to get a little near-sighted as you focus on the hard work of shipping code, gathering requirements, coordinating volunteer developers, finding additional volunteers, and so on. You can get lost in the details that are essential to short-term viability for the project but that can distract you from issues of long-term sustainability, such as thinking about schools that have not adopted your project but might, or about important changes on campuses that are relevant to the ways in which your software will be perceived and used. The Advisory Council is meant to offer some of that perspective. We have invited participants who are one or several steps removed from the projects. They might work at a school that is heavily involved but not be personally heavily involved. They might be at a school that has adopted a community project but is not deeply involved with the community at the moment. They might have been active in the community in their previous job but be further removed from it at present. Or they might come from a school that has not adopted any of the projects but could be receptive to adopting the right project some time in the future.
The group will convene four times a year to provide feedback on presentations from the Board and from various project teams on their vision, goals and plans. That’s it, really. Provide perspective. It’s a simple role, but an important one. We hope that if all goes well our council members will choose to act as informal ambassadors between the Apereo community and the broader community of higher education, but that would really be a byproduct of success rather than something that we’re asking our councilors to do.

The Members
I am absolutely delighted with the group that we have assembled:
- Kimberly Arnold, Evaluation Consultant, University of Wisconsin
- Lois Brooks, Vice Provost of Information Services, Oregon State University
- Laura Czerniewicz, Director of the Centre for Educational Technology, University of Cape Town
- Ted Dodds, CIO, Cornell University
- Kent Eaton, Provost, McPherson College
- Stuart Lee, Deputy CIO, IT Services, Oxford University
- Patrick Masson, General Manager, Open Source Initiative
- Lisa Ruud, Associate Provost, Empire Education Corporation
I feel privileged to be able to work with this group.

Diversity
If your goal is to provide perspective, then it is particularly important to get a diverse group together. Overall, I’m pleased to say that we have achieved diversity across a number of dimensions:
- Roles: I wanted to get a good balance of academic and IT stakeholders, as well as at least one ed tech researcher. We’ve achieved that.
- Institutions: We definitely achieved some diversity of institutions, particularly when you throw the Empire Education Corporation and the Open Source Initiative into the mix. In the future, though, I would like to get more representation from smaller schools that don’t typically get involved in open source projects.
- Geography: Apereo is a global community and ultimately needs global input. But we have to balance that against the need to have a workable spread of time zones among council members who will be meeting with each other mostly virtually. The compromise we struck this time around was to have one representative from Europe and one from Africa as a down payment toward that goal. Ultimately, we will probably need to have several regional advisory councils.
- Gender: In keeping with the Apereo Foundation Board’s stated goal of cultivating women leaders in the community, I am delighted that we have achieved gender balance in our council membership. By the way, this was not hard. Asking colleagues to recommend women who would be good isn’t any different from asking them to recommend academic deans or leaders from small schools.
- Race: I don’t know for certain how we did on this metric because I haven’t met some of the council members in person yet, but my sense is that it is either a near or a total failure. Going forward, I would like to shoot for more racial diversity.
* * * * *
So that’s the deal. I am looking forward to convening the first meeting of this group (probably in January) and getting to know them better. More generally, I am really happy with the direction that the new foundation is taking and am privileged to be able to play a small part in it.
There has been a significant amount of progress and interest in competency-based education, as Wisconsin has launched its Flexible Option program, federal legislators are pushing the concept, ‘patient zero’ has graduated from College for America, and resistance is emerging to the concept. I thought it might be useful to update and re-share the primer that was originally posted in August, 2012.
I’m not an expert in the academic theory and background of outcomes and competencies, so in this post I’ll summarize the key points from various articles as they relate to the current market for online programs. Links are included for those wanting to read in more depth.
What is Competency-Based Education?
SPT Malan wrote in an article from 2000 about the generally-accepted origins:
Competency-based education was introduced in America towards the end of the 1960s in reaction to concerns that students are not taught the skills they require in life after school.
Competency-based education (CBE) is based on the broader concept of Outcomes-based education (OBE), one that is familiar to many postsecondary institutions and one that forms the basis of many current instructional design methods. OBE works backwards within a course, starting with the desired outcomes (often defined through a learning objectives taxonomy) and relevant assessments, and then moving to the learning experiences that should lead students to the outcomes. Typically there is a desire to include flexible pathways for the student to achieve the outcomes.
OBE can be implemented in various modalities, including face-to-face, online and hybrid models.
Competency-based education (CBE) is a narrower concept, a subset or instance of OBE, where the outcomes are more closely tied to job skills or employment needs, and the methods are typically self-paced. Again based on the Malan article, the six critical components of CBE are as follows:
- Explicit learning outcomes with respect to the required skills and concomitant proficiency (standards for assessment)
- A flexible time frame to master these skills
- A variety of instructional activities to facilitate learning
- Criterion-referenced testing of the required outcomes
- Certification based on demonstrated learning outcomes
- Adaptable programs to ensure optimum learner guidance
The Council for Adult and Experiential Learning put out a paper last summer that examines the current state of CBE. In this paper the author, Rebecca Klein-Collins, shows that there is a spectrum of implementation of competency models.
One subset of institutions uses competency frameworks in the context of a course-based system. By course-based system, we mean that students take the same kinds of courses that have always been offered by colleges and universities: instructor-led and credit-hour based.
These may be offered on campus or off, in the classroom or online, accelerated or normally paced. These institutions define competencies that are expected of graduates, and students demonstrate these competencies by successfully completing courses that relate to the required competencies. In some cases, institutions embed competency assessments into each course. In most of the examples presented in this paper, the institution also offers the option of awarding credit for prior learning, and usually [prior learning assessments] is course-based as well.
There seems to be a fairly big jump, however, once the program moves into a self-paced model. For these self-paced CBE initiatives, which are the subject of recent growth in adoption, the current implementations of CBE tend to be:
- Flexible to allow for retaking of assessments until competency is demonstrated; and
- Targeted at college completion for working adults.
What is driving the current growth and interest in competency-based models?
In a nutshell, the current emphasis and growth in CBE is driven by the desire to provide lower-cost education options through flexible programs targeted at working adults.
Playing a significant role is government at both the federal and state level. In March of 2013 the Department of Education offered guidance to encourage and support the new competency programs, and in President Obama’s higher ed plan unveiled in August, there was direct reference to competency programs:
To promote innovation and competition in the higher education marketplace, the President’s plan will publish better information on how colleges are performing, help demonstrate that new approaches can improve learning and reduce costs, and offer colleges regulatory flexibility to innovate. And the President is challenging colleges and other higher education leaders to adopt one or more of these promising practices that we know offer breakthroughs on cost, quality, or both – or create something better themselves:
- Award Credits Based on Learning, not Seat Time. Western Governors University is a competency-based online university serving more than 40,000 students with relatively low costs— about $6,000 per year for most degrees with an average time to a bachelor’s degree of only 30 months. A number of other institutions have also established competency-based programs, including Southern New Hampshire University and the University of Wisconsin system. [snip]
- Reduce Regulatory Barriers: The Department will use its authority to issue regulatory waivers for “experimental sites” that promote high-quality, low-cost innovations in higher education, such as making it possible for students to get financial aid based on how much they learn, rather than the amount of time they spend in class. Pilot opportunities could include enabling colleges to offer Pell grants to high school students taking college courses, allowing federal financial aid to be used to pay test fees when students seek academic credit for prior learning, and combining traditional and competency-based courses into a single program of study. The Department will also support efforts to remove state regulatory barriers to distance education.
Why has it taken so long for the model to expand beyond WGU?
Despite the history of CBE since the 1960s, it has only been since the early 2000s that CBE has started to take hold in US postsecondary education, with rapid growth occurring in the past year. From my earlier post:
Consider that just [three] years ago Western Governors University stood almost alone as the competency-based model for higher education, but today we can add Southern New Hampshire University, the University of Wisconsin System, Northern Arizona University, StraighterLine and Excelsior College.
Why has it taken so long? Although there is a newfound enthusiasm for CBE from the Obama administration, there have been three primary barriers to adoption of competency-based programs: conflicting policy, emerging faculty resistance, and implementation complexity.
1) Conflicting Policy
In Paul Fain’s article at Inside Higher Ed, he described some of the policy-related challenges pertaining to CBE.
Competency-based higher education’s time may have arrived, but no college has gone all-in with a degree program that qualifies for federal aid and is based on competency rather than time in class.
Colleges blame regulatory barriers for the hold-up. The U.S. Education Department and accreditors point fingers at each other for allegedly stymieing progress. But they also say the door is open for colleges to walk through, and note that traditional academics are often skeptical about competency-based degrees.
In 2005 Congress passed a law intended to help Western Governors University (WGU) and other CBE models, defining programs that qualify for federal financial aid to include:
an instructional program that, in lieu of credit hours or clock hours as the measure of student learning, utilizes direct assessment of student learning, or recognizes the direct assessment of student learning by others, if such assessment is consistent with the accreditation of the institution or program utilizing the results of the assessment.
Despite this law, WGU never actually used it, due to policy complexity, opting instead to map its competencies to seat time equivalents.
In fact, the first program to use this “direct assessment” clause to fully distinguish itself from the credit hour standard was Southern New Hampshire University’s College for America program in March of 2013. As described in the Chronicle:
Last month the U.S. Education Department sent a message to colleges: Financial aid may be awarded based on students’ mastery of “competencies” rather than their accumulation of credits. That has major ramifications for institutions hoping to create new education models that don’t revolve around the amount of time that students spend in class.
Now one of those models has cleared a major hurdle. The Education Department has approved the eligibility of Southern New Hampshire University to receive federal financial aid for students enrolled in a new, self-paced online program called College for America, the private, nonprofit university has announced.
Southern New Hampshire bills its College for America program as “the first degree program to completely decouple from the credit hour.”
2) Emerging Faculty and Institutional Resistance
Not everyone is enamored of the potential of competency-based education, and there is an emerging resistance from faculty and traditional institutional groups. The Association of American Colleges and Universities published an article titled “Experience Matters: Why Competency-Based Education Will Not Replace Seat Time” which argued against applying competency models to the liberal arts:
Perhaps such an approach makes sense for those vocational fields in which knowing the material is the only important outcome, where the skills are easily identified, and where the primary goal is certification. But in other fields—the liberal arts and sciences, but also many of the professions—this approach simply does not work. Instead, for most students, the experience of being in a physical classroom on a campus with other students and faculty remains vital to what it means to get a college education.
In addition, I am hearing more and more discomfort from faculty members, especially as there are new calls to broadly expand competency-based programs beyond the working adult population. I have yet to see much formal resistance from faculty groups, however, and the bigger challenge is cultural barriers within the institutions or systems where CBE programs have been announced.
3) Implementation Complexity
I would add that the integration of self-paced programs not tied to credit hours into existing higher education models presents an enormous challenge. Colleges and universities have built up large bureaucracies – expensive administrative systems, complex business processes, large departments – to address financial aid and accreditation compliance, all based on fixed academic terms and credit hours. Registration systems, and even state funding models, are tied to the fixed semester, quarter or academic year – largely defined by numbers of credit hours.
It is not an easy task to allow transfer credits coming from a self-paced program, especially if a student is taking both CBE courses and credit-hour courses at the same time. The systems and processes often cannot handle this dichotomy.
I suspect this is one of the primary reasons the CBE programs that have gained traction to date tend to be separated in time from the standard credit-hour program. CBE students either take their courses, reach a certain point, and transfer into a standard program; or they enter a CBE program after they have completed a previous credit-hour based program. In other words, the transfer between the competency world and the credit-hour world happens along academic milestones. Some of the new initiatives, however, such as the University of Wisconsin initiative, are aiming at more of a mix-and-match flexible degree program.
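To make the dichotomy concrete, here is a hypothetical sketch of the two kinds of records a student information system would have to reconcile. This is my own illustration, not any institution’s actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CreditHourEnrollment:
    # Traditional record: meaningless without a fixed academic term.
    student_id: str
    course_id: str
    term: str          # e.g. "Fall 2013"; drives aid, funding, and reporting
    credit_hours: int  # drives financial aid eligibility and state funding

@dataclass
class CompetencyRecord:
    # Self-paced record: no term and no seat time, only demonstrated mastery.
    student_id: str
    competency_id: str
    demonstrated_on: date   # can fall at any point in the calendar
    assessment_score: float

# A student enrolled in both models at once produces two record types that
# term-driven processes (aid disbursement, enrollment reporting, transfer
# articulation) have no obvious way to combine.
```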
The result is that the implementation of a competency-based initiative can be like a chess match. Groups need to be aware of multiple threats coming from different angles, while thinking two or three moves ahead.
It will be interesting to watch the new initiatives develop. However difficult their paths are, I think this is an educational delivery model that will continue to grow.
Full disclosure: Western Governors University has been a client of MindWires Consulting.
Update 12/1: I have bumped this link from the comments via Nick DiNardo. It is a ~20 minute interview with ‘student zero’ or ‘patient zero’ of the College for America program, giving direct feedback from a CBE student. Thanks Nick.
It’s fair to say that Purdue University has sparked several important conversations in ed tech through their work on Course Signals. First, they pretty much put the retention early warning system as a product category on the map, conducting ground-breaking research and building a system that several major ed tech players have either licensed or imitated. More recently, they have sparked a conversation about the state of ed tech research and peer review as their more recent research has been called into question. I highly recommend reading the comment threads on these two posts to get a sense of that conversation.
Now I think Purdue may spark a third conversation—this time around the ethics of institutional learning analytics research and commercialization. Because there is no question in my mind that they have a serious ethical problem on their hands.
While I have no proof that Purdue is aware of the concerns that have been raised about the Course Signals research, I think it highly unlikely that they are unaware, given that articles have been published in Inside Higher Ed and the Times Higher Education. The questions have been out for a month now, and so far we have nothing in the way of an official response from the university.
That’s a big problem for several reasons. First, as has been mentioned here before, Purdue has licensed its technology to Ellucian for sale to other schools. In other words, the university is effectively making money on the strength of research claims that have now been called into question. Second, the people who conducted and published the research are not tenured faculty but non-tenurable staff, and they did so using institutional data the access to which Purdue ostensibly controls. It seems overwhelmingly likely that the researchers whose work is being challenged are effectively powerless to respond without permission and support from their institution. If so, then these people are being put in a terrible position. They are listed as the authors of the research, but they do not have the power that an academic Principal Investigator would have to be properly accountable for the work.
For both of these reasons, I believe that Purdue has an ethical obligation as an institution to respond to the criticism. Since they seem disinclined (or at least slow) to do so of their own accord, perhaps some appropriate pressure can be brought to bear. If you are an Ellucian customer, I urge you to contact them and ask why there has not been an official response to the challenge regarding the research. Both of the partners here should know that their brand reputations and therefore future revenue streams are at stake here. (I would be grateful if you would let me know, either publicly or privately, if you take this step. I would like to keep track of the pressure that is being brought to bear. I will keep your name and that of your institution private if you want me to.)
But I also think there is a broader conversation that needs to happen about the general problem. On the one hand, schools have an obligation to protect the privacy of their students. This makes releasing student success research data challenging. On the other hand, if the research cannot be properly peer reviewed because it cannot be shared, then we cannot develop confidence in the research that is coming to us. This problem is exacerbated when research is conducted by staff whose independence is not protected, and by the increasing tendency of institutions to commercialize their educational technology research and development work. There needs to be a community-developed framework to help facilitate the safe and appropriate sharing of the data so institutions can be held accountable for their research and the staff who conduct that research can be appropriately protected.
Douglas Belkin wrote an article yesterday in the Wall Street Journal based on a study from Moody’s Investors Service. The lede of the article is that “nearly half of the nation’s colleges and universities are no longer generating enough tuition revenue to keep pace with inflation”, which comes from Moody’s interest in institutional financial stability, but I think there are other lessons available. While this revolution in college pricing is quiet, it is profound.
It is worth noting that the big changes are based on FY2013 and 2014, which include survey-based estimates and projections rather than hard data. The article is behind a paywall, but here are some relevant excerpts (read the whole article if you can).
Nearly half of the nation’s colleges and universities are no longer generating enough tuition revenue to keep pace with inflation, highlighting the acceleration of a downward spiral that began as the recession ended, according to a new survey by Moody’s Investors Service.
Technically speaking, the US recession ended in June 2009, so the argument that the downward spiral began at that point does not match the data. The downward spiral started in the 2012–13 school year, fully three years later. There are other correlations that might be more relevant, such as student loans and defaults, acceptance that job prospects are not rebounding, etc.
The survey of nearly 300 schools reflects a cycle of disinvestment and falling enrollment that places a growing number at risk. While schools for two decades were seeing rising enrollments and routine increases of 5% to 8% in net tuition, many now are facing grimmer prospects: a shrinking pool of high-school graduates, depressed family incomes and precarious job prospects after college.
These are all good points, but I would extend this argument to say that families are now becoming value shoppers for college certificates and degrees. While there is still strong sentiment that the investment in college will pay off over time, families and students want to minimize the investment amount (and risk).
The softening demand for four-year degrees is prompting schools to rein in tuition increases while increasing scholarships. Those moves are cutting into net tuition revenue—the amount of money a university collects from tuition, minus school aid.
For 44% of public and 42% of private universities included in the survey, net tuition revenue is projected to grow less than the nation’s roughly 2% inflation rate this fiscal year, which for most schools ends in June. Net tuition revenue will fall for 28% of public and 19% of private schools.
What you can see from the data is not stasis followed by an accelerating drop-off; what can be seen is a reversal from tuition revenue increases far above inflation to increases far below it. Annual growth of 5% to 8% in net tuition revenue was not sustainable, and it is likely one of the major causes of the dramatic changes over the past few years.
As Herbert Stein taught us, trends that can’t continue, won’t.
Keep in mind that we’re talking net tuition revenue, which subtracts out school aid but includes federal financial aid. The change in school revenue cannot be explained by a drop in federal financial aid, however.
This drop in net tuition revenue also cannot be explained by a drop in state spending on public institutions – that is another category.
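To make the arithmetic concrete, here is a minimal worked example of the net tuition revenue calculation. The figures are invented for illustration, not drawn from the Moody’s survey:

```python
# Hypothetical figures for a single institution; illustrative only.
gross_tuition = 100_000_000      # sticker-price tuition billed
institutional_aid = 35_000_000   # school-funded scholarships and discounts

net_tuition = gross_tuition - institutional_aid  # federal aid stays included
# net_tuition = 65,000,000

nominal_growth = 0.01  # 1% year-over-year growth in net tuition revenue
inflation = 0.02       # roughly the national rate cited above

real_growth = nominal_growth - inflation
# real_growth = -0.01: revenue grows in dollars but shrinks in real terms,
# the condition the Moody's survey flags for nearly half of schools.
```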
“We don’t know where the bottom is; if we knew, we could structure appropriately,” said [U Louisiana President] Mr. Bruno, with regard to the budget cuts. The result: “We have to look at a different business model; we can’t just depend on our region anymore.”
The context of this comment is higher tuition for out-of-state students, but that is a losing strategy in my opinion. He does have a point with “we have to look at a different business model”.
Schools with the strongest brands are less vulnerable to these trends. For instance, as the international market consolidates, flagship state schools with strong reputations already established in foreign countries stand to benefit from their alumni networks. Midtier schools lacking a presence overseas will find it harder to break into new overseas markets.
This point is key, as it backs up two important observations on higher ed we have seen recently.
- The schools most at risk are those without strong name recognition. While that might be unfair, it seems to be a fact of life.
- While Clayton Christensen has taken some heat for the projection that “in 15 years from now half of US universities may be in bankruptcy”, the data from this survey lends some credibility to the concept.
As stated before, keep in mind that this is survey-based data, but it does provide insight into some important issues faced by US colleges and universities.
A company called Sampo IP, which is a wholly owned subsidiary of patent troll Marathon Patent Group, is suing Blackboard for patent infringement. The patents in question appear to be incredibly broad and have also been asserted against Salesforce as well as high-profile customers of collaboration companies Jive, Hyperoffice, and Rally, including Dell, Starbucks, Hewlett-Packard, Aetna, and about 15 others. I have not read the patents carefully, but they seem to be related to applying push notifications to allow one person in a group to send notifications or requests to other people in the group.
For those of you tempted toward Schadenfreude because of the sins of Blackboard’s past, that would be a mistake. Patent trolls like Marathon can come after any company and their customers. In this case, the Marathon patents appear to be broad enough to be important for countless educational applications and could be applied against a range of vendors and schools alike. At the moment, there are no reports of suits against education customers, and there are practical reasons why the patent trolls would be somewhat unlikely to file suit against such customers in the future. But there is no legal barrier. We should all be rooting for Blackboard and the other defendants in this case.
Patents are a serious and ongoing threat to education. I had hoped during the Blackboard v. D2L fight that there would have been enough concern and awareness within the community to take some broader action, but that never happened. Unfortunately, there are very few tools to employ against patent trolls, but what steps can be taken to minimize edupatent suits in general should be taken. Vendors in the space, who own substantial IP, should be encouraged by their customers to form a protective patent pool, for example. Patent pools are of little value against trolls, but at least a pool would reduce the likelihood of lawsuits by practicing companies. And since big technology players like Google, IBM, Microsoft, and Oracle do substantial business in this space, perhaps they could be persuaded to contribute broader patents and create a substantial umbrella of protection. It is even possible the patent pool might yield prior art that could be effective in invalidating patents held by trolls. The sector could also unite to give companies like Marathon a PR black eye whenever they come after educational software. There are steps that could be taken, but educational leadership needs to step up to make it happen.
You can find and read the legal complaint here.
Doug Clow of the Open University has published a thoughtful and detailed blog post in response to the Course Signals effectiveness controversy. He covers far too much ground for me to attempt to summarize here, but I think there are some common themes emerging from the commentary so far:
- The concerns over the one study have not changed the fact that Course Signals and the researchers who have been studying it are generally held in high regard. They have some very strong intra-course results which have not been challenged by current re-analysis. Even on the research that is now being challenged, they made a good faith effort to follow best research practices and are exemplars in some respects. On a personal note, I know Kim Arnold (although I did not realize she was an author on the particular study in question until Doug mentioned it in his post) and, like Doug, I think very highly of her and have learned a lot from her about learning analytics over the years.
- That said, both the researchers and, especially, Purdue as an institution have an obligation to respond as promptly as is feasible to the challenge, in part because Purdue has chosen to license the technology in question and stands to make money based on these research claims (regardless of whether the researchers’ work was independent of the business deal). To be clear, nobody is accusing anyone of deliberately cooking the books. The point is simply that Purdue has an added ethical obligation as a consequence of the business deal.
- The larger problem is not so much with the Purdue work itself as it is with the fact that both pre- and post-publication peer review failed. This can happen, even with papers that get a lot more eyeballs than this one did; Mike Caulfield has pointed to the widely influential Reinhart and Rogoff economics paper on the effects of debt on national economies, belatedly shown to be in error, as an apt analogy. Nevertheless, any such failure should trigger some introspection within a field regarding whether we should be doing more to cultivate robust community-based exploration of these studies, of which peer review is a part.
I highly recommend reading Doug’s post in full.
The “Course Signals” story originally covered here has recently gone international, with Britain’s prestigious Times Higher Education magazine picking up the Inside Higher Ed story and publishing it as an “Editor’s Pick”. Hopefully this will push the Course Signals team to answer questions asked of them nearly two months ago, questions that have still not been satisfactorily answered.
We realize those watching the posts on e-Literate over the past couple of weeks may have some questions about what the “Course Signals issue” is, what it isn’t, and why it is so important for the educational technology community to make sure Purdue is accounting for the recent issue discovered with their statistical approach. This explainer should get you up to speed.

What is Course Signals? Why is it important?
Course Signals is a software product developed at Purdue University to increase student success through the use of analytics to alert faculty, students, and staff to potential problems. Through using a formula that takes into account a variety of predictors and current behaviors (e.g. previous GPA, attendance, running scores), Course Signals can help spot potential academic problems before traditional methods might. That formula labels student status in a given course according to a green-yellow-red scheme that clearly indicates whether students are in danger of the dreaded DWIF (dropping out, withdrawing, getting an incomplete, or failing).
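The exact formula is proprietary, but conceptually it maps a weighted risk score onto a traffic light. Here is a minimal sketch of that idea; the inputs, weights, and cutoffs are invented for illustration and are not Purdue’s actual algorithm:

```python
def signal(previous_gpa, attendance_rate, current_score):
    """Map a weighted risk score to the green/yellow/red scheme.

    All weights and cutoffs are invented for illustration; Course Signals'
    actual formula is proprietary and more sophisticated than this.
    """
    # Normalize inputs to 0..1 (GPA on a 4.0 scale, the others as fractions).
    risk = (0.4 * (1 - previous_gpa / 4.0)
            + 0.3 * (1 - attendance_rate)
            + 0.3 * (1 - current_score))
    if risk < 0.3:
        return "green"   # on track
    if risk < 0.6:
        return "yellow"  # warning signs
    return "red"         # in danger of the dreaded DWIF

print(signal(previous_gpa=2.1, attendance_rate=0.6, current_score=0.55))  # yellow
```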
While the product is used to improve in-class student performance, the product is most often discussed in a larger frame, as a product that increases long-term student success. The product has won prestigious awards for its approach to retention, and the product is particularly important in the analytics field, as its reported ability to increase retention by 21% makes it one of the most effective interventions out there, and suggests that technological solutions to student success can significantly outperform more traditional measures.

What problems were found in the data supporting the retention effects?
Purdue had been claiming that taking classes using CS technology led to better retention. Several anomalies in the data led to the discovery that the experiment may suffer from a “reverse-causality” problem.
One such anomaly was an odd “dose-response” curve. With many effective interventions, as exposure to the intervention increases, the desired benefit increases as well. In the recent Purdue data, taking one Course Signals-enhanced course was shown to have a very slight negative effect, while taking two had a very strong benefit.
The story became even more complex when older data was examined. Early in the program taking one CS-enhanced course had a very substantial impact on retention, nearly equal to taking two CS-enhanced classes. But as the program expanded over the years, taking one CS-enhanced class started to show no impact at all. This behavior is not consistent with Course Signals causing higher retention.
I hypothesized a simple model to explain this shift: rather than students who took more CS courses retaining at a higher rate, what was really happening was that the students who dropped out mid-year were taking fewer CS classes because they were taking fewer classes, period. In other words, the retention/CS link existed, but not in a causal way. Unlike the Purdue model where taking CS-enhanced courses caused retention, this “reverse-causality” model explained why, as participation expanded, taking one CS-enhanced course might move from being a strong predictor to having no predictive force at all.
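A toy simulation makes the argument concrete. In the sketch below (all parameters are invented, and this is far simpler than the full simulation discussed next), retention is decided first and course counts second, yet students with two or more CS-enhanced courses still appear to retain at a much higher rate:

```python
import random

random.seed(42)

students = []
for _ in range(100_000):
    retained = random.random() < 0.75  # retention decided first, with no
                                       # Course Signals effect whatsoever
    # Dropouts leave mid-year, so they complete fewer courses overall.
    n_courses = random.randint(8, 10) if retained else random.randint(2, 5)
    # Each course independently has some chance of being CS-enhanced.
    cs_courses = sum(random.random() < 0.25 for _ in range(n_courses))
    students.append((retained, cs_courses))

def retention_rate(group):
    return 100.0 * sum(r for r, _ in group) / len(group)

two_plus = [s for s in students if s[1] >= 2]
zero_or_one = [s for s in students if s[1] < 2]
print(f"2+ CS courses:  {retention_rate(two_plus):.1f}% retained")
print(f"0-1 CS courses: {retention_rate(zero_or_one):.1f}% retained")
# A large gap appears even though CS had zero causal effect on retention.
```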
Michael Feldstein picked up on this analysis, and prodded the Purdue team for a response. When no response came, Alfred Essa, head of R&D and Analytics at McGraw-Hill, took my “back-of-the-envelope” model and built it out into a full-fledged simulation. The simulation confirmed that the reverse-causality model explained the data anomalies very well, much better than Purdue’s causal model. Purdue’s response to the simulation did not address the serious issues raised.

Does this mean Course Signals does not work?
It depends. Purdue has yet to respond to the new information in any meaningful way, and until they either release revised estimates that control for this effect or release their data for third-party analysis, we don’t know the full story. Additionally, there are some course-level effects seen in early Signals testing that will be unaffected by the issue.
However, Purdue’s recent response to Inside Higher Ed indicates that they did not control for the reverse-causality issue at all. If this is true, then the likelihood is that the retention impact of Course Signals will be positive, but significantly below the 21% they have been claiming.

But positive impact is good, right?
Not really. The great insight regarding educational interventions of the past decade or so is what we might term “Hattie’s Law”, after researcher John Hattie. Most educational interventions have some effect. Doing something is usually better than doing nothing. The question that administrators face is not which interventions “work”, but which interventions “work better than average.”
At a 21% impact on retention, Course Signals was clearly in the “better than average” category, and its unparalleled dominance in that area suggested that the formula and approach embraced by Course Signals formed the best possible path forward.
Halve that impact and everything changes. Peer coaching models such as InsideTrack have shown impact in the 10-15% range. Increased student aid has shown moderate impact, as have streamlined registration and course-access initiatives.
Additionally, other analytics packages exist that have taken a different route than Course Signals. Up until now, they have lived in the shadow of Purdue’s success. If CS impact is shown to be significantly reduced, it may be time to give those approaches a second look.

What is unaffected by the new analysis?
Until Purdue fixes and reruns their analysis, it is hard to know what the effects might be. However, there were a number of claims Purdue made that were not based on longitudinal analysis, and these should stand. For instance, students in Course Signals courses do tend to get more A’s and fewer F’s, and that data would be unaffected by this issue.
While that’s good, it’s not the major intent of at least some institutions interested in the system. What makes systems like this particularly attractive is their ability to pay for themselves over time by increasing retention.
There remains a question as to how a system that boosts grades could fail to boost retention. There are a couple of potential hypotheses. First of all, it is quite possible that when the numbers are rerun there will still be a significant, though reduced, retention effect, and that reduced effect is still congruent with the better scores.
Alternately, it could be that students in Course Signals courses score highly in Course Signals-enhanced courses, but at the expense of other courses. My daughter’s math teacher has a very strict policy on math homework which has whipped her into shape in that class, but this means she often delays studying for other things. Students with finite time resources can rearrange their time, but not always expand it.
Finally, for some nontrivial number of students, retention problems are not due to grades. Not to push the reverse-causality logic too far, but for some students low grades could be a sign of financial or domestic difficulty; fixing the grade would not address the larger problem.

What are the larger cultural implications?
As Michael has outlined in a different post, there are major cultural implications to this error, ones which partially indict the research analytics community’s approach to research. To my knowledge, the study was never peer-reviewed outside of its inclusion in conference proceedings, but it is one of the most referenced studies in learning analytics.
Technology does move fast enough that old publication cycles do not serve the industry well. But if pre-publication peer review does not exist, there are a host of things we need to do to make post-publication review work. We need to release more underlying data, invite more criticism, and separate the PR arm of many organizations from their research arm (or at least ensure more autonomy). Additionally, we may need to place more rigorous controls on conference presentations, and make sure that presentations making strong statistical claims undergo a more thorough and professional review.
The cultural implications of an error like this going undetected this long in a community that is supposedly a community of data analysts are also stunning, and will be the subject of a future post. For the moment we are still waiting for Purdue to engage honestly with the critique, and re-run their numbers after controlling for this effect. Hopefully that will happen later this week.
UPDATE: As Doug notes below, the paper did undergo a full peer review before its inclusion in the LAK conference. I was aware of that, but reading through the post, I realize that is not clear. As I mentioned, we’re looking at putting together a more detailed analysis on how we got here after we know better what the damage is, and will walk through those issues more thoroughly at that time. In the meantime, I’d love to start a conversation about that issue in the comments. Let’s assume that some analytics is sugar water, and some is useful medicine. How do we create a culture and a process that helps us separate one from the other? What’s preventing us from doing that now?
I shared the most recent graphic summarizing the LMS market in September 2012, and thanks to new data sources it’s time for an update. As with all previous versions, the 2005–2009 data points are based on the Campus Computing Project, and therefore reflect US adoption at non-profit institutions. This set of longitudinal data provides an anchor for the summary.
What I’ve been attempting to do lately is to expand the market definition beyond the US. Last year I used some heuristics:
The data has been adjusted to include international usage and online programs in order to capture the rise of online programs, including MOOCs, as a driving force in the future market. Keep in mind that there is no consistent data set to capture the entire market, so treat the graphic as telling a story of the market rather than being a chart of precise data. Sources for this summary include a combination of Campus Computing reports, ITC surveys, company press releases, and extrapolations from Blackboard’s and Pearson’s quarterly earnings. Caveat emptor. While the data captures a broader set of information than previous US-only data, it is primarily focused on North American institutions.
This year, thanks to George Kroner’s great work with Edutechnica, I can more confidently represent the model as covering more of the Anglosphere – the US, UK, Canada and Australia. This is done with a single data source for 2013 rather than interpretations among several sources.
Three items to note:
- Despite the addition of a new data source, the basic shape and story of the graphic have not changed. My confidence has gone up since last year, and the heuristics were not far off; the main correction is that Sakai is larger than shown last year.
- There are no dramatic changes to the market in general this year. The same players keep growing (primarily Canvas, followed by Moodle and Desire2Learn), and the same players keep shrinking (primarily Blackboard). The only new potential system of interest this year is OpenEdX.
- While the data is more solid for 2013, keep in mind that you should treat the graphic as telling a story of the market rather than being a chart of precise data.
Inside Higher Ed’s Carl Straumsheim has some reporting on the Course Signals data controversy. He was able to get Purdue research scientist Matt Pistilli on record about it. Here is the sum total of the quotes from the article:
Pistilli defended the claims about Signals’ ability to increase retention — with the caveat that more research needs to be done. “The analysis that we did was just a straightforward analysis of retention rates,” he said. “There’s nothing else to it.”
To ensure an empirically grounded analysis of Signals, Essa urged Purdue to give researchers access to as much data as possible. Pistilli said he is open to participate in that conversation, but pointed out that granting open access could violate students’ privacy rights.
With Signals marking its fifth anniversary this year, Pistilli said “it was probably just a matter of time for people to start looking for these pieces and begin to draw conclusions.” In that sense, the discussion about early warning systems resembles that of other ed-tech innovations, like flipping the classroom and massive open online courses, where hype drowns out any serious criticism.
Now, I understand that interviews necessarily get edited down for news articles, so I am not going to jump to any deep conclusions about Dr. Pistilli’s views or Purdue’s official positions from these three short paragraphs. But I will make two points that do not require a close reading.
1. Purdue’s credibility is on the line.
Purdue has made significant noise about their retention claims. They have published articles and made presentations. They have commercially licensed their system for use by other schools, based in part on the strength of those claims. We now have credible analysis calling some of those claims into question. For its own sake, as well as for that of the academic community, Purdue now needs to go on record with a response to the critique, either acknowledging its legitimacy and amending their claims or demonstrating why the analysis is off base. There is nothing in the quote above to indicate that either Dr. Pistilli or the university understand that they have a substantial problem which demands a substantive answer.
2. Purdue would not have to violate student privacy in order to answer the concerns being raised.
Course Signals collects substantial fine-grained data about student activity, some of which could be personally identifying. None of that data is necessary in order to respond to the questions being raised. To test the retention claims, we only need to see gross outcomes, which are easily anonymized. In fact, I would be surprised if Purdue does not already have an anonymized version of the data. But even if they don’t, the effort required to scrub it should be relatively modest and proportionate to the level of concern being raised here.
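To make that concrete, here is a minimal sketch of the kind of scrubbing involved. The record shape and column names are hypothetical (my illustration, not Purdue’s actual schema); the point is simply that the fields needed to test the retention claims are the non-identifying ones:

```python
import hashlib

# Hypothetical raw record: direct identifiers plus gross outcomes.
# Only the last three fields are needed to check the retention claims.
raw_records = [
    {"student_id": "A1234", "name": "Jane Doe", "entry_year": 2009,
     "signals_courses": 2, "retained_year2": True},
]

def scrub(record, salt="replace-with-a-secret"):
    """Drop direct identifiers, keeping only gross outcomes."""
    pseudo = hashlib.sha256((salt + record["student_id"]).encode()).hexdigest()[:12]
    return {
        "pseudo_id": pseudo,  # keeps rows distinct without being identifying
        "entry_year": record["entry_year"],
        "signals_courses": record["signals_courses"],
        "retained_year2": record["retained_year2"],
    }

anonymized = [scrub(r) for r in raw_records]
print(anonymized)
```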
There is no shame in getting research analysis wrong. It happens all the time, even to the very best researchers. This is why academia places such a high value on peer review, whether it takes place before or after publication. We are smarter as a group than we are individually. However, there is shame in using research on student success to promote the brand of an institution, and to make money, and then declining to open that research up to appropriate scrutiny by the academic community. I am not accusing either Purdue or Dr. Pistilli of doing so at this point. The critical analysis has only recently come to public attention, and I imagine that it takes a little time for any academic institution to formulate and approve an appropriate response. But the clock is ticking.
If Dr. Pistilli or any other appropriate representative of Purdue who can speak to the substance of the research would like to respond, we certainly would give them air time on e-Literate. It doesn’t have to be here; I am sure there are other appropriate forums. But we are happy to give them an opportunity to respond here if that would help cultivate a dialog.
Despite much talk about the demise of the LMS market, the end is nowhere in sight. Unlike many of the newer learning platform concepts (e.g. MOOCs, free platforms, unbundled learning platforms), the LMS market has an established business model and real revenues. Just today came news of an investment analysis report predicting that total LMS market (higher ed, corporate training, K-12) would triple in revenue by 2018, moving from $2.6B to $7.8B. The LMS ain’t sexy, but it’s still important.
This is why I have found it surprising how long it is taking to move from survey-based and anecdotal market information to harder data measured directly from actual LMS implementations. Until recently, that is: there are now at least two very useful sites available.
George Kroner, a former engineer at Blackboard who now works for University of Maryland University College (UMUC), has developed what may be the most thorough measurement of LMS adoption in higher education at Edutechnica (OK, he’s better at coding and analysis than site naming). This side project (not affiliated with UMUC) started two months ago based on George’s ambition to unite various learning communities with better data. He said that he was inspired by the Campus Computing Project (CCP) and that Edutechnica should be seen as complementary to the CCP.
The project is based on a web crawler that starts from national databases to identify each higher education institution, then goes out to the official school web site to find the official LMS (or multiple LMSs in official use). The initial data is all based on the Anglosphere (US, UK, Canada, Australia), but there is no reason the coverage could not expand.
Here’s what is interesting with this data-driven approach:
- The data already goes beyond just the US market, which has been a real challenge for market analysis.
- The coverage is for the vast majority of institutions rather than a sample. For the US, it is all institutions with more than 2,000 enrolled students, but for the others (UK, Canada, Australia) it is complete coverage of all institutions identified in official databases.
- Given the tie-in to official databases, the data can go beyond count of institutions and be scaled by enrollment.
- The data already includes additional information such as version number and whether the LMS is self-hosted or not.
- Due to the light touch, the data can be collected more frequently (several times per year). Eventually this will allow for longitudinal data on LMS migrations.
Of course no data set of this type can be 100% clean, and the Edutechnica site relies on partially-manual cleanup to remove pilot programs and small-scale usage of LMSs. The script counts an LMS if it is the official system at the college level or above. If an LMS is used by just a few faculty or a single program, then it is removed. George has indicated that over time the script is being updated, based on the results of the manual process, to do more of the data cleaning automatically.
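Edutechnica’s actual code is not public, so the following is only a sketch of the kind of pipeline described above. The function names, vendor fingerprints, and detection rule are my own guesses for illustration, not George’s implementation:

```python
import re
import requests

# Vendor fingerprints one might look for on a school's web pages.
# These patterns are illustrative guesses, not Edutechnica's actual rules.
LMS_SIGNATURES = {
    "Blackboard": re.compile(r"blackboard|bblearn", re.I),
    "Moodle": re.compile(r"moodle", re.I),
    "Canvas": re.compile(r"instructure|canvas", re.I),
    "Desire2Learn": re.compile(r"desire2learn|d2l", re.I),
    "Sakai": re.compile(r"sakai", re.I),
}

def detect_lms(school_url):
    """Fetch a school's site and look for LMS fingerprints."""
    try:
        html = requests.get(school_url, timeout=10).text
    except requests.RequestException:
        return []
    return [name for name, sig in LMS_SIGNATURES.items() if sig.search(html)]

def crawl(institutions, min_enrollment=2000):
    """institutions: records drawn from an official national database."""
    results = {}
    for inst in institutions:
        if inst["enrollment"] < min_enrollment:
            continue  # mirrors the US enrollment cutoff described above
        hits = detect_lms(inst["website"])
        if hits:
            # Carrying enrollment along lets adoption be scaled by
            # students, not just by count of institutions.
            results[inst["name"]] = {"lms": hits, "enrollment": inst["enrollment"]}
    return results
```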
The plan is to keep the site updated and free, with no plans to either share the institutional data or to charge for it. The reason for not sharing the full data set is to avoid a direct-marketing campaign by vendors.
What are some of the results? The first blog post shared the following LMS adoption numbers:
Given the data coverage, there are also interesting comparisons by country:
There is also some interesting data on product versions used and hosting status that is worth checking out.
LISTedTECH was created by Justin Menard, who is Business Intelligence Senior Analyst at University of Ottawa. I was not able to interview Justin, but the about page describes the site and data. First of all, the site is far broader in scope than just the LMS – and there are a ton of useful visualizations based on his dataset. It seems that most data is presented in Tableau for interactive visualizations. For this post I’ll just share some LMS data as images, but head on over for more. It seems like the first data was presented in 2012, but I could be wrong here (Justin, waiting for you in the comments).
In its first version it was a simple blog with lists and graphs. I had used data and research that I and co-workers had accumulated at work. The site was up about two months before I basically got a cease and desist. I had used part of a legal text from another website and hadn’t taken out their name (oops). They then accused me of copying their data. No chance in hell.
After coming back from vacation (this happened when I was at Disney World during our xmas vacation), I met with a colleague (who is also a lawyer) at work and asked for advice. She told me that since I was just doing this for fun (and they had money and lawyers), it would be simplest to just shut it down. So I did.
After licking my wounds (more like pride), I started discussing this with a friend and colleague. I came to the conclusion that I could not let this go (again pride or stubbornness). I started working on the data and adding links to the data I had in the DB. My goal was to redo the site but with proof of where the data could be found. It also made me realise how much data was out there and how much work this would be if I was to do it right.
Since money is not an issue (I’m helping a wealthy Nigerian family move a very large sum of money out of the country) I opted for Open source systems and modules. The version that is live is a simple Drupal 7 installation. My goal is a simple one. See if it’s worth pursuing this project.
The site is now in beta with version 1.3.
While Justin does not attempt to get 100% coverage, there are over 5,600 institutions in his database. In one post he mapped out the market-leading LMS per country:
In another post, he mapped the 451 partner institutions from the major MOOC vendors to determine the identity of their official LMS:
Update: This is not to say that the MOOC runs on the selected LMS as a platform, but that the rest of the school (outside of MOOC usage) runs a particular LMS.
Both of these sites are great, relatively new sources of information on the LMS market.
Update: I should point out that no one to my knowledge has independently verified the accuracy of the data on these sites. In order to gain longer-term acceptance of these data sets, we will need some method of verification.
The post New data available for higher education LMS market appeared first on e-Literate.
My father likes to say, “If you stick your head in the freezer and your feet in the oven, on average you’ll be comfortable.” Behind this pithy saying is an insight that is a little different from the “three kinds of lies” saying about statistics. It suggests that certain types of analysis lend a false coherence to the world. We honestly see patterns where there aren’t any. This is what appears to have happened with the studies Purdue University has done regarding Course Signals’ effectiveness.
From this desk here, without a stitch of research, I can show that people who have had more car accidents live, on average, longer than people who have had very few car accidents.
Why? Because each year you live you have a chance of racking up another car accident. In general, the older you live, the more car accidents you are likely to have had.
If you want to know whether people who have more accidents are more likely to live longer because of the car accidents, you have to do something like take 40-year-olds and compare the number of 40-year-olds who make it to 41 in your high- and low-accident groups (a simple check), or use any one of a number of more sophisticated methods to filter out the age–car accident relation.
The Purdue example is somewhat more contained, because the event of taking a Course Signals class or set of classes happens once per semester. But what I am asking is whether
- the number of classes a student took is controlled for, and more importantly,
- whether first-to-second-year retention is calculated as
  - the number of students who started year two / the number of students who started year one (our car accident problem), or
  - the number of students who started year two / the number of students who finished year one (our better measure in this case); a quick numeric sketch follows this list.
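To see how much the denominator matters, take a toy cohort (the numbers are made up purely for illustration): 1,000 students start year one, 200 of them drop out during the year, and 600 go on to start year two.

```python
started_y1 = 1000
finished_y1 = 800   # 200 dropped out during year one
started_y2 = 600

print(started_y2 / started_y1)   # 0.60: the "car accident" version
print(started_y2 / finished_y1)  # 0.75: conditioned on finishing year one
```

The two formulas can tell noticeably different stories whenever many students drop out mid-year.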
Pointing to the first of these posts, I suggested that a response from the Purdue researchers would be helpful. (It still would be.) Now Al Essa, recently of Desire2Learn and currently of McGraw Hill, has done a more mathematically rigorous analysis showing that Mike’s intuition appears to be correct. He took the Purdue findings, substituted the phrase “were given a chocolate” for “took a class using Course Signals,” and ran a simulation to see whether he could reproduce the Purdue results with no causal connection between the chocolatey intervention and the retention results:
The following are some results from the simulation. The first row displays retention rates for students who received no chocolates. The second row displays retention rates for students who received at least one chocolate. The last row shows students who received two or more chocolates. Why track students who received two or more chocolates? Because the authors of the study claim that two is the “magic number” where significant retention gains kick in.
The simulation data shows us that the retention gain for students is not a real gain (i.e., causal) but an artifact of the simple fact that students who stay longer in college are more likely to receive more chocolates. So, the answer to the question we started off with is “No.” You can’t improve retention rates by giving students chocolates.
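Essa’s post doesn’t reproduce his code, but the logic is easy to reconstruct. In the toy model below (my sketch, with made-up parameters), every enrolled student has the same per-semester chance of dropping out and, independently, of receiving a chocolate, so by construction the chocolates cause nothing:

```python
import random

random.seed(42)

N_STUDENTS, N_SEMESTERS = 100_000, 8
P_DROPOUT, P_CHOCOLATE = 0.08, 0.25  # per semester; made-up values

students = []
for _ in range(N_STUDENTS):
    chocolates, persisted = 0, True
    for _ in range(N_SEMESTERS):
        if random.random() < P_DROPOUT:
            persisted = False  # dropped out; stops accumulating chocolates
            break
        if random.random() < P_CHOCOLATE:
            chocolates += 1
    students.append((chocolates, persisted))

def retention(condition):
    """Share of students meeting the condition who persisted to the end."""
    group = [p for c, p in students if condition(c)]
    return sum(group) / len(group)

print(f"no chocolates:  {retention(lambda c: c == 0):.1%}")
print(f">=1 chocolate:  {retention(lambda c: c >= 1):.1%}")
print(f">=2 chocolates: {retention(lambda c: c >= 2):.1%}")
```

Even though chocolates do nothing here, the two-or-more group comes out far ahead, because accumulating chocolates requires surviving semesters. That is the selection effect Essa is describing.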
This is a problem that goes well beyond Course Signals itself for several reasons. First, both Desire2Learn and Blackboard have modeled their own retention early warning systems after Purdue’s work. For that matter, I have praised Course Signals up and down and criticized these companies for not modeling their products more closely on that work, largely based on the results of the effectiveness studies. So we don’t know what we thought we knew about effective early warning systems. The fact that the research results appear to be spurious does not mean that systems like Course Signals have no value, but it does mean that we don’t have the proof that we thought we had of their value.
More generally, we need to work much harder as a community to critically evaluate effectiveness study results. Big decisions are being made based on this research. Products are being designed and bought. Grants are being awarded. Laws are starting to be written. I believe strongly in effectiveness research, but I also believe strongly that effectiveness research is hard. The Purdue results have been around for quite a while now. It is disturbing that they are only now getting critical examination.
Update: Seeing some of the posts in the comments thread, I feel the need to make a clarification in fairness to Purdue. They have two important sets of research findings, only one of which is being called into question here. Their early findings, that Course Signals can increase student grades and chances of completion within a class, are not being challenged here. Those are important results. The findings being questioned come from their longitudinal analysis showing that students who use Course Signals in one class are more likely to do well in future classes. Even there, it is possible that Course Signals does have some long-term effect. The point is simply that the result the researchers got in their analysis looks suspiciously similar to the results one would get from a fairly straightforward selection bias.
The post Course Signals Effectiveness Data Appears to be Meaningless (and Why You Should Care) appeared first on e-Literate.
Update 10/26: We now have Rachel Levy and Nancy Lape (who was the researcher interviewed by USA Today) both agreeing with Darryl’s comments. That’s three of the four members of the research team. While I do not claim to understand how the reporter developed her story line (I have asked for comment), it is quite clear that the article does not represent the work of the Harvey Mudd research team and has a misleading headline and lede.
As a follow-up to my response to the USA Today article on flipped classroom research, there was a very informative comment on Google+ from Darryl Yong (one of the members of the Harvey Mudd research team). I thought this comment deserved to be seen by a wider audience, so I’m reproducing it in full.
Thanks, Phil, for this post. The USA Today article paints an inaccurate picture of our work and I wanted to try to clarify some things and continue this conversation with you and your readers. (I am only writing on my own behalf and not for my collaborators on this study.)
My biggest regret is that the article greatly oversimplifies things by portraying our study as an attempt to answer whether flipped classrooms work or not. That kind of research question is too blunt to be useful. Our goal is to better understand the conditions under which flipped classrooms lead to better student outcomes. As Phil and others here point out, there are many different manifestations of what we mean by the term “flipped classroom,” and we should be wary about talking about it as if there is one canonical implementation. How are the benefits of flipped classrooms affected by school and student contexts, and are there other benefits that we haven’t yet characterized? We don’t have any preconceived answers to these research questions. While folks might disagree with us about whether these research questions are interesting or not, they should at least know that we’re aware of the good work that others have already done and we’re trying to build on it rather than debunk it.
Phil rightly points out that Harvey Mudd College’s learning environment differs from other institutions, and that we should be cautious about extrapolating our experience here to other contexts. There are three important things folks should know about Mudd with regards to this study. First, there is a healthy culture of cooperation at Mudd, which is mostly residential and has about 800 undergraduates, and students already spend a lot of time working together in groups in and out of class. Second, our students generally have high self-efficacy and positive attitudes about learning science and mathematics. Third, a lot of us faculty at Mudd currently use formative assessment, active learning and group work in our classes (minute papers, iClickers, think-pair-share, etc.). Some of us participating on this study have been flipping our classrooms for a while and the four of us got together because we are interested in better understanding the conditions under which classroom inversion could lead to improved student outcomes at Mudd.
There are lots of studies that show that active learning, formative assessment, group work, project-based learning, just-in-time learning are all measurably good things for students. If someone, through classroom inversion, adds these demonstrably good things to a learning environment where those things are not the norm, then we would expect positive changes in student attitudes and learning. This leads us to wonder: are flipped classrooms good for students because they create more time for these demonstrably good things, or are there other benefits that we haven’t yet characterized? If it is the case that flipped classrooms have positive effects for students mainly because they free up more time for active learning and those other good things, what happens when you already do those things in your “lecture” class? Is there some sort of saturation effect? And if it turns out that there are positive aspects about the delivery of instruction via videos, we should try to understand them too.
The idea of our study is to try to control for as many variables as we reasonably can so as to understand the effect of classroom inversion on real students’ performance and attitudes. The instruction, whether via video or in person, is given using the same set of instructional materials (handouts, slides, etc.); all students complete exactly the same tasks, assignments, quizzes, exams. We do our best to randomly assign students to treatment or control groups. We are trying to measure both student learning gains (using vetted instruments when possible) and attitudes in three courses from three different disciplines and to track students’ performance in “downstream” courses.
Many studies of flipped classrooms involve comparing students taking different versions of a course (often taught in different semesters/quarters). Students in the flipped version of the course get to practice more and/or learn more material—if students practice more or get exposed to more material, you would hope for improved student outcomes. So if you’re still reading this far in, you might be asking why one would expect improved student outcomes from classroom inversion if students aren’t asked to practice more or learn more material in the flipped classroom. Because we’re trying to avoid giving students in our flipped classrooms different tasks, we are mainly using class time in our flipped sections for students to work together on homework problems. What if those problems are designed so as to elicit potential misconceptions? Could another potential benefit of flipping be to allow instructors to more quickly identify and resolve students’ misconceptions, and might that lead to increased student learning? That’s one potential idea, and we’d love to hear your thoughts about what other things one might be able to do as a result of having more time for instructor-mediated learning.
I was not present when Nancy was being interviewed by the USA Today reporter, but I strongly believe the reporter took Nancy’s words out of context. Nancy was not saying that flipped classrooms are dubious in general. She was saying that given our study design and Mudd context, we have not yet seen any difference in student outcomes. Of course, this was only the first year of the study and we are admittedly working out all of the kinks in our flipped classes.
As Chuck Severance points out, the danger is that flipped classrooms become the next flavor-of-the-month and that people adopt it without really understanding why it might lead to better student learning. This is our small way of contributing to what is known about flipped classrooms, and we’d appreciate your ideas and constructive feedback.
Per Twitter, Rachel Levy stated that Darryl’s comment also reflects her views (she is another member of the research team). I appreciate both of their replies and this important clarification to the article.
The post Comment from member of research team on USA Today flipped classroom article appeared first on e-Literate.
What’s interesting about Eric von Hippel is that his research hits on the common themes of the open education movement, but does so in a slightly different key.
Briefly, there are a number of intersecting debates about MOOCs. There is what Reich frames as the Dewey/Thorndike debate about what learning is. There is the centralized/de-centralized debate about what the web does best. There is the debate about whether MOOCs are disruptive or innovative or neither, and the discussion over how much ability to remix teachers need to make classroom learning work well (the answer, probably, is quite a bit).
But people on both sides of the debates are often driven by a larger question that we are not naming directly enough: “What are the sources of innovation?”

User Innovation
This is the question that von Hippel has been investigating for over thirty years now. And if we see innovation not as something that has happened, but as something we want to continue to happen, this may be the most important question of all.
The traditional answer, says von Hippel, is that product industries (“suppliers”) are the innovators. In this view a company comes across a set of “sad users”, finds what their problems are, and designs (via research and development) a solution.
But is that really how things happen? Since the 1980s von Hippel has been looking at the history of “transforming” innovations in various industries. These are innovations which haven’t just offered a slightly better or slightly cheaper product, but ones that have radically altered what is possible in an industry. A great example of such a transforming innovation is the center pivot irrigation system, considered by many to be on par with the invention of the tractor in the history of agricultural technology:
Before the center pivot system, farmers had to draw water from a single well and then pipe that water throughout the farm. The fundamental insight of the system was that instead of piping the water all over the farm (with the resulting leakage) you could drop a well in the center of a section of crops, and then use a gigantic rotating sprinkler to irrigate a large section of crops from that well. If you’ve ever flown over the country, you’ve seen what such farms look like from the air:
What von Hippel points out is that major innovations like these almost always come not from suppliers, but from “lead users”, a set of highly motivated and skilled users for whom the current technology or practice is restrictive. In this case, for example, the first center pivot system was created by an individual in the 1950s who wasn’t initially looking to market it, but simply to solve the set of local problems he was facing:
The Valley Corporation came in later and perfected the system, allowing it to work more reliably with less user intervention, and prepared it for mass adoption. But the innovation was not theirs.
Look behind roughly 75% to 80% of all major innovations and this is the story you find again and again, from the first heart-lung machine, to the development of wider skateboards, to protein-enhanced hair conditioners. On the web, people were running makeshift blogs well before Blogger, net-synced folders well before Dropbox, video-plus-question sequences well before Coursera. What smart companies do, for the most part, is not “innovate” but find what “lead users” are hacking together and figure out how to make that simpler for the general population to tap into. Research often plays its most important role after the fact: not in producing designs, but in allowing us to determine which lead-user designs work best in common scenarios, and in understanding what, exactly, is making them work.

EDUPUNK and User Innovation
For many readers, this process may call to mind the EDUPUNK wave of 2008. The term was coined by Jim Groom in a conversation with Brian Lamb and subsequently elaborated on by a number of edubloggers, eventually hitting the New York Times (if I remember correctly) as a word of the year.
What some may not remember is that the coining of the term was a reaction to Blackboard’s announcement that it was moving to a Learning 2.0 platform, one that would (supposedly) finally integrate the technologies it had worked so hard to keep out of education because they weren’t perceived as serious or safe.
Lead users like Jim had gone out and done their own thing, hacking together syndication feeds, wikis, and modded themes into a workable replacement for a learning management system that did far better at meeting the emerging needs of the open classroom. And when it was looking like they were out of Blackboard for good, Blackboard came up with this system of blogs and other “2.0” features which replicated much of the functionality, but at the cost of hackability. Here’s Jim in that piece:
Corporations are selling us back our ideas, innovations, and visions for an exorbitant price. I want them all back, and I want them now!
Enter stage left: EDUPUNK!
My next series of posts will be about what I think EDUPUNK is and the necessity for a communal vision of EdTech to fight capital’s will to power at the expense of community.
I’ve never fully gone for the “capital’s will to power” bit of that, although I know that piece remains important to Jim. But for me the piece that resonated — and still resonates — is the disturbing vision of an educational-technology-complex that is aligned against the communities of innovators that it supposedly serves.
While a company like Blackboard, which produces tools to create things, may seem qualitatively different from an irrigation system company, it’s not different in the respect that it codifies practice. To the farmer coming up with an irrigation plan, the devices and options available to her are just as much building blocks in an overall design as the Blackboard gradebook or discussion forum.
And as with other industries, most of the practice that Blackboard codifies (and the rudimentary architecture to support it) was developed outside of Blackboard by user innovators. And that’s fine. But the message Blackboard sent (and I think intentionally sent) over the years to skittish administrators was “Now that we’ve offered these innovations in the product itself, you can rein in all your experimenters and put them back in the box.”
As Jim so rightly points out, such actions and attitudes destroy innovation communities rather than foster them. And it’s not just Blackboard either. The entire education reform-industrial complex has often waged war on educational communities, based on the perception that questions of educational practice are mostly solved, and if we could get teachers to just teach using the centrally specified method (or foundation-approved test) we’d be set. Technology thought leaders even make bizarre claims that there is no innovation going on in education, outside, of course, the Silicon Valley entities here to save us.
People have termed this approach “a war on teachers.” It’s that, certainly. But since a subset of those teachers is where the innovations of the future are likely to come from, it’s a war on innovation as well.

The Sources of Educational Innovation
Once we see the question “What is the source of educational innovation?” as a core question of the debate, certain things become clearer. In fact, the answer an individual has to that question is probably highly predictive of what technologies they favor.
The current breed of xMOOCs emerged as a fluid hacking together of different educational elements in places like Stanford. In this environment, teachers using the system were encouraged to extend and supplement the product through both technological and pedagogical innovation.
But, as Bob Dylan would say, things have changed. As MOOCs have reoriented to see providers of blended learning (rather than the students themselves) as a significant piece of their customer base, they have failed to invite that user base into the culture of innovation, presumably due to the erroneous belief that innovation begins at the top and then filters down to the masses. The licensing, technology, content, and supporting community are all designed to preserve their innovation as shipped, in an effort to protect it from the users.
On the other hand, EDUPUNK technologies (varieties of cMOOCs, ds106, FemTechNet, Open Course Frameworks, P2PU) have continued to engage their users, asking the users to experiment, remix, hack, and redistribute. They are, in the words of von Hippel, “user innovation toolkits” which encourage users to alter, and even subvert, given designs. Because they codify much practice in convention rather than code (see, for example, the use of tag-based RSS and the harnessing together of readily accessible technologies) they retain a fluidity that promotes experimentation. They are, in a word, so EDUPUNK.
You can look at either of these paradigms, and ask which one is more innovative, or which one fits with your model of education. We can ask which framework is more effective or more suited to various local conditions. But the key question for administrators and policy makers is not just which system is more effective today, but which framework will continue to grow and adapt in the future.
And on this question the historical record is fairly clear — open frameworks which allow lead users to hack are the systems that will produce long-term gains. As a case in point, take Lego Mindstorms, a project built over 7 years by LEGO engineers which was significantly improved by user hackers within three weeks of its release.
Rather than fight against those hackers, LEGO decided to embrace them. And maybe this is where I differ from Jim in this respect — I don’t think gutting user communities is necessary for for-profit enterprise. Counterexamples like the one below show that the interests of investors and users can be aligned. In fact, given LEGO’s explosive growth in the face of a recession, one could see a more enlightened capitalism as a force for good:
I believe that this idea of fostering user innovation informs the rhetoric of Instructure around the Canvas LMS (the reality will emerge over time). It’s the business plan of Lumen Learning’s Candela OER Project, which acts as a publisher, polisher, and integrator of products produced and maintained by their user base. It’s something along the lines of what Alan Levine is proposing in his recent Shuttleworth grant proposal, and what Jim Groom and Tim Owens have been pitching under the Reclaim Project banner.
And at the same time, it is the antithesis of much of what we see out of Silicon Valley, which, not well versed enough to invent the wheel, instead reinvents the tree-trunk roller and then mounts a campaign to get lead users to give up their makeshift wheel-and-axle systems as too ad hoc.
The situation is further complicated, because local knowledge is “sticky” in two major ways. First of all, many educators and educational technologists have extensive tacit knowledge of what works that is difficult to express to people who design products. As von Hippel points out, when such knowledge is sticky at the point of use (in this case the classroom), it makes sense to push design functions downstream.
Knowledge is also sticky in another way in education: it resists generalization. Despite what Udacity might tell you, there is no “magic formula.” Rather, there are dozens, perhaps hundreds, of magic formulas, the success and applicability of which are determined by the subject and skills being taught, the specific capacities of the students, and the nature of the local learning environment. What works in one situation is not always applicable to other situations.
When knowledge is sticky in this way, the importance of hackability to innovation is even greater. Yet while industry moves more and more toward recognizing the importance of user-driven innovation, the educational-reform-industrial complex still treats such innovation as a disease in need of a cure.

The Last Innovators
The truth is that Salman Khan, Sebastian Thrun, Andrew Ng and others know this at heart — they are all, in fact, former lead users who solved their own problems with technology and then took their solutions to a broader market. And that’s wonderful: we’ve benefited from their contributions.
But they are only a fraction of a fraction of the user innovators out there. We can’t afford to regard these figures as the last innovators who will ever walk the earth. If we wish to engage in ongoing innovation, we need to focus on generating conditions that foster more communities of such people, not fewer. That means making sure that educational technology is as hackable as farm equipment, shampoo, and skateboards. That means choosing technology for your campus based on what your most creative and effective users need, so that they can advance your local practice, and steering away from lowest-common-denominator technology. It means looking to our practitioners to lead the way, and then asking industry to follow. And ultimately it requires that we cease to see innovation as a set-and-forget product we buy, and engage with it as a process and a culture we intend to foster.
Photo/Image Credits: Center pivot system: USDA, via Wikipedia; Kansas fields: U.S. satellite image via Wikipedia; Center pivot prototype: T-L irrigation; Jim Groom as EDUPUNK: bavatuesdays; Tree-trunk roller: Jonnie Hughes.
The post Educational Technology and the Sources of Innovation appeared first on e-Literate.
Michael and I have been very impressed with the articles from Mike Caulfield, an edublogger who writes at Hapgood. Mike is director of blended and networked learning at Washington State University Vancouver, and he and Michael first met at a Lumen Learning event this summer. In addition to writing at his blog, Mike Caulfield has also written an excellent EDUCAUSE Review article along with Amy Collier and Sherif Halawa, titled “Rethinking Online Community in MOOCs Used for Blended Learning”.
I’m happy to say that Mike will be doing a series of posts for us, starting today. Please welcome him.
Update 10/25: Bumped comment from Darryl Yong, a member of the research team, into its own post here.
USA Today published an article today titled “‘Flipped classrooms’ may not have any impact on learning,” based on research from four Harvey Mudd professors. The research is backed by a “$199,544 grant from the National Science Foundation to study the effects of the flipped classroom on students’ learning.” This is newsworthy, right? A real research report with NSF funding finding no statistical difference in learning outcomes from flipped classrooms would seem to contradict much of the recent promise of ed tech.
Upon closer reading, however, there are some major problems with the story. Exaggerated claims by ed tech enthusiasts are not helpful, but neither are exaggerated claims by ed tech skeptics. We at e-Literate have been critical of both flavors (witness our analysis of San Jose State claims, Desire2Learn claims, and edX claims for examples of the former).
Let’s review today’s story as an example of the latter.
In a flipped classroom, students watch their professors’ lectures online before class, while spending class time working on hands-on, “real world” problems.
The potential of the model has many educators thrilled — it could be the end of vast lecture halls, students falling asleep and boring, monotone professors.
This is a decent summary, although I would argue there should not be a one-size-fits-all mentality. Flipped classrooms, even where successful, should not replace all lecture-based classes. But for national media, this is not a bad description.
But four professors at Harvey Mudd College in Claremont, Calif. who are studying the effectiveness of a flipped classroom have bad news for advocates of the trend: it might not make any difference.
This is the lede, and the claim that we should examine. What is the basis of their study?
Though their official research is just beginning, the professors flipped their STEM classrooms as a pilot during the 2012-2013 academic year and gathered some first impressions on the matter.
While [Professor Nancy] Lape stresses that their preliminary research is just that — preliminary — she says the benefits of flipping a classroom are dubious.
During this pilot, each professor taught two sections of the same course — one “flipped” and one traditional, using the same material as much as possible.
So there is no report yet, and the study is not complete. But they are ready to draw conclusions? I’d be willing to bet that the vast majority of readers will remember the headline and lede and not the qualifier of “preliminary research.” Given the nature of this article, there is no way for people to examine the team’s results, so we are supposed to just trust the research team and the reporter.
The potential of flipped classrooms, or any redesign, is not based on just changing delivery. It often comes from changing or enhancing the learning content to fit the course redesign. This study seems to force-fit traditional material into a flipped classroom. While that might be appropriate for their classes, it is not the only option.
And drawing conclusions from eight classes – four traditional, four flipped – taught by the same four professors? While that is great for self-reflection and experimentation, I’m glad they aren’t extrapolating their preliminary findings.
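For a rough sense of scale, here is a back-of-the-envelope power calculation. It is my own sketch, assuming the section is the unit of analysis and a standard two-sample t-test, which may not match the team’s actual design:

```python
from statsmodels.stats.power import TTestIndPower

# With only 4 flipped and 4 traditional sections, what standardized
# effect size could the pilot detect at 80% power?
detectable = TTestIndPower().solve_power(nobs1=4, alpha=0.05, power=0.8)
print(f"smallest detectable effect: d = {detectable:.2f}")
# The answer comes out well above d = 2, an enormous effect. Failing to
# find a statistical difference in a sample this small says little about
# whether modest but real effects exist.
```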
“I would say that the fact is that there is no statistical difference,” Lape says. “People are really gung ho about the (flipped) classroom, but there’s no real results.”
Wow. They have enough data to claim the fact of no statistical difference and no results on flipped classrooms.
Does the research team know about the work of the National Center for Academic Transformation or the Open Learning Initiative and their documented results on course redesign? There are other studies showing promising results from the flipped classroom concept, including the recent redesign at San Jose State University. None are fully conclusive for all cases, but to claim that there are “no real results” seems disingenuous.
Did I mention (thanks to reminder from @GlobalHigherEd) that Harvey Mudd has a 9:1 student to teacher ratio? That’s not exactly a good basis for extrapolating results to the broad usage of flipped classrooms. If all colleges had 9:1 ratios, it would be difficult to find any course redesign with improved results. Not impossible, but the bar would be set much higher.
Maybe the research team will be cautious about advising other faculty what to do, given their very unique experience at Harvey Mudd.
Professors, too, had to spend considerably more time making and editing the videos and crafting engaging, hands-on sessions for their classes, she says.
Given these drawbacks, the fact that the actual learning outcomes seemed unaffected by the switch suggested that it might not be worth the hassle, Lape says.
“(The professors’) lives might be easier and their students might be happier if they just do a traditional class,” she says.
Yikes. Those are some pretty audacious claims to make in national media based on preliminary results from eight classes taught by the research team themselves, at a school with a 9:1 student-to-faculty ratio. At the end, the reporter adds some needed context.
Andrew Miller, an education consultant who teaches online classes for a variety of universities, agrees that benefits such as students’ ability to review material are promising, but says nothing will change if professors don’t handle the “flip” correctly.
For example, the newly freed-up class time can be daunting for professors, especially those who are particularly gifted at lecturing, he says.
Sometimes these professors aren’t able to come up with good hands-on activities and resort to filling the time with even more lecturing.
“If you’re not a good instructor, flipping the classroom won’t really ensure better learning,” he says. “If you aren’t doing something to fill that space, it won’t do you any good.”
And to be fair, the quoted professor at the end softens her tone.
Lape says she hopes those within academia take a more critical look at flipped classrooms.
“It’s a hot topic, and there are reasons why I think people believe it will be a good method,” Lape says. “But I would really put the call out to more people to really look at this.” [emphasis added]
That is a call that I support.
There could be an argument that this article is a case of a reporter trying to find a sensational topic in a nuanced report. But the real problems in this article seem to come from direct quotes from one of the research professors, despite the qualifier of “preliminary.”
We should expect better from both ed tech enthusiasts and ed tech skeptics, especially when claims borrow the credibility of official research. I look forward to the full report from the Harvey Mudd researchers.
The post A response to USA Today article on Flipped Classroom research appeared first on e-Literate.
Last week Michael announced a new project of ours called e-Literate TV. To recap, the idea is that there are new groups involved in decisions that impact ed tech and online education. Deans, provosts, presidents, boards of trustees, state legislators, and even national media are newly interested in topics with different levels of understanding and different perspectives.
This new situation can be frustrating for e-learning professionals who have been involved in ed tech for years, but we cannot and should not wish away this new interest. Instead, there is an opportunity to develop new ways to engage the various groups in discussions about the usage of technology to enable change in education.
This is the goal of e-Literate TV. One way we see e-Literate TV being used is by e-learning professionals sharing easy-to-consume ~10-minute videos with various stakeholders as a method to enable more productive conversations. As such, all of the video segments will come with a Creative Commons license for easy sharing.
At EDUCAUSE 2013, we had a chance to interview several people as part of developing the first series. We also joined In The Telling, our partners in video production and the transmedia platform, in hosting a reception as part of the announcement.
Here is the full trailer that we shared at the reception, and it includes scenes from the interviews at the conference.
And for a few photos from the reception:
We’re very excited about the project and will share more snippets before the expected January 2014 launch.
This is just a quick note on behalf of my friends at the Apereo Foundation to note that their reception will be at 6:30 PM at the Hilton tonight at EDUCAUSE. For some reason, it got left off the program. Details are here. I, unfortunately, will not be able to join, but if you’re interested in talking to good people doing good work in higher ed open source, then drop on by.
Josh Kim wrote three predictions at Inside Higher Ed for the EDUCAUSE 2013 conference, and I particularly agree with the basis of #2:
Prediction 2: Adaptive Learning Platforms Will Be the Toast of the Party
Everyone will want to talk to Knewton. The ASU / Pearson / Knewton partnership is a huge deal. Knewton has the technology, relationships, funding, and management team to make a huge impact.
I’ll be looking at EDUCAUSE at the other adaptive learning players. Where are they focusing their platform work? What deals and relationships do they currently have? How big is their market penetration? What is the quality of their leadership team and the employees they have at EDUCAUSE?
I’m betting we will see at least one major adaptive learning vendor announcement. A purchase, a big collaboration deal, or a new huge round of funding.
I also expect much of the discussion this year to be on adaptive learning. But one risk of this zeitgeist (if it comes to pass) is that terminology becomes fuzzy and often devoid of meaning. Hey, get your adaptive here. You want to be adaptive, don’t you? We are the adaptive makers… and we are the dreamers of dreams.
What does adaptive learning mean? I don’t believe anyone can describe all the concepts accurately and thoroughly in one place, but I did see a video from Knewton that is helpful. They have a “Knerds on the Board” blog series that includes various Knewton staff giving short video explanations of key concepts. In a recent post, Jess Nepom described the differences among differentiated, personalized, and adaptive learning, which I have paraphrased below.
- Differentiated Learning describes the case where there are different pathways that students can take within a learning environment, typically organized as pre-set categories.
- Personalized Learning describes the case where there is a different pathway for each individual student, often implemented in a rules-based method with a decision tree. Students might take a diagnostic test on the first day that will be fed into a rules engine to lay out that individual’s path and content.
- Adaptive Learning is data-driven and continually takes data from students and adapts their learning pathway to “change and improve over time for each student”.
In Knewton’s world, these three are steps towards the ideal – Differentiated is step 1, Personalized is step 2, and Adaptive is step 3. I suspect that many other platform vendors share this view of the world.
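To make the distinction concrete, here is a minimal sketch contrasting steps 2 and 3. This is my own illustration, not Knewton’s algorithm; the thresholds and the crude moving-average mastery estimate are made up for clarity:

```python
# Step 2, "personalized": a one-time diagnostic feeds a static rules
# engine that lays out the whole pathway up front.
def personalized_path(diagnostic_score):
    if diagnostic_score < 0.4:
        return ["remedial unit", "core unit"]
    elif diagnostic_score < 0.8:
        return ["core unit", "stretch unit"]
    return ["stretch unit", "enrichment unit"]

# Step 3, "adaptive": a running mastery estimate is revised after every
# response, and each next item is chosen from the current estimate.
# (A crude stand-in for real mastery models.)
def adaptive_step(mastery, correct, rate=0.3):
    mastery += rate * ((1.0 if correct else 0.0) - mastery)
    next_item = "harder item" if mastery > 0.7 else "easier item"
    return mastery, next_item

print(personalized_path(0.55))  # the path is fixed on day one

mastery = 0.5
for answer in [True, True, False, True]:
    mastery, item = adaptive_step(mastery, answer)
    print(f"mastery={mastery:.2f} -> serve {item}")
```

The design difference is that the personalized path never changes after the diagnostic, while the adaptive loop re-decides after every response.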
While this video is helpful for basic clarity on adaptive learning and related concepts, it makes the implicit assumption that the machine should select learning pathways for the students: that algorithms relying on big data are the way to go. But this is only one version of how to effectively design learning around the student.
Another approach is to empower students to select their own learning pathway, either as a pre-set category (described above as differentiated learning) or even by creating their own pathway that adjusts over time based on the learning process and interactions with other learners. This gets close to the Connectivism model behind cMOOCs.
It will be interesting to see if the various vendor demos and conference sessions include descriptions of what is meant by differentiated, personalized or adaptive learning, and if presenters describe the key issue of who selects the pathway – the instructor, the student, or the machine.
The post Differentiated, Personalized & Adaptive Learning: some clarity for EDUCAUSE appeared first on e-Literate.