By Michael Feldstein
ASU’s Lou Pugliese was kind enough to invite me to participate in a panel discussion on “Next-Generation Digital Platforms,” which was really about a soup of adaptive learning, CBE, and other stuff that the industry likes to lump under the heading “personalized learning” these days. One of the reasons the panel was interesting was that we had some smart people on the stage who were often talking past each other a little bit, because the industry wants to talk about the things that it can do something about—features and algorithms and product design—rather than the really hard and important parts that it has little influence over—teaching practices and culture and other messy human stuff. I did see a number of signs at the conference (and on the panel) that ed tech businesses and investors are slowly getting smarter about understanding their respective roles and opportunities. But this particular topic threw the panel right into the briar patch. It’s hard to understand a problem space when you’re focusing on the wrong problems. I mean no disrespect to the panelists or to Lou; this is just a tough nut to crack.
I admit, I have few filters under the best of circumstances and none left at all by the second afternoon of an ASU/GSV conference. I was probably a little disruptive, but I prefer to think of it as disruptive innovation.
Here’s the video of the panel:
The post No Filters: My ASU/GSV Conference Panel on Personalized Learning appeared first on e-Literate.
By Phil Hill
The National Center for Education Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS) provide the most official data on colleges and universities in the United States. Fall 2014 is the third year for which IPEDS has collected distance education data.
Let’s look at the top 30 online programs for Fall 2014 (in terms of total number of students taking at least one online course). Some notes on the data source:
- I have combined the categories ‘students exclusively taking distance education courses’ and ‘students taking some but not all distance education courses’ to obtain the ‘at least one online course’ category;
- Each sector is listed by column;
- IPEDS tracks data based on the accrediting body, which can differ for systems – I manually combined most for-profit systems into one institution entity as well as Arizona State University;
- See this post for Fall 2013 Top 30 data and see this post for Fall 2014 profile by sector and state.
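The category combination described in the first note above is simple arithmetic per institution. Here is a minimal sketch of that step; the field names and numbers are hypothetical illustrations, not the actual IPEDS survey variable names or real enrollment counts:

```python
# Hypothetical per-institution counts in the style of the IPEDS distance
# education data (made-up field names and numbers for illustration only).
enrollments = [
    {"institution": "College A", "exclusively_de": 1200, "some_but_not_all_de": 800},
    {"institution": "College B", "exclusively_de": 300, "some_but_not_all_de": 700},
]

# Combine the two IPEDS categories into a single "at least one online
# course" figure, as described in the notes above.
for row in enrollments:
    row["at_least_one_online"] = row["exclusively_de"] + row["some_but_not_all_de"]

for row in enrollments:
    print(row["institution"], row["at_least_one_online"])
```

The real work, of course, is in the manual steps the notes describe (combining for-profit systems into single entities), which cannot be reduced to a sum.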
The post Fall 2014 IPEDS Data: Top 30 largest online enrollments per institution appeared first on e-Literate.
By Phil Hill
The National Center for Education Statistics (NCES) and its Integrated Postsecondary Education Data System (IPEDS) provide the most official data on colleges and universities in the United States. I have been analyzing and sharing the data in the initial Fall 2012 dataset and for the Fall 2013 dataset. Both WCET and the Babson Survey Research Group also provide analysis of the IPEDS data for distance education. I highly recommend the following analyses in addition to the profile below (we have all worked together behind the scenes to share data and analyses).
- WCET’s initial analysis of Fall 2014 data
- WCET’s comparison of Fall 2014 to past years
- BSRG’s annual report on distance education using Fall 2014 data
- WCET’s update on the data quality issues with IPEDS data
Below is a profile of online education in the US for degree-granting colleges and universities, broken out by sector and for each state.
Please note the following:
- For the most part, the terms distance education and online education are used interchangeably here, but they are not equivalent: distance education can include courses delivered by a medium other than the Internet (e.g., correspondence courses).
- I have provided some flat images as well as an interactive graphic at the bottom of the post. The interactive graphic has much better image resolution than the flat images.
- There are three tabs below in the interactive graphic – the first shows totals for the US by sector and by level (grad, undergrad); the second also shows the data for each state; the third shows a map view.
- Yes, I know I’m late this year in getting to the data.
If you select the middle tab, you can view the same data for any selected state. As an example, here is data for Virginia in table form.
There is also a map view of state data colored by number of, and percentage of, students taking at least one online class for each sector. If you hover over any state you can get the basic data. As an example, here is a view highlighting Virginia private 4-year institutions.
For those of you who have made it this far, here is the interactive graphic. Enjoy the data.
The post Fall 2014 IPEDS Data: New Profile of US Higher Ed Online Education appeared first on e-Literate.
By Phil Hill
This following excerpt is based on a post first published at The Chronicle of Higher Education.
With all of the discussion around the role of online education for traditional colleges and universities, over the past month we have seen reminders that key concerns are about people and pedagogy, not technology. And we can thank two elite universities that don’t have large online populations — MIT and George Washington University — for this clarity.
On April 1, the MIT Online Education Policy Initiative released its report, “Online Education: A Catalyst for Higher Education Reforms.” The Carnegie Corporation-funded group was created in mid-2014, immediately after an earlier initiative looked at the future of online education at MIT. The group’s charter emphasized a broader policy perspective, however, exploring “teaching pedagogy and efficacy, institutional business models, and global educational engagement strategies.”
While it would be easy to lament that this report comes from a university with few online students and yet dives into how online learning fits in higher education, it would be a mistake to dismiss the report itself. This lack of “in the trenches” experience with for-credit online education helps explain the report’s overemphasis on MOOCs and its underemphasis on access and nontraditional learner support. Still, the MIT group did an excellent job of getting to some critical questions that higher-education institutions need to address. Chief among them is the opportunity to use online tools and approaches to instrument and enable enhanced teaching approaches that aren’t usually possible in traditional classrooms.
The core of the report, in fact, is based on the premise that online education and online tools can enable advances in effective pedagogical approaches, including constructivism, active learning, flipped classrooms, problem-based learning, and student-centered education. It argues that the right way to use technology is to help professors teach more effectively:
“Technology can support teachers in the application of the relevant principles across a group of students with high variability. In fact, technology can help tailor lessons to the situation in extremely powerful ways.
The instrumentation of the online learning environment to sense the student experience and the ability to customize content on a student-by-student basis may be the key to enabling teachers to provide differentiated instruction, informed by a solid foundation in cognitive science. Modern online courses and delivery platforms already implement some of these concepts, and provide a framework for others.”
But there is value in seeing what happens when that advice is ignored. And that’s where an incident at George Washington University comes in. If technology is just thrown at the problem with no consideration of helping educators to adopt sound pedagogical design, then we can see disasters.
On April 7, four students who took an online program for a master’s degree in security and safety leadership from George Washington’s College of Professional Studies filed a class-action lawsuit against the university for negligence and misleading claims. As reported by The GW Hatchet, a student newspaper:
For a non-paywall version of the full article, good through 4/26, follow this link.
Update: What interesting timing! See Michelle Pacansky-Brock’s post on a very similar topic.
The nature of online classes varies dramatically, much like face-to-face classes. But, in both scenarios, the teacher matters and the teaching matters. When an online class is taught by an engaged and empathetic instructor who seeks to be aware of the needs of her students, the asynchronous nature of online learning may become a benefit to students, not a disadvantage. This is contingent upon the design of the course, which is where instructional designers or “learning engineers” can play an important role. Many instructors, however, play both roles — and those who do are often the professors who experience deep transformations in their face-to-face classes as a result of what they learned from teaching online.
The post A Moment of Clarity on the Role of Technology in Teaching appeared first on e-Literate.
By Phil Hill
Just over four years after Providence Equity Partners acquired Blackboard and three years after they brought in Jay Bhatt to replace co-founder Michael Chasen, the company hired Bill Ballhaus as its new CEO at the beginning of January. 100 days in, Ballhaus is starting to make changes to the organization and providing some insights into future corporate directions.
The most significant change is a reorganization that combines strategy, product management and marketing in one group under Katie Blot. In an interview Michael and I had with Ballhaus and Blot earlier this week, they described the primary motivation for the organizational change as the need to more tightly align those functions. Also significant is that this change means the departure of Mark Strassman, SVP Product Marketing & Management, and Tracey Stout, SVP of Marketing & Sales Effectiveness. Blackboard provided the following statement.
We are deeply grateful for the many contributions both Mark and Tracey have made at Blackboard. Both of these individuals have been critical to driving the transformation and evolution of our PMM and Marketing organizations.
Katie Blot joined Blackboard in 2008 as President of Global Services and has been SVP of Corporate & Industry Strategy since early 2015. Her long experience at Blackboard is worth considering, as is the fact that both departing executives worked with Jay Bhatt at Autodesk earlier in their careers and were brought into Blackboard as part of his new management team. I am not suggesting that the purpose of the move was based on corporate pedigree, but I am suggesting that the move effectively changes the balance of how much ed tech experience, and even Blackboard experience, rests with the top company executives.
When we asked Ballhaus about lessons learned after his listening tour with customers, he told us that the company must do a few things very well. And the top of his priority list is the Learn LMS product family. This focus on products stands in contrast to Bhatt’s broader and more vague focus on solutions. Michael noted the change in tone back in the 2013 BbWorld keynote:
The big corporate keynote had to be one of the strangest I’ve ever seen. CEO Jay Bhatt ran through a whole long list of accomplishments for the year, but he only gave each one a few seconds as he rattled through the checklist. He mentioned that the company has a new mission statement but didn’t bother to explain it. It took nearly an hour of mostly talking about big macro trends in education and generalities about the categories of goals that the company has set before he finally got around to new product announcements. And then commenced what I can only describe as a carpet bombing run of announcements—a series of explosions that were over by the time you realized that they had started, leaving you to wonder what the heck had just happened.
At that same 2013 keynote (and in Michael’s post) Blackboard announced a major UX overhaul for Learn (the Ultra part) and a move to the cloud (the SaaS part). By the 2015 BbWorld conference Michael shared how Ultra was a year late and not yet ready for schools to test. The company has tripped over itself in not getting product out the door and not being able to create effective messaging. Just what are Learn Ultra and Learn SaaS, and when will real colleges and universities get to evaluate them?
When we asked when Learn Ultra would be available for schools to actively pilot (real courses, real students, with major integrations to student rosters, etc.), it was interesting to hear both Ballhaus and Blot take a very different approach and give what appear to be much more conservative estimates. Learn Ultra should be available for limited-functionality pilots for specific faculty (e.g. for courses not using the LMS heavily) by Fall 2016 and more broadly for institutions in Spring 2017, leading to general availability in Summer or Fall 2017.
It is encouraging that Blackboard appears to be increasing its focus on getting core LMS product updates out the door, and we have also noticed a tighter message about Ultra over the past two months. There is now a Learn Ultra preview for educators, where people can sign up and play around with courses both in Original View (what you know as Learn 9.1) and Ultra View (the new UX). Part of the purpose of this preview is to enable customers to get a better feel for Learn SaaS and also to help them determine whether a Fall 2016 or a Spring 2017 Learn Ultra pilot makes sense for them.
We will bring you more analysis of the Learn Ultra preview and of the broader organizational changes at Blackboard in future posts. Stay tuned, and you can also sign up for more information on our upcoming e-Literate LMS subscription service.
"2016 is going to be an eventful year for the LMS" ® by @mfeldstein67
— Phil Hill (@PhilOnEdTech) February 19, 2016
The post Blackboard CEO’s First 100 Days: Reorganization and Learn Ultra Updates appeared first on e-Literate.
By Michael Feldstein
Which CEO has recently said or done all of the following:
- Suggested to an audience of VCs and ed tech entrepreneurs at the GSV conference that the importance of big data in education has been overstated
- Told that same audience that the biggest gains from adaptive learning come when it is wrapped in good pedagogy delivered by good teachers
- Asked former CIOs from Harvard and MIT, both of whom are senior company employees, to develop collaborations with the academic learning science community
- Accurately described Benjamin Bloom’s two-sigma research, with special attention to the implications for the bottom half of the bell curve
- When asked a question by an audience member about an IMS technical interoperability standard in development, correctly described both the goals of the standard and its value to educators in plain English
Answer: David Levin of McGraw Hill.
Yes yes, those are just words. But I have gotten a good look at some of what their ed tech product and data science groups have been up to lately, and I have spoken to Levin at length on a few occasions (and grilled him at length on two of them).
My advice: Pay attention to this company. They are not screwing around.
By Phil Hill
This is the eighth year I have shared the LMS market share graphic, commonly known as the squid graphic, for (mostly) US higher education. The original idea remains – to give a picture of the LMS market in one page, highlighting the story of the market over time. The key to the graphic is that the width of each band represents the percentage of institutions using a particular LMS as its primary system.
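The band widths reduce to simple arithmetic: each LMS’s share is its count of primary-system institutions divided by the total. A minimal sketch with made-up counts (the numbers below are purely illustrative, not LISTedTECH data):

```python
# Illustrative primary-system adoption counts for one year.
# These numbers are made up for the example, not actual market data.
adoptions = {
    "Blackboard Learn": 1100,
    "Moodle": 800,
    "Canvas": 600,
    "D2L Brightspace": 400,
    "Other": 100,
}

total = sum(adoptions.values())

# Band width for each LMS = share of institutions using it as primary system.
shares = {lms: count / total for lms, count in adoptions.items()}

for lms, share in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{lms:20s} {share:6.1%}")
```

Repeating this calculation for each year, then stacking the bands, yields the squid shape.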
This year marks a significant change based on our upcoming LMS subscription service. We are working with LISTedTECH to provide market data and visualizations. This data source provides historical and current measures of institutional adoptions, allowing new insights into how the market has worked and current trends. The current graphic gets all of its data from LISTedTECH. Previous versions of the graphic used an anchoring technique, combining data from different sources in different years, with interpolation where data was unavailable; now every year’s data is based on this single source.
This graphic has been in the public domain for years, however, and we think it best to keep it that way. We hope that the new service will provide valuable insight for subscribers while also improving what we continue to share here on the e-Literate blog.
Since we have data over time now and not just snapshots, we have picked the end of each year for that data. For this reason, the data goes through the end of 2015. We have 2016 data but chose not to share partial-year results in an effort to avoid confusion.
A few items to note:
- As noted in previous years, the fastest-growing LMS is Canvas. There is no other solution close in terms of matching the Canvas growth.
- Blackboard continues to lose market share, although the vast majority of that reduction over the past two years has been from customers leaving ANGEL. Blackboard Learn lost only a handful of clients in the past year.
- While the end-of-life does not occur until next year, Pearson has announced LearningStudio’s end-of-life for the end of 2017.
- With the new data set, the rapid rise and market strength of WebCT become much more apparent than in previous graphics.
- There is a growing line for “Other”, capturing the growth of those systems with less than 50 active implementations as primary systems; systems like Jenzabar, Edvance360, LoudCloud Systems, WebStudy, Schoology, and CampusCruiser.
- While we continue to show Canvas in the Open Source area, we have noted a more precise description as an Open Core model.
The post State of Higher Ed LMS Market for US and Canada: Spring 2016 Edition appeared first on e-Literate.
By Phil Hill
As we roll out our upcoming LMS subscription service here at e-Literate (see Michael’s post for initial description), we suspect that many of the e-Literate readers will be interested, but not all. We value the community here at e-Literate and want to ensure that the blog site itself remains as it’s always been – ad free, uncluttered, and with the same rough amount and breadth of content and discussions.
To help maintain the blog site’s feel, we have created a second email subscription for those people who would like more information on the LMS subscription service – when it’s going to be available, what the reports will look like, summaries of LMS analysis from the report and curated from the blog site, etc. You should see this new signup on the top of the right column in the desktop view, right under the signup for e-Literate posts. We will also include the form within posts that are relevant to analysis of the LMS market.
We are not going to automatically add current e-Literate subscribers to this new list, so if you’re interested in learning more on the service and getting content updates, sign up for the new subscription at this link.
Update: I apologize for any confusion as we test the signup button. The fields embedded in the post were not working in all cases, so we have replaced them with a link to a web signup page. Thank you for your patience as we fix any remaining issues.
The post Signup For More Information On LMS Subscription Service appeared first on e-Literate.
By Michael Feldstein
Not too long ago, Phil and I wrote a post about our long, slow process of realization that our blogging at e-Literate and our consulting at MindWires are not two mostly unrelated things but really two halves of a whole. And we teased the idea that these two worlds would be coming together soon.
Today we’re ready to pull back the curtain a little bit on what we’ve been working on for the short term and offer some hints about what we’re thinking about for the medium term. We have some fairly audacious ambitions for the long term, but we don’t expect to get there overnight. In fact, we are going to start with a humble and somewhat unlikely (but hopefully useful) first paid subscription offering, which we will be making available in just a couple of weeks under the e-Literate brand. It will provide information and analysis above and beyond our continuing free content here on the blog (to which we remain strongly committed).
We’re going to be releasing an LMS market dynamics report, in partnership with LISTedTECH. We’d like to explain why we’re starting there, what we hope the report will accomplish, and where we will go from there.

The Much Maligned Minivan
Phil has called the LMS “the minivan of ed tech.” I remember back around 2000, I saw a Harvard Kennedy School executive education course start with a clicker question:
Which are you more likely to buy?
It turned out that there was a pretty interesting correlation between participant reactions to different case studies and their minivan/SUV split. Their car preferences provided a window into their larger views of the world and themselves.
But a lot has changed in the last sixteen years. While a Dodge Caravan is still nobody’s answer to a mid-life crisis, and while there are certainly still people who wouldn’t be caught dead in a minivan, the current generation of young parents, the ones who grew up in Dodge Caravans, are often unembarrassed to admit how much they like their family vehicle. It’s convenient. It solves a lot of problems. It’s not life-changing, it’s far from perfect, but it mostly works. Some of that change in attitude has been because of innovation in the product category, but we’re talking about innovation on the level of sliding doors on both sides that can close by remote control. Most of the change is really about people growing up with the product, getting used to it as a feature in the environment, and getting better at figuring out how to take advantage of what it does well and work around what it doesn’t do well.
We have reached a similar stage with the LMS. We have entered the late majority phase. Most faculty that we speak to these days take the LMS for granted and, while they will often grumble about some aspect that they are unhappy with, more and more of them are making significant use of the platform—more than just posting a syllabus and some announcements. More of them will use adjectives like “useful,” unprompted, when talking about their particular LMS. I even heard one faculty member describe his school’s particular LMS as “humane” recently.
This is a more profound change than may be immediately obvious. One of the big reasons that I originally got interested in LMS design (back in 2000, around the same time as the minivan/SUV poll) was that I wanted to make it easier for more faculty to try teaching online, and the main reason I wanted to see that was that I knew from experience that moving to online teaching forces an instructor to think about pedagogy. I wanted to have that conversation with faculty. I wanted to help them think about how they could teach differently. The LMS is a kind of a gateway drug for ed tech and, to a certain degree, for course redesign. Many faculty who end up teaching on WordPress or Mediawiki started on an LMS and then got passionate enough and clear enough about what they wanted that it launched them on a quest. Slowly, more and more faculty are beginning to have those pedagogy conversations. And the LMS is very often the catalyst.
At the same time, the LMS market remains mystifying. There was a time when everybody was sure that Blackboard would buy all the major LMS vendors and become a monopoly. Then a lot of people thought that the market would react against Blackboard’s looming monopoly by moving en masse to open source and staying there. Now people are wondering how long it will take for Instructure to completely own the market. And in the middle of all that, there are persistent predictions that the LMS will die any time now, either killed off by WordPress or disrupted by some startup or made free by Google or something else. We have gotten our sliding doors on both sides that close by remote control, but mainly, the LMS remains a stubbornly persistent category overall, and it has been hard to predict how people will move from one product to another—even if you follow the news obsessively and talk to a lot of people about it, like we do.
In some ways, maybe that’s OK. Maybe the old-timers and hard-liners are just going to have to make room in their hearts for the much maligned minivan. But at the same time, if we really are at a moment when change is creating new possibilities for conversations about improving teaching, then we don’t want to miss the opportunity to take maximum advantage. As Phil and I have both written about recently, there is currently a very poor connection between the real faculty and student needs that are surfacing on campuses and the ways in which those needs get translated, prioritized, and communicated to the folks who make the enabling technologies. This is one major reason that the market behavior is both hard to understand and frustratingly slow to respond sometimes. Education may not be “broken,” but the ed tech market most definitely is.

Increasing Capacity for Change
We have had an increasingly strong intuition for maybe the last six months or so, based on various bits and pieces that we have observed in the course of our daily blogging and consulting work, that the LMS landscape is far more ripe for dramatic change than it has been in a long time—maybe since the market reached saturation around 2003 or 2004. More importantly, we think this change is partly driven by changes on campus and, depending on how the LMS market evolves, can either support and amplify that positive change or fail to. There is an opportunity to increase capacity for pedagogical sophistication on campuses, if only we can get educators and ed tech developers working together in the right ways. This is exactly the kind of problem that Phil and I most love to take on, both in our blogging and in our consulting.
We decided to start with an “outside in” look at the market data—who is moving to what, how quickly campuses are moving, and so on. There are some folks who came before us in this regard who are worth calling out. The Campus Computing Project was the only source of this kind of data for a long time, and the market has benefited from its long-term analysis. Edutechnica expanded the universe of available data by building a web crawler. After much thought, we decided to work with LISTedTECH, partly because their unique collection of historic and current data allowed us to ask and answer new questions. Some of what we learned in the process of that partnership work confirmed our previous intuitions, and some of it surprised us. We’re going to hold off on sharing the details for just a couple more weeks until the report is ready to be released, but one thing we can say now is that the process of compiling the report increased our conviction that we are at an inflection point, both for the LMS market and for the broader positive use of technology in education.
We will be issuing a new report with updated analysis every six months, with regular updates (including the same level of free coverage that we have always given on e-Literate) more frequently.
In the fall, we will take the next step by offering a more “inside out” analysis aimed at supporting colleges and universities through the often painful process of selecting an LMS. We do a lot of consulting work of this type—not picking the LMS for the schools, but helping them to make better decisions for themselves by getting high-quality input from all their important campus stakeholder groups, organizing a rational process, asking good questions of the vendors, and so on. Part of this work is our analysis of what is going on with each of the LMS platforms—the development strategies, differentiators, and above all what we hear from customers—and part of it is advising on the decision-process elements that I just mentioned. We are going to boil as much of that as we can down to subscription content and tools that campuses can use either with or without our facilitation help.
We’ll have more to say about the report in the next couple of weeks as we get closer to its release, and we’ll also have more to say in the coming months about our larger ambitions, both for helping with LMS selection and for helping more broadly to improve the pace of positive educational change on campuses, supported by appropriate enabling technologies.
Update (PH): To sign up for more information on the upcoming service, go to this page.
- Not even the Grand Caravan.
By Michael Feldstein
My latest Chronicle column is on how inherently difficult it is to evaluate learning science claims, particularly when they get boiled down to marketing claims about product efficacy, and how deep academic distrust of vendors makes this already incredibly difficult challenge nearly impossible.
Here’s where I stand on vendor participation in ed tech and learning science research:
On the one hand, vendors have access to data and resources that, for a variety of reasons, are difficult or even impossible for universities to access. They sometimes have millions of students using their products and relatively few internal barriers to doing certain kinds of large-scale effectiveness research (although to a certain degree, territoriality, bureaucracy, and poorly designed data architectures are universal problems).

On the other hand, vendors should not under any circumstances be allowed to define the research agenda and arbitrate the validity of learning sciences claims for all of education.

Theoretically, there is a simple solution to this. Higher ed knows how to do science. It knows how to review scientific claims. Vendor research should be put through the peer review process. Vendors should publish enough information that their results can be evaluated and, when feasible, replicated. Their reputations should be based in part on whether they are doing (or, at least, properly using) good scientific research. After all, most of the researchers at these companies are PhDs who are trained in the academic research process. There is no reason in the world why these companies can’t contribute to real, rigorous, peer-reviewed dialog and progress in the field.
Unfortunately, the current state of the learning sciences research community is weak and fragmented. There is incredibly good work going on in pockets here and there, but overall, it’s a mess. This is bad for many reasons. It lets vendors get away with junk science claims while failing to reward good behavior. Worse, it effectively abdicates the central role that academia should be playing in driving the research agenda.
The post We Need a More Robust Learning Sciences Research Community appeared first on e-Literate.
By Phil Hill
I have previously written a primer on competency-based education (CBE) using S.P.T. Malan’s seminal article as the basis for understanding the key elements. Chris Mallett, formerly associate provost at Western Governors University (WGU) and currently VP for online programs at Northeastern University, has just posted a broader historical survey on CBE that is well worth reading. His extensive first-hand knowledge of the development of CBE in higher ed adds another reason to read the article.
In “What’s old is new again . . . a CBE long read”, Chris traces the origins of CBE back further than does Malan.
The earliest American competency-based education initiatives are said to have emerged through the development of the training programs used to quickly prepare soldiers, airmen, and others who were needed in support of the nation’s efforts in World War II (Joyce, 1971). The ability to deliver “precise and rapid training which considered the learner chiefly in terms of his capacity to respond to the training” was of paramount concern at the time (Joyce, 1971, p. 21). According to Gagne (as cited in Joyce, 1971), training programs deployed for these purposes were developed in four phases:
1. Program goals were identified with particular emphasis on behavioral elements and competencies to be achieved.
2. Behavioral elements and competencies were organized into coherent units.
3. Training exercises that aligned with desired behaviors and competencies were developed.
4. An evaluation system to assess acquisition of the desired behaviors and competencies was developed. Feedback from the system was provided to trainees and their instructors.
As seen in this excerpt, the article provides useful summaries of key concepts to help readers understand what CBE is and what it isn’t. Jumping ahead to the 1980s we get to a point that is crucial to understand – the focus on adult learners.
Early 1980s competency-based education programs used development methods and service practices similar to those used by the competency-based practitioners of the 1960s and 1970s. According to Kasworm (1980), the programs identified specific learning outcomes and used both pre- and post-assessment instruments to determine if competencies had been achieved and mastered. Course content, instructional strategies, and processes all varied by program and were deployed consistent with the needs of students. Most programs employed an adult-learner orientation and aspired to achieve the certification of mastery, not just minimal competence (1980).
All of the programs studied by Kasworm were designed with the realities of adult learners in mind and centered on prescribed objectives and outcomes (i.e., the competencies). Most offered flexibility of time and participation so that “students may begin their learning at any time, progress in their learning of competencies at their own pace, and have opportunities to return to inadequately learned concepts or skills until mastery” (Kasworm, 1980, p. 19). Many programs offered personalized instruction. Pre-assessments were often used to diagnose skills and knowledge gaps. Post-learning assessment instruments provided for the certification of mastery. Most programs allowed for variable instruction, allowing students to select learning resources and experiences that would best meet their specific needs. Some competency-based education programs provided advisement or counseling. Some featured established competencies with an aligned, standard curriculum. Others directed students to curricular resources but left it to the learner to choose an appropriate path on his or her own (Kasworm, 1980).
Given the wide range of programs trying some flavor of CBE in the past few years, it is useful to see acknowledgement of the diversity of approaches.
The competency-based education practices Klein-Collins (2012) examined varied dramatically by institution. Some institutions emphasized competencies within traditional, instructor-led, credit-hour based systems. Klein-Collins described these institutions as offering “competency-focused programs” (2012, p. 31) in that leaders had applied a competencies framework to their existing, credit hour-based programs. Other institutions, she said, used “purely competency-based programs” (2012, p. 31) in lieu of traditional systems, creating efficiencies, learning flexibility, and economic advantages in the process. Among the latter group, institutions Klein-Collins examined all relied on the use of assessments to verify students’ competencies and awarded credits and credentials strictly according to students’ performance with such instruments (2012).
Chris indicates that there will be future posts on the subject. I hope that he addresses two in particular:
- What are the limits of CBE, or under what conditions should CBE be attempted?
- What are examples of assignments and assessments within CBE programs that go beyond simple quizzes and multiple-choice assessments?
For those interested in CBE, go read the full article.
By Michael Feldstein
Those of you subscribed to the site by email may have noticed that you didn’t get anything in the last 24 hours. (Or maybe you didn’t notice, since the email never came.) We are aware of the problem. The new system sends a message once a day and is next scheduled to send an email digest every morning at 4 AM Eastern Time. For whatever reason, it did not engage this morning, but we believe that it is working correctly now. You should get something in your inbox that includes this post tomorrow morning. Obviously, if you are reading this by email, then all is well.
Thanks for your patience as we work out the kinks in the new system.
By Michael Feldstein
For those of you who subscribe to e-Literate by email, be aware that we’re switching over to a new system for handling emails today. Among other things, we’ve had complaints that a few people had trouble unsubscribing.
Which is bad.
The new plugin, Mailchimp, will hopefully solve this problem while enabling us to do some new things we’ve been thinking about as well. (More on that soon.)
We will switch over to the new plugin shortly after I post this message and will be publishing one or two new posts in the next 24 hours. So if you don’t receive any emails by this time tomorrow, or if you have any other email-related problems, then please let us know.
By Michael Feldstein
Amy Collier was kind enough to post the video and notes from a recent keynote she gave. (For those of you who don’t know Amy, she is the Associate Provost for Digital Learning at Middlebury College and well worth following. She doesn’t blog that often, but when she does, she has interesting things to say.) A central element of her talk was the “learnification” of education; that is, how the teacher disappears from the conversation about “good” education and the whole thing gets reduced to learners gobbling up little learning objects to get their competency level-up, like a human game of Pac-Man. This is one reason why Phil and I felt it was important to reframe “personalized learning” as a set of teaching practices that we called “undepersonalized teaching” in our recent EDUCAUSE Review article. When we visited actual classes (including Middlebury’s) and saw what actual teachers were doing as part of our e-Literate TV case study series, it was clear that there were thoughtful teachers using these tools in interesting ways that enabled them to spend more time on the high-value aspects of their teaching craft. But the student-as-Pac-Man story is the one that gets heard.
One additional point about the learnification of education, which Amy briefly hints at in her keynote but which you will miss if you only read her notes, is that “learnification” does not come only from the “neo-liberal” emphasis on measurement of learning outcomes, career readiness, scale, and student as consumer. The Gert Biesta paper that she references actually pins some of the blame on constructivism, which very often gets flattened into some version of “Hey, teacher, leave them kids alone!” In its historical context, constructivism was a useful antidote to the Pac-Man reductivism of the prevailing educational theory of the time, but like the modern, mechanized variant that it is most frequently deployed to counter today, it tends to de-emphasize the role of the teacher to the point where that role all but disappears.
The picture that I have in my own head for the role of the teacher involves a kind of a bell curve for formal education. At the left-hand tail is the basic stuff that people need to know in order to start reasoning and learning for themselves—facts, basic concepts, habits of mind and of study, and so on. The specifics of what goes into that category are somewhat up for grabs and depend a lot on the student’s long-term goals. But whatever they are, many of them lend themselves to the sort of learning objectification/adaptive learning approach just fine. On the far end of the bell curve of formal education (but hopefully in the fat part of the bell curve of normal adult life) is the complex reasoning that students apply on their own to synthesize what they are seeing into their own individual perspectives on the world. It’s the sort of thing that one hopes goes on in an undergraduate’s capstone or senior thesis project (and often sooner for different students in different classes). At that point, the training wheels of formal education come off.
The fat middle part of the curve is where good teachers should be spending most of their time. It’s the work of getting the students ready to ride their bikes without the training wheels. The “not yetness” that Amy talks about in her keynote (and elsewhere) is basically about taking your hand off the kid’s shoulder to give her a few seconds to ride on her own. It’s the moment when she realizes that she is free to steer, to ride ahead, and yes, maybe to fall. You catch her if you see her in danger of falling badly enough to hurt herself, but not before. It’s the important “how” part of Vygotsky’s zone of proximal development.
The shape of the curve changes somewhat depending on the student population, but a classic bell curve image fits well both with the typical student progression in a four-year college and with the work to be done in a typical 100- or 200-level course. Most of the work of college should be about coaching students on how to think for themselves. Unfortunately, teachers face some practical problems in getting to what is supposed to be the fat part of the curve. The first one is that some students get stuck on the foundational stuff—different students in different places—and the standard lecture-format class doesn’t give teachers much opportunity to see where they are stuck and help them to get unstuck. As a result, teachers either have to deal with the “slowest car on the freeway” problem in their class or else decide to let more students fail rather than taking the time to help them get unstuck. (Or both. If you slow down too much, the students who understand most of the basics tune out and can miss the few critical pieces that they still need to learn.) Second, we have trained students through that very same lecture format that the basics are not something that they should even try to master for themselves. Basics are transmitted during class time as lectures.
Personalized learning can be seen as a set of techniques designed to fix those two problems through the use of supporting technology. Proper use of adaptive “skill and drill” platforms, student retention early warning systems, and the like can actually free up the teacher to create that environment of “not-yetness” in the classroom. It’s not either/or but both/and.
- Honestly, I still don’t really know what that word means, other than “bad” and “vaguely associated with people who have money.”
By Phil Hill
The following essay, co-written by Russ Poulin and Phil Hill, was originally published at Inside Higher Ed in response to articles in the New York Times and Inside Higher Ed regarding whether New York state should sign the SARA agreement.
A coalition of consumer groups, legal aid organizations and unions object to the state of New York joining an agreement that would change how colleges offering distance education courses in the state would be regulated. As coalition members asserted in an Inside Higher Ed article, the state would be ceding its authority to other states. Students would be left with no protection from predatory colleges and it would make it easier for “bad actors to take advantage of students and harder for states to crack down on them.”
That all sounds ominous. It would be, if it were true.
Even in the digital era, the regulation of educational institutions is left to each state. The resulting array of requirements confuses both students and institutional faculty and staff. The State Authorization Reciprocity Agreement (SARA) was created to apply consistent review standards across the states. An institution approved in its home state is eligible to enroll students (within limits) in any other SARA member state. As of this writing, 36 states have joined in a little over two years. That number may approach 45 by the end of 2016.
SARA means that there is now a consistently applied set of regulations over distance education when students from one state take courses from an institution in another SARA state. Chief critic Robert Shireman, a senior fellow at the Century Foundation and former official at the U.S. Department of Education, cites Iowa as proof that “some states have discovered they can’t add more qualifications,” as if that were a surprise. Reciprocity agreements depend upon consistency. If Iowa wishes to change a policy, there is a process for regulators in the state to suggest a change. States enter into the agreement openly knowing that consistency is a requirement.
Currently, many states — notably including New York — have no regulations in place to protect their in-state students who enroll in courses from many out-of-state colleges. SARA’s critics depict New York as “a national leader in protecting its citizens from unfair business practices”. If a college has no other physical presence in New York other than enrolling students in an online course, it is not regulated and those students are not protected. The state has not allocated any funds to regulate the estimated hundreds of colleges from throughout the country currently serving online students in the state. Asking each state to regulate the institutions headquartered in their state regardless of where they serve students is a much more reasonable solution. Put another way, SARA increases the amount of regulatory oversight of distance education, but does it in a manner more relevant to today’s economy.
To be fair, New York has been aggressive in pursuing bad actors in the for-profit education sector, as evidenced by its $10.25 million settlement with Career Education Corporation. It is worth noting, however, that the lawsuit was largely based on brick-and-mortar schools that have nothing to do with SARA. In addition, this action was brought by the New York attorney general’s office and was not the result of education-based regulation. There is a relevant section in the SARA policy stating that nothing precludes “a state from using its laws of general application to pursue action against an institution that violates those laws” and another stating that “nothing precludes the state in which the complaining person is located from also working to resolve the complaint”.
The reality of SARA hardly qualifies as “ceding the ability to guard its citizens against abusive practices” as a Century Foundation letter objecting to New York signing the SARA agreement claims.
What would be lost if New York were not to sign the SARA agreement? There is certainly a downside for institutions offering distance education courses and programs for out-of-state students. It might surprise readers of the letter, but fully 70 percent of students who take all of their courses at a distance do so from public and non-profit institutions — institutions like Empire State College, a long-time leader in distance education that is part of the SUNY system. Furthermore, the large for-profit institutions referenced in the article already have the budget and a history of obtaining state-by-state approval. It is the smaller-profile non-profits that have the most difficulty obtaining authorization to serve students in different states.
A reciprocity agreement between Massachusetts and Connecticut is cited as an alternative. As best we can tell, it allows each state to continue using its own current regulations. This is not reciprocity and does not improve the consumer protection landscape for students or institutions.
Were New York to avoid signing the agreement, students who live in the state would end up with fewer choices, primarily from fewer non-profit institutions that can operate there. Under SARA, New York students actually would have more consumer protection than currently exists as well as regulatory support for any complaint process, including from in-state agencies. Additionally, states systematically working in concert through SARA will more quickly find and deal with institutions that treat students poorly. This is far better than hypothetical, unfunded regulatory oversight by New York trying to operate independently from any other state.
New York has the opportunity to sign an agreement that would expand the regulatory oversight of distance education programs, would leave the state with the same ability to go after bad actors as they have done in the past and would increase choices for resident students — particularly working adults — seeking to get a valuable degree that is only enabled by distance education. It would be a mistake to let a complaint based on hypotheticals and misrepresentations of reality derail this progress.
Director, Policy & Analysis
WCET – the WICHE Cooperative for Educational Technologies
The post IHE Essay: Getting the political facts straight about State Authorization Reciprocity Agreement appeared first on e-Literate.
By Phil Hill
An edited version of this post was first published at The Chronicle of Higher Education
Let’s admit it, there can be some real tension when a college is faced with choosing a new learning-management system, or any software used by more than one department.
Since the decision involves the administrators who will support the system — commonly called an LMS — and professors who will use it, who should lead the process? Should staff members just get input from faculty members, or should professors vote on the final decision? Or should professors run the process?
This is when distrust between the two sides can emerge. IT administrators may fear that professors will make the pace unbearably slow with an overly deliberative approach. And professors may assume that IT has already determined what system to buy, and may not take their input seriously. Or they may worry that the meetings will be dominated by those needy power users from the psychology department.
In one sense, the LMS has been a huge success. In just a few years starting in the late 1990s, adoption of these systems by colleges went from zero to some 90 percent of all American institutions.
Actual adoption by professors, however, has been a different, slower story. Just over half of students reported using an LMS in most or all of their courses as recently as two years ago, two decades after the creation of modern learning-management software. In short, professors aren’t as sold on using an LMS as administrators are.
For the selection of software used by professors, evaluations and decisions should be anchored by an understanding of the problems to be solved and not just the solutions. A lot of the tension in decision making on technology in higher education comes from jumping too quickly into discussions of specific features. Michael Feldstein described the nature of a better process that focuses on needs rather than features:
“Higher education needs to get better at academic needs assessment. That requires an entirely different and deeper set of questions than which features are important to put on a checklist. It requires an in-depth exploration of how teaching and learning happens in various corners of the campus community and what capabilities would be most helpful to support those efforts.”
To understand academic needs, it helps to gain a better understanding of the people making the decisions on whether and how to use an LMS. For a majority of institutions, this means professors. They are seen as the primary users, more so even than students. But one mistake to avoid is lumping all faculty members into an amorphous mass that can be measured with a simple metric of how fast they adopt the system. How many professors use these features? How many faculty members are satisfied with the system?
In tech circles there is a popular notion of a “technology-adoption curve,” first proposed by Everett Rogers and extended by Geoffrey Moore. The technology-adoption curve creates categories of adopters over time that include innovators, early adopters, early majority, late majority, and laggards. (I prefer to use “holdouts” instead of “laggards” to remove the implicit assumption that new technology should be adopted in all cases.)
Moore described the enormous difference between the innovators and early adopters on one side and the holdouts on the other. Early adopters tend to be risk takers with the technology in question, willing to experiment and willing to fill in the gaps of cool ideas that are not complete solutions. Early majority tend to be pragmatic, risk-averse, wanting complete and proven solutions. This is the “chasm” that traps many technologies and prevents them from being widely used.
The natural description that Mr. Moore used was the concept of “Crossing the Chasm,” assuming that, as a market matures, technology providers should pick which side of the chasm to serve, with the riches being available in the mainstream, majority case.
We don’t have it that easy in education, and we should think of adoption somewhat differently than a consumer tool like LinkedIn. Ed tech should not be a market to be conquered but rather a continuous process of improving student learning and meeting institutional goals. Faculty members are not just end users to be converted and trained. We will always have a subset of professors who are ed-tech enthusiasts and often drive the exploration of different innovations, and we will always have a larger subset of faculty members who may or may not be interested in technology in the classroom and don’t have the time or inclination to be proactive in figuring it out.
Whether a technology such as an LMS should be used, or used more deeply, depends on the teaching and learning context: discipline, lower or upper division, the type of students enrolled, personal experience of the instructor, etc. And when you look at the broad range of faculty members to be supported, the difficult reality is more one of “Straddling the Chasm” than “Crossing the Chasm.”
Rather than going to faculty members to create monochromatic lists of desired features and attributes, a stronger process is to acknowledge variation in professors and rely on them in different roles as part of the technology-selection process.
You might have one approach for dealing with the ed-tech enthusiasts. In most cases, a system seems easy to use once you know how to use it. Therefore, a system with which you are intimately familiar will probably look easier to use than one with which you are unfamiliar. That is not a good test of usability. These professors are quite good at pointing out what a system can and cannot do, however.
The best way to use your ed-tech enthusiasts is to have them sit down with well-informed and passionate teachers who use and advocate for other platforms. These peer-to-peer conversations will help them develop perspective on the guts of the platform alternatives that will be very valuable to you. It may also help them come to terms with the inevitable grieving process they will experience at the prospect of giving up the system they have invested so much in mastering. Unfortunately, the migration process is probably harder on your ed-tech enthusiast faculty member than on anyone else, even including the support staff.
Outside of specialized programs or colleges using competency-based education, it is likely that many of your ed-tech enthusiasts are using or will want to use many tools in addition to the LMS. It is also likely that many will do so whether or not their use is officially supported. Support staff members should consider how to support this type of “unofficial” adoption.
Then there are your mainstream professors. Somebody who has taught with more than one LMS could be a good judge of usability: Faculty members who have taught with two or three (or more) systems generally have some sense of what differences between platforms really matter and what differences don’t in a practical sense. If you have such faculty members on your campus, then you really need their input.
Somebody who has never taught with any LMS but would be open to doing so in the future could be a good judge of usability: You don’t want somebody who still can’t do an email attachment, but you do want somebody who is not a technology fetishist and has no preconceptions. Talk through with this person a small handful of tasks or activities that she might want to try in her first or second attempt to web-enhance her class. Then ask her to look at the candidate platforms just from the perspective of learning how to do those particular tasks. You’ll learn a lot about how easy each platform will make it to expand your faculty commitment.
Understanding the needs of faculty members before jumping into features and solutions is an important way to improve the process, and doing so requires an understanding of different types of professors.
The post The Odd Couple: How Ed Tech Must Support Vastly Different Types of Professors appeared first on e-Literate.
By Michael Feldstein
Periodically, we write “full disclosure” posts describing our work and how it relates to our blogging, mostly so that readers can judge any conflict of interests we may have. They are usually not particularly fun or interesting posts, but we feel they are important nevertheless.
This time is a little different. We have been thinking hard about the relationship between the blog and our paid work. After listening to a lot of readers and customers, we have some new ideas about how we can have the most impact—some of which we can talk about today, some of which we can talk about soon, and some of which we will talk about down the road.
When we first started MindWires, our consulting business, we thought of it as our day gig that paid the bills so that we could feed our blogging habit. The main connection that we thought about between the blogging and the consulting was managing any conflict of interest, so that people would have a chance to evaluate whether our client work had any influence on our writing. In other words, the main connection we saw between the two was risk.
But a funny thing happened along the way. We began to get signs that people in both our writing and our work lives saw clear benefits in the connection between the two, as well as between our work with schools and with vendors:
- The vast majority of our clients, both schools and vendors, approach us through or because of the blog, and many of them are not even aware that we are consultants. “Hey, something you wrote really helped us understand something important that we are wrestling with. We don’t know what you do, but is it possible for us to pay you to help us more?”
- When we warn clients of a potential conflict of interest—for example, when a school asks us to provide feedback on a product made by a company we had consulted for in the recent past—the vast majority of times, the client is pleased rather than worried. “Great! That means you know how they think.”
- That last comment is important. University folks seem to count on us to explain how vendors think, and vendor folks seem to count on us to explain how universities think. (See Michael’s recent column in the Chronicle for an example of a piece that shows each group how the other side thinks.) We get more positive comments on our writing, and more consulting business, because of this shuttle diplomacy role we play than for any other reason.
- Most folks we talked to are much more positive about a careful mixing of our blogging and paid work than we feared they would be.
All of this has gotten us thinking that an important role that many of our readers and clients seem to want us to play is as translators, mediators, and honest brokers between schools and vendors. We have resisted this idea for a while because we have seen other organizations struggle in this capacity and we recognize that it is a very hard role to take on in a way that manages some tricky ethical balancing acts and strikes the right tone. But we have gotten a lot of encouragement from our readers to take it on, and we have come to see that some of the tensions inherent in our work that we assumed were mainly risks can be assets, as long as we continue to work every day to earn and keep your trust.
So here are some of the things that are going to start changing over the next months:
- We will begin to move in the direction of building an analyst business, which means that we will sell subscription content and services that are complementary to the free content we will continue to provide on e-Literate. We will start with fairly humble but, we hope, useful reports and services, the first of which we will be announcing in the next month. But we have plans that go well beyond the normal analyst packages. We have some long-term ideas for catalyzing more productive campus conversations about how best to use technology in the service of education.
- We will further limit and focus the kinds of consulting that we do with vendors to make it complementary with our analysis and blogging. We are not sales consultants, marketing consultants, or management consultants. Most vendors come to us to help them better understand customer needs so that they can make better products. As we develop our role as mediators, we will be able to refine those kinds of offerings in a way that hopefully makes our university customers feel like we are representing their needs.
- We will slowly begin to bring the MindWires and e-Literate branding together, so that the relationship between the various parts of our work is clearer.
Here are some things that will not change:
- This blog will continue to provide the same amount, range, and quality of content that we have always attempted to provide. If anything, we believe that more closely aligning our paid work with our blogging will give us more useful content that we can ethically blog about.
- All content on e-Literate and on e-Literate TV will continue to be provided under a Creative Commons license.
- We will continue to run critical pieces, of both vendors and universities, whenever we believe that doing so will have value for higher education. Pointed but constructive criticism will remain a central part of what we do.
- Neither e-Literate nor e-Literate TV will run product ads or marketing puff pieces for the vendors that we cover. Ever. Nor will they have paid advertising of any kind.
- We will continue to provide periodic updates, both in the blog and on our editorial policies page, that explain our evolving policies on disclosure and conflict of interest (among other things), so that you can judge the objectivity of our perspective for yourself.
- As part of that policy, we will continue to err on the side of calling out any potentially relevant commercial interests of ours, including but not limited to current or recent client relationships with vendors or universities that we write about.
We’re pretty excited about the clarity we have come to regarding this journey we have been on and look forward to sharing the first fruits of our labor within a few weeks.
By Michael Feldstein
In our recent EDUCAUSE Review article, Phil and I defined personalized learning as a set of technology-supported practices that help undepersonalize teaching. The three general practices that we identified are as follows:
- Moving content broadcast out of the classroom: Even in relatively small classes, a lot of class time can be taken up with content broadcast such as lectures and announcements. Personalized learning strategies often try to move as much broadcast out of class time as possible in order to make room for more conversation. This strategy is sometimes called “flipping” because it is commonly accomplished by having the teacher record the lectures they would normally give in class and assign the lecture videos as homework, but it can be accomplished in other ways as well, for example with reading-based or problem-based course designs.
- Turning homework time into contact time: In a traditional class, much of the work that the students do is invisible to the teacher. For some aspects, such as homework problems, teachers can observe the results but are often severely limited by time constraints. In other cases, such as comprehension of assigned readings, the students’ work is invisible to the teacher and can be observed only indirectly and with significant effort. Personalized learning approaches often allow the teacher to observe the students’ work in digital products, so that there is more opportunity to coach students. Further, personalized learning often identifies meaningful trends in a student’s work and calls the attention of both teacher and student to those trends through analytics.
- Providing tutoring: Sometimes students get stuck in problem areas that don’t require help from a skilled human instructor. Although software isn’t good at teaching everything, it can be good at teaching some things. Personalized learning approaches can offload the tutoring for those topics to adaptive learning software that gives students interactive feedback while also turning the students’ work into contact time by making it observable to the teacher at a glance through analytics.
Personalized learning is a set of things that you do, not a set of things that you buy. Products can support or enable personalized learning practices, but they are not “personalized learning products.” And in fact, once you have products that support personalized learning practices, that’s actually when the hard work begins. OK, so you moved content broadcast out of the classroom. Now what are you going to do with class time? Students get some benefit from having course content that they can review multiple times at home, but most of the big gains typically come from the teacher reclaiming class time for more high-value direct and interactive work with the students.
On the other hand, adaptive learning is a label that applies to products. Further, adaptive learning products can support all of the practice areas of personalized learning. They enable teachers to move content broadcast outside of class time, they make homework time into contact time through analytics, and they provide some tutoring function. “Adaptive” tends to provoke a lot of discussion around the last of the three practice areas. See the piece I wrote for the American Federation of Teachers if you want a primer on the strengths and limitations of adaptive learning products as tutors. But as often as not the first two capabilities, neither of which is dependent on adaptive algorithms, are the ones that enable the biggest gains in personalized learning teaching practices. “Courseware” is a set of products that, when designed well and used properly, can enable faculty to move content broadcast out of the classroom and make homework time contact time. Adaptive courseware adds the tutoring element while also, if done well, increasing the value of that homework contact time by providing better feedback through more targeted analytics.
By Michael Feldstein
As we have been writing about here for some time, there has been an open question about the future of Blackboard’s partnership with Moodle. Through its acquisitions, Blackboard has become the world’s largest Moodle support company. This means that they also contribute a hefty percentage of the annual operating budget for Moodle Pty. Both sides have good reason to get along—Blackboard because Moodle has been key to their growth in international markets and Moodle because Blackboard contributes such a substantial portion of the money that keeps Moodle running every year. But there have been challenges in aligning the specific needs of both sides to make the relationship work. In my post about Blackboard’s new CEO, Bill Ballhaus, I wrote the following as one of the things the new leader will have to accomplish:
- Resolve the tensions with Moodle HQ (one way or another): As Phil and I have written about before, Moodle is critical to Blackboard’s international growth, but there are growing signs of tension between Blackboard and Moodle HQ, the company that shepherds Moodle’s open source development. While this item is less of a “must do” than it is a “probably gonna happen,” I think it likely that Blackboard will either mend fences or go its separate way in the next six months. Unresolved tensions are not good for either organization.
Well, it looks like fences have been mended. Blackboard and Moodle Pty. have announced a renewal of the partnership. Based on both the press release and comments from each side, Phil and I believe that concessions were made and received in order to restructure the relationship, and that both sides seemed to be happy about the result.
If you’re a fan of Moodle, or just of diversity in the LMS market (including open source options), then you should be happy about this resolution. At the same time, it enables Blackboard to keep driving its successful international growth strategy, of which Moodle is an essential part.
By Phil Hill
Schoology, a social, cloud-based LMS used mostly in the K-12 market, has set its sights on expanding into the higher education market using its recent $32 million funding round. Last January, Colorado State University’s Global Campus selected Schoology to replace Blackboard Learn, yet the market impact of this decision was limited, partially due to the non-traditional nature of the online-only campus. What Schoology has needed to gain more market credibility and awareness is additional higher ed institutions selecting it in open competitions. Enter Wheaton College in Illinois, which recently selected Schoology as its new LMS. This is the most significant new Schoology selection in higher ed that I have seen.
I interviewed CIO Wendy Woodward from Wheaton to find out more about the nature of this decision. For context, Woodward became Wheaton’s first CIO in January 2015, moving from her previous job at Northwestern University. Based on her listening tour to understand campus needs, replacing the LMS emerged near the top of her priorities. Given Northwestern’s recent move to Canvas, Woodward related that there was a common assumption that Wheaton would do the same. The faculty committee nevertheless went through the full evaluation process, inviting responses from 10 different providers, and ended up selecting Schoology. The campus has already begun pilots and plans to be fully migrated to the new system by Fall 2016.
Woodward related that she personally has a bias for working with underdogs, and she likes being an early adopter if there is significant upside. She therefore seems to relish the opportunity the campus’s decision gives her to work with Schoology and provide guidance on what the higher ed market needs. One area she mentioned, which I have also noticed, is the need to clean up the language and avoid referring to “districts” and “teachers”. Words matter, particularly in signaling that a vendor understands its customers.
I asked if there were any major functional gaps from a higher ed perspective in the Schoology LMS. Woodward mentioned some issues with testing and scoring, but she related that there were no major gaps: issues to be addressed, but all manageable. Wheaton College is also putting a priority on serving the 80% needs before worrying about the 20% needs.
The summary of why Wheaton College selected Schoology from a functional perspective:
- It’s simple and easy to use, and faculty seemed to understand how to build courses and use the system after just 15 minutes of orientation;
- The stream-based social interface moves beyond a course-centric view of an LMS; and
- Wheaton evaluators were impressed with Schoology’s analytics approach.
My view is that many higher ed customers are looking for modern alternatives to Canvas – not out of frustration but out of a desire for real competition. There should be more choices for cloud-based systems that leverage interoperability standards such as LTI and offer engaging user experiences. Significantly, Schoology is neither a brand-new startup nor a legacy provider in need of refactoring. Schoology was founded the same year as Instructure, and the comments on functionality gaps and 80/20 rules reflect the differences between K-12 needs and higher ed needs. I have seen demos, and this is a feature-rich system. The question is whether they have the right features.
If Schoology can prove through CSU Global Campus and Wheaton College that it can fully meet the institutional LMS needs of higher ed institutions, I suspect we’ll see it on many more short lists for LMS evaluations starting this year.
The post Wheaton College Selects Schoology As New LMS In Surprise Decision appeared first on e-Literate.