By Phil Hill
On April 1, long-time eCollege (aka Pearson’s LearningStudio) customer Texas Christian University (TCU) gave an update on their LMS selection process to the student newspaper TCU360. In this article there was an interesting statement worth exploring [emphasis added].
“eCollege” will soon be a thing of the past.
TCU has narrowed its search for a Learning Management System to two platforms, Blackboard and Desire2Learn (D2L).
“We’ve had feedback, from faculty specifically, that it’s time for change,” Assistant Provost of Educational Technology and Faculty Development Romy Hughes said.
TCU has used Pearson’s Learning Studio system since 1999.
“Pearson is out of the learning management system game,” Hughes said. “We need something to evolve with the Academy of Tomorrow and where we’re moving to at TCU.”
That last comment got my attention. The eCollege / LearningStudio platform has been around for a long time, and there have been questions about where Pearson was going in the LMS market based on 2011’s introduction of OpenClass. Would OpenClass replace LearningStudio over time, and would it strongly change the LMS market? Would both OpenClass and LearningStudio continue as standalone LMS products? It is quite clear by now that OpenClass itself has not changed the market, but LearningStudio has a long-time customer base of fully online programs – many in the for-profit sector.
The overarching idea was that our investments should be driven towards those products which deliver the highest impact for learners while sustaining us financially so we can continue to invest in new models and improvements.
There is a question of whether Pearson’s internal reviews around LearningStudio and OpenClass are leading to strategic changes around their position in the LMS market.
I asked for Pearson to provide official comment, and David Daniels, president of Pearson Education, responded with the following clarification.
Pearson has not left the LMS space and will continue to invest in our current generation MyLabs and support our many customers on LearningStudio into the future. Pearson’s Learning Studio still powers over 3 million enrollments annually in the fully remote, online learning space. Our commitment to servicing these students and their institutions is unwavering. Our focus has been and will be on how we support these students within the learning environment. Our range of support services includes learning design and assessment support, integration, data and analytics, student retention, tutoring, and technical support.
This statement is quite clear that there is no imminent end-of-life for LearningStudio, and it is also quite clear about their focus on the “fully remote, online learning space”. This system is primarily used by fully online programs, but there have been a handful of campus-wide clients such as TCU still using the system from the early days. That Pearson LearningStudio would not be appropriate for TCU’s future is partially explained by this focus on fully online programs.
The statement does make an interesting distinction, however, between investing in MyLabs and supporting LearningStudio. My read is that Pearson is not investing in LearningStudio in terms of major product advances and next generation plans but is continuing to fully support current customers. My read is also that Pearson would add new customers to LearningStudio if part of a broader deal tied to content or online “enabling” services (such as Embanet), but that there is no plan for the company to compete in pure LMS competitions.
To help back up this reading, I discovered that the TCU360 article was updated as follows:
“Pearson is out of the learning management system game,” Hughes said. “We need something to evolve with the Academy of Tomorrow and where we’re moving to at TCU.”

Hughes said Pearson withdrew from the LMS search process for TCU but remains an LMS provider.
At TCU, at least, the competition is down to Blackboard and D2L, with D2L in the driver’s seat. The competition is also notable for Canvas not being one of the finalists – a situation we haven’t seen often lately.
One final note on TCU’s selection process described in the article.
These percentages were based on a 214-item questionnaire called the Review Request for Information (RFI) document. These questions were used to assess whether or not a system had the features that TCU was looking for.
“Most LMS vendors told us it took them exactly three months to complete [the questionnaire] because there were so many specific details we were looking for,” Hughes said.
I’ve said it before and I’ll say it again – making a strategic platform selection by a laundry list of hundreds of detailed feature requirements is not a healthy process. I would not brag that it took vendors three full months to complete a questionnaire. But we have one more example illustrating Michael’s classic “Dammit, the LMS” post.
Do you want to know why the LMS has barely evolved at all over the last twenty years and will probably barely evolve at all over the next twenty years? It’s not because the terrible, horrible, no-good LMS vendors are trying to suck the blood out of the poor universities. It’s not because the terrible, horrible, no-good university administrators are trying to build a panopticon in which they can oppress the faculty. The reason that we get more of the same year after year is that, year after year, when faculty are given an opportunity to ask for what they want, they ask for more of the same.
I’d be willing to bet that the vast majority of those 214 items in the RFI are detailed features or direct derivatives of what TCU already has. Even if I’m wrong, it makes little sense for a school to specify the future with detailed requirements; they’re selecting a vendor, not specifying a new design. I wish TCU the best in their LMS selection process, but I would recommend that they put more emphasis on strategic analysis and less on counting check-boxes.
- Statement from the original article before it was updated.
The post Interesting Comment on Pearson’s LMS Plans From Customer appeared first on e-Literate.
By Phil Hill
At this year’s Ellucian users’ conference #elive15, one of the two big stories has been that Ellucian acquired the Helix LMS, including taking on the development team. I have previously described the Helix LMS in “Helix: View of an LMS designed for competency-based education” as well as the subsequent offer for sale in “Helix Education puts their competency-based LMS up for sale”. The emerging market for CBE-based learning platforms is quickly growing, at least in terms of pilot programs and long-term potential, and Helix is one of the most full-featured, well-designed systems out there.

The Announcement
Ellucian has acquired Helix Education’s competency-based education LMS and introduced a 2015 development partner program to collaborate with customers on the next-generation, cloud-only solution.
As the non-traditional student stands to make up a significant majority of learners by 2019, Ellucian is investing in technologies that align with priorities of colleges and universities it serves. CBE programs offer a promising new way for institutions to reduce the cost and time of obtaining a high-quality degree that aligns with the skills required by today’s employers.
I had been surprised at the announcement of intent-to-sell in December, noting:
The other side of the market effect will be determined by which company buys the Helix LMS. Will a financial buyer (e.g. private equity) choose to create a standalone CBE platform company? Will a traditional LMS company buy the Helix LMS to broaden their reach in the quickly-growing CBE space (350 programs in development in the US)? Or will an online service provider and partial competitor of Helix Education buy the LMS? It will be interesting to see which companies bid on this product line and who wins.
And I am surprised at the answer – a private equity owned ERP vendor. Throughout the mid 2000s there was talk about the ERP vendors like SunGard Higher Education (SGHE) (which combined with Datatel in 2011 and renamed as Ellucian in 2012) and Oracle entering the LMS market by acquisition, yet this did not materialize beyond the dreaded strategic partnership . . . until perhaps this week. But the Helix LMS was designed specifically for CBE programs, not general usage, so is this really a move into the broader LMS market?
When I interviewed Helix Education about the LMS last summer, they stated several times that the system could be used for non-CBE programs, but there is no evidence that this has actually occurred. I’ll admit that it is more likely to expand a CBE system into general usage than it is to convert a course-based traditional LMS into a CBE system, but it is not clear that the end result of such an expansion would remain a compelling product with user experience appreciated by faculty and students. The path is not risk-free.
Based on briefings yesterday at #elive15, there is evidence that:
- Ellucian plans to expand the Helix LMS (which will be renamed) beyond CBE; and
- Ellucian understands that there is development still remaining for this broader usage.
Support for broad set of delivery models: CBE, Online, Hybrid, Blended, Traditional, CE/WFD

One Challenge: Strategy
But there are already signs that Ellucian is not committed to deliver an LMS with “support for broad set of delivery models”. As described at Inside Higher Ed:
At its user conference in New Orleans, Ellucian announced the acquisition of Helix Education’s learning management system. The company will “blend” the software, which supports nontraditional methods of tracking student progress, into its student information system, said Mark Jones, chief product officer at Ellucian. While he stressed that the company is not planning to become a major learning management system provider, Ellucian will make the system available to departments interested in offering competency-based education.
“The initial goal and focus is on enabling competency-based education programs to flourish,” Jones said. “In terms of being a broader L.M.S. solution, if our customers find value… we will certainly have that conversation.”
I asked Jim Ritchey, president of Delta Initiative, who is attending the conference, for his reaction to Ellucian’s strategy. Jim noted the reaction at the conference to the news “seemed to be more of a curiosity than interest”, and then added:
To me, one of the key questions is how Ellucian will “blend” the software. Do they mean that schools will be able to post the results of the competency based courses to the SIS, or are they talking about leveraging other products within the LMS? For example, some of the capabilities of Pilot could be leveraged to deliver additional capabilities to the LMS. The concern I would have is that tying the LMS to other products will cause the LMS development to be dependent on the roadmaps of the other products. Ellucian will need to find the right level of independence for the LMS so it can grow as a solution while using other products to enhance capabilities. Will the LMS get lost?
In addition, there is the differing nature of the products to consider. The Helix LMS is centered on the learner and the learner’s schedule, while Banner, Colleague, and PowerCampus are centered on academic terms and courses. These differing design concepts could cause the blending process to remove some of the unique value of the LMS.

Another Challenge: Execution
On paper, this deal seems significant. The company with arguably the greatest number of US higher ed clients now owns an LMS that not only has a modern design but also is targeted at the new wave of CBE programs. The real question, however, is whether Ellucian can pull this off based on their own track record.
Since the 2011 acquisition of SGHE by the private equity firm Hellman & Friedman, Ellucian has endured wave after wave of layoffs and cost cutting measures. I described in 2011 how the SGHE acquisition could pay for itself.
If Hellman & Friedman can achieve reasonable efficiencies by combining SGHE with Datatel, this investment could potentially justify itself in 5 – 7 years by focusing on cash flow operating income, even without SGHE finding a way to reverse its decline in revenue.
Add to this Ellucian’s poor track record of delivering on major product upgrades. The transition from Banner 8 to Banner 9, or later to Banner XE, was described in 2008, promised in 2010, re-promised in 2011, and updated in 2012 / 2013. Banner XE is actually a strategy and not a product. To a degree, this is more a statement of the administrative systems / ERP market in general than just on Ellucian, but the point is that this is a company in a slow-moving market. Workday’s entry into the higher education ERP market has shaken up the current vendors – primarily Ellucian and Oracle / Peoplesoft – and I suspect that many of Ellucian’s changes are in direct response to Workday’s new market power.
Ellucian has bought itself a very good LMS and a solid development team. But will Ellucian have the management discipline to finish the product development and integration that hits the sweet spot for at least some customers? Furthermore, will the Ellucian sales staff sell effectively into the academic systems market?
A related question is why Ellucian is trying to expand into this adjacent market. It seems that Ellucian is suffering from having too many products, and an LMS addition that requires a new round of development from the outset could be a distraction. As Ritchey described after the 2012 conference (paraphrasing what he heard from other attendees):
The approach makes sense, but the hard decisions have not been made. Supporting every product is easy to say and not easy to deliver. At some point in time, they will finalize the strategy and that is when we will begin to learn the future.

In The End . . .
The best argument I have read for this acquisition was provided by Education Dive.
Ellucian is already one of the largest providers of cloud-based software and this latest shift with Banner and Colleague will allow its higher education clients to do even more remotely. Enterprise resource planning systems help colleges and universities increase efficiency with technology. Ellucian touts its ERPs as solutions for automating admissions, creating a student portal for services as well as a faculty portal for grades and institutional information, simplifying records management, and tracking institutional metrics. The LMS acquisition is expected to take the data analytics piece even further, giving clients more information about students to aid in retention and other initiatives.
But these benefits will matter if and only if Ellucian can overcome its history and deliver focused product improvements. The signals I’m getting so far are that Ellucian has not figured out its strategy and has not demonstrated its ability to execute in this area. Color me watchful but skeptical.
- See the “development partner program” part of the announcement.
By Michael Feldstein
The basic underlying theme of the 2015 GSV Ed Innovation conference is “more is more.” There were more people, more presentations, more deal-making, more celebrities…more of everything, really. If you previously thought that the conference and the deal-making behind it were awesome, you would probably find this year to be awesomer. If you thought it was gross, you would probably think this year was grosser. Overall, it has gotten so big that there is just too much to wrap your head around. I really don’t know how to summarize the conference.
But I can give some observations and impressions.
More dumb money: Let’s start with a basic fact: There is more money coming into the market.
If there is more total money coming in, then it stands to reason that there is also more dumb money coming in. I definitely saw plenty of stupid products that were funded, acquired, and/or breathlessly covered. While it wasn’t directly conference-related, I found it apropos that Boundless was acquired right around the time of the conference. I have made my opinions about Boundless clear before. I have no opinion about Valore’s decision to acquire them, in large part because I don’t know the important details. It might make sense for a company like Valore to acquire Boundless for their platform—if the price is right. But this doesn’t appear to be a triumph for Boundless or their investors. To the contrary, it smells like a bailout of Boundless’ investors to me, although I admit that I have no evidence to prove that. If the company were doing so awesomely, then I don’t think the investors would have sold at this point. (Boundless, in typical Boundless fashion, characterizes the transaction as a “merger” rather than an “acquisition.” #Winning.) Of course, you wouldn’t know that this is anything less than the total takeover of education from the breathless press coverage. Xconomy asks whether the combined company will be the “Netflix of educational publishing.”
So yeah, there’s plenty of dumb money funding dumb companies, aided and abetted by dumb press coverage. But is there proportionally more dumb money, or is there just more dumb money in absolute terms as part of the overall increase in investment? This is an important question, because it is a strong indicator of whether the idiocy is just part of what comes when an immature industry grows or whether we are in a bubble. This particular kind of market analysis is somewhat outside my wheelhouse, but my sense, based on my fragmented experience of the conference added to other recent experiences and observations, is that it’s a bit of both. Parts of the market have clearly gotten ahead of themselves, but there also are some real businesses emerging. Unsurprisingly, some of the biggest successes are not the ones that are out to “disrupt” education. Apparently the ed tech company that got the most money last year was Lynda.com which, in addition to being a good bet, doesn’t really compete head-on with colleges (and, in fact, sells to schools). Phil has written a fair bit about 2U; that company only exists because they have been able to get high-end schools to trust them with their prestige brands. This brings me to my next observation:
More smart money: 2U is a good example of a company that, if you had described it to me in advance, I probably would have told you that it never could work. The companies that do well are likely to be the ones that either figure out an angle that few people see coming or execute extremely well (or, in 2U’s case, both). 2U is also one of the very few ed tech companies that have made it to a successful IPO (although there are more that have been successfully sold to a textbook publisher, LMS vendor, or other large company). I am seeing more genuinely interesting companies getting funding and recognition. Three recent examples: Lumen Learning getting angel funding, Acrobatiq winning the ASU-GSV Return on Education Award, and Civitas closing Series C funding a couple of months ago. I also had more interesting and fewer eye-rolling conversations at the conference this year than in past years. Part of that is because my filters are getting better, but I also think that the median educational IQ of the conference attendees has risen a bit as at least some of the players learn from experience.
Textbooks are dead, dead, dead: McGraw Hill Education CEO David Levin was compelled to start his talk by saying, essentially, “Yeah yeah yeah, everybody hates textbooks and they are dying as a viable business. We get it. We’re going to have all digital products for much less money than the paper textbooks very soon, and students will be able to order the paper books for a nominal fee.” He then went on to announce a new platform where educators can develop their own content.
I saw Mark Cuban: He has noticeably impressive pecs. Also,
Arizona is nicer than Massachusetts in early April.
- Corollary: Companies trying to be the “Netflix of education” or the “Uber of education” or the “Facebook of education” will usually turn out to be as ridiculous—meaning “worthy of ridicule”—as they sound.
By Michael Feldstein
A few folks have asked me to elaborate on why I think LinkedIn is the most interesting—and possibly the most consequential—company in ed tech.
Imagine that you wanted to do a longitudinal study of how students from a particular college do in their careers. In other words, you want to study long-term outcomes. How did going to that college affect their careers? Do some majors do better than others? And how do alumni fare when compared to their peers who went to other schools? Think about how you would get the data. The college could ask alumni, but it would be very hard to get a good response rate, and even then, the data would go stale pretty quickly. There are governmental data sources you could look at, but there are all kinds of thorny privacy and regulatory issues.
There is only one place in the world I know of where bazillions of people voluntarily enter their longitudinal college and career information, keep it up-to-date, and actually want it to be public.
LinkedIn is the only organization I know of, public or private, that has the data to study long-term career outcomes of education in a broad and meaningful way. Nobody else comes close. Not even the government. Their data set is enormous, fairly comprehensive, and probably reasonably accurate. Which also means that they are increasingly in a position to recommend colleges, majors, and individual courses and competencies. An acquisition like Lynda.com gives them an ability to sell an add-on service—“People who are in your career track advanced faster when they took a course like this one, which is available to you for only X dollars”—but it also feeds their data set. Right now, schools are not reporting individual courses to the company, and it’s really too much to expect individuals to fill out comprehensive lists of courses that they took. The more that LinkedIn can capture that information automatically, the more the company can start searching for evidence that enables them to reliably make more fine-grained recommendations to job seekers (like which skills or competencies they should acquire) as well as to employers (like what kinds of credentials to look for in a job candidate). Will the data actually provide credible evidence to make such recommendations? I don’t know. But if it does, LinkedIn is really the only organization that’s in a position to find that evidence right now. This is the enormous implication of the Lynda.com acquisition that the press has mostly missed, and it’s also one reason of many why Pando Daily’s angle on the acquisition—“Did LinkedIn’s acquisition of Lynda just kill the ed tech space?“—is a laughable piece of link bait garbage. The primary value of the acquisition wasn’t content. It was data. It was providing additional, fine-grained nodes on the career graphs of their users. 
Which means that LinkedIn is likely to do more acquisitions and more partnerships that help accomplish the same end, including providing access of that data for companies and schools to do their own longitudinal outcomes research. Far from “killing ed tech,” this is the first step toward building an ecosystem.
By Michael Feldstein
In December 2012, I tweeted:
Let it be known that I was the first to predict that Coursera will be acquired by LinkedIn.
— Michael Feldstein (@mfeldstein67) December 5, 2012
At the time, Coursera was the darling of online ed startups. Since then, it has lost its way somewhat, while Lynda.com has taken off like a rocket. Which is probably one big reason why LinkedIn chose to acquire Lynda.com (rather than Coursera) for $1.5 billion. I still think it’s possible that they could acquire a MOOC provider as well, but Udacity seems like a better fit than Coursera at this point.
I’ve said it before and I’ll say it again: LinkedIn is the most interesting company in ed tech.
By Phil Hill
This is part 3 in this series. Part 1 described the most reliable data on A) how much US college textbook prices are rising and B) how much students actually pay for textbooks, showing that the College Board data is not reliable for either measure. Part 2 provided additional detail on the data sources (College Board, NCES, NACS, Student Monitor) and their methodologies. Note that the textbook market is moving into a required course materials market, and throughout this series I use both terms somewhat interchangeably based on which source I’m quoting. They are largely equivalent, but not identical.
Based on the most reliable data we have, average college textbook prices are rising at three times the rate of inflation while average student expenditures on textbooks are remaining flat or even falling, in either case below the rate of inflation. Average student expenditures of approximately $600 per year are about half of what gets commonly reported in the national media. The combined chart comes from this GAO Report (using CPI data) and this NPR report (using Student Monitor data).
Does this indicate a functioning market, and does this indicate that we don’t have a textbook pricing problem? No, and no.

Why Are Student Expenditures Not Rising Along With Prices?
The answer to this question can be partly found in the financials of any major publishing company. If students were buying new textbooks at the same rate as they used to, publishing companies would be thriving instead of cutting thousands of employees or even resorting to bankruptcy to stay afloat. Students are increasingly choosing to not buy new textbooks.
Let’s look at the NACS data (this one from Fall 2013 data, new data coming out later this week):
A few notes to highlight:
- 30% of surveyed students chose not to acquire at least one required course material. On average, these students skipped acquiring three textbooks in just one term.
- The top reason in this report is not based on price: 38.5% chose not to acquire required course materials because they felt the materials were not needed or wanted, and 30.2% chose not to acquire based on price.
- By combining answers, 38.5% chose to borrow the course materials or “it was available elsewhere without purchase”.
- From the following page (not shown), when asked what students used to substitute for non-acquired course materials:
- 57.1% just used notes from class;
- 46.5% borrowed material from friends or libraries; and
- 19.1% got the chapter or material illegally.
Average expenditures don’t capture the full story, and later in the report it is noted that:
- Students at two-year colleges spent 31% more than the average on required course materials;
- Overall first year students spent 23% more than the average on required course materials; and
- Overall second year students spent 10% more than the average on required course materials.
In other words, the high enrollment courses in the first two years lead to the highest student expenditures on textbooks. Note that we’re still not talking about $1,200 per year spending as often reported based on College Board data, even for these first two years.
Student Monitor also captures some information of note.
- They report identical data – 30% choosing not to acquire at least one textbook.
- 29% of students report they bought ‘required course materials’ that ended up not being used. Of these students, 52% will be more likely to “wait longer before purchasing course materials”.
- They categorize the reasons for not acquiring textbooks differently; professor not using the “required” material was listed by 22% of students, lower than affordability at 31%.
- 73% of students who downloaded textbooks illegally did so to “save money”.
It is important to look at both types of data – textbook list prices and student expenditures – to see some of the important market dynamics at play. All in all, students are exercising their market power to keep their expenditures down – buying used, renting, borrowing, obtaining illegally, delaying purchase, or just not using at all. And textbook publishers are suffering, despite (or largely because of) their rising prices.
But there are downsides for students. An increasing number of students are simply not using their required course materials, and students often delay purchase until well into the academic term. Whether from perceived need or from rising prices, this is not a good situation for student retention and learning.
The post About the Diverging Textbook Prices and Student Expenditures appeared first on e-Literate.
By Phil Hill
There has been a fair amount of discussion around my post two days ago about what US postsecondary students actually pay for textbooks.
The shortest answer is that US college students spend an average of $600 per year on textbooks despite rising retail prices.
I would not use College Board as a source on this subject, as they do not collect their own data on textbook pricing or expenditures, and they only use budget estimates.
<wonk> I argued that the two best sources for rising average textbook price are the Bureau of Labor Statistics and the National Association of College Stores (NACS), and when you look at what students actually pay (including rental, non-consumption, etc) the best sources are NACS and Student Monitor. In this post I’ll share more information on the data sources and their methodologies. The purpose is to help people understand what these sources tell us and what they don’t tell us.

College Board and NPSAS
My going-in argument was that the College Board is not a credible source on what students actually pay:
The College Board is working to help people estimate the total cost of attendance; they are not providing actual source data on textbook costs, nor do they even claim to do so. Reporters and advocates just fail to read the footnotes.
Both the College Board and National Postsecondary Student Aid Study (NPSAS, official data for the National Center for Education Statistics, or NCES) currently use cost of attendance data created by financial aid offices of each institution, using the category “Books and Supplies”. There is no precise guidance from DOE on the definition of this category, and financial aid offices use very idiosyncratic methods for this budget estimate. Some schools like to maximize the amount of financial aid available to students, so there is motivation to keep this category artificially high.
The difference is three-fold:
- NPSAS uses official census reporting from schools while the College Board gathers data from a subset of institutions – their member institutions;
- NPSAS reports the combined data “Average net price” and not the sub-category “Books and Supplies”; and
- College Board data is targeted at full-time freshman students.
The budget includes room and board, books and supplies, transportation, and personal expenses. This value is used as students’ budgets for the purposes of awarding federal financial aid. In calculating the net price, all grant aid is subtracted from the total price of attendance.
And the databook definition used, page 130:
The estimated cost of books and supplies for classes at NPSAS institution during the 2011–12 academic year. This variable is not comparable to the student-reported cost of books and supplies (CSTBKS) in NPSAS:08.
What’s that? It turns out that in 2008 NCES actually used a student survey – asking them what they spent rather than asking financial aid offices for net price budget calculation. NCES fully acknowledges that the current financial aid method “is not comparable” to student survey data.
As an example of how this data is calculated, see this guidance letter from the state of California [emphasis added].
The California Student Aid Commission (CSAC) has adopted student expense budgets, Attachment A, for use by the Commission for 2015-16 Cal Grant programs. The budget allowances are based on statewide averages from the 2006-07 Student Expenses and Resources Survey (SEARS) data and adjusted to 2015-16 with the forecasted changes in the California Consumer Price Index (CPI) produced by the Department of Finance.
The College Board asks essentially the same question from the same sources. I’ll repeat again – The College Board is not claiming to be an actual data source for what students actually spend on textbooks.

NACS
NACS has two sources of data – bookstore financial reporting from member institutions and a Student Watch survey report put out in the Fall and Spring of each academic year. NACS started collecting student expenditure data in 2007, initially every two years, then every year, then twice a year.
NACS sends their survey through approximately 20 – 25 member institutions to distribute to the full student population for that institution or a representative sample. For the Fall 2013 report:
Student Watch™ is conducted online twice a year, in the fall and spring terms. It is designed to proportionately match the most recent figures of U.S. higher education published in The Chronicle of Higher Education: 2013/2014 Almanac. Twenty campuses were selected to participate based on the following factors: public vs. private schools, two-year vs. four-year degree programs, and small, medium, and large enrollment levels.
Participating campuses included:
- Fourteen four-year institutions and six two-year schools; and
- Eighteen U.S. states were represented.
Campus bookstores distributed the survey to their students via email. Each campus survey fielded for a two week period in October 2013. A total of 12,195 valid responses were collected. To further strengthen the accuracy and representativeness of the responses collected, the data was weighted based on gender using student enrollment figures published in The Chronicle of Higher Education: 2013/2014 Almanac. The margin of error for this study is +/- 0.89% at the 95% confidence interval.
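As a side note, the quoted margin of error is consistent with the sample size. Here is a quick sketch of the standard worst-case calculation for a proportion (assuming p = 0.5 and the usual 95% confidence z of 1.96; the survey’s gender weighting would shift the exact figure slightly):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a sample proportion at ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# 12,195 valid responses, as reported for the Fall 2013 Student Watch survey
moe = margin_of_error(12195)
print(f"{moe:.4f}")  # ~0.0089, i.e. the reported +/- 0.89%
```

The same formula explains why smaller samples (like a 1,200-student survey) carry margins of error a few times larger.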
I interviewed Rich Hershman and Liz Riddle, who shared the specific definitions they use.
Required Course Materials: Professor requires this material for the class and has made this known through the syllabus, the bookstore, learning management system, and/or verbal instructions. These are materials you purchase/rent/borrow and may include textbooks (including print and/or digital versions), access codes, course packs, or other customized materials. Does not include optional or recommended materials.
The survey goes to students who report what they actually spent. This includes the categories of sharing materials, choosing not to acquire, rental, purchase new and purchase used.
The data is aggregated across full-time and part-time students, undergraduates and graduates. So the best way to read the data I shared previously ($638 per year) is as per-capita spending. The report breaks down further by institution type (2-year public, etc.) and acquisition type (purchase new, rental, etc.). The Fall 2014 data is being released next week, and I’ll share more breakdowns with this data.
In future years NACS plans to expand the survey to go through approximately 100 institutions.

Student Monitor
Student Monitor describes their survey as follows:
- Conducted each Spring and Fall semester
- On campus, one-on-one intercepts conducted by professional interviewers during the three week period March 24th to April 14th, 2014 [Spring 2014 data] and October 13th-27th [Fall 2014 data]
- 1,200 Four Year full-time undergrads (Representative sample, 100 campuses stratified by Enrollment, Type, Location, Census Region/Division)
- Margin of error +/- 2.4%
In other words, this is an intercept survey conducted with live interviews on campus, targeting full-time undergraduates. This includes the categories of sharing materials, choosing not to acquire, rental, purchase new and purchase used.
In comparison to NACS, Student Monitor tracks more schools (100 vs. 20) but fewer students (1,200 vs. 12,000).
Despite the differences in methodology, Student Monitor and NACS report spending that is fairly consistent (both on the order of $600 per year per student).

New Data in Canada
Alex Usher from Higher Education Strategy Associates shared a blog post in response to my post that is quite interesting.
This data is a little old (2012), but it’s interesting, so my colleague Jacqueline Lambert and I thought we’d share it with you. Back then, when HESA was running a student panel, we asked about 1350 university students across Canada about how much they spent on textbooks, coursepacks, and supplies for their fall semester. [snip]
Nearly 85% of students reported spending on textbooks. What Figure 1 shows is a situation where the median amount spent is just below $300, and the mean is near $330. In addition to spending on textbooks, another 40% or so bought a coursepack (median expenditure $50), and another 25% reported buying other supplies of some description (median expenditure: also $50). Throw that all together and you’re looking at average spending of around $385 for a single semester.
Subtracting out the “other supplies” that do not fit in NACS / Student Monitor definitions, and acknowledging that fall spending is typically higher than spring due to full-year courses, this data is also in the same ballpark of $600 per year (slightly higher in this case).

Upcoming NPSAS Data
The Higher Education Act of 2008 required NCES to add student expenditures on course materials to the NPSAS database, but this has not been added yet. According to Rich Hershman from NACS, NCES is using a survey question that is quite similar to NACS and field testing this spring. The biggest difference will be that NPSAS is annual data whereas NACS and Student Monitor send out their survey in fall and spring (then combining data).
Sometime in 2016 we should have better federal data on actual student expenditures.
Update: Mistakenly published without reference to California financial aid guidance. Now fixed.
Update 3/30: I mistakenly referred to the IPEDS database for NCES when this data is part of National Postsecondary Student Aid Study (NPSAS). All references to IPEDS have been corrected to NPSAS. I apologize for confusion.
The post Postscript on Student Textbook Expenditures: More details on data sources appeared first on e-Literate.
By Phil Hill
With all of the talk about the unreasonably high price of college textbooks, the unfulfilled potential of open educational resources (OER), and student difficulty in paying for course materials, it is surprising how little is understood about student textbook expenses. The following two quotes illustrate the most common problem.
Atlantic: “According to a recent College Board report, university students typically spend as much as $1,200 a year total on textbooks.”
US News: “In a survey of more than 2,000 college students in 33 states and 156 different campuses, the U.S. Public Interest Research Group found the average student spends as much as $1,200 each year on textbooks and supplies alone.”
While I am entirely sympathetic to the need and desire to lower textbook and course material prices for students, no one is served well by misleading information, and this information is misleading. Let’s look at the actual sources of data and what that data tells us, focusing on the aggregate measures of changes in average textbook pricing in the US and average student expenditures on textbooks. What the data tells us is that students spend on average about $600 per year on textbooks, not $1,200.
First, however, let’s address the all-too-common College Board reference.

College Board Reference
The College Board positions itself as the source for the cost of college, and their reports look at tuition (published and net), room & board, books & supplies, and other expenses. This chart is the source of most confusion.
The light blue “Books and Supplies” data, ranging from $1,225 to $1,328, leads to the often-quoted $1,200 number. But look at the note right below the chart:
Other expense categories are the average amounts allotted in determining total cost of attendance and do not necessarily reflect actual student expenditures.
That’s right – the College Board simply uses budget estimates for the books & supplies category; this is not at all part of their actual survey data. The College Board does, however, point people to one source that they use as a rough basis for their budgets.
According to the National Association of College Stores, the average price of a new textbook increased from $62 (in 2011 dollars) in 2006-07 to $68 in 2011-12. Students also rely on textbook rentals, used books, and digital resources. (http://www.nacs.org/research/industrystatistics/higheredfactsfigures.aspx)
The College Board is working to help people estimate the total cost of attendance; they are not providing actual source data on textbook costs, nor do they even claim to do so. Reporters and advocates just fail to read the footnotes. The US Public Interest Research Group is one of the primary reasons that journalists use the College Board data incorrectly, but I’ll leave that subject for another post.
The other issue is the combination of books and supplies. Let’s look at actual data and sources specifically for college textbooks.

Average Textbook Price Changes
What about the idea that textbook prices keep increasing?

BLS and Textbook Price Index
The primary source of public data for this question is the Consumer Price Index (CPI) from the Bureau of Labor Statistics (BLS). The CPI sets up a pricing index based on a complex regression model. The index is set to 100 for December 2001, when the BLS started tracking this category. Using this data tool for series CUUR0000SSEA011 (college textbooks), we can see the pricing index from 2002–2014.
This data equates to roughly 6% year-over-year increases in the price index of new textbooks, roughly doubling every 12 years. But note that this data is not inflation-adjusted, as the CPI is used to help determine the inflation rate. Since US inflation over 2002–2014 averaged roughly 2%, textbook prices have been rising at roughly 3 times the rate of inflation.

NACS and Average Price Per Textbook
NACS, as its name implies, surveys college bookstores to determine what students spend on various items. The College Board uses them as a source. This is the most concise summary, also showing rising textbook prices on a raw, non-inflation-adjusted basis, although at a lower rate of increase than the CPI.
The following graph for average textbook prices is based on data obtained in the annual financial survey of college stores. The most recent data for “average price” was based on the sale of 3.4 million new books and 1.9 million used books sold in 134 U.S. college stores, obtained in the Independent College Stores Financial Survey 2013-14.
The Government Accountability Office (GAO) did a study in 2013 looking at textbook pricing, but their data source was the BLS. This chart, however, is popularly cited.
There are several private studies done by publishers or service companies that give similar results, but by definition these are not public.

Student Expenditure on Books and Supplies
For most discussion on textbook pricing, the more relevant question is what students actually spend on textbooks, or at least on required course materials. Does the data above indicate that students are spending more and more every year? The answer is no, and the reason is that there are far more options today for getting textbooks than there used to be, and one choice – choosing not to acquire the course materials – is rapidly growing. According to Student Monitor, 30% of students choose not to acquire every college textbook.
Prior to the mid 2000s, the rough model for student expenditures was that roughly 65% purchased new textbooks and 35% bought used textbooks. Today, there are options for rentals, digital textbooks, and courseware, and the ratios are changing.
The two primary public sources for how much students spend on textbooks are the National Association of College Stores (NACS) and The Student Monitor.

NACS
NACS also measures average student expenditure for required course materials, which is somewhat broader than textbooks but does not include non-required course supplies.
The latest available data on student spending is from Student Watch: Attitudes & Behaviors toward Course Materials, Fall 2014. Based on survey data, students spent an average of $313 on their required course materials, including purchases and rentals, for that fall term. Students spent an average of $358 on purchases of “necessary but not required” technology, such as laptops and USB drives, over the same period.
Note that by the nature of analyzing college bookstores, NACS is biased towards traditional face-to-face education and students aged 18-24.
Update: I should have described the NACS methodology in more depth (or probably need a follow-on post), but their survey is distributed through the bookstore to students. Purchasing through Amazon, Chegg, rental, and decisions not to purchase are all captured in that study. It’s not flawless, but it is not just for purchases through the bookstore. From the study itself:
Campus bookstores distributed the survey to their students via email. Each campus survey fielded for a two week period in October 2013. A total of 12,195 valid responses were collected. To further strengthen the accuracy and representativeness of the responses collected, the data was weighted based on gender using student enrollment figures published in The Chronicle of Higher Education: 2013/2014 Almanac. The margin of error for this study is +/- 0.89% at the 95% confidence interval.

Student Monitor
Student Monitor is a company that provides syndicated and custom market research, and they produce extensive research on college expenses in the spring and fall of each year. This group interviews students for their data, rather than analyzing college bookstore financials, which is a different methodology than NACS. Based on the Fall 2014 data specifically on textbooks, students spent an average of $320 per term – $640 annualized, quite close to the $638 per year calculated by NACS. Based on information from page 126:
Average Student Acquisition of Textbooks by Format/Source for Fall 2014
- New print: 59% of acquirers, $150 total mean
- Used print: 59% of acquirers, $108 total mean
- Rented print: 29% of acquirers, $38 total mean
- eTextbooks (unlimited use): 16% of acquirers, $15 total mean
- eTextbooks (limited use): NA% of acquirers, $9 total mean
- eTextbooks (file sharing): 8% of acquirers, $NA total mean
- Total for Fall 2014: $320 mean
- Total on Annual Basis: $640 mean
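Since the per-format “total mean” figures are averaged over all students, they can simply be added to reproduce the reported term total. Here is a quick arithmetic check (the file-sharing dollar mean is the NA value above and so is excluded):

```python
# Fall 2014 per-student mean spending by format (Student Monitor, p. 126)
format_means = {
    "new print": 150,
    "used print": 108,
    "rented print": 38,
    "eTextbooks (unlimited use)": 15,
    "eTextbooks (limited use)": 9,
    # eTextbooks (file sharing): dollar mean not reported, excluded
}

fall_total = sum(format_means.values())
annual_total = fall_total * 2  # rough annualization: fall + spring terms

print(fall_total, annual_total)  # 320 640
```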
Note, however, that the Fall 2014 data ($640 annual) represents a steep increase from the previous trend as reported by NPR (but based on Student Monitor data). I have asked Student Monitor for commentary on the increase but have not heard back (yet).
Like NACS, Student Monitor is biased towards traditional face-to-face education and students aged 18-24.

Summary
I would summarize the data as follows:
- According to the Bureau of Labor Statistics (BLS), new college textbook prices have risen by roughly 6% per year since 2001, which is approximately 3 times the rate of inflation.
- According to the National Association of College Stores (NACS), the average new college textbook price rose from $57 in 2007 to $79 in 2013.
- According to the Government Accountability Office (GAO), from 2002 – 2012 college tuition and fees rose 89% and average new college textbook prices rose 82% while overall consumer prices rose only 28%.
- According to the National Association of College Stores (NACS), the average college student’s expenditures on required course materials dropped from $701 in the 2007/08 school year to $638 in the 2013/14 school year.
- According to Student Monitor and Quoctrung Bui/NPR, per capita college student expenditures on textbooks have stayed relatively flat at approximately $600 per year.
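As a sanity check, the GAO decade figures and the BLS rate are mutually consistent under compound growth. A rough sketch (assuming simple annual compounding over the 2002–2012 decade):

```python
import math

def annualized(cumulative_increase, years):
    """Convert a cumulative fractional increase into a compound annual rate."""
    return (1 + cumulative_increase) ** (1 / years) - 1

# GAO: textbook prices +82%, consumer prices +28% over 2002-2012
print(f"textbooks: {annualized(0.82, 10):.1%}")  # ~6.2% per year, near the BLS ~6%
print(f"inflation: {annualized(0.28, 10):.1%}")  # ~2.5% per year

# doubling time implied by ~6% annual index growth
print(f"doubling: {math.log(2) / math.log(1.06):.1f} years")  # ~11.9
```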
The shortest answer is that US college students spend an average of $600 per year on textbooks despite rising retail prices.
I would not use College Board as a source on this subject, as they do not collect their own data on textbook pricing or expenditures, and they only use budget estimates.
I would like to thank Rob Reynolds from NextThought for his explanation and advice on the subject.
Update (3/25): See note on NACS above.
Update (3/27): See postscript post for additional information on data sources.
- Note that BLS has a category CUSR0000SEEA (Educational Books & Supplies) that has been tracked far longer than the sub-category of college textbooks. We’ll use the textbook series to simplify comparisons.
The post How Much Do College Students Actually Pay For Textbooks? appeared first on e-Literate.
By Phil Hill
While at SXSWedu, I was able to visit Austin Community College’s ACCelerator lab, which has received a fair bit of publicity over the past month. While the centerpiece of ACCelerator usage is developmental math, the 600+ workstation facility spread over 32,000 square feet also supports tutoring in a variety of subjects, first-year experience, group advising, academic coaching, adult education, continuing education, college readiness assessment preparation, and student skills workshops.
But it is the developmental math course that has received the most coverage.
Austin Community College welcomed second lady Dr. Jill Biden and Under Secretary of Education Dr. Ted Mitchell on Monday, March 9, to tour the Highland Campus’ ACCelerator and meet with students and faculty of the college’s new developmental math course, MATD 0421. [snip]
“I teach a lot of developmental students,” says Dr. Biden. “The one stumbling block does seem to be math and math anxiety and ‘Can I do it?’. This (course) seems to be so empowering and so positive. Students can see immediate success.”
MATD 0421 is a self-paced, emporium-style course that encompasses all three levels of developmental math. Paul Fain at Inside Higher Ed had an excellent article that included a description of the motivation.
Dismal remedial success rates have been a problem at Austin, which enrolls 60,000 students. So faculty members from the college looked around for alternative approaches to teaching math.
“Really, there’s nothing to lose,” said [Austin CC president] Rhodes.
The Highland Campus, where the ACCelerator lab is located, is built in a former shopping mall. Students at Austin CC can choose courses at any of the 8 campuses or 5 centers. All developmental math at the Highland Campus is run through MATD 0421, so students across the system can choose traditional approaches at other campuses or the emporium approach at Highland.
Austin CC picked this approach after researching several other initiatives (Fain describes Virginia Tech and Montgomery College examples). The IHE article then describes the design:
Austin officials decided to try the emporium method. They paired it with adaptive courseware, which adjusts to individual learners based on their progress and ability to master concepts. The college went with ALEKS, an adaptive software platform from McGraw-Hill Education.
Fain describes the personalization aspect:
The new remedial math course is offered at the ACCelerator. The computer stations are arranged in loose clusters of 25 or so. Faculty members are easy to spot in blue vests. Student coaches and staff wear red ones.
This creates a more personalized form of learning, said Stacey Güney, the ACCelerator’s director. That might seem paradoxical in a computer lab that has a bit of a Matrix feel. But Güney said that instead of a class size of 25 students per instructor, the course features 25 classes of one student.
“In here there is no back of the classroom,” she said.
While the program is fairly new (second term), there are some initial results described by the official site:
In MATD 0421’s inaugural semester:
- The withdrawal rate was less than half the rate for traditional developmental math courses.
- 75 percent of the students completed the equivalent of one traditional course.
- Nearly 45 percent completed the equivalent to a course and one-half.
- Over 14 percent completed the equivalent to two courses.
- 13 students completed all the equivalent of three courses.
Go read the full IHE article for a thorough description. I would offer the following observations.
- Rather than a pilot program, which I have argued plagues higher ed and prevents diffusion of innovations, Austin CC has committed to A) a big program up front (~700 students in the Fall 2014 inaugural semester and ~1,000 students in Spring 2015), yet B) they offer students the choice of traditional or emporium. To me, this offers the best of both worlds in allowing a big bet that doesn’t get caught in the “purgatory of pilots” while offering student choice.
- While the computer lab and software are easy headlines, I hope people don’t miss the heavy staffing that is a central feature of this lab – there are more than 90 faculty and staff working there, teaching the modular courses, roving the aisles to provide help, and working at help desks. The ACCelerator is NOT an exercise in replacing faculty with computers.
- During my tour, instructor Christie Allen-Johnson and associate professor Ann P. Vance described their plans to perform a more structured analysis of the results. Expect to see more validated outcomes starting at the end of CY2015.
- When and if Austin CC proves the value and results of the model, that would be the time to migrate most of the remaining developmental math courses into this emporium model.
- The one area that concerns me is the lack of structured time for students away from the workstations. Developmental students in community colleges often have not experienced academic success – knowing how to succeed, learning how to learn, believing in their ability to succeed – and often this non-cognitive aspect of math is as important as the actual coursework. Allen-Johnson described the availability of coaching that goes beyond coursework, but that is different than providing structure for coaching and self-regulated learning.
The post Austin Community College’s ACCelerator: Big bet on emporium approach with no pilots appeared first on e-Literate.
By Michael Feldstein
In the wake of the Pearson social media monitoring controversy, edubloggers like Audrey Watters and D’arcy Norman have announced their policies regarding code that can potentially track users on their blogs. This is a good idea, so we are following their example.
We use Google Analytics and WordPress analytics on both e-Literate and e-Literate TV. The main reason we do so is that we believe the information these packages provide helps us create more useful content. Even after a decade of blogging, we are still surprised sometimes by which posts earn your attention and which ones don’t. We look at our analytics results fairly regularly to see what we can learn about writing more content that you find to be worth your time. This is by no means the only or even the main way that we decide what we will write, but we think of it as one of relatively few clues we have to understand which posts and topics will have the most value to you. We do not run ads and have no intention of doing so in the future. In the case of e-Literate TV, where the content is expensive to make, we may also use information regarding the number of viewers of the episodes in the future to demonstrate to sponsors that our content is having an impact. We make no effort to track individuals and, in fact, have always had a policy of letting our readers comment on posts without registering on the site. But Google in particular is likely making more extensive use of the usage data that they gather.
In addition to the two analytics packages mentioned above, we do embed YouTube videos and use social media buttons, which may carry their own tracking code with them from the companies that supply them. Unfortunately, this is just part of the deal with embedding YouTube videos or adding convenient “Tweet this” links. The tracking code (which usually, but not always, means the same thing as “cookies”) on our site is pretty typical for what you will find for any site that provides these sorts of conveniences.
But that doesn’t mean that you have to allow yourself to be tracked if you prefer not to be. There are a number of excellent anti-tracking plugins available for the mainstream browsers, including Ghostery and Disconnect. If you are concerned about being tracked (here or anywhere), then we recommend installing one or more of these plugins, and we also recommend spending a little time to learn how they work and what sorts of tracking code are embedded on the different sites you visit so that you can make informed and fine-grained decisions about what information you do and do not want to share. These tools often let you make service-by-service and site-by-site decisions, but they generally start with the default of protecting your privacy by blocking everything.
To sum up and clarify our privacy policies:
- We do use Google Analytics and WordPress analytics.
- We do embed social media tools that in some cases carry their own tracking code.
- We do not make any effort to track individuals on our sites.
- We do not use or plan to use analytics for ads or in any way sell the information from our analytics to third parties, including but not limited to ads.
- We may in the future provide high-level summaries of site traffic and video views to e-Literate TV sponsors.
- We do support commenting on blog posts without registration.
- We do provide our full posts in our RSS feed, which excludes most (but not all) tracking code.
- We do provide CC-BY licensing on our content so that it can be used on other sites, including ones that do not have any tracking code.
- Note: We do require an email address from commenters for the sole purpose of providing us with a means of contacting the poster in the event that the person has written something uncivil or marginally inappropriate and we need to discuss the matter with that person privately before deciding what to do about moderation. In the 10-year history of e-Literate, this has happened about three or four times. There are two differences relevant to reader privacy between requiring the email address and requiring registration. First, we allow people to use multiple email addresses or even temporary email addresses if they do not wish that email to be personally identifiable. We only require that the email address be a working address. Second and probably more importantly, without registration, there is no mechanism to link comments to browsing behavior on the site.
By Phil Hill
At today’s Learning Analytics and Knowledge 2015 conference (#LAK15), Charles Severance (aka Dr. Chuck) gave the morning keynote, organized around the theme of going back in time to see what people (myself and Richard Katz primarily) were forecasting for education. By looking at the reality of 2015, we can see which forecasts were on track and which were not. I like this concept, as it is useful to go back and see what we got right and wrong, so this post is meant to provide some additional context, particularly for the LMS market. Chuck’s keynote also gives cover for doing so without seeming too self-absorbed.
But enough about me. What do you think about me?
I use the term forecast since I tend to describe patterns and trends and then try to describe the implications. This is different from the Katz video, which aimed to make specific predictions as a thought-provoking device.

Pre-2011
I introduced the LMS squid diagram in 2008 as a tool to help people see the LMS market holistically rather than focusing on detailed features. Too much of campus evaluations then (and even now) missed the big picture that there were only a handful of vendors and some significant market dynamics at play.
A 2009 presentation, by the way, was the basis for Michael and me connecting for the first time. Bromance.
In early 2011 I wrote a post on Visigoths at the LMS Gates, noting:
I am less inclined to rely on straight-line projections of market data to look ahead, and am more inclined to think the market changes we are seeing are driven by outside forces with potentially nonlinear effects. Rome may have been weakened from within, but when real change happened, the Visigoths made it happen. [snip]
Today, there is a flood of new money into the educational technology market. In addition to the potential acquisition of Blackboard, Instructure just raised $8M in venture funding and is vying for the role of Alaric in their marketing position, Pearson has been heavily investing in Learning Studio (eCollege for you old-timers), and Moodlerooms raised $7+M in venture funding. Publishing companies, ERP vendors, private equity, venture funding – these are major disruptive forces. And there are still significant moves being made by technology companies such as Google.
In August I started blogging at e-Literate with this post on Emerging Trends in LMS / Ed Tech Market. The trends I described (summary here, see post for full description):
From my viewpoint in 2011, the market has essentially moved beyond Blackboard as the dominant player driving most of the market dynamics.
- The market is more competitive, with more options, than it has been for years.
- Related to the above, there is a trend towards software as a service (SaaS) models for new LMS solutions.
- Also related to the above, the market is demanding and getting real Web 2.0 and Web 3.0 advances in LMS user interfaces and functionality. We are starting to see some real improvements in usability in the LMS market.
- The lines are blurring between content delivery systems (e.g. Cengage MindTap, Pearson MyLabs, etc) and LMS.
- Along those same lines, it is also interesting to note what is not being seen as a strategic blurring of lines – between LMS and student information systems.
- Analytics and data reporting are not just aspirational goals for LMS deployments, but real requirements driven by real deadlines.
Looking back at the 2011 posts, I would note the following:
- I think all of the basic trends have proven to be accurate, although I over-stated the analytics importance of “real requirements driven by real deadlines”. Analytics are important and some schools have real requirements, but for most schools analytics is not far beyond “aspirational goals”.
- Chuck over-interpreted the “it’s all about MyLabs”. The real point is the blurring of lines between previously distinct categories of delivery platforms and digital content. I would argue that the courseware movement as well as most CBE platforms shows this impact in 2015. MyLabs was just an example in the graphic.
- My main message about outside forces was that the internal players (Blackboard, Desire2Learn, Moodle, etc) were not going to be the source of change, rather “new competitors and new dynamics” would force change. Through the graphic, I over-emphasized the ERP and big tech players (Oracle, Google, Pearson & eCollege, etc) while I under-emphasized Instructure, which has proven to be the biggest source of change (although driven by VC funding).
- I still like the Rome / Visigoths / Alaric metaphor.
In early 2012 I had a post Farewell to the Enterprise LMS, Greetings to the Learning Platform that formed the basis of the forecasts Chuck commented on in the LAK15 keynote.
In my opinion, when we look back on market changes, 2011 will stand out as the year when the LMS market passed the point of no return and changed forever. What we are now seeing are some real signs of what the future market will look like, and the actual definition of the market is changing. We are going from an enterprise LMS market to a learning platform market.
In a second post I defined the characteristics of a Learning Platform (or what I meant by the term):
- Learning Platforms are next-generation technology compared to legacy LMS solutions arising in the late 1990s / early 2000s. While many features are shared between legacy LMS and learning platforms, the core designs are not constrained by the course-centric, walled-garden approach pioneered by earlier generations.
- Learning Platforms tend to be SaaS (software as a service) offerings, based in a public or private cloud on multi-tenant designs. Rather than being viewed as an enterprise application to be set up as a customized instance for each institution, there is a shared platform that supports multiple customers, leveraging a shared technology stack, database, and application web services.
- Learning Platforms are intended to support and interoperate with multiple learning and social applications, and not just as extensions to the enterprise system, but as a core design consideration.
- Learning Platforms are designed around the learner, giving a sense of identity that is maintained throughout the learning lifecycle. Learners are not just pre-defined roles with access levels within each course, but central actors in the system design.
- Learning Platforms therefore are social in nature, supporting connections between learners and customization of content based on learner needs.
- Learning Platforms include built-in analytics based on the amalgamation of learner data across courses, across institutions, and even beyond institutions.
- Learning Platforms allow for the discovery of instructional content, user-generated content, and of other learners.
Going back to the Farewell post, the forecast was:
Another trend that is becoming apparent is that many of the new offerings are not attempting to fully replace the legacy LMS, at least all at once. Rather than competing with all of the possible features that are typical in enterprise LMS solutions, the new platforms appear to target specific institutional problems and offer only the features needed. Perhaps inspired by Apple’s success in offering elegant solutions at the expense of offering all the features, or perhaps inspired by Clayton Christensen’s disruptive innovation model, the new learning platform providers are perfectly willing to say ‘no – we just don’t offer this feature or that feature’.
Looking back at the 2012 posts, I would note the following:
- I still see the move from enterprise LMS to learning platform, but it is happening slower than I might have thought and more unevenly. The attributes of SaaS and fewer features has happened (witness Canvas in particular), and the interoperability capabilities are occurring (with special thanks to Chuck and his work with IMS developing LTI). However, the adoption and true usage of multiple learning and social applications connected through the platform is quite slow.
- The attributes of learner-centric design and built-in analytics can be seen in many of the CBE platforms, but not really in the general LMS market itself.
- Chuck was right to point out the revision in which I no longer included the outside forces of ERP & big tech. The key point of the 2011 forecasts was outside forces making changes, but by 2013 it was clear that ERP & big tech were not part of this change.
- There is also a big addition of homegrown solutions, or alternative learning platforms, that is worth noting. The entrance of so many new CBE platforms designed from the ground up for specific purposes is an example of this trend.
Thanks to Chuck, it has been informative (to me, at least) to go back and review these forecasts and see what I got right and what I got wrong. Chuck’s general point on my forecasts seems to be that I am over-emphasizing the emergence of learning platforms as a distinct category from the enterprise LMS, and that we’re still seeing an LMS market, albeit with changed internals (fewer features, more interoperability). I don’t disagree with this point (if I am summarizing accurately). However, if you read the actual forecasts above, I don’t think Chuck and I are too far apart. I may be more optimistic than he is and need to clarify my terminology somewhat, but we’re in the same ballpark.
Now let’s turn the tables. My main critique of Dr. Chuck’s keynote is that he just didn’t commit to the song. We know he is willing to boldly sing, after all (skip ahead to 1:29).
Update: Clarified language on LTI spec
The post Back To The Future: Looking at LMS forecasts from 2011 – 2014 appeared first on e-Literate.
By Phil Hill
In August 2013 Michael described Ray Henderson’s departure from an operational role at Blackboard. As of the end of 2014, Ray is no longer on the board of directors at Blackboard either. He is focusing on his board activity (including In The Telling, our partner for e-Literate TV) and helping with other ed tech companies. While Ray’s departure from the board did not come as a surprise to me, I have been noting the surprising number of other high-level departures from Blackboard recently.
As of December 24, 2014, Blackboard listed 12 company executives in their About > Leadership page. Of those 12 people, 4 have left the company since early January. Below is the list of the leadership team at that time along with notes on changes:
- Jay Bhatt, CEO
- Maurice Heiblum, SVP Higher Education, Corporate And Government Markets (DEPARTED February, new job unlisted)
- Mark Belles, SVP K-12 (DEPARTED March, now President & COO at Teaching Strategies, LLC)
- David Marr, SVP Transact
- Matthew Small, SVP & Managing Director, International
- Gary Lang, SVP Product Development, Support And Cloud Services (DEPARTED January, now VP B2B Technology, Amazon Supply)
- Katie Blot, SVP Educational Services (now SVP Corporate Strategy & Business Development)
- Mark Strassman, SVP Industry and Product Management
- Bill Davis, CFO
- Michael Bisignano, SVP General Counsel, Secretary (DEPARTED February, now EVP & General Counsel at CA Technologies)
- Denise Haselhorst, SVP Human Resources
- Tracey Stout, SVP Marketing
Beyond the leadership team, there are three others worth highlighting.
- Brad Koch, VP Product Management (DEPARTED January, now at Instructure)
- David Ashman, VP Chief Architect, Cloud Architecture (DEPARTED February, now CTO at Teaching Strategies, LLC)
- Mark Drechsler, Senior Director, Consulting (APAC) (DEPARTED March, now at Flinders University)
I already mentioned Brad’s departure and its significance in this post. Mark is significant in terms of his influence in the Australian market, as he came aboard through the acquisition of NetSpot.
David is significant as he was Chief Architect and had the primary vision for Blackboard’s impending move into the cloud. Michael described this move in his post last July.
Phil and I are still trying to nail down some of the details on this one, particularly since the term “cloud” is used particularly loosely in ed tech. For example, we don’t consider D2L’s virtualization to be a cloud implementation. But from what we can tell so far, it looks like a true elastic, single-instance multi-tenant implementation on top of Amazon Web Services. It’s kind of incredible. And by “kind of incredible,” I mean I have a hard time believing it. Re-engineering a legacy platform to a cloud architecture takes some serious technical mojo, not to mention a lot of pain. If it is true, then the Blackboard technical team has to have been working on this for a long time, laying the groundwork long before Jay and his team arrived. But who cares? If they are able to deliver a true cloud solution while still maintaining managed hosting and self-hosted options, that will be a major technical accomplishment and a significant differentiator.
This seems like the real deal as far as we can tell, but it definitely merits some more investigation and validation. We’ll let you know more as we learn it.
This rollout of new cloud architecture has taken a while, and I believe it is hitting select customers this year. Will David’s departure add risk to this move? I talked to David a few weeks ago, and he said that he was leaving for a great opportunity at Teaching Strategies, and that while he was perhaps the most visible face of the cloud at Blackboard, others behind the scenes are keeping the vision. He does not see added risk. While I appreciate the direct answers David gave me to my questions, I still cannot see how the departure of Gary Lang and David Ashman will not add risk.
So why are so many people leaving? From initial research and questions, the general answer seems to be ‘great opportunity for me professionally or personally, loved working at Blackboard, time to move on’. There is no smoking gun that I can find, and most departures are going to very good jobs.
Jay Bhatt, Blackboard’s CEO, provided the following statement based on my questions.
As part of the natural evolution of business, there have been some transitions that have taken place. A handful of executives have moved onto new roles, motivated by both personal and professional reasons. With these transitions, we have had the opportunity to add some great new executive talent to our company as well. Individuals who bring the experience and expertise we need to truly capture the growth opportunity we have in front of us. This includes Mark Gruzin, our new NAHE/ProEd GTM lead, Peter George, our new head of product development and a new general counsel who will be starting later this month. The amazing feedback we continue to receive from customers and others in the industry reinforces how far we’ve come and that we are on the right path. As Blackboard continues to evolve, our leaders remain dedicated to moving the company forward into the next stage of our transformation.
While Jay’s statement matches what I have heard, I would note the following:
- The percentage of leadership changes within a 3 month period rises above the level of “natural evolution of business”. Correlation does not imply causation, but neither does it imply a coincidence.
- The people leaving have a long history in educational technology (Gary Lang being the exception), but I have not seen the same in the reverse direction. Mark Gruzin comes from a background in worldwide sales and the federal software group at IBM. Peter George comes from a background in Identity & Access Management as well as Workforce Management companies. They both seem to be heavy hitters, but not in ed tech. Likewise, Jay himself, along with Mark Strassman and Gary Lang, had no ed tech experience when they joined Blackboard. This is not necessarily a mistake, as fresh ideas and approaches were needed, but it is worth noting the stark differences between the people leaving and the people coming in.
- These changes come in the middle of Blackboard making huge bets on a completely new user experience and a move into the cloud. These changes were announced last year, but they have not been completed. This is the most important area to watch – whether Blackboard completes these changes and successfully rolls them out to the market.
We’ll keep watching and update where appropriate.
The post Blackboard Brain Drain: One third of executive team leaves in past 3 months appeared first on e-Literate.
By Phil Hill
If you want to observe the unfolding impact of an institution ignoring the effect of policy decisions on students, watch the situation at Rutgers University. If you want to see the power of a single student saying “enough is enough”, go thank Betsy Chao and sign her petition. The current situation is that students are protesting Rutgers’ use of ProctorTrack software in online courses – software that costs students $32 in additional fees, accesses their personal webcams, automatically tracks face and knuckle video, and watches browser activity. Students seem to be outraged at the lack of concern over student privacy and the additional fees.
Prior to 2015, Rutgers already provided services for online courses to comply with federal regulations to monitor student identity. The rationale cited [emphasis added]:
The 2008 Higher Education Opportunity Act (HEOA) requires institutions with distance education programs to have security mechanisms in place that ensure that the student enrolled in a particular course is in fact the same individual who also participates in course activities, is graded for the course, and receives the academic credit. According to the Department of Education, accrediting agencies must require distance education providers to authenticate students’ identities through secure (Learning Management System) log-ins and passwords, proctored exams, as well as “new identification technologies and practices as they become widely accepted.”
This academic term, Rutgers added a new option – ProctorTrack:
Proctortrack is cost-effective and scalable for any institution size. Through proprietary facial recognition algorithms, the platform automates proctoring by monitoring student behavior and action for test policy compliance. Proctortrack can detect when students leave their space, search online for additional resources, look at hard notes, consult with someone, or are replaced during a test.
This occurred at the same time as the parent company Verificient received a patent for their approach, in January 2015.
A missing piece not covered in the media thus far is that Rutgers leaves the choice of student identity verification approach up to the individual faculty or academic program [emphasis added].
In face-to-face courses, all students’ identities are confirmed by photo ID prior to sitting for each exam and their activities are monitored throughout the exam period. To meet accreditation requirements for online courses, this process must also take place. Rutgers makes available electronic proctoring services for online students across the nation and can assist with on-site proctoring solutions. Student privacy during a proctored exam at a distance is maintained through direct communication and the use of a secure testing service. Students must be informed on the first day of class of any additional costs they may incur for exam proctoring and student authentication solutions.
The method of student authentication used in a course is the choice of the individual instructor and the academic unit offering the course. In addition to technology solutions such as Examity and ProctorTrack, student authentication can also be achieved through traditional on-site exam proctoring solutions. If you have any questions, talk to your course instructor.
As the use of ProctorTrack rolled out this term, at least one student – senior Betsy Chao – was disturbed and on February 5th created a petition on change.org.
However, I recently received emails from both online courses, notifying me of a required “Proctortrack Onboarding” assessment to set up Proctortrack software. Upon reading the instructions, I was bewildered to discover that you had to pay an additional $32 for the software on top of the $100 convenience fee already required of online courses. And I’m told it’s $32 per online class. $32 isn’t exactly a large sum, but it’s certainly not pocket change to me. Especially if I’m taking more than one online class. I’m sure there are many other college students who echo this sentiment. Not only that, but nowhere in either of the syllabi was there any inkling of the use of Proctortrack or the $32 charge. [snip]
Not only that, but on an even more serious note, I certainly thought that the delicate issue of privacy would be more gracefully handled, especially within a school where the use of webcams was directly involved in a student’s death. As a result, I thought Rutgers would be highly sensitive to the issue of privacy.
If accurate, this clearly violates the notification policy of Rutgers highlighted above. Betsy goes on to describe the alarming implications relating to student privacy.
On February 7th, New Brunswick Today picked up on the story.
Seven years ago, Congress passed the Higher Education Opportunity Act of 2008, authorizing the U.S Department of Education to outline numerous recommendations on how institutions should administer online classes.
The law recommended that a systemic approach be developed to ensure that the student taking exams and submitting projects is the same as the student who receives the final grade, and that institutions of higher education employ “secure logins and passwords, or proctored exams to verify a student’s identity.”
Other recommendations include the use of an identity verification process, and the monitoring by institutions of the evolution of identity verification technology.
Under these recommendations by the U.S. Department of Education, Rutgers would technically be within its rights to implement the use of ProctorTrack, or an alternative form of identity verification technology.
However, the recommendations are by no means requirements, and an institution can decide whether or not to take action.
The student newspaper at Rutgers, The Daily Targum, ran stories on February 9th and February 12th, both highly critical of the new software usage. All of this attention thanks to one student who refused to quietly comply.
The real problem in my opinion can be found in this statement from the New Brunswick Today article.
“The university has put significant effort into protecting the privacy of online students,” said the Rutgers spokesperson. “The 2008 Act requires that verification methods not interfere with student privacy and Rutgers takes this issue very seriously.”
The Rutgers Center for Online and Hybrid Learning and Instructional Technologies (COHLIT) would oversee the implementation of and compliance with the use of ProctorTrack, according to Rutgers spokesperson E.J. Miranda, who insisted it is not mandatory.
“ProctorTrack is one method, but COHLIT offers other options to students, faculty and departments for compliance with the federal requirements, such as Examity and ExamGuard,” said Miranda.
Rutgers has also put up a FAQ page on the subject.
The problem is that Rutgers is paying attention to federal regulations and assuming their solutions are just fine, yet:
- Rutgers staff clearly spent little or no time asking students for their input on such an important and highly charged subject;
- Rutgers policy leaves the choice purely up to faculty or academic programs, meaning that there was no coordinated decision-making and communication to students;
- Now that students are complaining, the Rutgers spokesperson has been getting defensive, implying ‘there’s nothing to see here’ and not taking the student concerns seriously;
- At no point that I can find has Rutgers acknowledged the problem of a lack of notification and new charges for students, nor have they acknowledged that students are saying that this solution goes too far.
That is why this is a fiasco. Student privacy is a big issue, and students should have some input into the policies shaped by institutions. The February 12th student paper put it quite well in conclusion.
Granted, I understand the University’s concern — if Rutgers is implementing online courses, there need to be accountability measures that prevent students from cheating. However, monitoring and recording our computer activity during online courses is not the solution, and failing to properly inform students of ProctorTrack’s payment fee is only a further blight on a rather terrible product. If Rutgers wants to transition to online courses, then the University needs to hold some inkling of respect for student privacy. Otherwise, undergraduates have absolutely no incentive to sign up for online classes.
If the Rutgers administration wants to defuse this situation, they will need to find a way to talk and listen to students on the subject. Pure and simple.
Update: Bumping comment from Russ Poulin into post itself [emphasis added]:
The last paragraph in the federal regulation regarding academic integrity (602.17) reads:
“(2) Makes clear in writing that institutions must use processes that protect student privacy and notify students of any projected additional student charges associated with the verification of student identity at the time of registration or enrollment.”
The privacy issue is always a tricky one when needing to meet the other requirements of this section. But, it does sound like students were not notified of the additional charges at the time of registration.
The post Rutgers and ProctorTrack Fiasco: Impact of listening to regulations but not to students appeared first on e-Literate.
By Phil Hill
Today I facilitated a faculty development workshop at Aurora University, sponsored by the Center for Excellence in Teaching and Learning and the IT Department. I always enjoy sessions like this, particularly with the ability to focus our discussions squarely on technology in support of teaching and learning. The session was titled “Emerging Trends in Educational Technology and Implications for Faculty”. Below are very rough notes, slides, and a follow-up.

Apparent Dilemma and Challenge
Building off of previous presentations at ITC Network, there is an apparent dilemma:
- On one hand, little has changed: Despite all the hype and investment in ed tech, there is only one new fully-established LMS vendor in the past decade (Canvas), and the top uses of the LMS are for course management (rosters, content sharing, grades). Plus the MOOC movement fizzled out, at least for replacing higher ed programs or courses.
- On the other hand, everything has changed: There are examples of redesigned courses such as Habitable Worlds at ASU that are showing dramatic results in the depth of learning by students.
The best lens to understand this dilemma is Everett Rogers’ Diffusion of Innovations and the technology adoption curve and categories. Geoffrey Moore extended this work to call out a chasm between Innovators / Early Adopters on the left side (wanting advanced tech, OK with partial solutions they cobble together, pushing the boundary) and Majority / Laggards on the right side (wanting a full solution – just make it work, make it reliable, make it intuitive). Whereas Moore described Crossing the Chasm for technology companies (moving from one side to the other), in most cases in education we don’t have that choice. The challenge in education is Straddling the Chasm (a concept I’m developing with a couple of consulting clients as well as observations from e-Literate TV case studies):
This view can help explain how advances in pedagogy and learning approaches generally fit on the left side and have not diffused into mainstream, whereas advances in simple course management generally fit on the right side and have diffused, although we want more than that. You can also view the left side as faculty wanting to try new tools and faculty on the right just wanting the basics to work.
The market trend away from the walled garden offers education the chance to straddle the chasm.

Implications for Faculty
1) The changes are not fully in place, and it’s going to be a bumpy ride. One example is difficulty in protecting privacy and allowing accessibility in tools not fully centralized. Plus, the LTI 2.0+ and Caliper interoperability standards & frameworks are still a work in progress.
2) While there are new possibilities to use commercial tools, there are new responsibilities, as the left side of the chasm and non-standard apps require faculty and local support (department, division) to pick up support challenges.
3) There is a challenge in balancing innovation with the student need for consistency across courses, mostly in terms of navigation and course administration.
4) While there are new opportunities for student-faculty and student-student engagement, there are new demands on faculty to change their role and to be available on the students’ schedule.
5) Sometimes, simple is best. It amazes me how often the simple act of moving lecture or content delivery online is trivialized. What is enabled here is the ability for students to work at their own pace and replay certain segments without shame or fear of holding up their peers (or even jumping ahead and accelerating).

Slides
Emerging Trends in Educational Technology and Implications for Faculty, from Phil Hill

Follow-Up
One item discussed in the workshop was how to take advantage of this approach in Aurora’s LMS, Moodle. While Moodle has always supported the open approach and has supported LTI standards, I neglected to mention a missing element. Commercial apps such as Twitter, Google+, etc, do not natively follow LTI standards, which are education-specific. The EduAppCenter was created to help with this challenge by creating a library of apps and wrappers around apps that are LTI-compliant.
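To make the interoperability piece concrete: under LTI 1.x, a “launch” from the LMS to an external tool is just an OAuth 1.0a-signed form POST carrying context about the user and course. The sketch below shows how that signature is computed, using only the Python standard library. The URL, key, secret, and parameter values are hypothetical placeholders, not tied to any specific LMS or tool.

```python
import base64
import hashlib
import hmac
import time
import uuid
from urllib.parse import quote


def pct(s):
    # RFC 3986 percent-encoding as required by OAuth 1.0a
    # (only ALPHA / DIGIT / "-" / "." / "_" / "~" unescaped).
    return quote(str(s), safe="~")


def sign_lti_launch(url, params, consumer_key, consumer_secret):
    """Return the full set of form fields for a signed LTI 1.1 launch."""
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": uuid.uuid4().hex,
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    all_params = {**params, **oauth}
    # Signature base string: METHOD & encoded-URL & encoded-sorted-params.
    param_str = "&".join(
        f"{pct(k)}={pct(v)}" for k, v in sorted(all_params.items())
    )
    base_string = "&".join(["POST", pct(url), pct(param_str)])
    # Signing key is the consumer secret plus an empty token secret.
    key = f"{pct(consumer_secret)}&".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    all_params["oauth_signature"] = base64.b64encode(digest).decode()
    return all_params  # POST these as form fields to the tool's launch URL


# Hypothetical launch from an LMS course into an external tool.
launch = sign_lti_launch(
    "https://tool.example.com/launch",
    {
        "lti_message_type": "basic-lti-launch-request",
        "lti_version": "LTI-1p0",
        "resource_link_id": "course-101-widget",
        "user_id": "student-42",
        "roles": "Learner",
    },
    consumer_key="demo-key",
    consumer_secret="demo-secret",
)
```

The point of the EduAppCenter wrappers is that they supply exactly this kind of plumbing (plus the tool configuration metadata) so that a consumer-web app can be dropped into any LTI-compliant LMS without the faculty member touching OAuth at all.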
The post Slides and Follow-up From Faculty Development Workshop at Aurora University appeared first on e-Literate.
By Phil Hill
Just over a year and a half ago, Devlin Daley left Instructure, the company he co-founded. It turns out that both founders have made changes as Brian Whitmer, the other company co-founder, left his operational role in 2014 but is still on the board of directors. For some context from the 2013 post:
Instructure was founded in 2008 by Brian Whitmer and Devlin Daley. At the time Brian and Devlin were graduate students at BYU who had just taken a class taught by Josh Coates, where their assignment was to come up with a product and business model to address a specific challenge. Brian and Devlin chose the LMS market based on the poor designs and older architectures dominating the market. This design led to the founding of Instructure, with Josh eventually providing seed funding and becoming CEO by 2010.
Brian had a lead role until last year for Instructure’s usability design and for it’s open architecture and support for LTI standards.
The reason for Brian’s departure (based on both Brian’s comments and Instructure’s statements) is his family. Brian’s daughter has Rett Syndrome:
Rett syndrome is a rare non-inherited genetic postnatal neurological disorder that occurs almost exclusively in girls and leads to severe impairments, affecting nearly every aspect of the child’s life: their ability to speak, walk, eat, and even breathe easily.
As Instructure grew, Devlin became the road show guy while Brian stayed mostly at home, largely due to family. Brian’s personal experiences have led him to create a new company: CoughDrop.
Some people are hard to hear — through no fault of their own. Disabilities like autism, cerebral palsy, Down syndrome, Angelman syndrome and Rett syndrome make it harder for many individuals to communicate on their own. Many people use Augmentative and Alternative Communication (AAC) tools in order to help make their voices heard.
We work to help bring out the voices of those with complex communication needs through good tech that actually makes things easier and supports everyone in helping the individual succeed.
This work sounds a lot like early Instructure, as Brian related to me this week.
Augmentative Communication is a lot like LMS space was, in need of a reminder of how things can be better.
By the middle of 2014, Brian had left all operational duties, although he remains on the board (and plans to stay on the board and act as an adviser).
How will this affect Instructure? I would look at Brian’s key roles in usability and open platform to see if Instructure keeps up his vision. From my view the usability is just baked into the company’s DNA and will likely not suffer. The question is more on the open side. Brian led the initiative for the App Center as I described in 2013:
The key idea is that the platform is built to easily add and support multiple applications. The apps themselves will come from EduAppCenter, a website that launched this past week. There are already more than 100 apps available, with the apps built on top of the Learning Tools Interoperability (LTI) specification from the IMS Global Learning Consortium. There are educational apps available (e.g. Khan Academy, CourseSmart, Piazza, the big publishers, Merlot) as well as general-purpose tools (e.g. YouTube, Dropbox, WordPress, Wikipedia).
The apps themselves are wrappers that pre-integrate and give structured access to each of these tools. Since LTI is the most far-reaching ed tech specification, most of the apps should work on other LMS systems. The concept is that other LMS vendors will also sign on to the edu-apps site, truly making them interoperable. Whether that happens in reality remains to be seen.
What the App Center will bring once it is released is the simple ability for Canvas end-users to add the apps themselves. If a faculty member adds an app, it will be available for their courses, independent of whether any other faculty use that setup. The same applies for students who might, for example, prefer to use Dropbox to organize and share files rather than native LMS capabilities.
The actual adoption by faculty and institutions of this capability takes far longer than people writing about it (myself included) would desire. It takes time and persistence to keep up the faith. The biggest risk that Instructure faces by losing Brian’s operational role is whether they will keep this vision and maintain their support for open standards and third-party apps – opening up the walled garden, in other words.
Melissa Loble, Senior Director of Partners & Programs at Instructure, will play a key role in keeping this open vision alive. I have not heard anything indicating that Instructure is changing, but this is a risk from losing a founder who internally ‘owned’ this vision.
I plan to share some other HR news from the ed tech market in future posts, but for now I wish Brian the best with his new venture – he is one of the truly good guys in ed tech.
Update: I should have given credit to Audrey Watters, who prompted me to get a clear answer on this subject.
- Much to Brian’s credit
- Formerly Associate Dean of Distance Ed at UC Irvine and key player in Walking Dead MOOC
The post Brian Whitmer No Longer in Operational Role at Instructure appeared first on e-Literate.
By Phil Hill
Last week the University of Texas’ Dana Center announced a new initiative to digitize their print-based math curriculum and expand to all 50 community colleges in Texas. The New Mathways Project is ‘built around three mathematics pathways and a supporting student success course’, and they have already developed curriculum in print:
Tinkering with the traditional sequence of math courses has long been a controversial idea in academic circles, with proponents of algebra saying it teaches valuable reasoning skills. But many two-year college students are adults seeking a credential that will improve their job prospects. “The idea that they should be broadly prepared isn’t as compelling as organizing programs that help them get a first [better-paying] job, with an eye on their second and third,” says Uri Treisman, executive director of the Charles A. Dana Center at UT Austin, which spearheads the New Mathways Project. [snip]
Treisman’s team has worked with community-college faculty to create three alternatives to the traditional math sequence. The first two pathways, which are meant for humanities majors, lead to a college-level class in statistics or quantitative reasoning. The third, which is still in development, will be meant for science, technology, engineering, and math majors, and will focus more on algebra. All three pathways are meant for students who would typically place into elementary algebra, just one level below intermediate algebra.
When starting, the original problem was viewed as ‘fixing developmental math’. As they got into the design, the team restated the problem to be solved as ‘developing coherent pathways through gateway courses into modern degrees of study that lead to economic mobility’. The Dana Center worked with the Texas Association of Community Colleges to develop the curriculum, which is focused on active learning and group work that can be tied to the real world.
The Dana Center approach is based on four principles:
- Courses students take in college math should be connected to their field of study.
- The curriculum should accelerate or compress to allow students to move through developmental courses in one year.
- Courses should align with student support more closely, and sophisticated learning support will be connected to campus support structures.
- Materials should be connected to context-sensitive improvement strategy.
What they have found is that multiple programs nationwide are working along roughly the same principles, including the California improvement project, the accelerated learning project at Baltimore City College, and work in Tennessee at Austin Peay College. In their view, the fact that independent groups have come to similar conclusions adds validity to the overall concept.
One interesting aspect of the project is that it is targeted at an entire state’s community college system – this is not a pilot approach. After winning a Request for Proposal (RFP) selection, Pearson will integrate the active-learning content into a customized mix of MyMathLab, Learning Catalytics, StatCrunch, and CourseConnect tools. Given the Dana Center’s small size, one differentiator for Pearson was its scale and its ability to help a program grow statewide.
Another interesting aspect is the partnership approach with TACC. As shared on the web site:
- A commitment to reform: The TACC colleges have agreed to provide seed money for the project over 10 years, demonstrating a long-term commitment to the project.
- Input from the field: TACC member institutions will serve as codevelopers, working with the Dana Center to develop the NMP course materials, tools, and services. They will also serve as implementation sites. This collaboration with practitioners in the field is critical to building a program informed by the people who will actually use it.
- Alignment of state and institutional policies: Through its role as an advocate for community colleges, TACC can connect state and local leaders to develop policies to support the NMP goal of accelerated progress to and through coursework to attain a degree.
MDRC, the same group analyzing CUNY’s ASAP program, will provide independent reporting of the results. Implementation data should be available by the end of the year, with randomized controlled studies to be released in 2016.
To me, this is a very interesting initiative to watch. Given MDRC’s history of thorough documentation, we should be able to learn plenty of lessons from the state-wide deployment.
- Disclosure: Pearson is a client of MindWires Consulting.
The post Dana Center and New Mathways Project: Taking curriculum innovations to scale appeared first on e-Literate.
By Michael Feldstein
In parts 1, 2, 3, and 4 of this series, I laid out a model for a learning platform that is designed to support discussion-centric courses. I emphasized how learning design and platform design have to co-evolve, which means, in part, that a new platform isn’t going to change much if it is not accompanied by pedagogy that fits well with the strengths and limitations of the platform. I also argued that we won’t see widespread changes in pedagogy until we can change faculty relationships with pedagogy (and course ownership), and I proposed a combination of platform, course design, and professional development that might begin to chip away at that problem. All of these ideas are based heavily on lessons learned from social software and from cMOOCs.
In this final post in the series, I’m going to give a few examples of how this model could be extended to other assessment types and related pedagogical approaches, and then I’ll finish up by talking about what it would take for the peer grading system described in part 2 to (potentially) be accepted by students as at least a component of a grading system in a for-credit class.

Competency-Based Education
I started out the series talking about Habitable Worlds, a course out of ASU that I’ve written about before and that we feature in the forthcoming e-Literate TV series on personalized learning. It’s an interesting hybrid design. It has strong elements of competency-based education (CBE) and mastery learning, but the core of it is problem-based learning (PBL). The competency elements are really just building blocks that students need in the service of solving the big problem of the course. Here’s course co-designer and teacher Ariel Anbar talking about the motivation behind the course:
It’s clear that the students are focused on the overarching problem rather than the competencies:
And, as I pointed out in the first post in the series, they end up using the discussion board for the course very much like professionals might use a work-related online community of practice to help them work through their problems when they get stuck:
This is exactly the kind of behavior that we want to see and that the analytics I described in part 3 are designed to measure. You could attach a grade to the students’ online discussion behaviors. But it’s really superfluous. Students get their grade from solving the problem of the course. That said, it would be helpful to the students if productive behaviors were highlighted by the system in order to make them easier to learn. And by “learn,” I don’t mean “here are the 11 discussion competencies that you need to display.” I mean, rather, that there are different patterns of productive behavior in a high-functioning group. It would be good for students to see not only the atomic behaviors but different patterns and even how different patterns complement each other within a group. Furthermore, I could imagine that some employers might be interested in knowing the collaboration style that a potential employee would bring to the mix. This would be a good fit for badges.

Notice that, in this model, badges, competencies, and course grades serve distinct purposes. They are not interchangeable. Competencies and badges are closer to each other than either is to a grade. They both indicate that the student has mastered some skill or knowledge that is necessary to the central problem. But they are different from each other in ways that I haven’t entirely teased out in my own head yet. And they are not sufficient for a good course grade. To get that, the student must integrate and apply them toward generating a novel solution to a complex problem.
The one aspect of Habitable Worlds that might not fit with the model I’ve outlined in this series is the degree to which it has a mandatory sequence. I don’t know the course well enough to have a clear sense, but I suspect that the lessons are pretty tightly scripted, due in part to the fact that the overarching structure of the course is based on an equation. You can’t really drop out one of the variables or change the order willy-nilly in an equation. There’s nothing wrong with that in and of itself, but in order to take full advantage of the system I’ve proposed here, the course design must have a certain amount of play in it for faculty teaching their individual classes to contribute additions and modifications back. It’s possible to use the discussion analytics elements without the social learning design elements, but then you don’t get the potential the system offers for faculty buy-in “lift.”

Adding Assignment Types
I’ve written this entire series talking about “discussion-based courses” as if that were a thing, but it’s vastly more common to have discussion and writing courses. One interesting consequence of the work that we did abstracting out the Discourse trust levels is that we created a basic (and somewhat unconventional) generalized peer review system in the process. As long as conversation is the metric, we can measure the conversational aspects generated by any student-created artifact. For example, we could create a facility in OAE for students to claim the RSS feeds from their blogs. Remember, any integration represents a potential opportunity to make additional inferences. Once a post is syndicated into the system and associated with the student, it can generate a Discourse thread just like any other document. That discussion can be included in the trust analytics. With a little more work, you could have students apply direct ratings such as “likes” to the documents themselves. Making the assessment work for these different types isn’t quite as straightforward as I’m making it sound, either from a user experience design perspective or from a technology perspective. But the foundation is there to build on.
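As a rough sketch of the syndication step just described: the snippet below parses a minimal RSS 2.0 feed and turns each item into a record that could seed a discussion thread tied back to the student. The sample feed, the field names, and the `syndicate` function are all invented for illustration; a real integration would post into Discourse via its API rather than building dictionaries.

```python
import xml.etree.ElementTree as ET

# A minimal RSS 2.0 feed of the kind a student might claim for their blog.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Student Blog</title>
  <item><title>My week 3 reflection</title>
    <link>https://example.edu/blog/week-3</link>
    <description>Thoughts on the course problem so far.</description></item>
</channel></rss>"""

def syndicate(feed_xml, student_id):
    """Turn each feed item into a record that could seed a discussion topic."""
    channel = ET.fromstring(feed_xml).find("channel")
    topics = []
    for item in channel.findall("item"):
        topics.append({
            "author": student_id,                 # associates the post with the student
            "title": item.findtext("title"),
            "source_url": item.findtext("link"),  # linkage back to the original artifact
            "body": item.findtext("description"),
        })
    return topics

topics = syndicate(SAMPLE_FEED, student_id="student-42")
print(topics[0]["title"])
```

Because the spawned topic carries the `source_url` back to the original artifact, any conversation it generates stays associated with both the document and its author, which is what lets it feed the trust analytics.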
One of the commenters on part 1 of the series provided another interesting use case:
I’m the product manager for Wiki Education Foundation, a nonprofit that helps professors run Wikipedia assignments, in which the students write Wikipedia articles in place of traditional term papers. We’re building a system for managing these assignments, from building a week-by-week assignment plan that follows best practices, to keeping track of student activity on Wikipedia, to pulling in view data for the articles students work on, to finding automated ways of helping students work through or avoid the typical stumbling blocks for new Wikipedia editors.
Wikipedia is its own rich medium for conversation and interaction. I could imagine taking that abstracted peer review system and just hooking it up directly to student activity within Wikipedia itself. Once we start down this path, we really need to start talking about IMS Caliper and federated analytics. This has been a real bottom-up analysis, but we quickly reach the point where we want to start abstracting out the particular systems or even system types, and start looking at a general architecture for sharing learning data (safely). I’m not going to elaborate on it here—even I have to stop at some point—but again, if you made it this far, you might find it useful to go back and reread my original post on the IMS Caliper draft standard and the comments I made on its federated nature in my most recent walled garden post. Much of what I have proposed here from an architectural perspective is designed specifically with a Caliper implementation in mind.

Formal Grading
No, really. Even I run out of gas. Eventually.
For a while.
By Michael Feldstein
In part 1 of this series, I talked about some design goals for a conversation-based learning platform, including lowering the barriers and raising the incentives for faculty to share course designs and experiment with pedagogies that are well suited for conversation-based courses. Part 2 described a use case of a multi-school faculty professional development course which would give faculty an opportunity to try out these affordances in a low-stakes environment. In part 3, I discussed some analytics capabilities that could be added to a discussion forum—I used the open source Discourse as the example—which would lead to richer and more organic assessments in conversation-based courses. But we haven’t really gotten to the hard part yet. The hard part is encouraging experimentation and cross-fertilization among faculty. The problem is that faculty are mostly not trained, not compensated, and otherwise not rewarded for their teaching excellence. Becoming a better teacher requires time, effort, and thought, just as becoming a better scholar does. But even faculty at many so-called “teaching schools” are given precious little in the way of time or resources to practice their craft properly, never mind improving it.
The main solution to this problem that the market has offered so far is “courseware,” which you can think of as a kind of course-in-a-box. In other words, it’s an attempt to move as much of the “course” as possible into the “ware,” or the product. The learning design, the readings, the slides, and the assessments are all created by the product maker. Increasingly, the students are even graded by the product.
This approach as popularly implemented in the market has a number of significant and fairly obvious shortcomings, but the one I want to focus on for this post is that these packages are still going to be used by faculty whose main experience is the lecture/test paradigm. Which means that, whatever the courseware learning design originally was, it will tend to be crammed into a lecture/test paradigm. In the worst case, the result is that we have neither the benefit of engaged, experienced faculty who feel ownership of the course nor an advanced learning design that the faculty member has not learned how to implement.
One of the reasons that this works from a commercial perspective is that it relies on the secret shame that many faculty members feel. Professors were never taught to teach, nor are they generally given the time, money, and opportunities necessary to learn and improve, but somehow they have been made to feel that they should already know how. To admit otherwise is to admit one’s incompetence. Courseware enables faculty to keep their “shame” secret by letting the publishers do the driving. What happens in the classroom stays in the classroom. In a weird way, the other side of the shame coin is “ownership.” Most faculty are certainly smart enough to know that neither they nor anybody else is going to get rich off their lecture notes. Rather, the driver of “ownership” is fear of having the thing I know how to do in my classroom taken away from me as “mine” (and maybe exposing the fact that I’m not very good at this teaching thing in the process). So many instructors hold onto the privacy of their classrooms and the “ownership” of their course materials for dear life.
Obviously, if we really want to solve this problem at its root, we have to change faculty compensation and training. Failing that, the next best thing is to try to lower the barriers and increase the rewards for sharing. This is hard to do, but there are lessons we can learn from social media. In this post, I’m going to try to show how learning design and platform design in a faculty professional development course might come together toward this end.
You may recall from part 2 of this series that the use case I have chosen is a faculty professional development “course,” using our forthcoming e-Literate TV series about personalized learning as a concrete example. The specific content isn’t that important except to make the thought experiment a little more concrete. The salient details are as follows:
- The course is low-stakes; nobody is going to get mad if our grading scheme is a little off. To the contrary, because it’s a group of faculty engaged in professional development about working with technology-enabled pedagogy, the participants will hopefully bring a sense of curiosity to the endeavor.
- The course has one central, course-long problem or question: What, if anything, do we (as individual faculty, as a campus, and as a broader community of teachers) want to do with so-called “personalized learning” tools and approaches? Again, the specific question doesn’t matter so much as the fact that there is an overarching question where the answer is going to be specific to the people involved rather than objective and canned. That said, the fact that the course is generally about technology-enabled pedagogy does some work for us.
- Multiple schools or campuses will participate in the course simultaneously (though not in lock-step, as I will discuss in more detail later in this post). Each campus cohort will have a local facilitator who will lead some local discussions and customize the course design for local needs. That said, participants will also be able (and encouraged) to have discussions across campus cohorts.
- The overarching question naturally lends itself to discussion among different subgroups of the larger inter-campus group, e.g., teachers of the same discipline, people on the same campus, among peer schools, etc.
That last one is critical. There are natural reasons for participants to want to discuss different aspects of the overarching question of the course with different peer groups within the course. Our goal in both course and platform design is to make those discussions as easy and immediately rewarding as possible. We are also going to take advantage of the electronic medium to blur the distinction between contributing a comment, or discussion “post,” and contributing longer artifacts such as documents or even course designs.
We’ll need a component for sharing and customizing the course materials, or “design” and “curriculum,” for the local cohorts. Again, I will choose a specific piece of software in order to make the thought experiment more concrete, but as with Discourse in part 3 of this series, my choice of example is in no way intended to suggest that it is the only or best implementation. In this case, I’m going to use the open source Apereo OAE for this component in the thought experiment.
When multiple people teach their own courses using the same existing curricular materials (like a textbook, for example), there is almost always a lot of customization that goes on at the local level. Professor A skips chapters 2 and 3. Professor B uses her own homework assignments instead of the end-of-chapter problems. Professor C adds in special readings for chapter 7. And so on. With paper-based books, we really have no way of knowing what gets used and reused, what gets customized (and how it gets customized), and what gets thrown out. Recent digital platforms, particularly from the textbook publishers, are moving in the direction of being able to track those things. But academia hasn’t really internalized the notion that courses are more often customized than built from scratch, never mind the idea that those customizations could (and should) be shared for the sake of collective improvement. What we want is a platform that makes the potential for this virtuous cycle visible and easy to take advantage of without forcing participants to sacrifice any local control (including the control to take part or all of their local course private if that’s what they want to do).
OAE allows a user to create content that can be published into groups. But published doesn’t mean copied. It means linked. We could have the canonical copy of the ETV personalized learning MOOC (for example), which includes all the episodes from all the case studies plus any supplemental materials we think are useful. The educational technology director at Some State University (SSU) could create a group space for faculty and other stakeholders from her campus. She could choose to pull some, but not all, of the materials from the canonical course into her space. She could rearrange the order. You may recall from part 3 that Discourse can integrate with WordPress, spawning a discussion for every new blog post. We could easily imagine the same kind of integration with OAE. Since anything the campus facilitator pulls from the canonical course copy will be surfaced in her course space rather than copied into it, we would still have analytics on use of the curricular materials across the cohorts, and any discussions in Discourse that are related to the original content items would maintain their linkage (including the ability to automatically publish the “best” comments from the thread back into SSU’s course space). The facilitator could also add her own content, make her space private (from the default of public), and spawn private cohort-specific conversations. In other words, she could make it her own course.
I slipped the first bit of magic into that last sentence. Did you catch it? When the campus facilitator creates a new document, the system can automatically spawn a new discussion thread in Discourse. By default, new documents from the local cohort become available for discussion to all cohorts. And with any luck, some of that discussion will be interesting and rewarding to the person creating the document. The cheap thrill of any successful social media platform is the (ideally instant) gratification of seeing somebody respond positively to something you say or do. That’s the feeling we’re trying to create. Furthermore, because of the way OAE shares documents across groups, if the facilitator in another cohort were to pull your document into her course design, it wouldn’t have to be invisible to you the way creating a copy is. We could create instant and continuously updated feedback on the impact of your sharing. Some documents (and discussions) in some cohorts might need to be private, and OAE supports that, but the goal is to make private, cohort- (or class-)internal sharing feel something like direct messaging on Twitter. There is a place for it, but it’s not what makes the experience rewarding.
To that end, we could even feed sharing behavior from OAE into the trust analytics I described in part 3 of this post series. One of the benefits of abstracting the trust levels from Discourse into an external system that has open APIs is that it can take inputs from different systems. It would be possible, for example, to make having your document shared into another cohort on OAE or having a lot of conversation generated from your document count toward your trust level. I don’t love the term “gamification,” but I do love the underlying idea that a well-designed system should make desired behaviors feel good. That’s also a good principle for course design.
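The idea of an abstracted trust service with open APIs can be sketched concretely: a small ledger that accepts trust-relevant events from any integrated system (forum likes, OAE sharing, spawned discussions) and maps the accumulated score onto Kim’s level progression. The event names, weights, and thresholds below are my own illustrative assumptions, not anything Discourse or OAE actually implements.

```python
from collections import defaultdict

# Hypothetical event types and weights; the names and numbers are
# illustrative, not Discourse's or OAE's actual configuration.
WEIGHTS = {
    "like_received": 1,           # from the discussion forum
    "doc_shared_into_cohort": 3,  # from the content system (e.g., OAE)
    "discussion_spawned": 2,      # a document generated a new thread
}

class TrustLedger:
    """Aggregates trust-relevant events from any integrated system."""
    def __init__(self):
        self.scores = defaultdict(int)

    def record(self, participant, event_type):
        # Unknown event types contribute nothing rather than erroring out.
        self.scores[participant] += WEIGHTS.get(event_type, 0)

    def level(self, participant):
        # Map the raw score onto the visitor > novice > regular > leader > elder scale.
        score = self.scores[participant]
        for floor, name in [(20, "elder"), (10, "leader"), (5, "regular"), (2, "novice")]:
            if score >= floor:
                return name
        return "visitor"

ledger = TrustLedger()
ledger.record("prof-a", "like_received")           # forum event
ledger.record("prof-a", "doc_shared_into_cohort")  # content-system event
ledger.record("prof-a", "discussion_spawned")
print(ledger.level("prof-a"))  # score 1 + 3 + 2 = 6, so "regular"
```

The point of the design is that `record` doesn’t care which system the event came from, which is exactly the property that lets sharing behavior in OAE count toward a trust level earned mostly in Discourse.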
I’m going to take a little detour into some learning design elements here, because they are critical success factors for the platform experience. First, the Problem-based Learning (PBL)-like design of the course is what makes it possible for individual cohorts to proceed at their own pace, in their own order, and with their own shortcuts or added excursions and still enable rich and productive discussions across cohorts. A course design that requires that units be released to the participants one week at a time will not work, because discussions will get out of sync as different cohorts proceed differently, and synchronization matters to the course design. If, on the other hand, synchronization across cohorts doesn’t matter because participants are going to the discussion authentically as needed to work out problems (the way they do all the time in online communities but much less often in typical online courses), then discussions will naturally wax and wane with participant needs and there will be no need to orchestrate them. Second, the design is friendly to participation through local cohorts but doesn’t require it. If you want to participate in the course as a “free agent” and have a more traditional MOOC-like experience, you could simply work off the canonical copy of the course materials and follow the links to the discussions.
End of detour. There’s one more technology piece I’d like to add to finish off the platform design for our use case. Suppose that all the participants could log into the system with their university credentials through an identity management scheme like InCommon. This may seem like a trivial implementation detail that’s important mainly for participant convenience, but it actually adds the next little bit of magic to the design. In part 3, I commented that integrating the discussion forum with a content source enables us to make new inferences because we now know that a discussion is “about” the linked content in some sense, and because content creators often have stronger motivations than discussion participants to add metadata like tags or learning objectives that tell us more about the semantics. One general principle that is always worth keeping in mind when designing learning technologies these days is that any integration presents a potential opportunity for new inferences. In the case of single sign-on, we can go to a data source like IPEDS to learn a lot about the participants’ home institutions and therefore about their potential affinities. Affinities are the fuel that provides any social platform with its power. In our use case, participants might be particularly interested in seeing comments from their peer institutions. If we know where they are coming from, then we can do that automatically rather than forcing them to enter information or find each other manually. In a course environment, faculty might want to prioritize the trust signals from students at similar institutions over those from very different institutions. We could even generate separate conversation threads based on these cohorts. Alternatively, people might want to find people with high trust levels who are geographically near them in order to form meetups or study groups.
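The affinity inference can be sketched simply: once single sign-on tells us each participant’s home campus, an IPEDS-style lookup gives us institutional attributes, and bucketing those attributes yields peer cohorts automatically. Every record, field name, and bucketing rule below is invented for illustration.

```python
# Hypothetical participant records with attributes an IPEDS-style
# lookup might return for each home institution.
PARTICIPANTS = [
    {"user": "alice", "institution": {"sector": "public 2-year", "enrollment": 8000}},
    {"user": "bob",   "institution": {"sector": "public 2-year", "enrollment": 6500}},
    {"user": "carol", "institution": {"sector": "private 4-year", "enrollment": 2000}},
]

def peer_group(inst):
    """Bucket institutions by sector and rough size so peers find each other."""
    size = "large" if inst["enrollment"] >= 5000 else "small"
    return (inst["sector"], size)

def affinity_cohorts(participants):
    # Group participants whose institutions land in the same bucket.
    cohorts = {}
    for p in participants:
        cohorts.setdefault(peer_group(p["institution"]), []).append(p["user"])
    return cohorts

cohorts = affinity_cohorts(PARTICIPANTS)
print(cohorts[("public 2-year", "large")])
```

The participants never enter any of this themselves; the grouping falls out of the identity integration, which is the “magic” the paragraph above describes.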
And that’s it, really. The platform consists of a discussion board, a content system, and a federated identity management system that have been integrated in particular ways and used in concert with particular course design elements. There is nothing especially new about either the technology or the pedagogy. The main innovation here, to the degree that there is one, is combining them in a way that creates the right incentives for the participants. When I take a step back and really look at it, it seems too simple and too much like other things I’ve seen and too demanding of participants to possibly work. Then again, I said the same thing about blogs, Facebook, Twitter, and Instagram. They all seemed stupid to me before I tried them. Facebook still seems stupid to me, and I haven’t tried Instagram, but the point remains that these platforms succeeded not because of any obvious feat of technical originality but because they got the incentive structures right in lots of little ways that added up to something big. What I’m trying to do here with this design proposal is essentially to turn the concept of courseware inside out, changing the incentive structures in lots of little ways that hopefully add up to something bigger. Rather than cramming as much of the “course” as possible into the “ware,” reinforcing the isolation of the classroom in the process, I’m trying to have the “ware” be generated by the living, human-animated course, making learning and curriculum design inherently social processes and hopefully thereby circumventing the shame reflex. And I’m trying to do that in the context of a platform and learning design that attempt to both reward and quantify social problem solving competencies in the class itself.
I don’t know if it will fly, but it might. Stranger things have happened.
In the last post in this series, I will discuss some extensions that would probably have to be made in order to use this approach in a for-credit class as well as various miscellaneous considerations. Hey, if you’ve made it this far, you might as well read the last one and find out who dunnit.
- Of course, I recognize that some disciplines don’t do a lot of lecture/test (although they may do lecture/essay). These are precisely the disciplines in which courseware has been the least commercially successful.
- My wife agreeing to marry me, for instance.
By Michael Feldstein
In the first part of this series, I identified four design goals for a learning platform that supports conversation-based courses. In the second part, I brought up a use case of a kind of faculty professional development course that works as a distributed flip, based on our forthcoming e-Literate TV series on personalized learning. In the next two posts, I’m going to go into some aspects of the system design. But before I do that, I want to address a concern that some readers have raised. Pointing to my apparently infamous “Dammit, the LMS” post, they raise the question of whether I am guilty of a certain amount of techno-utopianism: whether I’m assuming that just building a new widget will solve a difficult social problem, and whether any system, even if it starts out relatively pure, will inevitably become just another LMS as the same social forces come into play.
I hope not. The core lesson of “Dammit, the LMS” is that platform innovations will not propagate unless the pedagogical changes that take advantage of those innovations also propagate, and pedagogical changes will not propagate without changes in the institutional culture in which they are embedded. Given that context, the use case I proposed in part 2 of this series is every bit as important as the design goals in part 1 because it provides a mechanism by which we may influence the culture. This actually aligns well with the “use scale appropriately” design goal from part 1, which included this bit:
Right now, there is a lot of value to the individual teacher of being able to close the classroom door and work unobserved by others. I would like to both lower barriers to sharing and increase the incentives to do so. The right platform can help with that, although it’s very tricky. Learning Object Repositories, for example, have largely failed to be game changers in this regard, except within a handful of programs or schools that have made major efforts to drive adoption. One problem with repositories is that they demand work on the part of the faculty while providing little in the way of rewards for sharing. If we are going to overcome the cultural inhibitions around sharing, then we have to make the barrier as low as possible and the reward as high as possible.
When we get to part 4 of the series, I hope to show how the platform, pedagogy, and culture might co-evolve through a combination of curriculum design, learning design, and platform design, with faculty as participants in a low-stakes environment. But before we get there, I have to first put some building blocks in place related to fostering and assessing educational conversation. That’s what I’m going to try to do in this post.
You may recall from part 1 of this series that trust, or reputation, has been the main proxy for expertise throughout most of human history. Credentials are a relatively new invention designed to solve the problem that person-to-person trust networks start to break down when population sizes get beyond a certain point. The question I raised was whether modern social networking platforms, combined with analytics, can revive something like the original trust network. LinkedIn is one example of such an effort. We want an approach that will enable us to identify expertise through trust networks based on expertise-relevant conversations of the type that might come up in a well facilitated discussion-based class.
It turns out that there is quite a bit of prior art in this area. Discussion board developers have been interested in ways to identify experts in the conversation for as long as internet-based discussions have grown large enough that people need help figuring out who to pay attention to and who to ignore (and who to actively filter out). Keeping the signal-to-noise ratio high was a design goal, for example, in the early versions of the software developed to manage the Slashdot community in the late 1990s. (I suspect some of you have even earlier examples.) Since that design goal amounts to identifying community-recognized expertise and value in large-scale but authentic conversations (authentic in the sense that people are not participating because they were told to participate), it makes sense to draw on that accumulated experience in thinking through our design challenges. For our purposes, I’m going to look at Discourse, an open source discussion forum that was designed by some of the people who worked on the online community Stack Overflow.
Discourse has a number of features for scaling conversations that I won’t get into here, but their participant trust model is directly relevant. They base their model on one described by Amy Jo Kim in her book Community Building on the Web:
The progression, visitor > novice > regular > leader > elder, provides a good first approximation of levels for an expertise model. (The developers of Discourse change the names of the levels for their own purposes, but I’ll stick with the original labels here.) Achieving a higher level in Discourse unlocks certain privileges. For example, only leaders or elders can recategorize or rename discussion threads. This is mostly utilitarian, but it has an element of gamification to it. Your trust level is a badge certifying your achievements in the discussion community.
The model that Discourse currently uses for determining participant trust levels is pretty simple. For example, in order to get to the middle trust level, a participant must do the following:
- visiting at least 15 days, not sequentially
- casting at least 1 like
- receiving at least 1 like
- replying to at least 3 different topics
- entering at least 20 topics
- reading at least 100 posts
- spending a total of 60 minutes reading posts
This is not terribly far from a very basic class participation grade. It is grade-like in the sense that it is a five-point evaluative scale, but it is simple like the most basic of participation grades in the sense that it mostly looks at quantity rather than quality of participation. The first hint of a difference is “receiving at least 1 like.” A “like” is essentially a micro-scale peer grade.
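The promotion check described above can be sketched as a simple predicate over a participant’s activity counters. The field names here are illustrative assumptions for the sake of the sketch, not Discourse’s actual schema:

```python
# Hypothetical sketch of a Discourse-style trust-level promotion check.
# Field names are assumptions for illustration, not Discourse's schema.

from dataclasses import dataclass

@dataclass
class ParticipantStats:
    days_visited: int
    likes_given: int
    likes_received: int
    topics_replied_to: int
    topics_entered: int
    posts_read: int
    minutes_reading: int

def qualifies_for_member_level(s: ParticipantStats) -> bool:
    """True if the participant meets every threshold listed above
    for the middle trust level."""
    return (s.days_visited >= 15
            and s.likes_given >= 1
            and s.likes_received >= 1
            and s.topics_replied_to >= 3
            and s.topics_entered >= 20
            and s.posts_read >= 100
            and s.minutes_reading >= 60)
```

Note that every criterion is a raw count, which is exactly why the model feels like a quantity-only participation grade.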
We could also imagine other, more sophisticated metrics that directly assess the degree to which a participant is considered to be a trusted community member. Here are a few examples:
- The number of replies or quotes that a participant’s comments generate
- The number of mentions the participant generates (in the @twitterhandle sense)
- The number of either of the above from participants who have earned high trust levels
- The number of “likes” a participant receives for posts in which they mention or quote another post
- The breadth of the network of people with whom the participant converses
- Discourse analysis of the language used in the participant’s posts to see whether they are being helpful or asking clear questions (for example)
Some of these metrics use the trust network to evaluate expertise, e.g., “many participants think you said something smart here” or “trusted participants think you said something smart here.” But some directly measure actual competencies, e.g., the ability to find pre-existing information and quote it. You can combine these into a metric of the ability to find pre-existing relevant information and quote it appropriately by looking at posts that contain a quote and were liked by a number of participants or by trusted participants.
Think about these metrics as the basis for a grading system. Does the teacher want to reward students who show good teamwork and mentoring skills? Then she might increase the value of metrics like “post rated helpful by a participant with less trust” or “posts rated helpful by many participants.” If she wants to prioritize information finding skills, then she might increase the weight of appropriate quoting of relevant information. Note that, given a sufficiently rich conversation with a sufficiently rich set of metrics, there will be more than one way to climb the five-point scale. We are not measuring fine-grained knowledge competencies. Rather, we are holistically assessing the student’s capacity to be a valuable and contributing member of a knowledge-building community. There should be more than one way to earn high marks at that. And again, these are higher-order competencies that most employers value highly. They are just not broken down into itsy bitsy pass-or-fail knowledge chunks.
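A teacher-tunable scheme like this could be as simple as a configurable weighted sum over whatever metrics the platform exposes. The metric names below are hypothetical, purely to illustrate how a teacher emphasizing mentoring might set the weights:

```python
# Illustrative sketch: combining per-participant metrics into one
# "trust score" with teacher-configurable weights. The metric names
# are hypothetical, not actual Discourse metrics.

def trust_score(metrics: dict, weights: dict) -> float:
    """Weighted sum over whichever metrics the teacher chooses to value."""
    return sum(weights.get(name, 0.0) * value
               for name, value in metrics.items())

# A teacher who wants to reward mentoring might weight helpfulness to
# lower-trust participants more heavily than raw reply counts.
mentoring_weights = {
    "replies_generated": 1.0,
    "likes_from_trusted": 2.0,
    "helpful_to_lower_trust": 3.0,
    "relevant_quotes": 0.5,
}

alice = {
    "replies_generated": 12,
    "likes_from_trusted": 4,
    "helpful_to_lower_trust": 6,
    "relevant_quotes": 2,
}

score = trust_score(alice, mentoring_weights)  # 12 + 8 + 18 + 1 = 39.0
```

Because the weights are just data, two teachers with different pedagogical priorities can grade the same conversation differently, which is precisely the “more than one way to climb the scale” property described above.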
Unfortunately, Discourse doesn’t have this rich array of metrics or options for combining them. So one of the first things we would want to do in order to adapt it for our use case is abstract Discourse’s trust model, as well as all the possible inputs, using IMS Caliper (or something based on the current draft of it, anyway). There are a few reasons for this. First, we’d want to be able to add inputs as we think of them. For example, we might want to include how many people start using a tag that a participant has introduced. You don’t want to have to hard code every new parameter and every new way of weighing the parameters against each other. Second, we’re eventually going to want to add other forms of input from other platforms (e.g., blog posts) that contribute to a participant’s expertise rating. So we need the ratings code in a form that is designed for extension. We need APIs. And finally, we’d want to design the system so that any vendor, open source, or home-grown analytics system could be plugged in to develop the expertise ratings based on the inputs.
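One way to sketch that kind of extensibility is a registry of metric functions computed over a shared event stream, so that a new input (like tag adoption) can be added without touching any existing scoring code. The event fields below are assumptions loosely inspired by the Caliper idea of actor/action/object events, not the actual specification:

```python
# Sketch of an extensible metrics layer: new inputs register themselves
# rather than being hard-coded. Event fields are illustrative
# assumptions, loosely inspired by Caliper-style actor/action events.

METRICS = {}

def metric(name):
    """Decorator that registers a metric function over an event stream."""
    def register(fn):
        METRICS[name] = fn
        return fn
    return register

@metric("posts_liked")
def posts_liked(events, actor):
    # How many "Liked" events target content this actor authored.
    return sum(1 for e in events
               if e.get("action") == "Liked"
               and e.get("object_author") == actor)

@metric("tags_adopted")
def tags_adopted(events, actor):
    # Approximation of "people start using a tag the participant
    # introduced": count distinct other actors who used a tag this
    # actor has also applied.
    used = {e.get("tag") for e in events
            if e.get("action") == "TaggedWith" and e.get("actor") == actor}
    return len({e.get("actor") for e in events
                if e.get("action") == "TaggedWith"
                and e.get("actor") != actor
                and e.get("tag") in used})

def evaluate(events, actor):
    """Run every registered metric for one participant."""
    return {name: fn(events, actor) for name, fn in METRICS.items()}
```

Adding a new input is just another decorated function, and because the metrics consume a generic event stream, events from other platforms (blog posts, say) can flow into the same pipeline.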
Discourse also has integration with WordPress, which is interesting not so much because of WordPress itself but because the nature of the integration points toward more functionality that we can use, particularly for analytics purposes. The Discourse WordPress plugin can automatically spawn a discussion thread in Discourse for every new post in WordPress. This is interesting because it gives us a semantic connection between a discussion and a piece of (curricular) material. We automatically know what the discussion is “about.” It’s hard to get participants in a discussion to do a lot of tagging of their posts. But it’s a lot easier to get curricular materials tagged. If we know that a discussion is about a particular piece of content and we know details about the subjects or competencies that the content is about (and whether that content contains an explanation to be understood, a problem to be solved, or something else), then we can make some relatively good inferences about what it says about a person’s expertise when she makes several highly rated comments in discussions about content items that share the same competency or topic tag. In addition, Discourse has the ability to publish the comments on the content back to the post. This is a capability that we’re going to file away for use in the next part of this series.
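That inference could be sketched as follows, under the assumption (hypothetical data shapes, not an actual Discourse API) that each discussion inherits the competency tags of the content item it is about:

```python
# Sketch: inferring topical expertise from highly rated comments in
# discussions linked to tagged curricular content. Data shapes are
# assumptions for illustration.

from collections import Counter

def expertise_by_tag(comments, discussion_tags, min_likes=3):
    """Count a participant's highly rated comments per competency tag,
    using the tags of the content item each discussion is about."""
    counts = Counter()
    for c in comments:
        if c["likes"] >= min_likes:
            for tag in discussion_tags.get(c["discussion_id"], ()):
                counts[tag] += 1
    return counts

discussion_tags = {
    "d1": ["learning-analytics"],
    "d2": ["learning-analytics", "assessment"],
}
alice_comments = [
    {"discussion_id": "d1", "likes": 5},
    {"discussion_id": "d2", "likes": 4},
    {"discussion_id": "d2", "likes": 1},  # below threshold, ignored
]
# expertise_by_tag(alice_comments, discussion_tags)
# → Counter({'learning-analytics': 2, 'assessment': 1})
```

The participant never tags anything herself; the content tagging does the semantic work, which is the point of the WordPress-to-Discourse link.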
If we were to abstract the ratings system from Discourse, add an API that lets it take different variables (starting with various metadata about users and posts within Discourse), and add a pluggable analytics dashboard that let teachers and other participants experiment with different types of filters, we would have a reasonably rich environment for a cMOOC. It would support large-scale conversations that could be linked to specific pieces of curricular content (or not). It would help people find more helpful comments and more helpful commenters. It could begin to provide some fairly rich community-powered but analytics-enriched evaluations of both of these. And, in our particular use case, since we would be talking about analytics-enriched personalized learning products and strategies, having some sort of pluggable analytics that are not hidden by a black box could give participants more hands-on experience with how analytics can work in a class situation, what they do well, what they don’t do well, and how you should manage them as a teacher. There are some additional changes we’d need to make in order to bring the system up to snuff for traditional certification courses, but I’ll save those details for part 5.
By Michael Feldstein
- Kill the grade book in order to get faculty away from concocting arcane and artificial grading schemes and more focused on direct measures of student progress.
- Use scale appropriately in order to gain pedagogical and cost/access benefits while still preserving the value of the local cohort guided by an expert faculty member, as well as to propagate exemplary course designs and pedagogical practices more quickly.
- Assess authentically through authentic conversations in order to give credit for the higher order competencies that students display in authentic problem-solving conversations.
- Leverage the socially constructed nature of expertise (and therefore competence) in order to develop new assessment measures based on the students’ abilities to join, facilitate, and get the full benefits from trust networks.
I also argued that platform design and learning design are intertwined. One implication of this is that there is no platform that will magically make education dramatically better if it works against the grain of the teaching practices in which it is embedded. The two need to co-evolve.
This last bit is an exceedingly tough nut to crack. If we were to design a great platform for conversation-based courses but it got adopted for typical lecture/test courses, the odds are that faculty would judge the platform to be “bad.” And indeed it would be, for them, because it wouldn’t have been designed to meet their particular teaching needs. At the same time, one of our goals is to use the platform to propagate exemplary pedagogical practices. We have a chicken and egg problem. On top of that, our goals suggest assessment solutions that differ radically from traditional ones, but we only have a vague idea so far of what they will be or how they will work. We don’t know what it will take to get them to the point where faculty and students generally agree that they are “fair,” and that they measure something meaningful. This is not a problem we can afford to take lightly. And finally, while one of our goals is to get teachers to share exemplary designs and practices, we will have to overcome significant cultural inhibitions to make this happen. Sometimes systems do improve sharing behavior simply by making sharing trivially easy—we see that with social platforms like Twitter and Facebook, for example—but it is not at all clear that just making it easy to share will improve the kind of sharing we want to encourage among faculty. We need to experiment in order to find out what it takes to help faculty become comfortable or even enthusiastic about sharing their course designs. Any one of these challenges could kill the platform if we fail to take them seriously.
When faced with a hard problem, it’s a good idea to find a simpler one you can solve that will get you partway to your goal. That’s what the use case I’m about to describe is designed to do. The first iteration of any truly new system should be designed as an experiment that can test hypotheses and assumptions. And the first rule of experimental design is to control the variables.
Of the three challenges I just articulated, the easiest one to get around is the assessment trust issue. The right use case should be an open, not-for-credit, not-for-certification course. There will be assessments, but the assessments don’t count. We would therefore be creating a situation somewhat like a beta test of a game. Participants would understand that the points system is still being worked out, and part of the fun of participation is seeing how it works and offering suggestions for improvement. The way to solve the problem of potential mismatches between platform and content is to test the initial release of the platform with content that was designed for it. As for the third problem, we need to pick a domain that is far enough away from the content and designs that faculty feel are “theirs” that the inhibitions regarding sharing are lower.
All of these design elements point toward piloting the platform with a faculty professional development cMOOC. Faculty can experience the platform as students in a low-stakes environment. And I find that even faculty who are resistant to talking about pedagogy in their traditional classes tend to be more open-minded when technology enters the picture because it’s not an area where they feel they are expected to be experts. But it can’t be a traditional cMOOC (if that isn’t an oxymoron). We want to model the distributed flip, where there are facilitators of local cohorts in addition to the large group participation. This suggests a kind of a “reading group” or “study group” structure. The body of material for the MOOC is essentially a library of content. Each campus-based group chooses to go through the content in their own way. They may cover all of it or skip some of it. They may add their own content. Each group will have its own space to organize its activities, but this space will be open to other groups. There will be discussions open to everyone, but groups and individual members can participate in those or not, as they choose. Presumably each group would have at least a nominal leader who would take the lead on organizing the content and activities for the local cohort. This would typically be somebody like the head of a Center for Educational Technology, but it could also be an interested faculty member, or the group could organize its activities by consensus.
To make the use case more concrete, let’s assume that the curriculum will revolve around the forthcoming e-Literate TV series on personalized learning. This is something that I would ideally like to do in the real world, but it also has the right characteristics for the current thought experiment. The heart of the series is five case studies of schools trying different personalized learning approaches:
- Middlebury College, an elite New England liberal arts school in rural Vermont
- Essex County College, a community college in Newark, NJ
- Empire State College, a SUNY school that focuses on non-traditional students and has a substantial distance learning program
- Arizona State University, a large public university with a largely top-down approach to implementing personalized learning
- A large public university with a largely bottom-up approach to implementing personalized learning
These thirty-minute case studies, plus the wrapper content that Phil and I are putting together (including a recorded session at the last ELI conference), cover a number of cross-cutting issues. Here are a few:
- What does “personalized” really mean? When (and how) does technology personalize, and when does it depersonalize?
- How does the idea of “personalized” change based on the needs of different kinds of students in different kinds of institutions?
- How do personalized learning technologies, implemented thoughtfully in these different contexts, change the roles of the teacher, the TA, and the students?
- What kinds of pedagogy seem to work best with self-paced products that are labeled as providing personalized learning?
- What’s hard about using these technologies effectively, and what are the risks?
That’s the content and the context. Since we’re going for something like a problem-based learning (PBL) design, the central problem that each cohort would need to tackle is, “What, if anything, should we be doing with personalized learning tools and pedagogical approaches in our school?” This question can be tackled in a lot of different ways, depending on the local culture. If it is taken seriously, there are likely to be internal discussions about politics, budgets, implementation issues, and so on. Cohorts might also be very interested to have conversations with other cohorts from peer schools to see what they are thinking and what their experiences have been. Not only that, they may also be interested in how their peers are organizing their campus conversations about personalized learning. This is the equivalent of sharing course designs in this model. And of course, there will hopefully also be very productive conversations across all cohorts, pooling expertise, experience, and insight. This sort of community “sharding” is consistent with the cMOOC design thinking that has come before. We’re simply putting some energy into both learning design and platform design to make that approach work with a facilitation structure that is closer to a traditional classroom setting. We’re grafting a cMOOC-like course design onto a distributed flip facilitation structure in the hopes of coming up with something that still feels like a traditional class in some ways but brings in the benefits of a global conversation (among teachers as well as students).
The primary goal of such a “course” wouldn’t be to certify knowledge or even to impart knowledge but rather to help participants build their intra- and inter-campus expertise networks on personalized learning, so that educators could learn from each other more and re-invent the wheel less. But doing so would entail raising the baseline level of knowledge of the participants (like a course) and could support the design goals. The e-Literate TV series provides us with a concrete example to work with, but any cross-cutting issue or change that academia is grappling with would work as a use case for attacking our design goals in an environment that is relatively lower-risk than for-credit classes. The learning platform necessary to make such a course work would need to support the multi-layered conversations and provide analytics tools to help identify both the best posts and the community experts.
In the next two posts, I will lay out the basic design of the system I have in mind. Then, in the final post of the series, I will discuss ways of extending the model to make it more directly suitable for traditional for-credit class usage.