A number of literature reviews have been published on MOOCs. None has focused exclusively on the empirical literature. In a recent paper, we analyzed the empirical literature published on MOOCs in 2013-2015 to make greater sense of who studies what and how. We found that:
- more than 80% of this literature is published by individuals whose home institutions are in North America and Europe,
- a select few papers are widely cited while nearly half of the papers are cited zero times,
- researchers have favored a quantitative if not positivist approach to the conduct of MOOC research,
- researchers have preferred to collect data via surveys and automated methods,
- some interpretive research was conducted on MOOCs in this time period, but it was often basic, and only a minority of studies were informed by methods traditionally associated with qualitative research (e.g., interviews, observations, and focus groups),
- there is limited research reported on instructor-related topics, and
- even though researchers have attempted to identify and classify learners into various groupings, very little research examines the experiences of learner subpopulations (e.g., those who succeed vs those who don’t; men vs women).
We believe that the implications arising from this study are important for research on educational technology in general and not just MOOC research. For instance, given the interest in big data and the automated collection/analysis of the data trails that learners leave behind in digital learning environments, a broader methodological toolkit is imperative in the study of emerging digital learning environments.
Here’s a copy of the paper:
Veletsianos, G., & Shepherdson, P. (2016). A Systematic Analysis and Synthesis of the Empirical MOOC Literature Published in 2013-2015. The International Review of Research in Open and Distributed Learning, 17(2).
I’m in the process of creating an activity for a new course, and I thought that this particular activity might be valuable to others. Here’s what it currently looks like:
Task: Examine institutional aspirations for 2025 and beyond
Process: In your assigned teams, read one strategic vision document and create a 4-minute audio summary to share with the rest of the class. You may use any tool that you feel comfortable with to create this audio summary, but if you need an easy solution you can try Vocaroo or SoundCloud.
Individually, read the assigned document. Consider the following questions: What are the main themes in the document? What are the institutions’ main goals or aspirations for the future? How is technology described as enabling the institution to achieve these goals? Is technology used in interesting and creative ways? Which of the challenges that we identified as facing contemporary universities is the document aiming to address?
Next, discuss your findings with your team and collaborate to craft an audio summary of your assigned document.
Your audio can take many forms. It can be a summary spoken by one person, or a conversation between two or more people. Feel free to be more creative than these two examples. You could, for instance, imagine that you are in a leadership position at the assigned institution and you are delivering a 4-minute speech to the university community summarizing the institution's aspirations for 2025.
Strategic document assignments are as follows:
| Team | Assigned document |
|---|---|
| Team 1 | Team choice or UBC. (2014). Flexible learning: Charting a strategic vision for UBC (Vancouver campus). Office of the Provost. |
| Team 2 | Team choice or University of Saskatchewan. (n.d.). Vision 2025: From spirit to action. |
| Team 3 | MIT. (2013). Institute-wide taskforce on the future of MIT education: Preliminary report. |
| Team 4 | Stanford. (n.d.). Learning and living at Stanford 2025. |
| Team 5 | Royal Roads University. (2016). RRU Learning and Teaching Model. |
| Team 6 | Team choice or University of the Fraser Valley. (2016). UFV 2025: A vision for our future. |
The attention that education and educational technology are receiving is significant, and I feel fortunate to be able to participate in efforts to improve education. Nonetheless, for the past couple of years I have been bombarded with announcement after announcement of what the latest and greatest technology can do for education. These announcements are almost always filled with claims about their potential impact. Here's a clipping of a recent email I received:
Let’s pause. Research from ISTE “found that the use of education technology (EdTech) resulted in 35 percent of students showing higher scores on class assessments and 32 percent increased engagement.” The link pointed to this post, written by the amazing Wendy Drexel. Let’s home in on what Wendy actually wrote:
“Nationwide, we are seeing powerful results from the effective use of technology in classrooms. For example, results of research by ISTE and the Verizon Foundation earlier this year into the use of education technology had teachers reporting that 35 percent of their students showed higher scores on classroom assessments; 32 percent showed increased engagement; and 62 percent demonstrated increased proficiency with mobile devices. In fact, 60 percent of participating teachers also reported that by using their mobile devices, they provided more one-on-one help to students, and 47 percent said they spent less time on lectures to the entire class.”
These are interesting and worthwhile results. But they are mischaracterized in the email above. From the summary of the research posted on the ISTE site and a more detailed report of the research (pdf), we can begin to see how some edtech companies purport that there is evidence of impact and use that to further their cause. Here’s a summary of two issues ignored by the ad/email:
- The email claims that edtech use resulted in 35% and 32% more students scoring higher on classroom assessments and showing greater engagement, respectively. What the research actually reported in this area is the following: teachers reported that edtech use led to increased scores/engagement. In plainer language, the teachers said that their students did better. We don’t know whether they actually did.
- The email paints a direct and unequivocal relationship between edtech use and outcomes: “use of educational technology resulted in.” That’s not actually the case. Why?
- The actual research showed that even though there were differences in math and science outcomes between schools that participated and schools that did not, the results were not statistically significant. In other words: similar results could be expected without the use of this particular technology.
- How is the technology used? The research report notes: “During site visits, observers noted that edtech-using teachers used technology to efficiently facilitate drill and practice test preparation activities.” In other words: edtech helps with teaching to the test, and that seems to work. Put differently: we have powerful technologies that empower people to be creative and allow global collaboration, but we have created systems that put teachers in situations in which they have to use these tools in simplistic ways.
Royce Kimmons and I have been exploring the use of large-scale data in a number of recent studies. We just published a paper that tries to make sense of students’ and professors’ social media participation on a large scale. We are continuing our qualitative investigations to understand “why, in what ways, and how” scholars (students & professors) are using social media, but this is our first data mining study making use of Twitter data. It’s also the first study using large-scale Twitter data to make sense of how professors and students of education are using Twitter.
Here’s a high-level summary of three of our findings:
- There is significant variation in how scholars participate on Twitter. The platform may not be the democratizing tool it is often purported to be: the most popular 1% of scholars have, on average, nearly 100 times as many followers as scholars in the lower 99%, and 700 times as many as those in the bottom 50%.
- Civil rights and advocacy seem to be an important activity of social media participation – this is rarely captured in research to date, which most often focuses on how social media are used in teaching & research. Scholars’ participation on Twitter extends well beyond traditional notions of scholarship.
- We found that scholars who follow more users, have tweeted more, signal themselves as professors, and have been on Twitter longer tend to have more followers. This model predicts 83% of the variation in follower counts. This finding raises questions about the meaning of follower counts and their use as a metric in conversations pertaining to scholarly quality/reach.
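To make the modeling idea concrete, here is a minimal sketch of the kind of analysis described above: an ordinary least squares regression predicting (log) follower counts from following count, tweet count, time on Twitter, and a professor indicator, with R² as the share of variance explained. All data below are invented for illustration; the variable names and coefficients are assumptions, not the paper's actual dataset or model specification.

```python
import numpy as np

# Fabricated example data: 200 hypothetical Twitter accounts.
rng = np.random.default_rng(0)
n = 200
following = rng.poisson(300, n).astype(float)
tweets = rng.poisson(1000, n).astype(float)
years_on_twitter = rng.uniform(0.5, 8.0, n)
is_professor = rng.integers(0, 2, n).astype(float)

# Simulated outcome: log follower count rises with each predictor, plus noise.
log_followers = (
    2.0
    + 0.004 * following
    + 0.0008 * tweets
    + 0.15 * years_on_twitter
    + 0.5 * is_professor
    + rng.normal(0, 0.3, n)
)

# Design matrix with an intercept column; fit OLS via least squares.
X = np.column_stack([np.ones(n), following, tweets, years_on_twitter, is_professor])
coef, *_ = np.linalg.lstsq(X, log_followers, rcond=None)

# R^2: proportion of variance in the outcome explained by the model.
pred = X @ coef
ss_res = np.sum((log_followers - pred) ** 2)
ss_tot = np.sum((log_followers - log_followers.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.2f}")
```

On this synthetic data the fit is close to perfect by construction; the point is only to show how "predicts X% of the variation in follower counts" maps onto an R² from a regression of this form.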
Veletsianos, G., & Kimmons, R. (2016). Scholars in an Increasingly Digital and Open World: How do Education Professors and Students use Twitter? The Internet and Higher Education, 30, 1-10.