Bear with me. This work-in-progress is a bit raw. I’d love any feedback that you might have.
Back in 2008, my colleagues and I wrote a short paper arguing that social justice is a core element of good instructional design. Good designs were, and still are, predominantly judged upon their effectiveness, efficiency, and engagement (e3 instruction). Critical and anti-oppressive educators and theorists long ago laid the foundations for extending educational practice beyond effectiveness.
I’m not convinced that edtech, learning design, instructional design, digital learning, or any other label that one wants to apply to the “practice of improving digital teaching and learning” is there yet.
I’ve been thinking more and more about compassion with respect to digital learning. More specifically, I’ve been reflecting on the following question:
What does compassion look like in digital learning contexts?
I’m blogging about this now because my paper journal is limiting, and because various circles in the field are increasingly coalescing around similar themes. For instance,
- The CFP for Learning with MOOCs III asks: What does it mean to be human in the digital age?
- Our research questions reductionist agendas embedded in some approaches to evaluating and enhancing learning online. Similar arguments are made by Jen Ross, Amy Collier, and Jon Becker.
- Kate Bowles says “we have a capacity to listen to each other, and to honour what is particular in the experience of another person.”
- Lumen Learning’s personalized pathways recognize learner agency (as opposed to dominant personalization paradigms that focus on system control)
Compassion is one thread that runs through these initiatives, calls to action, and observations (empowerment is another, but that’s a different post).
This is not a call for teaching compassion or empathy to the learner. That’s a different topic. I’m more concerned here with how to embed compassion in our practice – in our teaching, in our learning design processes, the technologies that we create, in the research methods that we use. At this point I have a lot of questions and some answers. Some of my questions are:
- What does compassionate digital pedagogy look like?
- What are the purported and actual relationships between compassion and various innovations such as flexible learning environments, competency-based learning, and open education?
- What are the narratives surrounding these innovations? [The work of Neil Selwyn, Audrey Watters, and David Noble is helpful here]
- What does compassionate technology look like?
- Can technologies express empathy and sympathy? Do students perceive technologies expressing empathy? [Relevant to this: research on pedagogical agents, chatbots, and affective computing]
- What does compassion look like in the design of algorithms for new technologies?
- What does compassionate learning design look like?
- Does a commitment to anti-oppressive education lead to compassionate design?
- Are there any learning design models that explicitly account for compassion and care? Is that perhaps implicit in the general aim to improve learning & teaching?
- In what ways is compassion embedded in design thinking?
- What do compassionate digital learning research methods look like?
- What are their aims and goals?
- Does this question even make sense? Does this question have to do with the paradigm or does it have to do with the perspective employed in the research? Arguing that research methods informed by critical theory are compassionate is easy. Can positivist research methods be compassionate? Researchers may have compassionate goals and use positivist approaches (e.g., “I want to evaluate the efficacy of testing regimes because I believe that they might be harmful to students”).
- What does compassionate digital learning advocacy look like?
- Advocating for widespread adoption of tools/practices/etc without addressing social, political, economic, and cultural contexts is potentially harmful (e.g., Social media might be beneficial but advocating for everyone to use social media ignores the fact that certain populations may face more risks when doing so)
There are many other topics here (e.g., adjunctification, pedagogies of hope, public scholarship, commercialization…) but there’s more than enough in this post alone!
A number of literature reviews have been published on MOOCs. None has focused exclusively on the empirical literature. In a recent paper, we analyzed the empirical literature published on MOOCs in 2013-2015 to make greater sense of who studies what and how. We found that:
- more than 80% of this literature is published by individuals whose home institutions are in North America and Europe,
- a select few papers are widely cited while nearly half of the papers are cited zero times,
- researchers have favored a quantitative if not positivist approach to the conduct of MOOC research,
- researchers have preferred the collection of data via surveys and automated methods
- some interpretive research was conducted on MOOCs in this time period, but it was often basic, and only a minority of studies used methods traditionally associated with qualitative research (e.g., interviews, observations, and focus groups),
- there is limited research reported on instructor-related topics, and
- even though researchers have attempted to identify and classify learners into various groupings, very little research examines the experiences of learner subpopulations (e.g., those who succeed vs those who don’t; men vs women).
We believe that the implications arising from this study are important for research on educational technology in general, not just MOOC research. For instance, given the interest in big data and the automated collection and analysis of the data trails that learners leave behind in digital learning environments, a broader methodological toolkit is imperative for the study of emerging digital learning environments.
Here’s a copy of the paper:
Veletsianos, G., & Shepherdson, P. (2016). A systematic analysis and synthesis of the empirical MOOC literature published in 2013–2015. The International Review of Research in Open and Distributed Learning, 17(2).
I’m in the process of creating an activity for a new course, and I thought that this particular activity might be valuable to others. Here’s what it currently looks like:
Task: Examine institutional aspirations for 2025 and beyond
Process: In your assigned teams, read one strategic vision document and create a 4-minute audio summary to share with the rest of the class. You may use any tool that you feel comfortable with to create this audio summary, but if you need an easy solution you can try Vocaroo or SoundCloud.
Individually, read the assigned document. Consider the following questions: What are the main themes in the document? What are the institutions’ main goals or aspirations for the future? How is technology described as enabling the institution to achieve these goals? Is technology used in interesting and creative ways? Which of the challenges that we identified as facing contemporary universities is the document aiming to address?
Next, discuss your findings with your team and collaborate to craft an audio summary of your assigned document.
Your audio can take many forms. It can be a summary spoken by one person, or a conversation between two or more people. Feel free to be more creative than these two examples. You could, for instance, imagine that you are in a leadership position at the assigned institution and you are delivering a 4-minute speech to the university community summarizing the institution’s aspirations for 2025.
Strategic document assignments are as follows:
| Team | Strategic document |
| --- | --- |
| Team 1 | Team choice or UBC. (2014). Flexible learning: Charting a strategic vision for UBC (Vancouver campus). Office of the Provost. |
| Team 2 | Team choice or University of Saskatchewan. (n.d.). Vision 2025: From spirit to action. |
| Team 3 | MIT. (2013). Institute-wide taskforce on the future of MIT education: Preliminary report. |
| Team 4 | Stanford. (n.d.). Learning and living at Stanford 2025. |
| Team 5 | Royal Roads University. (2016). RRU Learning and Teaching Model. |
| Team 6 | Team choice or University of The Fraser Valley. (2016). UFV 2025: A vision for our future. |
The attention that education and educational technology are receiving is significant, and I feel fortunate to be able to participate in efforts to improve education. Nonetheless, for the past couple of years I have been bombarded with announcement after announcement of what the latest and greatest technology can do for education. These announcements are almost always filled with claims about their potential impact. Here’s a clipping of a recent email I received:
Let’s pause. Research from ISTE “found that the use of education technology (EdTech) resulted in 35 percent of students showing higher scores on class assessments and 32 percent increased engagement.” The link pointed to this post, written by the amazing Wendy Drexel. Let’s home in on what Wendy actually wrote:
“Nationwide, we are seeing powerful results from the effective use of technology in classrooms. For example, results of research by ISTE and the Verizon Foundation earlier this year into the use of education technology had teachers reporting that 35 percent of their students showed higher scores on classroom assessments; 32 percent showed increased engagement; and 62 percent demonstrated increased proficiency with mobile devices. In fact, 60 percent of participating teachers also reported that by using their mobile devices, they provided more one-on-one help to students, and 47 percent said they spent less time on lectures to the entire class.”
These are interesting and worthwhile results. But they are mischaracterized in the email above. From the summary of the research posted on the ISTE site and a more detailed report of the research (pdf), we can begin to see how some edtech companies purport that there is evidence of impact and use that to further their cause. Here’s a summary of two issues ignored by the ad/email:
- The email claims that the use of edtech resulted in 35% of students scoring higher on classroom assessments and 32% showing increased engagement. What the research actually reported in this particular area is the following: teachers reported that edtech use led to increased scores and engagement. In plainer language, the teachers said that their students did better. We don’t know whether they actually did.
- The email paints a direct and unequivocal relationship between edtech use and outcomes: “use of educational technology resulted in.” That’s not actually the case. Why?
- The actual research showed that even though there were differences in math and science outcomes between schools that participated and schools that did not, the results were not statistically significant. Put another way: similar results could be expected without the use of this particular technology.
- How is the technology used? The research report notes: “During site visits, observers noted that edtech-using teachers used technology to efficiently facilitate drill and practice test preparation activities.” In other words: Edtech helps with teaching to the test, and that seems to work. Put differently: We have powerful technologies that empower people to be creative and allow global collaboration, but we have created systems that put teachers in situations in which they have to use these tools in simplistic ways.