Scholars are often encouraged to be public intellectuals – to ‘go online’ and engage with diverse audiences. Yet scholars’ online activities appear to be rife with tensions, dilemmas, and conundrums. In a presentation I gave last week at AERA, I discussed some tensions and challenges scholars face when engaging networked publics and highlighted some uncomfortable realities of being a public scholar. Evangelizing public and networked scholarship without acknowledging the existence of tensions is detrimental to the field and misleading to scholars who may be considering greater public engagement – becoming more networked, more public, and more “digital.” Individual scholars and institutions need to evaluate the purposes and functions of scholarship and take part in devising systems that reflect and safeguard the values of scholarly inquiry.
Bear with me. This work-in-progress is a bit raw. I’d love any feedback that you might have.
Back in 2008, my colleagues and I wrote a short paper arguing that social justice is a core element of good instructional design. Good designs were, and still are, predominantly judged on their effectiveness, efficiency, and engagement (e3 instruction). Critical and anti-oppressive educators and theorists laid the foundations for extending educational practice beyond effectiveness long ago.
I’m not convinced that edtech, learning design, instructional design, digital learning, or any other label that one wants to apply to the “practice of improving digital teaching and learning” is there yet.
I’ve been thinking more and more about compassion with respect to digital learning. More specifically, I’ve been reflecting on the following question:
What does compassion look like in digital learning contexts?
I’m blogging about this now because my paper journal is limiting and because various circles in the field increasingly appear to be coalescing around similar themes. For instance,
- The CFP for Learning with MOOCs III asks: What does it mean to be human in the digital age?
- Our research questions reductionist agendas embedded in some approaches to evaluating and enhancing learning online. Similar arguments are made by Jen Ross, Amy Collier, and Jon Becker.
- Kate Bowles says “we have a capacity to listen to each other, and to honour what is particular in the experience of another person.”
- Lumen Learning’s personalized pathways recognize learner agency (as opposed to dominant personalization paradigms that focus on system control)
Compassion is one thing that these initiatives, calls to action, and observations have in common (empowerment is another, but that’s a different post).
This is not a call for teaching compassion or empathy to the learner. That’s a different topic. I’m more concerned here with how to embed compassion in our practice – in our teaching, in our learning design processes, in the technologies that we create, in the research methods that we use. At this point I have a lot of questions and some answers. Some of my questions are:
- What does compassionate digital pedagogy look like?
- What are the purported and actual relationships between compassion and various innovations such as flexible learning environments, competency-based learning, and open education?
- What are the narratives surrounding innovations? [The work of Neil Selwyn, Audrey Watters, and David Noble is helpful here]
- What does compassionate technology look like?
- Can technologies express empathy and sympathy? Do students perceive technologies expressing empathy? [Relevant to this: research on pedagogical agents, chatbots, and affective computing]
- What does compassion look like in the design of algorithms for new technologies?
- What does compassionate learning design look like?
- Does a commitment to anti-oppressive education lead to compassionate design?
- Are there any learning design models that explicitly account for compassion and care? Is that perhaps implicit in the general aim to improve learning & teaching?
- In what ways is compassion embedded in design thinking?
- What do compassionate digital learning research methods look like?
- What are their aims and goals?
- Does this question even make sense? Does this question have to do with the paradigm or does it have to do with the perspective employed in the research? Arguing that research methods informed by critical theory are compassionate is easy. Can positivist research methods be compassionate? Researchers may have compassionate goals and use positivist approaches (e.g., “I want to evaluate the efficacy of testing regimes because I believe that they might be harmful to students”).
- What does compassionate digital learning advocacy look like?
- Advocating for widespread adoption of tools/practices/etc. without addressing social, political, economic, and cultural contexts is potentially harmful (e.g., social media might be beneficial, but advocating for everyone to use social media ignores the fact that certain populations may face greater risks when doing so)
There are many other topics here (e.g., adjunctification, pedagogies of hope, public scholarship, commercialization…) but there’s more than enough in this post alone!
I’m in the process of creating an activity for a new course, and I thought that this particular activity might be valuable to others. Here’s what it currently looks like:
Task: Examine institutional aspirations for 2025 and beyond
Process: In your assigned teams, read one strategic vision document and create a 4-minute audio summary to share with the rest of the class. You may use any tool that you feel comfortable with to create this audio summary, but if you need an easy solution you can try Vocaroo or SoundCloud.
Individually, read the assigned document. Consider the following questions: What are the main themes in the document? What are the institutions’ main goals or aspirations for the future? How is technology described as enabling the institution to achieve these goals? Is technology used in interesting and creative ways? Which of the challenges that we identified as facing contemporary universities is the document aiming to address?
Next, discuss your findings with your team and collaborate to craft an audio summary of your assigned document.
Your audio can take many forms. It can be a summary spoken by one person, or a conversation between two or more people. Feel free to be more creative than these two examples. You could, for instance, imagine that you are in a leadership position at the assigned institution and you are delivering a 4-minute speech to the university community summarizing the institution’s aspirations for 2025.
Strategic document assignments are as follows:
| Team | Assigned document |
|------|-------------------|
| Team 1 | Team choice or UBC. (2014). Flexible learning: Charting a strategic vision for UBC (Vancouver campus). Office of the Provost. |
| Team 2 | Team choice or University of Saskatchewan. (n.d.). Vision 2025: From spirit to action. |
| Team 3 | MIT. (2013). Institute-wide taskforce on the future of MIT education: Preliminary report. |
| Team 4 | Stanford. (n.d.). Learning and living at Stanford 2025. |
| Team 5 | Royal Roads University. (2016). RRU Learning and Teaching Model. |
| Team 6 | Team choice or University of the Fraser Valley. (2016). UFV 2025: A vision for our future. |
The attention that education and educational technology are receiving is significant, and I feel fortunate to be able to participate in efforts to improve education. Nonetheless, for the past couple of years I have been bombarded with announcement after announcement of what the latest and greatest technology can do for education. These announcements are almost always filled with claims about their potential impact. Here’s a clipping of a recent email I received:
Let’s pause. Research from ISTE “found that the use of education technology (EdTech) resulted in 35 percent of students showing higher scores on class assessments and 32 percent increased engagement.” The link pointed to this post, written by the amazing Wendy Drexel. Let’s home in on what Wendy actually wrote:
“Nationwide, we are seeing powerful results from the effective use of technology in classrooms. For example, results of research by ISTE and the Verizon Foundation earlier this year into the use of education technology had teachers reporting that 35 percent of their students showed higher scores on classroom assessments; 32 percent showed increased engagement; and 62 percent demonstrated increased proficiency with mobile devices. In fact, 60 percent of participating teachers also reported that by using their mobile devices, they provided more one-on-one help to students, and 47 percent said they spent less time on lectures to the entire class.”
These are interesting and worthwhile results. But they are mischaracterized in the email above. From the summary of the research posted on the ISTE site and a more detailed report of the research (pdf), we can begin to see how some edtech companies purport that there is evidence of impact and use that to further their cause. Here’s a summary of two issues ignored by the ad/email:
- The email claims that the use of edtech resulted in 35% of students scoring higher on classroom assessments and 32% showing increased engagement. What the research actually reported about this particular area is the following: teachers reported that edtech use led to increased scores/engagement. In plainer language, the teachers said that their students did better. We don’t know whether they actually did.
- The email paints a direct and unequivocal relationship between edtech use and outcomes: “use of educational technology resulted in.” That’s not actually the case. Why?
- The actual research showed that even though there were differences in math and science outcomes between schools that participated and schools that did not, the results were not statistically significant. In different words: similar results could be expected without the use of this particular technology.
- How is the technology used? The research report notes: “During site visits, observers noted that edtech-using teachers used technology to efficiently facilitate drill and practice test preparation activities.” In other words: edtech helps with teaching to the test, and that seems to work. Put differently: we have powerful technologies that empower people to be creative and allow global collaboration, but we have created systems that put teachers in situations in which they have to use these tools in simplistic ways.
Royce Kimmons and I have been exploring the use of large-scale data in a number of recent studies. We just published a paper that tries to make sense of students’ and professors’ social media participation on a large scale. We are continuing our qualitative investigations to understand “why, in what ways, and how” scholars (students & professors) are using social media, but this is our first data mining study making use of Twitter data. It’s also the first study using large-scale Twitter data to make sense of how professors and students of education are using Twitter.
Here’s a high-level summary of three of our findings:
- There is significant variation in how scholars participate on Twitter. The platform may not be the democratizing tool it is often purported to be: the most popular 1% of scholars have an average follower base nearly 100 times that of scholars in the lower 99% and 700 times that of those in the bottom 50%.
- Civil rights and advocacy seem to be an important activity of social media participation – this is rarely captured in research to date, which most often focuses on how social media are used in teaching & research. Scholars’ participation on Twitter extends well beyond traditional notions of scholarship.
- We found that scholars who follow more users, have tweeted more, signal themselves as professors, and have been on Twitter longer tend to have more followers. This model predicts 83% of the variation in follower counts. This finding raises questions about the meaning of follower counts and their use as a metric in conversations pertaining to scholarly quality/reach.
Veletsianos, G., & Kimmons, R. (2016). Scholars in an Increasingly Digital and Open World: How do Education Professors and Students use Twitter? The Internet and Higher Education, 30, 1-10.
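To make the follower-count finding concrete, here is a minimal sketch of the kind of regression involved, fit on synthetic data. The predictors mirror those named above, but every variable, coefficient, and distribution here is invented for illustration – this is not the paper’s actual model or data.

```python
# Illustrative OLS sketch (synthetic data, hypothetical coefficients):
# predicting log follower counts from the predictors named in the finding.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical scholar-level predictors
log_following = rng.normal(5, 1, n)       # log(# accounts followed)
log_tweets = rng.normal(6, 1.5, n)        # log(# tweets posted)
is_professor = rng.integers(0, 2, n)      # 1 if profile signals "professor"
years_on_twitter = rng.uniform(0, 8, n)   # account age in years

# Synthetic outcome: a linear combination of the predictors plus noise
log_followers = (0.8 * log_following + 0.3 * log_tweets
                 + 0.5 * is_professor + 0.2 * years_on_twitter
                 + rng.normal(0, 0.5, n))

# Fit ordinary least squares via numpy's least-squares solver
X = np.column_stack([np.ones(n), log_following, log_tweets,
                     is_professor, years_on_twitter])
coef, *_ = np.linalg.lstsq(X, log_followers, rcond=None)

# R^2: share of variance in log follower counts the model explains
pred = X @ coef
r2 = 1 - np.sum((log_followers - pred) ** 2) / np.sum(
    (log_followers - log_followers.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```

An R² near 0.83, as in the paper, would mean the named predictors account for most of the variation in follower counts, which is exactly why follower counts are questionable as an independent quality metric.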
The British Journal of Educational Technology and BERA approached us to create an infographic for the article we (Amy Collier, Emily Schneider, and myself) published last year: Digging Deeper into Learners’ Experiences in MOOCs: Participation in social networks outside of MOOCs, Notetaking, and contexts surrounding content consumption
Below is the outcome (and a pdf version is here):
Here’s why academics should write for the public
There’s been much discussion about the needless complexity of academic writing.
In a widely read article in The Chronicle of Higher Education last year, Steven Pinker, professor of psychology at Harvard and author of several acclaimed books including The Sense of Style, analyzed why academic writing is “turgid, soggy, wooden, bloated, clumsy, obscure, unpleasant to read, and impossible to understand.”
More recently, Jeff Camhi, professor emeritus of biology at the Hebrew University of Jerusalem, discovered how much academic authors struggle when trying to write for a lay audience. He suggested writing programs should “develop a night course in creative nonfiction writing, specifically for professors.”
We think learning to write creative nonfiction isn’t a bad idea. But we disagree with Camhi’s suggestion that academics need a night course for this. We propose something simpler: academics just need to start writing, getting edited and seeing if the public reads them. Through this process, academics will not only learn to express themselves clearly, but most likely become better scientists as well.
What are the benefits?
Although both of us currently write for the public, we come at this from different perspectives – one of us has written for a few years, and the other started writing only this year.
We don’t think we are amazing writers, but we do think writing for the public has helped us improve. The immediate feedback from editors and the public has helped make our writing clearer.
We’ve learned that if we’re not clear and engaging, then editors and the general public simply won’t read us. And that continues to teach us how to improve the next time we write.
Public writing has also improved both our academic writing skills and scientific thinking abilities.
That’s because the first step in improving academic writing is to learn to reduce the jargon academics use and express concepts clearly. And this has forced us to distill our thinking to its absolute core.
Consequently, not only did the process improve the quality of our writing, but it also brought more clarity to the way we were thinking about our scientific problems.
For example, when we recently started to write an academic review article together, we first considered how we could write a piece for the public later based on the review. This helped us reconfigure the way we synthesized the literature, forcing us to discuss it clearly and logically.
Additionally, because public writing engages both the public and our academic colleagues, we’ve found that public commentary can be a form of “public peer review.” Exciting research ideas for academic papers have developed from our public pieces thanks to crowdsourced feedback.
For example, a Psychology Today magazine article written by one of us (Wai) led to feedback from editors and others on the importance of studying highly educated influential people. This resulted in a series of research papers, discussed subsequently in The Washington Post.
Such public engagement can bring in other benefits for an academic career. For instance, one of us (Miller) traveled to Amsterdam last month to give a keynote address at a conference about gender and science.
The conference organizers found him because of the attention he received in popular press about an international study that he had led on gender stereotypes in science. That popular press attention was initiated by the author contacting his university’s press office and working closely with its writers to collaboratively draft a press release.
In both our cases, public engagement opened up opportunities to network with academics and others within and outside our fields. And this happened only because people read the public pieces we had written.
It’s that simple
Writing for the public requires improving one’s skills, just the way it does for writing an academic article or a grant proposal. Yes, there is a “start-up cost” as you learn the ropes. But it isn’t as time-consuming as many academics may think.
In fact, both of us were very cautious when we first started to write for the public. We were even skeptical of its benefits given the perceived time cost. But earlier this year, one of us (Miller) learned how easy this process is.
He learned about a controversial study that he wanted to place in a broader context for the public. So he submitted a 199-word pitch that night to The Conversation, which encourages academics to write for the public. An editor replied the next morning giving advice on how to structure and write the piece for clarity.
The 765-word article took just one day to draft and one day to refine with the editor – lightning fast compared to academic journals. The Atlantic’s Quartz republished the article, which has now reached over 25,000 readers. Consider how most academic articles are read by only a handful of people.
We now believe that public writing is part and parcel of our identities as scholars.
Engage with the public to have social impact
Now that we’ve discussed some of the benefits of public writing and why we think academics should do it, we conclude by addressing one important structural component to the solution.
But what she did not mention is that more scientists are needed in the public square to become clearer and better writers as well. As we said earlier, that clarity can bring other indirect and direct benefits for science and scientists’ careers.
So why aren’t more academics writing for the public?
Well, it’s really quite simple. There’s little incentive built into the reward and promotion system, something Steven Pinker noted as well. Perhaps administrators need to place public engagement on an equal footing with teaching, advising, publishing, and grant-getting in the tenure review process.
Many academics, including us, now realize that if we want to reach people who might benefit from our research, we have to step out of the ivory tower. We academics need to enter the discussion that the rest of the world engages in every day.
“Personalized learning” is that one area of research and practice that brings to the forefront many of the debates and issues that the field is engaging with right now. If one wanted to walk people through the field, and wanted to do so through *one* specific topic, that topic would be personalized learning.
Personalized cans? (CC-licensed image from Flickr)
Here are some of the questions that personalized learning raises:
- We have a problem with labels and meaning in this field. Heck, we have a problem with what to call ourselves: Learning Technologies or Educational Technology? Or perhaps instructional design? Learning Design? Learning, Design, and Technology? Or is it Learning Science? Reiser asks: What field did you say you were in? The same is true for personalized learning. Audrey Watters and Mike Caulfield ask what does “personalized learning” mean and what is the term’s history? Does it mean different pathways for each learner, one pathway with varied pacing for each learner, or something else?
- The Chan-Zuckerberg initiative and the Bill and Melinda Gates Foundation endorse personalized learning. What is the role of philanthropy in education in general and educational technology in particular? Should educators and researchers “beware of big donors” or should they enthusiastically welcome the support in the current climate of declining public monies?
- Where is the locus of control? Is personalization controlled by the learner? Is the control left to the software? What of shared control? Obsolete views of personalization and adaptive learning focus on how the system can control both the content and the learning process, ignoring, for the most part, the learner, even though learner control appears to be an important determinant of success in e-learning (see Singhanayok & Hooper, 1998). The important question in my mind is the following: How do we balance system and learner control? Such shared control should empower students and enable technology to support and enhance the process. Downes distinguishes between personalized learning and personal learning. I think that locus of control is the distinguishing aspect, and that the role of shared control remains an open conceptual and empirical question. Debates about xMOOCs vs cMOOCs fall in here, as does the debate regarding the value of guided vs discovery learning.
- How do big data and learning analytics improve learning and participation? What are the limitations of depending on trace data? Personalized learning often appears to depend on the creation of learner profiles. For example, if you fit a particular profile you might receive a particular worked-out example or semi-completed problem, and problems might vary as one progresses through a pathway. Or, you might get an email from Coursera about “recommended courses” (see my point above regarding definitions and meanings). Either way, the role of large datasets, analytics, and educational data science – as well as the limitations and assumptions of these approaches, as we show in our research – is central to personalization and new approaches to education.
- What assumptions do authors of personalized learning algorithms make? We can’t answer this question unless we look at the algorithms. Such algorithms are rarely transparent. They often come in “black box” form, which means that we have no insight into how inputs are transformed into outputs. We don’t know the inner workings of the algorithms that Facebook, Twitter, and Google Scholar use, and we likely won’t know how the algorithms that EdTechCompany uses work to deliver particular content to particular groups of students. If independent researchers can’t evaluate the inner workings of personalized learning software, how can we be sure that such algorithms do what they are supposed to do without being prejudicial? Perhaps the authors of education technology algorithms need a code of conduct, and a course on social justice?
- Knewton touts its personalization engine. Does it actually work? Connecting this to broader conversations in the field: What evidence do we have about the claims made by the EdTech industry? Is there empirical evidence to support these claims? See, for example, this analysis by Phil Hill on the relationship between LMS use and retention/performance and this paper by Royce Kimmons on the impact of LMS adoption on outcomes. If you’ve been in the position of making a technology purchasing decision in K-12/HigherEd, you have likely experienced the unending claims regarding the positive impact of technology on outcomes and retention.
- And speaking of data and outcomes, what of student privacy in this context? How long should software companies keep student data? Who has access to the data? Should the data follow students from one system (e.g., K-12) to another (e.g., Higher Ed)? Is there uniformity in place (e.g., consistent learner profiles) for this to happen? How does local legislation relate to educational technology companies’ use of student data? For example, see this analysis by BCCampus describing how British Columbia’s Freedom of Information and Protection of Privacy Act (FIPPA) impacts the use of US-based cloud services. The more one looks into personalization and its dependence on student data, the more one has to explore questions pertaining to privacy, surveillance and ethics.
- Finally, what is the role of openness in personalized learning? Advocates for open frequently argue that openness and open practices enable democratization, transparency, and empowerment. For instance, open textbooks allow instructors to revise them. But what happens when the product that publishing companies sell isn’t content? What happens when the product is personalized learning software that uses OER? Are the goals of the open movement met when publishers bundle OER with personalized learning software that restricts the freedoms associated with OER? What becomes of the open agenda to empower instructors, students, and institutions?
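The learner-profile and black-box points above can be made concrete with a toy sketch. Everything here is hypothetical – the profile field, thresholds, and activity names are invented, not drawn from any real product – but it shows what a transparent, auditable personalization rule looks like, in contrast to an opaque engine:

```python
# Toy, purely illustrative sketch: a learner-profile rule of the kind
# described above, where a profile determines whether a learner receives
# a worked-out example, a semi-completed problem, or an open-ended one.
# All names, fields, and thresholds are hypothetical.

def select_activity(profile: dict) -> str:
    """Transparent, inspectable selection rule: anyone can audit exactly
    how a profile maps to an activity -- the opposite of a black box."""
    score = profile["recent_score"]      # hypothetical 0-1 mastery estimate
    if score < 0.5:
        return "worked-out example"      # most scaffolding
    if score < 0.8:
        return "semi-completed problem"  # partial scaffolding
    return "open-ended problem"          # least scaffolding

print(select_activity({"recent_score": 0.4}))  # worked-out example
print(select_activity({"recent_score": 0.9}))  # open-ended problem
```

A real personalization engine is vastly more complex, but the contrast holds: when the mapping from learner data to content is inspectable like this, independent researchers can evaluate it; when it is a black box, they cannot.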
There’s lots to contemplate here, but the point is this: Personalized learning is ground zero for the field and its debates.