This just in: My book, Networked Scholars, is (mostly) complete. It’s out of my hands – as much as a book that hasn’t yet been printed is out of anyone’s hands – and I am happy that I have had the experience of writing it.
One of the conclusions/implications of the book that I believe deserves more conversation is that a parallel, even “shadow,” scholarly environment is arising – this is the environment in which networked scholarship operates, and it behooves scholars and institutions to make better sense of it. Shadow educational systems are not new – the private tutoring industry in Cyprus is a prime example of how such systems operate – but “shadow” or parallel systems take many forms. Siemens argued that a shadow education system has arisen, one in which individuals use the Internet to learn without the support of educational institutions. He argued that this has occurred because institutions of learning have failed to recognize and serve the unique needs of complex contemporary societies.

While this argument focuses on learners, a similar situation is occurring in scholarly practice: the shadow education system that Siemens sees arising encompasses a scholarly environment that runs parallel to the traditional one. This environment, facilitated and encouraged by online social networks, serves scholarly functions: it supports the development, sharing, negotiation, and evaluation of knowledge. It also functions as a space where scholars do scholarly things that have little to do with knowledge creation. In this parallel environment, scholars have:
- supported peers and students regardless of hierarchy and institutional affiliation;
- provided advice and care in times of need;
- commented on peers’ in-progress manuscripts;
- delivered guest lectures and taught open courses; and
- created and shared videos and other media summarizing their scholarship.
Many of these activities have occurred with little or no institutional support and in many instances with little or no institutional oversight.
This is not to say that the emerging parallel scholarly environment is always effective and fair. Many of the power relations and inequities that exist in the traditional scholarly environment are reproduced in networks. For instance, replacing citation/journal metrics with social media metrics does little to resist reductionist agendas.
This parallel environment also appears to encompass (some) alternative signals of influence, prestige, and impact, such as follower counts and online presence. But, as Stewart notes, recognizable signals – such as an Oxford affiliation – are still powerful.
Will this environment replace the traditional one? It’s doubtful, but scholarly environments evolve with the cultures that house them, and as such, I expect that the traditional environment and this parallel one will increasingly converge.
One of the main arguments that we made in our recent paper on MOOCs, which is also the argument that I continue in this op-ed piece published in Inside Higher Ed, is that the field needs to embrace diverse research methods to understand and improve digital learning. The following passage is from our paper, and given that the paper is quite long, I thought that posting it here might be helpful:
By capturing and analyzing digital data, the field of learning analytics promises great value and potential in understanding and improving learning and teaching. The focus on big data, log file analyses, and clickstream analytics in MOOCs is reflective of a broader societal trend toward big data analytics (Eynon, 2013; Selwyn, 2014) and toward greater accountability and measurement of student learning in higher education (Leahy, 2013; Moe, 2014). As technology becomes integrated in all aspects of education, the use of digital data and computational analysis techniques in education research will increase. However, an over-reliance on log file analyses and clickstream data to understand learning leaves many learner activities and experiences invisible to researchers.
While computational analyses are a powerful strategy for making a complex phenomenon tractable to human observation and interpretation, an overwhelming focus on any one methodology will fail to generate a complete understanding of individuals’ experiences, practices, and learning. The apparent over-reliance on MOOC platform clickstream data in the current literature poses a significant problem for understanding learning in and with MOOCs. Critics of big data in particular question what is missing from large data sets and what is privileged in the analyses of big data (e.g., boyd & Crawford, 2012). For instance, contextual factors such as economic forces, historical events, and politics are often excluded from clickstream data and analyses (Carr, 2014; Selwyn, 2014). As a result, MOOC research frequently examines learning as an episodic and temporary event that is divorced from the context that surrounds it. While the observation of actions in digital learning environments allows researchers to report activities and behaviors, such reporting also needs an explanation of why learners participate in MOOCs in the ways that they do. For example, in this research, participants reported that their participation in MOOCs varies according to the daily realities of their lives and the context of the course. Learners’ descriptions of how these courses fit into their lives are a powerful reminder of the agency of each individual.
To gain a deeper and more diverse understanding of the MOOC phenomenon, researchers need to use multiple research methods. While clickstream data generates insights on observable behaviors, interpretive research approaches (e.g., ethnography, phenomenology, discourse analysis) add context to them. For example, Guo, Kim, and Rubin (2014) analyzed a large data set of MOOC video-watching behaviors, found that the median length of time spent watching a video is six minutes, and recommended that “instructors should segment videos into short chunks, ideally less than 6 minutes.” While dividing content into chunks aligns with psychological theories of learning (Miller, 1956), this finding does not explain why the median length of time learners spent watching videos is six minutes. Qualitative data and approaches can equip researchers to investigate the reasons why learners engage in video-watching behaviors in the ways that they do. For example, the median watching time might be associated with learner attention spans. On the other hand, multiple participants in this study noted that they were fitting the videos in between other activities in their lives – thus shorter videos might be desirable for practical reasons: because they fit into individuals’ busy lives. Different reasons might be uncovered that explain why learners seem to engage with videos for six minutes, leading to different design inspirations and directions. Because the MOOC phenomenon, and its associated practices, are still at a nascent stage, interpretive approaches are valuable as they allow researchers to generate a refined understanding of the meaning and scope of MOOCs. At the same time, it is important to remember that a wholly interpretive approach to understanding learning in MOOCs will be equally deficient.
Combining methods and pursuing an understanding of the MOOC phenomenon from multiple angles, while keeping in mind the strengths and weaknesses of each method, is the most productive avenue for future research.
A computational analysis and data science discourse is increasingly evident in educational technology research. This discourse posits that it is possible to tell a detailed and robust story about learning and teaching by relying on the depth and breadth of clickstream data. However, the findings in our research reveal meaningful learner activities and practices that evade data-capturing platforms and clickstream-based research. Off-platform experiences as described above (e.g., notetaking) call into question claims about learning that are limited to the activities observable on the MOOC platform. Further, the reasons that course content is consumed in the ways that it is exemplify the opportunity to bring together multiple methodological approaches to researching online learning and participation.
I am really excited for #dLRN15 because the (awesome) group organizing the conference is asking the right set of difficult questions. Various research results that colleagues and I are in the process of reporting reflect the themes of the conference (e.g., increased interdisciplinary activity in digital learning research, significant variation in how education scholars participate online, unequal student activity on digital environments), and I’m excited that space is provided for us to have these conversations. Plus, the organizers are thinking in caring ways about the conference.
The conference themes are the following:
Ethics of Collaboration
Digital networks have the potential to redraw the maps of global educational influence and enable new models of international collaboration. More commonly, however, investment has been directed towards the consolidation of existing relations of prestige and influence, extending the reach of elite institutions into larger and more dispersed markets. In this strand, we are interested in papers that explore the ethical dimension of international digital learning initiatives, and in particular, that consider ways of advancing global learning through models of reciprocity and exchange.
In this strand, we are interested in papers that examine the emergence of individualised digital and networked learning as an educational priority. What are the technical and strategic drivers of the shift to adaptive, personalised learning? How are new educational models designing frameworks for student agency? What can learners of the future be expected to manage for themselves over their life course, and what do we assume about the skills, devices, and network access they will need to do this?
In this strand, we are interested in papers that will provide insight into how faculty and institutional leaders are responding systemically to the use of digital networks. Examples might include: alternative assessment methods, prior learning assessment, competency based learning, partnerships with external capacity providers, changing forms of scholarship, academic innovation hubs (R&D), and so on. Research that assesses the impact of new systemic structures on student success will be of particular importance.
Innovation and Work
In this strand, we are interested in papers that examine the impact of networked innovation on the experience of working inside and alongside higher education. How has digital learning affected the academic profession, whether for the minority with tenure, or the much larger number working insecurely? What does it feel like to work alongside higher education from within other industries and sectors? In this strand, we particularly encourage papers that address the intersection of digital innovation, academic labour, and the education workforce of the future.
This strand invites concept and research papers on the relationships between networks, higher education, and sociocultural inequalities both in local and global contexts. While digital and networked higher education initiatives are often framed for the media in emancipatory terms, what effects does the changing landscape of higher education actually have on learners whose identities are marked by race/gender/class and other factors within their societies? Papers exploring societal factors, power structures, and their relationships to networked higher education are encouraged.
What do learning experiences in MOOCs look like? Amy Collier, Emily Schneider and I have just published a paper that provides some in-depth answers to this question. Here is a copy of the paper in pdf. The paper is part of a special issue published by the British Journal of Educational Technology which can be found here (there are many excellent pieces in that issue, so be sure to read them).
In addition to trying to understand learner experiences, in the paper we explain that we did this study because “ease of access to large data sets from xMOOCs offered through an increasing number of centralized platforms has shifted the focus of MOOC research primarily to data science and computational methodologies, giving rise to a discourse suggesting that teaching and learning can be fully analyzed, understood and designed for by examining clickstream data.”
Our abstract reads:
Researchers describe with increasing confidence what they observe participants doing in massive open online courses (MOOCs). However, our understanding of learner activities in open courses is limited by researchers’ extensive dependence on log file analyses and clickstream data to make inferences about learner behaviors. Further, the field lacks an empirical understanding of how people experience MOOCs and why they engage in particular activities in the ways that they do. In this paper, we report three findings derived by interviewing 13 individuals about their experiences in MOOCs. We report on learner interactions in social networks outside of MOOC platforms, notetaking, and the contexts that surround content consumption. The examination and analysis of these practices contribute to a greater understanding of the MOOC phenomenon and to the limitations of clickstream-based research methods. Based on these findings, we conclude by making pragmatic suggestions for pedagogical and technological refinements to enhance open teaching and learning.
We reported three main findings:
1. Interactions in social networks outside of the MOOC platform
A number of learners alluded to interactions they have had with individuals who are part of their social networks. These include digital connections with other participants in a MOOC, face-to-face interactions with friends and family, and face-to-face interactions with new connections in a MOOC.
2. Notetaking
Although none of the popular MOOC platforms supported integrated notetaking at the time of writing this paper, nearly all interviewees reported taking notes while watching lecture videos. Only one interviewee never took notes. However, the tools used to take notes and the subsequent use of notes varied substantially by learner.
3. Consuming content
All individuals participating in this study discussed factors that shaped the ways they consumed MOOC content, shedding light on the context surrounding their participation. Scholars in the learning sciences have long highlighted the critical role of the environment, arguing that learning must be understood as a sociocultural phenomenon situated in context and culture (Brown, Collins & Duguid, 1989). Patterns of MOOC content consumption can be examined by clickstream data, but these contextual factors help explain why learners exhibit particular patterns of participation.
Veletsianos, G., Collier, A., & Schneider, E. (2015). Digging deeper into learners’ experiences in MOOCs: Participation in social networks outside of MOOCs, notetaking, and contexts surrounding content consumption. British Journal of Educational Technology, 46(3), 570–587.