Multidisciplinary, interdisciplinary, and crossdisciplinary research represent promising approaches for studying digital learning. Prior research, however, found that research efforts directed at digital learning via MOOCs were dominated by individuals affiliated with education (Gašević, Kovanović, Joksimović, & Siemens, 2014). In their assessment of proposals submitted for funding under the MOOC Research Initiative (MRI), Gašević and colleagues showed that more than 50% of the authors in all phases of the MRI grants were from the field of education. This result was interesting because a common perception in the field is that the MOOC phenomenon is “driven by computer scientists” (p. 166).
We were curious to understand whether this was the case with research conducted on MOOCs (as opposed to grant proposals) and used a dataset of author affiliations publishing MOOC research in 2013-2015 to examine the following questions:
RQ 1: What are the disciplinary backgrounds of the authors who published empirical MOOC research in 2013-2015?
RQ 2: How does the disciplinary distribution of the authors who published MOOC research in 2013-2015 compare to that of the submissions to the MRI reported by Gašević et al. (2014)?
RQ 3: Is the 2013-2015 empirical research on MOOCs more or less interdisciplinary than was previously the case?
Results from our paper (published in IRRODL last week) show the following:
– In 2013-2015, Education and Computer Science (CS) were by far the most common affiliations among researchers writing about MOOCs.
– During this time period, the field appears to be far from monolithic: more than 40% of papers written on MOOCs were by authors not affiliated with Education/CS.
– The corpus of papers that we examined (empirical MOOC papers published in 2013-2015) was less dominated by authors from the field of education than were the submissions to the MOOC Research Initiative.
– A comparison of affiliations with past published papers shows that recent MOOC research appears to be more interdisciplinary than was the case in research published in 2008–2012.
We draw two implications from these results:
1. Current research on MOOCs appears to be more interdisciplinary than in the past, suggesting that the scientific complexity of the field is being tackled by a greater diversity of researchers. This suggests that even though xMOOCs are often disparaged for their teacher-centric and cognitivist-behaviorist approach, empirical research on xMOOCs may be more interdisciplinary than research on cMOOCs.
2. These results, however, also lead us to wonder whether the trend toward greater interdisciplinarity in recent research might reflect (a) the structure and pedagogical model used in xMOOCs, (b) the greater interest in the field of online learning, and (c) the hype and popularity of MOOCs. Could it be that academics’ familiarity with the xMOOC pedagogical model makes it a more accessible venue in which researchers from varying disciplines can conduct studies? Or, is increased interdisciplinary attention to digital education the result of media attention, popularity, and funding afforded to the MOOC phenomenon?
We conclude by arguing that “The burgeoning interest in digital learning, learning at scale, online learning, and other associated innovations presents researchers with the exceptional opportunity to convene scholars from a variety of disciplines to improve the scholarly understanding and practice of digital learning broadly understood. To do so however, researchers need to engage in collaborations that value their respective expertise and recognize the lessons learned from past efforts at technology-enhanced learning. Education and digital learning researchers may need to (a) take on a more active role in educating colleagues from other disciplines about what education researchers do and do not know about digital learning from the research that exists in the field and, (b) remain open to the perspectives that academic “immigrants” can bring to this field (cf. Nissani, 1997).”
For more on this, here’s our paper.
This just in: My book, Networked Scholars, is (mostly) complete. It’s out of my hands – as much as a book that hasn’t yet been printed is out of anyone’s hands – and I am happy that I have had the experience of writing it.
One of the conclusions/implications of the book that I believe deserves more conversation is the fact that a parallel, even “shadow,” scholarly environment is arising – this is the environment in which networked scholarship is operating. It behooves scholars and institutions to make better sense of it.

Shadow educational systems are not new – the private tutoring industry in Cyprus is a prime example of how such systems operate. However, “shadow” or parallel systems take many forms. Siemens argued that a shadow education system has arisen, one in which individuals use the Internet to learn without the support of educational institutions. He argues that this has occurred because institutions of learning have failed to recognize the unique needs of complex contemporary societies.

While this argument focuses on learners, a similar situation is occurring in terms of scholarly practice: The shadow education system that Siemens sees arising encompasses a scholarly environment that runs parallel to the traditional one. This environment, facilitated and encouraged by online social networks, serves scholarly functions, supporting the development, sharing, negotiation, and evaluation of knowledge. It also functions as an environment where scholars do scholarly things that have little to do with knowledge creation. In this parallel environment, scholars have,
- supported peers and students regardless of hierarchy and institutional affiliation;
- provided advice and care in time of need;
- commented on peers’ in-progress manuscripts;
- delivered guest lectures or taught open courses; and
- created and shared videos and other media summarizing their scholarship.
Many of these activities have occurred with little or no institutional support and in many instances with little or no institutional oversight.
This is not to say that the emerging parallel scholarly environment is always effective and fair. Many of the power relations and inequities that exist in the traditional scholarly environment are reproduced in networks. For instance, replacing citation/journal metrics with social media metrics does little to resist reductionist agendas.
This parallel environment also appears to encompass (some) alternative signals of influence, prestige, and impact: Follower counts. Presence. But, as Stewart notes, recognizable signals – such as Oxford – are still powerful.
Will this environment replace the traditional one? It’s doubtful, but scholarly environments evolve with the cultures that house them, and as such, I expect that both the traditional environment and this parallel one will converge.
One of the main arguments that we made in our recent paper on MOOCs, which is also the argument that I continue in this op-ed piece published in Inside Higher Ed, is that the field needs to embrace diverse research methods to understand and improve digital learning. The following passage is from our paper, and given that the paper is quite long, I thought that posting it here might be helpful:
By capturing and analyzing digital data, the field of learning analytics promises great value and potential in understanding and improving learning and teaching. The focus on big data, log file analyses, and clickstream analytics in MOOCs is reflective of a broader societal trend towards big data analytics (Eynon, 2013; Selwyn, 2014) and toward greater accountability and measurement of student learning in higher education (Leahy, 2013; Moe, 2014). As technology becomes integrated in all aspects of education, the use of digital data and computational analysis techniques in education research will increase. However, an over-reliance on log file analyses and clickstream data to understand learning leaves many learner activities and experiences invisible to researchers.
While computational analyses are a powerful strategy for making a complex phenomenon tractable to human observation and interpretation, an overwhelming focus on any one methodology will fail to generate a complete understanding of individuals’ experiences, practices, and learning. The apparent over-reliance on MOOC platform clickstream data in the current literature poses a significant problem for understanding learning in and with MOOCs. Critics of big data in particular question what is missing from large data sets and what is privileged in the analyses of big data (e.g., boyd & Crawford, 2012). For instance, contextual factors such as economic forces, historical events, and politics are often excluded from clickstream data and analyses (Carr, 2014; Selwyn, 2014). As a result, MOOC research frequently examines learning as an episodic and temporary event that is divorced from the context which surrounds it. While the observation of actions in digital learning environments allows researchers to report activities and behaviors, such reporting also needs an explanation as to why learners participate in MOOCs in the ways that they do. For example, in this research, participants reported that their participation in MOOCs varies according to the daily realities of their lives and the context of the course. Learners’ descriptions of how these courses fit into their lives are a powerful reminder of the agency of each individual.
To gain a deeper and more diverse understanding of the MOOC phenomenon, researchers need to use multiple research methods. While clickstream data generates insights on observable behaviors, interpretive research approaches (e.g., ethnography, phenomenology, discourse analysis) add context to them. For example, Guo, Kim, and Rubin (2014) analyzed a large data set of MOOC video-watching behaviors, found that the median length of time spent watching a video is six minutes, and recommended that “instructors should segment videos into short chunks, ideally less than 6 minutes.” While dividing content into chunks aligns with psychological theories of learning (Miller, 1956), this finding does not explain why the median length of time learners spent watching videos is six minutes. Qualitative data and approaches can equip researchers to investigate the reasons why learners engage in video-watching behaviors in the ways that they do. For example, the median watching time might be associated with learner attention spans. On the other hand, multiple participants in this study noted that they were fitting the videos in between other activities in their lives – thus shorter videos might be desirable for practical reasons: because they fit into individuals’ busy lives. Different reasons might be uncovered that explain why learners seem to engage with videos for six minutes, leading to different design inspirations and directions. Because the MOOC phenomenon, and its associated practices, are still at a nascent stage, interpretive approaches are valuable as they allow researchers to generate a refined understanding of the meaning and scope of MOOCs. At the same time, it is important to remember that a wholly interpretive approach to understanding learning in MOOCs will be equally deficient.
Combining methods and pursuing an understanding of the MOOC phenomenon from multiple angles, while keeping in mind the strengths and weaknesses of each method, is the most productive avenue for future research.
A computational analysis and data science discourse is increasingly evident in educational technology research. This discourse posits that it is possible to tell a detailed and robust story about learning and teaching by relying on the depth and breadth of clickstream data. However, the findings in our research reveal meaningful learner activities and practices that evade data-capturing platforms and clickstream-based research. Off-platform experiences as described above (e.g., notetaking) call into question claims about learning that are limited to the activities observable on the MOOC platform. Further, the reasons that course content is consumed in the ways that it is exemplify the opportunity to bring together multiple methodological approaches to researching online learning and participation.
I am really excited for #dLRN15 because the (awesome) group organizing the conference is asking the right set of difficult questions. Various research results that colleagues and I are in the process of reporting reflect the themes of the conference (e.g., increased interdisciplinary activity in digital learning research, significant variation in how education scholars participate online, unequal student activity on digital environments), and I’m excited that space is provided for us to have these conversations. Plus, the organizers are thinking in caring ways about the conference.
The conference themes are the following:
Ethics of Collaboration
Digital networks have the potential to redraw the maps of global educational influence and enable new models of international collaboration. More commonly, however, investment has been directed towards the consolidation of existing relations of prestige and influence, extending the reach of elite institutions into larger and more dispersed markets. In this strand, we are interested in papers that explore the ethical dimension of international digital learning initiatives, and in particular, that consider ways of advancing global learning through models of reciprocity and exchange.
In this strand, we are interested in papers that examine the emergence of individualised digital and networked learning as an educational priority. What are the technical and strategic drivers of the shift to adaptive, personalised learning? How are new educational models designing frameworks for student agency? What can learners of the future be expected to manage for themselves over their life course, and what do we assume about the skills, devices and network access they will need to do this?
In this strand, we are interested in papers that will provide insight into how faculty and institutional leaders are responding systemically to the use of digital networks. Examples might include: alternative assessment methods, prior learning assessment, competency based learning, partnerships with external capacity providers, changing forms of scholarship, academic innovation hubs (R&D), and so on. Research that assesses the impact of new systemic structures on student success will be of particular importance.
Innovation and Work
In this strand, we are interested in papers that examine the impact of networked innovation on the experience of working inside and alongside higher education. How has digital learning affected the academic profession, whether for the minority with tenure, or the much larger number working insecurely? What does it feel like to work alongside higher education from within other industries and sectors? In this strand, we particularly encourage papers that address the intersection of digital innovation, academic labour, and the education workforce of the future.
This strand invites concept and research papers on the relationships between networks, higher education, and sociocultural inequalities both in local and global contexts. While digital and networked higher education initiatives are often framed for the media in emancipatory terms, what effects does the changing landscape of higher education actually have on learners whose identities are marked by race/gender/class and other factors within their societies? Papers exploring societal factors, power structures, and their relationships to networked higher education are encouraged.