I was at the Educause Learning Initiative conference last week (#ELI2014), where I had some interesting conversations and discussions around online learning, MOOCs, research methods, and the future of higher education.
Amy Collier and I presented early results from our qualitative studies looking at learners’ MOOC experiences (if you have not yet responded to our call to share your lived experiences with us, please consider this invitation). Our talk was entitled “Messy Realities: Investigating Learners’ Experiences in MOOCs.” Our thinking is guided by the notion that even though surveys and big data yield insights into general behavioral patterns, these methods are detached and can distance us rather than help us understand the human condition. As a result, the phenomenon of “learning in a MOOC” remains understudied and poorly understood. During the session, we shared what we have been finding in our studies, highlighting the messiness of learning and teaching in the open.
Karen Vignare and Amy Collier were also very kind to extend an invitation to a number of us to share our work with individuals participating in the leadership seminar they organized. It was fantastic to hear Katie Vale (Harvard), Matt Meyer (The Pennsylvania State University), Rebecca Petersen (edX, MIT), and D. Christopher Brooks (EDUCAUSE) discuss their work, and once again, I felt grateful that we are having these conversations more openly, more frequently, and with greater intent.
Below are my rough notes from my 5-7 minute presentation. I appreciate parsimony (who doesn’t?), and in the words of D. Christopher Brooks, this is the litany of things I think:
I am a designer and researcher of education and learning. I study emerging technologies and emerging learning environments. I’m also a faculty member, and I have been teaching in higher education settings, both face-to-face and online, since 2005.
To contextualize my comments on MOOCs, first I want to describe my experiences with them:
- I facilitated one week of the #change11 MOOC, which was organized by George Siemens and Stephen Downes in 2011. This MOOC had a distinctively connectivist flavor, with each week facilitated by one person.
- I have enrolled in a number of MOOCs, and have even completed a small number of them.
- I have repurposed MOOCs in my own courses. For example, I have asked students to enroll in MOOCs and write about them.
- I have published an e-book with my students, sharing stories of student experiences with MOOCs.
- Finally, I am actively involved in studying learners’ experiences in MOOCs in order to understand the human element in these emerging learning environments.
I have recently come to the realization that I have an ambivalent relationship with MOOCs. My relationship with MOOCs is one of the most ambivalent relationships I have had with anyone or anything. This relationship is more ambivalent than the love-ignore-hate relationship that my cat has with me!
On the one hand, I appreciate the opportunities for open learning that MOOCs provide. I also appreciate how MOOCs have brought us together to discuss issues around technology, teaching, and learning. At the same time, I cringe at the narratives around big data, I cringe at the hype, at the ignorance around what education is and should be about.
I want to talk about two topics today: MOOC research and the MOOC phenomenon.
On MOOC Research
- We don’t know much about MOOCs
- The things that we know about MOOCs are mostly the result of surveys, learning analytics, and big data research
- The existing research and the existing methods that we use are informative, BUT they simply paint an incomplete picture of MOOCs. We should be asking more in-depth questions about learner and instructor experiences in MOOCs
- Qualitative and interpretive research methods can and will help us better understand MOOCs, open learning, and open scholarship
- Descriptions of learner behaviors are helpful, but these descriptions only provide a glimpse and superficial summary of what students experience and what they do in digital learning environments. To give you an example, emerging research suggests that students may be “sampling” courses; a behavior that we don’t frequently see in traditional online courses or traditional face-to-face courses. Nonetheless, “sampling” is not how participants would describe their experiences or the ways they participate in MOOCs. To illustrate, consider family-style Mediterranean meals that consist of numerous dishes, where participants sample a wide array of food. If you ask a person to describe this meal, to explain it to someone else, or to simply tell you about the meal, they will likely describe the meal as a feast, they might describe the tahini as lemony, the variety of flavors as intriguing, the whole meal as satisfying. Different people will also describe the meal differently: Tourists might describe the meal as fulfilling, heavy, or even extravagant; locals might describe the same meal as appropriate, or better than or worse than meals that they have had at other restaurants. “Sampling” may be an appropriate descriptor of the act of eating a family-style meal, or exploring a MOOC, but the descriptor does not fully capture the experience of sampling.
On the MOOC as a Phenomenon
MOOCs. The acronym stands for massive, open, online courses. That is not what MOOCs are, though. MOOCs are a phenomenon. They represent something larger than a course and should be seen in conjunction with the rebirth and revival of educational technology. They represent symptoms of, responses to, and failures of higher education. For instance, MOOCs are a response to the increasing costs of higher education; represent the belief that the purpose of education is to prepare students for the workforce; represent the belief that technology is the solution to the problems that education is facing; are indicative of scholarly failures; seem to represent the belief that education is a product that can be packaged, automated, and delivered; and, are a response to failures by researchers, designers, administrators, and institutions to develop effective and inspiring solutions to the problems of education (alternatively, they might also represent the failure of existing systems to support creative individuals in enacting change)*.
The MOOC is an acronym that elicits strong feelings: excitement, fear, defiance, uncertainty, hope, contempt…. To address these feelings we have to address the failures of higher education and the underlying causes that have given rise to MOOCs. For this reason, instead of talking about MOOCs at my own institution, I discuss innovations and approaches that I value, including networked scholarship, openness, flexibility, social learning, and the design and development of new technologies.
* NOTE: Rolin Moe and I are working on a paper refining and delineating these. If you have thoughts, concerns, or input on any of these issues, we’d love to hear from you!
HarvardX and MITx released a number of reports describing their open courses. The overarching paper describing these initiatives, entitled HarvardX and MITx: The first year of open online courses, is really helpful in gaining a holistic understanding of what HarvardX and MITx learned about their initiatives.
My (preliminary) thoughts:
- It’s exciting to see more data
- It’s exciting to see education researchers involved in the analysis of the data
- The researchers should be congratulated for making these reports available in an expeditious manner via SSRN
- We need more interpretive/qualitative research to understand participants’ and practitioners’ experiences on the ground
- I am wondering whether the community would benefit from access to the data that HarvardX and MITx have, as other individuals/groups could run additional analyses. Granted, I imagine this might require quite a lot of effort, not least in the development of procedures for data sharing.
The course reports appear below, and these are quite useful in helping the community understand the particulars of each course:
- 3.091x Introduction to Solid-State Chemistry – Fall 2012 MITx Course Report
- 6.00x Introduction to Computer Science and Programming – Fall 2012 MITx Course Report
- 6.002x: Circuits and Electronics – Fall 2012 MITx Course Report
- 2.01x Elements of Structures – Spring 2013 MITx Course Report
- 3.091x Introduction to Solid-State Chemistry – Spring 2013 MITx course report
- 6.00x Introduction to Computer Science and Programming – Spring 2013 MITx Course Report
- 6.002x: Circuits and Electronics – Spring 2013 MITx Course Report
- 7.00x Introduction to Biology: The Secret of Life – Spring 2013 MITx Course Report
- 8.02x Electricity and Magnetism – Spring 2013 MITx Course Report
- 14.73x: The Challenges of Global Poverty - Spring 2013 MITx Course Report
- 8.MReV: Mechanics ReView – Summer 2013 MITx Course Report
- PH207x: Health in Numbers and PH278x: Human Health and Global Environmental Change (HarvardX)
- CB22x: HeroesX (HarvardX)
- ER22x: JusticeX (HarvardX)
- HLS1X: CopyrightX (HarvardX)
A Facebook conversation from yesterday encouraged me to share one of the assignments that I developed for my instructional design course. The goal of the class is for the students to understand, experience, and apply instructional design in a variety of educational contexts.
One of the assignments I developed asked students to enroll in a Massive Open Online Course (MOOC) and analyze the instructional materials within the course using one of the rubrics provided by Dick and Carey (the instructional design book we use in class). It was a lot of fun, and the students appreciated the exercise. Given the lack of presence and voice of instructional designers in MOOC happenings, the lack of valid, reliable, and serious research on the topic (though Rita Kop’s work on cMOOCs is admirable), and my desire to engage students in contemporary events, I came up with this assignment to embed MOOC analysis in my course. The assignment is available for download at https://dl.dropbox.com/u/2533962/instr-materials-veletsianos.doc and posted below for those who just want to skim it without downloading it. Enjoy and feel free to use it:
Instructional Material analysis assignment
Individually, you will examine and report on the instructional materials of one popular digital learning initiative. An analysis matrix will be provided to you, and you will use that matrix to evaluate these initiatives.
Length: Minimum 500 words.
| Criteria | Levels of Attainment | Points |
| --- | --- | --- |
| Written analysis (evaluation) | | |
This task requires a few hours of research before you can actually complete it. Even though this is an individual task, if you would like to discuss the assignment with any of your colleagues, please feel free to do so.
First read the chapter and the rest of the materials for this week. Without reading those, I can assure you that your understanding of the issues presented will be superficial.
Second, examine the rubric provided by Dick & Carey for evaluating instructional materials (pp. 250-251 – see below for the rubric). You will be completing this rubric for a digital environment, and it’s a good idea to understand what it encompasses before you proceed.
Third, select one course provided on one of the following platforms to examine:
- A course on Coursera (select a course that is occurring right now or has been completed. DO NOT select a course that has not started yet): https://www.coursera.org/courses
- A course on EdX (select a course that is occurring right now. DO NOT select a course that has not started yet): https://www.edx.org/courses
- A free course on Udemy (select a course that includes at least 5 “lessons/lectures”): http://www.udemy.com/courses
You can also choose to examine DS106: http://ds106.us/. I am including DS106 on its own because it is a course, as opposed to the above (Coursera, EdX, and Udemy), which are platforms. If you pick any of these three (Coursera, EdX, or Udemy), then you should also pick a course (e.g., within Coursera a possible course is https://www.coursera.org/course/friendsmoneybytes).
Once you have made your selection, it’s time to research your course. Spend time looking around, examining and evaluating the instructional materials provided. You will use the rubric to keep track of the criteria that need to be assessed, and then using this rubric you will write a report assessing the instructional material for the course.
You should start your report by stating the course and its provider. A link would also be helpful. For example, using the example above, I would start my report by stating the following:
“I am examining the course entitled Networks: Friends, Money and Bytes (https://www.coursera.org/course/friendsmoneybytes). This course is offered through Coursera and is taught by Mung Chiang, who is a Professor of Electrical Engineering at Princeton University. The course is an introduction to the topic of X and its objectives are XYZ.”
Your report should be specific and detailed in its evaluation of the instructional materials, and should be guided by the five criteria families discussed by Dick and Carey: goal-centered, learner-centered, learning-centered, context-centered, and technical criteria. I would like to see that you understand each criterion and that you are capable of applying it to evaluating your course. For example, at the very least, I would expect to see statements such as the following:
Instructional designers use five criteria families to evaluate instructional materials. Learner-centered criteria focus on XYZ and refer to X. The instructional materials for this course appear to be adequate for this criterion because <provide list of reasons here>. The course could be improved in this domain by <list of additions/revisions here>. However, because item X was not disclosed in the course, I am not able to evaluate Y.
Let me reiterate that to complete this assignment you will need to do background research on the course and the platform. For example, your background research on Coursera will reveal that some of these courses have more than 80,000 students from around the world. This fact alone will impact your evaluation!
Instructional Material Evaluation Rubric
Rubric is copyright of: Dick, W., Carey, L. & Carey, J. (2008). Systematic Design of Instruction, (7th ed.) Upper Saddle River, NJ: Pearson.
A. Goal-centered Criteria:
Are the instructional materials:
1. Congruent with the terminal and performance objectives?
2. Adequate in content coverage and completeness?
6. Objective in presentations (lack of content bias)?
B. Learner-centered Criteria:
Are the instructional materials appropriate for learners’:
1. Vocabulary and language?
2. Development level?
3. Background, experience, environment?
4. Experiences with testing formats and equipment?
5. Motivation and interest?
6. Cultural, racial, gender needs (lack of bias)?
C. Learning-centered Criteria:
Do the materials include:
1. Pre-instructional material?
2. Appropriate content sequencing?
3. Presentations that are complete, current and tailored for learners?
4. Practice exercises that are congruent with the goal?
5. Adequate and supportive feedback?
6. Appropriate assessment?
7. Appropriate sequence and chunk size?
D. Context-centered Criteria:
Are/do the instructional materials:
1. Authentic for the learning and performance sites?
2. Feasible for the learning and performance sites?
3. Require additional equipment/tools?
4. Have congruent technical qualities for planned site (facilities/delivery system)?
5. Have adequate resources (time, budget, personal availability and skills)?
E. Technical Criteria:
Do the instructional materials have appropriate:
1. Delivery system and media for the nature of objectives?
3. Graphic design and typography?
6. Audio and video quality?
7. Interface design?
I’m working through my thoughts with this blog entry, as I’ve been trying to use this space to think out loud about my work and what I see happening in online education and higher ed.
A lot has been written about MOOCs and accreditation, and a lot more will be forthcoming. For example, see Terry Anderson’s post on this.
Today, I ran across this quote in an article in Time Magazine:
…if Liu passes the graduate-level Harvard course she is taking for free through edX — one of the leading providers of massive open online courses, or MOOCs — she will be granted 7.5 credit hours, which her school district has agreed to accept as a form of professional development that can help her earn a higher salary. Liu might be among the first students nationwide to turn free online coursework into tangible college credit, but that number may soon grow exponentially.
What is the value of a critique?
The value of critique is to help us see a phenomenon through a different lens, to help us make sense of something in a different way, and to spark a conversation. This is the purpose, and value, of a paper we recently published with IRRODL on the topic of open scholarship.
The paper identifies the assumptions and challenges of openness and open scholarship and puts forward suggestions for addressing them. A summary of our paper appears below:
Many scholars hope and anticipate that open practices will broaden access to education and knowledge, reduce costs, enhance the impact and reach of scholarship and education, and foster the development of more equitable, effective, efficient, and transparent scholarly and educational processes. Wiley and Green (2012, p. 88) note that “only time will tell” whether practices of open scholarship will transform education or whether the movement “will go down in the history books as just another fad that couldn’t live up to its press.” Given the emerging nature of such practices, educators are finding themselves in a position in which they can shape and/or be shaped by openness (Veletsianos, 2010). The intention of this paper is (a) to identify the assumptions of the open scholarship movement and (b) to highlight challenges associated with the movement’s aspirations of broadening access to education and knowledge. The goal of this paper is not to frame open scholarship as a problematic alternative to the status quo. Instead, as we see individuals, institutions, and organizations embrace openness, we have observed a parallel lack of critique of open educational practices. We find that such critiques are largely absent from the educational technology field, as members of the field tend to focus on the promises of educational technologies, rarely pausing to critique their assumptions. Selwyn (2011b, p. 713) even charges that our field’s inherent positivity “limits the validity and credibility of the field as a site of serious academic endeavour.” Our intention is to spark a conversation with the hopes of creating a more equitable and effective future for digital education and scholarship. To this end, this paper is divided into three major sections. First, we review related literature to introduce the reader to the notion of open scholarship. Next, we discuss the assumptions of openness and open scholarship.
We then identify the challenges of open scholarship and discuss how these may limit or problematize its outcomes.
Common assumptions and challenges are summarized as follows:
| Common themes and assumptions | Challenges |
| --- | --- |
| Open scholarship has a strong ideological basis rooted in an ethical pursuit for democratization, fundamental human rights, equality, and justice. | Are these ideals essential components of the open scholarship movement, or are they merely incidental to those who are pioneering the field? |
| Open scholarship emphasizes the importance of digital participation for enhanced scholarly outcomes. | Scholars need to develop an understanding of participatory cultures and social/digital literacies in order to take full advantage of open scholarship. University curricula need to be redesigned to prepare future scholars for the changing nature of scholarship. |
| Open scholarship is treated as an emergent scholarly phenomenon that is co-evolutionary with technological advancements in the larger culture. | Technology both shapes and is shaped by practice. Technology is not neutral, and its embedded values may advance tensions and compromises (e.g., flat relationships, homophily, filter bubbles). |
| Open scholarship is seen as a practical and effective means for achieving scholarly aims that are socially valuable. | Open scholarship introduces new dilemmas and needs (e.g., personal information management challenges; social stratification and exclusion). |
Given the topic, the best home for this paper was the International Review of Research in Open and Distance Learning, through which you can download the paper for free in an open access manner.
I am in Cyprus to meet with a number of colleagues and give an invited talk at ICEM 2012.
Talk title: What does the future of design for online learning look like? Emerging technologies, Openness, MOOCs, and Digital Scholarship
Abstract: What will we observe if we take a long pause and examine the practice of online education today? What do emerging technologies, openness, Massive Open Online Courses, and digital scholarship tell us about the future that we are creating for learners, faculty members, and learning institutions? And what does entrepreneurial activity worldwide surrounding online education mean for the future of education and design? In this talk, I will discuss a number of emerging practices relating to online learning and online participation in a rapidly changing world and explain their implications for design practice. Emerging practices (e.g., open courses, researchers who blog, students who use social media to self-organize) can shape our teaching/learning practice, and teaching/learning practice can shape these innovations. By examining, critiquing, and understanding these practices we will be able to understand potential futures for online learning and be better informed on how we can design effective and engaging online learning experiences. This talk will draw from my experiences and research on online learning, openness, and digital scholarship, and will present recent evidence detailing how researchers, learners, and educators are creating, sharing, and negotiating knowledge and education online.
This is part of my ongoing reflection on moocs. See the rest of the entries here.
I have signed up for a number of MOOCs as a student (and led one of the #change11 weeks), and spoke in general terms a couple of weeks ago about how education research can help improve the type of education offered through a MOOC. In this post, I will give specific suggestions, focusing on the University of Pennsylvania MOOC Listening to World Music, offered through Coursera. I am signed up for this course, which started on June 23, and I just submitted the first assignment. I decided to post these thoughts early for two reasons. First, the beginning of any course is an important moment in its success, and I find that it takes a lot of planning and reflection. Second, MOOCs are discussed as being experiments in online education. The Atlantic even calls them “The single most important experiment in higher education.” I agree that they are experimental initiatives, and as such they would benefit from ongoing evaluation and advice. Where I disagree is with the notion that they are a departure from what we know about online (and face-to-face) education. This post is intended to highlight just a couple of items that the Coursera instructional designers and learning technologists could have planned for in order to develop a more positive learning experience.
1. Length of the video lectures.
The syllabus lists the length of the video lectures (e.g., video 1 is 10:01 minutes long and video 2 is 10:45 minutes long.) However, this length is not provided on the page that students visit to watch the videos, which is where they need that information. I’ve annotated this below.
2. Opportunities for interaction.
The platform provides forums for students to interact with each other. Learners are of course resourceful and will figure out alternative, more efficient, and more effective ways to communicate with each other if they need to. For instance, in a number of other MOOCs students set up Facebook groups, and I anticipate that this will happen here as well. What Coursera could do to support learners in working with each other is to integrate social media plugins within each course. I am surprised that this isn’t prominent within the course, because you can see from the images below that Coursera uses social media plugins to allow students to announce participation in the course:
For instance, it appears that the course uses the #worldmusic hashtag, though it’s not integrated within the main page of the course, nor does it seem to be a unique hashtag associated with the course.
3. How do you encourage students to watch the videos?
Let’s say that we added the length of each video next to its title. Now the learner knows that they need about an hour to watch the videos. Some learners (e.g., those who are intrinsically motivated by the topic) will watch them without much scaffolding. But how do you provide encouragement for others to do so? Here’s where some social psychology insights might be helpful. By providing learners with simple descriptions of how the majority of their colleagues are behaving (i.e., appealing to social norms), one might be able to encourage individuals to watch the videos. For example, the videos might include a dynamic subtitle that informs learners that “8 out of 10 of your peers have watched this video” or that “70% of your peers have completed the first assignment,” and so on. This is the same strategy that hotels use to encourage guests to reuse towels and the same strategy that Nike uses when it compares your running patterns to the running patterns of other runners, as shown in the image below:
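The dynamic-subtitle idea is simple enough to sketch in code. The following is a minimal, hypothetical example; the function name, message wording, and the idea that viewing counts come from a platform analytics store are my own assumptions, not anything Coursera actually exposes:

```python
def social_norm_subtitle(num_watched, num_enrolled):
    """Build a peer-behavior subtitle for a course video.

    Hypothetical sketch: a real platform would pull num_watched and
    num_enrolled from its own analytics; nothing here reflects
    Coursera's actual API.
    """
    if num_enrolled == 0:
        return ""
    # Express the share of peers as "X out of 10" to keep the norm
    # easy to grasp at a glance (integer math avoids float surprises).
    out_of_ten = round(num_watched * 10 / num_enrolled)
    return f"{out_of_ten} out of 10 of your peers have watched this video"

# For a course where 56,000 of 80,000 enrollees watched a video:
print(social_norm_subtitle(56_000, 80_000))
# → 7 out of 10 of your peers have watched this video
```

A platform could recompute this nightly and render it next to each video title, which is all the "dynamic subtitle" would require.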
4. Peer-grading expectations.
This course is different from others that I’ve participated in because it includes an element of peer-grading. This is exciting to me because I’m a firm believer in social learning. One minor concern, however, is the following: I don’t know how many peers I am supposed to evaluate. I thought I was supposed to evaluate just one, but each time I finish my evaluation, I am presented with the option to “evaluate next student.” Do I keep evaluating? How many do I need to evaluate before I can move to the next step? I don’t know. In other words, it’s always helpful to inform the learner of what s/he has to do. For instance, in my case, I just stopped evaluating peers after having evaluated 4, because I don’t know how many I am expected to complete. Perhaps there’s no minimum… and this information would be helpful to me as a learner.
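Fixing this would take little more than a visible progress message. Here is a minimal sketch of what the platform could show after each evaluation; the function name, the required count, and the message wording are my own assumptions, since Coursera provided no such indicator:

```python
def peer_review_progress(completed, required):
    """Tell a learner where they stand in peer grading.

    Hypothetical sketch: 'required' would be set by the course
    designer; Coursera did not surface this information.
    """
    remaining = max(required - completed, 0)
    if remaining == 0:
        return "You have completed all required peer evaluations."
    return (f"You have evaluated {completed} of {required} required "
            f"submissions; {remaining} to go.")

# Shown after each "evaluate next student" step:
print(peer_review_progress(1, 3))
# → You have evaluated 1 of 3 required submissions; 2 to go.
```

Even this small cue answers the questions above: how many evaluations are expected, and when the learner may move on.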
Overall, my experience with this course is positive, though there is a lot of room for improvement, which is to be expected. For example, I haven’t touched much on the pedagogy of the course, but there are a few more weeks left… so stay tuned!
Nike photo credit. Thanks to my colleague Chuck Hodges for directing my attention to the Nike example.
This entry is part of a reflective series of posts/questions relating to online learning, MOOCs, and openness. See the first one here.
Coursera announced today that it is adding a dozen or so universities as partners. In an article in the New York Times, Sebastian Thrun notes that MOOC courses are still experimental and argues: “I think we are rushing this a little bit,” he said. “I haven’t seen a single study showing that online learning is as good as other learning.”
This perception of online education as “better than” or “as good as” other forms of education (I imagine that Sebastian Thrun is referring to face-to-face education here), is rampant. I believe it is rampant because our field has not done a good job disseminating what we know and what we don’t know about online education. At the same time, individuals do not tend to go back to the foundations of the field to investigate what others have discovered.
The result: a lack of understanding that there’s a whole field out there (here?) that has developed important insights on how we can design online education effectively. The list of references at the end of this post offers just a few of the resources one can use to get started on what we know and what we don’t know about comparison studies (i.e., studies that compare learning between delivery modes).
The point of this entry is to argue that there’s no point in reinventing the wheel. There’s no point in making the same mistakes. And above all, past research has shown that there’s no point in studying whether online education is as good as (or as bad as) other forms of education, because what one will discover is that:
- There are no significant differences in learning outcomes between face-to-face education and online education.
- When differences are found between the two, the differences can be attributed to (a) pedagogy, or (b) a lack of controls in the experimental design.
It is important to point out that the effectiveness of an educational approach is influenced greatly by other variables, such as instructor support or pedagogical approach. Therefore, it is very difficult (if not impossible) to compare face-to-face and online education because when one is not a replication of the other, they are vastly different, are based on different learner-instructor interactions, and offer different affordances. While researchers have tried to minimize differences and compare face-to-face learning and online learning in experimental ways, the interventions end up being meaningless for the types of powerful online/face-to-face teaching we might envision. Comparing delivery mechanisms therefore, blinds us to the important variables that truly impact how people learn.
The important and informative questions to ask are not comparative. Rather they focus squarely on understanding online education:
- How can we design effective and engaging online education (e.g., MOOCs)?
- What is the nature of learning in a MOOC?
- What is the learning experience in a MOOC like?
These questions are difficult. They won’t be answered by comparing survey responses and they won’t generate one simple answer. They will however generate answers that will be different depending on context. And that’s what’s exciting about doing research on online education.
- Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies. U. S. Department of Education. http://www.ed.gov/rschstat/eval/tech/evidence-based-practices/finalreport.pdf
- Shachar, M., & Neumann, Y. (2010). Twenty Years of Research on the Academic Performance Differences Between Traditional and Distance Learning: Summative Meta-Analysis and Trend Examination. MERLOT Journal of Online Learning and Teaching, 6(2). http://jolt.merlot.org/vol6no2/shachar_0610.htm
- Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53(4), 445-459.
- Clark, R. (1985). Evidence for confounding in computer-based instruction studies: Analyzing the meta-analyses. Educational Communications and Technology Journal, 33(4), 249-262.
- Kozma, R. (1994). Will media influence learning? Reframing the debate. Educational Technology Research and Development, 42(2), 7-19.
- Tennyson, R. D. (1994). The big wrench vs. integrated approaches: The great media debate. Educational Technology Research & Development, 42(3), 15-28.
- Veletsianos, G. & Navarrete, C. (2012). Online Social Networks as Formal Learning Environments: Learner Experiences and Activities. The International Review Of Research In Open And Distance Learning, 13(1), 144-166
- Veletsianos, G. (2011). Designing Opportunities for Transformation with Emerging Technologies. Educational Technology, 51(2), 41-46.
- Veletsianos, G. (2010). A Definition of Emerging Technologies for Education. In G. Veletsianos (Ed.), Emerging Technologies in Distance Education (pp. 3-22). Athabasca University Press.