This is part of my ongoing reflection on MOOCs. See the rest of the entries here.
I have signed up for a number of MOOCs as a student (and led one of the #change11 weeks), and a couple of weeks ago I spoke in general terms about how education research can help improve the type of education offered through a MOOC. In this post, I will give specific suggestions, focusing on the University of Pennsylvania MOOC Listening to World Music, offered through Coursera. I am signed up for this course, which started on June 23, and I just submitted the first assignment. I decided to post these thoughts early for two reasons. First, the beginning of any course is an important moment in its success, and I find that it takes a lot of planning and reflection. Second, MOOCs are discussed as being experiments in online education. The Atlantic even calls them “The single most important experiment in higher education.” I agree that they are experimental initiatives, and as such they would benefit from ongoing evaluation and advice. Where I disagree is with the notion that they are a departure from what we know about online (and face-to-face) education. This post is intended to highlight just a couple of items that the Coursera instructional designers and learning technologists could have planned for in order to develop a more positive learning experience.
1. Length of the video lectures.
The syllabus lists the length of each video lecture (e.g., video 1 is 10:01 long and video 2 is 10:45). However, this information is not provided on the page that students visit to watch the videos, which is where they need it. I’ve annotated this below.
2. Opportunities for interaction.
The platform provides forums for students to interact with each other. Learners are of course resourceful and will figure out alternative, more efficient and effective ways to communicate with each other if they need to. For instance, in a number of other MOOCs students set up Facebook groups, and I anticipate that this will happen here as well. What Coursera could do to support learners in working with each other is integrate social media plugins within each course. I am surprised that this isn’t prominent within the course, because, as you can see from the images below, Coursera already uses social media plugins to allow students to announce their participation in the course:
For instance, it appears that the course uses the #worldmusic hashtag, though it’s not integrated within the main page of the course, nor does it seem to be a hashtag unique to the course.
3. How do you encourage students to watch the videos?
Let’s say that we added the length of each video next to its title. Now the learner knows that they need about an hour to watch the week’s videos. Some learners (e.g., those who are intrinsically motivated by the topic) will watch them without much scaffolding. But how do you provide encouragement for the others? Here’s where some insights from social psychology might be helpful. By providing learners with simple descriptions of how the majority of their colleagues are behaving (i.e., appealing to social norms), one might be able to encourage individuals to watch the videos. For example, the videos might include a dynamic subtitle informing learners that “8 out of 10 of your peers have watched this video” or that “70% of your peers have completed the first assignment,” and so on. This is the same strategy that hotels use to encourage guests to reuse towels, and the same strategy that Nike uses when it compares your running patterns to those of other runners, as shown in the image below:
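As a rough sketch of how such a dynamic subtitle might be generated, the snippet below phrases a descriptive-norm nudge from the platform’s viewing counts. The function name, data source, and the 50% display threshold are all hypothetical assumptions on my part, not anything Coursera actually implements:

```python
def social_norm_message(watched: int, enrolled: int) -> str:
    """Phrase a descriptive-norm nudge for a video page.

    `watched` and `enrolled` are assumed to come from the
    platform's analytics; this helper only builds the message.
    """
    if enrolled == 0:
        return ""
    pct = round(100 * watched / enrolled)
    # Only show the nudge when the norm favors the desired behavior;
    # advertising that few peers watched a video could backfire.
    if pct < 50:
        return ""
    return f"{pct}% of your peers have watched this video"

print(social_norm_message(8000, 10000))
# → "80% of your peers have watched this video"
```

Note the guard against a minority norm: social psychology research on descriptive norms suggests that telling learners most of their peers have *not* done something can discourage rather than encourage the behavior.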
4. Peer-grading expectations.
This course is different from others that I’ve participated in because it includes an element of peer-grading. This is exciting to me because I’m a firm believer in social learning. One minor concern, however, is the following: I don’t know how many peers I am supposed to evaluate. I thought I was supposed to evaluate just one, but each time I finish an evaluation, I am presented with the option to “evaluate next student.” Do I keep evaluating? How many do I need to evaluate before I can move to the next step? I don’t know. In other words, it’s always helpful to inform the learner of what s/he has to do. In my case, I simply stopped after evaluating four peers because I didn’t know how many I was expected to do. Perhaps there’s no minimum… but this information would be helpful to me as a learner.
Overall, my experience with this course is positive, though there is a lot of room for improvement, which is to be expected. For example, I haven’t touched much on the pedagogy of the course, but there are a few more weeks left… so stay tuned!
Nike photo credit. Thanks to my colleague Chuck Hodges for directing my attention to the Nike example.
Part of my research demands that I develop technology-enhanced interventions in order to study them. I enjoy this part of my work partly because I get to create solutions to tackle education problems and partly because it has allowed me to explore technology-enhanced learning across different disciplines (e.g., I was involved with developing online learning environments for American Sign Language, environmental stewardship, and sociological concepts).
Now comes another exciting challenge: Last August, Dr. Calvin Lin and I were awarded a National Science Foundation grant (award #1138506) to develop a hybrid “Introduction to Computer Science” course to be taught at Texas high schools and institutions of higher education. The project is a collaboration between the department of Computer Science (Dr. Lin) and Curriculum and Instruction – Instructional Technology (me). I’ll be posting more about the project (probably on a different blog), but the overarching goal here is to enhance how CS is taught using emerging technologies and pedagogies (mostly PBL) while valuing local contexts and practices. Mark Guzdial, in a recent paper, notes that “We need more education research that is informed by understanding CS—how it’s taught, what the current practices are, and what’s important to keep as we change practice. We need more computing education researchers to help meet the workforce needs in our technology-based society.”
I look forward to sharing more about this project with everyone soon!
How do we design for engagement? This is a question that has hovered over my shoulder for a while. Although not explicitly verbalized, it is part of my work with avatars, pedagogical agents, and virtual characters. For example, see this paper in the British Journal of Educational Technology. In addition, in my dissertation I argue that pedagogical agents/virtual characters may incite such deep and engaging experiences as to distract learners from the task at hand (I am of course talking about conversational agents and NOT the passive pedagogical agents that prominently appear in instructional design research – yes, I am being sarcastic). Outside of my tiny little contributions, others have thought about this issue. Pat Parrish, drawing on the work of Dewey and others, has written extensively on learner engagement. Charlie Miller, coming from an interaction design perspective, has also talked about engagement. And, the other day, Joseph from BYU wrote a blog post noting the sister issues of engagement, emotion, and narrative. Granted, the ID field has long (for far too long) focused on information delivery and wow-look-at-what-this-can-do, but I think there are enough people thinking and writing about learner engagement that the topic may gain prominence – as it should.
Back to the original question: How do we design for engagement? Honestly, if I knew how to verbalize this, I would probably write it up. But I have a few ideas. First, I think that this question spurs multiple other questions. For example, how do K-12 teachers engage children? What are the characteristics of engaging lessons? What are the characteristics of engaged learners? Note that I am writing about characteristics in qualitative (and possibly interpretive, and further, possibly phenomenological) terms. What are the characteristics of boring lessons? What are the characteristics of engaging electronic learning environments/experiences? What is the process of engagement? How do we measure engagement? Again, I think that “measuring” engagement should be done in qualitative terms – any measurement is a poor way of capturing something as malleable and inherent to our existential being, but it’s at least a start. Could we provide some sort of guidelines for the design of engaging electronic learning experiences? What does social psychology say about this? More on the last question in an upcoming post…
A preliminary idea of mine is that “fun” has a lot to do with it. The HCI field discussed funology for a while, but I haven’t seen anything recently. Additionally, I think that a sense of achievement, contribution, belongingness, ability to change things, and purpose all matter. That’s an initial list, and it is very rough. There are numerous other ideas that need to be covered, including aesthetics, transformational learning, and, alas, the learner. But I’ll leave that for a different time because I need to do some dissertation work.