I will be visiting my colleagues at the University of Edinburgh in mid-June to give a seminar on MOOCs, automation, artificial intelligence and pedagogical agents. This is a free event organized by the Moray House School of Education at the U of Edinburgh and supported by the Digital Cultures and Education research group and DigitalHSS. Please feel free to join us face-to-face or online (Date: 18 June 2014; Time: 1-3pm) by registering here.
This seminar will bring together some of my current and past research. A lot of my work in the past examined learners’ experiences with conversational and (semi)intelligent agents. In that research, we discovered that the experience of interacting with intelligent technologies was engrossing (pdf). Yet, learners often verbally abused the pedagogical agents (pdf). We also discovered that appearance (pdf) may be a significant mediating factor in learning. Importantly, this research indicated that “learners both humanized the agents and expected them to abide by social norms, but also identified the agents as programmed tools, resisting and rejecting their lifelike behaviors.”
A lot of my current work examines experiences with open online courses and online social networks, but what exactly do pedagogical agents and MOOCs have to do with each other? Ideas associated with Artificial Intelligence are present in both the emergence of xMOOCs (EdX, Udacity, and Coursera emanated from AI labs) and certain practices associated with them – e.g., see Balfour (2013) on automated essay scoring. Audrey Watters highlighted these issues in the past. While I haven’t yet seen discussions on the integration of lifelike characters and pedagogical agents in MOOCs, the use of lifelike robots for education and the role of the faculty member in MOOCs are areas of debate and investigation in both the popular press and the scholarly literature. The quest to automate instruction has a long history, and lives within the sociocultural context of particular time periods. For example, the Second World War found US soldiers and civilians unprepared for the war effort, and audiovisual devices were extensively used to efficiently train individuals at a massive scale. Nowadays, similar efforts at achieving scale and efficiencies reflect problems, issues, and cultural beliefs of our time.
I’m working on my presentation, but if you have any questions or thoughts to share, I’d love to hear them!
I spent part of last week in Dallas at the annual Emerging Technologies for Online Learning conference, organized by SLOAN-C. I describe my presentation at the conference in this post, but the sessions below were all relevant to my work:
Jim Groom’s keynote. Jim’s Domain of one’s own work resonates with me. Providing students with digital tools that will enable them to learn the ways of the web is significant, but the idea also resonates with me in the context of digital scholarship, which is one of my research strands. In particular, I see Jim’s project being applicable for PhD students who should be equipped with the tools, skills, and experiences to understand networked, open, and digital scholarship. I’ve met Jim briefly in the past, but we never had a chance to chat much, so it was great to be able to spend some more time together.
Amy Collier’s and Jen Ross’ plenary. The session focused on giving insightful descriptions of the messy and compromised realities of learning in contrast to the narratives of efficiency and ease suggested by numerous educational technology providers.
Andy Saltarelli’s study of belongingness and synchronicity in cooperative learning settings. What’s not to like about rigorous, theory-driven, large-scale evaluations of the socio-psychological constructs that make a difference in online learning contexts?
The session on distributed flips, or re-using MOOC resources in face-to-face/blended courses. MJ Bishop, Mike Caulfield, and Amy Collier came together to share their research on the topic. Mike was unable to join unfortunately, but he shared his thoughts here.
Rolin Moe organized a number of fantastic panels on issues pertaining to the field and I was excited to participate in the one focused on academics in educational technology, along with Jen Ross, Amy Collier, Jill Leafstedt, Jesse Stommel, and Sean Michael Morris. We had a wonderful conversation, but 50 minutes is never enough to cover this topic. The Sloan-C organizing committee should consider making this session a longer (free-to-attend) workshop.
I took the following two pictures on two recent trips. Similarities and differences abound, but one difference (other than the language) stands out for me. And that difference reminds me of an unfortunate state of affairs in the learning technologies field.
Look at the photo below. It’s from a menu that I came across in Dublin.
And the next one: It’s from a menu that I came across in Stockholm.
Other than the differences in the language, do you notice anything else? (Hint: Look at the typography.) Wouldn’t it be amazing if instructional/learning designers paid as much attention to detail? Yes, beauty and aesthetics are probably the least of our problems (so say the critics), but they count, and they count more and more in a world where beauty (constructed as it may be) surrounds us.
(High resolution images are available on my flickr page)
This is another one of those mini posts related to the changing nature of the work that academics do; specifically, publishing. I wrote this after being directed to the Public Library of Science site from Tony Hirst‘s tweet:
If you visit the website mentioned (here) you will see that the Public Library of Science will be making available a number of metrics intending to evaluate the reach of published articles (I played with a similar concept here). These metrics (which will accompany each article) include reader notes and comments, ratings, social bookmarks, citations in the academic literature, and so on. Not only is this a step toward transparently assessing the value of a publication, it provides another impetus for academics to seriously consider engaging with and participating in social media spheres. In an age where ongoing debate, collaboration, interaction, participation, and engagement are daily buzzwords when envisioning improved education, shouldn’t the same ideas apply to our publications? If you are interested in these issues you may like to look at this cloudwork (and especially the comments made by Giota on credibility, resistance, legitimacy, and power structures). It’s an interesting conversation.
In this post I demonstrate several points that I have been playing with over the years. On the one hand, the post takes a simple concept (the popularity of academic journals) and attempts to rethink it in the context of the digital, interconnected space. On the other hand, it demonstrates the power of the “cloud” and the opportunities provided by posting information in online spaces that are accessible via standardized formats (such as XML). The posting also serves as an example of what kinds of opportunities mashups can provide to universities/education. And finally, I just wanted to learn how to remix data via online services.
As you may have seen in my previous posting, we collected a list of all the open access online journals that we could find that are focused on publishing educational technology research. While having the list online in an open spreadsheet format allows anyone interested to update it, it also allows us to manipulate and remix the data. As a simple example, consider the issue of journal rankings. I’ve seen it debated on ITForum, on twitter, at the University of Minnesota where I did my PhD, and at the University of Manchester where I currently work. The issue is that “top tier” journals are good for tenure, but there are debates on what constitutes “top tier.” Is it readership? Rejection rates? Quality? Citations? All the above? I could link to a few different resources here, but the only one I will refer interested readers to is the European Science Foundation ERIH listings that I personally use as a guide.
My intention in this post is to rank the online open access journals according to “popularity.” As I see the rolling eyes through the tubes of the internet, let me say that popularity in this case refers to the number of sites that link to a particular page. Higher numbers denote more inbound links (= higher popularity). If you want to see the popularity metrics without reading the details of how this was done, the end result (that is generated every time you click on the link) is available on this page. At the time of writing, the least linked-to journal had 0 inbound links and the most linked-to journal had 31,534 links.
To be fair (or, “a word of caution”): The popularity index is not without its faults. Popularity doesn’t mean quality or even readership. The number of inbound links can be easily manipulated. The measure leaves out RSS subscriptions and the number of individuals receiving TOC alerts. Also, inbound links carry equal weight regardless of where they come from. Another issue relates to journals changing URLs. For example, the Journal of Computer-Mediated Communication used to be hosted at Indiana University but is now part of the Wiley InterScience group (and is still open access). Also, the URL we used to link to a journal might not be the most appropriate one. To fully understand and see the problems with this method, one has to dive under the hood of the whole process, and that’s what I am doing next.
The implementation in detail
The journal URLs are posted in a Google spreadsheet that allows data to exist online in a variety of formats (e.g. csv and html files). Those files can then be read into Yahoo Pipes (essentially, a drag-and-drop mashup tool). Once Yahoo Pipes has a list of journal URLs, those URLs are sent through the Yahoo Site Explorer API which generates “information about the pages linking to a particular page or pages within a domain.” That information includes the magic numbers used in this exercise (i.e. the number of pages linking to a particular journal via its URL). Once the numbers are generated, Yahoo Pipes exports them as an RSS feed. That feed can then be imported back to a Google spreadsheet. And that’s it. Whenever a journal URL is added to the spreadsheet, the pipe generates a popularity number for it without anyone needing to do anything. A new journal appears? No problem, just add the URL and its inbound links will be counted automatically. If you want the full details, feel free to grab the actual Yahoo pipe that does all the work and clone it (at this point I should thank Mat Morisson and Tony Hirst, whose postings on Yahoo Pipes and online data manipulation helped me rethink how I was doing this). If you don’t have a Yahoo account and are interested in how the implementation looks, the image at the top of this post is the actual pipe created.
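For readers who would rather see the gist than clone the pipe, the whole workflow can be sketched in a few lines of Python. Everything in this sketch is hypothetical: the journal names, URLs, and link counts are made up, and the stubbed `inbound_link_count` function simply stands in for the Yahoo Site Explorer query (which, like Yahoo Pipes itself, has since been retired). The spreadsheet step is represented by its CSV export.

```python
import csv
import io

# Hypothetical inbound-link counts, standing in for the Yahoo Site
# Explorer API query in the original pipe. Real code would call a
# link-index service here instead of reading from a dictionary.
FAKE_LINK_INDEX = {
    "http://journal-a.example/": 31534,
    "http://journal-b.example/": 120,
    "http://journal-c.example/": 0,
}

def inbound_link_count(url):
    """Return the number of pages linking to `url` (stubbed here)."""
    return FAKE_LINK_INDEX.get(url, 0)

def rank_journals(csv_text):
    """Read (title, URL) rows as exported from the spreadsheet and
    return them ranked by inbound links, most-linked first."""
    rows = csv.reader(io.StringIO(csv_text))
    ranked = [(title, url, inbound_link_count(url)) for title, url in rows]
    ranked.sort(key=lambda row: row[2], reverse=True)
    return ranked

# The spreadsheet's CSV export: one journal per row.
sheet = """Journal A,http://journal-a.example/
Journal B,http://journal-b.example/
Journal C,http://journal-c.example/"""

for title, url, links in rank_journals(sheet):
    print(f"{title}: {links} inbound links")
```

Adding a new journal is just a matter of appending a row to the spreadsheet; the ranking picks it up automatically, which is exactly the property that made the Pipes version maintenance-free.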
A final word of caution
This is not a valid method to decide where to send your next paper :). Yet, as I see more and more conversations online about open access (e.g., BJET published an editorial on the topic on Aug 12, 2009) and alternative ways to evaluate one’s contribution to their chosen field, this simple example may ignite ideas for evaluating journal contributions (in the UK at least the issue of journal impact is currently being debated as we await the transformation of the Research Assessment Exercise). Also, the ranking is less interesting to me than the implications behind our ability to remix available data to think about journal “impact”. Finally, if you are managing an online open access journal and you feel that the URL used is not representative of where users link to, please feel free to correct the URL by visiting the original listing. If we used an erroneous link, we apologize and we thank you for helping us correct it.
This past week, my colleague and I had the pleasure of hosting a group of 25 faculty members from the Kingdom of Saudi Arabia. In cooperation with the National Center For e-learning and Distance Learning, we held a two-week workshop/training session for them on e-learning, digital technologies, and education. Our conversations over these days touched upon multiple aspects of online and distance learning, ranging from cultural issues to techno-social affordances, to LMS evaluation, quality assurance, creativity, and pedagogical transformation. While I had a curriculum designed for my workshop days, I followed about half of it. The rest was revised on the spot according to what we felt we needed to cover and the needs that arose. In reality, the workshop wouldn’t have been successful had the curriculum been set in stone, but, if you are reading this far, I am probably preaching to the wrong choir.
Below is a list of items/ideas surrounding workshop issues. Other than being helpful to me, they might also be helpful to you if you are planning on leading a workshop/training session:
- People seem to like lists. I don’t know why, but they do. I think it was Curt Bonk who wrote that people like lists and acronyms (probably because they are memorable), but the last item that I gave to my colleagues before they left today was a list of 10 things to keep in mind when using technology in education.
- This group was especially interested in learning from our experience with e-learning. Frequent questions were: How does the University of Manchester do e-learning? How do you train instructors/professors in using technology in education? What is your e-learning agenda? How do you convince instructors to adopt technology? What went wrong and what did you learn?
- Pedagogy and technology-enhanced pedagogy should be central and this should be made explicit from the very beginning. By George (!) enough with pedagogy-enhanced technology!
- University networks are just plain weird. On the one hand, my computer (which is registered on the network by its MAC address, a unique identifier) would not connect to the network via ethernet. On the other hand, more than one person can log on to the lab machines using the same username and password. Why the first is a problem while the second is acceptable baffles me.
- Practical activities and discussion trump theory.
- People also seem to like to explore the courses that others have created and investigate specific design ideas or specific things that worked well or didn’t. I had my own courses to showcase and a few other open courses, but I wasn’t able to invite others to talk about their own experiences/courses. Perhaps the next time.
- Every university is different and it’s always difficult to give specific input on what might work in a specific situation. Recipes for success are generally recipes for disaster. For example, in some of these universities, the university’s budget is a non-issue. Yes, you read this right. In this economic climate. This was something new for me. To be more specific, it doesn’t matter if Blackboard costs money and Moodle doesn’t.
- Studying your learners helps. Did you know that online learning and distance education are pressing matters in Saudi Arabia because 38% of the country’s population is between the ages of 10 and 14, and the country needs to provide higher education to these people? It’s an exciting time for our field in this part of the world.
- Respectfulness, politeness, openness, appreciation, and kindness (along with a desire to improve education) go a long way.
I will end by posting a link to a twitpic posting that occurred during class time when we were trying to explore how the college of applied arts could promote student work online. And, in the spirit of the cross-cultural learning that transpired during the sessions, I look forward to visiting my newfound colleagues in the near future in Saudi Arabia. Inshallah (which, incidentally, is a common Cypriot expression and is not derived from a specific religion)… oh, the things that this blog’s visitors learn are never-ending.
Alec has posted a CFP for a special issue on Technology and Social Media for the in education journal:
My student lacks the funds to graduate. We thought to turn to social media to help him graduate. Can you help?