Category: Ideas

Educational Technology. #EdTech. A discipline?

I’ve been (re)reading the numerous posts on whether educational technology is a discipline, and on whether it’s needed. In light of that, I thought I’d post a link to this book: Educational Technology: A Definition with Commentary.

The first paragraph from the introduction reads:

“Continuing the tradition of the 1963, 1977, and 1994 AECT projects to define the ever-changing contours of the field, the Definition and Terminology Committee completed the most recent definitional effort with the publication of Educational Technology: A Definition with Commentary in 2007. The main purpose of the 384-page book is to frame the issues confronting educational technology in the context of today’s world of education and training. What is new, and frankly, controversial, about this latest definition is its insistence that “values” are integral to the very meaning of educational technology.”

I wonder what this conversation around discipline would look like if we published our work in more open ways, described the field in more consistent ways, were more inclusive, and engaged in more advocacy.

Your thoughts on our Table of Contents for an upcoming book proposal?

My colleague Ash Shaw and I are working on a book that aims to highlight student voices in online learning. The main aims are to surface the experiences of online learners in an evocative and accessible manner, synthesize the literature on the topic, and present our original work. Below is our draft table of contents. If you have a couple of minutes, could you take a look at it and let us know if we are missing any topics, debates, or issues that might be of interest to the average faculty member or student?

Thank you!

(Each entry gives the chapter number, topic, and a summary of the questions it answers.)

2. Demographics: Examines who today’s online learners are and how their demographics have changed over time. Who are today’s online learners? How many students enroll in online courses nationally and globally? How have demographics changed over time?
3. Who succeeds? (or, The online paradox): Investigates why students who take online courses have greater degree-completion rates even though online courses are characterized by higher attrition rates.
4. Motivations: Investigates the reasons individuals take online courses. Shows that students take online courses for a variety of reasons, and that those reasons differ depending on the type of online course (e.g., some learners take MOOCs for different reasons than other online courses).
5. Digital Literacies: Examines the skills required to participate productively in online courses and the need to develop them.
6. Note-taking: Uses note-taking to illustrate that online learning research focused solely on tracking student activity on platforms is insufficient to understand the human condition and hence to improve learning outcomes.
7. Self-directed learning: Investigates self-directed learning as a process that contemporary learners need to develop and apply.
8. Openness: Investigates the meaning of the term “openness” in the context of online learning.
9. Personalized learning: Examines efforts to develop adaptive learning software and automate instruction (system control), and juxtaposes those efforts with designs that allow learners to personalize their own learning (learner control). Explores instructor strategies and designs to personalize learning.
10. Flexibility: Examines the ways that online courses can be designed to accommodate learners’ lives and allow flexible participation. Investigates issues of modality and (a)synchronicity.
11. Social Media: Investigates how social media are used in online courses and shows how intentional integration of such tools can lead to positive outcomes.
12. Loneliness, or “The student who watched videos alone”: Examines how online learning can be a lonely and isolating experience and proposes strategies for enhancing presence and immediacy.
13. Emotions: Shows that learning online is an emotional experience, calling for a more caring pedagogy and critiquing calls to employ online learning simply to make offerings more efficient.
14. Lurking, or “The student who learned as much by just watching videos”: Investigates the topic of lurking. Highlights the visible and invisible practices that online learners engage in. Demonstrates…
15. Time, or “The student who stole time from his family to study”: Explores time management in online students’ lives, and investigates how courses can be designed to fit the complexity of learners’ day-to-day realities (e.g., work and family requirements).
16. Dropout, Attrition, and Persistence: Explores the topic of attrition, as online courses often face higher attrition rates than alternatives.
17. Instructor: Examines the role of the instructor in online learning environments. Investigates instructor presence and support, and explores how instructors can contribute to meaningful and effective learning experiences.
18. Online vs. face-to-face learning: Investigates whether face-to-face learning is better than online learning. Presents the empirical research on the question and highlights (a) how different forms of education serve different needs, and (b) how learning design is a more significant factor in determining learning outcomes than modality.
19. MOOCs, or “The student who completed 200 courses: And other, less profound, online learning experiences”: Explores the topic of MOOCs and summarizes the empirical research that exists on the topic. Explains the origins of the term, the different designs, and how the concept has evolved over time, with particular emphasis on students’ experiences in MOOCs.
20. The Learning Management System and Next-Generation Digital Learning Environments: Investigates the idea that Learning Management Systems contribute little to student learning. Proposes that courses are “nodes in a network” as opposed to hermetic containers of knowledge. Shows how course design differs between these two ideas.
21. Challenges and remediation strategies: Investigates the challenges that online learners face and the strategies they and others employ to address them.

Compassion, Kindness, and Care in Digital Learning Contexts

Bear with me. This work-in-progress is a bit raw. I’d love any feedback that you might have.

Back in 2008, my colleagues and I wrote a short paper arguing that social justice is a core element of good instructional design. Good designs were, and still are, predominantly judged on their effectiveness, efficiency, and engagement (e3 instruction). Critical and anti-oppressive educators and theorists laid the foundations for extending educational practice beyond effectiveness long ago.

I’m not convinced that edtech, learning design, instructional design, digital learning, or any other label that one wants to apply to the “practice of improving digital teaching and learning” is there yet.

I’ve been thinking more and more about compassion with respect to digital learning. More specifically, I’ve been reflecting on the following question:

What does compassion look like in digital learning contexts?

I’m blogging about this now because my paper journal is limiting, and because various circles in the field increasingly seem to be coalescing around similar themes. For instance,

  • The CFP for Learning with MOOCs III asks: What does it mean to be human in the digital age?
  • Our research questions reductionist agendas embedded in some approaches to evaluating and enhancing learning online. Similar arguments are made by Jen Ross, Amy Collier, and Jon Becker.
  • Kate Bowles says “we have a capacity to listen to each other, and to honour what is particular in the experience of another person.”
  • Lumen Learning’s personalized pathways recognize learner agency (as opposed to dominant personalization paradigms that focus on system control)

Compassion is one thing that these initiatives, calls to action, and observations have in common (empowerment is another, but that’s a different post).

This is not a call for teaching compassion or empathy to the learner. That’s a different topic. I’m more concerned here with how to embed compassion in our practice – in our teaching, in our learning design processes, in the technologies that we create, and in the research methods that we use. At this point I have a lot of questions and some answers. Some of my questions are:

  • What does compassionate digital pedagogy look like?
    • What are the theories of learning that underpin compassionate practice?
    • What does a pedagogy of care look like? [Noddings’s work is seminal here. See some thoughts from a talk I gave, thoughts from Lee Skallerup Bessette, and a paper describing how caring is experienced in online learning contexts.]
  • What are the purported and actual relationships between compassion and various innovations such as flexible learning environments, competency-based learning, and open education?
    • What are the narratives surrounding these innovations? [The work of Neil Selwyn, Audrey Watters, and David Noble is helpful here.]
  • What does compassionate technology look like?
    • Can technologies express empathy and sympathy? Do students perceive technologies expressing empathy? [Relevant to this: research on pedagogical agents, chatbots, and affective computing]
    • What does compassion look like in the design of algorithms for new technologies?
  • What does compassionate learning design look like?
    • Does a commitment to anti-oppressive education lead to compassionate design?
    • Are there any learning design models that explicitly account for compassion and care? Is that perhaps implicit in the general aim to improve learning & teaching?
    • In what ways is compassion embedded in design thinking?
  • What do compassionate digital learning research methods look like?
    • What are their aims and goals?
    • Does this question even make sense? Does this question have to do with the paradigm or does it have to do with the perspective employed in the research? Arguing that research methods informed by critical theory are compassionate is easy. Can positivist research methods be compassionate? Researchers may have compassionate goals and use positivist approaches (e.g., “I want to evaluate the efficacy of testing regimes because I believe that they might be harmful to students”).
  • What does compassionate digital learning advocacy look like?
    • Advocating for widespread adoption of tools/practices/etc. without addressing social, political, economic, and cultural contexts is potentially harmful (e.g., social media might be beneficial, but advocating for everyone to use social media ignores the fact that certain populations may face greater risks when doing so).

There are many other topics here (e.g., adjunctification, pedagogies of hope, public scholarship, commercialization…), but there’s more than enough in this post alone!

Personalized learning: the locus of edtech debates

“Personalized learning” is that one area of research and practice that brings to the forefront many of the debates and issues that the field is engaging with right now. If one wanted to walk people through the field, and wanted to do so through *one* specific topic, that topic would be personalized learning.

Personalized cans? (CC-licensed image from Flickr)

Here are some of the questions that personalized learning raises:

  • We have a problem with labels and meaning in this field. Heck, we have a problem with what to call ourselves: Learning Technologies or Educational Technology? Or perhaps instructional design? Learning Design? Learning, Design, and Technology? Or is it Learning Science? Reiser asks: What field did you say you were in? The same is true for personalized learning. Audrey Watters and Mike Caulfield ask what “personalized learning” means and what the term’s history is. Does it mean different pathways for each learner, one pathway with varied pacing for each learner, or something else?

  • Where is the locus of control? Is personalization controlled by the learner? Is control left to the software? What of shared control? Obsolete views of personalization and adaptive learning focus on how the system can control both the content and the learning process, ignoring, for the most part, the learner, even though learner control appears to be an important determinant of success in e-learning (see Singhanayok & Hooper, 1998). The important question in my mind is the following: How do we balance system and learner control? Such shared control should empower students and enable technology to support and enhance the process. Downes distinguishes between personalized learning and personal learning. I think that locus of control is the distinguishing aspect, and that the role of shared control remains an open conceptual and empirical question. Debates about xMOOCs vs. cMOOCs fall here, as does the debate regarding the value of guided vs. discovery learning.

  • How do big data and learning analytics improve learning and participation? What are the limitations of depending on trace data? Personalized learning often appears to depend on the creation of learner profiles. For example, if you fit a particular profile you might receive a particular worked-out example or semi-completed problem, and problems might vary as one progresses through a pathway. Or, you might get an email from Coursera about “recommended courses” (see my point above regarding definitions and meanings). Either way, the role of large datasets, analytics, and educational data science – as well as the limitations and assumptions of these approaches, as we show in our research – is central to personalization and new approaches to education.

  • What assumptions do the authors of personalized learning algorithms make? We can’t answer this question unless we look at the algorithms. Such algorithms are rarely transparent. They often come in “black box” form, which means that we have no insight into how inputs are transformed into outputs. We don’t know the inner workings of the algorithms that Facebook, Twitter, and Google Scholar use, and we likely won’t know how the algorithms that EdTechCompany uses work to deliver particular content to particular groups of students. If independent researchers can’t evaluate the inner workings of personalized learning software, how can we be sure that such algorithms do what they are supposed to do without being prejudicial? Perhaps the authors of educational technology algorithms need a code of conduct, and a course on social justice?

  • Knewton touts its personalization engine. Does it actually work? Connecting this to broader conversations in the field: What evidence do we have about the claims made by the EdTech industry? Is there empirical evidence to support these claims? See, for example, this analysis by Phil Hill on the relationship between LMS use and retention/performance and this paper by Royce Kimmons on the impact of LMS adoption on outcomes. If you’ve been in the position of making a technology purchase in K-12/Higher Ed, you have likely experienced the unending claims regarding the positive impact of technology on outcomes and retention.

  • And speaking of data and outcomes, what of student privacy in this context? How long should software companies keep student data? Who has access to the data? Should the data follow students from one system (e.g., K-12) to another (e.g., Higher Ed)? Is there uniformity in place (e.g., consistent learner profiles) for this to happen? How does local legislation relate to educational technology companies’ use of student data? For example, see this analysis by BCCampus describing how British Columbia’s Freedom of Information and Protection of Privacy Act (FIPPA) impacts the use of US-based cloud services. The more one looks into personalization and its dependence on student data, the more one has to explore questions pertaining to privacy, surveillance, and ethics.

  • Finally, what is the role of openness in personalized learning? Advocates for open frequently argue that openness and open practices enable democratization, transparency, and empowerment. For instance, open textbooks allow instructors to revise them. But what happens when the product that publishing companies sell isn’t content? What happens when the product is personalized learning software that uses OER? Are the goals of the open movement met when publishers bundle OER with personalized learning software that restricts the freedoms associated with OER? What becomes of the open agenda to empower instructors, students, and institutions?

There’s lots to contemplate here, but the point is this: Personalized learning is ground zero for the field and its debates.

Automating the collection of literature – or, keeping up to date with the MOOC literature

Spoiler: We’ve been toying with automating the collection of literature on MOOCs (and other topics). Interested? Read further.

Researchers keep up to date with the literature on a topic in different ways. On a daily basis, for example, I use Table of Contents (TOC) alerts, RSS feeds, and Google Scholar alerts. Many colleagues have sought to track and share the literature on a topic. For example, danah boyd maintained this list of papers on Twitter and microblogging; Tony Bates shared a copy of the MOOC literature he collected on his blog; Katy Jordan also kept a collection of MOOC literature.

A Google Scholar Alert

The problem with maintaining an updated list of relevant literature on a topic is that it quickly becomes a daunting and time-consuming task, especially for popular topics (like MOOCs or social media or teacher training).

In an attempt to automate the collection and sharing of literature, my research team and I created a Python script that goes through the Google Scholar alert emails I receive (see above), parses their content, and publishes it to an HTML page on my server, where others can access it. The script runs daily, and any new literature is added to the page.
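
The core parsing step can be sketched roughly as follows. This is a simplified, hypothetical reconstruction, not our actual script; in particular, the `gse_alrt_title` class name is an assumption about the markup of Scholar alert emails and may change without notice.

```python
# Hypothetical sketch: extract paper titles and links from the HTML body of a
# Google Scholar alert email. Assumes alert titles are links carrying the
# class "gse_alrt_title" (an assumption about Scholar's markup, not a fact).
from html.parser import HTMLParser


class AlertParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.entries = []        # collected (title, url) tuples
        self._href = None        # href of the title link currently open
        self._title_parts = []   # text fragments inside that link

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "gse_alrt_title" in (attrs.get("class") or ""):
            self._href = attrs.get("href") or ""
            self._title_parts = []

    def handle_data(self, data):
        if self._href is not None:
            self._title_parts.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.entries.append(("".join(self._title_parts).strip(), self._href))
            self._href = None


def parse_alert(html_body):
    """Return a list of (title, url) tuples found in one alert email body."""
    parser = AlertParser()
    parser.feed(html_body)
    return parser.entries


if __name__ == "__main__":
    sample = '<a class="gse_alrt_title" href="http://example.org/paper">MOOCs and learner experiences</a>'
    for title, url in parse_alert(sample):
        print(f'<li><a href="{url}">{title}</a></li>')
```

A daily cron job would then fetch new alert emails, run something like `parse_alert` over each body, and append any unseen entries to the HTML page.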

We aren’t there just yet, but here is the output for the MOOC literature going back to November 2012 – all 400 pages. I placed it in a Google Document because the HTML file is 2.5MB (and it’s easier for people to download it in a format they prefer).

In theory this is supposed to work quite well, but there are a couple of problems with it:

  1. The output is only as good as the input. Google Scholar (and its associated alerts) is a black box – there’s no transparency about what is and isn’t indexed.
  2. It’s automated, which means the output isn’t clean: some “MOOC literature” may not really be MOOC literature, because Google Scholar alerts match keywords in the body of papers/text rather than keywords describing the papers/text.
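
One crude way to mitigate the second problem is a post-processing filter that keeps an entry only when the query term appears in its title. This is a hypothetical cleanup step, not part of our script, and it trades recall for precision: relevant papers that don’t mention the term in their title get dropped.

```python
# Hypothetical cleanup step: Google Scholar alerts match keywords anywhere in
# a paper's body, so require the query term in the title itself before
# keeping an entry. Entries are (title, url) pairs.
def filter_entries(entries, keyword="mooc"):
    """Keep only (title, url) pairs whose title mentions the keyword."""
    keyword = keyword.lower()
    return [(title, url) for title, url in entries if keyword in title.lower()]


if __name__ == "__main__":
    raw = [
        ("MOOC completion rates revisited", "http://example.org/a"),
        ("A survey of classroom response systems", "http://example.org/b"),
    ]
    print(filter_entries(raw))  # only the first entry survives
```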

We plan to make the source code available and to describe the installation process so that others can use it for their own literature needs. My question is: How can the output be more helpful to you? Is there anything else we can do to improve it?

On peer-review

My colleague Amy Collier wrote a thoughtful and reflective post on peer review. Peer review has been a topic of conversation in a number of other spaces recently, including the Chronicle of Higher Education advice columns and Inside Higher Ed.

One of the most thoughtful pieces on the topic that I have read is a conversational series of articles initiated by Kevin Kumashiro, called Thinking Collaboratively about the Peer-Review Process for Journal-Article Publication and published in Harvard Educational Review. It is an excellent piece of writing, and even though it was published in 2005 it is as relevant today as it ever was. For example, here’s a sample from one of my favorite authors, William Pinar, that appears in this paper:

For professors of education, working pedagogically should structure all that we do, not just what happens in our classrooms or in our offices. Working pedagogically should structure our research as we labor to teach our students and our colleagues what we have understood from study and inquiry. It must also structure our professional relations with each other, especially during those moments of anonymity when we are called upon to critique research and inquiry that is under consideration for publication in our field’s scholarly journals. When we are anonymous, we are called upon to perform that pedagogy of care and concern to which we claim to be committed. The ethical conduct of our professional practice demands no less.

Peer review will continue to receive attention and interest, as higher education faces formidable technological and socio-cultural pressures. We wrote about this issue in one of our past papers (pp. 770-771), and I am going to quote it at length here because of its relevance:

“Peer review is the first example of how seemingly non-negotiable scholarly artifacts are currently being questioned: while peer review is an indispensable tool intended to evaluate scholarly contributions, empirical evidence questions the value and contributions of peer review (Cole, Cole, & Simon, 1981; Rothwell & Martyn, 2000), while its historical roots suggest that it has served functions other than quality control (Fitzpatrick, 2011). On the one hand, Neylon and Wu (2009, p. 1) eloquently point out that “the intentions of traditional peer review are certainly noble: to ensure methodological integrity and to comment on potential significance of experimental studies through examination by a panel of objective, expert colleagues”, while Scardamalia and Bereiter (2008, p. 9) recognize that “like democracy, it [peer-review] is recognized to have many faults but is judged to be better than the alternatives”. Yet, peer review’s harshest critics consider it an anathema. Casadevall and Fang (2009) for instance, question whether peer review is in fact a subtle cousin of censorship that relies heavily upon linguistic negotiation or grammatical “courtship rituals” to determine value, instead of scientific validity or value to the field, while Boshier (2009) argues that the current, widespread acceptance of peer review as a valid litmus test for scholarly value is a “faith-” rather than “science-based” approach to scholarship, citing studies in which peer review was found to fail in identifying shoddy work and to succeed in censoring originality… The challenge for scholarly practice is to devise review frameworks that are not just better than the status quo, but systems that take into consideration the cultural norms of scholarly activity, for if they don’t, they might be doomed from their inception.
A recent experiment with public peer review online at Nature, for example, revealed that scholars exhibited minimal interest in online commenting and informal discussions with findings suggesting that scholars “are too busy, and lack sufficient career incentive, to venture onto a venue such as Nature’s website and post public, critical assessments of their peers’ work” (Nature, 2006, ¶ 9). Shakespeare Quarterly, a peer-reviewed scholarly journal founded in 1950 conducted a similar experiment in 2010 (Rowe, 2010). While the trial elicited more interest than the one in Nature with more than 40 individuals contributing who, along with the authors, posted more than 300 comments, the experiment further illuminated the fact that tenure considerations impact scholarly contributions. Cohen (2010) reported that “the first question that Alan Galey, a junior faculty member at the University of Toronto, asked when deciding to participate in The Shakespeare Quarterly’s experiment was whether his essay would ultimately count toward tenure”. Considering the reevaluation of such an entrenched and centripetal structure of scholarly practice as peer review, along with calls for recognizing the value of diverse scholarly activities (Pellino et al., 1984), such as faculty engagement in K–12 education (Foster et al., 2010), we find that the internal values of the scholarly community are shifting in a direction that may be completely incompatible with some of the seemingly non-negotiable elements of 20th century scholarship.”

MOOCs, automation, artificial intelligence seminar

I will be visiting my colleagues at the University of Edinburgh in mid-June to give a seminar on MOOCs, automation, artificial intelligence and pedagogical agents. This is a free event organized by the Moray House School of Education at the U of Edinburgh and supported by the Digital Cultures and Education research group and DigitalHSS. Please feel free to join us face-to-face or online (Date: 18 June 2014; Time: 1-3pm) by registering here.

This seminar will bring together some of my current and past research. A lot of my work in the past examined learners’ experiences with conversational and (semi)intelligent agents. In that research, we discovered that the experience of interacting with intelligent technologies was engrossing (pdf). Yet, learners often verbally abused the pedagogical agents (pdf). We also discovered that appearance (pdf) may be a significant mediating factor in learning. Importantly, this research indicated that “learners both humanized the agents and expected them to abide by social norms, but also identified the agents as programmed tools, resisting and rejecting their lifelike behaviors.”

A lot of my current work examines experiences with open online courses and online social networks, but what exactly do pedagogical agents and MOOCs have to do with each other? Ideas associated with Artificial Intelligence are present both in the emergence of xMOOCs (EdX, Udacity, and Coursera emanated from AI labs) and in certain practices associated with them – e.g., see Balfour (2013) on automated essay scoring. Audrey Watters has highlighted these issues in the past. While I haven’t yet seen discussions of integrating lifelike characters and pedagogical agents in MOOCs, the use of lifelike robots for education and the role of the faculty member in MOOCs are areas of debate and investigation in both the popular press and the scholarly literature. The quest to automate instruction has a long history and lives within the sociocultural context of particular time periods. For example, the Second World War found US soldiers and civilians unprepared for the war effort, and audiovisual devices were used extensively to train individuals efficiently at a massive scale. Nowadays, similar efforts at achieving scale and efficiency reflect the problems, issues, and cultural beliefs of our time.

I’m working on my presentation, but if you have any questions or thoughts to share, I’d love to hear them!
