Search results: "pedagogical agent" (Page 2 of 5)

Pedagogical agent bashing

Lots of my work has to do with “pedagogical agents.” These are virtual characters employed in electronic learning environments for instructional purposes. But what lies behind the lingo? These are images of characters (humans or even inanimate objects) that appear in learning modules or tutorials and “do something.” Some of them can hold a conversation with a learner (conversational pedagogical agents), while others present information, decorate the learning environment, or represent some sort of persona. I call this second type passive or non-conversational pedagogical agents. This is the type that gets employed the most in pedagogical agent research, and this is the type of agent that I am going to bash in this post! :)

Pedagogical agents represent one of those technologies that have been presented as greatly beneficial to teaching and learning. Yet the distinction between the two types hasn’t been made explicit. Conversational agents, for reasons that I won’t explore here, may be beneficial. Passive pedagogical agents, however, will not have any lasting impact on learning or leave any lasting impression on students. This is because:

* Pedagogical agents who merely present information to students are boring. Boring is bad. Let me say that again because educational technology researchers/designers might not have got it the first time around: Boring is bad.

* Pedagogical agents who don’t allow deviation from the given task are “oppressive”. To be clear, “oppression” here is compatible with (or derived from) Freire’s description of classroom oppression and democracy.

While I do believe (and have empirical evidence) that a pedagogical agent’s representation influences how people interact with it, passivity isn’t the way forward.

Apologies for the negative post – but to move forward we need to talk about these things too :)… Read the rest

Metaphors of generative AI in education

It’s been interesting to see what kinds of conceptual metaphors have been used to describe generative AI. Conceptual metaphors are ways to talk about something by relating it to something else. The ways we choose to speak about generative AI in education matter, because those choices impact experiences, expectations, and even outcomes. Early pedagogical agent research, for example, identified different roles for agents, often positioning them as experts, mentors, or motivators. Even the term pedagogical agents carries its own connotations around the kinds of work that such a system will engage in (pedagogical), compared to, say, conversational agents or intelligent agents.

Below is a small sample. What other metaphors have you seen?

Update: See Dave Cormier’s related post around prompt engineering, or what to call the act of talking to algorithms.… Read the rest

ChatGPT is the tree, not the forest.

“Not seeing the forest for the trees” is a North American idiom used to warn that focusing on the details might lead you to miss the larger issue. ChatGPT is the tree. Perhaps it’s the tallest or the leafiest tree, or the one that blossomed rapidly right in front of your eyes… sort of like a Japanese flowering cherry. What does this mean for you?

If you’re exploring ChatGPT – as a student, instructor, administrator, perhaps as a community – don’t focus solely on ChatGPT. Certainly, this particular tool can serve as one illustration of the possibilities, pitfalls, and challenges of Generative AI, but making decisions about Generative AI by focusing solely on ChatGPT may lead you to make decisions that are grounded in the idiosyncrasies of this particular technology at this particular point in time.

What does this mean in practice? Your syllabus policies should be broader than ChatGPT. Your taskforce and working groups should look beyond this particular tool. Your classroom conversations should highlight additional technologies.

I was asked recently to lead a taskforce to explore implications and put forward recommendations for our teaching and learning community. ChatGPT was the impetus. But our focus is Generative AI. It needs to be. And there’s a long AIED history here, which includes some of my earlier work on pedagogical agents.

… Read the rest

Bots, AI, & Education update #1

A rough set of notes from today that focus on teacherbots and artificial intelligence in education

  • Bots in education bring together many technologies & ideas including, but not limited to, artificial intelligence, data analytics, speech-recognition technologies, personalized learning, algorithms, recommendation engines, learning design, and human-computer interaction.
    • They seek to serve many roles (content curation, advising, assessment, etc.)
  • Many note the potential that exists in developing better algorithms for personalized learning. Such algorithms are at the core of how AI and bots are designed (a toy sketch of one transparent approach appears after these notes)
    • Concerns: Black box algorithms, data do not fully capture learning & may lead to biased outcomes & processes
  • Downes sees the crux of the matter as “what AI can currently do” vs. “what AI will be able to do”
    • This is an issue with every new technology and the promises of its creators
    • Anticipated future impact features prominently in claims surrounding impact of tech in edu
  • Maha Bali argues that AI work misunderstands what teachers do in the classroom
    • Yet, in a number of projects we see classroom observations as being used to inform the design of AI systems
  • “AI can free time for teachers to do X” is an oft-repeated claim of AI/bot proponents. This claim often notes that AI will free teachers from mundane tasks and enable them to focus on those that matter. We see this in Jill Watson, in talks from IBM regarding Watson applications to education, but also in earlier attempts to integrate AI, bots, and pedagogical agents in education (e.g., 1960s, 1980s). Donald Clark reiterates this when he argues that teachers should “welcome something that takes away all the admin and pain.” See the update below.
  • Another oft-repeated claim is that AI & bots will work with teachers, not replace them
  • At times this argument is convincing. At other times, it seems dubious (e.g., when made in instances where proponents ask readers/audience to imagine a future where every child could have instant access to [insert amazing instructor here])
  • Predictions regarding the impact of bots and AI abound (of course). There are too many to list here, but here’s one example
  • “Why a robot-filled education future may not be as scary as you think” argues that concerns around robots in education are to be expected. The article claims that people are “hard-wired” to perceive “newness as danger” as it seeks to explain away concerns by noting that education, broadly speaking, avoids change. There’s no recognition anywhere in the article that (a) education is, and has always been, in a constant state of change, and (b) edtech has always been an optimistic endeavour, so much so that its blind orthodoxy has been detrimental to its goal of improving education.
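To make the personalized-learning note above concrete, here is a minimal sketch of a deliberately transparent, mastery-based recommender. Everything in it (the function names, the curriculum, the 0.8 threshold) is invented for illustration; it is not any vendor’s actual system.

```python
# A deliberately transparent (non-black-box) recommender sketch:
# pick the next activity whose estimated mastery is below a threshold.
# All names and the 0.8 threshold are hypothetical.

MASTERY_THRESHOLD = 0.8

def recommend_next(mastery, sequence):
    """Return the first activity in the sequence whose estimated
    mastery falls below the threshold, or None if all are mastered."""
    for activity in sequence:
        if mastery.get(activity, 0.0) < MASTERY_THRESHOLD:
            return activity
    return None

# Example: a learner who has mastered fractions but not decimals.
curriculum = ["fractions", "decimals", "percentages"]
scores = {"fractions": 0.92, "decimals": 0.55}
print(recommend_next(scores, curriculum))  # -> "decimals"
```

Because every step here is inspectable, a learner or teacher can ask why an activity was recommended, which is the opposite of the black-box concern noted above.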

 

Update:

From Meet the mind-reading robo tutor in the sky:

And underpaid, time-stressed teachers don’t necessarily have the time to personalize every lesson or drill deep into what each child is struggling with.

Enter the omniscient, cloud-based robo tutor.

“We think of it like a robot tutor in the sky that can semi-read your mind and figure out what your strengths and weaknesses are, down to the percentile,” says Jose Ferreira, the founder and CEO of ed-tech company Knewton.

Read the rest

Imagine a future in which technologies teach humans

Pause for a few minutes and imagine a future in which technologies teach humans. Call them robots, bots, chatbots, algorithms, teaching machines, tutoring software, agents, or something else. Regardless, consider them technologies that teach.

[Image: robot teacher illustration. Vector created by Freepik.]

How far into the future is that time?

What do these technologies look like? Are they anthropomorphous? Are they human-like? In what ways are they human-like? Do they have voice capabilities, and if so, do they understand natural language? Are they men or women?  Do they have a representation in the way that one would imagine a teacher – such as a pedagogical agent – or do they function behind the scenes in ways that seem rather innocuous – such as the Mechanical MOOC?

Do these technologies teach humans of all ages? Do they teach independently, support human teachers, or do human teachers assist them? Are they featured in articles in the New York Times, The Guardian, and The Economist as innovations in education? Or, are they as common as desks and chairs, and therefore of less interest to the likes of the New York Times? Are they common in all learning contexts? Who benefits from technologies that teach? Is being taught by these technologies better or worse than being taught by a human teacher? In what ways is it better or worse? Are they integrated in affluent universities and K-12 schools? Or, are they solely used in educational institutions serving students of low socioeconomic status? Who has access to the human teachers and who gets the machines? Are they mostly used in public or private schools?

How do learners feel about them? Do they like them? Do they trust them? How do learners think that these technologies feel about them? Do they feel cared for and respected? How do learners interact with them? How do human teachers feel about them? Would parents want their children to be taught by these technologies? Which parents have a choice and which parents don’t? How do politicians feel about them? How do educational technology and data mining companies view them?

Do teaching technologies treat everyone the same based on some predetermined algorithm? Or, are their actions and responses based on machine learning algorithms that are so complex that even the designers of these technologies cannot predict their behaviour with exact precision? Do they subscribe to pre-determined pedagogical models? Or, do they “learn” what works over time for certain people, in certain settings, for certain content areas, for certain times of the day? Do they work independently in their own classroom? Or, do colonies of robo-teachers gather, share, and analyze the minutiae of student life, with each robo-teacher carefully orchestrating his or her next evidence-based pedagogical move supported by petabytes of data?
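One way to picture that last question in concrete terms: a simple multi-armed bandit, sketched below, is a hypothetical illustration (not a description of any real product) of how a teaching machine might “learn” which pedagogical move works over time.

```python
import random
from collections import defaultdict

# A toy epsilon-greedy bandit: one hypothetical way a teaching machine
# might "learn what works over time". Purely illustrative.

class PedagogicalBandit:
    def __init__(self, moves, epsilon=0.1):
        self.moves = moves            # candidate pedagogical moves
        self.epsilon = epsilon        # how often to explore at random
        self.counts = defaultdict(int)     # times each move was tried
        self.rewards = defaultdict(float)  # cumulative observed "success"

    def choose(self):
        if random.random() < self.epsilon:   # explore occasionally
            return random.choice(self.moves)
        # Otherwise exploit: pick the move with the best average reward,
        # trying untried moves first (their average is treated as infinite).
        return max(self.moves, key=lambda m: self.rewards[m] / self.counts[m]
                   if self.counts[m] else float("inf"))

    def observe(self, move, reward):
        self.counts[move] += 1
        self.rewards[move] += reward

bandit = PedagogicalBandit(["worked_example", "quiz", "hint"])
move = bandit.choose()
bandit.observe(move, reward=1.0)  # e.g., the learner answered correctly
```

Note what the sketch makes visible: the “reward” has to be defined by someone, and whatever it fails to capture about learning is exactly what the system will ignore.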

Final question for this complicated future, I promise: What aspects of this future are necessary and desirable, and why?

Read the rest

Learning Design at Pearson

Last week, a reporter from EdSurge reached out to me to shed some light on what Pearson called their Learning Design Principles. The EdSurge article is here, but below is a more detailed rough draft of the points that I shared. I am posting them here for a fuller picture of some of my thoughts.

  1. Nothing proprietary (yet, perhaps). I saw a number of sources note that Pearson released their proprietary learning design principles. There’s not much proprietary in the principles. All of these ideas are well-documented in the literature pertaining to educational technology found in cognitive psychology, learning sciences, instructional design, and education literature.
  2. It’s good to see that Pearson is using findings from the education literature to guide its design and development. Some of these principles should be standard practice. If you are creating educational technology products without considering concepts like instructional alignment, feedback, scaffolding, authentic learning, student-centered learning environments, and inquiry-based learning, you are likely creating more educational harm than good. The point is that using research to guide educational technology should be applauded and emulated. More educational technology companies should be using research to inform their designs and product iterations.
  3. BUT, since around 2011, the educational technology industry has promoted the narrative that education has not changed since the dawn of time. With a few exceptions, the industry has ignored the history, theory, and research of the academic fields associated with improving education with technology. The industry has ignored this at its own peril because we have a decent – not perfect, but decent – understanding of how people learn and how we can help improve the ways that people learn. But the industry has developed products and services starting from scratch, making the same mistakes that others have made in the past, while claiming that their products and services will disrupt education.
  4. Not all of the items released are principles. For example, “pedagogical agents” is on the list, but that’s not a principle. Having studied the implementation of pedagogical agents for more than 7 years, it’s clear that what Pearson is attempting to do is figure out how to better design pedagogical agents for learning. Forgive me while I link to some pdfs of my past work here, but: should an agent’s representation match the content area that they are supporting (should a doctor look like a doctor or should she have a blue mohawk?)? Table 1 in this paper provides more on principles for designing pedagogical agents (e.g., agents should establish their role so that learners have a clear anticipation of what the agent can and cannot do: Does the agent purport to know everything or is the agent intended to ask questions but provide no answers?). A toy sketch of what such a principle might look like in practice follows this list.
  5. As you can tell from the above, I firmly believe that industry needs research/researchers in developing, evaluating, and refining innovations.
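To illustrate point 4, here is a minimal, hypothetical sketch of how an agent might declare its role and its limits up front so that learners know what to expect. The names and fields are invented for illustration; this is not Pearson’s (or anyone’s) actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: make the agent's role and limits explicit in its
# configuration, so learners form accurate expectations of what it can do.

@dataclass
class AgentPersona:
    name: str
    role: str                # e.g., "mentor", "motivator", "expert"
    domain: str              # content area its representation should match
    answers_questions: bool  # does it purport to know, or only to ask?

    def introduction(self) -> str:
        limits = ("I can answer your questions about "
                  if self.answers_questions
                  else "I will ask questions, but not answer them, about ")
        return f"Hi, I'm {self.name}, your {self.role}. {limits}{self.domain}."

# Example: an agent that establishes its role before tutoring begins.
print(AgentPersona("Dr. Rivera", "mentor", "anatomy", True).introduction())
```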

But more importantly, happy, merry, just, and peaceful holidays to everyone!… Read the rest

Compassion, Kindness, and Care in Digital Learning Contexts

Bear with me. This work-in-progress is a bit raw. I’d love any feedback that you might have.

Back in 2008, my colleagues and I wrote a short paper arguing that social justice is a core element of good instructional design. Good designs were, and still are, predominantly judged upon their effectiveness, efficiency, and engagement (e3 instruction). Critical and anti-oppressive educators and theorists laid the foundations for extending educational practice beyond effectiveness long ago.

I’m not convinced that edtech, learning design, instructional design, digital learning, or any other label that one wants to apply to the “practice of improving digital teaching and learning” is there yet.

I’ve been thinking more and more about compassion with respect to digital learning. More specifically, I’ve been reflecting on the following question:

What does compassion look like in digital learning contexts?

I’m blogging about this now because my paper journal is limiting, and because various circles within the field increasingly seem to be coalescing around similar themes. For instance,

  • The CFP for Learning with MOOCs III asks: What does it mean to be human in the digital age?
  • Our research questions reductionist agendas embedded in some approaches to evaluating and enhancing learning online. Similar arguments are made by Jen Ross, Amy Collier, and Jon Becker.
  • Kate Bowles says “we have a capacity to listen to each other, and to honour what is particular in the experience of another person.”
  • Lumen Learning’s personalized pathways recognize learner agency (as opposed to dominant personalization paradigms that focus on system control)

Compassion is one thing that these initiatives, calls to action, and observations have in common (empowerment is another, but that’s a different post).

This is not a call for teaching compassion or empathy to the learner. That’s a different topic. I’m more concerned here with how to embed compassion in our practice – in our teaching, in our learning design processes, in the technologies that we create, and in the research methods that we use. At this point I have a lot of questions and some answers. Some of my questions are:

  • What does compassionate digital pedagogy look like?
    • What are the theories of learning that underpin compassionate practice?
    • What does a pedagogy of care look like? [Noddings’s work is seminal here. See also some thoughts from a talk I gave, thoughts from Lee Skallerup Bessette, and a paper describing how caring is experienced in online learning contexts.]
  • What are the purported and actual relationships between compassion and various innovations such as flexible learning environments, competency-based learning, and open education?
    • What are the narratives surrounding innovations? [The work of Neil Selwyn, Audrey Watters, and David Noble is helpful here.]
  • What does compassionate technology look like?
    • Can technologies express empathy and sympathy? Do students perceive technologies as expressing empathy? [Relevant to this: research on pedagogical agents, chatbots, and affective computing. A toy sketch of “expressed” empathy follows this list.]
    • What does compassion look like in the design of algorithms for new technologies?
  • What does compassionate learning design look like?
    • Does a commitment to anti-oppressive education lead to compassionate design?
    • Are there any learning design models that explicitly account for compassion and care? Is that perhaps implicit in the general aim to improve learning & teaching?
    • In what ways is compassion embedded in design thinking?
  • What do compassionate digital learning research methods look like?
    • What are their aims and goals?
    • Does this question even make sense? Does this question have to do with the paradigm or does it have to do with the perspective employed in the research? Arguing that research methods informed by critical theory are compassionate is easy. Can positivist research methods be compassionate? Researchers may have compassionate goals and use positivist approaches (e.g., “I want to evaluate the efficacy of testing regimes because I believe that they might be harmful to students”).
  • What does compassionate digital learning advocacy look like?
    • Advocating for widespread adoption of tools/practices/etc. without addressing social, political, economic, and cultural contexts is potentially harmful (e.g., social media might be beneficial, but advocating for everyone to use social media ignores the fact that certain populations may face more risks when doing so)
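On the question of whether technologies can express empathy, here is a toy, rule-based sketch. It is meant to sharpen the question rather than answer it: matching a keyword and prepending a sympathetic phrase is expression, not feeling. All cues and phrases are invented for illustration.

```python
# Toy rule-based "empathy": acknowledge frustration before answering.
# Illustrative only; real affective computing systems are far richer.

FRUSTRATION_CUES = ("stuck", "confused", "give up", "frustrated")

def empathic_reply(student_message: str, answer: str) -> str:
    """Prepend an acknowledgement when the message signals frustration."""
    if any(cue in student_message.lower() for cue in FRUSTRATION_CUES):
        return ("That sounds frustrating. Many students find this part "
                "hard, so let's take it one step at a time. " + answer)
    return answer

print(empathic_reply("I'm stuck on question 3", "Try isolating x first."))
```

Whether learners perceive such canned acknowledgements as care, rather than as a script, is precisely the empirical question that the pedagogical agent and affective computing literatures take up.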

There are many other topics here (e.g., adjunctification, pedagogies of hope, public scholarship, commercialization….) but there’s more than enough in this post alone!… Read the rest
