Tag: pedagogical agents

Bots, AI, & Education update #3

Today’s rough set of notes that focus on teacherbots and artificial intelligence in education

  • Chatbots: One of the technologies that’s mesmerized Silicon Valley
  • Humans have long promised future lives enhanced by machines
  • Many proponents highlight the qualities of bots vis-a-vis teachers
    • personal
    • personalized
    • monitoring & nudging
    • can give reliable feedback
    • don’t get tired
    • etc etc
  • Knewton: Algos to complement and support teachers (sidenote: as if anyone will be forthright about aiming to replace teachers… except perhaps this book, which playfully states that “coaches (once called teachers)” will cooperate with AI)
  • Genetics with Jean: bots with affect-sensing functionality, i.e., software that detects students’ affective states and responds accordingly (a toy sketch of this idea appears after this list)
  • Driverless Ed-Tech: Robots aren’t going to march in and take jobs; it’s the corporations, and the systems that support them, that enable that to happen.
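
To make the affect-sensing note above concrete, here is a minimal sketch (my own, not Genetics with Jean’s actual implementation) of a bot that guesses a student’s affective state from a message and adapts its reply. The cue lists, labels, and canned responses are hypothetical stand-ins; real systems infer affect from much richer signals (facial expression, posture, interaction logs).

```python
# Hypothetical keyword cues mapping surface text to crude affective states.
AFFECT_CUES = {
    "frustrated": ["stuck", "hate this", "give up", "confusing"],
    "bored": ["boring", "whatever", "are we done"],
}

def detect_affect(message: str) -> str:
    """Return a crude affect label based on keyword cues."""
    text = message.lower()
    for state, cues in AFFECT_CUES.items():
        if any(cue in text for cue in cues):
            return state
    return "neutral"

def respond(message: str) -> str:
    """Adapt the bot's next move to the detected affective state."""
    state = detect_affect(message)
    if state == "frustrated":
        return "This topic trips many people up. Let's try a smaller step."
    if state == "bored":
        return "Want to try a harder challenge problem instead?"
    return "Okay, let's continue with the next exercise."

print(respond("I'm stuck and this is confusing"))  # -> frustration branch
```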

Bots, AI, & Education update #2

Yesterday’s rough set of notes that focus on teacherbots and artificial intelligence in education

  • Notable critiques of Big Data, data analytics, and algorithmic culture exist (e.g., boyd & Crawford, 2012; Tufekci, 2014; recent critiques of YouTube’s recommendation algorithm; and Caulfield’s demonstration of polarization on Pinterest). While these rarely show up in discussions around bots and AI in education, critiques of learning analytics and big data (e.g., Selwyn, 2014; Williamson, 2015) are generally applicable to the technologies that enable bots to do what they do (e.g., Watters, 2015).
  • Complexity of machine learning algorithms means that even their developers are at times unsure as to how said algorithms arrive at particular conclusions
  • Ethics are rarely an area of focus in instructional design and technology (Gray & Boling, 2016) and in related edtech-focused areas. In designing bots, where should we turn for moral guidance? Who are such systems benefiting? Whose interests are served? If we can’t accurately predict how bots may make decisions when interacting with students (see the bullet point above), how will we ensure that moral values are embedded in the design of such algorithms? And whose moral values, in a tech industry that’s mired in bias, lacks broad representation, and rarely heeds user feedback (e.g., women have repeatedly highlighted the harassment they experience on Twitter for the past 5 or so years, with Twitter taking few, if any, steps to curtail it)?

Bots, AI, & Education update #1

A rough set of notes from today that focus on teacherbots and artificial intelligence in education

  • Bots in education bring together many technologies & ideas including, but not limited to, artificial intelligence, data analytics, speech-recognition technologies, personalized learning, algorithms, recommendation engines, learning design, and human-computer interaction.
    • They seek to serve many roles (content curation, advising, assessment, etc.)
  • Many note the potential of developing better algorithms for personalized learning. Such algorithms are central to the design of AI and bots (see the sketch after this list)
    • Concerns: black-box algorithms; data do not fully capture learning and may lead to biased outcomes and processes
  • Downes sees the crux of the matter as the gap between what AI can currently do and what AI will be able to do
    • This is an issue with every new technology and the promises of its creators
    • Anticipated future impact features prominently in claims surrounding impact of tech in edu
  • Maha Bali argues that AI work misunderstands what teachers do in the classroom
    • Yet, in a number of projects we see classroom observations as being used to inform the design of AI systems
  • “AI can free time for teachers to do X” is an oft-repeated claim of AI/bot proponents. This claim often notes that AI will free teachers from mundane tasks and enable them to focus on those that matter. We see this in Jill Watson, in talks from IBM regarding Watson applications to education, but also in earlier attempts to integrate AI, bots, and pedagogical agents in education (e.g., 1960s, 1980s). Donald Clark reiterates this when he argues that teachers should “welcome something that takes away all the admin and pain.” See the update below.
  • Another oft-repeated claim is that AI & bots will work with teachers, not replace them
  • At times this argument is convincing. At other times it seems dubious (e.g., when proponents ask readers/audiences to imagine a future where every child has instant access to [insert amazing instructor here])
  • Predictions regarding the impact of bots and AI abound (of course). There are too many to list here, but here’s one example
  • “Why a robot-filled education future may not be as scary as you think” argues that concerns around robots in education are to be expected. The article claims that people are “hard-wired” to perceive “newness as danger” as it seeks to explain away concerns by noting that education, broadly speaking, avoids change. There’s no recognition anywhere in the article that (a) education is, and has always been, in a constant state of change, and (b) edtech has always been an optimistic endeavour, so much so that its blind orthodoxy has been detrimental to its goal of improving education.
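
Since several of the bullets above lean on “personalization algorithms,” here is a minimal sketch of what such a loop can look like: estimate per-skill mastery from response history and serve practice on the weakest skill. The update rule, prior, threshold, and skill names are all hypothetical choices of mine; production systems use far more opaque models, which is exactly where the black-box concern noted above applies.

```python
LEARNING_RATE = 0.3  # hypothetical smoothing constant

class Personalizer:
    """Tracks a naive mastery estimate per skill and picks what to serve next."""

    def __init__(self, skills):
        self.mastery = {skill: 0.5 for skill in skills}  # uninformed prior

    def record(self, skill: str, correct: bool) -> None:
        # Exponential moving average of correctness as a mastery proxy.
        old = self.mastery[skill]
        self.mastery[skill] = old + LEARNING_RATE * (float(correct) - old)

    def next_skill(self) -> str:
        # Serve practice on the skill with the lowest estimated mastery.
        return min(self.mastery, key=self.mastery.get)

p = Personalizer(["fractions", "decimals", "ratios"])
p.record("fractions", True)
p.record("decimals", False)
print(p.next_skill())  # -> "decimals"
```

Even a toy like this shows how quickly the pedagogy gets baked into design choices (the prior, the update rule, what counts as “mastery”) that are invisible to the student and teacher.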

 

Update:

From “Meet the mind-reading robo tutor in the sky”:

And underpaid, time-stressed teachers don’t necessarily have the time to personalize every lesson or drill deep into what each child is struggling with.

Enter the omniscient, cloud-based robo tutor.

“We think of it like a robot tutor in the sky that can semi-read your mind and figure out what your strengths and weaknesses are, down to the percentile,” says Jose Ferreira, the founder and CEO of ed-tech company Knewton.

Talking to machines: What do learners and robots talk about?

[Image: “Talk like me” by pursyapt]

My research endeavors originally started with an attempt to understand interactions between learners and virtual characters, bots, and other artificially intelligent beings. Even though a lot of that research has been published, there are still a couple of papers forthcoming. As we move closer and closer to everything (and I mean everything) being networked, I believe that it’s important to keep examining our mediated existence and the ways we experience and interact with emerging forms of media. This is especially true for education. Until very recently, educators and practitioners have adopted technologies developed for non-educational purposes and used them to fit educational needs (e.g., TV, radio, computers, the Internet, YouTube, iTunes; the list is endless). This might be changing a little with the booming interest in educational technology, but when we adopt technologies developed for other purposes, we need to investigate the congruency between those technologies and our teaching/learning context.

In a paper that a graduate student and I wrote, we tried to understand what learners and virtual characters may discuss when they have the ability to hold open-ended conversations. If you were a student, and a virtual robot (of sorts) was deployed to support your learning, what would you ask it (him? her?)? If you could talk about anything, what would your interactions with him/her (it?) look like?

Here’s our abstract, describing our findings:

Researchers claim that pedagogical agents engender opportunities for social learning in digital environments. Prior literature, however, has not thoroughly examined the discourse between agents and learners. To address this gap, we analyzed a data corpus of interactions between agents and learners using open coding methods. Analysis revealed that: (1) conversations between learners and agents included sporadic on-task interactions with limited follow-up; (2) conversations were often playful and lighthearted; (3) learners positioned agents in multiple instructional/social roles; (4) learners utilized numerous strategies for understanding agent responses; (5) learners were interested in agents’ relationship status and love interests; and (6) learners asked personal questions to the agent but did not reciprocate to requests to talk about themselves.

You can download a pdf of the full paper below:

Veletsianos, G. & Russell, G. (2013). What do learners and pedagogical agents discuss when given opportunities for open-ended dialogue? Journal of Educational Computing Research, 48(3), 381-401.

What happens when pedagogical agents are off-task?

Social and non-task interactions are often recognized as a valuable part of the learning experience. Talk about football, community events, or local news, for example, may enable the development of positive instructor-learner relationships and a relaxed learning atmosphere. Non-task aspects of learning, however, have received limited attention in the education literature. Morgan-Fleming, Burley, and Price (2003) argue that this is the result of an implicit assumption that no pedagogical benefits are derived from non-task behavior, hence the reduction of off-task activities, such as recess time, in schools. This issue has received limited attention in the pedagogical agent literature as well. What happens when a virtual character designed to help a student learn about a topic introduces off-task comments to a lesson? What happens when a virtual instructor mentions current events? How do learners respond?

These are the issues that I am investigating in a paper published in the current issue of the journal Computers in Human Behavior, as part of my research on the experiences of students who interact with virtual instructors and pedagogical agents. The abstract, citation, and link to the full paper appear below:

Abstract
In this paper, I investigate the impact of non-task pedagogical agent behavior on learning outcomes, perceptions of agents’ interaction ability, and learner experiences. Quasi-experimental results indicate that while the addition of non-task comments to an on-task tutorial may increase learning and perceptions of the agent’s ability to interact with learners, this increase is not statistically significant. The further addition of non-task comments, however, harms learning and perceptions of the agent’s ability to interact with learners in statistically significant ways. Qualitative results reveal that on-task interactions were efficient but impersonal, while non-task interactions were memorable but distracting. Implications include the potential for non-task interactions to create an uncanny valley effect for agent behavior.

Veletsianos, G. (2012). How do Learners Respond to Pedagogical Agents that Deliver Social-oriented Non-task Messages? Impact on Student Learning, Perceptions, and Experiences. Computers in Human Behavior, 28(1), 275-283.

Enhancing the interactions between pedagogical agents and learners

One thing that I don’t usually post on this blog is information related to my research on pedagogical agents and virtual characters, which is one of the research strands that I’ve followed for the past 4 years. I am breaking away from that mold by posting this note : )


Specifically, my colleagues (Aaron Doering and Charles Miller) and I developed a research and design framework to guide smooth, natural, and effective communication between learners and pedagogical agents. Our reasons for developing this framework were varied, but after four years of research and design in the field, I became convinced that to push the field forward, we needed guidance. I use the word “guidance” as opposed to the words “rules” or “laws” because we “anticipate that designers, researchers, and instructors will adapt and sculpt the guidelines of the EnALI framework into their unique instructional contexts, ultimately kindling future research and design that will expand the framework foundations.”

The framework (called Enhancing Agent Learner Interactions, or EnALI) is grounded in three major theories: socio-cultural notions of learning, cooperative learning, and conflict theory. In it, we have tried to bring a humanist perspective and encourage designers and researchers to move beyond the use of pedagogical agents as dispassionate tools delivering pre-recorded lectures… but I’ll save that information for a different post. The paper is to appear in the Journal of Educational Computing Research as: Veletsianos, G., Miller, C., & Doering, A. (2009). EnALI: A Research and Design Framework for Virtual Characters and Pedagogical Agents. Journal of Educational Computing Research, 41(2), 171-194 [email me for a preprint].

The framework is posted below (with a toy sketch of guideline 1 after it), but if you want a full explanation of the guidelines, please refer to the paper. As always, questions, comments, and critique are appreciated:

1. Pedagogical Agents should be attentive and sensitive to the learner’s needs and wants by:

• Being responsive and reactive to requests for additional and/or expanded information.
• Being redundant.
• Asking for formative and summative feedback.
• Maintaining an appropriate balance between on- and off-task communications.

2. Pedagogical Agents should consider intricacies of the message they send to learners by:

• Making the message appropriate to the receiver’s abilities, experiences, and frame of reference.
• Using congruent verbal and nonverbal messages.
• Clearly owning the message.
• Making messages complete and specific.
• Using descriptive, non-evaluative comments.
• Describing feelings by name, action, or figure of speech.

3. Pedagogical Agents should display socially appropriate demeanor, posture, and representation by:

• Establishing credibility and trustworthiness.
• Establishing role and relationship to user/task.
• Being polite and positive (e.g., encouraging, motivating).
• Being expressive (e.g., exhibiting verbal cues in speech).
• Using a visual representation appropriate to content.
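
As a toy illustration (my own, not part of the EnALI paper) of how a designer might operationalize guideline 1’s “appropriate balance between on- and off-task communications,” the sketch below caps the share of off-task messages within a recent conversation window. The 20% target and the window size are hypothetical values a designer would tune to their context.

```python
OFF_TASK_TARGET = 0.2   # hypothetical ceiling on the off-task share
WINDOW = 10             # how many recent agent messages to consider

def choose_message_type(history: list[str]) -> str:
    """history holds 'on' / 'off' labels for the agent's recent messages.

    Returns which kind of message the agent should send next so that
    off-task talk stays below the target share of the recent window.
    """
    recent = history[-WINDOW:]
    off_share = recent.count("off") / len(recent) if recent else 0.0
    return "off" if off_share < OFF_TASK_TARGET else "on"

print(choose_message_type(["on"] * 9 + ["off"]))  # 10% off-task -> "off"
print(choose_message_type(["on", "off"] * 5))     # 50% off-task -> "on"
```

A fixed ratio is obviously cruder than what the guideline intends; the point is only that “balance” has to be made computable somehow, and each way of doing so embeds a design judgment.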
