Category: pedagogical agents

High school senior: Why aren’t more teachers embracing AI?

One of my joys in life is reading student op-eds. Here is a wonderful example, by a high school senior who asks: why aren’t more teachers using AI?

The student describes how they use it, how they find it beneficial, and how their teachers are suspicious of it.

I believe that the student, and many others, parents included, are truly curious. In other words, I don’t think the question is rhetorical. Why not use a technology which seems to offer so many benefits? So, I thought I’d take a few moments to answer it. A point of clarification before we turn to a list of possible reasons:

  • It’s not quite clear how prevalent AI use is in K-12. In the US, one survey suggests that around 10% of teachers use it, while another puts that number at ~50%. Even with the higher number, we need to clarify what “AI use” means, because teachers’ AI use might be invisible to students (e.g., using it to create or refine rubrics, produce examples, etc.). In other words, teachers might be using AI, just not in the pedagogical ways described in the op-ed.

Here’s a list of possible reasons:

  • Lack of familiarity and knowledge about how to use AI in the classroom.
  • Concerns about AI (e.g., about its biases, ethics, and implications for equity and access).
  • Lack of support and guidance (e.g., at the administrator or school district level) as to whether and how teachers ought to use it.
  • For decades, edtech has promised to revolutionize education, and AI comes with similar promises. Teachers are weary of promises that go unmet.
  • Inconsistencies between the technology and the school/testing environments in which teachers operate.
  • It takes time for technology to spread into education settings, and for good reasons (e.g., devising ways to integrate a technology with an uncertain future takes more time and effort than people realize, and, if one thing is certain, it is that teachers lack time).

There are likely other reasons, and these can be grouped into individual reasons (e.g., why aren’t individual teachers using AI?), community and organizational reasons (e.g., why aren’t schools supporting teachers in using AI?), and societal reasons (e.g., why did our society structure schools in ways that limit rapid adoption of AI?).

Importantly: A lot of it relates to context, such as the content area or the particular school. And so, if you’re interested in why your particular teachers at your particular school in your particular part of the country aren’t using a technology (or even a pedagogical strategy), it’s important to identify local reasons for use/non-use.

And to be clear: This isn’t to say that teachers should or shouldn’t use a particular technology in education.

Metaphors of generative AI in education

It’s been interesting to see what kinds of conceptual metaphors have been used to describe generative AI. Conceptual metaphors are ways to talk about something by relating it to something else. The ways we choose to speak about generative AI in education matter, because those choices shape experiences, expectations, and even outcomes. Early pedagogical agent research, for example, identified different roles for agents, often positioning them as experts, mentors, or motivators. Even the term pedagogical agents carries its own connotations about the kinds of work that such a system will engage in (pedagogical), compared to, say, conversational agents or intelligent agents.

Below is a small sample. What other metaphors have you seen?

Update: See Dave Cormier’s related post on prompt engineering, or what to call the act of talking to algorithms.

Playing with prompt engineering in response to Hickey & Luo’s prompt engineering analysis

In a worthwhile and expansive post, Daniel Hickey and Qianxu Morgan Luo explore knowledge-rich prompts and the ways in which ChatGPT and Bard differ in the results they return. I thought that

  1. their exploration was interesting,
  2. their findings were interesting (e.g., “this initial analysis suggests that ChatGPT was much more capable of interpreting and applying the article using an engineered prompt that included the abstract. But ChatGPT did not add much knowledge about cyber ranges. In contrast, Bard appeared to take away almost nothing from the abstract but knew more about cyber ranges”),
  3. but I was convinced that the prompts could have been engineered in more nuanced ways to improve the outcome.

I also recently learned that thinking of prompts in an incremental fashion may produce more nuanced results, so I set out to recreate Hickey and Luo’s prompt in ChatGPT to see whether I could generate a better outcome.

What did Hickey and Luo do?

Their knowledge-rich prompt was the following:

Use the following article by Sinatra, Heddy, & Lombardi (2015) to motivate engagement by cybersecurity learners who are using a digital range and use APA citations: Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a “grain-size” continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.

ChatGPT’s output was:

Research in educational psychology has consistently shown that engagement is a key factor in successful learning outcomes (Sinatra, Heddy, & Lombardi, 2015). For cybersecurity learners who are using a digital range, engagement is particularly important as it can enhance their motivation and achievement in this field. As Sinatra et al. (2015) note, engagement can be viewed from a dimensional perspective, encompassing behavioral, cognitive, emotional, and agentic dimensions. These dimensions can be used to design effective learning experiences that tap into students’ interests and motivations, such as through gamification or interactive simulations.

However, there is still a need for a concrete definition and effective measurement of engagement, especially in the context of science learning (Sinatra et al., 2015). To address this issue, educators in the field of cybersecurity can consider using a complementary approach that places engagement instrumentation on a continuum, ranging from a person-centered to a context-centered orientation (Sinatra et al., 2015). This approach can help clarify measurement issues and enable educators to tailor their teaching strategies to the individual needs and interests of their students.

In conclusion, engagement is a crucial factor in the success of cybersecurity learners using a digital range. By taking a dimensional perspective and using a continuum-based approach to measure engagement, educators can design effective learning experiences that promote student motivation and achievement in this field.

Reference:

Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges and opportunities of engaging learners in the scientific practices of explanation and argumentation. Journal of Educational Psychology, 107(2), 380–397. https://doi.org/10.1037/edu0000004

What did I do?

I divided the prompt into chunks, incrementally moving from the field (cybersecurity), to the topic within the field (cyber ranges), to the application of motivation to the topic (the Sinatra et al. citation), to probing for specifics. I also used the term “cyber range” rather than “digital range,” which Hickey and Luo used. This approach generated specific and actionable recommendations, especially compared to the generic reply above. My prompts and ChatGPT’s responses are below.
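As an aside, the same incremental approach can be scripted rather than typed into the ChatGPT interface. Below is a minimal sketch of the idea, assuming the openai Python package (v1 or later) and an API key in the OPENAI_API_KEY environment variable; the model name is illustrative, and since I used the web interface for the exchange that follows, this is a sketch of the technique rather than what I actually ran.

    # Incremental prompting: keep one running conversation so that each prompt
    # builds on the model's earlier replies, rather than sending one monolithic prompt.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    messages = []      # the running conversation history

    def ask(prompt: str) -> str:
        """Send one prompt within the ongoing conversation and return the reply."""
        messages.append({"role": "user", "content": prompt})
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # illustrative; any chat model would do
            messages=messages,
        )
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        return reply

    # Field -> topic -> application -> specifics, mirroring the sequence below.
    ask("I'd like to learn more about cybersecurity. Could you help me with this topic?")
    ask("What do people mean when they mention 'cyber range' in the context of cybersecurity?")
    ask("I would like to motivate engagement by cybersecurity learners who are "
        "using a cyber range. ...")  # abridged; the full prompt quotes the Sinatra et al. abstract
    print(ask("I want to know more about #2. How do I create a supportive "
              "learning environment in a cyber range?"))

The key design choice is that the conversation history accumulates: by the time the Sinatra et al. prompt arrives, the model has already committed to a definition of a cyber range, which is exactly what the incremental approach is meant to exploit.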

Prompt: I’d like to learn more about cybersecurity. Could you help me with this topic?

Sure! I’d be happy to help you learn more about cybersecurity. What specific areas of cybersecurity are you interested in? Some possible topics within cybersecurity include:

  1. Network security
  2. Information security
  3. Application security
  4. Cloud security
  5. Identity and access management
  6. Incident response
  7. Risk management
  8. Cybersecurity laws and regulations

Let me know what you’re interested in, and I can give you some pointers on where to start.

Prompt: What do people mean when they mention “cyber range” in the context of cybersecurity?

A “cyber range” is a simulated environment used for training and testing cybersecurity skills and technologies. It is essentially a virtual training ground where individuals and teams can practice identifying, preventing, and responding to cyber threats.

Cyber ranges can be set up to simulate a variety of scenarios, such as network intrusions, phishing attacks, malware infections, and denial-of-service attacks. They may also incorporate realistic components such as simulated user behavior, fake websites, and decoy systems.

By providing a safe and controlled environment for cybersecurity training and testing, cyber ranges can help organizations and individuals develop their skills and test their defenses without risking their actual systems or data. They can also be used to evaluate and compare different cybersecurity tools and techniques.

Some cyber ranges are open to the public, while others are only available to specific organizations or groups. Some are physical facilities that require specialized equipment and infrastructure, while others are entirely virtual and can be accessed from anywhere with an internet connection.

Overall, cyber ranges are an increasingly important tool for improving cybersecurity readiness and resilience in the face of an ever-evolving threat landscape.

Prompt: I would like to motivate engagement by cybersecurity learners who are using a cyber range. Using the ideas from Sinatra, Heddy, & Lombardi (2015) that are in quotes, write how to motivate engagement and use APA citations: “Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a ‘grain-size’ continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.”

To motivate engagement in cybersecurity learners who are using a cyber range, it is important to create a learning environment that is both challenging and supportive. According to Sinatra, Heddy, & Lombardi (2015), engagement is influenced by a variety of factors, including behavioral, cognitive, emotional, and agentic dimensions. To address these dimensions and promote engagement in the cyber range, the following strategies can be implemented:

  1. Provide hands-on learning experiences: learners should be given the opportunity to actively engage with the cyber range and practice their skills in a realistic and challenging environment. This can help to increase their sense of control and agency, which is important for promoting engagement (Sinatra, Heddy, & Lombardi, 2015).
  2. Create a supportive learning environment: learners should feel supported and encouraged throughout the learning process. Providing feedback, encouragement, and support can help to foster positive emotional and cognitive engagement (Sinatra, Heddy, & Lombardi, 2015).
  3. Use a variety of instructional strategies: learners have different learning styles and preferences, and it is important to use a variety of instructional strategies to cater to these differences. For example, using videos, interactive simulations, and quizzes can help to increase cognitive engagement (Sinatra, Heddy, & Lombardi, 2015). [Sidenote from George: This is worthy of a post around AI dispensing bad advice and perpetuating myths]
  4. Set clear learning goals: providing learners with clear learning goals and objectives can help to increase their motivation and focus. Goals should be challenging but achievable, and learners should be given regular feedback on their progress (Sinatra, Heddy, & Lombardi, 2015).

Overall, creating a supportive and challenging learning environment that promotes engagement can help to increase motivation and achievement in cybersecurity learners who are using a cyber range.

Reference: Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges of defining and measuring student engagement in science. Educational Psychologist, 50(1), 1-13. doi: 10.1080/00461520.2014.1002924

Prompt: I want to know more about #2. How do I create a supportive learning environment in a cyber range?

Creating a supportive learning environment in a cyber range involves providing learners with feedback, encouragement, and support throughout their learning process. Here are some specific strategies you can use to promote a supportive learning environment:

  1. Provide timely feedback: learners need feedback on their progress to understand what they are doing well and what they need to improve. Provide feedback as soon as possible after a learning activity and be specific about what the learner did well and what they need to work on.
  2. Encourage learners: it is important to encourage learners and recognize their efforts. Encouragement can come in many forms, such as praise, positive reinforcement, and recognition of achievements. This can help learners build confidence and feel more motivated to learn.
  3. Build a sense of community: building a sense of community among learners can help to create a supportive learning environment. Encourage learners to work together, share their experiences, and collaborate on learning activities. This can create a sense of belonging and increase engagement.
  4. Provide support resources: learners may need additional support to be successful in a cyber range. Provide resources such as online help, tutorials, and support forums to help learners overcome challenges and get the assistance they need.
  5. Use effective communication: effective communication is key to creating a supportive learning environment. Communicate clearly and frequently with learners, provide clear instructions, and be available to answer questions and provide support.

By using these strategies, you can create a supportive learning environment that promotes engagement and success in the cyber range.

Prompt: Can you give me an example of an encouraging message I can send to learners who are in a cyber range?

Sure! Here is an example of an encouraging message you can send to learners who are in a cyber range:

“Great work on completing that cyber range activity! Your hard work and dedication to learning in this field are truly commendable. Remember, every challenge you overcome is an opportunity to grow and become a better cybersecurity professional. Keep up the good work, and don’t hesitate to reach out if you need any support or assistance. You’ve got this!”

This message acknowledges the learner’s efforts, encourages them to continue their learning journey, and offers support if needed.

On Teacherbot rights

Pause for a few minutes and imagine a future in which technologies teach humans. Call them robots, bots, chatbots, algorithms, teaching machines, tutoring software, agents, or something else. Regardless, consider them technologies that teach. Now consider their rights.

Assuming that teaching bots can exhibit (algorithmic) intelligence, can behave with some sort of (algorithmic) morality, can learn, can plan their interactions with students and make choices about them, and overall behave somewhat independently… what rights do they have, or should they have, as non-human entities, as teachers?

Consider this scenario: A teaching bot teaches independently in an online course. It (S/he?) develops a novel pedagogical approach wherein student test scores are maximized for some, but not all, students. University administrators, in collaboration with an edtech company, learn of this and would like to intervene to ensure that every student is served in an equitable manner. They are considering refining the underlying code that runs the bot. If unsuccessful, they are considering replacing the bot with a new one.

What are the bot’s rights? Does it have the right to protest this change? Does it have the right to its life? Does it have the rights that all other workers have?

 

Follow-up: Some background reading on ethical principles for robots.

Learning Design at Pearson

Last week, a reporter from EdSurge reached out and asked me to shed some light on what Pearson calls their Learning Design Principles. The EdSurge article is here, but below is a more detailed rough draft of the points I shared. I am posting them here to give a fuller picture of some of my thoughts.

  1. Nothing proprietary (yet, perhaps). I saw a number of sources note that Pearson released their proprietary learning design principles. There’s not much that is proprietary in the principles. All of these ideas are well documented in the educational technology literature found in cognitive psychology, the learning sciences, instructional design, and education.
  2. It’s good to see that Pearson is using findings from the education literature to guide its design and development. Some of these principles should be standard practice. If you are creating educational technology products without considering concepts like instructional alignment, feedback, scaffolding, authentic learning, student-centered learning environments, and inquiry-based learning, you are likely creating more educational harm than good. The point is that using research to guide educational technology should be applauded and emulated. More educational technology companies should be using research to inform their designs and product iterations.
  3. BUT, since around 2011, the educational technology industry has promoted the narrative that education has not changed since the dawn of time. With a few exceptions, the industry has ignored the history, theory, and research of the academic fields associated with improving education with technology. The industry has ignored this at its own peril, because we have a decent – not perfect, but decent – understanding of how people learn and how we can help improve the ways that people learn. Instead, the industry has developed products and services from scratch, making the same mistakes that others have made in the past, while claiming that its products and services will disrupt education.
  4. Not all of the items released are principles. For example, “pedagogical agents” is on the list, but that’s not a principle. Having studied the implementation of pedagogical agents for more than 7 years, it’s clear to me that what Pearson is attempting to do is figure out how to better design pedagogical agents for learning. Forgive me while I link to some PDFs of my past work here, but: should an agent’s representation match the content area that it is supporting (should a doctor look like a doctor, or should she have a blue mohawk?)? Table 1 in this paper provides more on principles for designing pedagogical agents (e.g., agents should establish their role so that learners have a clear anticipation of what the agent can and cannot do: does the agent purport to know everything, or is the agent intended to ask questions but provide no answers?).
  5. As you can tell from the above, I firmly believe that industry needs research/researchers in developing, evaluating, and refining innovations.

But more importantly, happy, merry, just, and peaceful holidays to everyone!

MOOCs, automation, artificial intelligence seminar

I will be visiting my colleagues at the University of Edinburgh in mid-June to give a seminar on MOOCs, automation, artificial intelligence and pedagogical agents. This is a free event organized by the Moray House School of Education at the U of Edinburgh and supported by the Digital Cultures and Education research group and DigitalHSS. Please feel free to join us face-to-face or online (Date: 18 June 2014; Time: 1-3pm) by registering here.

This seminar will bring together some of my current and past research. A lot of my past work examined learners’ experiences with conversational and (semi)intelligent agents. In that research, we discovered that the experience of interacting with intelligent technologies was engrossing (pdf). Yet learners often verbally abused the pedagogical agents (pdf). We also discovered that appearance (pdf) may be a significant mediating factor in learning. Importantly, this research indicated that “learners both humanized the agents and expected them to abide by social norms, but also identified the agents as programmed tools, resisting and rejecting their lifelike behaviors.”

A lot of my current work examines experiences with open online courses and online social networks, but what exactly do pedagogical agents and MOOCs have to do with each other? Ideas associated with Artificial Intelligence are present both in the emergence of xMOOCs (EdX, Udacity, and Coursera emanated from AI labs) and in certain practices associated with them – e.g., see Balfour (2013) on automated essay scoring. Audrey Watters has highlighted these issues in the past. While I haven’t yet seen discussions on the integration of lifelike characters and pedagogical agents in MOOCs, the use of lifelike robots for education and the role of the faculty member in MOOCs are areas of debate and investigation in both the popular press and the scholarly literature. The quest to automate instruction has a long history and lives within the sociocultural context of particular time periods. For example, the Second World War found US soldiers and civilians unprepared for the war effort, and audiovisual devices were used extensively to train individuals efficiently at a massive scale. Nowadays, similar efforts at achieving scale and efficiency reflect the problems, issues, and cultural beliefs of our time.

I’m working on my presentation, but if you have any questions or thoughts to share, I’d love to hear them!

 

What happens when pedagogical agents are off-task?

Social and non-task interactions are often recognized as a valuable part of the learning experience. Talk about football, community events, or local news, for example, may enable the development of positive instructor-learner relationships and a relaxed learning atmosphere. Non-task aspects of learning, however, have received limited attention in the education literature. Morgan-Fleming, Burley, and Price (2003) argue that this is the result of an implicit assumption that no pedagogical benefits are derived from non-task behavior, hence the reduction of off-task activities in schools, such as recess time. This issue has received limited attention in the pedagogical agent literature as well. What happens when a virtual character designed to help a student learn about a topic introduces off-task comments into a lesson? What happens when a virtual instructor mentions current events? How do learners respond?

These are the issues that I am investigating in a paper published in the current issue of the journal Computers in Human Behavior, as part of my research on the experiences of students who interact with virtual instructors and pedagogical agents. The abstract, citation, and link to the full paper appear below:

Abstract
In this paper, I investigate the impact of non-task pedagogical agent behavior on learning outcomes, perceptions of agents’ interaction ability, and learner experiences. Quasi-experimental results indicate that while the addition of non-task comments to an on-task tutorial may increase learning and perceptions of the agent’s ability to interact with learners, this increase is not statistically significant. Further addition of non-task comments, however, harms learning and perceptions of the agent’s ability to interact with learners in statistically significant ways. Qualitative results reveal that on-task interactions are efficient but impersonal, while non-task interactions were memorable but distracting. Implications include the potential for non-task interactions to create an uncanny valley effect for agent behavior.

Veletsianos, G. (2012). How do Learners Respond to Pedagogical Agents that Deliver Social-oriented Non-task Messages? Impact on Student Learning, Perceptions, and Experiences. Computers in Human Behavior, 28(1), 275-283.
