Category: pedagogical agents Page 1 of 2

Playing with prompt engineering in response to Hickey & Luo prompt engineering analysis

In a worthwhile and expansive post, Daniel Hickey and Qianxu Morgan Luo explore knowledge-rich prompts and the ways in which ChatGPT and Bard differ in the results they return. I thought that

  1. their exploration was interesting,
  2. their findings were interesting (e.g., “this initial analysis suggests that ChatGPT was much more capable of interpreting and applying the article using an engineered prompt that included the abstract. But ChatGPT did not add much knowledge about cyber ranges. In contrast, Bard appeared to take away almost nothing from the abstract but knew more about cyber ranges”), and
  3. I was convinced that the prompts could have been engineered in more nuanced ways to improve the outcome.

I also recently learned that thinking of prompts in an incremental fashion may provide more nuanced results, so I set about recreating Hickey and Luo’s prompt in ChatGPT to see if I could generate a better outcome.

What did Hickey and Luo do?

Their knowledge-rich prompt was the following:

Use the following article by Sinatra, Heddy, & Lombardi (2015) to motivate engagement by cybersecurity learners who are using a digital range and use APA citations: Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a “grain-size” continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.

ChatGPT’s output was:

Research in educational psychology has consistently shown that engagement is a key factor in successful learning outcomes (Sinatra, Heddy, & Lombardi, 2015). For cybersecurity learners who are using a digital range, engagement is particularly important as it can enhance their motivation and achievement in this field. As Sinatra et al. (2015) note, engagement can be viewed from a dimensional perspective, encompassing behavioral, cognitive, emotional, and agentic dimensions. These dimensions can be used to design effective learning experiences that tap into students’ interests and motivations, such as through gamification or interactive simulations.

However, there is still a need for a concrete definition and effective measurement of engagement, especially in the context of science learning (Sinatra et al., 2015). To address this issue, educators in the field of cybersecurity can consider using a complementary approach that places engagement instrumentation on a continuum, ranging from a person-centered to a context-centered orientation (Sinatra et al., 2015). This approach can help clarify measurement issues and enable educators to tailor their teaching strategies to the individual needs and interests of their students.

In conclusion, engagement is a crucial factor in the success of cybersecurity learners using a digital range. By taking a dimensional perspective and using a continuum-based approach to measure engagement, educators can design effective learning experiences that promote student motivation and achievement in this field.

Reference:

Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges and opportunities of engaging learners in the scientific practices of explanation and argumentation. Journal of Educational Psychology, 107(2), 380–397. https://doi.org/10.1037/edu0000004

What did I do?

I divided the prompt into chunks, incrementally moving from the field (cybersecurity), to the topic within the field (cyber ranges), to the application of motivation to the topic (the Sinatra et al. citation), to probing for specifics. I also used the term “cyber range” rather than the “digital range” that Hickey and Luo used. This approach generated specific and actionable recommendations, especially compared to the generic reply above. My prompts and ChatGPT’s responses are below.

Prompt: I’d like to learn more about cybersecurity. Could you help me with this topic?

Sure! I’d be happy to help you learn more about cybersecurity. What specific areas of cybersecurity are you interested in? Some possible topics within cybersecurity include:

  1. Network security
  2. Information security
  3. Application security
  4. Cloud security
  5. Identity and access management
  6. Incident response
  7. Risk management
  8. Cybersecurity laws and regulations

Let me know what you’re interested in, and I can give you some pointers on where to start.

Prompt: What do people mean when they mention “cyber range” in the context of cybersecurity?

A “cyber range” is a simulated environment used for training and testing cybersecurity skills and technologies. It is essentially a virtual training ground where individuals and teams can practice identifying, preventing, and responding to cyber threats.

Cyber ranges can be set up to simulate a variety of scenarios, such as network intrusions, phishing attacks, malware infections, and denial-of-service attacks. They may also incorporate realistic components such as simulated user behavior, fake websites, and decoy systems.

By providing a safe and controlled environment for cybersecurity training and testing, cyber ranges can help organizations and individuals develop their skills and test their defenses without risking their actual systems or data. They can also be used to evaluate and compare different cybersecurity tools and techniques.

Some cyber ranges are open to the public, while others are only available to specific organizations or groups. Some are physical facilities that require specialized equipment and infrastructure, while others are entirely virtual and can be accessed from anywhere with an internet connection.

Overall, cyber ranges are an increasingly important tool for improving cybersecurity readiness and resilience in the face of an ever-evolving threat landscape.

Prompt: I would like to motivate engagement by cybersecurity learners who are using a cyber range. Using the ideas from Sinatra, Heddy, & Lombardi (2015) that are in quotes, write how to motivate engagement and use APA citations: “Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a ‘grain-size’ continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.”

To motivate engagement in cybersecurity learners who are using a cyber range, it is important to create a learning environment that is both challenging and supportive. According to Sinatra, Heddy, & Lombardi (2015), engagement is influenced by a variety of factors, including behavioral, cognitive, emotional, and agentic dimensions. To address these dimensions and promote engagement in the cyber range, the following strategies can be implemented:

  1. Provide hands-on learning experiences: learners should be given the opportunity to actively engage with the cyber range and practice their skills in a realistic and challenging environment. This can help to increase their sense of control and agency, which is important for promoting engagement (Sinatra, Heddy, & Lombardi, 2015).
  2. Create a supportive learning environment: learners should feel supported and encouraged throughout the learning process. Providing feedback, encouragement, and support can help to foster positive emotional and cognitive engagement (Sinatra, Heddy, & Lombardi, 2015).
  3. Use a variety of instructional strategies: learners have different learning styles and preferences, and it is important to use a variety of instructional strategies to cater to these differences. For example, using videos, interactive simulations, and quizzes can help to increase cognitive engagement (Sinatra, Heddy, & Lombardi, 2015). [Sidenote from George: This is worthy of a post around AI dispensing bad advice and perpetuating myths]
  4. Set clear learning goals: providing learners with clear learning goals and objectives can help to increase their motivation and focus. Goals should be challenging but achievable, and learners should be given regular feedback on their progress (Sinatra, Heddy, & Lombardi, 2015).

Overall, creating a supportive and challenging learning environment that promotes engagement can help to increase motivation and achievement in cybersecurity learners who are using a cyber range.

Reference: Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges of defining and measuring student engagement in science. Educational Psychologist, 50(1), 1-13. doi: 10.1080/00461520.2014.1002924

Prompt: I want to know more about #2. How do I create a supportive learning environment in a cyber range?

Creating a supportive learning environment in a cyber range involves providing learners with feedback, encouragement, and support throughout their learning process. Here are some specific strategies you can use to promote a supportive learning environment:

  1. Provide timely feedback: learners need feedback on their progress to understand what they are doing well and what they need to improve. Provide feedback as soon as possible after a learning activity and be specific about what the learner did well and what they need to work on.
  2. Encourage learners: it is important to encourage learners and recognize their efforts. Encouragement can come in many forms, such as praise, positive reinforcement, and recognition of achievements. This can help learners build confidence and feel more motivated to learn.
  3. Build a sense of community: building a sense of community among learners can help to create a supportive learning environment. Encourage learners to work together, share their experiences, and collaborate on learning activities. This can create a sense of belonging and increase engagement.
  4. Provide support resources: learners may need additional support to be successful in a cyber range. Provide resources such as online help, tutorials, and support forums to help learners overcome challenges and get the assistance they need.
  5. Use effective communication: effective communication is key to creating a supportive learning environment. Communicate clearly and frequently with learners, provide clear instructions, and be available to answer questions and provide support.

By using these strategies, you can create a supportive learning environment that promotes engagement and success in the cyber range.

Prompt: Can you give me an example of an encouraging message I can send to learners who are in a cyber range?

Sure! Here is an example of an encouraging message you can send to learners who are in a cyber range:

“Great work on completing that cyber range activity! Your hard work and dedication to learning in this field are truly commendable. Remember, every challenge you overcome is an opportunity to grow and become a better cybersecurity professional. Keep up the good work, and don’t hesitate to reach out if you need any support or assistance. You’ve got this!”

This message acknowledges the learner’s efforts, encourages them to continue their learning journey, and offers support if needed.
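The incremental flow above can be sketched programmatically. The following is a minimal sketch, assuming a chat interface that accepts a running list of role-tagged messages so that later turns (e.g., “I want to know more about #2”) can refer back to earlier replies; the `ask` helper, its stubbed reply, and the truncated `ABSTRACT` string are hypothetical placeholders rather than any specific vendor’s API.

```python
# Truncated stand-in for the Sinatra, Heddy, & Lombardi (2015) abstract.
ABSTRACT = "Engagement is one of the hottest research topics..."

def ask(history, prompt):
    """Append a user turn, obtain a (stubbed) model reply, and record it."""
    history.append({"role": "user", "content": prompt})
    reply = f"[model reply to: {prompt[:40]}...]"  # a real API call would go here
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
# 1. Establish the field.
ask(history, "I'd like to learn more about cybersecurity. Could you help me with this topic?")
# 2. Narrow to the topic within the field.
ask(history, 'What do people mean when they mention "cyber range" in the context of cybersecurity?')
# 3. Apply the cited ideas to the topic.
ask(history, "I would like to motivate engagement by cybersecurity learners who are using "
             "a cyber range. Using the ideas from Sinatra, Heddy, & Lombardi (2015) that "
             f'are in quotes, write how to motivate engagement and use APA citations: "{ABSTRACT}"')
# 4. Probe for specifics.
ask(history, "I want to know more about #2. How do I create a supportive learning "
             "environment in a cyber range?")
# The history now holds four user turns and four assistant turns, in order.
```

The point of the structure is that each new prompt is interpreted against the accumulated conversation, which is what lets the final, narrow probes build on the field- and topic-level context established earlier.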

On Teacherbot rights

Pause for a few more minutes and imagine that future in which technologies teach humans. Call them robots, bots, chatbots, algorithms, teaching machines, tutoring software, agents, or something else. Regardless, consider them technologies that teach. Now consider their rights.

Assuming that teaching bots can exhibit (algorithmic) intelligence, can behave with some sort of (algorithmic) morality, can learn, can plan their interactions with students and make choices about them, and overall behave somewhat independently… what rights do they have, or should they have, as non-human entities, as teachers?

Consider this scenario: A teaching bot teaches independently in an online course. It (S/he?) develops a novel pedagogical approach wherein student test scores are maximized for some, but not all, students. University administrators, in collaboration with an edtech company, learn of this and would like to intervene to ensure that every student is served in an equitable manner. They are considering refining the underlying code that runs the bot. If unsuccessful, they are considering replacing the bot with a new one.

What are the bot’s rights? Does it have the right to protest this change? Does it have the right to its life? Does it have the rights that all other workers have?

 

Followup: Some background reading on ethical principles for robots.

Learning Design at Pearson

Last week, a reporter from EdSurge reached out to me to shed some light on what Pearson calls their Learning Design Principles. The EdSurge article is here, but below is a more detailed rough draft of the points I shared. I am posting them here to give a fuller picture of some of my thoughts.

  1. Nothing proprietary (yet, perhaps). I saw a number of sources note that Pearson released their proprietary learning design principles. There’s not much that is proprietary in the principles: all of these ideas are well-documented in the educational technology literature found in cognitive psychology, the learning sciences, instructional design, and education.
  2. It’s good to see that Pearson is using findings from the education literature to guide its design and development. Some of these principles should be standard practice. If you are creating educational technology products without considering concepts like instructional alignment, feedback and scaffolding, authentic learning, student-centered learning environments, and inquiry-based learning, you are likely creating more educational harm than good. The point is that using research to guide educational technology should be applauded and emulated. More educational technology companies should be using research to inform their designs and product iterations.
  3. BUT, since around 2011, the educational technology industry has promoted the narrative that education has not changed since the dawn of time. With a few exceptions, the industry has ignored the history, theory, and research of the academic fields associated with improving education with technology. The industry has ignored this at its own peril, because we have a decent – not perfect, but decent – understanding of how people learn and of how we can help improve the ways that people learn. Yet the industry has developed products and services starting from scratch, making the same mistakes that others have made in the past, while claiming that their products and services will disrupt education.
  4. Not all of the items released are principles. For example, “pedagogical agents” is on the list, but that’s not a principle. Having studied the implementation of pedagogical agents for more than 7 years, it’s clear that what Pearson is attempting to do is figure out how to better design pedagogical agents for learning. Forgive me while I link to some PDFs of my past work here, but: should an agent’s representation match the content area that it is supporting (should a doctor look like a doctor, or could she have a blue mohawk?)? Table 1 in this paper provides more principles for designing pedagogical agents (e.g., agents should establish their role so that learners have a clear anticipation of what the agent can and cannot do: does the agent purport to know everything, or is the agent intended to ask questions but provide no answers?).
  5. As you can tell from the above, I firmly believe that industry needs research/researchers in developing, evaluating, and refining innovations.

But more importantly, happy, merry, just, and peaceful holidays to everyone!

MOOCs, automation, artificial intelligence seminar

I will be visiting my colleagues at the University of Edinburgh in mid-June to give a seminar on MOOCs, automation, artificial intelligence and pedagogical agents. This is a free event organized by the Moray House School of Education at the U of Edinburgh and supported by the Digital Cultures and Education research group and DigitalHSS. Please feel free to join us face-to-face or online (Date: 18 June 2014; Time: 1-3pm) by registering here.

This seminar will bring together some of my current and past research. A lot of my work in the past examined learners’ experiences with conversational and (semi)intelligent agents. In that research, we discovered that the experience of interacting with intelligent technologies was engrossing (pdf). Yet, learners often verbally abused the pedagogical agents (pdf). We also discovered that appearance (pdf) may be a significant mediating factor in learning. Importantly, this research indicated that “learners both humanized the agents and expected them to abide by social norms, but also identified the agents as programmed tools, resisting and rejecting their lifelike behaviors.”

A lot of my current work examines experiences with open online courses and online social networks, but what exactly do pedagogical agents and MOOCs have to do with each other? Ideas associated with Artificial Intelligence are present both in the emergence of xMOOCs (EdX, Udacity, and Coursera emanated from AI labs) and in certain practices associated with them – e.g., see Balfour (2013) on automated essay scoring. Audrey Watters has highlighted these issues in the past. While I haven’t yet seen discussions on the integration of lifelike characters and pedagogical agents in MOOCs, the use of lifelike robots for education and the role of the faculty member in MOOCs are areas of debate and investigation in both the popular press and the scholarly literature. The quest to automate instruction has a long history and lives within the sociocultural context of particular time periods. For example, the Second World War found US soldiers and civilians unprepared for the war effort, and audiovisual devices were used extensively to train individuals efficiently at a massive scale. Nowadays, similar efforts at achieving scale and efficiency reflect the problems, issues, and cultural beliefs of our time.

I’m working on my presentation, but if you have any questions or thoughts to share, I’d love to hear them!

 

What happens when pedagogical agents are off-task?

Social and non-task interactions are often recognized as a valuable part of the learning experience. Talk about football, community events, or local news, for example, may enable the development of positive instructor-learner relationships and a relaxed learning atmosphere. Non-task aspects of learning, however, have received limited attention in the education literature. Morgan-Fleming, Burley, and Price (2003) argue that this is the result of an implicit assumption that no pedagogical benefits are derived from non-task behavior – hence the reduction of off-task activities in schools, such as recess time. This issue has received limited attention in the pedagogical agent literature as well. What happens when a virtual character designed to help a student learn about a topic introduces off-task comments to a lesson? What happens when a virtual instructor mentions current events? How do learners respond?

These are the issues that I am investigating in a paper published in the current issue of the journal Computers in Human Behavior, as part of my research on the experiences of students who interact with virtual instructors and pedagogical agents. The abstract, citation, and link to the full paper appear below:

Abstract
In this paper, I investigate the impact of non-task pedagogical agent behavior on learning outcomes, perceptions of agents’ interaction ability, and learner experiences. Quasi-experimental results indicate that while the addition of non-task comments to an on-task tutorial may increase learning and perceptions of the agent’s ability to interact with learners, this increase is not statistically significant. Further addition of non-task comments, however, harms learning and perceptions of the agent’s ability to interact with learners in statistically significant ways. Qualitative results reveal that on-task interactions are efficient but impersonal, while non-task interactions are memorable but distracting. Implications include the potential for non-task interactions to create an uncanny valley effect for agent behavior.

Veletsianos, G. (2012). How do Learners Respond to Pedagogical Agents that Deliver Social-oriented Non-task Messages? Impact on Student Learning, Perceptions, and Experiences. Computers in Human Behavior, 28(1), 275-283.

Design Challenges in Pedagogical Agent Implementations

When creating pedagogical agents for use in online learning environments, designers face numerous challenges. These range from technological (e.g., How do I ensure proper lip-synching when speech is generated in real-time?) to pedagogical (e.g., How do I ensure that the agent provides scaffolding that is appropriate to the students’ needs at a given point in time?) to social (e.g., How can I develop an agent that is sensitive to students’ varying social needs?). While designers deal with these questions frequently and decide on what we deem to be the best approaches to tackle them, we don’t often share our design thinking with others.

My colleagues and I (Gulz, Haake, Silvervarg, Sjoden, Veletsianos) have just published a book chapter that deals with this issue. In the chapter, we discuss the design challenges we faced when developing a pedagogical agent, the steps we took, and the decisions we made to tackle those challenges. The challenges we discuss are the following:

  1. how do we manage learners’ expectations of the agent’s knowledge and social profile,
  2. how do we deal with learners who engage in off-task conversations with an agent, and
  3. how do we manage abusive comments directed to the agent?

These issues were observed in studies that Agneta Gulz and I have independently conducted in the past, and sharing our design thinking with the community sounded like a great idea – hence the publication. A copy of this publication (1.7MB pdf) is provided below:

Gulz, A., Haake, M., Silvervarg, A., Sjoden, B., & Veletsianos, G. (2011). Building a Social Conversational Pedagogical Agent: Design Challenges and Methodological approaches. In Perez-Marin, D., & I. Pascual-Nieto (Eds.), Conversational Agents and Natural Language Interaction: Techniques and Effective Practices (pp. 128-155). IGI Global.

As always, I’d love to hear your input!

Pedagogical Agent Appearance & Stereotypes

I have a new publication out that deals with the degree to which students stereotype virtual characters (short answer: yes, they do, and this behavior influences learning processes – but sometimes they resist. Or, they say that they resist. It’s a bit more complex than that, but the results present an interesting thinking exercise). This one has been “in the works” for more than a couple of years, but it has recently been updated because interest in the topic seems to be growing.

Veletsianos, G. (2010). Contextually relevant pedagogical agents: Visual appearance, stereotypes, and first impressions and their impact on learning. Computers & Education, 55(2), 576-585. [pre-print PDF]

Abstract: Humans draw on their stereotypic beliefs to make assumptions about others. Even though prior research has shown that individuals respond socially to media, there is little evidence with regard to learners stereotyping and categorizing pedagogical agents. This study investigated whether learners stereotype a pedagogical agent as being knowledgeable or not knowledgeable and how this perception influenced learning. Participants were assigned to four experimental conditions differing by agent (scientist or artist) and tutorial type (nanotechnology or punk rock). Quantitative analyses indicated that agents were stereotyped depending on their image and the academic domain within which they functioned. Regardless of tutorial, participants assigned to the artist agent recalled more information than participants assigned to the scientist agent. Learning differences between the groups varied according to whether agent appearance fit the content area under investigation. Qualitative results indicated learners’ stereotypic expectations as well as their unwillingness to draw conclusions based on visual appearance.

