BC’s guidelines for digital learning strategy: inadvertent effects?

As you may or may not be aware, BC has developed a digital learning strategy. Here’s an earlier draft, and some earlier thoughts. With its release coming soon, I thought I would post a final set of thoughts that apply not only to this policy, but to other policies as well. I am posting this now because I saw that the University of California recently closed a loophole that allowed learners to complete their degrees fully online. That decision reminded me once again that decisions which are laser-focused on modality miss the bigger picture, which in turn reminded me of the BC digital learning policy.

My reading and analysis of the guidelines coming to BC is that they raise quality standards for online and hybrid learning. This is a good thing. But they are silent on quality standards for in-person learning, and might therefore have inadvertent effects.

Because of the focus on a specific modality, the strategy creates a de facto standard for digital learning courses/programs/efforts that is higher than the one for in-person courses/programs/efforts. While the document focuses on guidelines for “technology-enhanced learning,” it’s not explicit that these guidelines ought to apply to ALL courses.

In other words, the policy presumes that guidelines are unnecessary for in-person courses, or at the very least outside the purview of the policy. As one example, note how the following important guideline focuses specifically on the digital but not the in-person context:

“Digital PSE in BC must achieve true, meaningful, and lasting reconciliation with Indigenous Peoples. It should advance and implement decolonial practices, promote Indigenization, and recognize Indigenous knowledge, pedagogies, and learning. To achieve these goals, technology-enhanced learning should…”

What I’d rather see is this:

PSE in BC must achieve true, meaningful, and lasting reconciliation with Indigenous Peoples. It should advance and implement decolonial practices, promote Indigenization, and recognize Indigenous knowledge, pedagogies, and learning. To achieve these goals, learning should…

Or this:

Digital PSE in BC must achieve true, meaningful, and lasting reconciliation with Indigenous Peoples. It should advance and implement decolonial practices, promote Indigenization, and recognize Indigenous knowledge, pedagogies, and learning. Technology-enhanced learning presents risks and opportunities towards these goals, and in this context, institutions should…

Setting a higher standard for digital learning compared to in-person learning is a problem for two reasons.

First, different levels of standard produce the very real possibility of stifling innovation in digital learning and prioritizing in-person learning. Institutions that are considering digital learning will need to account for these guidelines, especially if they need to demonstrate how they are meeting them in QA and new degree approval processes. Yet it’s unclear whether in-person offerings need to account for them. By raising the bar for one kind of approach, we might inadvertently be steering institutions toward the alternative modality.

Second, a different level of standard will impact the sector unevenly, disproportionately affecting institutions and disciplines that are predominantly digital/online. The impacts that the strategy will have on in-person trades programs are limited compared to the impacts it will have on education programs, which are typically blended. The impacts it will have on smaller institutions that are exploring expanding their digital learning offerings are greater than the impacts it will have on predominantly in-person institutions.

What is a possible solution?

This is a difficult one. One approach might be to clarify and be explicit that these guidelines apply to all courses/offerings regardless of modality. Designing education with ethics, equity, and decolonization in mind ought not to be limited by whether a course takes place in person, online, or in blended fashion. Further, any change in QA and course approval policies at the Ministry level will need to apply to all programs – not just “digital” ones.

Playing with prompt engineering in response to Hickey & Luo’s prompt engineering analysis

In a worthwhile and expansive post, Daniel Hickey and Qianxu Morgan Luo explore knowledge-rich prompts and the ways in which ChatGPT and Bard differ in the results they return. I thought that

  1. their exploration was interesting,
  2. their findings were interesting (e.g., “this initial analysis suggests that ChatGPT was much more capable of interpreting and applying the article using an engineered prompt that included the abstract. But ChatGPT did not add much knowledge about cyber ranges. In contrast, Bard appeared to take away almost nothing from the abstract but knew more about cyber ranges”),
  3. but I was convinced that the prompts could have been engineered in more nuanced ways to improve the outcome.

I also recently learned that thinking of prompts in an incremental fashion may provide more nuanced results, so I set about recreating Hickey and Luo’s prompt in ChatGPT to see if I could generate a better outcome.

What did Hickey and Luo do?

Their knowledge-rich prompt was the following:

Use the following article by Sinatra, Heddy, & Lombardi (2015) to motivate engagement by cybersecurity learners who are using a digital range and use APA citations: Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a “grain-size” continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.

ChatGPT’s output was:

Research in educational psychology has consistently shown that engagement is a key factor in successful learning outcomes (Sinatra, Heddy, & Lombardi, 2015). For cybersecurity learners who are using a digital range, engagement is particularly important as it can enhance their motivation and achievement in this field. As Sinatra et al. (2015) note, engagement can be viewed from a dimensional perspective, encompassing behavioral, cognitive, emotional, and agentic dimensions. These dimensions can be used to design effective learning experiences that tap into students’ interests and motivations, such as through gamification or interactive simulations.

However, there is still a need for a concrete definition and effective measurement of engagement, especially in the context of science learning (Sinatra et al., 2015). To address this issue, educators in the field of cybersecurity can consider using a complementary approach that places engagement instrumentation on a continuum, ranging from a person-centered to a context-centered orientation (Sinatra et al., 2015). This approach can help clarify measurement issues and enable educators to tailor their teaching strategies to the individual needs and interests of their students.

In conclusion, engagement is a crucial factor in the success of cybersecurity learners using a digital range. By taking a dimensional perspective and using a continuum-based approach to measure engagement, educators can design effective learning experiences that promote student motivation and achievement in this field.

Reference:

Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges and opportunities of engaging learners in the scientific practices of explanation and argumentation. Journal of Educational Psychology, 107(2), 380–397. https://doi.org/10.1037/edu0000004

What did I do?

I divided the prompt into chunks, incrementally moving from the field (cybersecurity), to the topic within the field (cyber ranges), to the application of motivation to the topic (the Sinatra et al. citation), to probing for specifics. I also used the term “cyber range” rather than “digital range,” which Hickey and Luo used. This approach generated specific and actionable recommendations, especially compared to the generic reply above. My prompts and ChatGPT’s responses are below, followed by a short code sketch of the same incremental approach.

Prompt: I’d like to learn more about cybersecurity. Could you help me with this topic?

Sure! I’d be happy to help you learn more about cybersecurity. What specific areas of cybersecurity are you interested in? Some possible topics within cybersecurity include:

  1. Network security
  2. Information security
  3. Application security
  4. Cloud security
  5. Identity and access management
  6. Incident response
  7. Risk management
  8. Cybersecurity laws and regulations

Let me know what you’re interested in, and I can give you some pointers on where to start.

Prompt: What do people mean when they mention “cyber range” in the context of cybersecurity?

A “cyber range” is a simulated environment used for training and testing cybersecurity skills and technologies. It is essentially a virtual training ground where individuals and teams can practice identifying, preventing, and responding to cyber threats.

Cyber ranges can be set up to simulate a variety of scenarios, such as network intrusions, phishing attacks, malware infections, and denial-of-service attacks. They may also incorporate realistic components such as simulated user behavior, fake websites, and decoy systems.

By providing a safe and controlled environment for cybersecurity training and testing, cyber ranges can help organizations and individuals develop their skills and test their defenses without risking their actual systems or data. They can also be used to evaluate and compare different cybersecurity tools and techniques.

Some cyber ranges are open to the public, while others are only available to specific organizations or groups. Some are physical facilities that require specialized equipment and infrastructure, while others are entirely virtual and can be accessed from anywhere with an internet connection.

Overall, cyber ranges are an increasingly important tool for improving cybersecurity readiness and resilience in the face of an ever-evolving threat landscape.

Prompt: I would like to motivate engagement by cybersecurity learners who are using a cyber range. Using the ideas from Sinatra, Heddy, & Lombardi (2015) that are in quotes, write how to motivate engagement and use APA citations: “Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a ‘grain-size’ continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.”

To motivate engagement in cybersecurity learners who are using a cyber range, it is important to create a learning environment that is both challenging and supportive. According to Sinatra, Heddy, & Lombardi (2015), engagement is influenced by a variety of factors, including behavioral, cognitive, emotional, and agentic dimensions. To address these dimensions and promote engagement in the cyber range, the following strategies can be implemented:

  1. Provide hands-on learning experiences: learners should be given the opportunity to actively engage with the cyber range and practice their skills in a realistic and challenging environment. This can help to increase their sense of control and agency, which is important for promoting engagement (Sinatra, Heddy, & Lombardi, 2015).
  2. Create a supportive learning environment: learners should feel supported and encouraged throughout the learning process. Providing feedback, encouragement, and support can help to foster positive emotional and cognitive engagement (Sinatra, Heddy, & Lombardi, 2015).
  3. Use a variety of instructional strategies: learners have different learning styles and preferences, and it is important to use a variety of instructional strategies to cater to these differences. For example, using videos, interactive simulations, and quizzes can help to increase cognitive engagement (Sinatra, Heddy, & Lombardi, 2015). [Sidenote from George: This is worthy of a post around AI dispensing bad advice and perpetuating myths]
  4. Set clear learning goals: providing learners with clear learning goals and objectives can help to increase their motivation and focus. Goals should be challenging but achievable, and learners should be given regular feedback on their progress (Sinatra, Heddy, & Lombardi, 2015).

Overall, creating a supportive and challenging learning environment that promotes engagement can help to increase motivation and achievement in cybersecurity learners who are using a cyber range.

Reference: Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges of defining and measuring student engagement in science. Educational Psychologist, 50(1), 1-13. doi: 10.1080/00461520.2014.1002924

Prompt: I want to know more about #2. How do I create a supportive learning environment in a cyber range?

Creating a supportive learning environment in a cyber range involves providing learners with feedback, encouragement, and support throughout their learning process. Here are some specific strategies you can use to promote a supportive learning environment:

  1. Provide timely feedback: learners need feedback on their progress to understand what they are doing well and what they need to improve. Provide feedback as soon as possible after a learning activity and be specific about what the learner did well and what they need to work on.
  2. Encourage learners: it is important to encourage learners and recognize their efforts. Encouragement can come in many forms, such as praise, positive reinforcement, and recognition of achievements. This can help learners build confidence and feel more motivated to learn.
  3. Build a sense of community: building a sense of community among learners can help to create a supportive learning environment. Encourage learners to work together, share their experiences, and collaborate on learning activities. This can create a sense of belonging and increase engagement.
  4. Provide support resources: learners may need additional support to be successful in a cyber range. Provide resources such as online help, tutorials, and support forums to help learners overcome challenges and get the assistance they need.
  5. Use effective communication: effective communication is key to creating a supportive learning environment. Communicate clearly and frequently with learners, provide clear instructions, and be available to answer questions and provide support.

By using these strategies, you can create a supportive learning environment that promotes engagement and success in the cyber range.

Prompt: Can you give me an example of an encouraging message I can send to learners who are in a cyber range?

Sure! Here is an example of an encouraging message you can send to learners who are in a cyber range:

“Great work on completing that cyber range activity! Your hard work and dedication to learning in this field are truly commendable. Remember, every challenge you overcome is an opportunity to grow and become a better cybersecurity professional. Keep up the good work, and don’t hesitate to reach out if you need any support or assistance. You’ve got this!”

This message acknowledges the learner’s efforts, encourages them to continue their learning journey, and offers support if needed.
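
For anyone who would rather run this kind of incremental prompting programmatically than in the ChatGPT interface, here is a minimal sketch in Python. It assumes the OpenAI Python client and an API key in the environment, and the prompts are abbreviated versions of the ones above. The essential move is that every request carries the full conversation history, so each prompt builds on the turns before it.

  # A minimal sketch of incremental (multi-turn) prompting.
  # Assumes the OpenAI Python client (pip install openai) and an
  # OPENAI_API_KEY environment variable; prompts are abbreviated.
  from openai import OpenAI

  client = OpenAI()

  # Move from the field, to the topic within it, to the application,
  # to probing for specifics.
  prompts = [
      "I'd like to learn more about cybersecurity. Could you help me with this topic?",
      "What do people mean when they mention 'cyber range' in the context of cybersecurity?",
      "Using the ideas from Sinatra, Heddy, & Lombardi (2015), write how to motivate "
      "engagement by cybersecurity learners who are using a cyber range.",
      "Can you give me an example of an encouraging message I can send to learners "
      "who are in a cyber range?",
  ]

  messages = []  # accumulated conversation history
  for prompt in prompts:
      messages.append({"role": "user", "content": prompt})
      response = client.chat.completions.create(
          model="gpt-3.5-turbo",  # stand-in for the model behind ChatGPT at the time
          messages=messages,      # send the full history so turns build on each other
      )
      reply = response.choices[0].message.content
      messages.append({"role": "assistant", "content": reply})
      print(f"> {prompt}\n\n{reply}\n")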

Special Issue on Trauma-Informed Instructional Design Practices in JAID

JAID recently published a special issue on trauma-informed instructional design practices, which you can read here. The abstract for the introduction to the special issue summarizes the effort well:

This special issue of JAID begins to bridge the gap between the theories of social emotional learning/trauma informed learning with instructional design offering specific cases of design and development projects that illustrate the confluence of these two broad areas. We share these articles with our ID community in the hopes of creating principles for “compassionate instructional design” (Thomas et al., 2019) through a collection of practitioner cases and research articles on applied instructional design practices that are responsive to trauma-affected learners, and which highlight the complexities of the learning context of the learners being served.

Faculty curiosities about AI tools and ChatGPT

I led an online workshop/conversation on AI for ~200 faculty at three colleges/universities who came together today to learn about the topic. It centered on the following questions. I am sharing them here for the benefit of others, but also to ask: Are there other curiosities that you are seeing locally? (Yes, I know that the most recent EDUCAUSE poll highlights cheating as a top concern, though I’m not certain it ought to be.)

  • How can (should??) I use AI for the benefit of my students’ learning?
  • Is ChatGPT really the disruptor it seems to be?
  • ChatGPT (AI) and authentic assessment – can these co-exist?
  • Neither I nor my students are as tech-savvy as it is often assumed we are. How do we keep up with innovations like ChatGPT, whether they be ‘good’ or ‘bad’, and how do we learn when to embrace them or ignore them?
  • Is ChatGPT (or other AI) a blessing or a curse for higher education?

ChatGPT is the tree, not the forest.

“Not seeing the forest for the trees” is a North American idiom used to warn that focusing on the details might lead one to miss the larger issue or problem. ChatGPT is the tree. Perhaps it’s the tallest or the leafiest tree, or the one that blossomed rapidly right in front of your eyes… sort of like a Japanese flowering cherry. What does this mean for you?

If you’re exploring ChatGPT – as a student, instructor, administrator, perhaps as a community – don’t focus solely on ChatGPT. Certainly, this particular tool can serve as one illustration of the possibilities, pitfalls, and challenges of Generative AI, but making decisions about Generative AI by focusing solely on ChatGPT may lead you to make decisions that are grounded in the idiosyncrasies of this particular technology at this particular point in time.

What does this mean in practice? Your syllabus policies should be broader than ChatGPT. Your taskforce and working groups should look beyond this particular tool. Your classroom conversations should highlight additional technologies.

I was asked recently to lead a taskforce to explore implications and put forward recommendations for our teaching and learning community. ChatGPT was the impetus. But our focus is Generative AI. It needs to be. And there’s a long AIED history here, which includes some of my earlier work on pedagogical agents.


AIs’ dedication to truth, justice, or equity

In response to my post from yesterday, Stephen Downes focuses on an important and difficult issue. He says:

…George Veletsianos focuses on the question, “What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?” Using the existing state of chatGPT as a guide, he suggests that “a certain level of specificity and nuance is necessary to guide the model towards particular values and ideals, and users should not assume that their values are aligned with the first response they might receive.” At a certain point, I think we might find ourselves uncomfortable with the idea that an individual designer’s values can outweigh the combined insights of the thousands or millions of voices that feed into an AI. True, today’s AIs are not very good examples of dedication to truth, justice or equity. But that, I’m sure, is a very temporary state of affairs.

Good point: we might find ourselves uncomfortable with that idea. But here are the two assumptions that I am making:

1. That the individual designer has developed a dedication to truth, justice, equity, and decolonization that they are able to apply to their work. Yes, I am hopeful on this.

2. For an AI to reflect values aligned with justice, equity, and decolonization, we (aka society) likely need to regulate and re-imagine their design. I am less hopeful on this.

I guess that where Stephen and I disagree is on the future commitments of AI. I would like to be as hopeful as he is, but I am not convinced yet. I would like to be wrong.

AI, ChatGPT, instructional design, and prompt crafting

David Wiley wrote a thoughtful post on the ways in which AI and Large Language Models (LLMs) can “provide instructional designers with first drafts of some of the work they do.” He says, “imagine you’re an instructional designer who’s been paired with a faculty member to create a course in microeconomics. These tools might help you quickly create first drafts of” learning outcomes, discussion prompts, rubrics, and formative assessment items. The point is that LLMs can quickly generate rough but mostly accurate drafts that humans can then “review, augment, and polish,” potentially shifting the work of instructional designers from authoring to editing. The post is well worth your time.

The question that I’d like to spend some time thinking about is the following: What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?

To focus on a single item here, much has been written about prompt engineering, or prompt crafting: the practice of composing and structuring user text to guide LLMs like ChatGPT toward desired responses. For example, a certain level of specificity and nuance is necessary to guide the model towards particular values and ideals, and users should not assume that their values are aligned with the first response they receive from a prompt. Below is an example illustrating how assessment items vary as the desired assessment changes from good, to efficient, to equitable. Each of my prompts is shown first, and ChatGPT’s output follows.


assume that a learning outcome of a course is “understand opportunity cost.” What is a good assessment item for this learning outcome?

A good assessment item for the learning outcome of “understand opportunity cost” would be a case study or scenario-based question that requires the student to analyze and apply their understanding of opportunity cost to make a decision.

For example:

Scenario: A small business owner is trying to decide whether to invest in new equipment or to expand the business by opening a second location. The owner has $100,000 to spend and estimates that the equipment will cost $50,000 and the second location will cost $80,000.

Question: What is the opportunity cost for the business owner in each case and which option would you recommend and why?

This question requires the student to understand the concept of opportunity cost, which is the cost of the next best alternative, in order to make an informed decision about which option would be the best investment for the business. It also allows the student to demonstrate their understanding of the concept by applying it to a real-world scenario.

assume that a learning outcome of a course is “understand opportunity cost.” What is an efficient assessment item for this learning outcome?

An efficient assessment item for the learning outcome of “understand opportunity cost” would be a multiple-choice question that allows the student to quickly and easily demonstrate their understanding of the concept.

For example:

Question: Which of the following is the correct definition of opportunity cost? A) The cost of a good or service B) The cost of the next best alternative C) The total cost of production D) The profit made from a good or service

The correct answer is B) The cost of the next best alternative. This multiple-choice question is efficient as it allows students to quickly demonstrate their understanding of the concept, and it is easy to grade, as there is only one correct answer. This multiple-choice question could be used as a formative assessment to check student’s understanding, and it could be also used as a summative assessment to evaluate their final understanding of the course’s outcome.

assume that a learning outcome of a course is “understand opportunity cost.” What is an equitable assessment item for this learning outcome?

An equitable assessment item for the learning outcome of “understand opportunity cost” would be a problem-solving or open-ended question that allows students to demonstrate their understanding of the concept in a way that is not limited by their background or language abilities.

For example:

Question: Suppose you have $1000 to spend and you want to buy a new computer and a new phone. A computer costs $800 and a phone costs $200. Provide a written explanation of how you would make your choice and what the opportunity cost of your choice is.

This open-ended question allows students to demonstrate their understanding of opportunity cost in their own words and allows them to apply their understanding to a real-world scenario. It also allows for different students to approach the problem differently and does not rely on language or background knowledge. It also allows for the teacher to evaluate student’s understanding of the concept in a more holistic way, and not just based on a single correct answer.

This type of assessment is equitable because it allows all students to demonstrate their understanding of the concept and does not disadvantage students who may struggle with multiple-choice questions or who may have a different background or language abilities.
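
Because the only thing that changes across these three prompts is a single adjective, this kind of side-by-side comparison is easy to script. Below is a minimal sketch, again assuming the OpenAI Python client; the template mirrors the prompts above, and draft_assessment is a hypothetical helper name.

  # A minimal sketch for comparing how a single adjective ("good",
  # "efficient", "equitable") changes the generated assessment item.
  # Assumes the OpenAI Python client and an OPENAI_API_KEY variable.
  from openai import OpenAI

  client = OpenAI()

  TEMPLATE = (
      'assume that a learning outcome of a course is "understand opportunity '
      'cost." What is a {quality} assessment item for this learning outcome?'
  )

  def draft_assessment(quality: str) -> str:
      """Ask for an assessment item emphasizing one quality dimension."""
      response = client.chat.completions.create(
          model="gpt-3.5-turbo",
          messages=[{"role": "user", "content": TEMPLATE.format(quality=quality)}],
      )
      return response.choices[0].message.content

  for quality in ("good", "efficient", "equitable"):
      print(f"--- {quality} assessment item ---")
      print(draft_assessment(quality))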
