Tag: AI Page 1 of 2

CFP: Equity of Artificial Intelligence in Higher Education (Journal of Computing in Higher Education)

Below is a call for papers for a special issue of JCHE focusing on Equity of Artificial Intelligence in Higher Education.

Guest Editors:

Lin Lin Lipsmeyer, Southern Methodist University, USA
Nia Nixon, University of California, Irvine, USA
Judi Fusco, Digital Promise, USA
Pati Ruiz, Digital Promise, USA
Cassandra Kelley, University of Pittsburgh, USA
Erin Walker, University of Pittsburgh, USA

In this special issue, we center opportunities and challenges in the rapid development of artificial intelligence (AI) for promoting the equitable design, implementation, and use of technologies within higher education. Equity is meeting people where they are with what they need to be successful (Levinson, Geron, & Brighouse, 2022). Issues related to equity are multiple and complex, involving but not limited to the choice of learning goals in the design of technologies, facilitating broader access to emerging technologies, and ensuring that technologies are responsive to the needs of individuals and communities from historically and systematically excluded populations. We are looking for articles that engage meaningfully with topics related to equity as part of their research questions, design and implementation focus, data analysis, and/or discussion when considering AI systems in higher education. We are interested in articles that address the impact of AI technologies on psychological experiences and processes (e.g., sense of belonging, self-efficacy) and/or domain knowledge. How can we use AI to know what people are learning? How can we use AI to support a diversity of learners and teachers? How should AI technologies in education differ across different populations of learners?

As AI technologies become more sophisticated, there are increasing opportunities for human-AI partnerships in service of learning. There is a need for increased understanding of what might be involved in these partnerships, grounded in the learning sciences, educational psychology, assessment, and related fields. Alongside this work there is a need for technological advancements that ensure these technologies are designed and implemented in ways that advance equitable outcomes for historically and contemporarily underserved groups. Core questions in this space revolve around the roles of humans and AI systems in higher education: How should each contribute within these partnerships? What should humans take away from these partnerships? And what does learning look like in environments where AI is being widely used? Specific to the JCHE, what is the role of higher education in the evolution of more equitable human-AI partnerships? We define equitable human-AI partnerships as those in which humans of varied backgrounds and identities are included in the design and deployment of the technologies, have agency during use of the technologies, and all see positive outcomes that meet their individual and community goals as they use the technologies for learning and life.

Technologies offer extensions to human abilities but also areas where traditionally human skills might be lost or replaced, yielding both opportunities and pitfalls. AI-based advancements can yield new opportunities for educational settings, including an improved ability to model learning across contexts, support learning in individual and group settings through personalized adaptations, and enhance learners’ and teachers’ ability to engage in learning environments. On the other hand, the use of AI-based technologies can invite concerns related to privacy and overly prescriptive models of learning. They are often implemented inequitably, sometimes due to a lack of equal access to the technologies, but also due to a lack of culturally relevant design for communities that are often most harmed by bias encoded in new technologies, and a misalignment between the goals of the technologies and the goals of the communities they serve. AI systems might also replace tasks students have been asked to do in the past, and it is not yet clear what the implications of these new approaches are, or what is lost and what is gained with these changes.

Unique Collaboration with CIRCLS

To support this important agenda of foregrounding equity and inclusion in the design and understanding of AI technologies for higher education, we are partnering with CIRCLS to host a series of three support sessions for authors submitting to this special issue, providing additional resources for doing this work effectively, and to convene an Advisory Board to support the authors of submitted articles. Authors are strongly encouraged to participate in these sessions and to engage with the Advisory Board as part of their submissions to this issue.
Key Topics

Papers considered for this special issue will report ground-breaking empirical research or present important conceptual and theoretical considerations on the conjunction of equity, inclusion, and AI. In general, papers may pursue one or several of the following goals:

  • Innovating new assessments, technologies, modeling, and pedagogies as we use more AI for learning across a variety of content domains
  • Exploring the impact of AI technologies on marginalized communities
  • Investigating AI literacy, education, and awareness building
  • Defining equitable human-AI partnerships
  • Exploring the impact of AI technologies on domain knowledge and psychological experiences and processes (e.g., sense of belonging, self-efficacy)
  • Aligning goals of learning in this new AI-enhanced landscape with the diversity of goals held by students as they pursue higher education
  • Engaging with topics such as, but not limited to, privacy, security, transparency, sustainability, labor costs, ethics, learner agency, learner diversity, and cultural relevance as they intersect with more equitable learning processes and outcomes
  • Developing accountability metrics for researchers and educational technology development teams

Timeline

December 15, 2023 — Abstracts of proposed papers due to the editors.

January 15, 2024 — Authors notified of initial acceptance of abstracts.

February 2024 (Date TBD) — CIRCLS Support Session A

April 1, 2024 — Papers due in the Editorial Management system

June 1, 2024 — Reviews completed & authors notified of decisions

June 2024 (Date TBD) — CIRCLS Support Session B

October 1, 2024 — Revised manuscripts due

December 1, 2024 — Reviews completed & authors notified of decisions

February 15, 2025 — Final manuscripts due

March 15, 2025 — Final manuscripts sent to the publishers

Submission to the Special Issue

Indicate your interest in participating in the special issue by submitting your abstract at https://pitt.co1.qualtrics.com/jfe/form/SV_bBpc2DTNPk5P7nM
For more information, resources, and updates related to this Special Issue, please visit the AI CIRCLS & JCHE Collaboration web page.

Reference:

Levinson, M., Geron, T., & Brighouse, H. (2022). Conceptions of educational equity. AERA Open, 8. https://doi.org/10.1177/23328584221121344

GPTs and one student’s custom version of ChatGPT

A few days ago OpenAI released GPTs, which are “custom versions of ChatGPT that you can create for a specific purpose” (announcement here: https://openai.com/blog/introducing-gpts), meaning that one could produce a GPT that is dedicated to a specific set of tasks. I’ve seen a few of these so far, including Mike Sharples’ chatbot that uses his book to offer teaching advice https://chat.openai.com/g/g-RCHNUwnD1-teachsmart, Mairéad Pratschke’s expert in digital education and learning design https://chat.openai.com/g/g-hrPUmXB5X-digital-professor, and this degree builder I came across https://chat.openai.com/g/g-KVB8vSaAJ-degree-designer.

One of the more interesting use cases I saw was one student’s Reddit post titled “I just replaced my chemistry professor with AI:” https://www.reddit.com/r/ChatGPT/comments/17slpti/i_just_replaced_my_chemistry_professor_with_ai/ 

 “I gave it the prompt: you are a college professor teaching inorganic chemistry 2 thermodynamics. The scope of your class is covered in the uploaded documents.

I then uploaded my professors PowerPoint slides and copied and pasted the chapter from the book. All the exercises, extra problems, and a thermodynamics properties table. I also included a summary of the topics covered.

I had to double prompt it to only teach from the documents and upload pdfs seemed to work a lot better than .txt files.”

Lots of areas to reflect on here, including student creativity, privacy, and ethics.

Recommendations on the use of Generative Artificial Intelligence at Royal Roads University

Today I met with Royal Roads University’s Board of Governors to present the work that we have completed in relation to Generative AI. I appreciated the opportunity not only to meet with the board, but also to hear comments and questions around this work and AI more generally.

Because this was a public session, I thought it might be beneficial to share the recommendations we put forward. The full report will likely appear on the university website, but for those of you who are tracking or thinking about institutional responses, this short, timely summary might be more valuable than a more detailed future report.

As background: In March 2023, Royal Roads University established a generative Artificial Intelligence (AI) taskforce. Chaired by Dr. George Veletsianos, the taskforce consisted of Drs. Niels Agger-Gupta, Jo Axe, Geoff Bird, Elizabeth Childs, Jaigris Hodson, Deb Linehan, Ross Porter, and Rebecca Wilson-Mah. This work was also supported by our colleagues Rosie Croft (University Librarian), and Ken Jeffery and Keith Webster (Associate Directors, Centre for Teaching and Educational Technologies). The taskforce produced a report with 10 recommendations by June 2023. The report (and its recommendations) should be seen as a working document that ought to be revisited and revised periodically, as the technology, ethics, and use of AI are rapidly evolving. The recommendations were:

  1. Establish and publicize the university’s position on Generative AI
  2. Build human capacity
  3. Engage in strategic and targeted hiring
  4. Establish a faculty working group and foster a community of practice
  5. Investigate, and potentially revise, assessment practices
  6. Position the Centre for Teaching and Educational Technologies as a “go-to” learning and resource hub for teaching and learning with AI
  7. Future-proof Royal Roads University [comment: a very contextual recommendation, but to ensure that this isn’t understood to encourage an instrumentalist view of AI, or to mean that the institution should focus solely on AI, the report urged readers to “consider that the prevalence of AI will have far-reaching institutional impacts, which will add to the social, economic, political, and environmental pressures that the University is facing.”]
  8. Revise academic integrity policy
  9. Develop and integrate one common research methods course [comment: another very contextual recommendation that likely doesn’t apply to others, but what does apply is the relevance of AI to student research, suggesting that research methods courses should consider the relationships between AI and research practices.]
  10. Ensure inclusivity and fair representation in AI-related decisions

I hope this is helpful to others.

High school senior: Why aren’t more teachers embracing AI?

One of my joys in life is reading student op-eds. Here is a wonderful example by a high school senior who asks: why aren’t more teachers using AI?

The student describes how they use it, how they find it beneficial, and how their teachers are suspicious of it.

I believe that the student, and many others, parents included, are truly curious. In other words, I don’t think the question is rhetorical. Why not use a technology which seems to offer so many benefits? So, I thought I’d take a few moments to answer it. A point of clarification before we turn to a list of possible reasons:

  • It’s not quite clear how prevalent AI use is in K-12. In the US, one survey suggests that around 10% of teachers use it, while another puts that number at ~50%. Even with the higher number, we need to clarify what “AI use” means, because teachers’ AI use might be invisible to students (e.g., using it to create/refine rubrics, produce examples, etc.). In other words, teachers might be using AI, just not in the pedagogical ways described in the op-ed.

Here’s a list of possible reasons:

  • Lack of familiarity and knowledge about how to use AI in the classroom.
  • Concerns about AI (e.g., about its biases, ethics, and implications for equity and access).
  • Lack of support and guidance (e.g., at the administrator or school district level) as to whether and how teachers ought to use it.
  • For decades, edtech has promised to revolutionize education. AI comes with similar promises. Teachers are tired and wary of these unmet promises.
  • Inconsistencies between the technology and the school/testing environment that teachers operate under.
  • It takes time for technology to spread into education settings, and for good reasons (e.g., devising ways to integrate a technology with an uncertain future takes more time and effort than people realize, and, if one thing is certain, it is that teachers lack time).

There are likely other reasons, and these can be grouped into individual reasons (e.g., why aren’t individual teachers using AI?), community and organizational reasons (e.g., why aren’t schools supporting teachers in using AI?), and societal reasons (e.g., why did our society structure schools in ways that limit rapid adoption of AI?).

Importantly: A lot of it relates to context, such as the content area or the particular school. And so, if you’re interested in why your particular teachers at your particular school in your particular part of the country aren’t using a technology (or a pedagogical strategy even), it’s important to identify local reasons for use/non-use.

And to be clear: This isn’t to say that teachers should or shouldn’t use a particular technology in education.

So very tired of predictions about AI in education…

By people who aren’t AIEd experts, education technology experts, education experts, and the like.

Case in point: “AI likely to spell end of traditional school classroom, leading [computer science] expert says.”

I appreciate cross-disciplinary engagement as much as I love guacamole (which is to say, a lot), but I’d also appreciate it if we stopped wasting our time on these same unfulfilled prophecies year after year, decade after decade.

Will AI impact education? In some ways it will, and in others it won’t. Will education shape the ways AI comes to be used in classrooms? In some ways it will, and in others it won’t.

Truth be told, this negotiated relationship isn’t as appealing as DISRUPTION, AVALANCHE, MIND-READING ROBO-TUTOR IN THE SKY, etc, which are words that readers of the history of edtech will recognize.

Three special issues on Generative AI in Education (update: now five)

If you’re looking for a home for your generative AI paper, you now have a few special issues to choose from:

TechTrends call

We welcome proposals for original research articles and theoretical papers that explore the potential of integrating Generative AI in education. We encourage submissions on (but not limited to) the following topics:
1. Personalized learning experiences that address learner needs and preferences;
2. Language learning, such as offering practice in conversation or helping with translation;
3. Coding/programming education or computational thinking, such as supporting debugging;
4. Assistance during the writing process, such as brainstorming, editing, character generation;
5. Teaching support, such as answering frequently asked questions, generating question prompts and examples, evaluating students’ writing;
6. Student engagement and motivation, such as providing feedback and human-like interactions through natural language output;
7. Higher-order thinking, such as enhancing analytical, critical thinking, and reflection;
8. Collaborative learning processes, such as supporting group discussion or interaction.

International Journal of Educational Technology in Higher Education call

The call for papers is intended to invite contributions that address the recent development of AI in HE in light of new applications such as ChatGPT, aiming to provide more comprehensive and collective answers to the following questions:

• What is the actual impact of AI on different aspects of HE institutions (e.g., student support systems, administration, professional development, and infrastructure)?
• What is the actual impact of AI on different aspects of learning and teaching in HE (e.g., assessment, data literacy, design of learning activities)?
• What is the actual impact of AI on different subjects in HE (e.g., students, teachers, administrators, casual workers, other stakeholders)?

The Special Issue Editors are also interested in making sense of the impact of AI on educational accessibility and (in-)equity regarding the cost, quality, and access in different forms of open, distance, and digital education. Both theoretical and empirical works will be considered for inclusion as long as they demonstrate rigour, criticality, and novelty.

IEEE Transactions on Learning Technologies call

The successful design and application of generative AI require holistic considerations of theoretical frameworks, pedagogical approaches, facilitative ecological structure, and appropriate standards. Topics of interest for this special issue include, but are not limited to:

  • Studies on the pedagogical or curricular approaches to teaching and learning with generative AI.
  • Discussion on the theoretical frameworks of generative AI to provide the basis for the understanding of systems and their capabilities for teaching and learning.
  • Discussion of the extent to which the design of learning environments with generative AI aligns with different theories of learning (e.g., behaviorism, cognitivism, (social) constructivism, constructionism, socio-cultural).
  • Studies on the applications of generative AI for assessment of, assessment for, and assessment as learning.
  • Development of the environmental structures that facilitate the employment of generative AI in education.
  • Development or implementation of relevant standards governing the proper use of generative AI in human learning contexts.
  • Exemplary use cases and practices of generative AI… [more bullet points in the CFP]

Update (Eamon Costello alerted me to an additional one): Learning Media and Technology call

We invite theoretical papers and theoretically-informed empirical studies that explore emerging practices and offer new imaginings of generative AI in education. Papers may use a variety of methodological approaches including feminist, critical, new materialist, interpretive, qualitative, rhetorical, quantitative, or experimental.

Topics may include, and are not limited to:

  • Critical pedagogy and generative AI
  • Ways in which generative AI further complicates notions of authenticity–of authorship, ideas, ownership, and truth
  • Creative and productive uses of generative AI and how they can be harnessed in education
  • Ethical, political, and epistemological issues of generative AI in education
  • Socio-technical explorations of generative AI and equity, power, inclusion, diversity, identity, marginalization, (dis)ability, ethnicity, gender, race, class, community, sustainability, etc.
  • Development of methodologies to critically assess generative AI in education
  • Alternative imaginaries for the development of future generative AI tools for education

Update Apr 16: One more from the Asian Journal of Distance Education

…to better understand what generative AI promises us, we need to examine its philosophy, develop a theoretical understanding, and investigate “how human tutors and machines (ChatGPT) could work together to achieve an educational objective, as well as the changes and outcomes brought to the education field (e.g., evolutionary or revolutionary)” (Tlili et al., 2023, p. 22). Based on the above thoughts, the Asian Journal of Distance Education seeks papers on generative AI with a focus on open, online, and distance education. Research papers, systematic reviews, and opinion papers with a critical stance are welcome.

Playing with prompt engineering in response to Hickey & Luo’s prompt engineering analysis

In a worthwhile and expansive post, Daniel Hickey and Qianxu Morgan Luo explore knowledge-rich prompts and the ways in which ChatGPT and Bard differ in the results they return. I thought that

  1. their exploration was interesting,
  2. their findings were interesting (e.g., “this initial analysis suggests that ChatGPT was much more capable of interpreting and applying the article using an engineered prompt that included the abstract. But ChatGPT did not add much knowledge about cyber ranges. In contrast, Bard appeared to take away almost nothing from the abstract but knew more about cyber ranges”),
  3. but that the prompts could have been engineered in more nuanced ways to improve the outcome.

I also recently learned that approaching prompts incrementally may produce more nuanced results, so I set out to recreate Hickey and Luo’s prompt in ChatGPT to see if I could generate a better outcome.

What did Hickey and Luo do?

Their knowledge-rich prompt was the following:

Use the following article by Sinatra, Heddy, & Lombardi (2015) to motivate engagement by cybersecurity learners who are using a digital range and use APA citations: Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a “grain-size” continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.

ChatGPT’s output was:

Research in educational psychology has consistently shown that engagement is a key factor in successful learning outcomes (Sinatra, Heddy, & Lombardi, 2015). For cybersecurity learners who are using a digital range, engagement is particularly important as it can enhance their motivation and achievement in this field. As Sinatra et al. (2015) note, engagement can be viewed from a dimensional perspective, encompassing behavioral, cognitive, emotional, and agentic dimensions. These dimensions can be used to design effective learning experiences that tap into students’ interests and motivations, such as through gamification or interactive simulations.

However, there is still a need for a concrete definition and effective measurement of engagement, especially in the context of science learning (Sinatra et al., 2015). To address this issue, educators in the field of cybersecurity can consider using a complementary approach that places engagement instrumentation on a continuum, ranging from a person-centered to a context-centered orientation (Sinatra et al., 2015). This approach can help clarify measurement issues and enable educators to tailor their teaching strategies to the individual needs and interests of their students.

In conclusion, engagement is a crucial factor in the success of cybersecurity learners using a digital range. By taking a dimensional perspective and using a continuum-based approach to measure engagement, educators can design effective learning experiences that promote student motivation and achievement in this field.

Reference:

Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges and opportunities of engaging learners in the scientific practices of explanation and argumentation. Journal of Educational Psychology, 107(2), 380–397. https://doi.org/10.1037/edu0000004

What did I do?

I divided the prompt into chunks, incrementally moving from the field (cybersecurity), to the topic within the field (cyber ranges), to the application of motivation to the topic (the Sinatra et al. citation), to probing for specifics. I also used the term “cyber range” rather than “digital range,” which Hickey and Luo used. This approach generated specific and actionable recommendations, especially compared to the generic reply above. My prompts and ChatGPT’s responses are below.
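For readers who script these interactions rather than use the chat interface, the incremental approach can be sketched in code. This is a hypothetical illustration, not code from my session: the message format follows common chat-API conventions, the add_exchange helper is invented for clarity, and the replies are placeholders rather than real model output.

```python
# Sketch of the incremental prompting approach: instead of one knowledge-rich
# prompt, the conversation narrows in steps (field -> topic -> application of
# the cited framework). Each turn appends to a running message history.

def add_exchange(history, user_prompt, assistant_reply):
    """Return a new history with one user/assistant exchange appended."""
    return history + [
        {"role": "user", "content": user_prompt},
        {"role": "assistant", "content": assistant_reply},
    ]

history = [{"role": "system", "content": "You are a helpful tutor."}]

# Step 1: establish the field.
history = add_exchange(history,
                       "I'd like to learn more about cybersecurity. Could you help me?",
                       "(model reply)")
# Step 2: narrow to the topic within the field.
history = add_exchange(history,
                       'What do people mean by "cyber range" in cybersecurity?',
                       "(model reply)")
# Step 3: apply the quoted Sinatra, Heddy, & Lombardi (2015) abstract.
history = add_exchange(history,
                       "Using the quoted abstract, write how to motivate engagement "
                       "by learners using a cyber range, with APA citations.",
                       "(model reply)")

print(len(history))  # 7: one system message plus three user/assistant pairs
```

With a real chat client, each request would submit the accumulated history, which is what lets later answers build on the earlier, narrower context instead of a single monolithic prompt.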

Prompt: I’d like to learn more about cybersecurity. Could you help me with this topic?

Sure! I’d be happy to help you learn more about cybersecurity. What specific areas of cybersecurity are you interested in? Some possible topics within cybersecurity include:

  1. Network security
  2. Information security
  3. Application security
  4. Cloud security
  5. Identity and access management
  6. Incident response
  7. Risk management
  8. Cybersecurity laws and regulations

Let me know what you’re interested in, and I can give you some pointers on where to start.

Prompt: What do people mean when they mention “cyber range” in the context of cybersecurity?

A “cyber range” is a simulated environment used for training and testing cybersecurity skills and technologies. It is essentially a virtual training ground where individuals and teams can practice identifying, preventing, and responding to cyber threats.

Cyber ranges can be set up to simulate a variety of scenarios, such as network intrusions, phishing attacks, malware infections, and denial-of-service attacks. They may also incorporate realistic components such as simulated user behavior, fake websites, and decoy systems.

By providing a safe and controlled environment for cybersecurity training and testing, cyber ranges can help organizations and individuals develop their skills and test their defenses without risking their actual systems or data. They can also be used to evaluate and compare different cybersecurity tools and techniques.

Some cyber ranges are open to the public, while others are only available to specific organizations or groups. Some are physical facilities that require specialized equipment and infrastructure, while others are entirely virtual and can be accessed from anywhere with an internet connection.

Overall, cyber ranges are an increasingly important tool for improving cybersecurity readiness and resilience in the face of an ever-evolving threat landscape.

Prompt: I would like to motivate engagement by cybersecurity learners who are using a cyber range. Using the ideas from Sinatra, Heddy, & Lombardi (2015) that are in quotes, write how to motivate engagement and use APA citations: “Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a ‘grain-size’ continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.”

To motivate engagement in cybersecurity learners who are using a cyber range, it is important to create a learning environment that is both challenging and supportive. According to Sinatra, Heddy, & Lombardi (2015), engagement is influenced by a variety of factors, including behavioral, cognitive, emotional, and agentic dimensions. To address these dimensions and promote engagement in the cyber range, the following strategies can be implemented:

  1. Provide hands-on learning experiences: learners should be given the opportunity to actively engage with the cyber range and practice their skills in a realistic and challenging environment. This can help to increase their sense of control and agency, which is important for promoting engagement (Sinatra, Heddy, & Lombardi, 2015).
  2. Create a supportive learning environment: learners should feel supported and encouraged throughout the learning process. Providing feedback, encouragement, and support can help to foster positive emotional and cognitive engagement (Sinatra, Heddy, & Lombardi, 2015).
  3. Use a variety of instructional strategies: learners have different learning styles and preferences, and it is important to use a variety of instructional strategies to cater to these differences. For example, using videos, interactive simulations, and quizzes can help to increase cognitive engagement (Sinatra, Heddy, & Lombardi, 2015). [Sidenote from George: This is worthy of a post around AI dispensing bad advice and perpetuating myths]
  4. Set clear learning goals: providing learners with clear learning goals and objectives can help to increase their motivation and focus. Goals should be challenging but achievable, and learners should be given regular feedback on their progress (Sinatra, Heddy, & Lombardi, 2015).

Overall, creating a supportive and challenging learning environment that promotes engagement can help to increase motivation and achievement in cybersecurity learners who are using a cyber range.

Reference: Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges of defining and measuring student engagement in science. Educational Psychologist, 50(1), 1-13. doi: 10.1080/00461520.2014.1002924

Prompt: I want to know more about #2. How do I create a supportive learning environment in a cyber range?

Creating a supportive learning environment in a cyber range involves providing learners with feedback, encouragement, and support throughout their learning process. Here are some specific strategies you can use to promote a supportive learning environment:

  1. Provide timely feedback: learners need feedback on their progress to understand what they are doing well and what they need to improve. Provide feedback as soon as possible after a learning activity and be specific about what the learner did well and what they need to work on.
  2. Encourage learners: it is important to encourage learners and recognize their efforts. Encouragement can come in many forms, such as praise, positive reinforcement, and recognition of achievements. This can help learners build confidence and feel more motivated to learn.
  3. Build a sense of community: building a sense of community among learners can help to create a supportive learning environment. Encourage learners to work together, share their experiences, and collaborate on learning activities. This can create a sense of belonging and increase engagement.
  4. Provide support resources: learners may need additional support to be successful in a cyber range. Provide resources such as online help, tutorials, and support forums to help learners overcome challenges and get the assistance they need.
  5. Use effective communication: effective communication is key to creating a supportive learning environment. Communicate clearly and frequently with learners, provide clear instructions, and be available to answer questions and provide support.

By using these strategies, you can create a supportive learning environment that promotes engagement and success in the cyber range.

Prompt: Can you give me an example of an encouraging message I can send to learners who are in a cyber range?

Sure! Here is an example of an encouraging message you can send to learners who are in a cyber range:

“Great work on completing that cyber range activity! Your hard work and dedication to learning in this field are truly commendable. Remember, every challenge you overcome is an opportunity to grow and become a better cybersecurity professional. Keep up the good work, and don’t hesitate to reach out if you need any support or assistance. You’ve got this!”

This message acknowledges the learner’s efforts, encourages them to continue their learning journey, and offers support if needed.
