Category: generative artificial intelligence

Generative AI course statement

I have a new statement on generative AI for the class I am teaching this semester. I like Ryan Baker’s statement from last year due to its permissive nature, so I built upon it by adding some more specific guidance for my students to clarify expectations. I came up with what you see below. As always, feel free to use it as you see fit:

Artificial Intelligence Course Policy

Within this class, you are welcome to use generative AI models (e.g., ChatGPT, Microsoft Copilot) in your assignments at no penalty. However, you should note that when an activity asks you to reflect, my expectation is that you, rather than the AI, will be doing the reflection. In other words, prior to using AI tools, you should determine for what purpose you will be using them. For instance, you might ask them for help in critiquing your ideas, generating additional ideas, or editing, but not in completing assignments on your behalf.

You should also be aware that all generative AI models have a tendency to make up incorrect facts and fake citations, leading to inaccurate (and sometimes offensive) output. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit, regardless of whether it originally comes from you or an AI model.

For each assignment in which you use an AI model, you need to submit a document that shows (a) the prompts you used, (b) the output the AI model produced, (c) how you used that output, and (d) what your original contributions and edits were to the AI output that led to your final submission. Language editing is not an original contribution. An original contribution is one that showcases your critical thinking and inquiry skills and abilities above and beyond what the AI tools have generated. To summarize: if you use a generative AI model, you need to submit an accompanying document that shows the contribution of the model as well as your original contribution, explaining to me how you contributed significant work over and above what the model provided. You will be penalized for using AI tools without acknowledgement. The university’s policy on scholastic dishonesty still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.

Recommendations on the use of Generative Artificial Intelligence at Royal Roads University

Today I met with Royal Roads University’s Board of Governors to present the work that we have completed in relation to Generative AI. I appreciated the opportunity not only to meet with the board, but also to hear comments and questions around this work and AI more generally.

Because this was a public session, I thought it might be beneficial to share the recommendations we put forward. The full report will likely appear on the university website, but for those of you who are tracking or thinking about institutional responses, this short, timely summary might be more valuable than a more detailed future report.

As background: In March 2023, Royal Roads University established a generative Artificial Intelligence (AI) taskforce. Chaired by Dr. George Veletsianos, the taskforce consisted of Drs. Niels Agger-Gupta, Jo Axe, Geoff Bird, Elizabeth Childs, Jaigris Hodson, Deb Linehan, Ross Porter, and Rebecca Wilson-Mah. This work was also supported by our colleagues Rosie Croft (University Librarian), and Ken Jeffery and Keith Webster (Associate Directors, Centre for Teaching and Educational Technologies). The taskforce produced a report with 10 recommendations by June 2023. The report (and its recommendations) should be seen as a working document that ought to be revisited and revised periodically, as the technology, ethics, and use of AI are rapidly evolving. The recommendations were:

  1. Establish and publicize the university’s position on Generative AI
  2. Build human capacity
  3. Engage in strategic and targeted hiring
  4. Establish a faculty working group and foster a community of practice.
  5. Investigate, and potentially revise, assessment practices.
  6. Position the Centre for Teaching and Educational Technologies as a “go-to” learning and resource hub for teaching and learning with AI.
  7. Future-proof Royal Roads University [comment: a very contextual recommendation, but to ensure that this isn’t understood to encourage an instrumentalist view of AI or understood to mean that the institution should solely focus on AI, the report urged readers to “consider that the prevalence of AI will have far-reaching institutional impacts, which will add to the social, economic, political, and environmental pressures that the University is facing”]
  8. Revise academic integrity policy
  9. Develop and integrate one common research methods course [comment: another very contextual recommendation that likely doesn’t apply to others, but what does apply is the relevance of AI to student research, suggesting that research methods courses should consider the relationships between AI and research practices.]
  10. Ensure inclusivity and fair representation in AI-related decisions.

I hope this is helpful to others.

On Vanderbilt’s disabling of Turnitin’s AI detection feature, and faculty guidance

Last week, Vanderbilt University decided to disable Turnitin’s AI detection tool. Congratulations are in order!

To date, there’s little evidence as to the effectiveness and appropriateness of such tools (also see: their unintended consequences). Equally importantly, Vanderbilt’s decision lends credence and support to recommendations that numerous working groups put forward to their institutions, and paves the way for others to feel confident in taking similar actions. Earlier this year for example, I led a generative AI taskforce at Royal Roads University. The relevant recommendation we put forward in early June is this:

Recommendation 5: Investigate, and potentially revise, assessment practices.

We recommend that faculty examine their current assessment practices and question them through the lens of AI tools. For instance, faculty could try their discussion prompts or reflection questions with AI tools to explore the role and limits of this technology. Regardless of the outcome of such efforts, we recommend that faculty do not rely on AI detection tools to determine whether learners used AI in their work. A service that asserts to detect AI-generated text does not provide transparency on how that assertion is made and encourages a culture of suspicion and mistrust. Emerging research also highlights challenges with reliably detecting AI-generated text (Sadasivan et al., 2023). Instead, we recommend that faculty engage with learners in conversations at the beginning of the course as to the appropriate and ethical use of AI. We further encourage faculty to continue their efforts towards experiential and authentic learning, including work-integrated learning, live cases, active learning opportunities, field trips, service learning, iterative writing assignments, project-based learning, and others. These are not necessarily failsafe approaches to deter cheating, and it may even be possible to leverage AI in support of experiential learning. Ultimately, we recommend that faculty question their assignments at a time when generative AI is widely available.

Metaphors of generative AI in education

It’s been interesting to see what kinds of conceptual metaphors have been used to describe generative AI. Conceptual metaphors are ways to talk about something by relating it to something else. The ways we choose to speak about generative AI in education matter, because those choices shape experiences, expectations, and even outcomes. Early pedagogical agent research, for example, identified different roles for agents, often positioning them as experts, mentors, or motivators. Even the term pedagogical agents carries its own connotations around the kinds of work that such a system will engage in (pedagogical), compared to, say, conversational agents or intelligent agents.

Below is a small sample. What other metaphors have you seen?

Update: See Dave Cormier’s related post around prompt engineering, or what to call the act of talking to algorithms.

3 ways higher education can become more hopeful in the post-pandemic, post-AI era

Below is a republished version of an article that Shandell Houlden and I published in The Conversation last week, summarizing some of the themes that arose in our Speculative Learning Futures podcast.

3 ways higher education can become more hopeful in the post-pandemic, post-AI era

The future of education is about more than technology.
(Pexels/Emily Ranquist)

Shandell Houlden, Royal Roads University and George Veletsianos, Royal Roads University

We live at a time when universities and colleges are facing multiplying crises, pressures and changes.

From the COVID-19 pandemic and budgetary pressures to generative artificial intelligence (AI) and climate catastrophe, the future of higher education seems murky and fragmented — even gloomy.

Student mental health is in crisis. In our own research from the early days of the pandemic, university faculty told us that they were “juggling with a blindfold on.” Since that time, we’ve also heard many echo the sentiment of feeling they’re “constantly drowning,” something recounted by researchers writing about a sense of precarity in universities in New Zealand, Australia and the western world.

In this context, one outcome of the pandemic has been a rise in discourses about specific, quite narrowly imagined futures of higher education. Technology companies, consultants and investors, for example, push visions of the future of education as being saved by new technologies. They suggest more technology is always a good thing and that technology will necessarily make teaching and learning faster, cheaper and better. That’s their utopian vision.

Some education scholars have been less optimistic, often highlighting the failures of utopian thinking. In many cases, their speculation about the future of education, especially where education technology is concerned, often looks bleak. In these examples, technology often reinforces prejudices and is used to control educators and learners alike.

Amid accelerating technology, what kind of future do we imagine for higher education?
Annie Spratt/Unsplash

In contrast to both utopian and grim futures, for a recent study funded by the Social Sciences and Humanities Research Council, we sought to imagine more hopeful and desirable higher education futures. These are futures emerging out of justice, equity and even joy. In this spirit, we interviewed higher education experts for a podcast entitled Speculative Learning Futures.

When asked to imagine more hopeful futures, what do experts propose as alternatives? What themes emerge in their work? Here are three key ideas.

It’s about more than technology

First, these experts reiterated that the future of education is about more than technology. When we think about the future of education we can sometimes imagine it as being tied entirely to the internet, computers and other digital tools. Or we believe AI in education is inevitable — or that all learning will be done through screens, maybe with robot teachers!

But as Jen Ross, senior lecturer in digital education, observes, technology doesn’t solve all our problems. When we think about education futures, technology alone does not automatically help us create better education or healthier societies. Social or community concerns like social inequities will continue to affect who can access education, our education systems’ values and how we are shaped by technologies.

As many researchers have argued, including us, the pandemic highlighted how differences in access to the internet and computers can reinforce inequities for students.

AI can also reinforce inequities. Depending on the nature of data AI is trained with, the use of AI can perpetuate harmful biases in classrooms.

Ross notes in her recent book that social or community concerns shape how our societies could imagine education.

Researchers involved with Indigenous-led AI are tackling questions around how Indigenous knowledge systems could push AI to be more inclusive.

Policymakers and educators should consider technology as one part of a toolkit of responses for making informed decisions about what technologies align with more equitable and just education futures.

Emphasizing connection and diversity

In line with thinking about more than technology, the second theme is a reminder that the future of education is about healthy social connection and social justice. Researchers emphasize fostering diversity and celebrating diverse expressions of strengths and needs.

Experts envision and call for education that is more sustainable for everyone, not just a privileged few. Kathrin Otrel-Cass, professor at University of Graz, and Mark Brown, Ireland’s first chair in digital learning and director of the National Institute for Digital Learning at Dublin City University, suggest this means teaching and learning should be at a slower pace for students and faculty alike.

In this vision, policymakers must support education systems that regard the whole learner as an individual with specific physical, mental, emotional and intellectual needs, and as a member of multiple communities.

Acknowledge the goodness of the present

There’s lots to be gained by noting and supporting all the great things related to education that are happening in the present, since possible futures emerge from what now exists.

As two podcast guests, Eamon Costello, professor at Dublin City University, and collaborator Lily (Prajakta) Girme, noted, we need to acknowledge the good work of educators and learners in the small wins that happen every day.

In 2019, researchers Justin Reich and José Ruipérez-Valiente wrote: “new education technologies are rarely disruptive but instead are domesticated by existing cultures and systems. Dramatic expansion of educational opportunities to under-served populations will require political movements that change the focus, funding and purpose of higher education; they will not be achieved through new technologies alone.”

These are words worth repeating.

Shandell Houlden, Postdoctoral Fellow, School of Education and Technology, Royal Roads University and George Veletsianos, Professor and Canada Research Chair in Innovative Learning and Technology, Royal Roads University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

So very tired of predictions about AI in education…

By people who aren’t AIEd experts, education technology experts, education experts, and the like.

Case in point: “AI likely to spell end of traditional school classroom, leading [computer science] expert says.”

I appreciate cross-disciplinary engagement as much as I love guacamole (which is to say, a lot), but I’d also appreciate it if we stopped wasting our time on these same unfulfilled prophecies year after year, decade after decade.

Will AI impact education? In some ways it will, and in others it won’t. Will education shape the ways AI comes to be used in classrooms? In some ways it will, and in others it won’t.

Truth be told, this negotiated relationship isn’t as appealing as DISRUPTION, AVALANCHE, MIND-READING ROBO-TUTOR IN THE SKY, etc, which are words that readers of the history of edtech will recognize.

Public version: Making ChatGPT detectors part of our education system prioritizes surveillance over trust

The Globe and Mail published an op-ed I wrote. As a condition of being featured in the publication, the paper holds first publication rights for 48 hours. Since it’s been more than 48 hours, and for posterity, I’m making a copy available below.

Making ChatGPT detectors part of our education system prioritizes surveillance over trust

George Veletsianos is a professor of education and Canada Research Chair in Innovative Learning and Technology at Royal Roads University.

Imagine a world where surveillance technologies monitor and scrutinize your behaviour. Imagine a report that you write at work being compared with myriad others and flagged for additional inspection when an algorithm deems it to be “very similar” to others.

Students don’t have to imagine this world. They are already living in it, in the form of plagiarism detection software, remote proctoring technologies, and now, tools aimed at detecting whether students used ChatGPT – including new software that promises to catch students who use ChatGPT to cheat.

While taking online exams, students’ webcams scan their surroundings; their microphones monitor sounds and background noise, and their body and eye movements are tracked. Unexpected movements may indicate something as innocuous as stretching a tight neck or as problematic as catching a glimpse of Post-it notes on the wall, while unexpected sounds may indicate a child playing in the background or a roommate whispering answers. The essay assignments students submit are compared to a vast amount of writing by others. And a battery of scores might indicate plagiarizing from Wikipedia, passing off text created by ChatGPT as one’s own, or simply using common expressions. Any of this will get students flagged as potential cheaters.

That there are technologies to identify text written by artificial intelligence shouldn’t come as a surprise. What is surprising is that educators, administrators, students, and parents put up with surveillance technologies like these.

These technologies are harmful to education for two main reasons. First, they formalize mistrust. As a professor and researcher who has been studying the use of emerging technologies in education for nearly two decades, I am well aware that educational technology produces unintended consequences. In this case, these technologies take on a policing role and cultivate a culture of suspicion. The ever-present microscope of surveillance technology casts a suspicious eye on all learners, subjecting them all to an unwarranted level of scrutiny.

Second, these technologies introduce a host of other problems. Researchers note that these tools often flag innocent students and exacerbate student anxiety. This is something I’ve personally experienced as well when I took my Canadian citizenship exam online. Even though I knew the material and was confident in my abilities, my webcam’s bright green light was a constant reminder that I was being watched and that I should be wary of my every move.

To be sure, such tools may deter some students from intentionally plagiarizing. They may also improve efficiency, since they algorithmically check student work on behalf of educators.

But these reasons don’t justify surveillance.

A different world is possible when schools and universities dare to imagine richer and more hospitable learning environments that aren’t grounded in suspicion and policing. Schools and universities can begin to achieve this by developing more trusting relationships with their students and emphasizing the importance of honesty, original work, and creativity. They need to think of education in terms of relationships, and not in terms of control, monitoring, and policing. Students should be viewed as colleagues and partners.

Educators also need to come to terms with the fact that our assessments generally suffer from a poverty of imagination. Essays, tests, and quizzes have an important role to play in the learning process, but there are other ways to check for student achievement. We can design assessments that ask students to collect original data and draw inferences, or write and publish op-eds like this one; we can invite them to develop business and marketing plans for real-world businesses in their cities; we can ask them to reflect on their own unique experiences; we can require them to provide constructive peer-review and feedback to fellow students, or have them engage in live debates. In this light, ChatGPT is not a threat, but an opportunity for the education system to renew itself, to imagine a better world for its students.

Educators and administrators should stop using surveillance technologies like ChatGPT detectors, and parents and students should demand that schools and universities abolish them – not because cheating should be tolerated, but because rejecting the culture of suspicion that surveillance technologies foster and capitalize upon is a necessary step toward an education system that cares for its learners.
