Category: generative artificial intelligence

Edtech history, erasure, udacity, and blockchain

This thought in Audrey’s newsletter (update: link added March 30th) caught my attention, and encouraged me to share a related story.

 [Rose Eveleth] notes how hard it can be to tell a history when you try to trace a story to its primary sources and you simply cannot find the origin, the source. (I have been thinking a lot about this in light of last week’s Udacity news. So much of “the digital” has already been scrubbed from the web. The Wired story where Sebastian Thrun claimed that his startup would be one of ten universities left in the world? It’s gone. Many of the interviews he did where he said other ridiculous things about ed-tech – gone. What does this mean for those who will try to write future histories of ed-tech? Or, no doubt, of tech in general?) Erasure.

 

Remember how blockchain was going to revolutionize education? Ok, let’s get into the weeds of a related idea and how most everything that happened around it has also disappeared from the web.

One way blockchain was going to revolutionize education was through education apps and software running on the blockchain. Around 2017, Initial Coin Offerings (ICOs) were the means through which to raise money to build those apps. An ICO was the cryptocurrency equivalent of an initial public offering: a company would offer people a new cryptocurrency token in exchange for funds to launch the company. The token would then provide some utility for its holders relating to the app/software (e.g., you could exchange it for courses or study sessions, or hold on to it hoping that its value would increase and resell it). The basic idea here was crowdfunding, and a paper published in the Harvard International Law Journal estimates that contributions to ICOs exceeded $50bn by 2019. The Wikipedia ICO page includes more background.
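To make the token-utility model concrete, here is a minimal, hypothetical sketch in Python. Everything in it (the class, the exchange rate, the course price) is my own illustration of how such a token was typically pitched to work, not code from any actual project or whitepaper:

```python
# A minimal, hypothetical sketch of the "utility token" idea described above.
# The class, names, and exchange rate are illustrative only; they are not
# drawn from any actual whitepaper or ICO.

class EdTokenSale:
    def __init__(self, tokens_per_dollar: float = 100.0):
        self.tokens_per_dollar = tokens_per_dollar
        self.balances = {}      # holder -> token balance
        self.raised_usd = 0.0   # total contributions received

    def contribute(self, holder: str, usd: float) -> float:
        """Exchange funds for newly issued tokens (the ICO itself)."""
        minted = usd * self.tokens_per_dollar
        self.balances[holder] = self.balances.get(holder, 0.0) + minted
        self.raised_usd += usd
        return minted

    def redeem_for_course(self, holder: str, price_in_tokens: float) -> bool:
        """Spend tokens on the promised utility, e.g. enrolling in a course."""
        if self.balances.get(holder, 0.0) < price_in_tokens:
            return False
        self.balances[holder] -= price_in_tokens
        return True


sale = EdTokenSale()
sale.contribute("alice", 500)                    # $500 buys 50,000 tokens
print(sale.redeem_for_course("alice", 20_000))   # later redeemed for a course -> True
```

The pitch, in other words, was that contributors funded the company up front and were left holding tokens whose value depended entirely on the promised educational "utility" ever materializing.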

A number of these ICOs focused on education. Companies/individuals/friends* would create a website and produce a whitepaper describing their product. Whitepapers varied, but they typically described the problem to be solved, the blockchain-grounded edtech solution they offered, use cases, the team behind the project, a roadmap, and the token sale/model.

To give you a sense of the edtech claims included in one of those whitepapers:

“The vision is the groundbreaking disruption of the old education industry and all of its branches. The following points are initial use cases which [coin] can provide … Users pay with [coins] on every major e-learning platform for courses and other content they have passed or consumed… Institutions can get rid of their old and heavy documented certification process by having it all digitalized, organized, governed and issued by the [coin] technology.”

I was entertaining an ethnographic project at the time, and collected a few whitepapers. For a qualitative researcher, those whitepapers were a treasure trove of information. But, looking online, they’re largely scrubbed, gone, erased. In some cases, ICO founders’ LinkedIn profiles were scrubbed and the online communities surrounding the projects disappeared, sometimes as soon as the ICOs failed to raise the millions they were hoping for.

Some of you following this space might remember Woolf, the “world’s first blockchain university” launched by Oxford academics. And you might also remember that, like other edtech projects, it “pivoted.” See Martin Weller’s writing and David Gerard’s writing on this. Like so many others, the whitepaper describing the vision, the impending disruption of higher ed through a particular form of edtech, is gone. David kept a copy of that whitepaper, and I have copies of a couple of whitepapers from other ventures. But, by and large, that evidence is gone. I get it. Scammers scam, honest companies pivot, the two aren’t the same, and reputation management is a thing. But, I hope that this short post serves as a small reminder to someone in the future that grandiose claims around educational technology aren’t new. And perhaps, just perhaps, at a time of grandiose claims around AI in education, there are some lessons here.

 

 

Generative AI course statement

I have a new statement on generative AI in the class that I am teaching this semester. I like Ryan Baker’s statement from last year due to its permissive nature, so I built upon it by adding more specific guidance for my students to clarify expectations. I came up with what you see below. As always, feel free to use it as you see fit:

Artificial Intelligence Course Policy

Within this class, you are welcome to use generative AI models (e.g., ChatGPT, Microsoft Copilot, etc.) in your assignments at no penalty. However, you should note that when the activity asks you to reflect, my expectation is that you, rather than the AI, will be doing the reflection. In other words, prior to using AI tools, you should determine for what purpose you will be using them. For instance, you might ask them for help in critiquing your ideas, generating additional ideas, or editing, but not in completing assignments on your behalf.

You should also be aware that all generative AI models have a tendency to make up incorrect facts and fake citations, leading to inaccurate (and sometimes offensive) output. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit, regardless of whether it originally comes from you or an AI model.

For each assignment in which you use an AI model, you need to submit a document that shows (a) the prompts you used, (b) the output the AI model produced, (c) how you used that output, and (d) what your original contributions and edits were to the AI output that led to your final submission. Language editing is not an original contribution. An original contribution is one that showcases your critical and inquiry thinking skills and abilities above and beyond what the AI tools have generated. To summarize: if you use a generative AI model, you need to submit an accompanying document that shows the contribution of the model as well as your original contribution, explaining to me how you contributed significant work over and above what the model provided. You will be penalized for using AI tools without acknowledgement. The university’s policy on scholastic dishonesty still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.

Recommendations on the use of Generative Artificial Intelligence at Royal Roads University

Today I met with Royal Roads University’s Board of Governors to present the work that we have completed in relation to Generative AI. I appreciated the opportunity not only to meet with the board, but also to hear comments and questions around this work and AI more generally.

Because this was a public session, I thought it might be beneficial to share the recommendations we put forward. The full report will likely appear on the university website, but for those of you who are tracking or thinking about institutional responses, this short and timely summary might be more valuable than a more detailed future report.

As background: In March 2023, Royal Roads University established a generative Artificial Intelligence (AI) taskforce. Chaired by Dr. George Veletsianos, the taskforce consisted of Drs. Niels Agger-Gupta, Jo Axe, Geoff Bird, Elizabeth Childs, Jaigris Hodson, Deb Linehan, Ross Porter, and Rebecca Wilson-Mah. This work was also supported by our colleagues Rosie Croft (University Librarian), and Ken Jeffery and Keith Webster (Associate Directors, Centre for Teaching and Educational Technologies). The taskforce produced a report with 10 recommendations by June 2023. The report (and its recommendations) should be seen as a working document that ought to be revisited and revised periodically, as the technology, ethics, and use of AI are rapidly evolving. The recommendations were:

  1. Establish and publicize the university’s position on Generative AI.
  2. Build human capacity.
  3. Engage in strategic and targeted hiring.
  4. Establish a faculty working group and foster a community of practice.
  5. Investigate, and potentially revise, assessment practices.
  6. Position the Centre for Teaching and Educational Technologies as a “go-to” learning and resource hub for teaching and learning with AI.
  7. Future-proof Royal Roads University. [comment: a very contextual recommendation, but to ensure that this isn’t understood to encourage an instrumentalist view of AI or to mean that the institution should focus solely on AI, the report urged readers to “consider that the prevalence of AI will have far-reaching institutional impacts, which will add to the social, economic, political, and environmental pressures that the University is facing.”]
  8. Revise academic integrity policy.
  9. Develop and integrate one common research methods course. [comment: another very contextual recommendation that likely doesn’t apply to others, but what does apply is the relevance of AI to student research, suggesting that research methods courses consider the relationships between AI and research practices.]
  10. Ensure inclusivity and fair representation in AI-related decisions.

I hope this is helpful to others.

On Vanderbilt’s disabling of Turnitin’s AI detection feature, and faculty guidance

Last week, Vanderbilt University decided to disable Turnitin’s AI detection tool. Congratulations are in order!

To date, there’s little evidence as to the effectiveness and appropriateness of such tools (also see: their unintended consequences). Equally importantly, Vanderbilt’s decision lends credence and support to the recommendations that numerous working groups have put forward to their institutions, and paves the way for others to feel confident in taking similar actions. Earlier this year, for example, I led a generative AI taskforce at Royal Roads University. The relevant recommendation we put forward in early June is this:

Recommendation 5: Investigate, and potentially revise, assessment practices.

We recommend that faculty examine their current assessment practices and question them through the lens of AI tools. For instance, faculty could try their discussion prompts or reflection questions with AI tools to explore the role and limits of this technology. Regardless of the outcome of such efforts, we recommend that faculty do not rely on AI detection tools to determine whether learners used AI in their work. A service that claims to detect AI-generated text does not provide transparency on how that determination is made and encourages a culture of suspicion and mistrust. Emerging research also highlights challenges with reliably detecting AI-generated text (Sadasivan et al., 2023). Instead, we recommend that faculty engage with learners in conversations at the beginning of the course as to the appropriate and ethical use of AI. We further encourage faculty to continue their efforts towards experiential and authentic learning, including work-integrated learning, live cases, active learning opportunities, field trips, service learning, iterative writing assignments, project-based learning, and others. These are not necessarily failsafe approaches to deter cheating, and it may even be possible to leverage AI in support of experiential learning. Ultimately, we recommend that faculty question their assignments at a time when generative AI is widely available.

Metaphors of generative AI in education

It’s been interesting to see what kinds of conceptual metaphors have been used to describe generative AI. Conceptual metaphors are ways to talk about something by relating it to something else. The ways we choose to speak about generative AI in education matter, because those choices impact experiences, expectations, and even outcomes. Early pedagogical agent research, for example, identified different roles for agents, often positioning them as experts, mentors, or motivators. Even the term pedagogical agents carries its own connotations about the kinds of work such a system will engage in (pedagogical), compared to, say, conversational agents or intelligent agents.

Below is a small sample. What other metaphors have you seen?

Update: See Dave Cormier’s related post around prompt engineering, or what to call the act of talking to algorithms.

3 ways higher education can become more hopeful in the post-pandemic, post-AI era

Below is a republished version of an article that Shandell Houlden and I published in The Conversation last week, summarizing some of the themes that arose in our Speculative Learning Futures podcast.

3 ways higher education can become more hopeful in the post-pandemic, post-AI era

The future of education is about more than technology.
(Pexels/Emily Ranquist)

Shandell Houlden, Royal Roads University and George Veletsianos, Royal Roads University

We live at a time when universities and colleges are facing multiplying crises, pressures and changes.

From the COVID-19 pandemic and budgetary pressures to generative artificial intelligence (AI) and climate catastrophe, the future of higher education seems murky and fragmented — even gloomy.

Student mental health is in crisis. University faculty in our own research from the early days of the pandemic told us that they were “juggling with a blindfold on.” Since that time, we’ve also heard many echo the sentiment of feeling they’re “constantly drowning,” something recounted by researchers writing about a sense of precarity in universities in New Zealand, Australia and the western world.

In this context, one outcome of the pandemic has been a rise in discourses about specific, quite narrowly imagined futures of higher education. Technology companies, consultants and investors, for example, push visions of the future of education as being saved by new technologies. They suggest more technology is always a good thing and that technology will necessarily make teaching and learning faster, cheaper and better. That’s their utopian vision.

Some education scholars have been less optimistic, often highlighting the failures of utopian thinking. In many cases, their speculation about the future of education, especially where education technology is concerned, often looks bleak. In these examples, technology often reinforces prejudices and is used to control educators and learners alike.

Amid accelerating technology, what kind of future do we imagine for higher education?
Annie Spratt/Unsplash

In contrast to both utopian and grim futures, for a recent study funded by the Social Sciences and Humanities Research Council, we sought to imagine more hopeful and desirable higher education futures. These are futures emerging out of justice, equity and even joy. In this spirit, we interviewed higher education experts for a podcast entitled Speculative Learning Futures.

When asked to imagine more hopeful futures, what do experts propose as alternatives? What themes emerge in their work? Here are three key ideas.

It’s about more than technology

First, these experts reiterated that the future of education is about more than technology. When we think about the future of education we can sometimes imagine it as being tied entirely to the internet, computers and other digital tools. Or we believe AI in education is inevitable — or that all learning will be done through screens, maybe with robot teachers!

But as Jen Ross, senior lecturer in digital education, observes, technology doesn’t solve all our problems. When we think about education futures, technology alone does not automatically help us create better education or healthier societies. Social or community concerns like social inequities will continue to affect who can access education, our education systems’ values, and how we are shaped by technologies.

As many researchers have argued, including us, the pandemic highlighted how differences in access to the internet and computers can reinforce inequities for students.

AI can also reinforce inequities. Depending on the nature of data AI is trained with, the use of AI can perpetuate harmful biases in classrooms.

Ross notes in her recent book that social or community concerns shape how our societies could imagine education.

Researchers involved with Indigenous-led AI are tackling questions around how Indigenous knowledge systems could push AI to be more inclusive.

Policymakers and educators should consider technology as one part of a toolkit of responses for making informed decisions about what technologies align with more equitable and just education futures.

Emphasizing connection and diversity

In line with thinking about more than technology, the second theme is a reminder that the future of education is about healthy social connection and social justice. Researchers emphasize fostering diversity and celebrating diverse expressions of strengths and needs.

Experts envision and call for education that is more sustainable for everyone, not just a privileged few. Kathrin Otrel-Cass, professor at University of Graz, and Mark Brown, Ireland’s first chair in digital learning and director of the National Institute for Digital Learning at Dublin City University, suggest this means teaching and learning should be at a slower pace for students and faculty alike.

In this vision, policymakers must support education systems that regard the whole learner as an individual with specific physical, mental, emotional and intellectual needs, and as a member of multiple communities.

Acknowledge the goodness of the present

There’s lots to be gained by noting and supporting all the great things related to education that are happening in the present, since possible futures emerge from what now exists.

As two podcast guests, Dublin City University professor Eamon Costello and his collaborator Lily (Prajakta) Girme, noted, we need to acknowledge the good work of educators and learners in the small wins that happen every day.

In 2019, researchers Justin Reich and José Ruipérez-Valiente wrote: “new education technologies are rarely disruptive but instead are domesticated by existing cultures and systems. Dramatic expansion of educational opportunities to under-served populations will require political movements that change the focus, funding and purpose of higher education; they will not be achieved through new technologies alone.”

These are words worth repeating.

 

 

Shandell Houlden, Postdoctoral Fellow, School of Education and Technology, Royal Roads University and George Veletsianos, Professor and Canada Research Chair in Innovative Learning and Technology, Royal Roads University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

So very tired of predictions about AI in education…

By people who aren’t AIEd experts, education technology experts, education experts, and the like.

Case in point: “AI likely to spell end of traditional school classroom, leading [computer science] expert says.”

I appreciate cross disciplinary engagement as much as I love guacamole (which is to say, a lot), but I’d also appreciate that we stop wasting our time on these same unfulfilled prophecies year after year, decade after decade.

Will AI impact education? In some ways it will, and in others it won’t. Will education shape the ways AI comes to be used in classrooms? In some ways it will, and in others it won’t.

Truth be told, this negotiated relationship isn’t as appealing as DISRUPTION, AVALANCHE, MIND-READING ROBO-TUTOR IN THE SKY, etc, which are words that readers of the history of edtech will recognize.
