Tag: AI

Faculty curiosities about AI tools and ChatGPT

I led an online workshop/conversation on AI for ~200 faculty at three colleges/universities who came together today to learn about the topic. It centered on the following questions. I am sharing them here for the benefit of others, but also to ask: Are there other curiosities that you are seeing locally? (Yes, I know that the most recent EDUCAUSE poll highlights cheating as a top concern, though I’m not certain it ought to be.)

  • How can (should??) I use AI for the benefit of my students’ learning?
  • Is ChatGPT really the disruptor it seems to be?
  • ChatGPT (AI) and authentic assessment – can these co-exist?
  • Neither I nor my students are as tech-savvy as it is often assumed we are. How do we keep up with innovations like ChatGPT, whether they be ‘good’ or ‘bad’, and how do we learn when to embrace them or ignore them?
  • Is ChatGPT (or other AI) a blessing or a curse for higher education?

AI’s dedication to truth, justice or equity

In response to my post from yesterday, Stephen Downes focuses on an important and difficult issue. He says:

…George Veletsianos focuses on the question, “What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?” Using the existing state of chatGPT as a guide, he suggests that “a certain level of specificity and nuance is necessary to guide the model towards particular values and ideals, and users should not assume that their values are aligned with the first response they might receive.” At a certain point, I think we might find ourselves uncomfortable with the idea that an individual designer’s values can outweigh the combined insights of the thousands or millions of voices that feed into an AI. True, today’s AIs are not very good examples of dedication to truth, justice or equity. But that, I’m sure, is a very temporary state of affairs.

Good point: We might find ourselves uncomfortable with that idea. But here are the two assumptions that I am making:

1. That the individual has developed a dedication to truth, justice, equity, and decolonization that they are able to apply to their work. Yes, I am hopeful on this.

2. For an AI to reflect values aligned with justice, equity, and decolonization, we (aka society) likely need to regulate and re-imagine their design. I am less hopeful on this.

I guess that where Stephen and I disagree is on the future commitments of AI. I would like to be as hopeful as he is, but I am not convinced yet. I would like to be wrong.

EdTech, magic mushrooms, and magic bullets

In my inbox, an email says:

Alberta’s new regulations on psychedelics to treat mental health issues come into effect today, making it the first province to regulate the use of hallucinogens in therapy.

Today in The Conversation Canada, Erika Dyck of the University of Saskatchewan walks readers through the new regulations, as well as the history, potential and pitfalls of hallucinogens both inside and outside clinical settings.

Psychedelics — from magic mushrooms and ayahuasca to LSD — are having a moment in the spotlight, with celebrity endorsements and a new generation of research on potential clinical uses. There is certainly a need for therapeutics to treat mental health issues, the growing prevalence of which could place a strain on the health-care system.

“Psychedelics are being held up as a potential solution,” Dyck writes. “But, magic mushrooms are not magic bullets.”

That last line captures so much of what is happening in our field, and education more broadly, that it is worth repeating.

  • AI is being held up as a potential solution, but it is not a magic bullet.
  • A return to in-person learning is being held up as a potential solution, but it is not a magic bullet.
  • Online learning is being held up as a potential solution, but it is not a magic bullet.
  • Microcredentials are being held up as a potential solution, but they are not a magic bullet.
  • … and so on

These things – and others – can be solutions to some problems, but it is better to consider them part of a Swiss army knife, part of a toolkit. And while sometimes your Swiss army knife will work, this isn’t always going to be the case, especially when we’re considering some of the most significant challenges facing higher ed, the kinds of things that we’re not talking about (e.g., precarious employment and external regulations that encourage and foster conservatism).

And perhaps that’s the crux of the issue: that these solutions are used to respond to the symptoms of larger problems, the things we’re not talking about, rather than to their root causes.

Image credit: DALL-E output in response to the prompt “a magic bullet in the style of salvador dali”

AI use in class policy

Ryan Baker shares his class policy on foundation models, and asks for input:

Within this class, you are welcome to use foundation models (ChatGPT, GPT, DALL-E, Stable Diffusion, Midjourney, GitHub Copilot, and anything after) in a totally unrestricted fashion, for any purpose, at no penalty. However, you should note that all large language models still have a tendency to make up incorrect facts and fake citations, code generation models have a tendency to produce inaccurate outputs, and image generation models can occasionally come up with highly offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or a foundation model. If you use a foundation model, its contribution must be acknowledged in the handin; you will be penalized for using a foundation model without acknowledgement. Having said all these disclaimers, the use of foundation models is encouraged, as it may make it possible for you to submit assignments with higher quality, in less time. The university’s policy on plagiarism still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.

As far as policies go, I like what Ryan created because:

  • It functions as a policy as well as a pedagogical tool (“you should know that these models do X”) that draws students’ attention to specific issues that are important (e.g., ethics and equity).
  • It encourages the use of foundation models, recognizing that they are available and can have benefits, unlike head-in-the-sand efforts that ban their use.
  • It invites students to engage with the output of foundation models in meaningful ways.

In the LinkedIn thread, Jason D. Baker has a great comment that speaks to this, when he asks whether students need only state that they used a model or whether they need to explain in detail how they used model outputs. What would such an explanation accompanying a submission look like? I’m not quite sure, but here’s an example of an article demonstrating the ways the human was involved and the ways the AI contributed.

Bots, AI, & Education update #3

Today’s rough set of notes that focus on teacherbots and artificial intelligence in education

  • Chatbots: One of the technologies that’s mesmerized Silicon Valley
  • Humans have long promised future lives enhanced by machines
  • Many proponents highlight the qualities of bots vis-a-vis teachers
    • personal
    • personalized
    • monitoring & nudging
    • can give reliable feedback
    • don’t get tired
    • etc etc
  • Knewton: Algorithms to complement and support teachers (sidenote: as if anyone will be forthright about aiming to replace teachers… except perhaps this book, which playfully states that “coaches (once called teachers)” will cooperate with AI)
  • Genetics with Jean: bots with affect-sensing functionality, i.e., software that detects students’ affective states and responds accordingly (see the toy sketch after this list)
  • Driverless Ed-Tech: Robots aren’t going to march in and take jobs; it’s the corporations, and the systems that support them, that enable that to happen.
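To make the affect-sensing idea above concrete, here is a deliberately toy Python sketch of “detect a student’s affective state and respond accordingly.” The cue lists, function names, and keyword matching are my own illustrative assumptions; systems like Genetics with Jean rely on much richer signals (facial expressions, posture, interaction logs), not keyword spotting.

```python
# Toy illustration of affect-sensing behaviour: guess a student's affective
# state from their message and adapt the bot's reply. Illustrative only.

FRUSTRATION_CUES = {"stuck", "confused", "give up", "don't get it"}
BOREDOM_CUES = {"boring", "too easy", "already know"}

def detect_affect(message: str) -> str:
    """Return a rough affect label based on simple keyword matching."""
    text = message.lower()
    if any(cue in text for cue in FRUSTRATION_CUES):
        return "frustrated"
    if any(cue in text for cue in BOREDOM_CUES):
        return "bored"
    return "neutral"

def respond(message: str) -> str:
    """Adapt the bot's next move to the detected affective state."""
    state = detect_affect(message)
    if state == "frustrated":
        return "Let's slow down and work through an easier example together."
    if state == "bored":
        return "You seem ahead of this material; here is a harder problem."
    return "Thanks! Let's move on to the next step."

print(respond("I'm stuck and I don't get it"))  # -> frustration response
```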

Bots, AI, & Education update #2

Yesterday’s rough set of notes that focus on teacherbots and artificial intelligence in education

  • Notable critiques of Big Data, data analytics, and algorithmic culture exist (e.g., boyd & Crawford, 2012; Tufekci, 2014; recent critiques of YouTube’s recommendation algorithm; and Caulfield’s demonstration of polarization on Pinterest). These rarely show up in discussions around bots and AI in education, even though critiques of learning analytics and big data (e.g., Selwyn, 2014; Williamson, 2015) are generally applicable to the technologies that enable bots to do what they do (e.g., Watters, 2015).
  • The complexity of machine learning algorithms means that even their developers are at times unsure how said algorithms arrive at particular conclusions
  • Ethics are rarely an area of focus in instructional design and technology (Gray & Boling, 2016) and related edtech fields. In designing bots, where should we turn for moral guidance? Who are such systems benefiting? Whose interests are served? If we can’t accurately predict how bots may make decisions when interacting with students (see the bullet point above), how will we ensure that moral values are embedded in the design of such algorithms? And whose moral values, in a tech industry that’s mired in biases, lacks broad representation, and rarely heeds user feedback (e.g., women repeatedly highlighting the harassment they experience on Twitter over the past five or so years, with Twitter taking few, if any, steps to curtail it)?

Bots, AI, & Education update #1

A rough set of notes from today that focus on teacherbots and artificial intelligence in education

  • Bots in education bring together many technologies & ideas including, but not limited to, artificial intelligence, data analytics, speech-recognition technologies, personalized learning, algorithms, recommendation engines, learning design, and human-computer interaction.
    • They seek to serve many roles (content curation, advising, assessment, etc)
  • Many note the potential that exists in developing better algorithms for personalized learning. Such algorithms are endemic to the design of AI and bots (see the toy sketch after these notes).
    • Concerns: Black box algorithms, data do not fully capture learning & may lead to biased outcomes & processes
  • Downes sees the crux of the matter as what AI can currently do vs. what AI will be able to do
    • This is an issue with every new technology and the promises of its creators
    • Anticipated future impact features prominently in claims surrounding impact of tech in edu
  • Maha Bali argues that AI work misunderstands what teachers do in the classroom
    • Yet, in a number of projects we see classroom observations being used to inform the design of AI systems
  • “AI can free time for teachers to do X” is an oft-repeated claim of AI/bot proponents. This claim often notes that AI will free teachers from mundane tasks and enable them to focus on those that matter. We see this in Jill Watson, in talks from IBM regarding Watson applications to education, but also in earlier attempts to integrate AI, bots, and pedagogical agents in education (e.g., 1960s, 1980s). Donald Clark reiterates this when he argues that teachers should “welcome something that takes away all the admin and pain.” See the update below.
  • Another oft-repeated claim is that AI & bots will work with teachers, not replace them
  • At times this argument is convincing. At other times, it seems dubious (e.g., when proponents ask readers/audiences to imagine a future where every child could have instant access to [insert amazing instructor here])
  • Predictions regarding the impact of bots and AI abound (of course). There are too many to list here, but here’s one example.
  • “Why a robot-filled education future may not be as scary as you think” argues that concerns around robots in education are to be expected. The article claims that people are “hard-wired” to perceive “newness as danger” as it seeks to explain away concerns by noting that education, broadly speaking, avoids change. There’s no recognition anywhere in the article that (a) education is, and has always been, in a constant state of change, and (b) edtech has always been an optimistic endeavour, so much so that its blind orthodoxy has been detrimental to its goal of improving education.
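Because “algorithms for personalized learning” stays abstract in the notes above, here is a deliberately simple Python sketch of what such an algorithm often boils down to: keep a per-skill mastery estimate and recommend the next item from it. The update rule, threshold, and skill names are illustrative assumptions, not any vendor’s actual method, and the sketch also makes the earlier concern concrete: a single number stands in for “learning.”

```python
# Toy "personalized learning" recommender: nudge a per-skill mastery estimate
# after each answer and suggest the next skill to practise. Illustrative only;
# real adaptive systems are far more complex (and far less transparent).

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class LearnerModel:
    mastery: Dict[str, float] = field(default_factory=dict)  # skill -> [0, 1]

    def update(self, skill: str, correct: bool, rate: float = 0.2) -> None:
        """Move the mastery estimate toward 1 (correct) or 0 (incorrect)."""
        current = self.mastery.get(skill, 0.5)
        target = 1.0 if correct else 0.0
        self.mastery[skill] = current + rate * (target - current)

    def next_skill(self, skills: List[str], threshold: float = 0.8) -> Optional[str]:
        """Recommend the first skill whose estimated mastery is below threshold."""
        for skill in skills:
            if self.mastery.get(skill, 0.5) < threshold:
                return skill
        return None  # everything "mastered" -- according to one crude number

curriculum = ["fractions", "ratios", "proportions"]
learner = LearnerModel()
learner.update("fractions", correct=True)
learner.update("fractions", correct=True)
print(learner.next_skill(curriculum))  # -> "fractions": still below the threshold
```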


Update:

From “Meet the mind-reading robo tutor in the sky”:

And underpaid, time-stressed teachers don’t necessarily have the time to personalize every lesson or drill deep into what each child is struggling with.

Enter the omniscient, cloud-based robo tutor.

“We think of it like a robot tutor in the sky that can semi-read your mind and figure out what your strengths and weaknesses are, down to the percentile,” says Jose Ferreira, the founder and CEO of ed-tech company Knewton.

