
EdTech, magic mushrooms, and magic bullets

In my inbox, an email says:

Alberta’s new regulations on psychedelics to treat mental health issues come into effect today, making it the first province to regulate the use of hallucinogens in therapy.

Today in The Conversation Canada, Erika Dyck of the University of Saskatchewan walks readers through the new regulations, as well as the history, potential and pitfalls of hallucinogens both inside and outside clinical settings.

Psychedelics — from magic mushrooms and ayahuasca to LSD — are having a moment in the spotlight, with celebrity endorsements and a new generation of research on potential clinical uses. There is certainly a need for therapeutics to treat mental health issues, the growing prevalence of which could place a strain on the health-care system.

“Psychedelics are being held up as a potential solution,” Dyck writes. “But, magic mushrooms are not magic bullets.”

That last line captures so much of what is happening in our field, and education more broadly, that it is worth repeating.

  • AI is being held up as a potential solution, but it is not a magic bullet.
  • A return to in-person learning is being held up as a potential solution, but it is not a magic bullet.
  • Online learning is being held up as a potential solution, but it is not a magic bullet.
  • Microcredentials are being held up as a potential solution, but they are not a magic bullet.
  • … and so on

These things – and others – can be solutions to some problems, but it’s better to consider them parts of a Swiss army knife, parts of a toolkit. And while sometimes your Swiss army knife will work, this isn’t always going to be the case, especially when we’re considering some of the most significant challenges facing higher ed, the kinds of things that we’re not talking about (e.g., precarious employment and external regulations that encourage and foster conservatism).

And perhaps that’s the crux of the issue: these solutions are used to respond to the symptoms of larger problems – the things we’re not talking about – rather than to their root causes.

Image credit: DALL-E output in response to the prompt “a magic bullet in the style of salvador dali”

AI use in class policy

Ryan Baker shares his class policy on foundation models and asks for input:

Within this class, you are welcome to use foundation models (ChatGPT, GPT, DALL-E, Stable Diffusion, Midjourney, GitHub Copilot, and anything after) in a totally unrestricted fashion, for any purpose, at no penalty. However, you should note that all large language models still have a tendency to make up incorrect facts and fake citations, code generation models have a tendency to produce inaccurate outputs, and image generation models can occasionally come up with highly offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or a foundation model. If you use a foundation model, its contribution must be acknowledged in the handin; you will be penalized for using a foundation model without acknowledgement. Having said all these disclaimers, the use of foundation models is encouraged, as it may make it possible for you to submit assignments with higher quality, in less time. The university’s policy on plagiarism still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.

As far as policies go, I like what Ryan created because:

  • It functions as a policy as well as a pedagogical tool (“you should know that these models do X”) that draws students’ attention to important issues (e.g., ethics and equity).
  • It encourages the use of foundation models, recognizing that they are available and can have benefits, unlike head-in-the-sand efforts that ban their use.
  • It invites students to engage with the output of foundation models in meaningful ways.

In the LinkedIn thread, Jason D. Baker has a great comment that speaks to this: he asks whether students need only state that they used a model, or whether they should explain in detail how they used its outputs. What would such an explanation accompanying a submission look like? I’m not quite sure, but here’s an example of an article that demonstrates the ways the human was involved and the ways the AI contributed.

Time *with* people, online learning, and feeling ridiculous

I really enjoyed the quote below from John Warner, who is writing about graduate programs in writing and pursuing writing as a career. It strikes me that it also captures some aspects of online learning that takes place in community with people, as opposed to independent, self-paced, and autodidactic approaches. Group dynamics and being in community are powerful, but even then, does being in community necessarily mean feeling not ridiculous? Perhaps the emphasis is on the “not so.”

Spending so much time immersed in a group of people that are interested in the same things you are, and want the same things you do can be incredibly nourishing. Being alone, and believing you want to write can honestly feel like a kind of madness. For sure there’s some writers who have a kind of self-belief and inner fortitude that buoys them, but the rest of us are like any other human being walking around wondering if what you think, what you believe, what you want, is ridiculous, and you are therefore a ridiculous person.

Basically, being surrounded by others who are similarly oriented makes you feel *not so* ridiculous. (emphasis mine)
