AIs' dedication to truth, justice, or equity

In response to my post from yesterday, Stephen Downes focuses on an important and difficult issue. He says:

…George Veletsianos focuses on the question, “What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?” Using the existing state of chatGPT as a guide, he suggests that “a certain level of specificity and nuance is necessary to guide the model towards particular values and ideals, and users should not assume that their values are aligned with the first response they might receive.” At a certain point, I think we might find ourselves uncomfortable with the idea that an individual designer’s values can outweigh the combined insights of the thousands or millions of voices that feed into an AI. True, today’s AIs are not very good examples of dedication to truth, justice or equity. But that, I’m sure, is a very temporary state of affairs.

Good point: We might find ourselves uncomfortable with that idea. But here are the two assumptions that I am making:

1. That the individual designer has developed a dedication to truth, justice, equity, and decolonization that they are able to apply to their work. Yes, I am hopeful on this.

2. For an AI to reflect values aligned with justice, equity, and decolonization, we (aka society) likely need to regulate and re-imagine its design. I am less hopeful on this.

I suppose that where Stephen and I disagree is on the future commitments of AI. I would like to be as hopeful as he is, but I am not yet convinced. I would like to be wrong.

AI, ChatGPT, instructional design, and prompt crafting

David Wiley wrote a thoughtful post on the ways in which AI and Large Language Models (LLMs) can “provide instructional designers with first drafts of some of the work they do.” He says: “imagine you’re an instructional designer who’s been paired with a faculty member to create a course in microeconomics. These tools might help you quickly create first drafts of” learning outcomes, discussion prompts, rubrics, and formative assessment items. The point is that LLMs can quickly generate mostly accurate rough drafts that humans can then “review, augment, and polish,” potentially shifting the role of instructional designers from authors to editors. The post is well worth your time.

The question that I’d like to spend some time thinking about is the following: What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?

To focus on a single item here, much has been written about prompt engineering or crafting: the practice of composing input text to guide LLMs like ChatGPT toward desired responses. For example, a certain level of specificity and nuance is necessary to guide the model towards particular values and ideals, and users should not assume that their values are aligned with the first response they receive from a prompt. Below is an example illustrating how assessment items vary as the desired assessment changes from good, to efficient, to equitable. My prompts appear first, with ChatGPT’s output following each one.


assume that a learning outcome of a course is “understand opportunity cost.” What is a good assessment item for this learning outcome?

A good assessment item for the learning outcome of “understand opportunity cost” would be a case study or scenario-based question that requires the student to analyze and apply their understanding of opportunity cost to make a decision.

For example:

Scenario: A small business owner is trying to decide whether to invest in new equipment or to expand the business by opening a second location. The owner has $100,000 to spend and estimates that the equipment will cost $50,000 and the second location will cost $80,000.

Question: What is the opportunity cost for the business owner in each case and which option would you recommend and why?

This question requires the student to understand the concept of opportunity cost, which is the cost of the next best alternative, in order to make an informed decision about which option would be the best investment for the business. It also allows the student to demonstrate their understanding of the concept by applying it to a real-world scenario.

assume that a learning outcome of a course is “understand opportunity cost.” What is an efficient assessment item for this learning outcome?

An efficient assessment item for the learning outcome of “understand opportunity cost” would be a multiple-choice question that allows the student to quickly and easily demonstrate their understanding of the concept.

For example:

Question: Which of the following is the correct definition of opportunity cost? A) The cost of a good or service B) The cost of the next best alternative C) The total cost of production D) The profit made from a good or service

The correct answer is B) The cost of the next best alternative. This multiple-choice question is efficient as it allows students to quickly demonstrate their understanding of the concept, and it is easy to grade, as there is only one correct answer. This multiple-choice question could be used as a formative assessment to check student’s understanding, and it could be also used as a summative assessment to evaluate their final understanding of the course’s outcome.

assume that a learning outcome of a course is “understand opportunity cost.” What is an equitable assessment item for this learning outcome?

An equitable assessment item for the learning outcome of “understand opportunity cost” would be a problem-solving or open-ended question that allows students to demonstrate their understanding of the concept in a way that is not limited by their background or language abilities.

For example:

Question: Suppose you have $1000 to spend and you want to buy a new computer and a new phone. A computer costs $800 and a phone costs $200. Provide a written explanation of how you would make your choice and what the opportunity cost of your choice is.

This open-ended question allows students to demonstrate their understanding of opportunity cost in their own words and allows them to apply their understanding to a real-world scenario. It also allows for different students to approach the problem differently and does not rely on language or background knowledge. It also allows for the teacher to evaluate student’s understanding of the concept in a more holistic way, and not just based on a single correct answer.

This type of assessment is equitable because it allows all students to demonstrate their understanding of the concept and does not disadvantage students who may struggle with multiple-choice questions or who may have a different background or language abilities.
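The three prompts above differ only in a single qualifier (good, efficient, equitable). For a designer who wants to compare such variants systematically, the prompts themselves can be generated programmatically. Here is a minimal sketch in Python; the function and variable names are my own invention, and the actual call to ChatGPT is deliberately left out:

```python
# Build prompt variants that differ only in the qualifier, so the
# resulting model outputs can be compared side by side.
OUTCOME = "understand opportunity cost"
QUALIFIERS = ["good", "efficient", "equitable"]

def build_prompt(outcome: str, qualifier: str) -> str:
    """Return an assessment-item prompt for one qualifier/value."""
    article = "an" if qualifier[0] in "aeiou" else "a"
    return (
        f'Assume that a learning outcome of a course is "{outcome}." '
        f"What is {article} {qualifier} assessment item "
        f"for this learning outcome?"
    )

prompts = [build_prompt(OUTCOME, q) for q in QUALIFIERS]
for p in prompts:
    print(p)
    # Each prompt would then be sent to the model, and the designer,
    # acting as editor, would review and compare the responses.
```

The point of the sketch is only that small, deliberate wording changes are what steer the model toward different values, so it is worth making those changes explicit rather than settling for the first response.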

NIDL’s Top 10 Good Reads from 2022

For the past seven years, the good folks at the National Institute of Digital Learning have been selecting 10 open access journal articles to share as their “good reads” of the year. It’s quite a feat to find, read, and reflect on more than 100 articles to come up with any list, let alone one that goes beyond typical metrics like citation counts and takes into account issues like diversity and representation. I am grateful to see one of our papers on the list, but I am even more excited to see a few papers that I haven’t read yet! I am listing the papers below, but please read the original post, as our colleagues’ reflections on the process are part of what makes this project great!

Fawns, T. (2022). An entangled pedagogy: Looking beyond the pedagogy—technology dichotomy. Postdigital Science and Education.

Johnson, N., Seaman, J., & Poulin, R. (2022). Defining different modes of learning: Resolving confusion and contention through consensus. Online Learning Journal, 26(3), 91-110.

Passey, D. (2022). Theories, theoretical and conceptual frameworks, models and constructs: Limiting research outcomes through misconceptions and misunderstanding. Studies in Technology Enhanced Learning, 1(1), 95-114. DOI: 10.21428/8c225f6e.56810a1a

Atenas, J., Beetham, H., Bell, F., Cronin, C., Vu Henry, J., & Walji, S. (2022). Feminisms, technologies and learning: Continuities and contestations. Learning, Media and Technology, 47(1), 1-10. DOI: 10.1080/17439884.2022.2041830

Downes, S. (2022). Connectivism. Asian Journal of Distance Education, 17(1), 58-87.

Saçak, B., Bozkurt, A., & Wagner, E. (2022). Learning design versus instructional design: A bibliometric study through data visualization approaches. Education Sciences, 12, 752, 1-14.

Houlden, S., & Veletsianos, G. (2022). Impossible dreaming: On speculative education fiction and hopeful learning futures. Postdigital Science and Education.

Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57(4), 542–570.

Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The promises and challenges of artificial intelligence for teachers: A systematic review of research. TechTrends, 66, 616–630.

Tlili, A., et al. (2022). Is Metaverse in education a blessing or a curse: A combined content and bibliometric analysis. Smart Learning Environments, 9(24).

EdTech, magic mushrooms, and magic bullets

In my inbox, an email says:

Alberta’s new regulations on psychedelics to treat mental health issues come into effect today, making it the first province to regulate the use of hallucinogens in therapy.

Today in The Conversation Canada, Erika Dyck of the University of Saskatchewan walks readers through the new regulations, as well as the history, potential and pitfalls of hallucinogens both inside and outside clinical settings.

Psychedelics — from magic mushrooms and ayahuasca to LSD — are having a moment in the spotlight, with celebrity endorsements and a new generation of research on potential clinical uses. There is certainly a need for therapeutics to treat mental health issues, the growing prevalence of which could place a strain on the health-care system.

“Psychedelics are being held up as a potential solution,” Dyck writes. “But, magic mushrooms are not magic bullets.”

That last line captures so much of what is happening in our field, and education more broadly, that it is worth repeating.

  • AI is being held up as a potential solution, but it is not a magic bullet.
  • A return to in-person learning is being held up as a potential solution, but it is not a magic bullet.
  • Online learning is being held up as a potential solution, but it is not a magic bullet.
  • Microcredentials are being held up as a potential solution, but they are not a magic bullet.
  • … and so on

These things – and others – can be solutions to some problems, but consider them parts of a Swiss army knife, parts of a toolkit. And while sometimes your Swiss army knife will do the job, that isn’t always the case, especially when we’re considering some of the most significant challenges facing higher ed, the kinds of things that we’re not talking about (e.g., precarious employment, or external regulations that encourage and foster conservatism).

And perhaps that’s the crux of the issue: these solutions are used to respond to the symptoms of larger problems, of the things we’re not talking about, rather than their root causes.

Image credit: DALL·E output in response to the prompt “a magic bullet in the style of salvador dali”

AI use in class policy

Ryan Baker shares his class policy on foundation models, and asks for input:

Within this class, you are welcome to use foundation models (ChatGPT, GPT, DALL-E, Stable Diffusion, Midjourney, GitHub Copilot, and anything after) in a totally unrestricted fashion, for any purpose, at no penalty. However, you should note that all large language models still have a tendency to make up incorrect facts and fake citations, code generation models have a tendency to produce inaccurate outputs, and image generation models can occasionally come up with highly offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or a foundation model. If you use a foundation model, its contribution must be acknowledged in the handin; you will be penalized for using a foundation model without acknowledgement. Having said all these disclaimers, the use of foundation models is encouraged, as it may make it possible for you to submit assignments with higher quality, in less time. The university’s policy on plagiarism still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.

As far as policies go, I like what Ryan created because:

  • It functions as a policy as well as a pedagogical tool (“you should know that these models do X”) that draws students’ attention to important issues (e.g., ethics and equity).
  • It encourages the use of foundation models, recognizing that they are available and can have benefits, unlike head-in-the-sand efforts that ban their use.
  • It invites students to engage with the output of foundation models in meaningful ways.

In the LinkedIn thread, Jason D. Baker has a great comment that speaks to this, asking whether students merely need to state that they used a model or whether they need to explain in detail how they used its outputs. What would such an explanation accompanying a submission look like? I’m not quite sure, but here’s an example of an article demonstrating the ways the human was involved and the ways the AI contributed to it.

A pan-Canadian certification program for higher education instructors?

Tony Bates wrote his five wishes for online learning in 2023, along with reasons why he’s somewhat pessimistic about them being fulfilled. I wanted to spend a few minutes here discussing alternatives to Tony’s second wish: “A national certification program for higher education instructors.” If this wish has a “5% odds of happening” (and I agree with Tony here), what kinds of alternatives might have greater chances of success?

Provincial responsibility for higher education means that (at present) this kind of wish is dead in the water. Some alternatives that might go some way toward addressing the problem of teaching competence are the following:

  • Provincial certification programs for higher education instructors. The BC government has developed a digital learning strategy, which includes a variety of steps, resources, actions, recommendations, and tools to support and expand the effective and equitable use of digital learning in the province. With a strategy in place, developing a provincial certification program makes good sense. Some of the challenges that Tony identifies a federal program facing will still be present in the provincial context (e.g., research-teaching hierarchies, cost, academic freedom issues), but the odds here are greater than 5%. My guesstimate? 10%. Still poor, and smaller-scale. On the other hand, a provincial program, say in BC, might become a proof of concept for other provinces, especially if it is openly licensed, cross-disciplinary, and flexible enough in its design and assessment.
  • Institutional and cross-institutional certification programs, such as BCIT’s Polytechnic Academy proposal, which I understand to be similar to the work that Centers of Teaching and Learning at multiple institutions do, such as the Facilitating Learning Online (FLO) courses offered by a number of institutions and organizations in the province. There’s a slew of benefits that can come from multi-institutional collaboration on such efforts, as Tony describes. I’m more optimistic about this, especially because there was quite a lot of collaboration during the COVID-19 pandemic that might provide the impetus and support for it, and because I see collaborative-minded institutions coming together for other initiatives (like the new campus that four island institutions are opening in the Westshore).
  • Institutional certification programs for future faculty. This one is close to my heart. Preparing current doctoral students for online/hybrid teaching – and for teaching in general – is necessary (and, I might add, it also equips them with skills that are relevant outside the academy, like leading teams in collaborative groupwork). There are other challenges here to be sure (such as academic departments agreeing that this topic is significant enough to warrant a course/certificate/microcredential/something), but this might be an area where the office/school/college of graduate studies plays a pivotal role. Another challenge: this kind of initiative addresses the current problem only in the future, leaving the status quo untouched. It’s not a solution on its own, but it’s part of a package of solutions.

If you would like to add more to this, the comments are open!

Η ξενιθκιά τζιαι οι ανθοί (Living abroad and the zucchini flowers)

There’s been a recent movement in Cyprus to accept and recognize the beauty, richness, and significance of the Cypriot language. Typically understood as a dialect, Greek Cypriot (and Turkish Cypriot for that matter, I believe) is an oral, not a written, language: there are no courses in “Cypriot” in K-12 schools. The courses are in Modern Greek (Turkish), and the language spoken in professional contexts is Greek. There are all sorts of issues wrapped up in this, around people’s notions of what is and isn’t proper, and around what ought or ought not to be taught in schools. As part of the movement to use Cypriot in written contexts, a local newspaper has started publishing op-eds written in Cypriot on any topic, and I wrote the following story about my grandmother’s cooking of stuffed zucchini flowers.

Living abroad and the zucchini flowers (Η ξενιθκιά τζιαι οι ανθοί)

It’s been more than twenty years since I left Cyprus. First America. Then England. Back to America. Now Canada. And not Toronto, which is at least a 10-hour direct flight to Athens and then an hour and a half from there. No! The other side of Canada. The Pacific side. In Victoria, on Vancouver Island, which many people confuse with the city of Vancouver. The side that takes at least three flights and 24 hours to get back home. “Out in the sticks,” says my friend Fottas from high school. Long story short: England, America, or Canada, it made no difference to my late grandmother Eleni from Tremithousa. “Alas, my son, foreign lands will eat you up,” she would tell me every time I went to see her.

I don’t know exactly what she meant by the expression “foreign lands will eat you up.” I would smile and reassure her, but I suspect she thought that things are hard abroad. That I wouldn’t have relatives, acquaintances, and friends in a time of need. That people are different. Stand-offish, compared to Cypriots? Lacking trust? That there’s no place like Cyprus? I don’t know. I should have asked her, but it never occurred to me! One thing I knew for sure, though: abroad I would never find a platter piled with zucchini flowers ready on the table, the way she made them every time I went to see her.

The dear departed old woman was illiterate but, it turns out, wise. Up to a point, she wasn’t wrong. “You’ve forgotten your Greek, re,” my sister told me in conversation this past June, when I came to see my family after two and a half years of pandemic. And when I wrote to my other sister on Facebook “Πολλά ωρεο! Πίος το εκαμε;” (“Very nice! Who made it?”) about a dessert, she replied: “Re, me polli agapi einai *ωραίο και *ποιος” (“Re, with much love, it’s *ωραίο and *ποιος”), correcting my spelling. Quite the speller, that kid, who was ten years old when I left and is now thirty. Anyway, they have a point too. Foreign lands have half-eaten my mother tongue. This article would have taken me about an hour to write in English, but it took me three and change in Cypriot. And because I’m starving, I can’t describe right now how living abroad is like Ithaca, so I’ll stop writing.

But a platter piled with zucchini flowers ready on the table, the way she made them every time I went to see her? Nope.

