ChatGPT is the tree, not the forest.

“Not seeing the forest for the trees” is a North American idiom that warns that focusing on the details may lead one to miss the larger issue. ChatGPT is the tree. Perhaps it’s the tallest or the leafiest tree, or the one that blossomed rapidly right in front of your eyes… sort of like a Japanese flowering cherry. What does this mean for you?

If you’re exploring ChatGPT – as a student, instructor, administrator, perhaps as a community – don’t focus solely on ChatGPT. Certainly, this particular tool can serve as one illustration of the possibilities, pitfalls, and challenges of Generative AI, but making decisions about Generative AI by focusing solely on ChatGPT may lead you to decisions grounded in the idiosyncrasies of this particular technology at this particular point in time.

What does this mean in practice? Your syllabus policies should be broader than ChatGPT. Your taskforce and working groups should look beyond this particular tool. Your classroom conversations should highlight additional technologies.

I was asked recently to lead a taskforce to explore implications and put forward recommendations for our teaching and learning community. ChatGPT was the impetus. But our focus is Generative AI. It needs to be. And there’s a long AIED history here, which includes some of my earlier work on pedagogical agents.

 

AI’s dedication to truth, justice or equity

In response to my post from yesterday, Stephen Downes focuses on an important and difficult issue. He says:

…George Veletsianos focuses on the question, “What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?” Using the existing state of chatGPT as a guide, he suggests that “a certain level of specificity and nuance is necessary to guide the model towards particular values and ideals, and users should not assume that their values are aligned with the first response they might receive.” At a certain point, I think we might find ourselves uncomfortable with the idea that an individual designer’s values can outweigh the combined insights of the thousands or millions of voices that feed into an AI. True, today’s AIs are not very good examples of dedication to truth, justice or equity. But that, I’m sure, is a very temporary state of affairs.

Good point: We might find ourselves uncomfortable with that idea. But here are the two assumptions that I am making:

1. That the individual designer has developed a dedication to truth, justice, equity, and decolonization that they are able to apply to their work. Yes, I am hopeful on this.

2. For AI systems to reflect values aligned with justice, equity, and decolonization, we (aka society) likely need to regulate and re-imagine their design. I am less hopeful on this.

I guess that where Stephen and I disagree is on the future commitments of AI. I would like to be as hopeful as he is, but I am not convinced yet. I would like to be wrong.

AI, ChatGPT, instructional design, and prompt crafting

David Wiley wrote a thoughtful post on the ways in which AI and Large Language Models (LLMs) can “provide instructional designers with first drafts of some of the work they do.” He says “imagine you’re an instructional designer who’s been paired with a faculty member to create a course in microeconomics. These tools might help you quickly create first drafts of” learning outcomes, discussion prompts, rubrics, and formative assessment items. The point is that LLMs can quickly generate rough drafts that are mostly accurate, which humans can then “review, augment, and polish,” potentially shifting the work of instructional designers from authors to editors. The post is well worth your time.

The question that I’d like to spend some time thinking about is the following: What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?

To focus on a single item here, much has been written about prompt engineering, or prompt crafting: the practice of wording the text a user enters so as to guide LLMs like ChatGPT toward desired responses. For example, a certain level of specificity and nuance is necessary to guide the model towards particular values and ideals, and users should not assume that their values are aligned with the first response they receive from a prompt. Below is an example illustrating how assessment items vary as the desired quality of the assessment changes from good, to efficient, to equitable. My prompt is in bold and ChatGPT’s output follows.

 

assume that a learning outcome of a course is “understand opportunity cost.” What is a good assessment item for this learning outcome?

A good assessment item for the learning outcome of “understand opportunity cost” would be a case study or scenario-based question that requires the student to analyze and apply their understanding of opportunity cost to make a decision.

For example:

Scenario: A small business owner is trying to decide whether to invest in new equipment or to expand the business by opening a second location. The owner has $100,000 to spend and estimates that the equipment will cost $50,000 and the second location will cost $80,000.

Question: What is the opportunity cost for the business owner in each case and which option would you recommend and why?

This question requires the student to understand the concept of opportunity cost, which is the cost of the next best alternative, in order to make an informed decision about which option would be the best investment for the business. It also allows the student to demonstrate their understanding of the concept by applying it to a real-world scenario.

assume that a learning outcome of a course is “understand opportunity cost.” What is an efficient assessment item for this learning outcome?

An efficient assessment item for the learning outcome of “understand opportunity cost” would be a multiple-choice question that allows the student to quickly and easily demonstrate their understanding of the concept.

For example:

Question: Which of the following is the correct definition of opportunity cost? A) The cost of a good or service B) The cost of the next best alternative C) The total cost of production D) The profit made from a good or service

The correct answer is B) The cost of the next best alternative. This multiple-choice question is efficient as it allows students to quickly demonstrate their understanding of the concept, and it is easy to grade, as there is only one correct answer. This multiple-choice question could be used as a formative assessment to check student’s understanding, and it could be also used as a summative assessment to evaluate their final understanding of the course’s outcome.

assume that a learning outcome of a course is “understand opportunity cost.” What is an equitable assessment item for this learning outcome?

An equitable assessment item for the learning outcome of “understand opportunity cost” would be a problem-solving or open-ended question that allows students to demonstrate their understanding of the concept in a way that is not limited by their background or language abilities.

For example:

Question: Suppose you have $1000 to spend and you want to buy a new computer and a new phone. A computer costs $800 and a phone costs $200. Provide a written explanation of how you would make your choice and what the opportunity cost of your choice is.

This open-ended question allows students to demonstrate their understanding of opportunity cost in their own words and allows them to apply their understanding to a real-world scenario. It also allows for different students to approach the problem differently and does not rely on language or background knowledge. It also allows for the teacher to evaluate student’s understanding of the concept in a more holistic way, and not just based on a single correct answer.

This type of assessment is equitable because it allows all students to demonstrate their understanding of the concept and does not disadvantage students who may struggle with multiple-choice questions or who may have a different background or language abilities.
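For readers who want to experiment with this kind of variation more systematically, here is a minimal sketch of how the three prompts above could be generated and submitted in a loop rather than typed one at a time. It assumes the OpenAI Python SDK (v1) with an API key set in the OPENAI_API_KEY environment variable; the model name is illustrative, not a recommendation.

# A minimal sketch: loop over the quality words used in the prompts above
# and ask the model for an assessment item for each one.
# Assumes the OpenAI Python SDK (v1) is installed and OPENAI_API_KEY is set;
# the model name below is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

outcome = "understand opportunity cost"

for quality in ["good", "efficient", "equitable"]:
    article = "an" if quality[0] in "aeiou" else "a"
    prompt = (
        f'Assume that a learning outcome of a course is "{outcome}". '
        f"What is {article} {quality} assessment item for this learning outcome?"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; substitute whatever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {quality} assessment item ---")
    print(response.choices[0].message.content)

Reading the three outputs side by side makes it easier to see how a single adjective in the prompt shifts the values embedded in the suggested assessment.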

NIDL’s Top 10 Good Reads from 2022

For the past seven years, the good folks at the National Institute for Digital Learning have been selecting 10 open access journal articles to share as their “good reads” of the year. It’s quite a feat to find, read, and reflect on more than 100 articles to come up with any list, let alone a list that goes beyond typical metrics like citation counts and takes into account issues like diversity and representation. I am grateful to see one of our papers on the list, but I am more excited to see a few papers that I haven’t read yet! I am listing the papers below, but please read the original post, as our colleagues’ reflection on the process is part of what makes this project great!

Fawns, T. (2022). An entangled pedagogy: Looking beyond the pedagogy—technology dichotomy. Postdigital Science and Education. https://doi.org/10.1007/s42438-022-00302-7

Johnson, N., Seaman, J., & Poulin, R. (2022). Defining different modes of learning: Resolving confusion and contention through consensus. Online Learning Journal, 26(3), 91-110. https://doi.org/10.24059/olj.v26i3.3565

Passey, D. (2022). Theories, theoretical and conceptual frameworks, models and constructs: Limiting research outcomes through misconceptions and misunderstanding. Studies in Technology Enhanced Learning, 1(1), 95-114. https://doi.org/10.21428/8c225f6e.56810a1a

Atenas, J., Beetham, H., Bell, F., Cronin, C., Vu Henry, J., & Walji, S. (2022). Feminisms, technologies and learning: Continuities and contestations. Learning, Media and Technology, 47(1), 1-10. https://doi.org/10.1080/17439884.2022.2041830

Downes, S. (2022). Connectivism. Asian Journal of Distance Education, 17(1), 58-87. http://www.asianjde.com/ojs/index.php/AsianJDE/article/view/623

Saçak, B., Bozkurt, A., & Wagner, E. (2022). Learning design versus instructional design: A bibliometric study through data visualization approaches. Education Sciences, 12(11), 752. https://doi.org/10.3390/educsci12110752

Houlden, S., & Veletsianos, G. (2022). Impossible dreaming: On speculative education fiction and hopeful learning futures. Postdigital Science and Education. https://doi.org/10.1007/s42438-022-00348-7

Holmes, W., & Tuomi, I. (2022). State of the art and practice in AI in education. European Journal of Education, 57(4), 542-570. https://doi.org/10.1111/ejed.12533

Celik, I., Dindar, M., Muukkonen, H., & Järvelä, S. (2022). The promises and challenges of artificial intelligence for teachers: A systematic review of research. TechTrends, 66, 616-630. https://doi.org/10.1007/s11528-022-00715-y

Tlili, A., et al. (2022). Is Metaverse in education a blessing or a curse: A combined content and bibliometric analysis. Smart Learning Environments, 9(24). https://doi.org/10.1186/s40561-022-00205-x

EdTech, magic mushrooms, and magic bullets

In my inbox, an email says:

Alberta’s new regulations on psychedelics to treat mental health issues come into effect today, making it the first province to regulate the use of hallucinogens in therapy.

Today in The Conversation Canada, Erika Dyck of the University of Saskatchewan walks readers through the new regulations, as well as the history, potential and pitfalls of hallucinogens both inside and outside clinical settings.

Psychedelics — from magic mushrooms and ayahuasca to LSD — are having a moment in the spotlight, with celebrity endorsements and a new generation of research on potential clinical uses. There is certainly a need for therapeutics to treat mental health issues, the growing prevalence of which could place a strain on the health-care system.

“Psychedelics are being held up as a potential solution,” Dyck writes. “But, magic mushrooms are not magic bullets.”

That last line captures so much of what is happening in our field, and education more broadly, that it is worth repeating.

  • AI is being held up as a potential solution, but it is not a magic bullet.
  • A return to in-person learning is being held up as a potential solution, but it is not a magic bullet.
  • Online learning is being held up as a potential solution, but it is not a magic bullet.
  • Microcredentials are being held up as a potential solution, but they are not a magic bullet.
  • … and so on

These things – and others – can be solutions to some problems, but consider them part of a Swiss army knife, part of a toolkit. And while sometimes your Swiss army knife will work, this isn’t always going to be the case, especially when we’re considering some of the biggest challenges facing higher ed, the kinds of things that we’re not talking about (e.g., precarious employment and external regulations that encourage and foster conservatism).

And perhaps that’s the crux of the issue: that these solutions are used to respond to the symptoms of larger problems, the things we’re not talking about, rather than their root causes.

Image credit: DALL·E output in response to the prompt “a magic bullet in the style of salvador dali”

AI use in class policy

Ryan Baker shares his class policy on foundation models, and asks for input:

Within this class, you are welcome to use foundation models (ChatGPT, GPT, DALL-E, Stable Diffusion, Midjourney, GitHub Copilot, and anything after) in a totally unrestricted fashion, for any purpose, at no penalty. However, you should note that all large language models still have a tendency to make up incorrect facts and fake citations, code generation models have a tendency to produce inaccurate outputs, and image generation models can occasionally come up with highly offensive products. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit regardless of whether it originally comes from you or a foundation model. If you use a foundation model, its contribution must be acknowledged in the handin; you will be penalized for using a foundation model without acknowledgement. Having said all these disclaimers, the use of foundation models is encouraged, as it may make it possible for you to submit assignments with higher quality, in less time. The university’s policy on plagiarism still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.

As far as policies go, I like what Ryan created because

  • It functions as a policy as well as a pedagogical tool (“you should know that these models do X”) that draws students’ attention to specific issues that are important (e.g., ethics and equity).
  • It encourages use of foundation models. It recognizes that they are available and can have benefits, unlike head-in-the-sand efforts that ban their use.
  • It invites students to engage with the output of foundation models in meaningful ways.

In the LinkedIn thread, Jason D. Baker has a great comment that speaks to this, asking whether students need only state that they used a model or whether they will need to explain in detail how they used model outputs. What would an explanation accompanying a submission look like? I’m not quite sure, but here’s an example of an article demonstrating the ways the human was involved and the ways the AI contributed.

A pan-Canadian certification program for higher education instructors?

Tony Bates wrote his five wishes for online learning in 2023, along with reasons why he’s somewhat pessimistic about them being fulfilled. I wanted to spend a few minutes here discussing alternatives to Tony’s second wish: “A national certification program for higher education instructors.” If this wish has “5% odds of happening” (and I agree with Tony here), what kinds of alternatives might have greater chances of success?

Provincial responsibility for higher education means that (at present) this kind of wish is dead in the water. Some alternatives that might go some way towards addressing the problem of teaching competence are the following:

  • Provincial certification programs for higher education instructors. The BC government has developed a digital learning strategy, which includes a variety of steps, resources, actions, recommendations, and tools to support and expand the effective and equitable use of digital learning in the province. With a strategy in place, developing a provincial certification program makes good sense. Some of the challenges that Tony identifies a federal program facing will still be present in the provincial context (e.g., research-teaching hierarchies, cost, academic freedom issues), but the odds here are greater than 5%. My guesstimate? 10%. Still poor. And smaller-scale. On the other hand, a provincial program, say in BC, might become a proof of concept for other provinces, especially if it is openly licensed, cross-disciplinary, and flexible enough in its design and assessment.
  • Institutional and cross-institutional certification programs, such as BCIT’s Polytechnic Academy proposal, which I understand to be similar to the work that Centers of Teaching and Learning at multiple institutions do – for example, the Facilitating Learning Online (FLO) courses offered by a number of institutions and organizations in the province. There’s a slew of benefits that can come from multi-institutional collaboration on such efforts, as Tony describes. I’m more optimistic on this, especially because there was quite a lot of collaboration during the COVID-19 pandemic that might provide the impetus and support for this, and also because I see collaborative-minded institutions coming together for other initiatives (like the new campus that four island institutions are opening in the Westshore).
  • Institutional certification programs for future faculty. This is close to my heart. Preparing current doctoral students for online/hybrid teaching – and preparing them for teaching in general – is necessary (and, I might add, it also equips them with skills that are relevant outside of the academy, like leading teams in collaborative groupwork). There are other challenges here to be sure (such as academic departments agreeing that this topic is significant enough to warrant a course/certificate/microcredential/something), but this might be an area where the office/school/college of graduate studies plays a pivotal role. Another challenge: this kind of initiative addresses the problem only for future faculty, while leaving the status quo for current faculty untouched. It’s not a solution on its own, but it’s part of a package for a solution.

If you would like to add more to this, the comments are open!
