AI, ChatGPT, instructional design, and prompt crafting

David Wiley wrote a thoughtful post on the ways in which AI and Large Language Models (LLMs) can “provide instructional designers with first drafts of some of the work they do.” He says: “imagine you’re an instructional designer who’s been paired with a faculty member to create a course in microeconomics. These tools might help you quickly create first drafts of” learning outcomes, discussion prompts, rubrics, and formative assessment items. The point is that LLMs can quickly generate rough, mostly accurate drafts that humans can then “review, augment, and polish,” potentially shifting instructional designers from authors to editors. The post is well worth your time.

The question that I’d like to spend some time thinking about is the following: What new knowledge, capacities, and skills do instructional designers need in their role as editors and users of LLMs?

To focus on a single item here, much has been written about prompt engineering or crafting: the practice of writing the input text that guides LLMs like ChatGPT toward desired responses. A certain level of specificity and nuance is necessary to steer the model toward particular values and ideals, and users should not assume that their values are reflected in the first response a prompt returns. Below is an example illustrating how assessment items vary as the desired quality shifts from good, to efficient, to equitable. My prompt is in bold and ChatGPT output follows.


assume that a learning outcome of a course is “understand opportunity cost.” What is a good assessment item for this learning outcome?

A good assessment item for the learning outcome of “understand opportunity cost” would be a case study or scenario-based question that requires the student to analyze and apply their understanding of opportunity cost to make a decision.

For example:

Scenario: A small business owner is trying to decide whether to invest in new equipment or to expand the business by opening a second location. The owner has $100,000 to spend and estimates that the equipment will cost $50,000 and the second location will cost $80,000.

Question: What is the opportunity cost for the business owner in each case and which option would you recommend and why?

This question requires the student to understand the concept of opportunity cost, which is the cost of the next best alternative, in order to make an informed decision about which option would be the best investment for the business. It also allows the student to demonstrate their understanding of the concept by applying it to a real-world scenario.

assume that a learning outcome of a course is “understand opportunity cost.” What is an efficient assessment item for this learning outcome?

An efficient assessment item for the learning outcome of “understand opportunity cost” would be a multiple-choice question that allows the student to quickly and easily demonstrate their understanding of the concept.

For example:

Question: Which of the following is the correct definition of opportunity cost? A) The cost of a good or service B) The cost of the next best alternative C) The total cost of production D) The profit made from a good or service

The correct answer is B) The cost of the next best alternative. This multiple-choice question is efficient as it allows students to quickly demonstrate their understanding of the concept, and it is easy to grade, as there is only one correct answer. This multiple-choice question could be used as a formative assessment to check student’s understanding, and it could be also used as a summative assessment to evaluate their final understanding of the course’s outcome.

assume that a learning outcome of a course is “understand opportunity cost.” What is an equitable assessment item for this learning outcome?

An equitable assessment item for the learning outcome of “understand opportunity cost” would be a problem-solving or open-ended question that allows students to demonstrate their understanding of the concept in a way that is not limited by their background or language abilities.

For example:

Question: Suppose you have $1000 to spend and you want to buy a new computer and a new phone. A computer costs $800 and a phone costs $200. Provide a written explanation of how you would make your choice and what the opportunity cost of your choice is.

This open-ended question allows students to demonstrate their understanding of opportunity cost in their own words and allows them to apply their understanding to a real-world scenario. It also allows for different students to approach the problem differently and does not rely on language or background knowledge. It also allows for the teacher to evaluate student’s understanding of the concept in a more holistic way, and not just based on a single correct answer.

This type of assessment is equitable because it allows all students to demonstrate their understanding of the concept and does not disadvantage students who may struggle with multiple-choice questions or who may have a different background or language abilities.
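The three prompts above differ by a single adjective, which makes the variation easy to reproduce systematically. Below is a minimal sketch in Python, assuming a hypothetical helper function of my own naming (not part of any official API), that templates the prompt and swaps in the quality adjective; in practice, the resulting strings would be sent to a model through whatever interface you use.

```python
# Minimal sketch of templating the prompts used above. The function name
# and template wording are illustrative assumptions, not an official API.

def build_assessment_prompt(outcome: str, quality: str) -> str:
    """Return a prompt asking for an assessment item of a given quality."""
    # Pick "a" vs "an" so the prompt reads naturally for any adjective.
    article = "an" if quality[0].lower() in "aeiou" else "a"
    return (
        f'Assume that a learning outcome of a course is "{outcome}". '
        f"What is {article} {quality} assessment item for this learning outcome?"
    )

# The three variants from the example: good, efficient, equitable.
for quality in ("good", "efficient", "equitable"):
    print(build_assessment_prompt("understand opportunity cost", quality))
```

The implementation detail matters less than the point it illustrates: a one-word change in the prompt yields meaningfully different assessment designs, so the editor’s judgment about which word to use is doing real work.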


  1. Thanks a lot for these useful examples. I’ve also been trying out ChatGPT for drafting online courses, and my first impressions are that

    1. It gives very traditional designs
    2. It gives activities and assessment options that are also easily answered using ChatGPT.

    I believe that ChatGPT is a game changer. Therefore, we’ll need to go beyond traditional online course design to promote actual learning and reliable evaluation. With this in mind, we can analyze how ChatGPT can be useful for designers and students.

    It’s an exciting challenge and we need to go on thinking and trying out together.

  2. I found that the outcomes generated by ChatGPT were bland, rote, weak, or unmeasurable. “Understand” is not a strong outcome. I think instructional design is in a sad state when we look at the work as a product – this is exactly what got us into this corner in the first place. If we start teaching students that the most important thing for the semester is the production of a paper, that is what they will do: produce products. Using ChatGPT for instructional design reduces ID work to the production of so-called “instructional materials” instead of the relationship-building that fosters trust and respect, which is the real work of ID.

    The problem with education right now is NOT that we do not have enough ways to make it “faster and cheaper.” What I have found in over 20 years of instructional design and nearly 30 years in teaching is that each campus is its own community with unique needs. Meeting those needs as an instructional designer requires careful thought, relationship-building, and listening. After that, the creation of any needed materials is the easy part. Putting education back into a can is not a problem that needs to be solved.

    I also can’t believe that responsible educators are still willing to use tools when the ethical and intellectual property issues are not really resolved: when did teachers and students become corporate guinea pigs driven by FOMO? The intellectual property issues are only “resolved” by the usual corporate open-washers who think that just because something is, at this time, technically legal, it is therefore ethical. My thoughts on this are not driven by fear but by a cynicism born from too many companies ignoring things like protecting student data and accessibility. Other commenters have opined that we need to get on board now because, for all its weaknesses, it will only get better. What are we using as a comparison? MS Office? Oracle? Twitter? Has it occurred to anyone that it could get worse?

    • Thanks for these comments, Geoff. I had a whole paragraph on some of this that I opted to keep out for now because I don’t yet have fully formed thoughts, but I think that part of the issue here is that much of the nuance gets lost in the binary ways in which we are approaching these conversations. I am likely guilty of this in this post. So, to back up here: If a student or ID I supervised submitted these outcomes to me, I would have asked them to redo them, and referred them to a variety of resources, including my trusted list of 100 helpful performance terms on page 40 of R. Heinich, M. Molenda, J. Russell, & S. Smaldino (2001), Instructional Media and Technologies for Learning. And that’s part of the point. I would expect them to know that these LOs aren’t good and to apply their knowledge to improve them. If I were teaching the class and using this in a meaningful way, I would have liked to receive an appendix that says “here’s what ChatGPT gave me, here’s what I submitted, and here’s why I made these changes.”

      Re: listening and relationship-building – absolutely. I have more to say on this.

      Re: adoption and ethics. That’s fair and it’s a topic that deserves more attention. This post wasn’t meant as advocacy, but if it reads that way to you, let me know. As you know by following me online for the past decade or so, issues that you raise – including issues of power, control, and erasure – are close to my heart.

  3. I’m gonna jump in here, in part because the example used – an econ course (obviously micro principles) – happens to be my backyard. I think a little realism here might change the conclusions/analysis of the potential of AI/ChatGPT for this kind of instructional design purpose.
    First off, as some have already pointed out, an outcome like “understand opportunity cost” is a total no-go. That weak, poor wording might fly in the world of R1s/flagships, where instructional designers apparently do most of the “writing” of the course and the professor – I’m sorry, “subject matter expert” – delivers said course with little involvement because they’re busy researching.
    At the CC or small regional college or regional state uni, that outcome won’t fly. The Dir of Assessment or equivalent won’t allow it and the accrediting body will criticize it. “Understand” isn’t just weak, it’s not an “observable objective output.”

    My bigger criticism, though, is that the assessment/question setup is weak and ineffective. If I want students to understand opportunity cost – and EVERY econ professor does – a big part of understanding it is recognizing situations where the concept is relevant and then identifying what the opportunity cost is and how it affected the decision. You can definitely do that with a MC question effectively, but not if you tell the student in the question itself that you’re asking about opportunity cost. So, I’d definitely support what George says in his comment response.

    But I’d go further. If LOs are to be anything other than bureaucratic box-checking that only interferes with teaching & learning instead of supporting it, the professor – the one who these days has been reduced to “subject matter expert,” as if they’re merely a copy-editor – needs to be the one who thinks through and drafts the LOs, not some AI stochastic parrot or someone who isn’t a master of the learning we wish students to achieve. The prof is the one who knows the knowledge that we want the student to learn, and they know it at multiple levels: they can repeat verbiage (AI can do that), but they also know it intuitively, conceptually, in context, and why it’s valuable/important/useful. AI/ChatGPT doesn’t know that and never will.

    • Thanks for this thoughtful and extensive reply, Jim! I appreciate where you are coming from, and all I have to add is “hear, hear” to the last paragraph!
