Open for Public Comment: Minnesota’s Computer Science Strategic Plan

My colleague Cassie Scharber shared this with me, and I am passing it along for broader input. Please share widely and submit comments!
The draft of the Minnesota state plan for K-12 computer science education is now open for public review and feedback (Feb 1–Feb 16). The plan contains recommendations for teacher licensure, academic student standards, and professional learning. More information can be found on MDE's website.

How to Provide Comments on the Plan

1. Review the CS State Plan Draft.

2. Share Your Thoughts: We encourage you to share your thoughts, suggestions, and concerns through the online comment form.

3. Attend Virtual Feedback Sessions: Join our virtual feedback sessions, where you can engage directly with members of the CS Working Group and share your insights. Sessions will be held via Zoom for one hour each. Register for one of the sessions using the following links:

4. Help Us Spread the Word: Help us reach more stakeholders by sharing this information with your colleagues, friends, and community members. The greater the variety of voices we hear, the stronger and more inclusive our strategic plan will be.

CFP: Equity of Artificial Intelligence in Higher Education (Journal of Computing in Higher Education)

Below is a call for papers for a special issue of JCHE focusing on the equity of artificial intelligence in higher education.

Guest Editors:

Lin Lin Lipsmeyer, Southern Methodist University, USA
Nia Nixon, University of California, Irvine, USA
Judi Fusco, Digital Promise, USA
Pati Ruiz, Digital Promise, USA
Cassandra Kelley, University of Pittsburgh, USA
Erin Walker, University of Pittsburgh, USA

In this special issue, we center opportunities and challenges in the rapid development of artificial intelligence (AI) for promoting the equitable design, implementation, and use of technologies within higher education. Equity is meeting people where they are with what they need to be successful (Levinson, Geron, & Brighouse, 2022). Issues related to equity are multiple and complex, involving but not limited to the choice of learning goals in the design of technologies, facilitating broader access to emerging technologies, and ensuring that technologies are responsive to the needs of individuals and communities from historically and systematically excluded populations. We are looking for articles that engage meaningfully with topics related to equity as part of their research questions, design and implementation focus, data analysis, and/or discussion when considering AI systems in higher education. We are interested in articles that address the impact of AI technologies on psychological experiences, processes (e.g., sense of belonging, self-efficacy), and/or domain knowledge. How can we use AI to know what people are learning? How can we use AI to support a diversity of learners and teachers? How should AI technologies in education differ across different populations of learners?

As AI technologies become more sophisticated, there are increasing opportunities for human-AI partnerships in service of learning. There is a need for increased understanding of what might be involved in these partnerships, grounded in the learning sciences, educational psychology, assessment, and related fields. Alongside this work, there is a need for technological advancements that ensure these technologies are designed and implemented in ways that advance equitable outcomes for historically and contemporarily underserved groups. Core questions in this space revolve around the roles for humans and AI systems in higher education: how should each contribute within these partnerships, what should humans take away from these partnerships, and what does learning look like in environments where AI is being widely used? Specific to the JCHE, what is the role of higher education in the evolution of more equitable human-AI partnerships? We define an equitable human-AI partnership as one where humans of varied backgrounds and identities are included in the design and deployment of the technologies, have agency during their use, and see positive outcomes that meet their individual and community goals as they use the technologies for learning and life.

Technologies offer extensions to human abilities but also areas where traditionally human skills might be lost or replaced, yielding both opportunities and pitfalls. AI-based advancements can yield new opportunities for educational settings, including an improved ability to model learning across contexts, support learning in individual and group settings through personalized adaptations, and enhance learners' and teachers' ability to engage in learning environments. On the other hand, the use of AI-based technologies can invite concerns related to privacy and overly prescriptive models of learning. They are often implemented inequitably, sometimes due to a lack of equal access to the technologies, but also due to a lack of culturally relevant design for the communities that are often most harmed by bias encoded in new technologies, and a misalignment between the goals of the technologies and the goals of the communities they serve. AI systems might also replace tasks students have been asked to do in the past, and it is not yet clear what the implications of these new approaches are, or what is lost and what is gained with these changes.

Unique Collaboration with CIRCLS

To support this important agenda of foregrounding equity and inclusion in the design and understanding of AI technologies for higher education, we are partnering with CIRCLS to host a series of three support sessions for authors submitting to this special issue, which will provide additional resources for doing this work effectively, and to convene an Advisory Board to support the authors of submitted articles. Authors are strongly encouraged to participate in these sessions and to engage with the Advisory Board as part of their submissions to this issue.
Key Topics

Papers considered for this special issue will report groundbreaking empirical research or present important conceptual and theoretical considerations at the intersection of equity, inclusion, and AI. In general, papers may pursue one or more of the following goals:

  • Innovating new assessments, technologies, modeling, and pedagogies as we use more AI for learning across a variety of content domains
  • Exploring the impact of AI technologies on marginalized communities
  • Investigating AI literacy, education, and awareness building
  • Defining equitable human-AI partnerships
  • Exploring the impact of AI technologies on domain knowledge and psychological experiences and processes (e.g., sense of belonging, self-efficacy)
  • Aligning the goals of learning in this new AI-enhanced landscape with the diversity of goals held by students as they pursue higher education
  • Engaging with topics such as, but not limited to, privacy, security, transparency, sustainability, labor costs, ethics, learner agency, learner diversity, and cultural relevance as they intersect with more equitable learning processes and outcomes
  • Developing accountability metrics for researchers and educational technology development teams


Timeline

December 15, 2023 — Abstracts of proposed papers due to the editors

January 15, 2024 — Authors notified of initial acceptance of abstracts

February 2024 (date TBD) — CIRCLS Support Session A

April 1, 2024 — Papers due in the Editorial Management system

June 1, 2024 — Reviews completed and authors notified of decisions

June 2024 (date TBD) — CIRCLS Support Session B

October 1, 2024 — Revised manuscripts due

December 1, 2024 — Reviews completed and authors notified of decisions

February 15, 2025 — Final manuscripts due

March 15, 2025 — Final manuscripts sent to the publishers

Submission to the Special Issue

Indicate your interest in participating in the special issue by submitting your abstract here.
For more information, resources, and updates related to this Special Issue, please visit the AI CIRCLS & JCHE Collaboration web page.


Levinson, M., Geron, T., & Brighouse, H. (2022). Conceptions of educational equity. AERA Open, 8, 23328584221121344.

Generative AI course statement

I have a new statement on generative AI for the class I am teaching this semester. I like Ryan Baker's statement from last year for its permissive nature, so I built upon it, adding more specific guidance to clarify expectations for my students. I came up with what you see below. As always, feel free to use it as you see fit:

Artificial Intelligence Course Policy

Within this class, you are welcome to use generative AI models (e.g., ChatGPT, Microsoft Copilot) in your assignments at no penalty. However, you should note that when an activity asks you to reflect, my expectation is that you, rather than the AI, will be doing the reflection. In other words, before using AI tools, you should determine the purpose for which you will be using them. For instance, you might ask a model to critique your ideas, generate additional ideas, or edit your writing, but not to complete assignments on your behalf.

You should also be aware that all generative AI models have a tendency to make up incorrect facts and fake citations, leading to inaccurate (and sometimes offensive) output. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit, regardless of whether it originally comes from you or an AI model.

For each assignment in which you use an AI model, you need to submit a document that shows (a) the prompts you used, (b) the output the AI model produced, (c) how you used that output, and (d) what your original contributions and edits to the AI output were that led to your final submission. Language editing is not an original contribution. An original contribution is one that showcases your critical and inquiry thinking skills and abilities above and beyond what the AI tools have generated. To summarize: if you use a generative AI model, you need to submit an accompanying document that shows the contribution of the model as well as your own, explaining to me how you contributed significant work over and above what the model provided. You will be penalized for using AI tools without acknowledgement. The university's policy on scholastic dishonesty still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.

Postdigital Research: Transforming Borders into Connections [interview]

Petar Jandrić, Alison MacKenzie, and Jeremy Knox edited two books on postdigital research: "Constructing Postdigital Research" and "Postdigital Research: Genealogies, Challenges, and Future Perspectives." When they invited me to interview them about the books and bring my understanding of digital learning and the postdigital to their ideas, I knew that they would be very gracious in answering my questions. Here's the resulting interview, but more than that, read the books: they're rich, diverse, and unique in their approaches and sensibilities.

Are education and learning engineering problems?

Audrey Watters begins her latest post with this insight: “Much of what I wrote about with regards to education applies to this sector [health and wellness] as well, in no small part because everything for Silicon Valley (1) is an engineering problem: a matter of optimization, individualization, and gadgeteering (that’s B. F. Skinner’s word, not mine).”

In a similar fashion, in his 2023 chapter “The Future of the Field Is Not Design,” Jason McDonald notes that in pursuing the mission of transforming learning and teaching, the field of Learning and Instructional Design Technology has “become too fixated on being designers and applying the methods of design thinking. As valuable as design has been for our field, it’s ultimately too narrow an approach to help us have the impact we desire because it overemphasizes the importance of the products and services we create. To be more influential, we need approaches that focus our efforts on nurturing people’s ‘intrinsic talents and capacities’ that are ultimately outside of our ability to manage and control.”

I am nodding along with this, and I am also reminded that both Silicon Valley edtech efforts and LIDT efforts overwhelmingly focus on the individual student and the individual teacher, and much less on the environments, systems, policies, and structural issues that surround our efforts (or, in Berliner’s 2002 work, the contexts that surround us).

Southern New Hampshire University’s efforts with generative AI

Much of the work around generative AI happening in higher education to date focuses on individuals, centering on policies, workshops, and explorations of how individual faculty and students can/should/ought to use generative AI in teaching, learning, and research. Explorations at the system level are rarer, which is why SNHU’s effort to explore what higher education looks like with AI as a feature rather than an add-on is unique. We need such explorations because a higher education system that serves its citizens well and addresses the kinds of complex societal challenges we face today requires experimenting with different approaches, questioning solutionism, and engaging with the possibility that education futures aren’t prescribed.

I’m interested to see the results of this effort. SNHU is generally considered to be successful in answering the question “What does a university built for the digital age look like?” while others have treated the digital as an add-on to operations they considered central. This is not to say that every institution should try to be an SNHU, in the same way that not every institution should try to be an Ivy. But we can all learn from a case study like this.

January 2024: Month ahead

It’s a new year, and I thought I’d try something new: posting a beginning-of-the-month broad and incomplete “to do” list, and revisiting it at the end of the month.

I expect this month to be one where I will be finding my new rhythms, since it will be my first full month at the Learning Technologies program at the University of Minnesota. This month, I plan to:

  • Finalize the class I will be teaching (Foundations of Distance Education)
  • Start teaching
  • Establish a tentative work-from-office/home pattern
  • Establish a (3x/week) exercise pattern that works
  • Finalize my parts of “paper 3,” a survey of the education futures that youth find hopeful (with Shandell)
  • Co-develop a survey instrument around education futures (with Shandell)
  • Finalize my parts of the ER paper, an analysis of the degree to which research is publicly available (with Josh)
  • Continue work on the special issue colleagues and I are co-editing
  • Begin work on a new online learning project (more on this soon)
  • Keep on top of all the things that need to happen this month (webinars, trainings, admissions decisions, setting up my home and work offices, connecting with people, figuring out how new workplace systems work, etc.). I suspect that this will take up a lot of time.
