Tag: artificial intelligence

CFP: Equity of Artificial Intelligence in Higher Education (Journal of Computing in Higher Education)

Below is a call for papers for a special issue of the Journal of Computing in Higher Education (JCHE) focusing on Equity of Artificial Intelligence in Higher Education.

Guest Editors:

Lin Lin Lipsmeyer, Southern Methodist University, USA
Nia Nixon, University of California, Irvine, USA
Judi Fusco, Digital Promise, USA
Pati Ruiz, Digital Promise, USA
Cassandra Kelley, University of Pittsburgh, USA
Erin Walker, University of Pittsburgh, USA

In this special issue, we center opportunities and challenges in the rapid development of artificial intelligence (AI) for promoting the equitable design, implementation, and use of technologies within higher education. Equity is meeting people where they are with what they need to be successful (Levinson, Geron, & Brighouse, 2022). Issues related to equity are multiple and complex, involving, but not limited to, the choice of learning goals in the design of technologies, facilitating broader access to emerging technologies, and ensuring that technologies are responsive to the needs of individuals and communities from historically and systematically excluded populations. We are looking for articles that engage meaningfully with topics related to equity as part of their research questions, design and implementation focus, data analysis, and/or discussion when considering AI systems in higher education. We are interested in articles that address the impact of AI technologies on psychological experiences and processes (e.g., sense of belonging, self-efficacy) and/or domain knowledge. How can we use AI to know what people are learning? How can we use AI to support a diversity of learners and teachers? How should AI technologies in education differ across different populations of learners?

As AI technologies become more sophisticated, there are increasing opportunities for human-AI partnerships in service of learning. There is a need for increased understanding of what might be involved in these partnerships, grounded in the learning sciences, educational psychology, assessment, and related fields. Alongside this work, there is a need for technological advancements that ensure these technologies are designed and implemented in ways that advance equitable outcomes for historically and contemporarily underserved groups. Core questions in this space revolve around the roles of humans and AI systems in higher education: how should each contribute within these partnerships, what should humans take away from them, and what does learning look like in environments where AI is widely used? Specific to the JCHE, what is the role of higher education in the evolution of more equitable human-AI partnerships? We define equitable human-AI partnerships as ones in which humans of varied backgrounds and identities are included in the design and deployment of the technologies, have agency during use of the technologies, and all see positive outcomes that meet their individual and community goals as they use the technologies for learning and life.

Technologies extend human abilities, but they also create areas where traditionally human skills might be lost or replaced, yielding both opportunities and pitfalls. AI-based advancements can create new opportunities for educational settings, including an improved ability to model learning across contexts, to support learning in individual and group settings through personalized adaptations, and to enhance learners’ and teachers’ ability to engage in learning environments. On the other hand, the use of AI-based technologies can raise concerns related to privacy and overly prescriptive models of learning. These technologies are often implemented inequitably, sometimes due to unequal access, but also due to a lack of culturally relevant design for the communities most harmed by bias encoded in new technologies, and due to misalignment between the goals of the technologies and the goals of the communities they serve. AI systems may also replace tasks students have traditionally been asked to do, and it is not yet clear what the implications of these new approaches are: what is lost, and what is gained, with these changes?

Unique Collaboration with CIRCLS

To support this important agenda of foregrounding equity and inclusion in the design and understanding of AI technologies for higher education, we are partnering with CIRCLS to host a series of three support sessions for authors submitting to this special issue; these sessions will provide additional resources for doing this work effectively. We are also convening an Advisory Board to support the authors of submitted articles. Authors are strongly encouraged to participate in these sessions and to engage with the Advisory Board as part of their submissions to this issue.

Key Topics

Papers considered for this special issue will report ground-breaking empirical research or present important conceptual and theoretical considerations on the conjunction of equity, inclusion, and AI. In general, papers may pursue one or several of the following goals:

  • Innovating new assessments, technologies, modeling, and pedagogies as we use more AI for learning across a variety of content domains
  • Exploring the impact of AI technologies on marginalized communities
  • Investigating AI literacy, education, and awareness building
  • Defining equitable human-AI partnerships
  • Exploring the impact of AI technologies on domain knowledge and psychological experiences and processes (e.g., sense of belonging, self-efficacy)
  • Aligning the goals of learning in this new AI-enhanced landscape with the diversity of goals held by students as they pursue higher education
  • Engaging with topics such as, but not limited to, privacy, security, transparency, sustainability, labor costs, ethics, learner agency, learner diversity, and cultural relevance as they intersect with more equitable learning processes and outcomes
  • Developing accountability metrics for researchers and educational technology development teams

Timeline

December 15, 2023 — Abstracts of proposed papers due to the editors

January 15, 2024 — Authors notified of initial acceptance of abstracts

February 2024 (date TBD) — CIRCLS Support Session A

April 1, 2024 — Papers due in the Editorial Management system

June 1, 2024 — Reviews completed & authors notified of decisions

June 2024 (date TBD) — CIRCLS Support Session B

October 1, 2024 — Revised manuscripts due

December 1, 2024 — Reviews completed & authors notified of decisions

February 15, 2025 — Final manuscripts due

March 15, 2025 — Final manuscripts sent to the publishers

Submission to the Special Issue

Indicate your interest in participating in the special issue by submitting your abstract at https://pitt.co1.qualtrics.com/jfe/form/SV_bBpc2DTNPk5P7nM
For more information, resources, and updates related to this Special Issue, please visit the AI CIRCLS & JCHE Collaboration web page.

Reference:

Levinson, M., Geron, T., & Brighouse, H. (2022). Conceptions of educational equity. AERA Open, 8. https://doi.org/10.1177/23328584221121344

GPTs and one student’s custom version of ChatGPT

A few days ago OpenAI released GPTs, which are “custom versions of ChatGPT that you can create for a specific purpose” (announcement here: https://openai.com/blog/introducing-gpts), meaning that one could produce a GPT that is dedicated to a specific set of tasks. I’ve seen a few of these so far, including Mike Sharples’ chatbot that uses his book to offer teaching advice https://chat.openai.com/g/g-RCHNUwnD1-teachsmart, Mairéad Pratschke’s expert in digital education and learning design https://chat.openai.com/g/g-hrPUmXB5X-digital-professor, and this degree builder I came across https://chat.openai.com/g/g-KVB8vSaAJ-degree-designer.

One of the more interesting use cases I saw was one student’s Reddit post titled “I just replaced my chemistry professor with AI:” https://www.reddit.com/r/ChatGPT/comments/17slpti/i_just_replaced_my_chemistry_professor_with_ai/ 

 “I gave it the prompt: you are a college professor teaching inorganic chemistry 2 thermodynamics. The scope of your class is covered in the uploaded documents.

I then uploaded my professors PowerPoint slides and copied and pasted the chapter from the book. All the exercises, extra problems, and a thermodynamics properties table. I also included a summary of the topics covered.

I had to double prompt it to only teach from the documents and upload pdfs seemed to work a lot better than .txt files.”

Lots of areas to reflect on here, including student creativity, privacy, and ethics.

Recommendations on the use of Generative Artificial Intelligence at Royal Roads University

Today I met with Royal Roads University’s Board of Governors to present the work that we have completed in relation to generative AI. I appreciated the opportunity not only to meet with the board, but also to hear comments and questions around this work and AI more generally.

Because this was a public session, I thought it might be beneficial to share the recommendations we put forward. The full report will likely appear on the university website, but for those of you who are tracking or thinking about institutional responses, this short, timely summary might be more valuable than a more detailed future report.

As background: In March 2023, Royal Roads University established a generative Artificial Intelligence (AI) taskforce. Chaired by Dr. George Veletsianos, the taskforce consisted of Drs. Niels Agger-Gupta, Jo Axe, Geoff Bird, Elizabeth Childs, Jaigris Hodson, Deb Linehan, Ross Porter, and Rebecca Wilson-Mah. This work was also supported by our colleagues Rosie Croft (University Librarian) and Ken Jeffery and Keith Webster (Associate Directors, Centre for Teaching and Educational Technologies). The taskforce produced a report with 10 recommendations by June 2023. The report (and its recommendations) should be seen as a working document that ought to be revisited and revised periodically, as the technology, ethics, and use of AI are rapidly evolving. The recommendations were:

  1. Establish and publicize the university’s position on Generative AI
  2. Build human capacity
  3. Engage in strategic and targeted hiring
  4. Establish a faculty working group and foster a community of practice
  5. Investigate, and potentially revise, assessment practices
  6. Position the Centre for Teaching and Educational Technologies as a “go-to” learning and resource hub for teaching and learning with AI
  7. Future-proof Royal Roads University [comment: a very contextual recommendation, but to ensure that this isn’t understood to encourage an instrumentalist view of AI, or to mean that the institution should focus solely on AI, the report urged readers to “consider that the prevalence of AI will have far-reaching institutional impacts, which will add to the social, economic, political, and environmental pressures that the University is facing”]
  8. Revise academic integrity policy
  9. Develop and integrate one common research methods course [comment: another very contextual recommendation that likely doesn’t apply to others; what does apply is the relevance of AI to student research, suggesting that research methods courses consider the relationships between AI and research practices]
  10. Ensure inclusivity and fair representation in AI-related decisions

I hope this is helpful to others.

So very tired of predictions about AI in education…

By people who aren’t AIEd experts, education technology experts, education experts, and the like.

Case in point: “AI likely to spell end of traditional school classroom, leading [computer science] expert says.”

I appreciate cross-disciplinary engagement as much as I love guacamole (which is to say, a lot), but I’d also appreciate it if we stopped wasting our time on these same unfulfilled prophecies year after year, decade after decade.

Will AI impact education? In some ways it will, and in others it won’t. Will education shape the ways AI comes to be used in classrooms? In some ways it will, and in others it won’t.

Truth be told, this negotiated relationship isn’t as appealing as DISRUPTION, AVALANCHE, MIND-READING ROBO-TUTOR IN THE SKY, etc., which are words that readers of the history of edtech will recognize.
