Open Access fees are exorbitant

I signed another publishing agreement, and I was, once again, taken aback by the exorbitant OA fees that publishers charge. From the publisher's site:

Publishing open access with us (gold OA) lets you share and re-use your article immediately after publication.

The article processing charge (APC) to publish an article open access in Educational Technology Research and Development is:

Article processing charge (excluding local taxes)
£2,290.00 / $3,290.00 / €2,590.00

Some organisations will pay some or all of your APC.

If you want to publish subscription, instead of open access, there will be an option to do that in the following steps.

I know, I know, we probably shouldn’t have submitted to a journal that isn’t gold and free OA by default, *but* the system is structured in such a way that my junior co-authors would benefit from being published in this journal.

While not a solution to this problem, it’s worth noting what the publishing agreement says about sharing the article. From the terms:

The Assignee grants to the Author (i) the right to make the Accepted Manuscript available on their own personal, self-maintained website immediately on acceptance.

This is the approach that I use for nearly all my papers, but it’s worth remembering that what this really does is suggest an individual solution to a systemic problem, which will do little to solve the broader problem of lack of access to research.

There are other statements in the terms around placing one’s article in an institutional repository, but author self-archiving is generally the first and most immediate option available to individuals. And Google Scholar will often index the author’s personal website, making the article discoverable. Google Scholar’s approach of identifying articles and placing publicly available versions in search results is a systemic solution to the problem. Unpaywall is similar in that respect.


[To be clear: this post isn’t about ETR&D. It’s about the publishers & the publishing system]

Reflections: 100 Year EdTech Project design summit

Last week, I was at the 100 Year EdTech Project Design Summit in Phoenix, AZ, and I thought it might be worthwhile to post some raw reflections here, captured throughout the days and unedited. The event description reads:

“Leaders, educators, futurists, designers, students, lifelong learners, visionaries – all will be invited to explore the last 50 years of technology’s impact on education, observe where we stand today, and imagine the future of education for generations to come.”

I appreciated the event starting with a student keynote panel. Some ideas I heard included attention to equity, excitement about the role of AI (I have lots and lots of thoughts on this, including how much space these conversations are taking and concern around the kinds of conversations they’re displacing), the inevitability of technology, the limits of our imagination (e.g., a comment made around how X years ago “my concern was about walking to the library and spending hours looking for an article, and so, no, I couldn’t have imagined the progress of tech”), emphasizing community, expanding access to tech (e.g., broadband for all), and sharing wealth and resources.

And because I’m obviously reflecting on these from my own perspective: These conversations are somewhat similar in Canada, but there’s a stark difference: The starting point there, or at least in the conversations I was part of, was typically decolonization, conciliation, indigenization, equity, and inclusion. The starting point here, or at least at this event, is technology in the service of similar ends… in other words, there’s a more pronounced technosolutionist stance at this event. Granted, that’s the focus of the event, which makes solutionism a difficult pattern of thought to escape/resist.

The slide below from Kiran Budhrani recentered some ideas around the broader issues that don’t have to do with tech but that shape education nonetheless.

This was punctuated by Bryan Alexander highlighting climate adaptation, and especially climate migrants and the impacts of that impending reality. I say reality because I’ve reached the conclusion that climate collapse is more likely than otherwise. I hope I am proven wrong.

A great question from a medical professional was the following: what do medical professionals need to know when patients (all of us) come to them with some basic knowledge of their ailments? The focus here was on skills and knowledge relating to empathy and communication on the medical professional’s part, as well as the ethical issues around AI systems that will invariably support patients (e.g., what data were they trained on? how trustworthy are they? etc.). I also think this area relates to patients navigating the flood of information/misinformation circulating online and their use of various technologies to make sense of their ongoing and new ailments. This reminds me of Dave Cormier’s book, which argues that we ought to be preparing people to navigate uncertainty at a time of information abundance.

Much of the event focused on small group discussions around approaches that might address certain challenges. I thought that framing the role of edtech in the future in terms of scenarios was grounding and valuable. The discussions in my group were rich, and there were lots and lots of thoughts and ideas about our topic.

Finally, it was great to catch up with Philippos Savvides, a fellow Cypriot at ASU, who partners with and supports edtech startups around the world. I also appreciated a short tour of EdPlus (ASU’s internally focused OPM) and learning more about their work. Rather than outsourcing online program management, like so many other institutions, EdPlus focuses on innovating and scaling ASU offerings. I believe that operations integral to the institution (and OPM is one of them) ought to stay within the institution and ought to be cultivated. I like what ASU is doing here. And through luck or foresight, it’s perhaps avoiding the entanglements of an OPM market in turbulence.


Update #1: The paper “Climate imaginaries as praxis” showed up in my inbox a few hours after posting this, and I wish I had read it prior to the summit. The abstract reads: As communities around the world grapple with the impacts of climate change on the basic support systems of life, their future climate imaginaries both shape and are shaped by actions and material realities. This paper argues that the three globally dominant imaginaries of a climate changed future, which we call ‘business as usual’, ‘techno-fix’ and ‘apocalypse’, fail to encourage actions that fundamentally challenge or transform the arrangements that underpin systemic injustices and extractive forms of life. And yet, to meet the challenges associated with food production, energy needs, and the destruction of ecosystems, people are coming together, not only to take transformative action, but in doing so, to create and nurture alternative imaginaries. This paper presents empirical findings about how communities in north and south India and south-east Australia are pre-figuring alternative futures, locally and in most cases in the absence of broader state support. An analysis of communities’ actions and reflections indicates that their praxes are altering their future imaginaries, and we consider how these local shifts might contribute to broader changes in climate imaginaries. At the heart of the emerging imaginaries are a set of transformations in the relational fabric within which communities are embedded and how they attend to those relations: relations within community, with the more-than-human, and with time.

Open for Public Comment: Minnesota’s Computer Science Strategic Plan

My colleague Cassie Scharber shared this with me and I am passing it along for broader input. Please share widely and submit comments!
The draft of the Minnesota state plan for K12 computer science education is now open for public review and feedback (Feb 1-Feb 16). This plan contains recommendations for teacher licensure, academic student standards, and professional learning. More information can be found on MDE’s website.

How to Provide Comments on the Plan

1. Review the CS State Plan Draft.

2. Share Your Thoughts: We encourage you to share your thoughts, suggestions, and concerns through the online comment form.

3. Attend Virtual Feedback Sessions: Join our virtual feedback sessions where you can engage directly with members of the CS Working Group and share your insights. Sessions will be held via Zoom for one hour each. Register for one of the sessions using the following links:

4. Help Us Spread the Word: Help us reach more stakeholders by sharing this information with your colleagues, friends, and community members. The greater the variety of voices we hear, the stronger and more inclusive our strategic plan will be.

CFP: Equity of Artificial Intelligence in Higher Education (Journal of Computing in Higher Education)

Below is a call for papers for a special issue of JCHE focusing on Equity of Artificial Intelligence in Higher Education.

Guest Editors:

Lin Lin Lipsmeyer, Southern Methodist University, USA
Nia Nixon, University of California, Irvine, USA
Judi Fusco, Digital Promise, USA
Pati Ruiz, Digital Promise, USA
Cassandra Kelley, University of Pittsburgh, USA
Erin Walker, University of Pittsburgh, USA

In this special issue, we center opportunities and challenges in the rapid development of artificial intelligence (AI) for promoting the equitable design, implementation, and use of technologies within higher education. Equity is meeting people where they are with what they need to be successful (Levinson, Geron, & Brighouse, 2022). Issues related to equity are multiple and complex, involving but not limited to the choice of learning goals in the design of technologies, facilitating broader access to emerging technologies, and ensuring that technologies are responsive to the needs of individuals and communities from historically and systematically excluded populations. We are looking for articles that engage meaningfully with topics related to equity as part of their research questions, design and implementation focus, data analysis, and/or discussion when considering AI systems in higher education. We are interested in articles that address the impact of AI technologies on psychological experiences, processes (e.g., sense of belonging, self-efficacy), and/or domain knowledge. How can we use AI to know what people are learning? How can we use AI to support a diversity of learners and teachers? How should AI technologies in education differ across different populations of learners?

As AI technologies become more sophisticated, there are increasing opportunities for human-AI partnerships in service of learning. There is a need for increased understanding of what might be involved in these partnerships, grounded in the learning sciences, educational psychology, assessment, and related fields. Alongside this work there is a need for technological advancements that ensure these technologies are designed and implemented in ways that advance equitable outcomes for historically and contemporarily underserved groups. Core questions in this space revolve around the roles for humans and AI systems in higher education: how should each contribute within these partnerships, what should humans take away from these partnerships, and what does learning look like in environments where AI is being widely used? Specific to the JCHE, what is the role of higher education in the evolution of more equitable human-AI partnerships? We define equitable human-AI partnerships as ones where humans of varied backgrounds and identities are included in the design and deployment of the technologies, have agency during use of the technologies, and all see positive outcomes that meet their individual and community goals as they use the technologies for learning and life.

Technologies offer extensions to human abilities but also areas where traditionally human skills might be lost or replaced, yielding opportunities and pitfalls. AI-based advancements can yield new opportunities for educational settings, including an improved ability to model learning across contexts, support learning in individual and group settings through personalized adaptations, and enhance learners’ and teachers’ ability to engage in learning environments. On the other hand, the use of AI-based technologies can invite concerns related to privacy and overly prescriptive models of learning. They are often implemented inequitably, sometimes due to lack of equal access to the technologies, but also due to a lack of culturally relevant design for communities that are often most harmed by bias encoded in new technologies, and a misalignment between the goals of the technologies and the goals of the communities they serve. AI systems might also replace things students have been asked to do in the past, and it is not clear what the implications of these new approaches are: what is lost and what is gained with these changes?

Unique Collaboration with CIRCLS

To support this important agenda related to foregrounding equity and inclusion in the design and understanding of AI technologies for Higher Ed, we are partnering with CIRCLS to host a series of three support sessions for authors submitting to this special issue that will provide additional resources for doing this effectively, as well as convening an Advisory Board to support the authors of submitted articles. Authors are strongly encouraged to participate in these sessions and engage with the Advisory Board as part of their submissions to this issue.
Key Topics

Papers considered for this special issue will report ground-breaking empirical research or present important conceptual and theoretical considerations on the conjunction of equity, inclusion, and AI. In general, papers may pursue one or several of the following goals:

  • Innovating new assessments, technologies, modeling, and pedagogies as we use more AI for learning across a variety of content domains
  • Exploring the impact of AI technologies on marginalized communities
  • Investigating AI literacy, education, and awareness building
  • Defining equitable human-AI partnerships
  • Exploring the impact of AI technologies on domain knowledge and psychological experiences and processes (e.g., sense of belonging, self-efficacy)
  • Aligning goals of learning in this new AI-enhanced landscape with the diversity of goals held by students as they pursue higher education
  • Engaging with topics such as but not limited to privacy, security, transparency, sustainability, labor costs, ethics, learner agency, learner diversity, and cultural relevance as they intersect with more equitable learning processes and outcomes
  • Developing accountability metrics for researchers and educational technology development teams

Timeline

December 15, 2023 – Abstracts of proposed papers due to the editors

January 15, 2024 – Authors notified of initial acceptance of abstracts

February 2024 (date TBD) – CIRCLS Support Session A

April 1, 2024 – Papers due in the Editorial Management system

June 1, 2024 – Reviews completed and authors notified of decisions

June 2024 (date TBD) – CIRCLS Support Session B

October 1, 2024 – Revised manuscripts due

December 1, 2024 – Reviews completed and authors notified of decisions

February 15, 2025 – Final manuscripts due

March 15, 2025 – Final manuscripts sent to the publishers
Submission to the Special Issue

Indicate your interest in participating in the special issue by submitting your abstract at https://pitt.co1.qualtrics.com/jfe/form/SV_bBpc2DTNPk5P7nM
For more information, resources, and updates related to this Special Issue, please visit the AI CIRCLS & JCHE Collaboration web page.

Reference:

Levinson, M., Geron, T., & Brighouse, H. (2022). Conceptions of educational equity. AERA Open, 8. https://doi.org/10.1177/23328584221121344

Generative AI course statement

I have a new statement on generative AI in the class that I am teaching this semester. I like Ryan Baker’s statement from last year due to its permissive nature, and so I built upon it by adding some more specific guidance for my students to clarify expectations. I came up with what you see below. As always, feel free to use it as you see fit:

Artificial Intelligence Course Policy

Within this class, you are welcome to use generative AI models (e.g., ChatGPT, Microsoft Copilot) in your assignments at no penalty. However, you should note that when an activity asks you to reflect, my expectation is that you, rather than the AI, will be doing the reflection. In other words, prior to using AI tools, you should determine for what purpose you will be using them. For instance, you might ask them for help in critiquing your ideas, in generating additional ideas, or in editing, but not in completing assignments on your behalf.

You should also be aware that all generative AI models have a tendency to make up incorrect facts and fake citations, leading to inaccurate (and sometimes offensive) output. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit, regardless of whether it originally comes from you or an AI model.

For each assignment in which you use an AI model, you need to submit a document that shows (a) the prompts you used, (b) the output the AI model produced, (c) how you used that output, and (d) what your original contributions and edits were to the AI output that led to your final submission. Language editing is not an original contribution. An original contribution is one that showcases your critical and inquiry thinking skills and abilities above and beyond what the AI tools have generated. To summarize: if you use a generative AI model, you need to submit an accompanying document that shows the contribution of the model as well as your original contribution, explaining to me how you contributed significant work over and above what the model provided. You will be penalized for using AI tools without acknowledgement. The university’s policy on scholastic dishonesty still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.

Postdigital Research: Transforming Borders into Connections [interview]

Petar Jandrić, Alison MacKenzie & Jeremy Knox edited two books on postdigital research: “Constructing Postdigital Research” and “Postdigital Research: Genealogies, Challenges, and Future Perspectives.” When they invited me to interview them about the books and bring my understanding of digital learning and the postdigital to their ideas, I knew that they would be very gracious in answering my questions. Here’s the resulting interview, but more than that, read the books: They’re rich, diverse, and unique in their approaches and sensibilities.

Are education and learning engineering problems?

Audrey Watters begins her latest post with this insight: “Much of what I wrote about with regards to education applies to this sector [health and wellness] as well, in no small part because everything for Silicon Valley (1) is an engineering problem: a matter of optimization, individualization, and gadgeteering (that’s B. F. Skinner’s word, not mine).”

In a similar fashion, in his 2023 chapter “The Future of the Field is not Design,” Jason McDonald notes that in pursuing the mission of transforming learning and teaching, the field of Learning and Instructional Design Technology has “become too fixated on being designers and applying the methods of design thinking. As valuable as design has been for our field, it’s ultimately too narrow an approach to help us have the impact we desire because it overemphasizes the importance of the products and services we create. To be more influential, we need approaches that focus our efforts on nurturing people’s ‘intrinsic talents and capacities’ that are ultimately outside of our ability to manage and control.”

I am nodding along with this, and I am also reminded that both Silicon Valley edtech efforts and LIDT efforts overwhelmingly focus on the individual student and the individual teacher, and much less on the environments, systems, policies, and structural issues that surround our efforts (or, in Berliner’s 2002 work, the contexts that surround us).
