Category: generative artificial intelligence

Using NotebookLM to facilitate knowledge mobilization and broader dissemination of research

Learning technologies research can be useful to many different groups of people. This is one of the reasons why there has been an emphasis on getting research findings into the hands of broader audiences – aka knowledge mobilization. “Broader” here refers to audiences other than researchers. Examples of audiences that might find value in learning technologies research include educators (teachers and higher ed faculty), administrators, policymakers, instructional/learning designers, edtech developers, edtech entrepreneurs, and parents.

One* of the challenges that researchers face in doing this work is representing and translating their research in ways that broader audiences will find meaningful, engaging, and useful. Some approaches that researchers have used include podcasts, YouTube videos (e.g., see our ResearchShorts series), opinion editorials, and so on. In doing this work over the years, I have learned that it’s incredibly helpful to see examples of how others translate their research for the broader public.

This is where NotebookLM, the Google AI tool that generates audio summaries of research papers, comes in. Plug in a paper, say D’Arcy Norman’s dissertation or our recently published paper Is Artificial Intelligence in education an object or a subject?, and it generates a five-minute podcast hosted by two synthetic voices.

Some will say that the AI-generated podcast is the outcome, i.e. the knowledge dissemination vehicle: You now have a podcast for your research, and the usual caveats around accuracy, hallucinations, and biases apply.

But there’s another, perhaps more personally meaningful way to view this: the AI-generated product is a means to an end, a way to help you think about how you might go about translating your research for broader audiences. It’s one thing to read an op-ed and marvel at the ways an author frames and describes their research. It’s another to read or listen to how your own research is translated. Try it with one of your own papers: listen closely to how the topic is introduced, explore the analogies, and pay attention to the accessible language. This is not to say that you should offload your writing or dissemination efforts to this tool. Rather, it’s a way to see an example of how your research, translated for broader audiences, could be framed and described.

To be clear, I am certain that you could do better than the AI-generated podcast/summary. There will likely be inaccuracies and shortcomings in it. Also, the summary isn’t tailored to a specific audience, so if your target audience is policymakers, for example, your arguments may be different than if your audience were teachers.

Let me know what happens if you try this!

* There are many other challenges in doing this work, including systemic issues (e.g., what the institution values), whose voices are prioritized, and so on.

Call for submissions: the intersection of AI + open education

MIT Open Learning is announcing a 2024 call for proposals from practitioners in open education and AI from around the world. We invite individual authors or groups of authors from and across higher education institutions, nonprofits, philanthropy, and industry working in AI to submit proposals for rapid response papers or multimedia projects that explore the future of open education in an ecosystem inhabited and shaped by AI systems. More details at https://aiopeneducation.pubpub.org/2024call

Edtech history, erasure, udacity, and blockchain

This thought in Audrey’s newsletter (update: link added March 30th) caught my attention and encouraged me to share a related story.

 [Rose Eveleth] notes how hard it can be to tell a history when you try to trace a story to its primary sources and you simply cannot find the origin, the source. (I have been thinking a lot about this in light of last week’s Udacity news. So much of “the digital” has already been scrubbed from the web. The Wired story where Sebastian Thrun claimed that his startup would be one of ten universities left in the world? It’s gone. Many of the interviews he did where he said other ridiculous things about ed-tech – gone. What does this mean for those who will try to write future histories of ed-tech? Or, no doubt, of tech in general?) Erasure.


Remember how blockchain was going to revolutionize education? OK, let’s get into the weeds of a related idea and how almost everything that happened around it has also disappeared from the web.

One way blockchain was going to revolutionize education was through the development of education apps and software running on the blockchain. Around 2017, Initial Coin Offerings (ICOs) were the means of raising money to build those apps. An ICO was the cryptocurrency equivalent of an initial public offering: a company would offer people a new cryptocurrency token in exchange for funds to launch the company. The token would then provide some utility for its holders relating to the app/software (e.g., you could exchange it for courses or study sessions, or hold on to it hoping that its value would increase and resell it later). The basic idea here was crowdfunding, and a paper published in the Harvard International Law Journal estimates that contributions to ICOs exceeded $50bn by 2019. The Wikipedia ICO page includes more background.
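To make those mechanics concrete, here is a minimal, purely illustrative sketch of the token-sale model described above: backers exchange money for newly issued tokens, which they can later redeem for the promised utility (e.g., courses). All names and numbers are hypothetical, and the sketch models only the bookkeeping; actual ICOs were implemented as smart contracts, typically ERC-20 tokens on Ethereum, not as Python classes.

```python
# Purely illustrative sketch of ICO-style crowdfunding mechanics.
# All names and numbers are hypothetical.

class EdTokenSale:
    """Toy model of a utility-token sale for a hypothetical edtech venture."""

    def __init__(self, tokens_per_dollar: int = 100):
        self.tokens_per_dollar = tokens_per_dollar
        self.balances: dict[str, int] = {}   # backer -> token balance
        self.funds_raised = 0.0              # dollars collected by the venture

    def contribute(self, backer: str, dollars: float) -> int:
        """A backer sends money; the venture mints and issues new tokens in return."""
        minted = int(dollars * self.tokens_per_dollar)
        self.balances[backer] = self.balances.get(backer, 0) + minted
        self.funds_raised += dollars
        return minted

    def redeem_for_course(self, backer: str, price_in_tokens: int) -> str:
        """The promised 'utility': spend tokens on courses, study sessions, etc."""
        if self.balances.get(backer, 0) < price_in_tokens:
            raise ValueError("insufficient token balance")
        self.balances[backer] -= price_in_tokens
        return "course access granted"


sale = EdTokenSale()
sale.contribute("alice", 500)                   # a backer funds the project at launch
print(sale.redeem_for_course("alice", 2_000))   # later spends tokens on a course
```

In practice, of course, the pitch often rested less on redeeming tokens than on the hope that their resale value would rise, which is the crowdfunding-meets-speculation dynamic the whitepapers traded on.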

A number of these ICOs focused on education. Companies/individuals/friends* would create a website and produce a whitepaper describing their product. Whitepapers varied, but they typically described the problem to be solved, the blockchain-grounded edtech solution they offered, use cases, the team behind the project, a roadmap, and the token sale/model.

To give you a sense of the edtech claims included in one of those whitepapers:

“The vision is the groundbreaking disruption of the old education industry and all of its branches. The following points are initial use cases which [coin] can provide … Users pay with [coins] on every major e-learning platform for courses and other content they have passed or consumed… Institutions can get rid of their old and heavy documented certification process by having it all digitalized, organized, governed and issued by the [coin] technology.”

I was entertaining an ethnographic project at the time and collected a few whitepapers. For a qualitative researcher, those whitepapers were a treasure trove of information. But, looking online, they’re largely scrubbed, gone, erased. In some cases, founders’ LinkedIn profiles were scrubbed and the online communities surrounding the projects disappeared, sometimes as soon as it became clear that the ICOs wouldn’t raise the millions they were hoping for.

Some of you following this space might remember Woolf, the “world’s first blockchain university” launched by Oxford academics. And you might also remember that, like other edtech projects, it “pivoted.” See Martin Weller’s writing and David Gerard’s writing on this. Like so many others, the whitepaper describing the vision, the impending disruption of higher ed through a particular form of edtech, is gone. David kept a copy of that whitepaper, and I have copies of a couple of whitepapers from other ventures. But, by and large, that evidence is gone. I get it. Scammers scam, honest companies pivot, the two aren’t the same, and reputation management is a thing. But, I hope that this short post serves as a small reminder to someone in the future that grandiose claims around educational technology aren’t new. And perhaps, just perhaps, at a time of grandiose claims around AI in education, there are some lessons here.


Generative AI course statement

I have a new statement on generative AI in the class that I am teaching this semester. I liked Ryan Baker’s statement from last year for its permissive nature, so I built upon it by adding more specific guidance to clarify expectations for my students. I came up with what you see below. As always, feel free to use it as you see fit:

Artificial Intelligence Course Policy

Within this class, you are welcome to use generative AI models (e.g., ChatGPT, Microsoft Copilot) in your assignments at no penalty. However, you should note that when an activity asks you to reflect, my expectation is that you, rather than the AI, will be doing the reflection. In other words, prior to using AI tools, you should determine for what purpose you will be using them. For instance, you might ask them for help in critiquing your ideas, in generating additional ideas, or in editing, but not in completing assignments on your behalf.

You should also be aware that all generative AI models have a tendency to make up incorrect facts and fake citations, leading to inaccurate (and sometimes offensive) output. You will be responsible for any inaccurate, biased, offensive, or otherwise unethical content you submit, regardless of whether it originally comes from you or an AI model.

For each assignment in which you use an AI model, you need to submit a document that shows (a) the prompts you used, (b) the output the AI model produced, (c) how you used that output, and (d) what your original contributions and edits to the AI output were that led to your final submission. Language editing is not an original contribution. An original contribution is one that showcases your critical and inquiry thinking skills and abilities above and beyond what the AI tools have generated. To summarize: if you use a generative AI model, you need to submit an accompanying document that shows the contribution of the model as well as your original contribution, explaining to me how you contributed significant work over and above what the model provided. You will be penalized for using AI tools without acknowledgement. The university’s policy on scholastic dishonesty still applies to any uncited or improperly cited use of work by other human beings, or submission of work by other human beings as your own.

Recommendations on the use of Generative Artificial Intelligence at Royal Roads University

Today I met with Royal Roads University’s Board of Governors to present the work that we have completed in relation to Generative AI. I appreciated the opportunity not only to meet with the board, but also to hear comments and questions around this work and AI more generally.

Because this was a public session, I thought it might be beneficial to share the recommendations we put forward. The full report will likely appear on the university website, but for those of you who are tracking or thinking about institutional responses, this short, timely summary might be more valuable than a more detailed future report.

As background: In March 2023, Royal Roads University established a generative Artificial Intelligence (AI) taskforce. Chaired by Dr. George Veletsianos, the taskforce consisted of Drs. Niels Agger-Gupta, Jo Axe, Geoff Bird, Elizabeth Childs, Jaigris Hodson, Deb Linehan, Ross Porter, and Rebecca Wilson-Mah. This work was also supported by our colleagues Rosie Croft (University Librarian), and Ken Jeffery and Keith Webster (Associate Directors, Centre for Teaching and Educational Technologies). The taskforce produced a report with 10 recommendations by June 2023. The report (and its recommendations) should be seen as a working document that ought to be revisited and revised periodically, as the technology, ethics, and use of AI are rapidly evolving. The recommendations were:

  1. Establish and publicize the university’s position on Generative AI
  2. Build human capacity
  3. Engage in strategic and targeted hiring
  4. Establish a faculty working group and foster a community of practice
  5. Investigate, and potentially revise, assessment practices
  6. Position the Centre for Teaching and Educational Technologies as a “go-to” learning and resource hub for teaching and learning with AI
  7. Future-proof Royal Roads University [comment: a very contextual recommendation, but to ensure that this isn’t understood as encouraging an instrumentalist view of AI, or as suggesting that the institution should focus solely on AI, the report urged readers to “consider that the prevalence of AI will have far-reaching institutional impacts, which will add to the social, economic, political, and environmental pressures that the University is facing.”]
  8. Revise academic integrity policy
  9. Develop and integrate one common research methods course [comment: another very contextual recommendation that likely doesn’t apply to others; what does apply is the relevance of AI to student research, which suggests that research methods courses should consider the relationships between AI and research practices.]
  10. Ensure inclusivity and fair representation in AI-related decisions

I hope this is helpful to others.

On Vanderbilt’s disabling of Turnitin’s AI detection feature, and faculty guidance

Last week, Vanderbilt University decided to disable Turnitin’s AI detection tool. Congratulations are in order!

To date, there’s little evidence as to the effectiveness and appropriateness of such tools (also see: their unintended consequences). Equally importantly, Vanderbilt’s decision lends credence and support to the recommendations that numerous working groups put forward to their institutions, and paves the way for others to feel confident in taking similar actions. Earlier this year, for example, I led a generative AI taskforce at Royal Roads University. The relevant recommendation we put forward in early June is this:

Recommendation 5: Investigate, and potentially revise, assessment practices.

We recommend that faculty examine their current assessment practices and question them through the lens of AI tools. For instance, faculty could try their discussion prompts or reflection questions with AI tools to explore the role and limits of this technology. Regardless of the outcome of such efforts, we recommend that faculty do not rely on AI detection tools to determine whether learners used AI in their work. A service that claims to detect AI-generated text does not provide transparency on how that determination is made and encourages a culture of suspicion and mistrust. Emerging research also highlights challenges with reliably detecting AI-generated text (Sadasivan et al., 2023). Instead, we recommend that faculty engage with learners in conversations at the beginning of the course as to the appropriate and ethical use of AI. We further encourage faculty to continue their efforts towards experiential and authentic learning – including work-integrated learning, live cases, active learning opportunities, field trips, service learning, iterative writing assignments, project-based learning, and others. These are not necessarily failsafe approaches to deter cheating, and it may even be possible to leverage AI in support of experiential learning. Ultimately, we recommend that faculty question their assignments at a time when generative AI is widely available.

Metaphors of generative AI in education

It’s been interesting to see what kinds of conceptual metaphors have been used to describe generative AI. Conceptual metaphors are ways to talk about something by relating it to something else. This matters because the ways we choose to speak about generative AI in education shape experiences, expectations, and even outcomes. Early pedagogical agent research, for example, identified different roles for agents, often positioning them as experts, mentors, or motivators. Even the term pedagogical agents carries its own connotations about the kinds of work such a system will engage in (pedagogical), compared to, say, conversational agents or intelligent agents.

Below is a small sample. What other metaphors have you seen?

Update: See Dave Cormier’s related post around prompt engineering, or what to call the act of talking to algorithms.

