Category: emerging technologies

Edtech history, erasure, Udacity, and blockchain

This thought in Audrey’s newsletter (update: link added March 30th) caught my attention and encouraged me to share a related story.

 [Rose Eveleth] notes how hard it can be to tell a history when you try to trace a story to its primary sources and you simply cannot find the origin, the source. (I have been thinking a lot about this in light of last week’s Udacity news. So much of “the digital” has already been scrubbed from the web. The Wired story where Sebastian Thrun claimed that his startup would be one of ten universities left in the world? It’s gone. Many of the interviews he did where he said other ridiculous things about ed-tech – gone. What does this mean for those who will try to write future histories of ed-tech? Or, no doubt, of tech in general?) Erasure.


Remember how blockchain was going to revolutionize education? Ok, let’s get into the weeds of a related idea and how almost everything that happened around it has also disappeared from the web.

One way blockchain was going to revolutionize education was through the development of education apps and software running on the blockchain. Around 2017, Initial Coin Offerings (ICOs) were the means through which to raise money to build those apps. An ICO was the cryptocurrency equivalent of an initial public offering: a company would offer people a new cryptocurrency token in exchange for funds to launch the company. The token would then provide some utility for its holders relating to the app/software (e.g., you could exchange it for courses or study sessions, or hold on to it hoping that its value would increase so you could resell it). The basic idea here was crowdfunding, and a paper published in the Harvard International Law Journal estimates that contributions to ICOs exceeded $50bn by 2019. The Wikipedia ICO page includes more background.
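To make those mechanics concrete, here is a minimal, purely illustrative Python sketch of the token-sale logic such a whitepaper might describe: contributors exchange funds for newly minted tokens, and tokens can later be redeemed for some in-app utility. The class, exchange rate, and address are all hypothetical, and a real ICO would run as a smart contract on-chain rather than as a local ledger.

```python
# Hypothetical, simplified model of an ICO token sale. Illustration only:
# contributors send funds and receive newly minted tokens at a fixed rate;
# tokens can later be redeemed for some in-app utility (e.g., a course).

class TokenSale:
    def __init__(self, tokens_per_dollar: float = 100.0):
        self.tokens_per_dollar = tokens_per_dollar
        self.balances: dict[str, float] = {}  # address -> token balance
        self.raised = 0.0                     # total funds contributed

    def contribute(self, address: str, dollars: float) -> float:
        """Mint tokens for a contributor in exchange for funds."""
        minted = dollars * self.tokens_per_dollar
        self.balances[address] = self.balances.get(address, 0.0) + minted
        self.raised += dollars
        return minted

    def redeem(self, address: str, tokens: float) -> None:
        """Burn tokens in exchange for the promised utility (a course, say)."""
        if self.balances.get(address, 0.0) < tokens:
            raise ValueError("insufficient token balance")
        self.balances[address] -= tokens


sale = TokenSale()
sale.contribute("0xABC", 500.0)  # back the project, receive 50,000 tokens
sale.redeem("0xABC", 1000.0)     # later, spend 1,000 tokens on a course
```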

A number of these ICOs focused on education. Companies/individuals/friends* would create a website and produce a whitepaper describing their product. Whitepapers varied, but they typically described the problem to be solved, the blockchain-grounded edtech solution they offered, use cases, the team behind the project, a roadmap, and the token sale/model.

To give you a sense of the edtech claims included in one of those whitepapers:

“The vision is the groundbreaking disruption of the old education industry and all of its branches. The following points are initial use cases which [coin] can provide … Users pay with [coins] on every major e-learning platform for courses and other content they have passed or consumed… Institutions can get rid of their old and heavy documented certification process by having it all digitalized, organized, governed and issued by the [coin] technology.”

I was entertaining an ethnographic project at the time, and collected a few whitepapers. For a qualitative researcher, those whitepapers were a treasure trove of information. But, looking online, they’re largely scrubbed, gone, erased. In some cases, ICO founders’ LinkedIn profiles were scrubbed and the online communities surrounding the projects disappeared, sometimes as soon as it became clear that the ICOs wouldn’t raise the millions they were hoping for.

Some of you following this space might remember Woolf, the “world’s first blockchain university” launched by Oxford academics. And you might also remember that, like other edtech projects, it “pivoted.” See Martin Weller’s writing and David Gerard’s writing on this. Like so many others, the whitepaper describing the vision, the impending disruption of higher ed through a particular form of edtech, is gone. David kept a copy of that whitepaper, and I have copies of a couple of whitepapers from other ventures. But, by and large, that evidence is gone. I get it. Scammers scam, honest companies pivot, the two aren’t the same, and reputation management is a thing. But, I hope that this short post serves as a small reminder to someone in the future that grandiose claims around educational technology aren’t new. And perhaps, just perhaps, at a time of grandiose claims around AI in education, there are some lessons here.


Metaphors of generative AI in education

It’s been interesting to see what kinds of conceptual metaphors have been used to describe generative AI. Conceptual metaphors are ways to talk about something by relating it to something else. The ways we choose to speak about generative AI in education matter, because those choices shape experiences, expectations, and even outcomes. Early pedagogical agent research, for example, identified different roles for agents, often positioning them as experts, mentors, or motivators. Even the term pedagogical agents carries its own connotations around the kinds of work such a system will engage in (pedagogical), compared to, say, conversational agents or intelligent agents.

Below is a small sample. What other metaphors have you seen?

Update: See Dave Cormier’s related post around prompt engineering, or what to call the act of talking to algorithms.

What Comes After Disinformation Studies (CFP)? What comes after universities?

The CFP below is relevant to education researchers who study mis/disinformation, digital literacies, and design/evaluate education interventions to interrupt misinformation flows. I’m also posting it as an example of a CFP that’s relevant to a “what if” scenario I’ve been thinking about: what comes after universities? In other words, what does a radically different higher education landscape look like? What should such a landscape look like? While this work overlaps with the disciplines I find myself in (ID, education, edtech, curriculum & instruction, learning sciences), it has interesting interdisciplinary tentacles and connects with platform studies, platform cooperativism, postdigital studies, anticipation studies, decolonial studies, etc.

ICA Pre-Conference: What Comes After Disinformation Studies?

Paris, May 25, 2022

The médialab at Sciences Po

Submissions due: Friday, February 18, 2022 at 12pm ET

Submit here

Introduction

The title of this pre-conference, “What Comes After Disinformation Studies?”, is something of a deliberate provocation. With an ongoing increase in authoritarian and nationalist politics globally over the past several years and the weakening of democratic institutions in many countries, scholarly and media attention to disinformation has exploded, as have institutional, platform, and funder investments towards policy and technical solutions. This has also led to critical debates over the “disinformation studies” literature. Some of the more prominent critiques of extant assumptions and literatures by scholars and researchers include: the field possesses a simplistic understanding of the effects of media technologies; overemphasizes platforms and underemphasizes politics; focuses too much on the United States and Anglocentric analysis; has a shallow understanding of political culture and culture in general; lacks analysis of race, class, gender, and sexuality as well as status, inequality, social structure, and power; has a thin understanding of journalistic processes; and, has progressed more through the exigencies of grant funding than the development of theory and empirical findings. These concerns have also been surfaced by journalists and community organizers in public forums, such as Harper’s Magazine’s special report “Bad News” in late August 2021; or, organizers highlighting the exclusions of communities of color in existing discourse and subsequent responses.

Even as disinformation has been the subject of growing academic debate, the relationship between disinformation, technology, and global democratic backsliding, white supremacy, inequalities, nationalisms, and the rise of authoritarianism globally remains unclear, and raises important questions of what constitutes healthy democratic systems.

Given this, the time is right to create and advance an interdisciplinary, critical, post-disinformation studies agenda that centers questions of politics and power. We are particularly excited to take the best existing aspects of the research that has been done so far and put it into dialog with other fields (such as history, feminist science and technology studies, critical race and ethnic studies, anthropology, social movement studies, etc.) that have their own perspectives on how to understand and study politics, technology, and media in the 21st century.

Submission Guidelines

This pre-conference is not structured around the traditional academic practice of “submitting a paper,” making a brief presentation, and then fielding follow-up questions from the audience. Instead, we ask everyone to submit a 2-3 page (1200-1500 word) “big idea” argument for what might come after, replace, or supplement disinformation studies (submission details at the end of the CFP). This paper should formulate a proposal for what comes after disinformation studies, analyze what needs to be done to supplement its analytical and methodological tools, or critique one or more of the major works in the field of disinformation studies as a jumping off point for considering the limits, and promises, of the existing field. Or, the proposal can be a combination of some or all of these things. In sum, we are looking for arguments that spur debate, discussion, and the generation of new perspectives.

In particular, this pre-conference seeks short reflections and provocations that answer, What should we be focusing our scholarly energies on, and how can we move our understandings of contemporary threats to democracy, public knowledge, political and social equality, and multi-racial and multi-ethnic societies forward? These submissions might address some of the following:

  • Draw on diverse traditions of scholarship (e.g. mass audience theory, cultural studies, postcolonial and decolonial studies, political economy and critical race theory) that help us place disinformation research within an interdisciplinary or cross-disciplinary context. For example, how might critical theory from the Frankfurt School or sociological theory from W.E.B. Du Bois offer new lenses and perspectives on disinformation?
  • Emphasize non-U.S. and Anglocentric contexts and/or transnational approaches to the study of politics and platforms.
  • Historicize what are often very presentist debates on technology and information.
  • Discuss the ways in which often neglected social structures, social categories, and social identities play a role in differential experiences of disinformation, technological structures, and democracy, such as political expression and suppression; inequalities and asymmetries of information and technological access; or modes of state and institutional governance and the mobilization of security infrastructures.
  • Detail the theoretical, conceptual, and methodological tools necessary for understanding disinformation in different social, political, economic, cultural, and technological contexts (e.g. cross-disciplinary collaborations, community-engaged approaches, and qualitative and interpretive methods).
  • Draw on original empirical research in order to complicate the often-simplistic relationship between mis- and disinformation and political dysfunction and/or to offer considerations for how we may re-conceptualize approaches to digital harm and safety, platform governance, institutional trust, etc.

Please submit your “big idea” paper via this form by 5pm UK Time on Friday, February 18 (12 pm EST). 

Submissions should not exceed 3 single-spaced pages (or 1500 words maximum) and be submitted in .pdf or .docx format. Please include your complete name, title, and affiliation in the document header.


Pre-Conference Format

The conference aims to foster a series of overlapping conversations that will also introduce original empirical and theoretical research. It also aims to “democratize” the idea of the conference keynote. To these ends, the conference will operate in an “onion” format. There will be four relatively short invited keynotes presented over the course of the day (two in the morning and two in the afternoon). Each keynote will be followed by three to four similarly brief paper presentations related to the topic of the keynote just presented. The organizers will select the keynotes and paper presenters from submissions to the pre-conference based on the quality of the arguments, fit with other submissions, and interventions to address critical gaps in the field, as well as on the diversity of research profiles, methodologies, and theoretical perspectives of the authors. After these talks, we will quickly open the conversation up to the audience so we can engage the entire room.

Cost and Logistics

There is no cost to attend this pre-conference. Coffee, tea, meals and dessert will be served over the course of the day.

The conference will be located at Sciences Po, Paris, 27 rue Saint-Guillaume (room Leroy-Beaulieu). It will also be possible to participate virtually.

Contact

Email afterdisinformation@gmail.com with any questions.

Sponsors

ICA Lead sponsor: Political Communication Division

ICA Co-sponsor: Ethnicity and Race in Communication Division

University of North Carolina Center for Information, Technology, and Public Life (CITAP)

University of Leeds School of Media and Communication

Sciences Po médialab

The seduction of the digital

Josh Kim wrote a very kind post today over at Inside Higher Ed, highlighting what he sees as three indictments of the role of technology in higher education. There’s good food for thought there, and I’d like to focus on Josh’s third indictment, which states that digital technologies distract.

The crux of the matter (for me) is here: “Nor are students the only people on campuses likely to use technologies in a way that inhibits, rather than promotes, learning.”

This point gets lost in the broader conversation around technology distracting from learning. The broader conversation focuses on learners being distracted by… all sorts of things… laptops, social media algorithmically perfected to demand never-ending attention, and so on.

Yet, we talk little about the seductive appeal of technology that positions it as an easy solution to all sorts of problems. That seductive property is what is distracting faculty, administrators, instructional designers, and other higher education professionals, not the technology itself, not technology as an object. Problem-solving – dare I say innovation – can exist without the latest gizmo or platform, and I’ve said that so many times, and heard it so many times, that I feel like we should be past this point. We *need* to be past this point. But, in a practice characterized by historical amnesia, as Martin Weller aptly reminds us, we need reminders.

Four years ago I gave a talk at the University of Edinburgh. It was a wonderful event, with many amazing people, but I’ll always remember one comment that Jen Ross made. I’m paraphrasing, but she essentially said: We can be frustrated that we have to remind people of the history of the field, of the role that technology plays in education, of its potential and shortcomings. Or, we can be excited that more and more people are joining the field, and more and more people need to learn that “technology” isn’t the one and easy solution.

She was, and is, right. The needle is slow to move, but, at this moment, I choose to be excited.


10 interesting papers in the proceedings of the Artificial Intelligence in Education 2018 conference #aied18

The 2018 Artificial Intelligence in Education conference starts today. Its full proceedings are freely available online until July 21st, and I scrolled through them to identify papers/posters/reports that seemed potentially relevant to my work. These are of interest to me because some use methods that seem worthwhile, others offer insightful results, and yet others seem to make unsubstantiated claims. As an aside, I especially like the fact that AIED offers space for PhD students to discuss their proposed research.

Here are the papers that I identified to read:

Leveraging Educational Technology to Improve the Quality of Civil Discourse

Towards Combined Network and Text Analytics of Student Discourse in Online Discussions

An Instructional Factors Analysis of an Online Logical Fallacy Tutoring System

Adapting Learning Activities Selection in an Intelligent Tutoring System to Affect

Preliminary Evaluations of a Dialogue-Based Digital Tutor

ITADS: A Real-World Intelligent Tutor to Train Troubleshooting Skills

Early Identification of At-Risk Students Using Iterative Logistic Regression

Smart Learning Partner: An Interactive Robot for Education

Do Preschoolers ‘Game the System’? A Case Study of Children’s Intelligent (Mis)Use of a Teachable Agent Based Play-&-Learn Game in Mathematics

A Data-Driven Method for Helping Teachers Improve Feedback in Computer Programming Automated Tutors


What do you do *in anticipation of* social media privacy concerns and scandals?

Responses to the news relating to the Cambridge Analytica + Facebook scandal have been swift, with many vowing to #DeleteFacebook. An extensive collection of resources relating to this scandal is here. Lee Skallerup Bessette calls the fiasco the latest iteration of “guess how safe and secure your data is and how it might be used for nefarious purposes but it’s actually worse than that.”

An Angus Reid poll in Canada shows that 3 out of 4 respondents plan on changing the ways they use the platform. How many will actually change their practices and behavior? Regardless, I wonder what people do when their habits center around mistrusting contemporary digital platforms and their opaque use of our data. In other words, what do you do on an ongoing basis when you anticipate that benevolence isn’t the distinguishing characteristic of social media platforms?

For example, like others, I:

  • purge my historical tweets (because bad actors can easily take them out of context)
  • use authenticator apps
  • use browser extensions that block ads and trackers
  • delete unused online accounts and profiles (unless of course you really still need your ICQ account)
  • rarely connect distinct apps (e.g., Google with Dropbox)

I’m quite sure I take a number of other, likely unconscious, steps that I’ve picked up over time for privacy’s sake. For instance, after thinking about this for a couple of minutes I remembered that I installed an app on my website that aims to limit brute-force login attempts. And it strikes me that many of the conscious (and unconscious) steps that I take are rarely enabled by the platforms that are thirsty for data: There’s no bulk delete button on Twitter; there’s no “unfollow all the pages I currently follow” on Facebook; and so on.
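Absent that button, the usual workaround is a short script against the API. Here is a minimal sketch, assuming Twitter API access through the tweepy library; the credentials are placeholders for your own developer keys, and the standard timeline endpoint only reaches back through roughly your 3,200 most recent tweets.

```python
# Minimal sketch of a bulk tweet purge, assuming the tweepy library and
# Twitter API v1.1 access. All credentials below are placeholders.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Page through the authenticated user's own timeline and delete each tweet.
for status in tweepy.Cursor(api.user_timeline).items():
    api.destroy_status(status.id)
```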

What steps do you take to minimize the likelihood of your social media data being used in unanticipated ways?

Tri-council guidance on using online public data in research

I am often asked whether there are Canadian ethics guidelines on the use of online public data in research. The relevant section from the Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans is provided below. I believe that researchers should take further steps to protect privacy and confidentiality pertaining to public data, but with regard to accessing and using public online data, this is a start.

A sample project to which these guidelines may apply is the following: The researcher will collect and analyze Twitter profiles and postings of higher education stakeholders (e.g., faculty, researchers, administrators) and institutional offices (e.g., institutional Twitter accounts). This research will use exclusively publicly available information. Private Twitter accounts (i.e., those that are not public and involve an expectation of privacy) will be excluded from the research. The purpose of the research is to gain a better understanding of Twitter metrics, practices, and use/participation.
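As an aside, the exclusion criterion in that design is straightforward to operationalize. A minimal sketch, again assuming the tweepy library, with placeholder credentials and hypothetical account names; `protected` is the flag the Twitter API exposes for non-public accounts.

```python
# Illustrative filter for the sample project's exclusion criterion: keep
# only public (non-protected) accounts before collecting any postings.
# Credentials and screen names below are placeholders.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth)

candidates = ["some_university", "some_faculty_member"]  # hypothetical sample
public_accounts = [
    user for user in (api.get_user(screen_name=name) for name in candidates)
    if not user.protected  # protected accounts carry an expectation of privacy
]
```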


=== Begin relevant Tri-Council guidance ===

Retrieved on December 12, 2014 from http://www.pre.ethics.gc.ca/eng/policy-politique/initiatives/tcps2-eptc2/chapter2-chapitre2/

REB review is also not required where research uses exclusively publicly available information that may contain identifiable information, and for which there is no reasonable expectation of privacy. For example, identifiable information may be disseminated in the public domain through print or electronic publications; film, audio or digital recordings; press accounts; official publications of private or public institutions; artistic installations, exhibitions or literary events freely open to the public; or publications accessible in public libraries. Research that is non-intrusive, and does not involve direct interaction between the researcher and individuals through the Internet, also does not require REB review. Cyber-material such as documents, records, performances, online archival materials or published third party interviews to which the public is given uncontrolled access on the Internet for which there is no expectation of privacy is considered to be publicly available information.

Exemption from REB review is based on the information being accessible in the public domain, and that the individuals to whom the information refers have no reasonable expectation of privacy. Information contained in publicly accessible material may, however, be subject to copyright and/or intellectual property rights protections or dissemination restrictions imposed by the legal entity controlling the information.

However, there are situations where REB review is required.

There are publicly accessible digital sites where there is a reasonable expectation of privacy. When accessing identifiable information in publicly accessible digital sites, such as Internet chat rooms, and self-help groups with restricted membership, the privacy expectation of contributors of these sites is much higher. Researchers shall submit their proposal for REB review (see Article 10.3).

Where data linkage of different sources of publicly available information is involved, it could give rise to new forms of identifiable information that would raise issues of privacy and confidentiality when used in research, and would therefore require REB review (see Article 5.7).

When in doubt about the applicability of this article to their research, researchers should consult their REBs.

=== End relevant Tri-Council guidance ===

