Category: emerging technologies Page 2 of 9

On Teacherbot rights

Pause for a few more minutes and imagine that future in which technologies teach humans. Call them robots, bots, chatbots, algorithms, teaching machines, tutoring software, agents, or something else. Regardless, consider them technologies that teach. Now consider their rights.

Assuming that teaching bots can exhibit (algorithmic) intelligence, can behave with some sort of (algorithmic) morality, can learn, can plan their interactions with students and make choices about them, and overall behave somewhat independently… what rights do they have, or should they have, as non-human entities, as teachers?

Consider this scenario: A teaching bot teaches independently in an online course. It (S/he?) develops a novel pedagogical approach wherein student test scores are maximized for some, but not all, students. University administrators, in collaboration with an edtech company, learn of this and would like to intervene to ensure that every student is served in an equitable manner. They are considering refining the underlying code that runs the bot. If unsuccessful, they are considering replacing the bot with a new one.

What are the bot’s rights? Does it have the right to protest this change? Does it have the right to its life? Does it have the rights that all other workers have?


Follow-up: Some background reading on ethical principles for robots.

Web browser extension that filters offensive content

“Nikola Draca, a third-year statistics student, and his colleague, Angus McLean, 23, an engineering student at McGill University, put their heads together to develop an extension called Soothe for the Google Chrome web browser that blurs out homophobic, racist, sexist, transphobic and other hateful language while browsing the web.” Source

I thought this was interesting, because:

  • It’s yet another example of how student work can contribute meaningfully to society
  • It attempts to take back (some) control from platforms, and enable individuals to refine the experiences they have online
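Mechanically, this kind of filter is straightforward to sketch. The snippet below is a hypothetical illustration, not Soothe's actual code (which the article doesn't show): a content script could match terms from a curated blocklist and wrap them in a span that a stylesheet blurs. The blocklist entries, class name, and function name are all invented for illustration.

```typescript
// Hypothetical sketch of a blur-style content filter (not Soothe's real code).
// The terms below are placeholders; a real blocklist would be carefully curated.
const BLOCKLIST = ["slur1", "slur2"];

// One case-insensitive regex matching any blocklisted word on word boundaries.
const pattern = new RegExp(`\\b(${BLOCKLIST.join("|")})\\b`, "gi");

// Wrap each match in a span that an injected stylesheet can blur, e.g.
// .soothe-blur { filter: blur(4px); }
function blurHate(text: string): string {
  return text.replace(pattern, (m) => `<span class="soothe-blur">${m}</span>`);
}
```

In a browser extension, a function like this would run in a content script over the page's text nodes (e.g., walked via a TreeWalker), with the stylesheet injected through the extension manifest.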

Related initiatives include the following:


Imagine a future in which technologies teach humans

Pause for a few minutes and imagine a future in which technologies teach humans. Call them robots, bots, chatbots, algorithms, teaching machines, tutoring software, agents, or something else. Regardless, consider them technologies that teach.

[Image: robo_teacher — vector created by Freepik]

How far into the future is that time?

What do these technologies look like? Are they anthropomorphous, and in what ways are they human-like? Do they have voice capabilities, and if so, do they understand natural language? Are they men or women? Do they have a representation in the way that one would imagine a teacher – such as a pedagogical agent – or do they function behind the scenes in ways that seem rather innocuous – such as the Mechanical MOOC?

Do these technologies teach humans of all ages? Do they teach independently, support human teachers, or do human teachers assist them? Are they featured in articles in the New York Times, The Guardian, and The Economist as innovations in education? Or, are they as common as desks and chairs, and therefore of less interest to the likes of the New York Times? Are they common in all learning contexts? Who benefits from technologies that teach? Is being taught by these technologies better or worse than being taught by a human teacher? In what ways is it better or worse? Are they integrated in affluent universities and K-12 schools? Or, are they solely used in educational institutions serving students of low socioeconomic status? Who has access to the human teachers and who gets the machines? Are they mostly used in public or private schools?

How do learners feel about them? Do they like them? Do they trust them? How do learners think that these technologies feel about them? Do they feel cared for and respected? How do learners interact with them? How do human teachers feel about them? Would parents want their children to be taught by these technologies? Which parents have a choice and which parents don’t? How do politicians feel about them? How do educational technology and data mining companies view them?

Do teaching technologies treat everyone the same based on some predetermined algorithm? Or, are their actions and responses based on machine learning algorithms that are so complex that even the designers of these technologies cannot predict their behaviour with exact precision? Do they subscribe to pre-determined pedagogical models? Or, do they “learn” what works over time for certain people, in certain settings, for certain content areas, for certain times of the day? Do they work independently in their own classroom? Or, do colonies of robo-teachers gather, share, and analyze the minutiae of student life, with each robo-teacher carefully orchestrating his or her next evidence-based pedagogical move supported by petabytes of data?
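One concrete way to picture a technology that “learns” what works over time is a multi-armed bandit: each pedagogical strategy is an arm, and each student outcome a reward. The sketch below is purely illustrative and assumes nothing about any real product; the strategy names and the epsilon-greedy rule are my own placeholders.

```typescript
// Toy epsilon-greedy bandit: one simple model of a teaching technology that
// tries pedagogical strategies and gravitates toward whichever pays off.
// Strategies and the reward signal (e.g., a quiz score in 0..1) are illustrative.
class StrategyBandit {
  private counts: number[];
  private values: number[]; // running mean reward per strategy

  constructor(public strategies: string[], private epsilon = 0.1) {
    this.counts = strategies.map(() => 0);
    this.values = strategies.map(() => 0);
  }

  // Mostly exploit the best-known strategy; occasionally explore at random.
  choose(): number {
    if (Math.random() < this.epsilon) {
      return Math.floor(Math.random() * this.strategies.length);
    }
    return this.values.indexOf(Math.max(...this.values));
  }

  // Fold an observed outcome into the running mean for the chosen strategy.
  update(arm: number, reward: number): void {
    this.counts[arm] += 1;
    this.values[arm] += (reward - this.values[arm]) / this.counts[arm];
  }
}
```

Note what this toy model makes visible: a bandit optimizes the *average* outcome, so it will settle on whatever works for most students unless context – the “for certain people, in certain settings” above – is explicitly built into the model.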

Final question for this complicated future, I promise: What aspects of this future are necessary and desirable, and why?

MOOCs and Open Education in the Developing World symposium

A thought-provoking pre-conference symposium is being organized and facilitated on October 17th by Curt Bonk, Sheila Jagannathan, Tom Reeves, and Tom Reynolds at this year’s E-learn conference. It’s focused on a variety of innovations pertaining to online learning in the context of the developing world. While some research demonstrates that socioeconomic divides persist in the context of MOOCs used by US learners, the symposium organizers note that “minimal attention has been placed on how developing countries and regions of the world are taking advantage of these new forms of technology-enabled learning” and “this is exactly where exciting and impactful innovations are currently occurring.”

Beyond the impressive list of presenters, I appreciate

  • the diverse organizations represented here, which include universities, polytechnics, non-profits, NGOs, and financial institutions
  • the main question behind the symposium, which is: How do innovations work in different contexts, for whom, why, and what can we learn from one another?

Plus: It’s in Vancouver, off the coast of Canada’s paradise, Victoria ;)

AERA statement and #edtech research

What appears below is a copy of the AERA Statement on the Hateful Acts in Charlottesville. I am posting it here because there’s a tendency in our field to focus on instruction and learning that is effective, efficient, and engaging without considering that we need to evaluate instruction/learning in the context of larger societal needs. What’s the value of an effective programming course if it leaves behind traditionally disenfranchised groups? This reminds me of Tom Reeves and his efforts to encourage us all to engage in socially responsible research that addresses the urgent problems of our time.

AERA statement

The American Educational Research Association condemns racism in all its forms and joins others throughout our nation in the fight to eradicate hate, injustice, and racial violence. The recent events in Charlottesville not only make visible how White supremacy, racism, antisemitism, religious persecution, homophobia, and xenophobia continue to permeate our society, but also remind us of the critical importance of studying, analyzing, and broadly communicating about these patterns and structures. Our social responsibility as a community of education researchers is to engage in producing knowledge and to share that knowledge with clarity and integrity.


A wide range of scholarship can and must be used to inform and engage current and future generations in the multiple stories of our pasts, the realities of our presents, and the critical demands of our futures. We need to uncover and analyze how our educational system is connected to our past and present legacies of racism in all of its forms—how our institutions and practices persistently reproduce inequities. We must also develop the knowledge and evidence that can lead to practices and policies that address hate, support understanding and respect of others, and disrupt the divisive patterns of disparity and denigration. Researchers, together with educators across all levels of education, must confront the racism, xenophobia, power and privilege, and injustice that permeate the ordinary life of our nation and world and interrogate and teach the histories of our past. No one should leave our educational institutions thinking that the expressions of hate that were on display in Charlottesville are just legitimate “points of view” or acceptable acts of “free speech.” No one should leave our classrooms or campuses believing that the symbols of oppression and killing are mere logos.


Education is fundamentally about our futures as a nation and a world, for education can empower the next generation of human beings who can promote and protect human rights, build institutions, make laws, create knowledge and art, and imagine and make possible a just world. AERA is committed to providing the knowledge base and working with other scientific organizations to support educators and others in our communities to be able to confront hate and to teach all people to know the histories of slavery, racism, genocide, inhumanity, oppression, colonialism, and White supremacy, as well as to know and learn from the stories of those who have fought and devoted their lives to justice. We strive to make known and foster the use of research on institutional and individual factors that engender prejudice and acts of violence against groups. As researchers, we must be prepared to support educators with tools, knowledge, and expertise to notice, name, deal with, and confront these issues as they arise in our contemporary world, our communities, and in our institutions and classrooms.


Now is the time, as new school and academic years commence, to ensure that we do not ignore or forget the realities that underlie what we have just experienced nor resume a normalcy that belies the scholarship that we have. AERA is committed to continuing this conversation as we go from city to city. It is our priority in planning for the 2018 Annual Meeting in April in New York and speaks to the very heart of this year’s theme—“The Dreams, Possibilities, and Necessity of Public Education.”


Deborah Loewenberg Ball, AERA President

Felice J. Levine, AERA Executive Director

Research Dissemination, Research Mobilization, and Reaching Broader Audiences

I gave an ignite talk at the Canadian Society for the Study of Higher Education in early June, sharing some of the lessons learned in creating whiteboard animation videos for mobilizing research and reaching broader audiences. We’ve now turned that talk into a whiteboard animation video. It’s all very meta. Here it is below:

#NotAllEdTech and critical #edtech conversations

The article below was posted on Inside Higher Ed. I’m copying and pasting it here for posterity.


#NotAllEdTech Derails Critical Educational Technology Conversations

Last month, Rolin Moe and I published an essay in EDUCAUSE Review highlighting ideological and sociocultural factors associated with the rise of Educational Technology (hereafter EdTech). Motivated by two responses to our essay, I decided to write this additional piece highlighting an argument/misunderstanding that can often circumvent and derail critical discussions in the field.

The critique, offered by Downes and Kim, counters our underlying premise. They say: Not all educational technology is characterized by technocentric, market-centric, and product-driven ideologies. Downes argues that the way we describe educational technology doesn’t describe him – and, by implication, many who work in the field. Kim notes that he doesn’t know anyone in the field who thinks and behaves in ways that align with how we describe the rise of EdTech.

Moe parsimoniously summarized these responses as #NotAllEdTech, as a hashtag version of not all educational technology is this way, paralleling usage of the phrase “not all men.”

I will unpack the meaning of #NotAllEdTech here.

#NotAllEdTech posits that not all educational technology is malevolent and not all educational technology represents an insidious attempt at privatizing and automating education. The #NotAllEdTech argument notes that there are many good people in our field. People who care. Entrepreneurs, researchers, and colleagues of many vocations – instructors, instructional designers, directors of digital learning – who are working, in their own way, to improve teaching and learning with technology. Not all educational technology is sinister, atheoretical, ahistorical, and driven by unsavory desires. #NotAllEdTech. Individuals who make this argument seem to want to guard themselves, and others, against being defined by the ideologies we identified in our original paper.

This all makes sense, of course. If it weren’t for the thoughtful, caring, creative, innovative, and justice-oriented people in the field focused on making positive change in education and society, I would have switched careers a long time ago. Moe and I, and countless colleagues, use educational technology toward valued ends, from providing educational opportunities where none have existed before, to providing them in more flexible ways, to re-thinking the ways students learn and instructors teach. Making a meaningful contribution to society is at the core of this multi-faceted and exciting field.

We know that there is good in educational technology. To borrow Downes’s terminology, we know that educational technology can be benign.

But, that’s not the point.

Just because there are many well-intentioned people in the field, just because our essay doesn’t accurately characterize Downes, just because Kim doesn’t “know of” anyone who thinks of educational technology in the ways we described, it doesn’t mean that educational technology is operating outside of socio-cultural, -economic, and -political forces.

I’m certain that many well-intentioned people were involved in a wide range of initiatives that ended up being problematic. Many well-intentioned individuals believed in xMOOCs and for-profit online universities as emancipatory. Many well-intentioned individuals write, adopt, and otherwise participate in the operations of the textbook publishing industry despite the exorbitant prices that some publishers charge. Many well-intentioned individuals review for or publish in non-open-access journals and otherwise support the academic publishing industry despite the restrictions the industry places upon knowledge dissemination. Many well-intentioned individuals imagined the aforementioned practices as ways to democratize access, but the presence of well-intentioned individuals did not ensure positive outcomes.

The most pressing problem with the #NotAllEdTech argument though, is that it perpetuates a dangerous counternarrative.

#NotAllEdTech can be a tactic that derails and deflects from discussions of educational technology as a practice that needs deep questioning. #NotAllEdTech could, perhaps inadvertently, redirect attention to the optimism surrounding educational technology, ignoring the broader landscape within which educational technology operates. It might also create a false binary: the heroes and good guys of EdTech vis-à-vis the bad ones (e.g., for-profits, large companies, and so on). Most importantly, such a binary might imply that those on the good side are somehow shielded from outside forces (some of which, such as pressures to rethink our practices, might in fact be very useful).

What I fear, and hope to avoid, is a world where conversations about educational technology focus solely on individuals (e.g., those who use the technology, create the technology, etc.), while avoiding criticisms of educational technology as an overly optimistic practice shaped by societal trends. It’s easy to shift the focus to individuals. It’s easy to blame teachers for not using technology in participatory ways, faculty for not employing more progressive digital pedagogies, and researchers for not publishing in open access venues. But such blame, such a “pull yourself up by your bootstraps” approach, ignores the unequal distribution of power in our social systems and ignores the sociocultural and sociopolitical constraints that individuals face. Teachers might face testing regimes that favor certain (poor) pedagogies. Researchers might face institutional policies or disciplinary norms that favor publishing in certain (closed) journals. Using a parallel example, it’s easy (and tempting) to claim that Uber drivers enjoy opportunities to supplement their income, work at their leisure, and make use of idle resources (i.e., their cars), and easy to avoid investigations of the broader social trends surrounding the gig economy.

Have we had successes using educational technology to re-imagine pedagogical approaches, expand flexibility, reduce costs, improve outcomes, and escalate access? Of course we have. And the future is bright. But, if we keep ignoring the ways that educational technology is a symptom of powerful forces, such as our changing economy[1], outside of the control of any single well-intentioned individual, we might find ourselves supporting systems and practices that are in conflict with the positive societal ideals that we are aspiring towards.

Through such conversations, our field becomes more vibrant, critical, and reflective. And, for that, despite our disagreements, I’d like to thank Downes and Kim.

