Yesterday’s rough notes on teacherbots and artificial intelligence in education

  • Notable critiques of Big Data, data analytics, and algorithmic culture (e.g., boyd & Crawford, 2012; Tufekci, 2014; recent critiques of YouTube’s recommendation algorithm; Caulfield’s demonstration of polarization on Pinterest) rarely show up in discussions of bots and AI in education. Yet critiques of learning analytics and big data (e.g., Selwyn, 2014; Williamson, 2015) are generally applicable to the technologies that enable bots to do what they do (e.g., Watters, 2015).
  • The complexity of machine learning algorithms means that even their developers are at times unsure how those algorithms arrive at particular conclusions.
  • Ethics is rarely an area of focus in instructional design and technology (Gray & Boling, 2016) and related edtech-focused fields. In designing bots, where should we turn for moral guidance? Who benefits from such systems? Whose interests are served? If we can’t accurately predict how bots will make decisions when interacting with students (see the bullet point above), how will we ensure that moral values are embedded in the design of such algorithms? And whose moral values, in a tech industry that is mired in bias, lacks broad representation, and rarely heeds user feedback (e.g., women have repeatedly highlighted the harassment they experience on Twitter over the past five or so years, with Twitter taking few, if any, steps to curtail it)?