Category: generative artificial intelligence Page 1 of 2

On Vanderbilt’s disabling of Turnitin’s AI detection feature, and faculty guidance

Last week, Vanderbilt University decided to disable Turnitin’s AI detection tool. Congratulations are in order!

To date, there’s little evidence as to the effectiveness and appropriateness of such tools (also see: their unintended consequences). Equally important, Vanderbilt’s decision lends credence and support to recommendations that numerous working groups put forward to their institutions, and it paves the way for others to feel confident in taking similar actions. Earlier this year, for example, I led a generative AI taskforce at Royal Roads University. The relevant recommendation we put forward in early June is this:

Recommendation 5: Investigate, and potentially revise, assessment practices.

We recommend that faculty examine their current assessment practices and question them through the lens of AI tools. For instance, faculty could try their discussion prompts or reflection questions with AI tools to explore the role and limits of this technology. Regardless of the outcome of such efforts, we recommend that faculty not rely on AI detection tools to determine whether learners used AI in their work. A service that claims to detect AI-generated text does not provide transparency on how that determination is made, and it encourages a culture of suspicion and mistrust. Emerging research also highlights challenges with reliably detecting AI-generated text (Sadasivan et al., 2023). Instead, we recommend that faculty engage learners in conversations at the beginning of the course about the appropriate and ethical use of AI. We further encourage faculty to continue their efforts toward experiential and authentic learning, including work-integrated learning, live cases, active learning opportunities, field trips, service learning, iterative writing assignments, project-based learning, and others. These are not necessarily failsafe approaches to deter cheating, and it may even be possible to leverage AI in support of experiential learning. Ultimately, we recommend that faculty question their assignments in an age when generative AI is widely available.

Metaphors of generative AI in education

It’s been interesting to see what kinds of conceptual metaphors have been used to describe generative AI. Conceptual metaphors are ways of talking about something by relating it to something else. The ways we choose to speak about generative AI in education matter, because those choices shape experiences, expectations, and even outcomes. Early pedagogical agent research, for example, identified different roles for agents, often positioning them as experts, mentors, or motivators. Even the term pedagogical agents carries its own connotations about the kinds of work such a system will engage in (pedagogical), compared to, say, conversational agents or intelligent agents.

Below is a small sample. What other metaphors have you seen?

Update: See Dave Cormier’s related post around prompt engineering, or what to call the act of talking to algorithms.

3 ways higher education can become more hopeful in the post-pandemic, post-AI era

Below is a republished version of an article that Shandell Houlden and I published in The Conversation last week, summarizing some of the themes that arose in our Speculative Learning Futures podcast.

3 ways higher education can become more hopeful in the post-pandemic, post-AI era

The future of education is about more than technology.
(Pexels/Emily Ranquist)

Shandell Houlden, Royal Roads University and George Veletsianos, Royal Roads University

We live at a time when universities and colleges are facing multiplying crises, pressures and changes.

From the COVID-19 pandemic and budgetary pressures to generative artificial intelligence (AI) and climate catastrophe, the future of higher education seems murky and fragmented — even gloomy.

Student mental health is in crisis. In our own research from the early days of the pandemic, university faculty told us that they were “juggling with a blindfold on.” Since that time, we’ve also heard many echo the sentiment of feeling they’re “constantly drowning,” something recounted by researchers writing about a sense of precarity in universities in New Zealand, Australia and the western world.

In this context, one outcome of the pandemic has been a rise in discourses about specific, quite narrowly imagined futures of higher education. Technology companies, consultants and investors, for example, push visions of the future of education as being saved by new technologies. They suggest more technology is always a good thing and that technology will necessarily make teaching and learning faster, cheaper and better. That’s their utopian vision.

Some education scholars have been less optimistic, often highlighting the failures of utopian thinking. In many cases, their speculation about the future of education, especially where education technology is concerned, often looks bleak. In these examples, technology often reinforces prejudices and is used to control educators and learners alike.

A collage featuring a Facebook-jammed image that says ‘You’ve been Zucked’
Amid accelerating technology, what kind of future do we imagine for higher education?
(Annie Spratt/Unsplash)

In contrast to both utopian and grim futures, for a recent study funded by the Social Sciences and Humanities Research Council, we sought to imagine more hopeful and desirable higher education futures. These are futures emerging out of justice, equity and even joy. In this spirit, we interviewed higher education experts for a podcast entitled Speculative Learning Futures.

When asked to imagine more hopeful futures, what do experts propose as alternatives? What themes emerge in their work? Here are three key ideas.

It’s about more than technology

First, these experts reiterated that the future of education is about more than technology. When we think about the future of education we can sometimes imagine it as being tied entirely to the internet, computers and other digital tools. Or we believe AI in education is inevitable — or that all learning will be done through screens, maybe with robot teachers!

But as Jen Ross, senior lecturer in digital education observes, technology doesn’t solve all our problems. When we think about education futures, technology alone does not automatically help us create better education or healthier societies. Social or community concerns like social inequities will continue to affect who can access education, our education systems’ values and how we are shaped by technologies.

As many researchers have argued, including us, the pandemic highlighted how differences in access to the internet and computers can reinforce inequities for students.

AI can also reinforce inequities. Depending on the nature of data AI is trained with, the use of AI can perpetuate harmful biases in classrooms.

Ross notes in her recent book that social or community concerns shape how our societies could imagine education.
Researchers involved with Indigenous-led AI are tackling questions around how Indigenous knowledge systems could push AI to be more inclusive.

Policymakers and educators should consider technology as one part of a toolkit of responses for making informed decisions about what technologies align with more equitable and just education futures.

Emphasizing connection and diversity

In line with thinking about more than technology, the second theme is a reminder that the future of education is about healthy social connection and social justice. Researchers emphasize fostering diversity and celebrating diverse expressions of strengths and needs.

Experts envision and call for education that is more sustainable for everyone, not just a privileged few. Kathrin Otrel-Cass, professor at University of Graz, and Mark Brown, Ireland’s first chair in digital learning and director of the National Institute for Digital Learning at Dublin City University, suggest this means teaching and learning should be at a slower pace for students and faculty alike.

In this vision, policymakers must support education systems that regard the whole learner as an individual with specific physical, mental, emotional and intellectual needs, and as a member of multiple communities.

Acknowledge the goodness of the present

There’s lots to be gained by noting and supporting all the great things related to education that are happening in the present, since possible futures emerge from what now exists.

As two podcast guests, Eamon Costello, professor at Dublin City University and collaborator Lily (Prajakta) Girme, noted, we need to acknowledge the good work of educators and learners in the small wins that happen every day.

In 2019, researchers Justin Reich and José Ruipérez-Valiente wrote: “new education technologies are rarely disruptive but instead are domesticated by existing cultures and systems. Dramatic expansion of educational opportunities to under-served populations will require political movements that change the focus, funding and purpose of higher education; they will not be achieved through new technologies alone.”

These are words worth repeating.



Shandell Houlden, Postdoctoral Fellow, School of Education and Technology, Royal Roads University and George Veletsianos, Professor and Canada Research Chair in Innovative Learning and Technology, Royal Roads University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

So very tired of predictions about AI in education…

By people who aren’t AIEd experts, education technology experts, education experts, and the like.

Case in point: “AI likely to spell end of traditional school classroom, leading [computer science] expert says.”

I appreciate cross-disciplinary engagement as much as I love guacamole (which is to say, a lot), but I’d also appreciate it if we stopped wasting our time on these same unfulfilled prophecies year after year, decade after decade.

Will AI impact education? In some ways it will, and in others it won’t. Will education shape the ways AI comes to be used in classrooms? In some ways it will, and in others it won’t.

Truth be told, this negotiated relationship isn’t as appealing as DISRUPTION, AVALANCHE, MIND-READING ROBO-TUTOR IN THE SKY, etc., which are words that readers of the history of edtech will recognize.

Public version: Making ChatGPT detectors part of our education system prioritizes surveillance over trust

The Globe and Mail published an op-ed I wrote. As a condition of being featured, the paper holds first publication rights for the first 48 hours. Since more than 48 hours have passed, and for posterity, I’m making a copy available below.

Making ChatGPT detectors part of our education system prioritizes surveillance over trust

George Veletsianos is a professor of education and Canada Research Chair in Innovative Learning and Technology at Royal Roads University.

Imagine a world where surveillance technologies monitor and scrutinize your behaviour. Imagine a report that you write at work being compared with myriad others and flagged for additional inspection when an algorithm deems it to be “very similar” to them.

Students don’t have to imagine this world. They are already living in it, in the form of plagiarism detection software, remote proctoring technologies, and now, tools aimed at detecting whether the students used ChatGPT – including new software that promises to catch students who use ChatGPT to cheat.

While taking online exams, students’ webcams scan their surroundings; their microphones monitor sounds and background noise, and their body and eye movements are tracked. Unexpected movements may indicate something as innocuous as stretching a tight neck or as problematic as catching a glimpse of Post-it notes on the wall, while unexpected sounds may indicate a child playing in the background or a roommate whispering answers. The essay assignments students submit are compared to a vast amount of writing by others. And a battery of scores might indicate plagiarizing from Wikipedia, passing off text created by ChatGPT as one’s own, or simply using common expressions. Any of this will get students flagged as potential cheaters.

That there are technologies to identify text written by artificial intelligence shouldn’t come as a surprise. What is surprising is that educators, administrators, students, and parents put up with surveillance technologies like these.

These technologies are harmful to education for two main reasons. First, they formalize mistrust. As a professor and researcher who has been studying the use of emerging technologies in education for nearly two decades, I am well aware that educational technology produces unintended consequences. In this case, these technologies take on a policing role and cultivate a culture of suspicion. The ever-present microscope of surveillance technology casts a suspicious eye on all learners, subjecting them all to an unwarranted level of scrutiny.

Second, these technologies introduce a host of other problems. Researchers note that these tools often flag innocent students and exacerbate student anxiety. This is something I’ve personally experienced as well when I took my Canadian citizenship exam online. Even though I knew the material and was confident in my abilities, my webcam’s bright green light was a constant reminder that I was being watched and that I should be wary of my every move.

To be certain, such tools may deter some students from intentionally plagiarizing. They may also improve efficiency, since they algorithmically check student work on behalf of educators.

But these reasons don’t justify surveillance.

A different world is possible when schools and universities dare to imagine richer and more hospitable learning environments that aren’t grounded in suspicion and policing. Schools and universities can begin to achieve this by developing more trusting relationships with their students and emphasizing the importance of honesty, original work, and creativity. They need to think of education in terms of relationships, and not in terms of control, monitoring, and policing. Students should be viewed as colleagues and partners.

Educators also need to come to terms with the fact that our assessments generally suffer from a poverty of imagination. Essays, tests, and quizzes have an important role to play in the learning process, but there are other ways to check for student achievement. We can design assessments that ask students to collect original data and draw inferences, or write and publish op-eds like this one; we can invite them to develop business and marketing plans for real-world businesses in their cities; we can ask them to reflect on their own unique experiences; we can require them to provide constructive peer-review and feedback to fellow students, or have them engage in live debates. In this light, ChatGPT is not a threat, but an opportunity for the education system to renew itself, to imagine a better world for its students.

Educators and administrators should stop using surveillance technologies like ChatGPT detectors, and parents and students should demand that schools and universities abolish them – not because cheating should be tolerated, but because rejecting the culture of suspicion that surveillance technologies foster and capitalize upon is a necessary step toward an education system that cares for its learners.

Three special issues on Generative AI in Education (update: now five)

If you’re looking for a home for your generative AI paper, you now have a few special issues to choose from:

TechTrends call

We welcome proposals for original research articles and theoretical papers that explore the potential of integrating Generative AI in education. We encourage submissions on (but not limited to) the following topics:
1. Personalized learning experiences that address learner needs and preferences;
2. Language learning, such as offering practice in conversation or helping with translation;
3. Coding/programming education or computational thinking, such as supporting debugging;
4. Assistance during writing process, such as brainstorming, editing, character generation;
5. Teaching support, such as answering frequently asked questions, generating question prompts and examples, evaluating students’ writing;
6. Student engagement and motivation, such as providing feedback and human-like interactions through natural language output;
7. Higher-order thinking, such as enhancing analytical, critical thinking, and reflection.
8. Collaborative learning process, such as supporting group discussion or interaction.

International Journal of Educational Technology in Higher Education call

The call for papers is intended to invite contributions that address the recent development of AI in HE in light of new applications such as ChatGPT, aiming to provide more comprehensive and collective answers to the following questions:

• What is the actual impact of AI on different aspects of HE institutions (e.g. student support systems, administration, professional development, and infrastructure)?
• What is the actual impact of AI on different aspects of learning and teaching in HE (e.g. assessment, data literacy, design of learning activities)?
• What is the actual impact of AI on different subjects in HE (e.g. students, teachers, administrators, casual workers, other stakeholders)?

The Special Issue Editors are also interested in making sense of the impact of AI on educational accessibility and (in-)equity regarding the cost, quality, and access in different forms of open, distance, and digital education. Both theoretical and empirical works will be considered for inclusion as long as they demonstrate rigour, criticality, and novelty.

IEEE Transactions on Learning Technologies call

The successful design and application of generative AI require holistic considerations of theoretical frameworks, pedagogical approaches, facilitative ecological structure, and appropriate standards. Topics of interest for this special issue include, but are not limited to:

  • Studies on the pedagogical or curricular approaches to teaching and learning with generative AI.
  • Discussion on the theoretical frameworks of generative AI to provide the basis for the understanding of systems and their capabilities for teaching and learning.
  • Discussion of the extent to which the design of learning environments with generative AI aligns with different theories of learning (e.g., behaviorism, cognitivism, (social) constructivism, constructionism, socio-cultural).
  • Studies on the applications of generative AI for assessment of, assessment for, and assessment as learning.
  • Development of the environmental structures that facilitate the employment of generative AI in education.
  • Development or implementation of relevant standards governing the proper use of generative AI in human learning contexts.
  • Exemplary use cases and practices of generative AI… [more bullet points in the CFP]

Update (Eamon Costello alerted me to an additional one): Learning Media and Technology call

We invite theoretical papers and theoretically-informed empirical studies that explore emerging practices and offer new imaginings of generative AI in education. Papers may use a variety of methodological approaches including feminist, critical, new materialist, interpretive, qualitative, rhetorical, quantitative, or experimental.

Topics may include, and are not limited to:

  • Critical pedagogy and generative AI
  • Ways in which generative AI further complicates notions of authenticity: of authorship, ideas, ownership, and truth
  • Creative and productive uses of generative AI and how they can be harnessed in education
  • Ethical, political, and epistemological issues of generative AI in education
  • Socio-technical explorations of generative AI and equity, power, inclusion, diversity, identity, marginalization, (dis)ability, ethnicity, gender, race, class, community, sustainability, etc.
  • Development of methodologies to critically assess generative AI in education
  • Alternative imaginaries for the development of future generative AI tools for education

Update Apr 16: One more from the Asian Journal of Distance Education

…to better understand what generative AI promises us, we need to examine its philosophy, develop a theoretical understanding, and investigate “how human tutors and machines (ChatGPT) could work together to achieve an educational objective, as well as the changes and outcomes brought to the education field (e.g., evolutionary or revolutionary)” (Tlili et al., 2023, p. 22). Based on the above thoughts, the Asian Journal of Distance Education seeks papers on generative AI with a focus on open, online, and distance education. Research papers, systematic reviews, and opinion papers with a critical stance are welcome.

Playing with prompt engineering in response to Hickey & Luo prompt engineering analysis

In a worthwhile and expansive post, Daniel Hickey and Qianxu Morgan Luo explore knowledge-rich prompts and the ways in which ChatGPT and Bard differ in the results they return. I thought that

  1. their exploration was interesting,
  2. their findings were interesting (e.g., “this initial analysis suggests that ChatGPT was much more capable of interpreting and applying the article using an engineered prompt that included the abstract. But ChatGPT did not add much knowledge about cyber ranges. In contrast, Bard appeared to take away almost nothing from the abstract but knew more about cyber ranges”),
  3. but I was convinced that the prompts could have been engineered in more nuanced ways to improve the outcome.

I also recently learned that thinking of prompts in an incremental fashion may produce more nuanced results, so I set out to recreate Hickey and Luo’s prompt in ChatGPT to see if I could generate a better outcome.

What did Hickey and Luo do?

Their knowledge-rich prompt was the following:

Use the following article by Sinatra, Heddy, & Lombardi (2015) to motivate engagement by cybersecurity learners who are using a digital range and use APA citations: Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a “grain-size” continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.

ChatGPT’s output was:

Research in educational psychology has consistently shown that engagement is a key factor in successful learning outcomes (Sinatra, Heddy, & Lombardi, 2015). For cybersecurity learners who are using a digital range, engagement is particularly important as it can enhance their motivation and achievement in this field. As Sinatra et al. (2015) note, engagement can be viewed from a dimensional perspective, encompassing behavioral, cognitive, emotional, and agentic dimensions. These dimensions can be used to design effective learning experiences that tap into students’ interests and motivations, such as through gamification or interactive simulations.

However, there is still a need for a concrete definition and effective measurement of engagement, especially in the context of science learning (Sinatra et al., 2015). To address this issue, educators in the field of cybersecurity can consider using a complementary approach that places engagement instrumentation on a continuum, ranging from a person-centered to a context-centered orientation (Sinatra et al., 2015). This approach can help clarify measurement issues and enable educators to tailor their teaching strategies to the individual needs and interests of their students.

In conclusion, engagement is a crucial factor in the success of cybersecurity learners using a digital range. By taking a dimensional perspective and using a continuum-based approach to measure engagement, educators can design effective learning experiences that promote student motivation and achievement in this field.


Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges and opportunities of engaging learners in the scientific practices of explanation and argumentation. Journal of Educational Psychology, 107(2), 380–397.

What did I do?

I divided the prompt into chunks, moving incrementally from the field (cybersecurity), to the topic within the field (cyber ranges), to the application of motivation to the topic (the Sinatra et al. citation), to probing for specifics. I also used the term “cyber range” rather than “digital range,” which Hickey and Luo used. This approach generated specific and actionable recommendations, especially compared to the generic reply above. My prompts and ChatGPT’s responses are below.
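For readers who want to reproduce this kind of chunked, incremental prompting programmatically rather than in the chat interface, the approach above can be sketched as a simple loop that sends one prompt at a time while carrying the full conversation history forward, so each answer builds on the last. This is a minimal illustration, not the method Hickey and Luo or I actually ran: the `call_model` function below is a hypothetical stand-in for a real chat-model API call, and you would swap in your provider’s client.

```python
def call_model(messages):
    """Hypothetical stand-in for a real chat-completion API call.

    A real implementation would send the full `messages` list to a
    provider's endpoint; here we just echo the latest user prompt so
    the structure of the loop is testable.
    """
    return f"[model reply to: {messages[-1]['content']}]"


def chat_incrementally(prompts):
    """Send prompts one at a time, carrying the history forward.

    Each turn appends the user prompt and the model's reply to the
    shared history, so later prompts are answered in the context of
    earlier exchanges -- the essence of incremental prompting.
    """
    history = []
    replies = []
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        reply = call_model(history)
        history.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return history, replies


# The sequence mirrors the post: field -> topic within the field ->
# application of motivation to the topic -> probing for specifics.
prompts = [
    "I'd like to learn more about cybersecurity.",
    "What is a 'cyber range'?",
    "Using Sinatra, Heddy, & Lombardi (2015), how do I motivate engagement?",
    "How do I create a supportive learning environment in a cyber range?",
]
history, replies = chat_incrementally(prompts)
```

The key design choice is that `history` is cumulative: stripping it back to only the latest prompt would reproduce the single-shot, knowledge-rich prompt style rather than the incremental one.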

Prompt: I’d like to learn more about cybersecurity. Could you help me with this topic?

Sure! I’d be happy to help you learn more about cybersecurity. What specific areas of cybersecurity are you interested in? Some possible topics within cybersecurity include:

  1. Network security
  2. Information security
  3. Application security
  4. Cloud security
  5. Identity and access management
  6. Incident response
  7. Risk management
  8. Cybersecurity laws and regulations

Let me know what you’re interested in, and I can give you some pointers on where to start.

Prompt: What do people mean when they mention “cyber range” in the context of cybersecurity?

A “cyber range” is a simulated environment used for training and testing cybersecurity skills and technologies. It is essentially a virtual training ground where individuals and teams can practice identifying, preventing, and responding to cyber threats.

Cyber ranges can be set up to simulate a variety of scenarios, such as network intrusions, phishing attacks, malware infections, and denial-of-service attacks. They may also incorporate realistic components such as simulated user behavior, fake websites, and decoy systems.

By providing a safe and controlled environment for cybersecurity training and testing, cyber ranges can help organizations and individuals develop their skills and test their defenses without risking their actual systems or data. They can also be used to evaluate and compare different cybersecurity tools and techniques.

Some cyber ranges are open to the public, while others are only available to specific organizations or groups. Some are physical facilities that require specialized equipment and infrastructure, while others are entirely virtual and can be accessed from anywhere with an internet connection.

Overall, cyber ranges are an increasingly important tool for improving cybersecurity readiness and resilience in the face of an ever-evolving threat landscape.

Prompt: I would like to motivate engagement by cybersecurity learners who are using a cyber range. Using the ideas from Sinatra, Heddy, & Lombardi (2015) that are in quotes, write how to motivate engagement and use APA citations: “Engagement is one of the hottest research topics in the field of educational psychology. Research shows that multifarious benefits occur when students are engaged in their own learning, including increased motivation and achievement. However, there is little agreement on a concrete definition and effective measurement of engagement. This special issue serves to discuss and work toward addressing conceptual and instrumentation issues related to engagement, with particular interest in engagement in the domain of science learning. We start by describing the dimensional perspective of engagement (behavioral, cognitive, emotional, agentic) and suggest a complementary approach that places engagement instrumentation on a continuum. Specifically, we recommend that instrumentation be considered on a ‘grain-size’ continuum that ranges from a person-centered to a context-centered orientation to clarify measurement issues. We then provide a synopsis of the articles included in this special issue and conclude with suggestions for future research.”

To motivate engagement in cybersecurity learners who are using a cyber range, it is important to create a learning environment that is both challenging and supportive. According to Sinatra, Heddy, & Lombardi (2015), engagement is influenced by a variety of factors, including behavioral, cognitive, emotional, and agentic dimensions. To address these dimensions and promote engagement in the cyber range, the following strategies can be implemented:

  1. Provide hands-on learning experiences: learners should be given the opportunity to actively engage with the cyber range and practice their skills in a realistic and challenging environment. This can help to increase their sense of control and agency, which is important for promoting engagement (Sinatra, Heddy, & Lombardi, 2015).
  2. Create a supportive learning environment: learners should feel supported and encouraged throughout the learning process. Providing feedback, encouragement, and support can help to foster positive emotional and cognitive engagement (Sinatra, Heddy, & Lombardi, 2015).
  3. Use a variety of instructional strategies: learners have different learning styles and preferences, and it is important to use a variety of instructional strategies to cater to these differences. For example, using videos, interactive simulations, and quizzes can help to increase cognitive engagement (Sinatra, Heddy, & Lombardi, 2015). [Sidenote from George: This is worthy of a post around AI dispensing bad advice and perpetuating myths]
  4. Set clear learning goals: providing learners with clear learning goals and objectives can help to increase their motivation and focus. Goals should be challenging but achievable, and learners should be given regular feedback on their progress (Sinatra, Heddy, & Lombardi, 2015).

Overall, creating a supportive and challenging learning environment that promotes engagement can help to increase motivation and achievement in cybersecurity learners who are using a cyber range.

Reference: Sinatra, G. M., Heddy, B. C., & Lombardi, D. (2015). The challenges of defining and measuring student engagement in science. Educational Psychologist, 50(1), 1-13. doi: 10.1080/00461520.2014.1002924

Prompt: I want to know more about #2. How do I create a supportive learning environment in a cyber range?

Creating a supportive learning environment in a cyber range involves providing learners with feedback, encouragement, and support throughout their learning process. Here are some specific strategies you can use to promote a supportive learning environment:

  1. Provide timely feedback: learners need feedback on their progress to understand what they are doing well and what they need to improve. Provide feedback as soon as possible after a learning activity and be specific about what the learner did well and what they need to work on.
  2. Encourage learners: it is important to encourage learners and recognize their efforts. Encouragement can come in many forms, such as praise, positive reinforcement, and recognition of achievements. This can help learners build confidence and feel more motivated to learn.
  3. Build a sense of community: building a sense of community among learners can help to create a supportive learning environment. Encourage learners to work together, share their experiences, and collaborate on learning activities. This can create a sense of belonging and increase engagement.
  4. Provide support resources: learners may need additional support to be successful in a cyber range. Provide resources such as online help, tutorials, and support forums to help learners overcome challenges and get the assistance they need.
  5. Use effective communication: effective communication is key to creating a supportive learning environment. Communicate clearly and frequently with learners, provide clear instructions, and be available to answer questions and provide support.

By using these strategies, you can create a supportive learning environment that promotes engagement and success in the cyber range.

Prompt: Can you give me an example of an encouraging message I can send to learners who are in a cyber range?

Sure! Here is an example of an encouraging message you can send to learners who are in a cyber range:

“Great work on completing that cyber range activity! Your hard work and dedication to learning in this field are truly commendable. Remember, every challenge you overcome is an opportunity to grow and become a better cybersecurity professional. Keep up the good work, and don’t hesitate to reach out if you need any support or assistance. You’ve got this!”

This message acknowledges the learner’s efforts, encourages them to continue their learning journey, and offers support if needed.
