
Questioning our Language of Change
Iman Khwaja
January 28, 2026
Philosophy of AI, AI Adoption, Human-AI Partnerships, Research
While walking through the University of Toronto, I came across a flyer that described the institution’s current research trajectory as “empowering AI to take the wheel safely”. I was frankly astounded that a celebration of scientific progress was couched in language of such defeat and deferral. This raises a number of critical questions. Are research scientists at the frontier now merely temporary facilitators of a kind of exclusionary progress that they accept they may not partake in? Where does that leave the rest of us, and how do we conceive of innovation and progress once we have drawn a hard line beyond which we stop shaping them? What does it mean to empower something we do not yet understand, and how do we know we are doing it safely? These questions are not reserved for those actively “empowering machines” by developing them; they require all of us to step forward and challenge the innovation trajectory before we lose our way. In this short post I argue that this makes an important case for approaching discussions of future trajectories with more philosophy-driven thinking. To consider the idea further, I take the University of Toronto’s phrase, “empowering AI to take the wheel safely”, and examine its fragments thematically in a threefold approach:
Empowerment
Traditional empowerment discourse has always placed humans front and centre. This underscores the role that technological capabilities play in achieving empowerment within the scientific disciplines, but it calls for a conceptual readjustment of how we define empowerment if we are to apply it to entities with non-human architecture. Lim et al. summarize the theory of psychological empowerment as a “motivational construct manifested in four cognitions: meaning, competence, self-determination, and impact” (2025). Arguably, a number of these are already demonstrated in various models, particularly competence and impact, but if self-determination is a crucial component of the empowerment equation, how do we reckon with empowering a technology that cannot yet self-determine anything? If we adhere to these tenets of human psychological empowerment, in what way do machines fit into these boxes, and do we intend to redraw those boxes as our relationships with machines develop? Philosophy-driven thinking at its simplest encourages us to question how the language we use often conflates us with the technologies we use daily and, in the case of empowerment, how we may need to redefine or challenge these terms to meet new ethical thresholds. These questions of “can we” and “should we” are where I believe philosophy-driven thinking could help us pace deployment before we find ourselves letting AI “take the wheel”.
Taking the Wheel
Envisioning what allowing AI to “take the wheel” could look like invites myriad perspectives on possible outcomes. While this could mean automating processes that pose barriers to our human capacities for creativity and the meaningful application of our skills, the decision to defer to automation could equally become a laying down of arms before something we have decided is competent enough to run the show for us. Neves et al. capture this in their argument that “technological change is not preordained but emerges through social relations, institutional settings and cultural imaginaries” (2025). Recognizing this, the wisdom of the crowd must remain a critical factor in the trajectory of AI dependencies at large, and harnessing this wisdom becomes a crucial component of decision-making for developers, policymakers and researchers alike. This is not to say it is impossible for groups of end-users to be wildly wrong about the value or utility of a technology, but our metacognition must remain at the forefront of our considerations of technology use before we fully defer to being outpaced. Phrases like “take the wheel” not only reflect a defeatist attitude toward leading research and scientific progress; they also discount the importance of our oversight in the fight for greater equity, interpretability and ethical AI development.
Safety
In The Philosophic Turn for AI Agents, Koralus envisions “AI systems that empower users to maintain control over their judgements, augmenting their agency without undermining autonomy” (2025). In this light, building or “empowering” machines that work inversely to empower us is an ideal objective, one that aligns closely with scalable oversight methods. Philosophy-driven manifestations of this approach could include methods that focus on rater assistance or safety-by-debate, in which humans retain the autonomy to engage critically with AI and are able to adapt to doing so successfully. However, it is abundantly clear that the race to innovate has taken precedence over our strategy to lead it, that is, to remain ethically and autonomously involved in development. For this reason, it is impossible to conceive of empowering an AI to “safely take the wheel from us” when our interest in and awareness of safeguarding protocols are severely overshadowed by institutional interests in innovation. Tacked onto the end of the phrase, the word “safely” denotes only a kind of obligatory reassurance, concealing a complete surrender to technology that we do not necessarily understand, let alone know how to work with safely. Philosophy-driven thinking invites us to reflect on what we consider a “safe” threshold for deference to technology, to consider what tasks we are willing to delegate to autonomous technologies and, critically, at what cost.
Conclusion
Amidst masses of adverts for AI companionship, assistants and other opportunities to augment our realities, we stand at a critical and decisive point in technological change, one largely shaped by the language we use to describe it. By allowing language to reflect our interest in innovation through words that offer empty reassurance that it will be pursued ethically, we sacrifice the integrity of our linguistic commitment to developing truth-seeking AIs. Using philosophy-driven thinking to inform our language of change, even at its most fundamental, enables us to think critically about what we truly want and need from AI, rather than what we want to say about it.
Bibliography:
- Henke, Benjamin, “Consciousness”, The Philosophical Glossary for AI, Alex Grzankowski and Benjamin Henke (eds.), URL=https://www.aiglossary.co.uk/2025/06/24/test-post/, accessed 21 September 2025.
- Koralus, P. (2025). The Philosophic Turn for AI Agents: Replacing centralized digital rhetoric with decentralized truth-seeking. arXiv. https://doi.org/10.48550/arXiv.2504.18601
- Lim, J. S., Shin, D., Lee, C., Kim, J., & Zhang, J. (2025). The Role of User Empowerment, AI Hallucination, and Privacy Concerns in Continued Use and Premium Subscription Intentions: An Extended Technology Acceptance Model for Generative AI. Journal of Broadcasting & Electronic Media, 69(3), 183–199. https://doi.org/10.1080/08838151.2025.2487679
- Neves, B. B., Broom, A., Gulson, K., & Mead, G. (2025). Reframing Artificial Intelligence: Critical Perspectives from AI Social Science. Humanities and Social Science Communications, Springer Nature.
Image credit: Reihaneh Golpayegani / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/
