Volume 1, Issue 1

Cambridge Journal of Artificial Intelligence

ISSN

3050-2586

Published On

July 18, 2024

Editor-in-Chief

Mahera Sarkar

Managing Editors

Debarya Dutta, Raphael Hernandes, Angy Watson

Review Editors

Davina Duggan, Marine Ragnet, Zoya Yousef, Michael Zimba, Vikas Gupta

Copy Editors

Bhavesh Chhipa, Berenice Fernandez Nieto

Citation

Nicolis, A. and Kingsman, N., 2024. AI Explainability in the EU AI Act: A Case for an NLE Approach Towards Pragmatic Explanations. Cambridge Journal of Artificial Intelligence, 1(1), pp. 3-16.

Mehta, D., 2024. Artificial Intelligence and the Incomplete Aesthetic Experience. Cambridge Journal of Artificial Intelligence, 1(1), pp. 17-29.

Benson, C., 2024. Quantifying Bodies, Categorising Difference: Border AI Through the Lens of Racial Capitalism. Cambridge Journal of Artificial Intelligence, 1(1), pp. 30-40.

Sarkar, M., 2024. Towards Contextually Sensitive Informed Consent in the Age of Medical AI. Cambridge Journal of Artificial Intelligence, 1(1), pp. 41-51.

Collins, H., 2024. Virtue Under Pressure: The Case for an Exemplarist Virtue Ethics Framework to Build Artificial Moral Agents for High-Pressure Tasks. Cambridge Journal of Artificial Intelligence, 1(1), pp. 52-61.

Table of Contents

Front Matter
(Cover, Contents, Editorial, Foreword; pp. I.i – I.iv)

An Interview with Claire Benn
(pp. 1 – 2)

Claire Benn is an Assistant Professor at the University of Cambridge and Course Leader of the MPhil in Ethics of AI, Data, and Algorithms at the Leverhulme Centre for the Future of Intelligence. In this interview, Editor-in-Chief and current MPhil student Mahera Sarkar sits down with Dr. Benn to reflect on the programme’s first year.

AI Explainability in the EU AI Act: A Case for an NLE Approach Towards Pragmatic Explanations
Anna Nicolis and Nigel Kingsman (pp. 3 – 16)

This paper navigates the implications of the emerging EU AI Act for artificial intelligence (AI) explainability, revealing challenges and opportunities. It reframes explainability from mere regulatory compliance with the Act to an organising principle that can drive user empowerment and compliance with broader EU regulations. The study’s unique contribution lies in attempting to tackle the ‘last mile’ of AI explainability: conveying explanations from AI systems to users. Utilising explanatory pragmatism as the philosophical framework, it formulates pragmatic design principles for conveying “good explanations” through dialogue systems using natural language explanations. AI-powered robo-advising is used as a case study to assess the design principles, showcasing their potential benefits and limitations. The study acknowledges persisting challenges in the implementation of explainability standards and user trust, urging future researchers to empirically test the proposed principles.

Artificial Intelligence and the Incomplete Aesthetic Experience
Dvija Mehta (pp. 17 – 29)

This paper assesses the aesthetic experience provided by AI-generated visual art in order to assign aesthetic values to such works. Following from an experiential theory of aesthetic value, the notion of value remains inseparable from that of the experience of the aesthetic object in question. By conducting a detailed exploration of a complete, unified, and correct aesthetic experience through aesthetic judgement, I argue that AI-generated works lack the intentional relation, thus providing an incomplete aesthetic experience – resulting in aesthetic values lower than those of anthropogenic works valued for a unified aesthetic experience. In doing so, the paper additionally answers recent allegations of human bias in perceiving AI art. The findings of this paper contribute to the field of computational creativity by treading on novel ground and providing a qualitative unravelling of the aesthetic values such works hold and their credibility as a tool to aid human creativity.

Quantifying Bodies, Categorising Difference: Border AI Through the Lens of Racial Capitalism
Cherry Benson (pp. 30 – 40)

This paper explores the connection between racial capitalism and the development and deployment of AI technologies, using border AI as an illustrative example. Section 1 examines how racial capitalism, rooted in historical hierarchies and discrimination, influences the development and deployment of AI technologies. It highlights how this legacy perpetuates inequalities, privileging certain groups while disadvantaging others. Section 2 frames border AI, highlighting both its benefits and challenges. This section sets the stage for understanding how border AI can perpetuate existing inequalities and raise significant human rights concerns. Section 3 presents an analysis stemming from the ideas presented in Sections 1 and 2. Tracing the historical roots of AI technologies in border control, it highlights how pseudo-scientific racist ideologies and biometric quantification practices have shaped their foundations. Section 4 explores algorithmic accountability at EU borders and examines the EU Artificial Intelligence Act, revealing significant gaps in migrant protection. Although automating decision-making processes offers potential benefits, these systems often reinforce existing biases and lack transparency, complicating oversight and judicial review. The paper concludes by drawing upon the insights gleaned from this exploration and advocates for a shift towards a person-centred framework at the border that acknowledges and incorporates marginalised knowledge systems. This approach underscores the necessity for border control practices to prioritise human rights and dignity over technical progress and efficiency, paving the way for a more equitable future in AI deployment.

Towards Contextually Sensitive Informed Consent in the Age of Medical AI
Mahera Sarkar (pp. 41 – 51)

Informed consent is a fundamental aspect of medical ethics, empowering patients to engage in their healthcare decisions. However, the advent of medical AI introduces new challenges, particularly contextual bias, which can undermine informed consent. This paper explores strategies for contextually sensitive informed consent in the UK healthcare system, addressing biases related to gender, ethnicity, and age. It critiques existing informed consent guidelines, highlighting their inadequacy in handling AI’s complexities and biases. A novel four-part framework is proposed: enhancing AI literacy among healthcare professionals, implementing dynamic risk communication through “Model Facts” labels, providing patient-centric risk interpretation using electronic health records, and establishing legal and ethical safeguards to support clinicians. This framework aims to ensure that informed consent remains robust and meaningful in the age of medical AI, ultimately promoting equitable and patient-centred care. The paper emphasises immediate improvements to informed consent processes to complement long-term efforts to mitigate contextual bias in AI, contributing to ongoing debates and proposing practical solutions for integrating AI into healthcare ethically and effectively. Future research should focus on refining this framework and exploring its applicability across different healthcare systems and cultural contexts.

Virtue Under Pressure: The Case for an Exemplarist Virtue Ethics Framework to Build Artificial Moral Agents for High-Pressure Tasks
Harry Collins (pp. 52 – 61)

This paper examines artificial moral agents (AMAs) and seeks to justify their use to perform tasks involving high situational pressures that significantly impact human moral decision-making, even when there is consensus on the correct decision. Moreover, these tasks can lead to moral injury for human decision-makers if their moral code has been violated, either by themselves or by external situational pressures. I argue that AMAs can potentially negate these concerns, particularly AMAs utilising exemplarist virtue ethics, a flexible approach to normative ethics that allows agents to learn from experience to emulate the virtues of selected exemplars. To this end, I propose the outlines of an exemplarist framework for building AMAs using reinforcement learning from human feedback, where feedback is provided by moral exemplars in given tasks. Processes for selecting candidate tasks, identifying exemplars, and developing AMAs to emulate those exemplars are provided. Finally, potential objections are considered, both against the idea of exemplarist AMAs and against their feasibility. I conclude that exemplarist AMAs are promising candidates to perform high-pressure moral tasks and to reduce moral injury for humans, although issues such as minimising cross-cultural disagreement on moral decisions and how well agents capture morally relevant features in their environment to emulate exemplars need further exploration through practical experimentation.