Volume 1, Issue 2
Cambridge Journal of Artificial Intelligence
ISSN
3050-2586
Published On
November 25, 2024
Editor-in-Chief
Mahera Sarkar
Managing Editors
Debarya Dutta, Marine Ragnet, Angy Watson
Review Editors
Davina Duggan, Vikas Gupta, Kailash Chauhan, Zoya Yasmine, Michael Zimba
Copy Editors
Thomas Greany, Berenice Fernandez Nieto
Citation
Cardwell, L., 2024. Artificial Companionship: Moral Deskilling in the Era of Social AI. Cambridge Journal of Artificial Intelligence, 1(2), pp. 64 – 75.
Efchary, Z., 2024. Healing Privacy Wounds with SPLINT: A Psychological Framework to Preserve Human Well-Being during Information Privacy Trade-Offs. Cambridge Journal of Artificial Intelligence, 1(2), pp. 76 – 86.
Lai, S., Ali, A. and Ho, K.J.M., 2024. Data Protection and Generative AI – Policy, Regulation and the Way Forward. Cambridge Journal of Artificial Intelligence, 1(2), pp. 87 – 97.
Rachevsky, L., 2024. Are Tech Companies Responsible for Solving the Global AI Divide? Cambridge Journal of Artificial Intelligence, 1(2), pp. 98 – 107.
Yasmine, Z., 2024. Getty Images v Stability AI: Why Should UK Copyright Law Require Licences for Text and Data Mining Used to Train Commercial Generative AI Systems? Cambridge Journal of Artificial Intelligence, 1(2), pp. 108 – 120.
Watson, A., 2024. How Metaphors Influence Ontology, Epistemology, and Methods in AI: Rethinking the Black Box. Cambridge Journal of Artificial Intelligence, 1(2), pp. 121 – 130.
Table of Contents
(Cover, Contents, Editorial; pp. II.i – II.iii)
In Conversation with Mia Shah-Dand
(p. 62)
Mia Shah-Dand is CEO of Lighthouse3, where she advises global organisations on responsible AI. She is also the founder of Women in AI Ethics, which highlights women’s contributions to the tech industry through the annual 100 Brilliant Women in AI Ethics list.
Artificial Companionship: Moral Deskilling in the Era of Social AI
Laurence Cardwell (pp. 64 – 75)
This paper investigates “social AI” and its ethical implications, particularly the risk of “moral deskilling” described by Shannon Vallor, whereby reliance on AI could erode moral skills. Despite social AI’s potential to counter loneliness, it predominantly appears to threaten moral competencies, because it prioritises user demands and market forces and lacks the complexity of human interactions necessary for moral development. The paper suggests that extensive interaction with AI may weaken empathy and reduce genuine human engagement, potentially leading to a decline in moral and social abilities. It concludes that the prevailing application of social AI may contribute more to moral deskilling than to upskilling, emphasising the need for diligent research and ethical design as these technologies proliferate.
Healing Privacy Wounds with SPLINT: A Psychological Framework to Preserve Human Well-Being during Information Privacy Trade-Offs
Zina Efchary (pp. 76 – 86)
This paper explores the critical role of informational privacy in promoting human well-being and flourishing, with particular attention to the challenges posed by Artificial Intelligence (AI) systems. As AI increasingly mediates digital interactions and processes vast amounts of personal data, controlling the flow of personal information becomes intractable. In response to these evolving challenges, this paper argues for an alternative approach to informational privacy that emphasises its psychological value in supporting autonomy and positive liberty. To operationalise these values, I adapt Self-Determination Theory (SDT) as a psychological framework, mapping the dimensions of autonomy, relatedness, and competence to the core benefits of informational privacy. Furthermore, by examining the threats posed by predictive AI algorithms to informational privacy in personalised targeting, I argue that conventional privacy measures, such as the notice and consent model, fail to address the psychological challenges to human well-being. In response, I propose a supplementary framework called SPLINT (Self-determined Privacy Loss in Informational Networks and Technologies) and provide concrete examples of its application. This model leverages the psychological insights of SDT to guide the design of mitigation strategies that preserve human well-being even when privacy trade-offs occur. By focusing on preserving the psychological values underpinning informational privacy, SPLINT aims to offer a proactive approach to safeguarding human well-being in AI-mediated digital environments. I conclude that SDT-based approaches like SPLINT provide a progressive, promising starting point for navigating privacy trade-offs, although their wider societal impact as measures, and the benefits of informational privacy as a psychological phenomenon, require further empirical investigation.
Data Protection and Generative AI – Policy, Regulation and the Way Forward
Dr. Stanley Lai, Afzal Ali and Kan Jie Marcus Ho (pp. 87 – 97)
The proliferation of Artificial Intelligence (“AI”) has led to paradigm shifts in the context of innovation. With the rapid advancement of technology over the past twenty to thirty years, large swathes of data have been generated, collected, and used. It was quickly recognised that this affected all facets of society, and that rules and regulations were urgently required to prevent the unfettered flow and (mis)use of data. Examples of such regulations include the groundbreaking General Data Protection Regulation (“GDPR”) and Singapore’s Personal Data Protection Act (“PDPA”). However, just over a decade after the enactment of such rules and regulations, another paradigm shift is on the horizon. Artificial intelligence, and generative AI in particular, is radically transforming how data can be interpreted, used, and presented. It has validly been pointed out that generative artificial intelligence could bring forth a new epoch of data synthesis and augmentation. This paper discusses how policy and regulation can address issues surrounding the use of input data, which is critical to generative AI. Specifically, it will examine whether input data should be considered “personal data” and thus caught by the GDPR or Singapore’s PDPA, and whether there is recourse for emotional harm caused by content generated using such data. It will also discuss some of the limitations and gaps in the current regulatory framework. It is hoped that this discourse will further the continuing dialogue on the intersection between data protection and artificial intelligence, particularly in the domain of generative AI.
Are Tech Companies Responsible for Solving the Global AI Divide?
Laura Rachevsky (pp. 98 – 107)
The global AI divide, marked by the unequal distribution of AI benefits between developed and developing countries, is a pressing ethical concern. This paper examines the moral responsibility of tech companies in addressing this divide, analysing it through the lenses of libertarianism, Rawlsianism, and utilitarianism. It delves into the nuances of each perspective, particularly highlighting their limitations in a global context, and contrasts the current focus on productivity-enhancing AI applications in developed countries with the potential of life-saving AI applications in developing countries. The paper explores empirical examples of tech companies’ investments in developing countries, revealing that libertarian and Rawlsian perspectives, despite initial differences, converge in their practical implications on a global scale. Ultimately, it argues that utilitarianism, although not without its challenges, provides the most actionable framework for addressing the global AI divide due to its emphasis on measurable outcomes and its ability to transcend national boundaries. It further performs a simplified redistribution calculation as a proof of concept to demonstrate how incorporating life-saving AI applications into the benefits calculation can result in different investment recommendations.
Getty Images v Stability AI: Why Should UK Copyright Law Require Licences for Text and Data Mining Used to Train Commercial Generative AI Systems?
Zoya Yasmine (pp. 108 – 120)
In 2023, Getty Images commenced legal proceedings in the United Kingdom High Court against Stability AI. Getty Images claims that 7.3 million images from its database were unlawfully used to train Stability AI’s generative Artificial Intelligence system. Drawing inspiration from Getty Images v Stability AI, this paper addresses the complexities surrounding copyright protection for text and data mining (TDM) in the UK. It argues that expanding Section 29A of the Copyright, Designs and Patents Act 1988 to exempt commercial AI developers from TDM licensing obligations would undermine the creative sector and hinder responsible innovation. This paper outlines the case’s background and provides justifications for requiring TDM licences in the training of commercial generative AI systems. It argues that licensing requirements prevent the unjust appropriation of creators’ work, foster valuable collaboration between creators and AI developers, and could even create new markets for existing works. The paper addresses practical challenges of TDM licensing, such as high costs, complexity, and the opacity of generative AI models. To address these issues, it proposes a set of reforms, including the adoption of standardised contracts for TDM, cross-licensing arrangements to facilitate fair data exchanges, and “nutrition labels” on AI-generated content to increase transparency and accountability. The paper concludes that these reforms, alongside the court decision proposed for Getty Images, could strengthen the UK’s AI and art industries by promoting innovation within a fair legal framework that strikes an appropriate balance of rights between technology developers and creators.
How Metaphors Influence Ontology, Epistemology, and Methods in AI: Rethinking the Black Box
Dr. Angy Watson (pp. 121 – 130)
In this response paper, I explore how metaphors influence ontology, epistemology, and methodology within AI. Using the example of the black box metaphor, I demonstrate that an over-reliance on one metaphor forecloses potential futures, limiting discourse, research, and policy. I thus conclude that reflexivity about our use of metaphors is necessary and that we should strive to utilise a range of metaphors to capture the full scope of the concepts we aim to express. To establish the foundation for my thesis, I examine and critique two articles: “The Ethnographer and the Algorithm: Beyond the Black Box” (Christin, 2020) and “Prediction Promises: Towards a Metaphorology of Artificial Intelligence” (Möck, 2022).