Arrowe Park Teaching Hospital: An Aperture for Renewed Trust Through Artificial Intelligence (AI) Literacy

Author: Iman Khwaja

Published: December 19, 2024

Keywords: Healthcare AI, AI Literacy, AI Ethics

In the early stages of a collective transition to AI-driven healthcare, patients who are already reckoning with barriers to quality care are at risk of having those vulnerabilities amplified by the rapid advancement of healthcare technologies. Research on patient responses to health AI remains ongoing, but many, such as Robertson et al., who find “substantial resistance to AI” amongst patients, alarmingly suggest it could lead to “systemic inefficiencies stemming from patient resistance” (2023). Against this lack of confidence, reliance on feedback mechanisms such as the Friends and Family Test and the NHS Patient Survey Programme looks increasingly inadequate, supported by Robert and Donetto’s argument that they are “valued by policymakers but generate little insight for practitioners” (2020). In light of these issues, this position piece briefly considers the question: could AI-driven healthcare be better received if patients and other stakeholders were allowed to participate directly in its implementation?

A fair number of cyber-failures have emerged recently across the UK, one of the latest occurring at Arrowe Park Teaching Hospital in Wirral, which left patients out of the loop and appointments cancelled across the board. With the hospital’s official statement offering only the simple explanation that it was “responding to a cyber security issue” (2024), it is reasonable to assume that patients affected by the event could be questioning the trustworthiness of current technical healthcare operations, undoubtedly undermining their confidence in growing technological reliance. We are not short of patient testimonies revealing a general lack of trust in the NHS’s capacity to meet complex patient needs, illuminating both the minimal involvement patients have in systemic operations and a growing struggle amongst patients to interface with and understand the technologies designed to serve them. The Arrowe Park incident highlights both the fragility of technology-driven healthcare and the critical need for transparency to build patient trust and encourage involvement. In a future governed by advanced technologies used to meet sensitive health needs, clinicians and healthcare officials must be technologically literate enough to convey the details of failures transparently, and patients must be able to grasp those details in order to develop response, advocacy and interfacing capabilities across populations. The challenge of achieving this synergetic literacy between stakeholders is immense at a logistical, legal and operational level, but it is perhaps worth working to change the “patient-out-of-the-loop” (Schillinger et al., 2003) approach the NHS currently takes to its operations, enabling patients to become knowledgeable, willing recipients of, and major contributors to, technology-driven healthcare.

Literacy as a Resource for Equity in Health AI

Healthcare AI might be our most challenging yet promising application of global data intelligence. The Reform AI in Healthcare report identifies the NHS as a “high-risk area” and recognizes that AI systems are “not infallible or devoid of biases” (2018), despite their potential to resolve limitations faced within healthcare as we presently know it. Ensuring that embedded AI is responsive to the “sensitivity” (2018) of the NHS is fundamentally a multidisciplinary challenge, one that I argue could be mitigated by establishing an initial congruence in literacy between stakeholders that equips patients, clinicians and policymakers to co-design models for healthcare successfully. Establishing this literacy could include understanding how healthcare technologies function, their potential biases and their implications for patient care, and using stakeholder testimony to inform algorithms. Obtaining this kind of synergy between stakeholders is what Yekaterina calls a “delicate multidisciplinary task” (2024), and it must equally account for fearful, disagreeable and even technologically limited perspectives from end-users. By underpinning co-design methodologies with essential literacy skills that create contexts for interactions with AI tools, we could build what Cashaback et al. refer to as a “unification of computational, clinical and patient stakeholders” (2024): resilient, ethical tools whose algorithms are derived from their target end-users.

This shift positions end-users who are already disadvantaged by nationwide healthcare pressures as vulnerable to the additional requirement for new and advanced technological capabilities. Indeed, a system designed to meet complex patient needs must operate on the basis that patients can advocate for themselves through interactions that are comfortable, knowledgeable and explainable. In a future potentially driven by a convergence of human and technological medicine, however, patients must be prepared to receive and process information in new ways, and in turn be able to recognize new inequities and unethical practices that could affect them. This is not to say that all patients should be able to communicate complex medical or technological information irrespective of capability; all patients deserve to receive excellent care. But it does raise the question of how the quality of healthcare could be improved, beyond the efficiency offered by AI capabilities, if patients were given the opportunity to participate in its implementation. Such an approach could instead position the current limitations patients face as valuable testimonial contributions to the development of the models themselves, enabling developers, clinicians and patients to work collaboratively towards meeting one another’s needs by leveraging healthcare AI tools. While this may seem idyllic, there is no better time than the present to adopt what Thornton et al. refer to as “the right level of maturity” (2024) and restructure how patients currently contribute to the systemic design of the NHS, moving beyond feedback methodologies towards more congruent literacy and co-design opportunities.

Conclusion

In positioning patients and clinicians as co-designers of healthcare AI tools, we also move towards a collective resilience against other catastrophic risks, from algorithmic bias to bioterrorism, by establishing an “everyone-in-the-loop” philosophy that underpins the entire lifecycle of healthcare AI development. In choosing to rise above the “hype regarding AI capacities” (LaGrandeur, 2024), we might instead focus our efforts on steadily building congruent literacy and decision-making skills amongst all stakeholders, equipping them to become co-regulators of AI’s integration from beginning to end.

References

  • ITV News (2024) ‘Arrowe Park Hospital hopes to fully restore system in “next 24 hours” eight days after cyber attack’, 3 December. (Accessed: 3 December 2024).
  • Cashaback, J.G.A., Allen, J.L., Chou, A.H., Lin, D.J., Price, M.A., Secerovic, N.K., Song, S., Zhang, H. and Miller, H.L. (2024) ‘NSF DARE-transforming modelling in neurorehabilitation: a patient-in-the-loop framework’, Journal of Neuroengineering and Rehabilitation, 21(1), p. 23.
  • Harwich, E. and Laycock, K. (2018) ‘Thinking on its own: AI in the NHS’, Reform Research Trust.
  • LaGrandeur, K. (2024). ‘The consequences of AI hype’, AI Ethics, 4, pp. 653–656.
  • Robert, G. and Donetto, S. (2020) ‘Whose data is it anyway? Patient experience and service improvement’, Journal of Health Services Research & Policy, 25(3).
  • Robertson, C., Woods, A., Bergstrand, K., Findley, J., Balser, C. and Slepian, M.J. (2023) ‘Diverse patients’ attitudes towards Artificial Intelligence (AI) in diagnosis’, PLOS Digital Health.
  • Schillinger, D., Piette, J., Grumbach, K. et al. (2003) ‘Closing the Loop: Physician Communication With Diabetic Patients Who Have Low Health Literacy’, Archives of Internal Medicine.
  • Thornton, D. et al. (2024). ‘Priorities for an AI in Healthcare Strategy’, The Health Foundation.
  • Yekaterina, K. (2024). ‘Challenges and opportunities for AI in healthcare’, International Journal of Law and Policy, pp. 11–15.