- Feb 15, 2025
Is It Possible to Use AI Ethically in Healthcare?
- Giselle R and Mitra V
- Medicine, Technology
- 0 comments
Artificial intelligence has fundamentally altered the healthcare industry through a speed and precision that no human can match. Technologies such as IBM Watson and DeepMind demonstrate AI’s revolutionary ability to process troves of medical data, allowing professionals to diagnose patients swiftly and provide them with the highest standard of treatment. IBM Watson, created in 2004 (La Rose, 2024), can process patient records, complex research papers, rigorous clinical trials and even real-time patient data, and is capable of making nuanced, well-informed decisions to deliver personalised care (Mishra, 2024). Similarly, DeepMind is used to analyse eye scans efficiently, allowing optometrists to diagnose retinal diseases in a timely manner. Approximately 80% of visual impairment is avoidable if treated swiftly (WHO, 2003), so implementing this technology - which matches the accuracy of expert doctors (Moorfields, 2024) - could prevent needless sight loss. As AI becomes increasingly integrated into medicine, it threatens roles once reserved for human professionals and raises a pressing question:
Does such unrelenting speed triumph over a doctor’s experience, compassion, and ethical reasoning?
An Enhancing Tool or a Replacement?
Unlike human professionals, AI is relentlessly efficient: it requires neither rest nor a salary. Unless an external factor shuts its servers down, it will continue working without fatigue. Although hospitals operate around the clock, specialists may not always be available for urgent diagnoses, and waiting for their arrival - however long that may be - could place the patient at risk. Additionally, AI lacks the personal and emotional bias that might otherwise influence a doctor’s treatment decisions, even when guidelines dictate otherwise.
As noted above, AI can deliver highly accurate outcomes with a small margin of error, but its effectiveness depends on the quality of the data it is trained on. Although AI is seen as a beacon of objectivity, it can still inherit bias if the data it is given reflects historical inequalities (James, 2024). Because the majority of medical research was conducted on white men, an AI’s conclusions may be skewed against, or fail to accommodate, patients of a different gender or race. The NHS 6 Cs are considered essential for high-quality care: Compassion, Competence, Communication, Courage, Commitment, and Care. The inability to experience genuine empathy reduces the effectiveness of AI: there is no such thing as computational compassion, because compassion must be built from experience.
Implementing AI in the medical industry opens the door to immense change, both positive and negative. On one hand, it offers improved access to personalised support, which would mitigate extreme demand and reduce pressure on staff - inevitably cutting NHS costs and allowing for a better quality of care (BMA, 2024). On the other, when something goes wrong - such as a false diagnosis - who is to blame for the AI’s decision? The programmers, the doctors using it, or the hospital? It would be difficult to hold anyone accountable, because the blame could be shifted to any one of those three groups. An easy solution would be to require human agreement before any final decision, but that would defeat the purpose of AI, since it still demands human involvement. Surely, then, it is more plausible to use AI alongside doctors - to speed up diagnosis, collect and evaluate data, or even handle admin work - than to replace doctors entirely with robots?
The Impact on Doctor-Patient Relationships
The relationship between patients and doctors is built upon several principles: mutual trust, faith in the doctor’s competence, empathy, understanding of the patient’s experiences, and loyalty to never disregard the patient’s wishes (Ambrose & Kingsford, 2024). A patient whose doctor meets these criteria is more likely to follow their treatment recommendations and less likely to raise negligence or general complaints (Chipidza et al., 2015). This demonstrates how crucial it is to form bonds with the patient; without them, healthcare - an industry that revolves around caring for people who are ill and vulnerable - feels impersonal. Around 53% of the UK’s general public believe the introduction of AI will make them feel more distant from healthcare staff (Thornton et al., 2024), with a greater number supporting the use of AI only for tedious admin work. This reinforces the idea that AI can be a complementary force when used alongside a doctor who has already built that essential rapport and trust with their patient. It should only ever be an extension of human expertise - a tool, not a separate system.
To conclude, though there is great appeal in applying AI to healthcare to bolster the accuracy and speed of diagnosis, the pursuit is fruitless if we lose sight of what hospitals are fundamentally for. As Francis Peabody put it, “The treatment of a disease may be entirely impersonal; the care of a patient must be completely personal” (Peabody, 2015). AI’s merits extend only to treatment, not to care, whereas a doctor embodies both wholeheartedly. As technology advances, the integration of artificial intelligence will become inevitable. Perhaps one day it will possess genuine emotions rather than merely emulate them, but until then the focus should remain on what matters most: the patient, not the paperwork.
References
La Rose, D. (2024, August 13). From checkers to chess: A brief history of IBM AI. IBM. https://www.ibm.com/products/blog/from-checkers-to-chess-a-brief-history-of-ibm-ai
Mishra, R. (2024, May 20). AI Revolutionizing Healthcare: A Look at IBM Watson’s Impact in the Healthcare Industry. https://wearecommunity.io/communities/healthcare/articles/5012
World Health Organization (WHO). (2003, October 9). Up to 45 million blind people globally - and growing. https://www.who.int/news/item/09-10-2003-up-to-45-million-blind-people-globally---and-growing
Moorfields Eye Hospital. (2024). Google DeepMind. https://www.moorfields.nhs.uk/research
James, T. A. (2024, September 24). Confronting the Mirror: Reflecting on Our Biases Through AI in Health Care. Harvard Medical School. https://postgraduateeducation.hms.harvard.edu/trends-medicine/confronting-mirror-reflecting-our-biases-through-ai-health-care
British Medical Association (BMA). (2024). Principles for Artificial Intelligence (AI) and its application in healthcare.
Ambrose, J. A., & Kingsford, P. A. (2024, January 26). Artificial Intelligence and the Doctor-Patient Relationship. The American Journal of Medicine. https://www.amjmed.com/article/S0002-9343(24)00043-3/fulltext
Chipidza, F. E., Wallwork, R. S., & Stern, T. A. (2015, October 22). Impact of the Doctor-Patient Relationship. https://www.psychiatrist.com/pcc/impact-doctor-patient-relationship/
Thornton, N., et al. (2024, July 31). AI in health care: what do the public and NHS staff think? The Health Foundation.
Peabody, F. W. (2015, May 12). The Care of the Patient. JAMA. https://jamanetwork.com/journals/jama/article-abstract/2290625