Law students are often asked in job interviews how they think AI will influence and change legal practice. The question conjures up images of robot judges and lawyers. We have self-driving cars, so why not automated justice? Feed your case into a computer and out comes a ruling. Surely this would be absolute justice: a decision not influenced by the eloquence of the lawyers, the demeanour of the witnesses or the environment of the courtroom; pure justice based on reason and the law, albeit machine reasoning.
The Law Society (2018) published a forward-thinking, horizon-scanning paper, Artificial Intelligence and the Legal Profession. This identified several key emerging strands of AI development and use: Q&A chatbots, document analysis, document delivery, legal adviser support, case outcome prediction and clinical negligence analysis:
‘Fletchers, the largest UK medical negligence law firm, has teamed up with the University of Liverpool with the aim of creating a clinical negligence ‘robot lawyer’—in practice, a decision support system which reviews similar previous cases. The project has the support of a £225,000 grant from government-backed funder Innovate UK.’
AI can also be seen in law firm management, legal research, case management systems and so on.
AI, health law and patient safety
AI is making an impact in the legal profession, and this will have a commensurate impact on clinical negligence litigation and, more broadly, on general patient safety strategies and policies.
In terms of clinical negligence litigation, the Law Society (2018) discussed case outcome prediction:
‘Researchers at University College London, the University of Sheffield and the University of Pennsylvania applied an AI algorithm to the judicial decisions of 584 cases that went through the European Court of Human Rights and found patterns in the text. Having learned from these cases, the algorithm was able to predict the outcome of other cases with 79% accuracy.’
The report further states that the research found that, rather than legal argument, the most reliable predictors of case outcomes were non-legal elements: the language used, the topics covered and the circumstances mentioned in the case text.
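As an illustration of the general technique, the sketch below shows in broad terms how a text-based outcome classifier of this kind can be built. It is a minimal, hypothetical example: the case fragments, labels and choice of model are placeholders, not the features or algorithm used in the published study.

```python
# Minimal sketch of text-based case outcome prediction (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus: fragments of judgment text with outcomes
# (1 = violation found, 0 = no violation). Real studies use full case texts.
case_texts = [
    "applicant held in prolonged detention without judicial review",
    "conditions of detention caused serious suffering to the applicant",
    "domestic courts examined the complaint fairly and promptly",
    "the interference was lawful, proportionate and necessary",
]
outcomes = [1, 1, 0, 0]

# Convert text to word-frequency features, then fit a linear classifier
# that learns which textual patterns are associated with each outcome.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(case_texts, outcomes)

# Predict the outcome of an unseen (hypothetical) case description.
print(model.predict(["applicant detained without any review for years"]))
```

On realistic corpora, accuracy would be estimated on held-out cases rather than on the training data, which is how a figure such as the 79% reported above would be obtained.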
Clinical negligence claims prediction
In discussing AI, it is important to have a definition, and the broad one used by the Law Society is both clear and helpful:
‘What is AI? The term ‘Artificial Intelligence’ can be applied to computer systems which are intended to replicate human cognitive functions. It includes ‘machine learning’, where algorithms detect patterns in data, and apply these new patterns to automate certain tasks.’
The NHS AI Lab Skunkworks (NHSX, 2021) has reported on a project with NHS Resolution: a rapid feasibility study investigating whether machine learning can be used to predict the number of claims a trust is likely to receive, and to learn what drives those claims, in order to improve patient safety. This is clearly an exciting project, with major implications for patient safety and clinical negligence claims management in the NHS. A rapid delivery plan was made to:
- Develop a machine learning model that could predict claims
- Produce a code pipeline that could prepare input data, then train and run the chosen model.
The report states that the project aimed to prove the value of machine learning in deriving insights from the available data. It goes into some detail on the automated machine learning and testing methods used, the constraints that affected the project, data security, impact, outcomes and next steps.
The results are taken as an indication that claims prediction is possible, but that large quantities of data are needed to do this; a simplified sketch of such a pipeline is given below.
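The following is a minimal sketch of a claims-prediction pipeline of the kind described, illustrative only: the trust-level features, figures and choice of a random forest regressor are assumptions made for this example, standing in for whatever data preparation and automated model selection the Skunkworks team actually used.

```python
# Illustrative claims-prediction pipeline (hypothetical data and model).
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical trust-level data: specialty mix, waiting times and
# historical claim counts. All figures are invented for illustration.
data = pd.DataFrame({
    "has_obstetrics":   [1, 0, 1, 1, 0, 1, 0, 1],
    "has_emergency":    [1, 1, 0, 1, 1, 0, 1, 1],
    "mean_wait_weeks":  [14, 9, 18, 22, 7, 16, 11, 20],
    "claims_next_year": [42, 18, 35, 61, 12, 30, 21, 55],
})

# Prepare input data: separate the features from the target to predict.
X = data.drop(columns="claims_next_year")
y = data["claims_next_year"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Train the model on historical trusts, then predict claim numbers
# for trusts held out of training.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(model.predict(X_test))
```

The report's point about data volume is visible even in this toy form: with only a handful of trusts, such a model cannot generalise reliably, which is why large quantities of data are needed.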
Findings include (NHSX, 2021:4):
- The presence of particular specialties in a trust tends to correlate with the predicted rate of claims
- Longer waiting times also appear to correlate with the predicted rate of claims, although this varies with specialty (see the illustrative sketch below).
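The second finding can be illustrated with a short, hypothetical calculation: computing the waiting-time/claim-rate correlation separately for each specialty. The specialties and figures below are invented for illustration and are not taken from the report.

```python
# Hypothetical illustration: correlation between waiting times and claim
# rates, computed per specialty. All figures are invented.
import pandas as pd

records = pd.DataFrame({
    "specialty":       ["surgery"] * 4 + ["dermatology"] * 4,
    "mean_wait_weeks": [8, 12, 16, 20, 8, 12, 16, 20],
    "claims_per_1000": [1.1, 1.6, 2.2, 2.9, 0.4, 0.5, 0.4, 0.6],
})

# A strong positive correlation for one specialty and a weaker one for
# another mirrors the report's "varies with specialty" finding.
for specialty, group in records.groupby("specialty"):
    r = group["mean_wait_weeks"].corr(group["claims_per_1000"])
    print(f"{specialty}: r = {r:.2f}")
```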
The project is a valuable one in terms of forecasting claims and enhancing our understanding of claim drivers, and it provides a good foundation for further work. It is a current, real-world example of how AI can affect clinical negligence litigation and patient safety, and it feeds into the trends identified by the Law Society (2018). The relationship between law and science is a long-established one, and there is an aspect of legal theory called 'jurimetrics' (Loevinger, 1953); this project is a modern-day example of jurimetrics in action.
A note of caution
We can see here an illustration of the use and potential of AI in clinical negligence litigation and patient safety. Important legal, ethical and patient safety issues are also raised. It is important to guard against getting so caught up in the flurry and excitement of disruptive technologies and AI advancements that these issues are forgotten: what happens when software or algorithms fail, product liability, negligence liability and ultimate responsibility. There are also questions regarding discrimination, data protection, confidentiality, privacy, human rights and so on. These issues are the subject of regular debate and research in this fast-moving area.
Law and ethics of AI
Ordish (2018) wrote a briefing on legal liability for machine learning in health care that raised several issues, including whether clinicians who have taken due care should be liable if an algorithm causes damage or loss to their patient. This point is important for negligence law.
‘The expansion of machine learning in medicine could also exacerbate old ambiguities in product liability, leaving those that have suffered loss without any robust way to recover damages.’
The concept of ‘black box medicine’ was also discussed, where nurses, doctors and others must interpret and explain the results of machine learning, characterised as ‘typically probabilistic and sometimes inscrutable’ (Ordish, 2018:2).
Ethics and governance of artificial intelligence for health
The World Health Organization (WHO) has issued a global report, Ethics & Governance of Artificial Intelligence for Health, with six guiding principles for the design and use of AI (WHO, 2021). The publication follows a 2-year consultation by a panel of WHO-appointed international experts. It covers the following topics:
- Applications of artificial intelligence for health (section 3)
- Laws, policies and principles that apply to use of artificial intelligence for health (section 4)
- Key ethical principles for use of artificial intelligence for health (section 5)
- Ethical challenges to use of artificial intelligence for health care (section 6)
- Building an ethical approach to use of artificial intelligence for health (section 7)
- Liability regimes for artificial intelligence for health (section 8)
- Elements of a framework for governance of artificial intelligence for health (section 9).
Liability regimes for artificial intelligence for health
Within section 8, several key legal issues relating to AI and health are discussed, including the question: are machine-learning algorithms products? On fault and liability, the report notes:
‘A liability regime for AI might not be adequate to assign fault, as algorithms are evolving in ways that neither developers nor providers can fully control. In other areas of health care, compensation is occasionally provided without the assignment of fault or liability, such as for medical injuries resulting from adverse effects of vaccines.’
The report states that the WHO should examine whether no-fault, no-liability compensation funds are an appropriate mechanism for providing payments to individuals who suffer medical injuries due to the use of AI technologies, including how to mobilise resources to pay any claims.
The report provides a useful and contemporary source of information on the legal and ethical aspects of AI and health. In terms of ethics, six principles are given as the basis for AI regulation and governance:
- Protecting human autonomy
- Promoting human wellbeing and safety and the public interest
- Ensuring transparency, explainability and intelligibility
- Fostering responsibility and accountability
- Ensuring inclusiveness and equity
- Promoting AI that is responsive and sustainable.
WHO (2021) stands as a guide for countries across the world on how to get the best from AI while avoiding problems and minimising risks.
Conclusion
AI does appear to be here to stay in terms of patient safety and clinical negligence litigation, and elements of this can already be seen in play. AI has already made a significant impact on NHS care delivery and treatment, and its rise will continue. Health IT can be viewed as one of the great disrupters of established practices, in the same way as online taxi ordering, online booking of accommodation or self-driving cars. New ways of doing things have been carved out, to which we will all have to respond. New norms have been established that we cannot really escape, even if we were minded to do so. At the same time, it should not be forgotten that there are legal, ethical and patient safety issues to consider in any discussion of AI and health.
In considering these issues, it is important to remember the personal, professional and legal duties and responsibilities of the individual nurse or doctor to practise safely, which cannot be delegated to AI or to others. We should not be so focused on macro-scale, system-level considerations that we forget these responsibilities. Personal professional judgement should always be safeguarded in any AI system design. It is to be hoped that, in the long term, AI will not unduly depersonalise the relationship between health professional and patient. AI should always be seen as a tool, not as the directing force. It is important to remember that, from both a litigation and an ethical perspective, while the computer may say 'no', the nurse or doctor must always be free to say 'yes'.