
July 18, 2024

Risk and reward: managing the exponential rise of AI in healthcare – Part 1

By Will Marshall, Head of Legal and Risk Management

Artificial Intelligence (AI) is widely viewed as a potential saviour that can help to solve an ailing health and care sector’s increasingly complex and expensive needs. With a huge number of uses spanning from diagnostics to domiciliary care, from robotic surgery to the most mundane administrative tasks, these shiny new technologies have the potential to truly revolutionise the sector. But what legal risks does AI pose to service users and health and care providers, and how can the latter look to protect themselves?

A tool for the ages

AI is the process by which a computer is trained to perform tasks that mimic human behaviour and cognitive functions such as perceiving, reasoning, learning, and problem-solving.

AI is rapidly evolving and becoming ever more powerful. This unprecedented acceleration is being driven by a rapid increase in computing power (the compute used by leading AI systems is said to be doubling approximately every 3.5 months), enabling AI systems to handle complex calculations faster and more efficiently, leading to more advanced and reliable solutions.

At the same time, there has been a significant increase in the volume of data, providing AI systems with more material to learn from. This is combined with a decrease in the cost of data storage, meaning it has become more economical to collect and process large amounts of data. Coupled with the rise of new technologies, such as wearables and other portable devices that are revolutionising accessibility, the advancement of AI would appear to be unstoppable.

AI and healthcare – a panacea for an ailing system?

Several key factors are driving the rapid growth of AI in healthcare. A major driver is the trend towards increased automation in healthcare delivery. This has become commonplace since the Covid-19 pandemic, with a surge in telemedicine and remote consultations. This shift has allowed patients to receive care without the need to visit hospitals, reducing the risk of infection and making healthcare more accessible. AI also played a crucial role in the rapid uptake of vaccines by streamlining distribution and optimising resource allocation.

Another important factor is the increasing emphasis placed on prevention rather than cure. As workforce pressures intensify and the number of older adults continues to rise, there is a growing appreciation of the need for a health and care system that promotes self-management and proactivity in order to improve people’s healthy life expectancy – the years when they are well and active – rather than simply enabling people to live longer in ill health and frailty. AI is playing an increasingly prominent role in delivering this agenda and helping to keep people out of hospitals by focusing on prevention and self-care. Technologies are being used to develop personalised health plans, monitor chronic conditions and provide real-time health advice, empowering individuals to manage their health more proactively.

A question of trust – AI regulation

However, despite its obvious potential to be a force for good, many people feel a deep sense of unease, apprehension and mistrust when it comes to AI. This is hardly surprising given the sheer pace and scale of the AI revolution we are all now witnessing. These ethical misgivings are particularly acute in the health and care sector where there is genuine concern about the displacement of human judgement and the absence of compassion in a care journey delivered by a machine. This unease adds to worries about a perceived lack of transparency in how AI systems will make decisions, not to mention data privacy, bias, and the potential for user or machine error.

Winning and maintaining public trust in the safety of AI is therefore imperative for governments and providers looking to embed AI in their care pathways. AI regulation goes to the heart of this debate and is especially important in the health and care sectors. However, recent developments have revealed a growing schism between the more conservative, risk-based approach to regulation adopted by the EU and the more entrepreneurial, light-touch approach of the previous UK government. The US approach sits somewhere in between.

The EU AI Act 2024

The EU made headlines worldwide when the EU Council approved the EU Artificial Intelligence Act on 21 May 2024 (see “EU AI Act: first regulation on artificial intelligence”, European Parliament, europa.eu). This legislation is the world’s first risk-based framework for regulating AI systems. It categorises non-exempted AI systems into four broad risk levels: unacceptable, high, limited and minimal. Applications posing an unacceptable risk are banned outright. High-risk applications must comply with security, transparency and quality obligations. Limited-risk applications are subject only to transparency obligations, while minimal-risk applications are not regulated. These rules will come into force from May 2025, under the oversight of national watchdog authorities in each member state.

The EU is confident that its comprehensive regulatory framework will set a global precedent, ensuring that safety and ethical considerations are not lost in the noise of the dynamic growth potential of AI technologies. It is important to note that UK-based organisations with operations in the EU, or those deploying AI systems within the EU, are likely to fall under the jurisdiction of the EU AI Act. UK organisations will therefore need to keep abreast of legislative changes and any potential future alignment between the UK and EU in this area.

The UK approach

The UK’s position on AI regulation is understandably less settled, given the very recent election. The previous Conservative government championed a pro-innovation approach, as set out in its white paper published last year (“A pro-innovation approach to AI regulation: government response”, GOV.UK). As part of this strategy, the UK was looking to develop a non-binding, cross-sector, principles-based framework that would enable existing regulators, such as the Information Commissioner’s Office, Ofcom and the Financial Conduct Authority, to apply bespoke measures within their respective fields.

Responsibility for regulation in the healthcare sector fell to the Medicines and Healthcare products Regulatory Agency (MHRA). The MHRA published its strategic response to the white paper on 30 April 2024 (“MHRA’s AI regulatory strategy ensures patient safety and industry innovation into 2030”, GOV.UK). Dr Laura Squire, Chief Quality and Access Officer at the MHRA, acknowledged the opportunity that AI offers the sector but emphasised the need to “ensure there is risk proportionate regulation of AI as a Medical Device (AIaMD) which takes into account the risks of these products without stifling the potential they have to transform healthcare.”

A change of direction under Labour?

Under the new Labour government, we can expect to see a shift away from the Conservative government’s permissive strategy. However, any new legislation is not expected to be as wide-ranging as the EU AI Act (although plans to rebuild a closer trading relationship with the EU may yet lead to greater alignment with EU regulation). Labour has not proposed a general AI regulation per se, but the government is expected to introduce stronger regulation in targeted areas, which is likely to include healthcare.

In the second part of this blog, out next week, we delve into the key legal issues facing healthcare providers implementing AI, and discuss essential risk management strategies and insurance considerations. We'll provide practical insights to help healthcare organisations navigate the complex landscape of AI integration safely and effectively.

"The information contained in this article does not represent a complete analysis of the topics presented and is provided for information purposes only. It is not intended as legal advice and no responsibility can be accepted by Altea Insurance or WTW for any reliance placed upon it. Legal advice should always be obtained before applying any information to particular circumstances."
