This article examines the increasing political interest in the use of Artificial Intelligence (AI) in healthcare and considers how data protection legislation may help to build greater understanding and trust.

Artificial Intelligence (AI), machine learning, deep learning, neural networks – call it what you like, there’s a lot of excitement about the ability of software to analyse masses of data, spot patterns, learn (sometimes independently), draw conclusions and produce insights that are entirely new.

This level of excitement and chatter is not unusual in the world of new technologies. What is new, though, is the political and legislative interest in this technology and in its potential to re-shape our world. This is especially so in the case of healthcare.

In May 2018, the Government allocated millions of pounds for the NHS to use AI in the early diagnosis of cancer and chronic disease. Then, in October 2018, the newly appointed Health and Social Care Secretary, Matt Hancock, brought renewed focus to the use of AI with his ‘The Future of Healthcare’ policy document, which sets out plans to enable access to real-time data and to set standards for security and interoperability.

Most recently (June 2019), Simon Stevens, Chief Executive of the NHS, said: “We are seeing an artificial intelligence revolution that will be a big part of our future over the next five years, with technologies that can cut the time patients wait for scan results and ease the burden on hard-working staff. We’re therefore kicking off a global ‘call for evidence’ for NHS staff and technology innovators to come forward with their best ideas for how we should adjust our financial frameworks to best incentivise the use of safe and evidence-based AI and machine learning technologies across the NHS.”

My observation is that the technology needed in healthcare is not the clever, cutting-edge stuff. It’s the basic ability to access a patient’s record electronically, send emails rather than faxes, book appointments, automate rotas and surgery schedules and locate equipment digitally within a hospital. Investing in the basics would bring efficiencies, savings and better patient experience.

However, there is, of course, a place for cutting-edge technology, which is capable of transformative change rather than merely incremental efficiency. The challenge in healthcare is that the professionals who will use the technology will need to trust that it works and will indeed reduce the burdens on them.

That word, trust, sums up the most significant barrier to the adoption of AI in a healthcare setting. Patients do not know if they can trust new software, when no-one can explain in layman’s terms how it works. Doctors do not know which of the many applications out there have been properly coded or calibrated, with physician input and based on accurate data.

The House of Lords Select Committee on AI recently identified ‘transparency’ and ‘explainability’ as key requirements if AI is to become an integral and trusted tool in our society.

So that’s the political landscape. What about the legislative perspective?

The General Data Protection Regulation 2016 (“GDPR”) already has transparency and accountability at its core. The regulators recognised that if people do not trust organisations with their data, they will not want to share it or allow access to it. In turn, that will stifle innovation and prevent organisations from delivering better, more tailored services – such as personalised medicines. GDPR aims to foster trust through a range of measures designed to increase transparency and ensure that data is properly protected and handled. In addition to the well-established requirement for physical and technical security, these measures include:

  • Privacy notices that provide greater clarity about what data is collected, how it will be used and for what purpose. This is not without its challenges in the use of AI systems: how can you easily explain complex software? How do you let people know what data you hold when the AI could produce new data? How can you explain how data will be used when the AI could produce insights that were not expected or predicted?
  • Greater rights for individuals to control how their data is used, including rights to have data deleted and ported.
  • Data protection impact assessments to ensure that the use of any new technology will be GDPR compliant.
  • Privacy by design, so that the protection of privacy is built into the design of any new system rather than being a retro-fitted afterthought (a simple illustration follows this list).
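By way of illustration, below is a minimal Python sketch of one privacy-by-design measure: pseudonymising a direct patient identifier before a record reaches an analytics or AI system. The field names, sample record and key handling are all hypothetical, and a real deployment would need far more (secure key management, access controls and re-identification governance).

```python
# A minimal sketch of one privacy-by-design measure: pseudonymising
# patient identifiers before records are passed to an analytics or AI
# system. All field names, values and keys here are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-securely-stored-key"  # hypothetical; keep in a key vault in practice

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"nhs_number": "943 476 5919", "age": 54, "finding": "elevated risk"}
safe_record = {**record, "nhs_number": pseudonymise(record["nhs_number"])}
print(safe_record)  # the AI system never sees the real identifier
```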

GDPR also anticipates a particular type of processing which AI will use, known as “profiling”: the automated processing of data to evaluate, analyse or predict aspects of an individual, including their health. In a healthcare context, some AI will use this type of analysis, weighing a collection of factors about a patient to assist with diagnosis. Profiling is permitted, provided that it is properly explained in privacy notices, including the ‘logic’ involved and the significance and consequences of the profiling.
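To make the idea of explaining the ‘logic’ involved concrete, here is a deliberately trivial Python sketch (using scikit-learn) of a profiling model whose decision rules can be printed in human-readable form. The patient factors, training data and risk labels are entirely hypothetical; real diagnostic AI is far more complex, which is precisely the transparency challenge GDPR addresses.

```python
# A minimal sketch of 'explainable' profiling: a shallow decision tree
# whose decision rules can be rendered as plain-language statements.
# The features, data and labels below are entirely hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical patient factors: [age, BMI, systolic blood pressure]
X = [
    [45, 22.0, 118],
    [62, 31.5, 150],
    [55, 28.0, 135],
    [38, 24.5, 122],
    [70, 33.0, 160],
    [50, 26.0, 128],
]
y = [0, 1, 1, 0, 1, 0]  # 0 = low risk, 1 = elevated risk (illustrative only)

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# The 'logic involved' becomes a set of readable if/then rules, the kind
# of description a privacy notice or a reviewing clinician can work with.
print(export_text(model, feature_names=["age", "bmi", "systolic_bp"]))
```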

Clearly, then, ‘black box’ AI will not comply with GDPR, as well as being unappealing to healthcare professionals who want to be able to follow the machine’s logic and check the result(s) it has provided. Profiling also opens the possibility of the AI making automated decisions, such as giving a diagnosis or deciding whether or not to treat a patient. Used in this way, profiling can only be carried out if the individual has consented, the processing is necessary for the purposes of a contract between the patient and the clinician or organisation, or the processing is permitted by law.

Organisations, therefore, need to analyse carefully whether their use of AI amounts to ‘profiling’ and whether an ‘automated’ decision is made as a result. Where a human reviews the results of an AI’s analysis or recommendation, this is not an ‘automated’ decision, but it will, of course, be important for the human to be able to decide whether or not they agree with the AI.
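As a loose illustration of that human-in-the-loop design, the Python sketch below shows an AI output treated as a recommendation, with its rationale attached and a record kept of the human who made the final decision. All names and data here are hypothetical, and nothing in this sketch should be read as legal advice.

```python
# A minimal sketch of a human-in-the-loop gate: the AI's output is a
# recommendation, and a named human makes (and is recorded as making)
# the final decision. All names and values are hypothetical.
from dataclasses import dataclass

@dataclass
class Recommendation:
    patient_id: str
    suggestion: str   # the AI's proposed action, e.g. a referral
    rationale: str    # the 'logic involved', kept for the reviewer

def clinician_review(rec: Recommendation, approved: bool, reviewer: str) -> dict:
    """Record that a human reviewed the AI output and made the final call."""
    return {
        "patient_id": rec.patient_id,
        "ai_suggestion": rec.suggestion,
        "ai_rationale": rec.rationale,
        "final_decision_by": reviewer,  # a human, not the model
        "approved": approved,
    }

rec = Recommendation("P-001", "refer for scan", "risk score elevated: systolic_bp > 140")
decision = clinician_review(rec, approved=True, reviewer="Dr A. Example")
print(decision)
```

Keeping the rationale alongside the recommendation also supports the transparency measures described above.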

GDPR is designed to accommodate new technologies such as AI – indeed, the measures described above should help to facilitate greater transparency and understanding of how AI works and, therefore, help to ensure that it is trusted by all who use it.


Please note: This is a commercial profile

Jocelyn Paulley

Partner

Gowling WLG

Tel: +44 (0)203 636 7889

Jocelyn.Paulley@gowlingwlg.com

www.gowlingwlg.com

www.twitter.com/JossDoesIT
