Artificial Intelligence in healthcare: exciting but complex

This blog is part of a series showcasing the People’s Research Café, which took place on 18 and 19 June 2022 in South Kensington, London, as part of the Great Exhibition Road Festival.

What is a People’s Research Café?

The People’s Research Café is a café with a twist. Visitors are welcome to sit down at a table co-hosted by an Imperial College researcher and a public contributor, whose role is to help the conversation flow freely. Over a free hot drink, visitors find out about the researcher’s project and are asked for their opinions on it. The researchers are then expected to use this feedback to improve their projects. The People’s Research Café has been run previously at two Imperial Festivals (2018 and 2019) and in four community venues. It was previously called the “PPI Café” and you can read about the one at the Imperial Festival 2018 here.

When did this People’s Research Café take place?

The People’s Research Café was hosted at the Great Exhibition Road Festival, held on 18 and 19 June 2022, and was a collaboration between the following teams/centres: Imperial Biomedical Research Centre, Imperial Clinical Trials Unit, Imperial Clinical Research Facility, London In-Vitro Diagnostics Cooperative, North West London Applied Research Collaboration, MRC Centre for Environment and Health, and the HPRU in Healthcare Associated Infections and Antimicrobial Resistance. Researchers from each centre were given the opportunity to apply to host a table at the Café for two hours, where they would explain their research in plain language and, with the help of a public contributor, ask visitors three questions. It was a requirement that the research project was at a stage where visitor feedback could still influence it. Researchers and public contributors were offered training on how the Café would work and how to get the most out of conversations with visitors. Ahead of the Café, researchers refined their plain language summaries and the three questions to pose to visitors, with the help of public involvement leads and a public contributor.

In conversation with: William Bolton (AI4Health Research Postgraduate) and Damien Ming (Clinical Research Fellow), Centre for Antimicrobial Optimisation, Imperial College London

What research project did you bring to the People’s Research Café and what is it about?

Healthcare is complex, and large amounts of new information are constantly being created. Clinicians need to understand and use this information in the most helpful way to ensure good patient care. Artificial intelligence (AI) is where computers assist by ‘understanding’ this data and coming up with predictions.

Our research group is developing AI-based systems for healthcare called “clinical decision support systems”. The goal is that these systems will help clinicians make better decisions about patient care and treatment. For example, we have been developing a system to help clinicians predict whether a patient has an infection or not, and to guide better use of antibiotics in UK hospitals.
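To make the idea concrete, here is a minimal, purely illustrative sketch of how such a prediction model might look in code. The features, numbers and data below are invented for illustration; they are not the system our group is actually building, which uses real hospital data and more sophisticated methods.

    # Purely illustrative: a toy "infection vs. no infection" predictor trained
    # on made-up numbers, not the real clinical decision support system.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 1000

    # Hypothetical features a clinician might already record:
    # temperature (deg C), white blood cell count (10^9/L), heart rate (bpm)
    X = np.column_stack([
        rng.normal(37.0, 1.0, n),
        rng.normal(9.0, 3.0, n),
        rng.normal(85.0, 15.0, n),
    ])
    # Synthetic label: higher temperature and white cell count make "infection" more likely
    risk = 0.8 * (X[:, 0] - 37.0) + 0.3 * (X[:, 1] - 9.0) + 0.02 * (X[:, 2] - 85.0)
    y = (risk + rng.normal(0.0, 1.0, n) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    # The output is a probability that a clinician can weigh alongside their own
    # judgement, rather than a final decision made for them.
    print("Probability of infection for first test patient:",
          round(model.predict_proba(X_test)[0, 1], 2))
    print("Held-out accuracy:", round(model.score(X_test, y_test), 2))

The point of the sketch is simply that routinely collected measurements go in and a probability comes out, which the clinician can then weigh alongside their own judgement.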

This is a new area of research, and it involves complicated issues about: i) how patient data should be used, ii) who is responsible for decisions when AI systems are involved, and iii) whether these AI systems can be biased against specific groups of patients in ways that are hard to detect.


What questions did you ask visitors to the People’s Research Café about your project?

The questions I asked visitors were as follows:

  • How would you feel about having an AI system make all treatment decisions for you without involving a doctor? Would your answer change if the medical condition being treated was a mild bacterial infection (e.g. an ear infection) compared with deciding on cancer treatment?
  • Would you want to understand how the AI system arrives at recommendations about your healthcare, e.g. what information is fed into it and what it has identified? To what extent do you think a doctor should understand the AI system’s recommendations?
  • Do you think a clinician should be responsible if the AI system they are using makes a mistake when recommending treatment for a patient? If so, to what extent?
Will and Damien discussing their project with the public


What did visitors tell you?

Views on AI in medicine

Most of the individuals who stopped by the café were optimistic and excited about the potential of AI in medicine. They understood that such technology could help medical professionals and would gradually be used more widely in the future.


Views on how AI should be used

The degree to which AI systems should act independently was widely discussed. Most individuals believed clinicians should have the final say and that AI should be used as a supporting tool. However, some people felt that if AI has been shown to outperform doctors for certain tasks, then it should be allowed to operate independently within that role. In general, however, those who attended the café agreed that a higher degree of human involvement should be in place for decisions with important consequences, given the potentially higher risks involved.

Whether patients should be involved in decision making and interact with AI systems in healthcare was also a complex issue. Some people wanted patient-centred care where their perspectives are considered, while others just wanted to be informed and given the chance to ask questions about why certain decisions were made. It was agreed that this is patient- and context-specific, and depends on the options available and the severity of the decision. For example, end-of-life care will always have a greater focus on patient preference.


Views on who is responsible for AI

The public agreed that it was the doctor’s responsibility to decide when it was appropriate to accept a recommendation from the AI system; where an AI system and a doctor disagreed, further expert clinical opinions should be sought. Responsibility and liability for decisions in which errors were made, such as the wrong treatment or dosage being administered, would lie with the decision-maker. For most of the public, this meant the doctors and the health service rather than the AI engineers.


Views on potential AI biases

Finally, the issues around biases within datasets, including minority groups not being appropriately represented, were raised frequently. It was agreed that this needs to be appropriately considered and addressed to ensure AI systems are fair and do not discriminate.


How will what visitors told you impact and/or change your project?

Visitors’ feedback will be invaluable in shaping future research directions and stakeholder involvement. In particular, issues of bias and fairness in medical datasets and AI algorithms will be explored by understanding the impact of sensitive attributes such as age, sex and ethnicity, and addressed through the use of appropriate machine learning methods. Furthermore, the public strongly highlighted issues around patient-centred care, transparency and responsibility in AI. Future research with clinicians will explore whether they agree with the public’s stance on these issues with respect to AI-assisted antimicrobial prescribing, to ensure future systems are clinically relevant and sustainably adopted.
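As a loose illustration of what examining the impact of sensitive attributes can involve (the data and the “group” attribute below are invented, and this is a simplified first step rather than our actual methodology), one can compare a model’s error rates between patient subgroups:

    # Purely illustrative: comparing a toy model's true positive rate across a
    # made-up sensitive attribute; a large gap would suggest the model serves
    # one group of patients less well than another.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import recall_score

    rng = np.random.default_rng(1)
    n = 2000
    group = rng.integers(0, 2, n)                  # hypothetical sensitive attribute
    X = rng.normal(0.0, 1.0, (n, 3))               # synthetic clinical features
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * group + rng.normal(0.0, 1.0, n) > 0).astype(int)

    model = LogisticRegression().fit(X, y)         # model never sees "group"
    pred = model.predict(X)

    for g in (0, 1):
        mask = group == g
        tpr = recall_score(y[mask], pred[mask])    # sensitivity within this subgroup
        print(f"Group {g}: true positive rate = {tpr:.2f}")

In practice, fairness assessment goes well beyond a single metric, but gaps like this are the kind of pattern the public’s feedback encourages us to look for and correct.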


What was your personal experience of taking part in the People’s Research Café?

I thoroughly enjoyed participating in the People’s Research Café. It was great to meet so many members of the public who are passionate about their health and about how technology can improve their care. The event improved my communication skills, as it was necessary to convey complex ideas in simple, concise language. It was also important to keep the conversation relevant to our questions and research area, to ensure fruitful discussion of nuanced and morally difficult questions. Overall, the People’s Research Café validated my view that public involvement is important and gratifying.

