The future of clinical imaging diagnostics – a hybrid of human eye and AI?

“This is a kind of hybrid situation”, says Dr Grant Mair, a neuroradiologist from the University of Edinburgh. Mair is co-author of a recent study applying artificial intelligence to imaging analysis of CT brain scans. “We’re not, you know, putting our feet up and letting the machine do it – this is about making us better.”

The software the team has developed is 85% accurate in detecting the severity of small vessel disease (SVD), one of the commonest causes of stroke and dementia. This means it’s as accurate as human experts in the field.

Ensuring the quality of data fed into AI systems is fundamental, as it provides the parameters within which the software learns. AI systems are “only as good as the data that goes into them”, explains Mair. “If you teach the machine what [SVD] looks like, then it takes that as its truth.”

Liang Chen, a computer scientist from Imperial College London and lead author of the study, explains that the team used one data set to train the software, and a completely separate set to tune the parameters and evaluate the system. This method helped avoid “overfitting”, which can occur if the system fits too closely to the example data, rendering it ineffective when faced with new data that it hasn’t seen before.
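For readers curious what "a completely separate set" means in practice, here is a minimal sketch of that idea. The function and the scan identifiers are hypothetical illustrations, not the study's actual pipeline: the point is simply that the data used to tune and evaluate the system never overlaps with the data used to train it.

```python
import random

def split_dataset(scans, train_frac=0.6, tune_frac=0.2, seed=0):
    """Shuffle and partition scans into train / tune / evaluate sets.

    Keeping the tuning and evaluation scans completely separate from
    the training scans is what guards against overfitting: the system
    is only judged on data it has never seen before.
    """
    rng = random.Random(seed)
    shuffled = scans[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_tune = int(len(shuffled) * tune_frac)
    train = shuffled[:n_train]
    tune = shuffled[n_train:n_train + n_tune]
    evaluate = shuffled[n_train + n_tune:]
    return train, tune, evaluate

# With 100 hypothetical scans: 60 to train, 20 to tune, 20 to evaluate.
train, tune, evaluate = split_dataset(list(range(100)))
```

If the same scan appeared in both the training and evaluation sets, the system's measured accuracy would flatter it: it would partly be recognising scans it had memorised rather than generalising to new patients.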

The researchers also applied a “brain atlas”, derived from MRI images showing SVD, to help exclude false positives and reduce the runtime of the process. The “atlas of SVD” is a map showing the probability of each voxel, the three-dimensional equivalent of a pixel, being SVD or not. “Given this atlas we were able to focus on certain regions of interest of SVD [excluding] a large amount of false positives,” says Chen. SVD doesn’t occur in the ventricles or the grey matter of the brain, “so we use an atlas to limit the region within the white matter.”
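The atlas idea can be pictured as a simple filter. The sketch below is a hypothetical illustration, not the team's code: each voxel carries a prior probability, taken from the MRI-derived atlas, that SVD can occur there, and candidate detections falling in low-probability voxels (ventricles, grey matter) are discarded as likely false positives.

```python
def filter_candidates(candidates, atlas, threshold=0.5):
    """Keep only candidate voxels whose atlas prior exceeds the threshold.

    candidates: list of (x, y, z) voxel coordinates flagged by the detector
    atlas: dict mapping (x, y, z) -> prior probability of SVD at that voxel
           (voxels missing from the atlas are treated as probability 0)
    """
    return [v for v in candidates if atlas.get(v, 0.0) >= threshold]

# Toy atlas: one white-matter voxel where SVD is plausible,
# one ventricle voxel where it essentially never occurs.
atlas = {(1, 1, 1): 0.9, (0, 0, 0): 0.05}
candidates = [(1, 1, 1), (0, 0, 0), (2, 2, 2)]
print(filter_candidates(candidates, atlas))  # → [(1, 1, 1)]
```

Restricting the search to plausible regions both removes false positives and, because far fewer voxels need detailed analysis, shortens the runtime, which is the dual benefit Chen describes.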

So, how is this research developing the landscape of AI? “Most of the work that’s gone on in AI brain imaging is about MRI,” says Mair. “MRI gives you pictures that have lots of contrast in them, so it’s easy to spot abnormalities.” What the researchers have done here is apply the technology to CT scanning, which is used almost ubiquitously for brain imaging, but which produces images that are much harder for both humans and machines to read.

If MRI scans produce images of a much higher clarity, why are they not the go-to method for brain scanning? It comes down to safety, practicality and availability. MRI uses a very strong magnet, rendering it unsafe for patients with pacemakers, for example. It requires multiple sequences, taking around twenty minutes, compared with around five or six seconds for CT imaging. “So, if a patient’s unwell and they’re wriggling around and they’re confused…it can be difficult to acquire nice images in MR”, explains Mair. MRI scanners are also more expensive, and not in operation twenty-four-seven in many UK hospitals.

The AI technology could be applied in emergency situations, giving doctors the ability to personalise treatment by estimating a patient’s likely risk of haemorrhage. “Information is power…especially in [critical] situations,” says Greg Hollingworth from Different Strokes, a charity that helps younger stroke survivors. A stroke survivor himself, Hollingworth emphasises the significance of the FAST campaign. “Time is very important [in potentially reducing the] long-lasting injuries you might get… anything that decreases the time by which people can make a better decision is beneficial.”

The software can scan millions of CT images at a time and produce consistent results. “If a machine looks at a scan today, and gives you an output, and looks at the same scan again tomorrow it’ll give you the same output, because it’s very much a rules-based system,” says Mair. “If you ask me as a human, or another radiologist to look at a scan today and look at the same scan tomorrow you might get a slightly different answer, and that’s just…human nature.” It can also provide clarity when detecting subtle changes in slowly progressive diseases, which an expert might struggle to identify.

Might advances in one application of AI be utilised for another? Take DeepMind’s celebrated self-teaching AI, AlphaGo, for example. The AlphaGo software has defeated human grandmasters at the ancient Chinese game Go, teaching itself winning moves that no human could have conceived. At the forefront of AI research, could AlphaGo’s technology also be applied to brain image analysis?

“For Go, it’s a very well-known environment,” says Chen. “The board is known; there are only white and black players and everything is fixed… but for other cases, like healthcare, there are no ground rules,” he says. “There are so many unknowns, which means the system cannot be perfect… DeepMind made a significant breakthrough [with AlphaGo], but it’s also limited in that Go, or similar games, cannot be extended to other applications like security or healthcare at the moment.”

Does AI run the risk of removing human nuance from diagnostics? Mair emphasises that we shouldn’t be worried about losing human interaction in the face of AI. “It might be that machines kind of whizz through the scans, highlighting [characteristics that it] thinks are problems”, he says, “and then someone like me comes along and interprets those problems.”

“What we’re talking about is adding to the human, rather than taking the human away.” In this case, at least, it’s not AI versus human eye, but man and machine working symbiotically, augmenting sight and delving deeper into the human brain.

Claudia Cannon is studying for an MSc in Science Communication at Imperial College London

Banner Image: CT scan, Radpod.
