
Brain imaging, neural networks and a year in research

As the last conference (for now) has passed, I found myself in a position to bring the blog back to life after a bit of a pause during the summer. As this month also marks the end of my first year working in research, I thought I’d familiarise you with some of the work that got me into the lab and formed a large part of that first year.

A year in research it has been, and I still remember vividly my first days in the lab, which started even further back, during the final year of my BSc degree at Imperial College London. Given that an online Python course over the summer and a statistics course in R during my studies were my only exposure to coding or computer science, it was indeed a slightly frightening situation I had got myself into: doing a final year project (on which a major part of my final mark depended) training a machine learning algorithm to segment a muscle in brain images. Fast forward half a year, and there I was with a degree in my hands (figuratively speaking, as Covid restrictions led to me finishing my degree from home) and a trained neural network which, although I did not know it then, would later form the basis of an important part of my work.

But let’s not go down memory lane for too long and focus on what’s important: what is a neural network, and why did I need to train one to segment a muscle in the brain? A neural network is a type of machine learning algorithm which loosely mimics the architecture of the human brain and uses training data to solve a problem, improving its performance over time without explicit instructions from a user. As training progresses, the network finds the parameters that best produce the desired output, in our case a segmentation: a version of the original image in which all of the pixels belonging to the desired object are assigned one value and all of the background pixels a different one.
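To make the idea of a segmentation concrete, here is a minimal illustration (my own sketch in Python/NumPy, not code from the project) of a tiny image and its binary mask:

```python
import numpy as np

# A toy "image": a small bright structure on a dark background.
image = np.array([
    [12, 80, 85, 10],
    [11, 90, 88,  9],
    [10, 14, 12,  8],
])

# A hand-made ground-truth mask for that structure: object pixels
# are assigned 1, background pixels 0.
mask = np.array([
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
])

# During training, the network's predicted mask is compared against
# ground-truth masks like this one, and its parameters are updated
# to reduce the disagreement.
```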

In our case we wanted to segment the temporalis muscle, which can be seen on routinely performed brain scans in brain tumour patients. We chose this muscle for a reason: there is evidence that it could be used to assess sarcopenia, the loss of muscle mass and/or function. Furtner et al. showed that temporal muscle thickness is predictive of overall and progression-free survival in glioblastoma patients, and in a paper by Zakaria et al., temporalis muscle width accurately predicted 30- and 90-day mortality as well as overall survival from diagnosis. You can find a quick overview of sarcopenia and its assessment in a previous blog post by Dr James Wang (https://blogs.imperial.ac.uk/componc/2021/04/06/covid-and-the-return-to-research/). Using the temporalis muscle is advantageous in a brain tumour patient cohort, as the other types of scans on which sarcopenia could be assessed are not usually acquired as part of their general cancer treatment. We therefore wanted to train a neural network to segment the temporalis muscle automatically, which would be much quicker than manually measuring its width on each patient’s scan.

During my final year project, I successfully trained a network to segment the temporalis muscle. I used an existing implementation of a U-Net (a type of neural network architecture) from GitHub; however, as I was not getting the required output, I had to find ways to make it work. Finally, after switching to a different loss function (one more suitable for our task, where the segmented object occupies only a small proportion of the whole image) and trying hyperparameter optimisation, I managed to get a well-performing model just in time before the end of my placement.
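The post does not name the loss function we ended up with, but a common choice for exactly this situation, where the foreground occupies only a small fraction of the image, is the Dice loss. A minimal sketch in PyTorch (my assumption; the project’s actual implementation may well differ):

```python
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss, suited to segmentation tasks with small foregrounds.

    pred   -- predicted probabilities in [0, 1], shape (N, H, W)
    target -- binary ground-truth masks, same shape
    """
    pred = pred.reshape(pred.shape[0], -1)
    target = target.reshape(target.shape[0], -1)
    intersection = (pred * target).sum(dim=1)
    union = pred.sum(dim=1) + target.sum(dim=1)
    dice = (2.0 * intersection + eps) / (union + eps)
    return 1.0 - dice.mean()  # 0 = perfect overlap, 1 = no overlap
```

Because a loss like this is driven by the overlap between prediction and ground truth rather than by per-pixel accuracy, the network cannot “win” by simply predicting background everywhere, which is exactly the failure mode when the object of interest is small.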

The U-Net was trained to segment the temporalis muscle in 2D slices; in the real world, however, brain images come in 3D. We therefore needed to rethink and expand on this work. We wanted to automate the whole process: going from a 3D brain scan to a 2D segmentation at a specific level of that scan, together with the area of the muscle. We settled on a slice-based 2D approach for several reasons: it is commonly used in medical imaging tasks; the temporalis muscle is very difficult to outline at some levels of the 3D scan; it is not clear how much additional information a full 3D segmentation would provide compared with a single slice where the muscle is wide and clearly distinguishable from the surrounding tissues; and producing manual 3D segmentations for a larger number of images would have taken an enormous amount of time. Our pipeline therefore combines two neural networks: one trained to segment the eyeball and the other to segment the temporalis muscle, as before. We use the output of the first network to select one slice per patient at the desired level based on a threshold. Then, once we have one slice at the desired level per patient, we feed these slices into the second neural network, which segments the muscle and outputs its area.
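For readers who think in code, here is a schematic sketch of the pipeline just described. The function names, the threshold value and the pixel-area constant are all illustrative assumptions, not the actual implementation:

```python
import numpy as np

# `segment_eyeball` and `segment_muscle` stand in for the two trained
# U-Nets; each takes a 2D slice and returns a binary mask.

EYEBALL_AREA_THRESHOLD = 50   # minimum eyeball pixels for a usable slice (assumed)
PIXEL_AREA_MM2 = 0.25         # area of one pixel, from the scan metadata (assumed)

def select_slice(volume: np.ndarray, segment_eyeball) -> np.ndarray:
    """Pick one 2D slice from the 3D scan, guided by the eyeball network."""
    best_slice, best_area = None, 0
    for axial_slice in volume:                   # iterate over the 2D slices
        eye_mask = segment_eyeball(axial_slice)  # network 1
        area = eye_mask.sum()
        if area >= EYEBALL_AREA_THRESHOLD and area > best_area:
            best_slice, best_area = axial_slice, area
    return best_slice

def muscle_area(slice_2d: np.ndarray, segment_muscle) -> float:
    """Segment the temporalis muscle on the chosen slice and return its area."""
    muscle_mask = segment_muscle(slice_2d)       # network 2
    return muscle_mask.sum() * PIXEL_AREA_MM2
```

The real slice-selection rule is only described as “based on a threshold”, so the logic above is just one plausible reading of it.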

The road to the final pipeline was neither easy nor free of obstacles, and neither was my first year in research. I’ve learned the importance of keeping track of every file and folder, of the methods used, and of the reasoning behind choosing a specific approach. The final product is not perfect: the code sometimes selects images at an incorrect level, so there is still room for improvement. Nonetheless, some of my colleagues have already started using this tool in their research, and it is nice to see something that started off as my final year project slowly being moulded into an actual tool.

COVID and the return to research

The 14th of March marked the end of my redeployment to a support role in Intensive Care and the second interruption of my PhD due to the pandemic. Academic papers and lines of data were replaced by disembodied voices as I endeavoured to keep two wards’ worth of family members updated on their loved ones’ progress. With strict restrictions on visitation, this daily conversation was often the only insight into how their relatives were recovering, and in many cases, how they weren’t. Whether I’ll be due a third pandemic-related sabbatical remains to be seen. In the past few weeks, I’ve personally witnessed the steady downtick of COVID-related admissions. Beds filled with ventilated patients are now occupied by those in need of ITU-level monitoring following delayed essential procedures. Things are no less busy, but the grip of COVID has loosened and, at the very least, there is a measure of respite.

Today, I replace my ITU hat with the academic hat I hung up two months ago. My oncology hat continues to gather dust, awaiting its eventual turn. Looking back on my time as a clinician, one of my motivations for delving into the world of code and computational solutions is its ability to capture and manipulate data that is often overlooked in day-to-day practice. Medical data is costly, both in time and in manpower. Request forms, going through the scanning process, having labs do bloodwork, waiting for a report to be generated: these are all steps taken to produce what is often a single data point, which is subsequently consigned to the medical archives. As our technology advances, so too has the information we capture from investigations, as well as our ability to store and read it on a larger, integrated scale. This could enable the discovery of complex relationships that would otherwise never have fit onto a blackboard or spreadsheet. Pairing this with the zeitgeist that is the renewed interest in artificial intelligence, we now have the technology to realise complex manipulation of large datasets at a level previously unattainable, bursting open the barriers that once held us back.

One avenue of unused data lies in opportunistic imaging. Cross-sectional imaging such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) is commonly used in cancer care. These scans reconstruct “slices” through the scanned body which clinicians can scroll through to visualise internal structures. In cancer, the main reason to do this is to evaluate how the cancer is responding to treatment. Simply put, if a cancer is in the lung, the focus of imaging and attention will be the lung. But scanning the chest of lung cancer patients invariably picks up other organs as well, including the heart, bones, muscle and fatty tissue present in all of us. Unless there is an obvious abnormality in these other organs (such as a grossly enlarged heart, or cancer deposits elsewhere in the chest), they are barely commented on and become unused by-products. There is information to be gained by reviewing these other organs, but until now there have not been the tools to fully realise it.

In a landmark 2009 study, Prado et al. measured muscle mass in obese cancer patients using CT scans obtained routinely during their cancer management. The muscle area was measured at the level of the third lumbar vertebra (L3), owing to its correlation with total muscle mass, and was subsequently corrected for height to give a skeletal muscle index. Prado et al. found a relationship between a low muscle index and survival, creating the label of sarcopenic obesity, with a diagnostic cut-off determined by optimum stratification (<55 cm²/m² for men and <39 cm²/m² for women). Sarcopenia was a new concept in cancer care at that point, having previously been used mainly in an ageing context to define frailty associated with loss of muscle mass and function. In the frailty literature, where there is limited access to CT or MRI imaging, assessments were usually functional (such as a defined set of exercises) or involved plain X-ray imaging of limbs, for reasons of practicality, cost and radiation dose.
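For concreteness, the height correction implied by the cm²/m² units is the skeletal muscle index: the L3 muscle cross-sectional area divided by height squared. A small worked sketch (the numbers below are purely illustrative, not from the study):

```python
def skeletal_muscle_index(l3_muscle_area_cm2: float, height_m: float) -> float:
    """L3 muscle cross-sectional area (cm^2) divided by height squared (m^2),
    giving the cm^2/m^2 units quoted above."""
    return l3_muscle_area_cm2 / height_m ** 2

# Illustrative example: a 1.80 m tall man with 150 cm^2 of muscle at L3.
smi = skeletal_muscle_index(l3_muscle_area_cm2=150.0, height_m=1.80)
print(round(smi, 1))    # 46.3 cm^2/m^2
sarcopenic = smi < 55   # below the Prado et al. male cut-off of 55 cm^2/m^2
```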

For cancer sarcopenia, assessment of muscle index has been repeated by other groups in single-centre studies across a variety of tumour types and geographic locations. Even when correcting for sex and height, there are enough other uncorrected factors that the range of cut-offs for pathological sarcopenia is too wide to be of practical utility (29.6–41 cm²/m² in women and 36–55.4 cm²/m² in men). Another limitation is that different tumour sites do not always share imaging practices. For instance, in the case of brain tumours, there is less need to look for extracranial disease and thus no imaging is available at the level of the third lumbar vertebra for analysis.

In the current age of personalised medicine, being able to create individualised risk profiles from the incidental information gained from necessary clinical imaging would add utility to scan results without adding clinical effort. For my PhD, the goal is to overcome this challenge using transfer learning from an existing detailed dataset. We have had the fortune to secure access to the UK Biobank, a medical compendium of half a million UK participants. The Biobank includes biometrics, genomics, blood tests, imaging and medical history. Such a rich dataset is ripe for machine learning tasks.

I have therefore been working to integrate several high-dimensional datasets, applying a convolutional neural network to the imaging data and a deep neural network to the non-imaging data. A dimensionality reduction technique, such as an autoencoder, will subsequently have to be applied to generate a clinically workable model. Being primarily a clinician, I am able to bring clinical rationale to the model and intuit the origin of certain biases in my prototype pipelines. On the flip side, I have struggled to become fluent in the computational code necessary to tackle these problems, and often still feel as if I am at the equivalent level of asking for directions to the bathroom back in my GCSE German days.
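As an illustration of what such a two-branch model can look like, here is a minimal sketch in PyTorch (my assumption; the branch sizes, layers and fusion strategy are placeholders rather than the real architecture):

```python
import torch
import torch.nn as nn

class MultimodalModel(nn.Module):
    """Illustrative two-branch model: a small CNN for the imaging data
    and a fully connected network for the non-imaging (tabular) data,
    fused by concatenation. All sizes are placeholder assumptions."""

    def __init__(self, n_tabular_features: int = 32):
        super().__init__()
        # Convolutional branch for a single-channel 2D image.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (N, 32, 1, 1)
            nn.Flatten(),              # -> (N, 32)
        )
        # Dense branch for biometrics, blood tests, medical history, etc.
        self.mlp = nn.Sequential(
            nn.Linear(n_tabular_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        # Fused head producing a single risk score.
        self.head = nn.Linear(32 + 32, 1)

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.cnn(image), self.mlp(tabular)], dim=1)
        return self.head(fused)
```

An autoencoder-style bottleneck could then be inserted before the head to compress the fused representation, along the lines of the dimensionality reduction step described above.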

In becoming a hybrid scientist, I’ve long since accepted that I will not be the best coder in the room. I am still climbing the steep learning curve of computer languages and code writing, and I am grateful for this opportunity to realise my potential in this field. Machine learning is encroaching ever further not just on our daily lives but on our clinical practice, usually for the better. I imagine that as early adopters turn into the early majority, those of us who have chosen to embrace this technology will be in a position to better develop and understand the tools that will benefit our future patients. After all, soon we will not just be collaborating with each other but also with Dr Siri and Dr Alexa.

Patients and research

Where do patients fit in research?

How to brilliantly take over from Dr Seema Dadhania’s blog post when I am definitely not someone who keeps a diary (like Anne Frank or Carrie Bradshaw)?

Some suggest writing about yourself; others, about your research. Well, that is a shame, because there is nothing sufficiently interesting about me to make public, and my research is scattered over so many projects and obligations that it would be difficult to summarise in 900 words. Then the idea came to me after a patient and public involvement meeting organised and chaired by the talented Miss Lillie Pakzad-Shahabi: how do patients influence my work? Do I work for clinical staff or for patients in the National Health Service (NHS)?
