X-ray imaging techniques, using both synchrotron and laboratory sources, have emerged as powerful tools for studying dynamic phenomena in materials science, from metal solidification to the functioning of lithium batteries. However, the vast and complex data sets generated during time-resolved experiments present profound technical and practical problems for quantification, especially for multi-modal experiments and fast time-resolved tomography, where tens of terabytes can be generated in a single experiment. Analysing these data already takes months or even years, and so the extraction of robust and transferable insights has lagged behind the rapidly improving experimental capabilities.
Work applying Artificial Intelligence (AI) to X-ray imaging has so far focused mainly on speeding up cumbersome human operations on uni-modal tomographic data, such as volume reconstruction, segmentation, and post-acquisition analysis of radiographs. Little work has been carried out on multi-modal deep learning, which therefore remains a difficult challenge as well as an enormous opportunity. One reason deep learning has not been applied extensively to multi-modal data is that training deep models typically requires large-scale annotated datasets, e.g. millions of images with human-supplied labels.
The DPhil project will capitalise on recent developments in self-supervised training methods to overcome the need for large-scale annotated datasets and to develop AI models for multi-modal X-ray imaging. Deep learning models will be trained directly from the data without human-supplied annotations, and then adapted to new tasks using only a relatively small number of human-supplied labels. The newly created models will be applied to the study of metal solidification and to the real-time extraction of information during in-situ experiments.
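To make the two-stage approach concrete, the sketch below illustrates one common self-supervised recipe (SimCLR-style contrastive pretraining followed by fine-tuning with a small labelled set) in PyTorch. This is a minimal, hypothetical example rather than the project's specified method: the encoder architecture, the NT-Xent loss, the augmentations, and the two-class "solid vs. liquid" fine-tuning task are all illustrative assumptions, and real radiographic or tomographic data would replace the random tensors used here as stand-ins.

```python
# Hypothetical minimal sketch of self-supervised pretraining followed by
# few-label fine-tuning; architecture and task are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Small CNN encoder standing in for e.g. a ResNet backbone."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, feat_dim)  # projection head for the contrastive loss

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return self.proj(h)

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent (SimCLR) loss between two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)            # (2N, d)
    sim = z @ z.t() / temperature                          # pairwise cosine similarities
    n = z1.shape[0]
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)                   # positive pair = other view

def augment(x):
    """Placeholder augmentation: random flip plus Gaussian noise; real work would
    use physics-aware augmentations suited to radiographs/tomograms."""
    if torch.rand(1) < 0.5:
        x = torch.flip(x, dims=[-1])
    return x + 0.05 * torch.randn_like(x)

# Stage 1: self-supervised pretraining on unlabelled slices (no annotations).
encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
unlabelled = torch.randn(64, 1, 64, 64)                    # stand-in for unlabelled data
for _ in range(5):                                         # tiny demonstration loop
    z1, z2 = encoder(augment(unlabelled)), encoder(augment(unlabelled))
    loss = nt_xent(z1, z2)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: adapt to a new task with only a handful of human-supplied labels.
classifier = nn.Linear(128, 2)                             # e.g. solid vs. liquid phase (assumed task)
few_x, few_y = torch.randn(16, 1, 64, 64), torch.randint(0, 2, (16,))
opt2 = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
for _ in range(5):
    logits = classifier(encoder(few_x))
    loss = F.cross_entropy(logits, few_y)
    opt2.zero_grad(); loss.backward(); opt2.step()
```

The design point the sketch is meant to convey is simply that the expensive learning happens in Stage 1 without any human labels, so the labelled set needed in Stage 2 can be orders of magnitude smaller than in fully supervised training.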