M2EF-NNs: Multimodal Multi-instance Evidence Fusion Neural Networks for Cancer Survival Prediction

H Luo, J Huang, H Ju, T Zhou, W Ding - arXiv preprint arXiv:2408.04170, 2024 - arxiv.org
Accurate cancer survival prediction is crucial for assisting clinicians in formulating treatment plans. Multimodal data, including histopathological images and genomic data, offer complementary and comprehensive information that can greatly enhance the accuracy of this task. However, current methods, despite yielding promising results, suffer from two notable limitations: they do not effectively utilize global context, and they disregard modal uncertainty. In this study, we put forward a neural network model called M2EF-NNs, which leverages multimodal and multi-instance evidence fusion techniques for accurate cancer survival prediction. Specifically, to capture global information in the images, we use a pre-trained Vision Transformer (ViT) model to obtain patch feature embeddings of histopathological images. Then, we introduce a multimodal attention module that uses genomic embeddings as queries and learns the co-attention mapping between genomic data and histopathological images, achieving early interactive fusion of multimodal information and better capturing their correlations. Subsequently, we are the first to apply Dempster-Shafer evidence theory (DST) to cancer survival prediction. We parameterize the distribution of class probabilities using the processed multimodal features and introduce subjective logic to estimate the uncertainty associated with each modality. By combining subjective logic with Dempster-Shafer theory, we can dynamically adjust the weights of the class probabilities after multimodal fusion to achieve trusted survival prediction. Finally, experimental validation on TCGA datasets confirms that our proposed method achieves significant improvements in cancer survival prediction and enhances the reliability of the model.
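For context, the patch-embedding step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes a timm pre-trained ViT-Base as a frozen tile encoder and uses each tile's CLS token as its instance embedding; the model name, tile size, and pooling choice are assumptions.

import timm
import torch

# Minimal sketch (not the paper's code): embed WSI tiles with a pre-trained ViT.
# Assumes a recent timm version, where forward_features returns unpooled tokens.
vit = timm.create_model("vit_base_patch16_224", pretrained=True)
vit.eval()

tiles = torch.randn(16, 3, 224, 224)      # 16 tissue tiles cropped from one slide
with torch.no_grad():
    tokens = vit.forward_features(tiles)  # (16, 197, 768): CLS + 196 patch tokens
    instance_embeddings = tokens[:, 0]    # (16, 768): one embedding per tile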
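The co-attention module, in which genomic embeddings act as queries over the histopathology patch embeddings, can be sketched with standard multi-head attention. The dimensions, head count, and class name below are illustrative assumptions; the paper's exact attention design may differ.

import torch
import torch.nn as nn

class GenomicGuidedCoAttention(nn.Module):
    """Genomic embeddings query the bag of histopathology patch embeddings."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, genomic: torch.Tensor, patches: torch.Tensor) -> torch.Tensor:
        # genomic: (B, G, dim) genomic token embeddings (queries)
        # patches: (B, N, dim) projected ViT patch embeddings (keys/values)
        fused, _ = self.attn(query=genomic, key=patches, value=patches)
        return fused  # (B, G, dim): genomic tokens enriched with image context

coattn = GenomicGuidedCoAttention(dim=256)
g = torch.randn(2, 6, 256)    # e.g., six gene-group embeddings
p = torch.randn(2, 500, 256)  # e.g., 500 patch embeddings per slide
out = coattn(g, p)            # torch.Size([2, 6, 256])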
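The evidential fusion step can be illustrated with the reduced Dempster combination rule over subjective-logic opinions, as popularized in trusted multi-view classification (Han et al., 2021). The abstract indicates this family of recipe but not the exact equations, so the following is an assumed formulation with illustrative names and a hypothetical 4-bin discrete-time survival head.

import torch
import torch.nn.functional as F

def opinion_from_evidence(evidence: torch.Tensor):
    """Map non-negative evidence (B, K) to belief masses and uncertainty.

    Subjective logic: alpha = evidence + 1 parameterizes a Dirichlet;
    belief_k = evidence_k / S and uncertainty u = K / S, with S = sum(alpha).
    """
    K = evidence.shape[-1]
    alpha = evidence + 1.0
    S = alpha.sum(dim=-1, keepdim=True)
    return evidence / S, K / S  # belief (B, K), uncertainty (B, 1)

def dempster_combine(b1, u1, b2, u2):
    """Reduced Dempster's rule for combining two subjective opinions."""
    # Conflict C = sum over mismatched class pairs of b1_i * b2_j.
    conflict = b1.sum(-1, keepdim=True) * b2.sum(-1, keepdim=True) \
        - (b1 * b2).sum(-1, keepdim=True)
    scale = 1.0 / (1.0 - conflict)
    b = scale * (b1 * b2 + b1 * u2 + b2 * u1)
    u = scale * (u1 * u2)
    return b, u

# Hypothetical 4-bin survival head, one evidence vector per modality:
e_img = F.softplus(torch.randn(2, 4))  # evidence from the histopathology branch
e_gen = F.softplus(torch.randn(2, 4))  # evidence from the genomics branch
b_i, u_i = opinion_from_evidence(e_img)
b_g, u_g = opinion_from_evidence(e_gen)
b, u = dempster_combine(b_i, u_i, b_g, u_g)
alpha_fused = b * (4 / u) + 1.0        # fused Dirichlet parameters for prediction

A modality that produces little evidence yields high uncertainty u and therefore contributes less belief mass after combination, which is what allows the fused class probabilities to be down-weighted for unreliable modalities.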