Description
Precise diagnosis of Alzheimer’s disease (AD) is crucial for timely intervention and for evaluating patient prognosis. Although integrating multi-modal neuroimaging such as MRI and PET has the potential to improve diagnostic accuracy, effectively fusing these modalities remains challenging. To this end, we propose a deep learning-based framework that uses Mutual Information Decomposition to obtain modality-specific information and combines attention mechanisms to learn the optimal multi-modal feature combinations. Our framework consists of three parts. First, we design a feature extractor that isolates modality-specific information through mutual information separation. Second, we optimize the combination of modality-specific features by adding attention constraints. Third, we mitigate over-fitting through multi-task learning, improving the model’s generalization ability. Evaluation results on the ADNI dataset demonstrate the effectiveness of our method. Our work highlights the potential of effectively integrating multi-modal neuroimaging data for advancing early AD detection and treatment.
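To make the three components concrete, below is a minimal PyTorch sketch of how such a pipeline could be wired together. Everything here is an illustrative assumption rather than the authors' implementation: the module names (`SpecificEncoder`, `MIBound`, `MultiModalNet`), the input dimensions, the auxiliary regression head, and the loss weights are all hypothetical, and a Donsker-Varadhan critic stands in for whichever mutual-information estimator the paper actually uses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpecificEncoder(nn.Module):
    """Maps one modality's features (e.g., MRI or PET ROI vectors) to a latent code."""

    def __init__(self, in_dim: int, latent_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, x):
        return self.net(x)


class MIBound(nn.Module):
    """Donsker-Varadhan style lower bound on I(z_a; z_b).

    The critic is trained to maximize the bound (keeping the estimate tight);
    the encoders minimize it, pushing each code toward modality-specific info.
    """

    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.critic = nn.Sequential(
            nn.Linear(2 * latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, z_a, z_b):
        joint = self.critic(torch.cat([z_a, z_b], dim=1))        # paired samples
        z_b_shuffled = z_b[torch.randperm(z_b.size(0))]          # broken pairing
        marginal = self.critic(torch.cat([z_a, z_b_shuffled], dim=1))
        return joint.mean() - torch.log(marginal.exp().mean() + 1e-8)


class MultiModalNet(nn.Module):
    def __init__(self, mri_dim: int, pet_dim: int, latent_dim: int = 64, n_classes: int = 3):
        super().__init__()
        self.enc_mri = SpecificEncoder(mri_dim, latent_dim)
        self.enc_pet = SpecificEncoder(pet_dim, latent_dim)
        # Simple attention gate producing mixing weights over the two modalities.
        self.attn = nn.Sequential(nn.Linear(2 * latent_dim, 2), nn.Softmax(dim=1))
        self.cls_head = nn.Linear(latent_dim, n_classes)  # main task: diagnosis
        self.aux_head = nn.Linear(latent_dim, 1)          # auxiliary task, e.g., a cognitive score

    def forward(self, x_mri, x_pet):
        z_mri, z_pet = self.enc_mri(x_mri), self.enc_pet(x_pet)
        w = self.attn(torch.cat([z_mri, z_pet], dim=1))   # (B, 2) attention weights
        fused = w[:, :1] * z_mri + w[:, 1:] * z_pet
        return self.cls_head(fused), self.aux_head(fused), z_mri, z_pet


# --- one illustrative training step on random toy tensors ---
model, mi = MultiModalNet(mri_dim=90, pet_dim=90), MIBound()
opt_model = torch.optim.Adam(model.parameters(), lr=1e-4)
opt_critic = torch.optim.Adam(mi.parameters(), lr=1e-4)
x_mri, x_pet = torch.randn(8, 90), torch.randn(8, 90)
y_cls, y_aux = torch.randint(0, 3, (8,)), torch.randn(8, 1)

# (1) Critic step: maximize the MI bound so it stays a useful estimator.
with torch.no_grad():
    _, _, z_mri, z_pet = model(x_mri, x_pet)
critic_loss = -mi(z_mri, z_pet)
opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()

# (2) Model step: task losses plus the MI penalty (weight 0.1 is arbitrary).
logits, aux, z_mri, z_pet = model(x_mri, x_pet)
loss = F.cross_entropy(logits, y_cls) + F.mse_loss(aux, y_aux) + 0.1 * mi(z_mri, z_pet)
opt_model.zero_grad(); loss.backward(); opt_model.step()
```

In this sketch, minimizing the critic's bound on I(z_mri; z_pet) corresponds to the mutual-information separation step, the softmax gate plays the role of the attention constraint over modality-specific features, and the shared classification and auxiliary heads realize the multi-task regularization the abstract describes.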
| Keyword-1 | Multi-Modal Neuroimaging |
| --- | --- |
| Keyword-2 | Information Decomposition |
| Keyword-3 | Alzheimer’s Disease Diagnosis |