Department
  • Psychiatry

Research Focus
  • Deep Learning
  • Deep Neural Networks
  • Machine Learning
  • Artificial Intelligence
  • Uncertainty
  • Brain Image Processing
  • Brain Mapping
  • Brain Segmentation
  • Brain Tumours
  • Brain Development

Clinical Conditions
  • Bipolar disorder
  • Schizophrenia
  • Brain cancer

Equipment & Techniques
  • Magnetic resonance imaging (MRI)
  • Statistical analysis

    Dr Michail (Mike) Mamalakis

    (he/him/his)
    University Position
    Research Associate

    Interests

    My research aims to jointly predict patients' survival risk and overall well-being using techniques such as graph neural networks trained on genomics data and biomarkers linked to brain-region heterogeneity. I also apply deep learning computer vision methods (e.g., convolutional networks, transformers) to MRI data. Trustworthy AI systems require explainability and accurate estimation of prediction uncertainty, so I integrate Explainable AI (XAI) techniques to validate and enhance interpretability and to increase trust in deep learning models. My role on an MRC grant focuses on learning variable sulcal patterns in patients with schizophrenia and bipolar disorder.
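
    As a rough illustration of the multi-task setup described above, the sketch below (not the author's actual model) shows a small graph neural network in PyTorch Geometric that jointly predicts a survival-risk score and a well-being score from per-region biomarker features. The class name, graph layout, feature dimensions, and output heads are assumptions for demonstration only.

# A minimal sketch, assuming PyTorch Geometric; names and sizes are illustrative.

import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv, global_mean_pool


class MultiTaskBrainGNN(nn.Module):
    """Toy GNN: brain regions are nodes, biomarker vectors are node features."""

    def __init__(self, n_features: int = 16, hidden: int = 32):
        super().__init__()
        self.conv1 = GCNConv(n_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.risk_head = nn.Linear(hidden, 1)        # survival-risk score
        self.wellbeing_head = nn.Linear(hidden, 1)   # well-being score

    def forward(self, x, edge_index, batch):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        g = global_mean_pool(h, batch)               # one embedding per subject
        return self.risk_head(g), self.wellbeing_head(g)


# Toy usage: 10 brain regions, 16 biomarker features each, a random connectivity graph.
x = torch.randn(10, 16)
edge_index = torch.randint(0, 10, (2, 40))
batch = torch.zeros(10, dtype=torch.long)            # all nodes belong to one subject
risk, wellbeing = MultiTaskBrainGNN()(x, edge_index, batch)
print(risk.shape, wellbeing.shape)                   # torch.Size([1, 1]) each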

    3D eXplainability Artificial Intelligence

    This 3D explainable framework provides both local and global interpretations and explanations of the results of our deep learning 3D classification network. The ratio of the faithfulness and complexity metrics is computed at this stage. In this example we include only the Grad-CAM explainability method for simplicity.
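
    As a rough sketch of that stage (not the released framework), the code below applies 3D Grad-CAM to a toy 3D classification network in PyTorch and computes a simple faithfulness-to-complexity ratio for the resulting attribution map. The network architecture, the masking strategy, and the exact metric definitions are illustrative assumptions.

# A minimal sketch, assuming a small PyTorch 3D CNN stands in for the real network.

import torch
import torch.nn as nn
import torch.nn.functional as F


class Tiny3DNet(nn.Module):
    """Toy 3D CNN classifier standing in for the actual 3D classification network."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Linear(16, n_classes)

    def forward(self, x):
        feats = self.features(x)                      # last conv feature maps
        logits = self.fc(self.pool(feats).flatten(1))
        return logits, feats


def grad_cam_3d(model, volume, target_class):
    """Grad-CAM for a 3D volume: weight the last conv maps by their mean gradients."""
    model.eval()
    logits, feats = model(volume)
    feats.retain_grad()                               # keep gradients of the feature maps
    logits[0, target_class].backward()
    weights = feats.grad.mean(dim=(2, 3, 4), keepdim=True)    # one weight per channel
    cam = F.relu((weights * feats).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=volume.shape[2:],           # upsample to input size
                        mode="trilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise to [0, 1]
    return cam.detach()


def faithfulness_complexity_ratio(model, volume, cam, target_class):
    """Illustrative ratio: confidence drop when the top 20% most salient voxels are
    masked out (faithfulness), divided by the entropy of the normalised map (complexity)."""
    with torch.no_grad():
        p_full = F.softmax(model(volume)[0], dim=1)[0, target_class]
        masked = volume * (cam < cam.quantile(0.8)).float()
        p_masked = F.softmax(model(masked)[0], dim=1)[0, target_class]
    faithfulness = (p_full - p_masked).clamp(min=0)
    attr = cam.flatten()
    attr = attr / (attr.sum() + 1e-8)
    complexity = -(attr * torch.log(attr + 1e-8)).sum()
    return (faithfulness / (complexity + 1e-8)).item()


if __name__ == "__main__":
    model = Tiny3DNet()
    vol = torch.randn(1, 1, 32, 32, 32)               # stand-in for an MRI volume
    cam = grad_cam_3d(model, vol, target_class=1)
    score = faithfulness_complexity_ratio(model, vol, cam, target_class=1)
    print(f"faithfulness/complexity ratio: {score:.4f}")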
