My PhD focuses on Continual Learning with Transformer architectures for magnetic resonance imaging (MRI) and computed tomography (CT) scans. Shifting patient populations over time, as well as differing acquisition techniques across and within medical institutions, lead to shifts in the data domain. Networks trained on only a single domain inevitably produce unreliable predictions for out-of-distribution images. Transformer architectures help to mitigate this limitation; however, they are not perfectly suited for direct application to segmentation tasks. My goal is to use Deep Learning based Transformer registration models for atlas-based segmentation in clinical multi-institutional settings, to fully leverage the potential of Transformers known from NLP and machine translation.
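The core idea of atlas-based segmentation via registration can be illustrated with a minimal PyTorch sketch: a registration model predicts a dense displacement field that aligns an atlas with a target image, and the atlas's label map is then warped with that field to produce the target's segmentation. The function below is a simplified 2D illustration under assumed shapes; the name `warp_labels` and the toy inputs are hypothetical, not part of any specific model from my work.

```python
import torch
import torch.nn.functional as F

def warp_labels(atlas_labels: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp an atlas label map with a dense displacement field.

    atlas_labels: (1, 1, H, W) integer label map of the atlas
    flow:         (1, 2, H, W) displacement in pixels, channels = (dx, dy)
    """
    _, _, H, W = atlas_labels.shape
    # Build an identity sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(H, dtype=torch.float32),
        torch.arange(W, dtype=torch.float32),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)   # (1, H, W, 2)
    # Add the predicted displacement, then normalize to [-1, 1]
    # as required by F.grid_sample.
    new_grid = grid + flow.permute(0, 2, 3, 1)          # (1, H, W, 2)
    new_grid[..., 0] = 2.0 * new_grid[..., 0] / (W - 1) - 1.0
    new_grid[..., 1] = 2.0 * new_grid[..., 1] / (H - 1) - 1.0
    # Nearest-neighbour sampling keeps the labels discrete.
    warped = F.grid_sample(
        atlas_labels.float(), new_grid, mode="nearest", align_corners=True
    )
    return warped.long()
```

With a zero displacement field the atlas labels are returned unchanged; in practice the field would come from a (Transformer-based) registration network aligning the atlas with an unseen target scan.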
I am mainly involved in the RACOON project. You can follow me on Twitter at @amin_ranem and find my publication list on Google Scholar.
If you are interested in these topics and have experience working with PyTorch, please feel free to reach out to me for current (Fortgeschrittenes) Visual Computing Praktikum (6 CP) and Bachelor/Master thesis topics.