Cancer research has long been central to the biomedical community, seeking earlier and safer cancer diagnosis and prognosis, as well as better and more personalized treatment decisions. AI is increasingly receiving attention as a major component in leveraging cancer imaging and multi-omics data analysis towards these goals.
A key factor in ensuring the success and impact of AI in cancer research, and in securing its wider adoption in clinical practice, is trustworthiness. This entails a multifaceted strategy across all phases of AI service development, spanning from AI design to training and validation, and including, among others, human oversight, technical robustness, data governance, transparency, fairness, and auditability. AI trustworthiness is currently moving from the definition of a theoretical framework capturing all trustworthiness perspectives to guidelines and best practices for its practical implementation. It is also evolving to incorporate the particularities of cancer research areas and the needs of the whole user spectrum. Incorporating explainability/interpretability into AI is an emerging field that has lately received much attention, while XAI validation remains an open issue that also involves several human aspects. As regards the much-discussed issue of AI fairness, recent initiatives attempt to establish a framework based on best practices. Keeping in mind these multiple and diverse challenges and opportunities in the field of cancer research, the aim of this mini-symposium is to address the important questions "How to design AI that is trustworthy" and "How to validate AI trustworthiness" in the scope of AI for cancer imaging.
Speakers will be:
Ioanna Chouvarda, Aristotle University of Thessaloniki, Greece
Karim Lekadir, University of Barcelona, Spain
Fuensanta Bellvis Bataller, Quibim, Spain
Sara Colantonio, CNR, Italy
Haridimos Kondylakis, ICS-FORTH, Greece
Alexandra Kosvyra, Aristotle University of Thessaloniki, Greece
João Santinha, Champalimaud Foundation, Portugal