
TALKS

PhD in Informatics Seminar #8 2021/2022 | DI Ciências ULisboa


Title: Explainable semantic similarity for biomedical supervised learning
Speaker: Rita Sousa, LASIGE/DI-FCUL
Date: May 26, 12h
Where: Room 6.3.27

Abstract: Explainable artificial intelligence approaches help ensure algorithmic fairness, identify potential bias in the data, verify that algorithms perform as expected, and bridge the gap between the machine learning community and other scientific disciplines. They are key to promoting the adoption of machine learning as a tool for scientific discovery. Explanations in the biomedical domain should be grounded in domain knowledge, which can be achieved by using ontologies and knowledge graphs. However, the most popular way to explore knowledge graphs with machine learning is through embeddings, which are not explainable. This work investigates ontology-based semantic similarity, which captures the different semantic aspects represented in a knowledge graph, as a tool to support both supervised learning and explainability. The underlying hypothesis is that using more semantic aspects to compute similarity, together with more interpretable models, can make machine learning over ontologies more explainable with minimal losses in predictive performance. Experiments on protein-protein interaction prediction revealed that interpretable machine learning models coupled with semantic similarity, although performing worse than black-box ones, produce global models relevant to the biological phenomena and show high prediction agreement with black-box models. This work represents a step towards demonstrating the potential of explainable artificial intelligence for scientific discovery.
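
To illustrate the general setup described in the abstract, the following is a minimal sketch, not the speaker's actual implementation: per-aspect semantic similarity scores for protein pairs serve as features for both an interpretable model and a black-box model, which are then compared on accuracy and on prediction agreement. The similarity values and labels here are synthetic stand-ins; in practice they would come from ontology-based similarity over a knowledge graph (for example, Gene Ontology aspects such as biological process, molecular function, and cellular component).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic dataset: one row per protein pair, one column per semantic aspect
# (stand-ins for ontology-based similarity scores).
n_pairs, n_aspects = 1000, 3
X = rng.random((n_pairs, n_aspects))
# Toy labels: pairs similar across most aspects are marked as "interacting".
y = (X.mean(axis=1) + 0.1 * rng.standard_normal(n_pairs) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Interpretable model: a shallow decision tree over the aspect similarities.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
# Black-box baseline: a random forest over the same features.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

tree_pred = tree.predict(X_test)
forest_pred = forest.predict(X_test)

print("tree accuracy:  ", accuracy_score(y_test, tree_pred))
print("forest accuracy:", accuracy_score(y_test, forest_pred))
# Prediction agreement between the interpretable and black-box models.
print("agreement:      ", (tree_pred == forest_pred).mean())
```

The interpretable tree can then be inspected directly (for instance, which semantic aspects drive its splits), which is the kind of global, domain-grounded explanation the talk contrasts with embedding-based black-box models.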