Corrado Monti, Paolo Bajardi, Francesco Bonchi, André Panisson, Alan Perotti
Transactions on Machine Learning Research (4/2024).
Explainability shapes how people interpret and trust machine learning systems, making it central to the interaction between humans and algorithms. This study introduces an axiomatic benchmark to evaluate whether graph-based explainers faithfully reflect the decision process of the models they interpret. By systematically testing explainers against white-box classifiers on real-world networks, it exposes the conditions under which explainers deviate from model logic, offering a rigorous framework for assessing faithfulness and robustness in graph explainability.