Title: Making Blackbox Models Explainable
Speaker: Dr. My T. Thai
Time: 2021/3/11 9:00-10:00
Place: Zoom (ID: 504 068 1016, Passcode: 738834)
Despite the impressive feats of applying deep neural networks (DNNs) to many academic disciplines and business sectors, researchers and the public have grown alarmed by the fact that these models lack interpretability. They have been used as black boxes, with little explanation for why they make the predictions they do. Explaining a model’s decisions is of great importance because: 1) Explanations provide transparency to the prediction models, thereby increasing trust in their use. 2) Faithful explanations can identify a model’s failures and biases when not all possible scenarios are testable, thereby avoiding the shortcut learning that existing DNNs have been exhibiting.
In this talk, I will first give an overview of recent local explainers, which answer the question of which features are important to a model’s decision. I will next present a metric to quantify these explainers, providing a framework for choosing which explainer to use in which scenario. Finally, I will introduce a new Probabilistic Graphical Model (PGM) model-agnostic explainer for Graph Neural Networks (GNNs), called PGM-Explainer. Unlike existing explainers, which draw explanations from a set of linear functions of the explained features, PGM-Explainer can demonstrate the dependencies among explained features in the form of conditional probabilities.
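To make the contrast concrete, the toy sketch below (not from the talk; the black-box function, sample sizes, and thresholds are illustrative assumptions) compares a linear-surrogate local explanation, which assigns each feature a weight, with a conditional-probability view of the kind PGM-Explainer produces, which reports how the prediction probability changes with a feature’s state:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box binary classifier (a stand-in for any DNN/GNN):
# predicts 1 when the sum of the first two features is large.
def black_box(X):
    return (X[:, 0] + X[:, 1] > 1.0).astype(int)

x0 = np.array([0.9, 0.8, 0.1])                   # instance to explain
X = x0 + rng.normal(scale=0.5, size=(2000, 3))   # local perturbations around x0
y = black_box(X)

# (a) Linear-surrogate explanation (LIME-style): fit y ~ X @ w + b near x0.
# The weights rank feature importance but encode no feature dependencies.
A = np.hstack([X, np.ones((len(X), 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print("linear weights:", np.round(w[:3], 2))

# (b) Conditional-probability explanation (PGM-style): estimate how the
# prediction probability depends on a discretized feature state.
high0 = X[:, 0] > x0[0]
p_given_high = y[high0].mean()    # P(y=1 | feature 0 high)
p_given_low = y[~high0].mean()    # P(y=1 | feature 0 low)
print(f"P(y=1 | x0 high) = {p_given_high:.2f}, P(y=1 | x0 low) = {p_given_low:.2f}")
```

The linear weights say only that features 0 and 1 matter and feature 2 does not; the conditional probabilities additionally expose how the model’s output shifts with a feature’s state, which is the kind of dependency structure a PGM-based explanation captures.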
Dr. My T. Thai is currently a University of Florida (UF) Research Foundation Professor of Computer & Information Sciences & Engineering and Associate Director of the UF Nelms Institute for the Connected World. Dr. Thai has extensive expertise in billion-scale data mining, machine learning, and optimization, especially for complex graph data with applications to healthcare, social media, blockchain, and cybersecurity. She has been working on various interdisciplinary topics, focusing on the underlying mathematical models, coupled with fast approximation algorithm design and scalable machine learning for dynamic, interdependent, and uncertain systems. Her recent work has focused on differential privacy and interpretable machine learning for fair and trustworthy AI. The results of her work have led to 7 books and 250+ publications in highly ranked international journals and conferences, including several best paper awards.
Dr. Thai has received many research awards, notably the DTRA Young Investigator Award (2009) and the NSF CAREER Award. She is an IEEE Fellow.