Presentation
14 May 2018 Role of influence functions in model interpretability (Conference Presentation)
Supriyo Chakraborty, Jorge Ortiz, Simon Julier
Abstract
Deep Neural Networks (DNNs) have achieved near-human, and in some cases super-human, accuracy in tasks such as machine translation, image classification, and speech processing. However, despite their enormous success, these models are often used as black boxes with very little visibility into their inner workings. This opacity often hinders the adoption of these models in mission-critical and human-machine hybrid networks. In this paper, we explore the role of influence functions in opening up these black-box models and providing interpretability of their output. Influence functions characterize the impact of the training data on the model parameters. We use these functions to analytically understand how the parameters are adjusted during the model training phase to embed the information contained in the training dataset. In other words, influence functions allow us to capture the change in the model parameters due to the training data. We then use these parameters to provide interpretability of the model output for test data points.
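For readers unfamiliar with the technique, the sketch below illustrates one common formulation of influence functions (the first-order estimate popularized by Koh and Liang, 2017), not necessarily the exact method of this presentation: the influence of upweighting a training point z on the loss at a test point z_test is approximated as -grad L(z_test, theta)^T H^{-1} grad L(z, theta), where H is the Hessian of the training loss. The logistic-regression setting, the variable names (theta, X_train, and so on), and the damping term are illustrative assumptions.

    # Minimal influence-function sketch for logistic regression (illustrative only).
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def grad_loss(x, y, theta):
        # Gradient of the per-example log loss with respect to theta.
        return (sigmoid(x @ theta) - y) * x

    def hessian(X, theta, damping=1e-3):
        # Hessian of the mean training loss; damping keeps it invertible.
        p = sigmoid(X @ theta)
        w = p * (1.0 - p)
        H = (X * w[:, None]).T @ X / X.shape[0]
        return H + damping * np.eye(X.shape[1])

    def influence(x_train, y_train_pt, x_test, y_test_pt, X_train, theta):
        # I(z, z_test) = -grad L(z_test)^T  H^{-1}  grad L(z)
        h_inv_g = np.linalg.solve(hessian(X_train, theta),
                                  grad_loss(x_train, y_train_pt, theta))
        return -grad_loss(x_test, y_test_pt, theta) @ h_inv_g

Given a trained parameter vector theta, scoring every training example against a fixed test point with this function ranks the training data by estimated effect: a large positive score marks a point whose upweighting would increase the test loss, i.e. a candidate "harmful" example, while a large negative score marks a point the prediction relies on.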
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Supriyo Chakraborty, Jorge Ortiz, and Simon Julier "Role of influence functions in model interpretability (Conference Presentation)", Proc. SPIE 10635, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX, 1063505 (14 May 2018); https://doi.org/10.1117/12.2306009
KEYWORDS
Data modeling, Image classification, Image processing, Neural networks, Opacity, Visibility