Explaining AI models for clinical research: Validation through model comparison and data simulation
Document Type
Conference Proceeding
Publication Date
1-1-2019
Journal
Multi Conference on Computer Science and Information Systems, MCCSIS 2019 - Proceedings of the International Conference on e-Health 2019
DOI
10.33965/eh2019_201910l004
Keywords
Clinical Research; Explainable AI; Validation
Abstract
© 2019 IADIS Press. All rights reserved. For clinical research to take advantage of artificial intelligence techniques such as the various types of deep neural networks (DNNs), we need to be able to explain DNN models to clinicians and researchers. While some explanation approaches have been developed, their validation and utilization remain very limited. In this study, we evaluated a novel explainable artificial intelligence method called impact assessment by applying it to DNNs trained on real-world and simulated data. Using real clinical data, the impact scores from DNNs were compared with odds ratios from logistic regression models. Using simulated data, the impact scores from DNNs were compared with impact scores calculated from the ground truth (i.e., the formulas used to generate the simulated data). The correlations between impact scores and odds ratios ranged from 0.63 to 0.97. The correlations between the impact scores from DNNs and the ground truth were all above 0.99. These results suggest that the impact score provides a valid explanation of the contribution of a variable to a DNN model.
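To make the validation design in the abstract concrete, the sketch below illustrates the general idea under stated assumptions: it is not the authors' implementation. The impact score here is a hypothetical stand-in (the mean change in predicted probability when a binary variable is flipped from 0 to 1), and the data-generating formula, coefficients, and variable names are invented for illustration. A DNN is trained on simulated data, its per-variable impact scores are correlated with odds ratios from a logistic regression comparator and with the ground-truth coefficients, mirroring the two comparisons described above.

```python
# Minimal sketch of the validation idea, assuming an illustrative
# impact-score definition; not the paper's exact impact assessment method.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Simulate binary predictors with known (made-up) log-odds coefficients,
# playing the role of the paper's ground-truth generating formula.
n = 20_000
true_beta = np.array([1.2, -0.8, 0.5, 0.0, 2.0])
X = rng.integers(0, 2, size=(n, true_beta.size)).astype(float)
p = 1.0 / (1.0 + np.exp(-(X @ true_beta - 1.0)))
y = rng.binomial(1, p)

# Fit a DNN and a logistic regression comparator on the same data.
dnn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                    random_state=0).fit(X, y)
lr = LogisticRegression().fit(X, y)

def impact_scores(model, X):
    """Mean change in predicted probability when each feature is set
    to 1 vs. 0 across all observations (illustrative impact score)."""
    scores = []
    for j in range(X.shape[1]):
        X1, X0 = X.copy(), X.copy()
        X1[:, j], X0[:, j] = 1.0, 0.0
        scores.append(np.mean(model.predict_proba(X1)[:, 1]
                              - model.predict_proba(X0)[:, 1]))
    return np.array(scores)

dnn_impact = impact_scores(dnn, X)

# Comparison 1: DNN impact scores vs. logistic regression odds ratios.
# Comparison 2: DNN impact scores vs. the ground-truth coefficients.
odds_ratios = np.exp(lr.coef_.ravel())
print("corr(impact, odds ratio):", pearsonr(dnn_impact, odds_ratios)[0])
print("corr(impact, true beta): ", pearsonr(dnn_impact, true_beta)[0])
```

With a well-specified simulation like this, both correlations should be high, which is the pattern the abstract reports; on real clinical data, where logistic regression is only an approximate comparator, weaker agreement (such as the reported 0.63 to 0.97 range) is expected.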
APA Citation
Zeng-Treitler, Q., Shao, Y., Redd, D., Goulet, J., Brandt, C., & Bray, B. (2019). Explaining AI models for clinical research: Validation through model comparison and data simulation. Multi Conference on Computer Science and Information Systems, MCCSIS 2019 - Proceedings of the International Conference on e-Health 2019. http://dx.doi.org/10.33965/eh2019_201910l004