Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/121648
Type: Conference paper
Title: Producing radiologist quality reports for interpretable deep learning
Author: Gale, W.
Oakden-Rayner, L.
Carneiro, G.
Palmer, L.J.
Bradley, A.P.
Citation: Proceedings / IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2019, vol. 2019-April, pp. 1275-1279
Publisher: IEEE
Publisher Place: online
Issue Date: 2019
Series/Report no.: IEEE International Symposium on Biomedical Imaging
ISBN: 9781538636411
ISSN: 1945-7928; 1945-8452
Conference Name: IEEE International Symposium on Biomedical Imaging (ISBI) (8 Apr 2019 - 11 Apr 2019: Venice, Italy)
Statement of Responsibility: William Gale, Luke Oakden-Rayner, Gustavo Carneiro, Lyle J. Palmer, Andrew P. Bradley
Abstract: Current approaches to explaining the decisions of deep learning systems for medical tasks have focused on visualising the elements that contributed to each decision. We argue that such approaches are not enough to “open the black box” of medical decision-making systems, because they are missing a key component that has served as a standard communication tool between doctors for centuries: language. We propose a model-agnostic interpretability method that trains a simple recurrent neural network to produce descriptive sentences clarifying the decisions of deep learning classifiers. We test our method on the task of detecting hip fractures from frontal pelvic x-rays. The process requires minimal additional labelling, yet produces text containing elements that the original classification model was not specifically trained to detect. The experimental results show that: 1) the sentences produced by our method consistently contain the desired information; 2) the cohort of doctors tested preferred the generated sentences over current saliency-map tools; and 3) the combination of visualisations and generated text is better than either alone.
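
Editor's note: the abstract describes the architecture only at a high level. The following is a minimal, hypothetical sketch of that idea, assuming a frozen classifier whose pooled penultimate-layer features condition a small LSTM decoder trained with teacher forcing. Every name here (SentenceDecoder, feat_dim, and so on) is an illustrative assumption, not the authors' implementation.

import torch
import torch.nn as nn

class SentenceDecoder(nn.Module):
    """Produces report-sentence logits conditioned on image features."""
    def __init__(self, vocab_size, feat_dim=512, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.init_h = nn.Linear(feat_dim, hidden_dim)  # image features -> initial hidden state
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, feats, tokens):
        # feats: (B, feat_dim) pooled features from the frozen classifier;
        # tokens: (B, T) ground-truth report tokens (teacher forcing).
        h0 = torch.tanh(self.init_h(feats)).unsqueeze(0)  # (1, B, hidden_dim)
        c0 = torch.zeros_like(h0)
        emb = self.embed(tokens)                          # (B, T, embed_dim)
        hidden, _ = self.rnn(emb, (h0, c0))               # (B, T, hidden_dim)
        return self.out(hidden)                           # (B, T, vocab_size)

# Smoke test with random stand-ins for the classifier features and tokens.
decoder = SentenceDecoder(vocab_size=1000)
feats = torch.randn(2, 512)
tokens = torch.randint(0, 1000, (2, 12))
print(decoder(feats, tokens).shape)  # torch.Size([2, 12, 1000])

In such a setup only the decoder's parameters are updated when fitting the sentences, while the classifier stays frozen, which is consistent with the abstract's claims of model-agnosticism and minimal additional labelling.
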
Keywords: Pattern recognition; text generation; x-ray imaging; bone fractures
Rights: © 2019 IEEE
DOI: 10.1109/ISBI.2019.8759236
Grant ID: http://purl.org/au-research/grants/arc/DP180103232
Published version: http://dx.doi.org/10.1109/isbi.2019.8759236
Appears in Collections: Aurora harvest 8
Computer Science publications

Files in This Item:
There are no files associated with this item.
