Please use this identifier to cite or link to this item:
https://hdl.handle.net/2440/129158
Type: Conference paper
Title: Learning what makes a difference from counterfactual examples and gradient supervision
Author: Teney, D.; Abbasnejad, M.; van den Hengel, A.
Citation: Lecture Notes in Artificial Intelligence, 2020 / Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (ed./s), vol. 12355, pp. 580-599
Publisher: Springer
Publisher Place: Switzerland
Issue Date: 2020
Series/Report no.: Lecture Notes in Computer Science; 12355
ISBN: 3030586065; 9783030586065
ISSN: 0302-9743; 1611-3349
Conference Name: European Conference on Computer Vision Workshops (ECCV) (23 Aug 2020 - 28 Aug 2020 : virtual online)
Editor: Vedaldi, A.; Bischof, H.; Brox, T.; Frahm, J.-M.
Statement of Responsibility: Damien Teney, Ehsan Abbasnejad, and Anton van den Hengel
Abstract: One of the primary challenges limiting the applicability of deep learning is its susceptibility to learning spurious correlations rather than the underlying mechanisms of the task of interest. The resulting failure to generalise cannot be addressed by simply using more data from the same distribution. We propose an auxiliary training objective that improves the generalization capabilities of neural networks by leveraging an overlooked supervisory signal found in existing datasets. We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrastive examples, which provide a signal indicative of the underlying causal structure of the task. We show that such pairs can be identified in a number of existing datasets in computer vision (visual question answering, multi-label image classification) and natural language processing (sentiment analysis, natural language inference). The new training objective orients the gradient of a model's decision function with pairs of counterfactual examples. Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
Rights: © Springer Nature Switzerland AG 2020
DOI: 10.1007/978-3-030-58607-2_34
Published version: https://link.springer.com/book/10.1007/978-3-030-58607-2
Appears in Collections: Aurora harvest 4; Australian Institute for Machine Learning publications
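The abstract describes an auxiliary objective that orients the gradient of a model's decision function using pairs of counterfactual examples. As a rough illustration only (not the authors' implementation), one way to realise such an objective is a cosine-alignment penalty between a model's input-gradient and the vector pointing from an example to its counterfactual; the toy linear model, the function name, and the exact loss form below are assumptions for the sketch:

```python
import numpy as np

def gradient_supervision_loss(grad_fx, x, x_cf):
    """Hypothetical auxiliary loss: 1 - cosine similarity between the
    input-gradient of the decision function at x and the vector pointing
    from x to its counterfactual x_cf. Low when the model's decision
    boundary is oriented along the counterfactual direction."""
    d = x_cf - x
    cos = np.dot(grad_fx, d) / (np.linalg.norm(grad_fx) * np.linalg.norm(d) + 1e-12)
    return 1.0 - cos

# Toy linear model f(x) = w.x, whose input-gradient is simply w.
w = np.array([1.0, 0.0])
x = np.array([0.0, 0.0])
x_cf = np.array([1.0, 0.0])  # counterfactual differs along the causal feature

# Loss is near 0 here: the gradient already points along x_cf - x.
print(gradient_supervision_loss(w, x, x_cf))
```

In practice such a term would be added to the usual task loss, with the input-gradient obtained by automatic differentiation rather than analytically.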
Files in This Item:
There are no files associated with this item.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.