Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/116282
Type: Conference paper
Title: Visual Question Answering as a meta learning task
Author: Teney, D.
van den Hengel, A.
Citation: Lecture Notes in Computer Science, 2018 / Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.), vol.11219 LNCS, pp.229-245
Publisher: Springer
Issue Date: 2018
Series/Report no.: Lecture Notes in Computer Science; 11219
ISBN: 9783030012663
ISSN: 0302-9743 (print)
1611-3349 (electronic)
Conference Name: 15th European Conference on Computer Vision (ECCV 2018) (8 Sep 2018 - 14 Sep 2018 : Munich)
Editor: Ferrari, V.
Hebert, M.
Sminchisescu, C.
Weiss, Y.
Statement of Responsibility: Damien Teney and Anton van den Hengel
Abstract: The predominant approach to Visual Question Answering (VQA) demands that the model represents within its weights all of the information required to answer any question about any image. Learning this information from any real training set seems unlikely, and representing it in a reasonable number of weights doubly so. We propose instead to approach VQA as a meta learning task, thus separating the question answering method from the information required. At test time, the method is provided with a support set of example questions/answers, over which it reasons to resolve the given question. The support set is not fixed and can be extended without retraining, thereby expanding the capabilities of the model. To exploit this dynamically provided information, we adapt a state-of-the-art VQA model with two techniques from the recent meta learning literature, namely prototypical networks and meta networks. Experiments demonstrate the capability of the system to learn to produce completely novel answers (i.e. never seen during training) from examples provided at test time. In comparison to the existing state of the art, the proposed method produces qualitatively distinct results with higher recall of rare answers, and a better sample efficiency that allows training with little initial data. More importantly, it represents an important step towards vision-and-language methods that can learn and reason on-the-fly.
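The prototypical-network component the abstract mentions can be illustrated briefly. What follows is a minimal sketch of that general idea, not the authors' implementation: each candidate answer gets a prototype, the mean embedding of the support-set examples carrying that answer, and a query question/image embedding is scored by its distance to each prototype. All names here (prototypes, score, support_embeddings, and so on) are hypothetical.

    # Minimal prototypical-network sketch (hypothetical code, not the paper's).
    import torch

    def prototypes(support_embeddings, support_answers, num_answers):
        # Prototype for each answer = mean embedding of the support-set
        # examples labelled with that answer.
        d = support_embeddings.size(1)
        protos = torch.zeros(num_answers, d)
        counts = torch.zeros(num_answers, 1)
        protos.index_add_(0, support_answers, support_embeddings)
        counts.index_add_(0, support_answers,
                          torch.ones(len(support_answers), 1))
        return protos / counts.clamp(min=1)  # avoid dividing by zero

    def score(query_embedding, protos):
        # Negative squared Euclidean distance, as in prototypical networks;
        # a softmax over these scores yields answer probabilities.
        return -((protos - query_embedding) ** 2).sum(dim=1)

Because no per-answer weights are trained, appending support examples for a never-seen answer immediately makes that answer available at test time, which is the mechanism behind the novel-answer capability the abstract describes.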
Rights: © Springer Nature Switzerland AG 2018
DOI: 10.1007/978-3-030-01267-0_14
Published version: http://dx.doi.org/10.1007/978-3-030-01267-0_14
Appears in Collections: Aurora harvest 3
Australian Institute for Machine Learning publications

Files in This Item:
There are no files associated with this item.

