Please use this identifier to cite or link to this item:
Type: Thesis
Title: Bayesian Data Augmentation and Generative Active Learning for Robust Imbalanced Deep Learning
Author: Tran, Toan Minh
Issue Date: 2020
School/Discipline: School of Computer Science
Abstract: Deep learning has become a leading machine learning approach in many domains such as image classification, face recognition, and autonomous driving. However, its success is predicated on the availability of immense labelled training sets. Furthermore, these data sets usually need to be well balanced, otherwise the performance of the trained model is compromised. The outstanding performance of deep learning compared with traditional machine learning approaches is therefore offset by the need for significant human resources for labelling and computational resources for training. Designing effective deep learning approaches that perform well with small and imbalanced labelled training sets is essential, since that will broaden the use of deep learning in many real-life applications. In this thesis, we investigate several learning approaches that aim to improve data efficiency in training deep models. In particular, we propose novel and effective learning methods that enable deep learning models to perform well with relatively small and imbalanced labelled training sets. We first introduce a novel, theoretically sound Bayesian data augmentation (BDA) method, motivated by the fact that the currently dominant data augmentation (DA), based on small geometric and appearance transformations of the original training samples, does not guarantee the usefulness or realism of the generated samples. We formulate BDA with the generalised Monte Carlo expectation maximisation (GMCEM) algorithm. We theoretically show the weak convergence of GMCEM and introduce an implementation of BDA based on a variant of the generative adversarial network (GAN). We empirically demonstrate that our proposed BDA performs better than the dominant DA above. One drawback of BDA is that synthetic training samples are generated without considering their informativeness to the training process.
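For context, the "dominant DA" that the abstract contrasts BDA against consists of small label-preserving geometric transformations. A minimal illustrative sketch (the function name and parameters are our own, not from the thesis):

```python
import numpy as np

def augment(image, rng, pad=4):
    """Standard geometric data augmentation: random pad-and-crop
    plus random horizontal flip of an (H, W, C) image array.

    This is the kind of small, hand-designed transformation the
    abstract argues gives no guarantee of usefulness or realism.
    """
    h, w, _ = image.shape
    # Zero-pad the image, then take a random crop of the original size.
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)))
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    crop = padded[top:top + h, left:left + w]
    # Flip left-right with probability 0.5.
    if rng.random() < 0.5:
        crop = crop[:, ::-1]
    return crop
```

Each call returns a slightly shifted and possibly mirrored copy of the input with the same label, which is exactly the local-perturbation regime BDA replaces with samples drawn from a learned generative model.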
Therefore, we next propose a new Bayesian generative active deep learning (BGADL) approach that trains a generative model to produce novel, informative training samples. We formulate this algorithm as a theoretically sound combination of Bayesian active learning by disagreement (BALD) and BDA, where BALD guides BDA to produce synthetic samples. We provide a formal proof that these generated samples are informative for the training process, and empirical evidence that our proposed BGADL outperforms both BDA and BALD with respect to training efficiency and classification accuracy. However, BGADL does not properly handle the class imbalance that may occur in the updated training sets formed at each iteration of the algorithm. We therefore extend BGADL to be robust to imbalanced training data by combining it with a sample re-weighting learning approach. We empirically demonstrate that the extended BGADL performs well on several imbalanced data sets and produces better classification results than other baselines. In summary, the contributions of this thesis are the introduction of the following novel methods: Bayesian data augmentation, Bayesian generative active deep learning, and a robust Bayesian generative active deep learning for imbalanced learning. All of these contributions are supported by theoretical justifications, empirical evidence, and published or submitted papers.
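The BALD criterion mentioned above scores a candidate sample by the mutual information between its prediction and the model parameters, estimated from multiple stochastic forward passes (e.g. MC dropout). A hedged numpy sketch of that score, not the thesis's actual implementation:

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a probability vector along `axis`."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def bald_scores(probs):
    """BALD acquisition scores.

    probs: (T, N, C) array of class probabilities from T stochastic
    forward passes over N candidate samples with C classes.
    Returns an (N,) array: predictive entropy minus the expected
    entropy of the individual passes, i.e. the disagreement between
    passes. High scores mark informative samples.
    """
    mean_p = probs.mean(axis=0)               # (N, C) predictive distribution
    predictive_entropy = entropy(mean_p)      # total uncertainty
    expected_entropy = entropy(probs).mean(axis=0)  # average per-pass uncertainty
    return predictive_entropy - expected_entropy
```

For intuition: if all T passes agree on a confident prediction, the two entropy terms cancel and the score is near zero; if the passes each predict confidently but disagree with one another (e.g. half say class 0, half say class 1), the score approaches log 2, flagging the sample as informative.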
Advisor: Carneiro, Gustavo
Reid, Ian
Dissertation Note: Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 2019
Keywords: Bayesian inference
Deep learning
Data augmentation
Generative active learning
Imbalanced learning
Sample Reweighting
Provenance: This electronic version is made publicly available by the University of Adelaide in accordance with its open access policy for student theses. Copyright in this thesis remains with the author. This thesis may incorporate third party material which has been used by the author pursuant to Fair Dealing exceptions. If you are the owner of any included third party copyright material you wish to be removed from this electronic version, please complete the take down form located at:
Appears in Collections: Research Theses

Files in This Item:
File: Tran2019_PhD.pdf (22.21 MB, Adobe PDF)
