Please use this identifier to cite or link to this item:
https://hdl.handle.net/2440/108833
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Zhang, C. | - |
dc.contributor.author | Shen, C. | - |
dc.contributor.author | Shen, T. | - |
dc.date.issued | 2016 | - |
dc.identifier.citation | International Journal of Computer Vision, 2016; 116(1):90-107 | - |
dc.identifier.issn | 0920-5691 | - |
dc.identifier.issn | 1573-1405 | - |
dc.identifier.uri | http://hdl.handle.net/2440/108833 | - |
dc.description.abstract | We propose a fast, accurate matching method for estimating dense pixel correspondences across scenes. Estimating dense pixel correspondences between images depicting different scenes, or different instances of the same object category, is a challenging problem. While most such matching methods rely on hand-crafted features such as SIFT, we learn features from a large number of unlabeled image patches using unsupervised learning. Pixel-layer features are obtained by encoding over the learned dictionary, followed by spatial pooling to obtain patch-layer features. The learned features are then seamlessly embedded into a multi-layer matching framework. We experimentally demonstrate that the learned features, together with our matching model, outperform state-of-the-art methods such as SIFT flow (Liu et al. in IEEE Trans Pattern Anal Mach Intell 33(5):978–994, 2011), coherency sensitive hashing (Korman and Avidan in: Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2011) and the recent deformable spatial pyramid matching (Kim et al. in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2013) in terms of both accuracy and computational efficiency. Furthermore, we evaluate several dictionary learning and feature encoding methods within the proposed pixel correspondence estimation framework, and analyze the impact of dictionary learning and feature encoding on the final matching performance. | - |
dc.description.statementofresponsibility | Chao Zhang, Chunhua Shen, Tingzhi Shen | - |
dc.language.iso | en | - |
dc.publisher | Springer | - |
dc.rights | © Springer Science+Business Media New York 2015 | - |
dc.source.uri | http://dx.doi.org/10.1007/s11263-015-0829-6 | - |
dc.subject | Unsupervised feature learning; scene alignment; dense scene correspondence; loopy belief propagation | - |
dc.title | Unsupervised Feature Learning for Dense Correspondences Across Scenes | - |
dc.type | Journal article | - |
dc.identifier.doi | 10.1007/s11263-015-0829-6 | - |
dc.relation.grant | http://purl.org/au-research/grants/arc/FT120100969 | - |
pubs.publication-status | Published | - |
dc.identifier.orcid | Shen, C. [0000-0002-8648-8718] | - |
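The abstract's pipeline (learn a dictionary from unlabeled patches, encode pixels over that dictionary to get pixel-layer features, then spatially pool into patch-layer features) can be sketched as follows. This is a minimal illustrative sketch only: the k-means dictionary, "triangle" soft encoding, and max-pooling used here are assumptions standing in for the several dictionary-learning and encoding variants the paper actually compares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5x5 grayscale patches, flattened to 25-dim vectors,
# standing in for a large set of unlabeled image patches.
patches = rng.standard_normal((1000, 25))

def kmeans(X, k, iters=20):
    """Unsupervised dictionary learning via plain k-means (one common choice)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each patch to its nearest dictionary atom.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # Move each atom to the mean of its assigned patches.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

D = kmeans(patches, k=16)  # dictionary of 16 atoms

def encode(x, D):
    """Pixel-layer feature: soft-assignment ("triangle") encoding over D."""
    d = np.sqrt(((D - x) ** 2).sum(1))
    return np.maximum(d.mean() - d, 0)  # non-negative, sparse-ish code

def patch_feature(pixel_codes):
    """Patch-layer feature: spatial max-pooling of pixel-layer codes."""
    return pixel_codes.max(axis=0)

# Encode a hypothetical 3x3 spatial neighbourhood and pool it into one feature.
codes = np.stack([encode(p, D) for p in patches[:9]])
f = patch_feature(codes)
print(f.shape)  # one patch-layer descriptor, one dimension per atom
```

The resulting per-patch descriptors would then feed the multi-layer matching model (optimized with loopy belief propagation, per the subject keywords); that matching stage is not sketched here.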
Appears in Collections: Aurora harvest 3; Computer Science publications
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
RA_hdl_108833.pdf | Restricted Access | 12.51 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.