Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/133344
Type: Conference paper
Title: 3D semantic mapping from arthroscopy using out-of-distribution pose and depth and in-distribution segmentation training
Author: Jonmohamadi, Y.
Ali, S.
Liu, F.
Roberts, J.
Crawford, R.
Carneiro, G.
Pandey, A.K.
Citation: Lecture Notes in Artificial Intelligence, 2021 / de Bruijne, M., Cattin, P.C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C. (eds), vol. 12902 LNCS, pp. 383-393
Publisher: Springer International Publishing
Publisher Place: Switzerland
Issue Date: 2021
Series/Report no.: Lecture Notes in Computer Science
ISBN: 9783030871956
ISSN: 0302-9743
1611-3349
Conference Name: 24th International Conference of Medical Image Computing and Computer Assisted Intervention – MICCAI 2021 (27 Sep 2021 - 1 Oct 2021 : Strasbourg, France)
Editor: de Bruijne, M.
Cattin, P.C.
Cotin, S.
Padoy, N.
Speidel, S.
Zheng, Y.
Essert, C.
Statement of Responsibility: Yaqub Jonmohamadi, Shahnewaz Ali, Fengbei Liu, Jonathan Roberts, Ross Crawford, Gustavo Carneiro, Ajay K. Pandey
Abstract: Minimally invasive surgery (MIS) has many documented advantages, but the surgeon’s limited visual contact with the scene can be problematic. Hence, systems that can help surgeons navigate, such as a method that can produce a 3D semantic map, can compensate for the limitation above. In theory, we can borrow 3D semantic mapping techniques developed for robotics, but this requires finding solutions to the following challenges in MIS: 1) semantic segmentation, 2) depth estimation, and 3) pose estimation. In this paper, we propose the first 3D semantic mapping system from knee arthroscopy that solves the three challenges above. Using out-of-distribution non-human datasets, where pose could be labeled, we jointly train depth+pose estimators using self-supervised and supervised losses. Using an in-distribution human knee dataset, we train a fully-supervised semantic segmentation system to label arthroscopic image pixels into femur, ACL, and meniscus. Taking testing images from human knees, we combine the results from these two systems to automatically create 3D semantic maps of the human knee. The result of this work opens the pathway to the generation of intra-operative 3D semantic mapping, registration with pre-operative data, and robotic-assisted arthroscopy. Source code: https://github.com/YJonmo/EndoMapNet.
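The fusion step described in the abstract — combining per-frame depth, camera pose, and semantic labels into a 3D map — can be sketched generically as labeled back-projection. The snippet below is an illustrative sketch of that standard step, not the authors' implementation; the function name `backproject_labeled` and the toy intrinsics/pose values are hypothetical.

```python
import numpy as np

def backproject_labeled(depth, labels, K, T_wc):
    """Lift each pixel to a world-space 3D point carrying its semantic label.

    depth  : (H, W) per-pixel depth in metres (from a depth estimator)
    labels : (H, W) integer class map (from a segmentation network)
    K      : (3, 3) camera intrinsics
    T_wc   : (4, 4) camera-to-world pose (from a pose estimator)
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, one row per pixel.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T               # normalised camera rays
    pts_cam = rays * depth.reshape(-1, 1)         # scale each ray by its depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    pts_world = (pts_h @ T_wc.T)[:, :3]           # move into the world frame
    return pts_world, labels.reshape(-1)

# Toy usage: identity pose, constant 5 cm depth, two fake classes.
K = np.array([[500.0, 0.0, 32.0],
              [0.0, 500.0, 24.0],
              [0.0,   0.0,  1.0]])
depth = np.full((48, 64), 0.05)
labels = np.zeros((48, 64), dtype=int)
labels[:24] = 1                                   # pretend the top half is "femur"
pts, lab = backproject_labeled(depth, labels, K, np.eye(4))
```

Accumulating such labeled points across frames, with each frame's estimated pose, yields the kind of 3D semantic map the paper produces; the paper's actual pipeline is in the linked source code.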
Keywords: 3D semantic mapping; endoscopy; deep learning
Rights: © Springer Nature Switzerland AG 2021
DOI: 10.1007/978-3-030-87196-3_36
Grant ID: http://purl.org/au-research/grants/arc/DP180103232
http://purl.org/au-research/grants/arc/FT190100525
Published version: https://link.springer.com/book/10.1007/978-3-030-87196-3
Appears in Collections:Computer Science publications

Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.