SHREC'17: RGB-D to CAD retrieval with ObjectNN dataset
Version 2 2024-06-06, 10:12
Version 1 2019-04-16, 15:13
conference contribution
posted on 2024-06-06, 10:12, authored by BS Hua, QT Truong, MK Tran, QH Pham, A Kanezaki, T Lee, HY Chiang, W Hsu, B Li, Y Lu, H Johan, S Tashiro, M Aono, MT Tran, VK Pham, HD Nguyen, VT Nguyen, QT Tran, TV Phan, B Truong, MN Do, AD Duong, LF Yu, Duc Thanh Nguyen, SK Yeung
The goal of this track is to study and evaluate the performance of 3D object retrieval algorithms using RGB-D data. It is inspired by the practical need to pair an object acquired with a consumer-grade depth camera to CAD models available in public datasets on the Internet. To support the study, we propose ObjectNN, a new dataset with well-segmented and annotated RGB-D objects from SceneNN [HPN ∗ 16] and CAD models from ShapeNet [CFG ∗ 15]. The evaluation results show that the RGB-D to CAD retrieval problem, while challenging to solve due to partial and noisy 3D reconstruction, can be addressed to a good extent using deep learning techniques, particularly convolutional neural networks trained on multi-view images and 3D geometry. The best method in this track achieves 82% retrieval accuracy.
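As a minimal sketch of the kind of metric reported above, the following computes top-1 retrieval accuracy: the fraction of query RGB-D objects whose top-ranked CAD model shares the query's category label. This is an illustrative assumption about the evaluation, not the track's official scoring code; the function name and example labels are hypothetical.

```python
def top1_accuracy(retrieved_labels, query_labels):
    """Fraction of queries whose top-ranked retrieved CAD model
    has the same category label as the query RGB-D object.

    retrieved_labels[i]: category of the top-ranked CAD model for query i
    query_labels[i]:     ground-truth category of query i
    """
    assert len(retrieved_labels) == len(query_labels)
    correct = sum(r == q for r, q in zip(retrieved_labels, query_labels))
    return correct / len(query_labels)

# Hypothetical example: 4 of 5 queries retrieve a model of the correct category.
acc = top1_accuracy(["chair", "table", "sofa", "lamp", "desk"],
                    ["chair", "table", "sofa", "lamp", "shelf"])
print(acc)  # 0.8
```

A full benchmark would additionally report rank-based measures over the whole retrieved list (e.g. precision-recall or nearest-neighbor statistics), but the per-query top-1 comparison above is the simplest accuracy notion consistent with a single percentage score.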