SHREC'17: RGB-D to CAD retrieval with ObjectNN dataset
Version 2 2024-06-06, 10:12
Version 1 2019-04-16, 15:13
conference contribution
posted on 2017-01-01, 00:00, authored by B S Hua, Q T Truong, M K Tran, Q H Pham, A Kanezaki, T Lee, H Y Chiang, W Hsu, B Li, Y Lu, H Johan, S Tashiro, M Aono, M T Tran, V K Pham, H D Nguyen, V T Nguyen, Q T Tran, T V Phan, B Truong, M N Do, A D Duong, L F Yu, Duc Thanh Nguyen, S K Yeung
The goal of this track is to study and evaluate the performance of 3D object retrieval algorithms using RGB-D data. The track is inspired by the practical need to pair an object acquired with a consumer-grade depth camera to CAD models available in public datasets on the Internet. To support the study, we propose ObjectNN, a new dataset with well-segmented and annotated RGB-D objects from SceneNN [HPN ∗ 16] and CAD models from ShapeNet [CFG ∗ 15]. The evaluation results show that the RGB-D to CAD retrieval problem, while challenging to solve due to partial and noisy 3D reconstruction, can be addressed to a good extent using deep learning techniques, particularly convolutional neural networks trained on multi-view images and 3D geometry. The best method in this track scores 82% in accuracy.