Version 2 2024-06-06, 02:46
Version 1 2019-04-04, 12:56
conference contribution
posted on 2024-06-06, 02:46, authored by QH Pham, MK Tran, W Li, S Xiang, H Zhou, W Nie, A Liu, Y Su, MT Tran, NM Bui, TL Do, TV Ninh, TK Le, AV Dao, VT Nguyen, MN Do, AD Duong, BS Hua, LF Yu, Duc Thanh Nguyen, SK Yeung
Recent advances in consumer-grade depth sensors have enabled the collection of massive real-world 3D objects. Together with the rise of deep learning, this brings great potential for large-scale 3D object retrieval. In this challenge, we aim to study and evaluate the performance of 3D object retrieval algorithms on RGB-D data. To support the study, we expanded the previous ObjectNN dataset [HTT ∗ 17] to include RGB-D objects from both SceneNN [HPN ∗ 16] and ScanNet [DCS ∗ 17], with the CAD models from ShapeNetSem [CFG ∗ 15]. Evaluation results show that while the RGB-D to CAD retrieval problem is indeed challenging due to incomplete RGB-D reconstructions, it can be addressed to a certain extent using deep learning techniques trained on multi-view 2D images or 3D point clouds. The best method in this track achieves an 82% retrieval accuracy.
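The multi-view retrieval approach mentioned above can be illustrated with a minimal sketch. This is not any participant's actual method: it assumes per-view descriptors (e.g. CNN features extracted from rendered or captured views, a hypothetical upstream step) are already available, average-pools them into a single object descriptor, and ranks CAD models by cosine similarity.

```python
import numpy as np

def pool_views(view_features):
    """Average-pool per-view descriptors into one unit-normalized
    object descriptor.

    view_features: (n_views, d) array, e.g. CNN features of rendered
    CAD views or captured RGB-D views (hypothetical upstream step).
    """
    v = np.mean(view_features, axis=0)
    return v / (np.linalg.norm(v) + 1e-12)

def retrieve(query_desc, cad_descs, k=5):
    """Rank CAD models by cosine similarity to the query descriptor.

    cad_descs: (n_models, d) array of unit-normalized descriptors, so
    the dot product equals cosine similarity.
    Returns the indices and similarities of the top-k matches.
    """
    sims = cad_descs @ query_desc
    order = np.argsort(-sims)[:k]
    return order, sims[order]
```

A query RGB-D object would go through the same pooling as the CAD models; incomplete reconstructions then show up as views that only partially match, which is one reason the abstract calls the problem challenging.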