Version 2 2024-06-06, 02:45
Version 1 2018-01-22, 12:29
journal contribution
posted on 2024-06-06, 02:45, authored by TD Nguyen, BS Hua, LF Yu, SK Yeung
Recent advances in 3D acquisition devices have enabled large-scale acquisition of 3D scene data. Such data, if completely and well annotated, can serve as useful ingredients for a wide spectrum of computer vision and graphics tasks such as data-driven modeling, scene understanding, and object detection and recognition. However, annotating a vast amount of 3D scene data remains challenging due to the lack of an effective tool and/or the complexity of 3D scenes (e.g., clutter, varying illumination conditions). This paper aims to build a robust annotation tool that effectively and conveniently enables the segmentation and annotation of massive 3D data. Our tool works by coupling 2D and 3D information via an interactive framework, through which users can provide high-level semantic annotation for objects. We have experimented with our tool and found that a typical indoor scene could be well segmented and annotated in less than 30 minutes using the tool, as opposed to a few hours if done manually. Along with the tool, we created a dataset of over a hundred 3D scenes with complete annotations produced using our tool. Both the tool and the dataset will be available at http://scenenn.net.
History
Journal
IEEE Transactions on Visualization and Computer Graphics