Deakin University

SceneNN: a scene meshes dataset with aNNotations

Version 2 2024-06-06, 02:45
Version 1 2016-12-15, 00:00
conference contribution
posted on 2024-06-06, 02:45 authored by BS Hua, QH Pham, Duc Thanh Nguyen, MK Tran, LF Yu, SK Yeung
Several RGB-D datasets have been released over the past few years to facilitate research in computer vision and robotics. However, the lack of comprehensive and fine-grained annotation in these datasets has limited their widespread use. In this paper, we introduce SceneNN, an RGB-D scene dataset consisting of 100 scenes. All scenes are reconstructed into triangle meshes and have per-vertex and per-pixel annotations. We further enrich the dataset with fine-grained information such as axis-aligned bounding boxes, oriented bounding boxes, and object poses. We use the dataset as a benchmark to evaluate state-of-the-art methods on relevant research problems such as intrinsic decomposition and shape completion. Our dataset and annotation tools are available at http://www.scenenn.net.

Location

Stanford, California

Language

eng

Publication classification

E Conference publication, E1.1 Full written paper - refereed

Copyright notice

2016, IEEE

Pagination

92-101

Start date

2016-10-25

End date

2016-10-28

ISBN-13

9781509054077

Title of proceedings

3DV 2016 : Proceedings of the 4th International Conference on 3D Vision 2016

Event

International Conference on 3D Vision (4th : 2016 : Stanford, California)

Publisher

IEEE

Place of publication

Piscataway, N.J.
