Super-resolution of a 3-dimensional scene from novel viewpoints

Nelson, Kyle, Bhatti, Asim and Nahavandi, Saeid 2012, Super-resolution of a 3-dimensional scene from novel viewpoints, in ICARCV 2012 : Proceedings of the 12th International Conference on Control, Automation, Robotics and Vision, IEEE, Piscataway, N.J., pp. 1380-1385.

Title Super-resolution of a 3-dimensional scene from novel viewpoints
Author(s) Nelson, Kyle (ORCID: orcid.org/0000-0003-1956-5493)
Bhatti, Asim (ORCID: orcid.org/0000-0001-6876-1437)
Nahavandi, Saeid (ORCID: orcid.org/0000-0002-0360-5270)
Conference name Control, Automation, Robotics and Vision. Conference (12th : 2012 : Guangzhou, China)
Conference location Guangzhou, China
Conference dates 5-7 Dec. 2012
Title of proceedings ICARCV 2012 : Proceedings of the 12th International Conference on Control, Automation, Robotics and Vision
Editor(s) [Unknown]
Publication date 2012
Conference series Control, Automation, Robotics and Vision Conference
Start page 1380
End page 1385
Total pages 6
Publisher IEEE
Place of publication Piscataway, N.J.
Keyword(s) super-resolution
3-dimensional
multi-view
uncalibrated camera
novel view
sparse reconstruction
Summary Super-resolution is a post-processing image enhancement method that increases the spatial resolution of video or images. Existing super-resolution techniques apply only to images of a planar scene. This paper aims to extend super-resolution concepts from the 2D domain to the 3D domain, drawing on ideas from both super-resolution and multi-view geometry, two fields of research that until now have predominantly been studied in isolation. 2D super-resolution methods are not without their complexities and limitations. However, once multiple views of a scene are considered within a super-resolution framework, a new range of issues arises that must also be resolved. For example, when input images of a scene with variation in depth are considered, it is no longer clear how and where the images should be registered. This paper describes the use of sparse 3D reconstruction to ‘register’ the input images, which are then transferred to a novel image plane and combined to increase the perceived detail in the scene. Experimental results using real images captured from generally positioned input cameras are presented.
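
The pipeline the abstract outlines (match features across uncalibrated views, build a sparse 3D reconstruction to 'register' the images, then transfer both views onto a novel image plane and combine them) can be illustrated with a minimal OpenCV sketch. This is not the authors' implementation: the intrinsic matrix K, the choice of novel-view pose, and the single-homography transfer used here in place of a full depth-aware warp are simplifying assumptions made only for illustration.

# Minimal sketch of the registration/transfer idea described in the abstract.
# Assumptions (not from the paper): known intrinsics K, two input views,
# and a homography-based transfer to the novel image plane.
import cv2
import numpy as np

def sparse_register(img1, img2, K):
    """Match SIFT features and triangulate a sparse 3D point cloud."""
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(g1, None)
    k2, d2 = sift.detectAndCompute(g2, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])

    # Relative pose from the essential matrix; extrinsics are unknown a priori,
    # K is assumed known here for simplicity.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    inl = mask.ravel() > 0
    pts1, pts2 = pts1[inl], pts2[inl]

    # Sparse reconstruction: triangulate the inlier correspondences.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    X = (X[:3] / X[3]).T                      # N x 3 Euclidean points
    return X, pts1, pts2

def render_novel_view(img1, img2, X, pts1, pts2, K, R_novel, t_novel, size):
    """Transfer both inputs to a novel image plane via the sparse points and blend."""
    proj, _ = cv2.projectPoints(X, cv2.Rodrigues(R_novel)[0], t_novel, K, None)
    proj = proj.reshape(-1, 2)
    out = np.zeros((size[1], size[0], 3), np.float64)
    for img, pts in ((img1, pts1), (img2, pts2)):
        # Simplification: one homography from sparse matches to their novel-view
        # projections stands in for the per-point transfer a scene with varying
        # depth really requires.
        H, _ = cv2.findHomography(pts, proj, cv2.RANSAC, 3.0)
        out += cv2.warpPerspective(img, H, size).astype(np.float64)
    return np.clip(out / 2.0, 0, 255).astype(np.uint8)

Calling render_novel_view with a pose near the one recovered by sparse_register would synthesise a nearby viewpoint; the paper's contribution concerns performing this combination so that the fused image exhibits more perceived detail than either input, which the simple averaging above does not attempt to capture.
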
ISBN 9781467318723
Field of Research 080106 Image Processing
Socio Economic Objective 970108 Expanding Knowledge in the Information and Computing Sciences
HERDC Research category E1 Full written paper - refereed
Copyright notice ©2012, IEEE
Persistent URL http://hdl.handle.net/10536/DRO/DU:30050964

Unless expressly stated otherwise, the copyright for items in DRO is owned by the author, with all rights reserved.


Every reasonable effort has been made to ensure that permission has been obtained for items included in DRO. If you believe that your rights have been infringed by this repository, please contact drosupport@deakin.edu.au.