Selective visuo-haptic rendering of heterogeneous objects in 'parallel universes'

Wei, Lei, Zhou, Hailing and Nahavandi, Saeid 2014, Selective visuo-haptic rendering of heterogeneous objects in 'parallel universes', in SMC 2014 : Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics, IEEE, Piscataway, N.J., pp. 2176-2179.

Title Selective visuo-haptic rendering of heterogeneous objects in 'parallel universes'
Author(s) Wei, Lei
Zhou, Hailing
Nahavandi, Saeid
Conference name Systems, Man, and Cybernetics. Conference (2014 : San Diego, California)
Conference location San Diego, California
Conference dates 5-8 Oct. 2014
Title of proceedings SMC 2014 : Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics
Editor(s) [Unknown]
Publication date 2014
Conference series Systems, Man, and Cybernetics Conference
Start page 2176
End page 2179
Total pages 4
Publisher IEEE
Place of publication Piscataway, N.J.
Summary  Haptic rendering of complex models is usually prohibitively expensive because it demands a much higher update rate than visual rendering. Previous works have addressed this issue by introducing local simulation or multi-rate simulation for the two pipelines. Although these works improved the capacity of the haptic rendering pipeline, they did not consider scenarios containing heterogeneous objects, where rigid and deformable objects coexist and lie close to each other. In this paper, we propose a novel idea to support interactive visuo-haptic rendering of complex heterogeneous models. The idea incorporates different collision detection and response algorithms and switches them seamlessly on and off on the fly as the haptic interaction point (HIP) travels through the scenario. The selection of rendered models is based on the hypothesis of “parallel universes”, where the transition from rendering one group of models to another is completely transparent to users. To facilitate this idea, we propose a procedure that converts the traditional single-universe scenario into a “multiverse” scenario, where the original models are grouped and split into parallel universes according to the scenario's rendering requirements rather than locality alone. We also propose adding simplified visual objects as background avatars in each parallel universe to visually preserve the original scenario without overly increasing its complexity. We tested the proposed idea in a haptically-enabled needle thoracostomy training environment, and the results demonstrate that our idea substantially accelerates visuo-haptic rendering of scenarios with complex heterogeneous objects.
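The abstract's core mechanism can be illustrated with a minimal sketch: the scene is split into groups ("universes"), each holding its own full-fidelity haptic models plus simplified visual avatars standing in for the other groups, and only the universe nearest the HIP is haptically rendered. All names and the distance-based switching rule below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the "parallel universes" selection idea. The
# switching rule (nearest group center to the HIP) and all model names
# are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class Universe:
    name: str
    center: tuple                 # representative position of this model group
    haptic_models: list = field(default_factory=list)       # full-fidelity models
    background_avatars: list = field(default_factory=list)  # simplified visual stand-ins


def sq_dist(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))


def active_universe(hip_pos, universes):
    """Pick the universe whose group the HIP is currently closest to."""
    return min(universes, key=lambda u: sq_dist(hip_pos, u.center))


def render_frame(hip_pos, universes):
    """Haptically render only the active universe; all other universes are
    represented visually by their simplified background avatars, so the
    switch between groups stays transparent to the user."""
    active = active_universe(hip_pos, universes)
    haptic = active.haptic_models
    visual = active.haptic_models + [
        a for u in universes if u is not active for a in u.background_avatars
    ]
    return active.name, haptic, visual


universes = [
    Universe("rigid", (0.0, 0.0, 0.0), ["ribs"], ["ribs_lowpoly"]),
    Universe("deformable", (1.0, 0.0, 0.0), ["skin", "lung"], ["torso_lowpoly"]),
]
# HIP near the deformable group: only its full models are haptically rendered.
name, haptic, visual = render_frame((0.9, 0.1, 0.0), universes)
```

In a real system the per-universe collision algorithms (rigid vs. deformable) would be enabled or disabled at this switch point, keeping the haptic loop's update rate bounded by a single group's complexity.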
ISBN 9781479938391
Language eng
Field of Research 080111 Virtual Reality and Related Simulation
Socio Economic Objective 970108 Expanding Knowledge in the Information and Computing Sciences
HERDC Research category E1 Full written paper - refereed
ERA Research output type E Conference publication
Copyright notice ©2014, IEEE

Citation counts: Cited 4 times in Web of Science; cited 3 times in Scopus
Access Statistics: 526 Abstract Views, 5 File Downloads
Created: Fri, 17 Apr 2015, 12:05:07 EST
