Audio-visual-olfactory resource allocation for tri-modal virtual environments

Authors
Doukakis, E.; Debattista, K.; Bashford-Rogers, T.; Dhokia, A.; Asadipour, A.; Chalmers, A.; Harvey, C.
Abstract
© 2019 IEEE. Virtual Environments (VEs) provide the opportunity to simulate a wide range of applications, from training to entertainment, in a safe and controlled manner. For applications that require realistic representations of real-world environments, the VEs need to provide multiple, physically accurate sensory stimuli. However, simulating all the senses that comprise the human sensory system (HSS) requires significant computational resources. Since it is intractable to deliver all senses at the highest quality, we propose a resource distribution scheme to achieve an optimal perceptual experience within the given computational budgets. This paper investigates resource balancing for multi-modal scenarios composed of aural, visual and olfactory stimuli. Three experimental studies were conducted. The first experiment identified perceptual boundaries for olfactory computation. In the second experiment, participants (N=25) were asked, across a fixed number of budgets (M=5), to identify what they perceived to be the best visual, acoustic and olfactory stimulus quality for a given computational budget. Results demonstrate that participants tend to prioritize visual quality over the other sensory stimuli. However, as the budget increases, users prefer a balanced distribution of resources, with a growing preference for smell impulses in the VE. Based on the collected data, a quality prediction model is proposed, and in a third and final experiment its accuracy is validated against previously unused budgets and an untested scenario.
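The allocation problem the abstract describes, splitting one computational budget across visual, auditory and olfactory rendering to maximize perceived quality, can be illustrated with a minimal sketch. Everything below (the weights, the diminishing-returns quality curves, the grid search) is a hypothetical assumption for exposition only; it is not the quality prediction model fitted in the paper.

```python
import math

# Hypothetical per-modality preference weights; illustrative only,
# not values fitted from the paper's experimental data.
WEIGHTS = {"visual": 0.5, "audio": 0.3, "olfactory": 0.2}

def modality_quality(fraction: float) -> float:
    """Assumed diminishing-returns quality curve: 0 with no budget, 1 with full budget."""
    return math.log1p(9.0 * fraction) / math.log(10.0)

def perceived_quality(alloc: dict[str, float]) -> float:
    """Overall perceived quality as a weighted sum over the three modalities."""
    return sum(WEIGHTS[m] * modality_quality(f) for m, f in alloc.items())

def best_allocation(step: float = 0.05) -> dict[str, float]:
    """Grid-search every split of a unit budget across visual/audio/olfactory."""
    n = int(round(1.0 / step))
    best, best_q = None, -1.0
    for v in range(n + 1):
        for a in range(n + 1 - v):
            o = n - v - a  # remainder of the budget goes to smell
            alloc = {"visual": v * step, "audio": a * step, "olfactory": o * step}
            q = perceived_quality(alloc)
            if q > best_q:
                best, best_q = alloc, q
    return best

if __name__ == "__main__":
    # Prints the highest-scoring split on the grid under these assumed curves.
    print(best_allocation())
```

A grid search suffices here because the feasible allocations form a small 2-simplex. With the assumed concave curves, the search favours vision but never starves the other senses entirely, which loosely mirrors the preference trend reported in the abstract.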
| Field | Value |
|---|---|
| Journal Article Type | Article |
| Acceptance Date | Jan 2, 2019 |
| Online Publication Date | Feb 14, 2019 |
| Publication Date | May 1, 2019 |
| Deposit Date | Feb 18, 2019 |
| Publicly Available Date | Feb 18, 2019 |
| Journal | IEEE Transactions on Visualization and Computer Graphics |
| Print ISSN | 1077-2626 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Peer Reviewed | Peer Reviewed |
| Volume | 25 |
| Issue | 5 |
| Pages | 1865-1875 |
| DOI | https://doi.org/10.1109/TVCG.2019.2898823 |
| Keywords | Olfactory; Visualization; Computational modeling; Resource management; Mathematical model; Auditory system; Virtual environments; Multi-modal; Cross-modal; Tri-modal; Sound; Graphics |
| Public URL | https://uwe-repository.worktribe.com/output/846388 |
| Publisher URL | http://dx.doi.org/10.1109/TVCG.2019.2898823 |
| Additional Information | (c) 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works. |
| Contract Date | Feb 18, 2019 |
Files

main.pdf (22.2 MB, PDF)
You might also like

- Learning preferential perceptual exposure for HDR displays (2019), Journal Article
- Olfaction and selective rendering (2017), Journal Article
- Subjective evaluation of high-fidelity virtual environments for driving simulations (2017), Journal Article