Audiovisual resource allocation for bimodal virtual environments
Doukakis, Efstratios; Debattista, Kurt; Harvey, Carlo; Chalmers, Alan; Bashford-Rogers, T.
Authors
Kurt Debattista
Carlo Harvey
Alan Chalmers
Tom Bashford-Rogers (Tom.Bashford-Rogers@uwe.ac.uk), Associate Lecturer, CATE - CSCT
Abstract
© 2017 The Authors. Computer Graphics Forum © 2017 The Eurographics Association and John Wiley & Sons Ltd. Fidelity is of key importance if virtual environments are to be used as authentic representations of real environments. However, simulating the multitude of senses that comprise the human sensory system is computationally challenging. With limited computational resources, it is essential to allocate them carefully in order to deliver the best possible perceptual experience. This paper investigates this balance of resources across multiple scenarios in which combined audiovisual stimulation is delivered to the user. A subjective experiment was undertaken in which participants (N=35) allocated five fixed resource budgets across graphics and acoustic stimuli. In the experiment, increasing the quality of one stimulus decreased the quality of the other. Findings demonstrate that participants allocate more resources to graphics; however, as the computational budget increases, an approximately balanced distribution of resources between graphics and acoustics is preferred. Based on the results, an audiovisual quality prediction model is proposed and successfully validated against previously untested budgets and an untested scenario.
Citation
Doukakis, E., Debattista, K., Harvey, C., Bashford-Rogers, T., & Chalmers, A. (2018). Audiovisual resource allocation for bimodal virtual environments. Computer Graphics Forum, 37(1), 172-183. https://doi.org/10.1111/cgf.13258
| Journal Article Type | Article |
|---|---|
| Acceptance Date | Jun 4, 2017 |
| Publication Date | Jan 1, 2018 |
| Publicly Available Date | Jul 14, 2018 |
| Journal | Computer Graphics Forum |
| Print ISSN | 0167-7055 |
| Electronic ISSN | 1467-8659 |
| Publisher | Wiley |
| Peer Reviewed | Peer Reviewed |
| Volume | 37 |
| Issue | 1 |
| Pages | 172-183 |
| DOI | https://doi.org/10.1111/cgf.13258 |
| Keywords | audio, visual, multi-modal, human perception |
| Public URL | https://uwe-repository.worktribe.com/output/884216 |
| Publisher URL | http://dx.doi.org/10.1111/cgf.13258 |
| Additional Information | This is the peer reviewed version of the following article: Doukakis, E., Debattista, K., Harvey, C., Bashford-Rogers, T. and Chalmers, A. (2017) Audio-visual resource allocation for bimodal virtual environments. Computer Graphics Forum. ISSN 0167-7055, which has been published in final form at http://dx.doi.org/10.1111/cgf.13258. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Self-Archiving. |
Files
AVRABVE.pdf
(21.6 MB)
PDF