Conference paper, Year: 2019

Influence of vision on short-term sound localization training with non-individualized HRTF

Abstract

Previous studies have demonstrated that humans can adapt to new HRTFs, whether non-individualized or altered, within a short time period. While natural adaptation through sound exposure takes several weeks [1], training programs have been used to accelerate adaptation and improve sound localization performance within a few days (see [2] for a review). Most of these training programs are based on audio-visual positional or response feedback learning [3] (participants correct their answer after seeing the target position), or on active learning, for example through audio-proprioceptive manipulations [4] (blindfolded participants actively explore the sphere around them by playing a miniature sonified version of the hot-and-cold game). While all these training programs rely on a bimodal coupling (audio-vision [3] or audio-proprioception [4]), they are rarely based on a trimodal one. Thus, given that vision is not necessary for adaptation [4] and that audio-visual training can even be less efficient than other methods [1,2], the role of vision in short-term auditory localization training remains unclear, especially when action and proprioception are already involved. Our study compares two versions of active training: an audio-proprioceptive one and an audio-visuo-proprioceptive one. We hypothesize that combining all three modalities leads to better adaptation, yielding better performance and a longer-lasting effect.

The experiment is developed in virtual reality, using an HTC Vive as head and hand tracker. 3D audio spatialization is obtained through Steam Audio's non-individualized built-in HRTF. When applicable, 3D visual information is displayed directly on the Vive screen. A total of 36 participants, equally distributed across 3 groups (G1 to G3), take part in this between-subjects study.

G1 is a control group receiving no training session, while the 2 other groups receive a 12-minute training session on each of 3 consecutive days. All participants also perform 5 sound localization tests (no feedback, hand-pointing technique, 2 repetitions × 33 positions, frontal space): one before the experiment, one after each training session, and a last one 1 week after the first day, to evaluate the remaining effect. G2 receives an audio-proprioceptive training as described in [4]: participants freely scan the space around them with a hand-held Vive controller to find an animal sound hidden in that space. The controller-to-target angular distance is sonified and spatialized at the controller position (a minimal sonification sketch is given below); no visual information is provided. G3 performs the same task as G2, but a visual representation of a sphere is also displayed at the hand position during all training sessions (audio-visuo-proprioceptive condition).

We measure the angular error in azimuth and elevation during the localization tests. Performances are also analyzed in the interaural polar coordinate system to discuss front/back and up/down confusion errors (see the coordinate-conversion sketch below). Data from the training sessions are logged (total number of animals found and detailed sequence of hand positions) to evaluate how training and vision influence the scanning strategy. The experimental phase is currently under way (10 participants have completed it so far) and will continue until the end of April. Complete results will be available for the final version of the paper in June.
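As a rough illustration of the audio-proprioceptive training loop, the following Python sketch computes the controller-to-target angular distance and maps it to a beep repetition rate, in the spirit of a sonified hot-and-cold game. The head-centred geometry, the linear mapping, and the rate bounds are illustrative assumptions, not the parameters actually used in the study or in [4].

```python
import numpy as np

def angular_distance_deg(head_pos, controller_pos, target_pos):
    """Angle (degrees) between the head->controller and head->target directions."""
    u = np.asarray(controller_pos, float) - np.asarray(head_pos, float)
    v = np.asarray(target_pos, float) - np.asarray(head_pos, float)
    u /= np.linalg.norm(u)
    v /= np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))

def beep_rate_hz(angle_deg, min_rate=1.0, max_rate=10.0, max_angle=180.0):
    """Map angular distance to a repetition rate: closer to the target -> faster beeps.
    The linear mapping and the rate bounds are illustrative assumptions."""
    closeness = 1.0 - min(angle_deg, max_angle) / max_angle
    return min_rate + closeness * (max_rate - min_rate)
```

For instance, a controller direction 30° away from the target yields a rate of about 8.5 Hz with this mapping, while 180° away yields the minimum rate of 1 Hz.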
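The localization-error analysis relies on two coordinate systems: azimuth/elevation and interaural polar (lateral/polar angles). A minimal conversion sketch follows; the axis convention (x = front, y = left, z = up) and the simplified front/back confusion criterion are assumptions for illustration, not the study's exact definitions.

```python
import numpy as np

def vertical_polar(direction):
    """Azimuth and elevation (degrees) of a direction vector.
    Assumed convention: x = front, y = left, z = up."""
    x, y, z = np.asarray(direction, float) / np.linalg.norm(direction)
    azimuth = np.degrees(np.arctan2(y, x))                     # 0 = front, +90 = left
    elevation = np.degrees(np.arcsin(np.clip(z, -1.0, 1.0)))   # +90 = above
    return azimuth, elevation

def interaural_polar(direction):
    """Lateral and polar angles (degrees) of a direction vector.
    The lateral angle measures left/right displacement from the median plane;
    the polar angle rotates around the interaural axis (0 = front, 90 = above, 180 = back)."""
    x, y, z = np.asarray(direction, float) / np.linalg.norm(direction)
    lateral = np.degrees(np.arcsin(np.clip(y, -1.0, 1.0)))
    polar = np.degrees(np.arctan2(z, x))
    return lateral, polar

def is_front_back_confusion(target_dir, response_dir):
    """Simplified front/back confusion flag (an illustrative criterion):
    target and response lie on opposite sides of the frontal plane."""
    return target_dir[0] * response_dir[0] < 0
```

For example, a target straight ahead, (1, 0, 0), answered as straight behind, (-1, 0, 0), has the same lateral angle (0°) but polar angles of 0° versus 180°, and is flagged as a front/back confusion.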
References
[1] Carlile, S., and Blackman, T. Relearning auditory spectral cues for locations inside and outside the visual field. J. Assoc. Res. Otolaryngol. 15, 249-263 (2014)
[2] Strelnikov, K., Rosito, M., and Barone, P. Effect of audiovisual training on monaural spatial hearing in horizontal plane. PLoS ONE 6:e18344 (2011)
[3] Mendonça, C. A review on auditory space adaptation to altered head-related cues. Front. Neurosci. 8, 219 (2014)
[4] Parseihian, G., and Katz, B.F.G. Rapid head-related transfer function adaptation using a virtual auditory environment. J. Acoust. Soc. Am. 131, 2948-2957 (2012)
Main file: 000004.pdf (256.37 KB)
Origin: Publication funded by an institution

Dates and versions

hal-02466823, version 2 (30-08-2019)
hal-02466823, version 1 (04-02-2020)

Identifiers

Cite

Tifanie Bouchara, Tristan-Gaël Bara, Pierre-Louis Weiss, Alma Guilbert. Influence of vision on short-term sound localization training with non-individualized HRTF. EAA Spatial Audio Signal Processing Symposium, Sep 2019, Paris, France. pp.55-60, ⟨10.25836/sasp.2019.04⟩. ⟨hal-02466823v2⟩
