Music perception remains rather poor for many cochlear implant (CI) users due to their deficient pitch perception. However, comprehensible vocals and simple music structures are well perceived by many CI users. In previous studies, researchers re-mixed songs to make music more enjoyable for CI users, emphasizing the preferred music elements (vocals or beat) while attenuating the others. However, mixing music requires the individually recorded tracks (multitracks), which are usually not accessible. To overcome this limitation, Source Separation (SS) techniques are proposed to estimate the multitracks, and these estimated multitracks are then re-mixed to create more pleasant music for CI users. However, SS may introduce undesirable audible distortions and artifacts. Experiments conducted with CI users (N = 9) and normal-hearing listeners (N = 9) show that CI users can have mixing preferences that differ from those of normal-hearing listeners. Moreover, it is shown that CI users' mixing preferences are user-dependent. It is also shown that SS methods can be successfully used to create preferred re-mixes even though distortions and artifacts are present. Finally, CI users' preferences are used to propose a benchmark that defines the maximum acceptable levels of SS distortion and artifacts for two different mixes proposed by CI users.
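For concreteness, the pipeline the abstract describes (estimate the multitracks with a source separation algorithm, then re-mix them with element-specific gains) can be sketched as below. This is a minimal illustration and not the authors' implementation: it assumes the stems ("vocals.wav", "drums.wav", "other.wav") have already been produced by an SS system, and the file names and 6 dB gain values are hypothetical rather than the preferences reported in the study.

```python
# Hedged sketch: re-mix already-separated stems with per-element gains.
# Stems are assumed to share the same length and sample rate.
import numpy as np
import soundfile as sf

def remix(stem_paths, gains_db):
    """Sum separated stems after applying per-stem gains given in dB."""
    mix, sr = None, None
    for name, path in stem_paths.items():
        audio, sr = sf.read(path)                      # load one estimated stem
        gain = 10.0 ** (gains_db.get(name, 0.0) / 20.0)  # dB -> linear amplitude
        mix = audio * gain if mix is None else mix + audio * gain
    peak = np.max(np.abs(mix))
    if peak > 1.0:                                     # normalize only if the sum clips
        mix = mix / peak
    return mix, sr

# Example: emphasize vocals by 6 dB and attenuate the accompaniment by 6 dB,
# mimicking a vocals-preferring listener (illustrative values only).
stems = {"vocals": "vocals.wav", "drums": "drums.wav", "other": "other.wav"}
mix, sr = remix(stems, {"vocals": 6.0, "drums": 0.0, "other": -6.0})
sf.write("remix.wav", mix, sr)
```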
Remixing music using source separation algorithms to improve the musical experience of cochlear implant users
Jordi Pons,a) Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Karl-Wiechert Allee 3, 30625 Hannover, Germany
Jordi Janer, Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, 55.310, 08018 Barcelona, Spain
Thilo Rode, HoerSys GmbH, Karl-Wiechert Allee 3, 30625 Hannover, Germany
Waldo Nogueira, Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Karl-Wiechert Allee 3, 30625 Hannover, Germany

a) Also at: Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, 55.308, 08018 Barcelona, Spain. Electronic mail: jordi.pons@upf.edu
J. Acoust. Soc. Am. 140, 4338–4349 (2016)
Article history: Received 18 January 2016; Accepted 22 November 2016; Published online 19 December 2016.
Citation
Jordi Pons, Jordi Janer, Thilo Rode, Waldo Nogueira; Remixing music using source separation algorithms to improve the musical experience of cochlear implant users. J. Acoust. Soc. Am. 1 December 2016; 140 (6): 4338–4349. https://doi.org/10.1121/1.4971424