Creating sheet music can be an arduous task for instrumentalists and vocalists who improvise. This problem can be solved by using a pitch detector that produces a MIDI data file. MIDI data is a good solution because it can be easily transformed into music notation by standard programs such as Logic Pro X or Finale. The authors have developed an open-source C/UNIX-based program that automatically transforms a monophonic sound file into a playable MIDI file. Pitch (F0) detection is accomplished using a short-time autocorrelation algorithm. Successive F0s that correspond to the same MIDI note number are combined to form notes. The minimum duration of each note is determined by the autocorrelation window size, which in our case is set to 0.03 s. To achieve a more accurate notation result, the program employs duration and RMS amplitude thresholds to exclude spurious notes from the MIDI data.
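The abstract describes the pipeline only at a high level. The following C sketch (not the authors' program) illustrates one way a short-time autocorrelation F0 estimate and the F0-to-MIDI-note-number mapping could look; the sample rate, ~0.03 s window length, lag search range, and peak-acceptance threshold are illustrative assumptions.

```c
/*
 * Illustrative sketch only: estimate the F0 of one analysis frame by
 * short-time autocorrelation and map it to the nearest MIDI note number.
 * Constants below are assumed values for demonstration, not the authors'.
 */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE 44100
#define WIN_SIZE    1323        /* about 0.03 s at 44.1 kHz, as in the abstract */

/* Estimate F0 (Hz) of a WIN_SIZE-sample frame; returns 0 if no clear peak. */
static double estimate_f0(const double *x)
{
    int min_lag = SAMPLE_RATE / 1000;   /* search up to ~1000 Hz */
    int max_lag = SAMPLE_RATE / 50;     /* down to ~50 Hz */
    int best_lag = 0;
    double best_r = 0.0, r0 = 0.0;

    for (int n = 0; n < WIN_SIZE; n++)
        r0 += x[n] * x[n];              /* lag-0 autocorrelation (frame energy) */
    if (r0 <= 0.0)
        return 0.0;

    for (int lag = min_lag; lag <= max_lag && lag < WIN_SIZE; lag++) {
        double r = 0.0;
        for (int n = 0; n + lag < WIN_SIZE; n++)
            r += x[n] * x[n + lag];
        if (r > best_r) {
            best_r = r;
            best_lag = lag;
        }
    }
    /* Accept the peak only if it is a reasonable fraction of the energy. */
    if (best_lag == 0 || best_r < 0.3 * r0)
        return 0.0;
    return (double)SAMPLE_RATE / best_lag;
}

/* Map F0 (Hz) to the nearest MIDI note number (A440 = note 69). */
static int f0_to_midi(double f0)
{
    return (int)lround(69.0 + 12.0 * log2(f0 / 440.0));
}

int main(void)
{
    /* Synthesize one frame of a 220 Hz sine as a stand-in for real audio. */
    double frame[WIN_SIZE];
    for (int n = 0; n < WIN_SIZE; n++)
        frame[n] = sin(2.0 * M_PI * 220.0 * n / SAMPLE_RATE);

    double f0 = estimate_f0(frame);
    if (f0 > 0.0)
        printf("F0 = %.1f Hz -> MIDI note %d\n", f0, f0_to_midi(f0));
    else
        printf("No pitch detected\n");
    return 0;
}
```

In a full transcriber, this per-frame estimate would be run over successive windows, consecutive frames mapping to the same MIDI note number would be merged into a single note, and notes failing the duration or RMS amplitude thresholds mentioned in the abstract would be discarded before writing the MIDI file.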
Meeting abstract. No PDF available.
Automatic transcription of solo audio into music notation
Dong Hyun Lee
Dept. of Elec. and Comput. Eng., Univ. of Illinois Urbana-Champaign, Urbana, IL 61801, [email protected]
James W. Beauchamp
Dept. of Elec. and Comput. Eng., School of Music, Univ. of Illinois Urbana-Champaign, Urbana, IL
J. Acoust. Soc. Am. 145, 1709–1710 (2019)
Citation
Dong Hyun Lee, James W. Beauchamp; Automatic transcription of solo audio into music notation. J. Acoust. Soc. Am. 1 March 2019; 145 (3_Supplement): 1709–1710. https://doi.org/10.1121/1.5101272