On the Use of Sparse Time-Relative Auditory Codes for Music
Presenters: Arthur Tofani, Thilo Koch
In this seminar we will present a paper by Pierre-Antoine Manzagol, Thierry Bertin-Mahieux, and Douglas Eck, presented at ISMIR 2008.<blockquote>Many if not most audio features used in MIR research are inspired by work done in speech recognition and are variations on the spectrogram. Recently, much attention has been given to new representations of audio that are sparse and time-relative. These representations are efficient and able to avoid the time-frequency trade-off of a spectrogram. Yet little work with music streams has been conducted and these features remain mostly unused in the MIR community. In this paper we further explore the use of these features for musical signals. In particular, we investigate their use on realistic music examples (i.e. released commercial music) and their use as input features for supervised learning.</blockquote>
When: September 13th, 2016
Where: Auditório Antonio Giglioli, IME/USP