Seminars

Past talks

Speaker: Rodrigo Borges and Shayenne Moura
Date and time: Thursday, September 29, 2016 - 16:00
Place: Auditório Antonio Gilioli, IME/USP
Abstract:

This seminar presents the ISMIR 2009 Best Student Paper, "Easy As CBA: A Simple Probabilistic Model for Tagging Music" by Matthew D. Hoffman, David M. Blei, and Perry R. Cook.

Many songs are not associated with representative tags, which makes retrieving songs by their tags inefficient.

We present a probabilistic model that learns to automatically predict which words apply to a song from its timbre features. The method is simple to implement, easy to test, and returns results quickly.
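In the CBA model, a song is summarized as a histogram over vector-quantized timbre codewords, and each tag has a learned per-codeword Bernoulli parameter. The prediction step can be sketched as follows; the variable names, sizes, and random stand-in parameters are our illustration, not the authors' code:

```python
import numpy as np

def predict_tag_probs(codeword_counts, beta):
    """P(tag applies) = sum_k (n_k / N) * beta[t, k]: the per-codeword
    Bernoulli parameters averaged under the song's codeword histogram."""
    weights = codeword_counts / codeword_counts.sum()  # empirical codeword distribution
    return beta @ weights                              # one probability per tag

K, T = 8, 3                        # codebook size, number of tags (assumed)
rng = np.random.default_rng(0)
beta = rng.uniform(size=(T, K))    # stand-in for the learned parameters
counts = rng.integers(0, 10, K).astype(float)
probs = predict_tag_probs(counts, beta)
```

Because prediction is a single weighted average, scoring a new song is very cheap, which matches the paper's emphasis on speed and simplicity.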


(video presentation in Portuguese)

Speaker: Guilherme Feulo and Guilherme Jun
Date and time: Tuesday, September 27, 2016 - 16:00
Place: Auditório do CCSL
Abstract: Nowadays there are a lot of scores available on the internet for free. However, beginners may have some difficulties to choose a score that is appropriate to their instrument level. In this seminar we will talk about the paper SCORE ANALYZER: AUTOMATICALLY DETERMINING SCORES DIFFICULTY LEVEL FOR INSTRUMENTAL E-LEARNING from Véronique Sébastien, Henri Ralambondrainy, Olivier Sébastien and Noël Conruyt,this paper was awarded as the best student paper of 2012 ISMIR. The autors proposed a Score Analyser prototype in order to automatically extract the difficulty level of a MusicXML piece and suggest advice thanks to a Musical Sign Base (MSB). During the seminar we will discuss about their approach and results.


(video presentation in Portuguese)

Speaker: Alessandro Palmeira and Itai Soares
Date and time: Thursday, September 22, 2016 - 16:00
Place: IME/USP
Abstract: In this seminar, we will present the best paper from the 2015 International Society for Music Information Retrieval Conference (ISMIR 2015), entitled "Real-time music tracking using multiple performances as a reference" by Andreas Arzt and Gerhard Widmer.
In general, algorithms for real-time music tracking directly use a symbolic representation of the score, or a synthesised version thereof, as a reference for the on-line alignment process. In this paper we present an alternative approach. First, different performances of the piece in question are collected and aligned (off-line) to the symbolic score. Then, multiple instances of the on-line tracking algorithm (each using a different performance as a reference) are used to follow the live performance, and their output is combined to come up with the current position in the score. As the evaluation shows, this strategy improves both the robustness and the precision, especially on pieces that are generally hard to track (e.g. pieces with extreme, abrupt tempo changes, or orchestral pieces with a high degree of polyphony). Finally, we describe a real-world application, where this music tracking algorithm was used to follow a world-famous orchestra in a concert hall in order to show synchronised visual content (the sheet music, explanatory text and videos) to members of the audience.
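The combination step described above can be sketched minimally as follows, assuming each tracker instance (each following a different reference performance) reports a score position and a confidence; the weighted-median rule and all names here are our illustration, not the authors' exact method:

```python
def combine_positions(estimates):
    """Fuse per-tracker score positions via a confidence-weighted median.

    estimates: list of (position_in_beats, confidence) pairs,
    one per tracker instance.
    """
    ranked = sorted(estimates)                # order by reported position
    total = sum(conf for _, conf in ranked)
    acc = 0.0
    for pos, conf in ranked:
        acc += conf
        if acc >= total / 2:                  # weighted median reached
            return pos
    return ranked[-1][0]

# Two trackers roughly agree; a low-confidence outlier is voted down.
fused = combine_positions([(42.1, 0.9), (41.8, 0.8), (55.0, 0.1)])
```

A median-style vote is one natural way to get the robustness the abstract reports: a single tracker that loses its place cannot drag the combined position estimate far off.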


(video presentation in Portuguese)

Speaker: Marcio Masaki Tomiyoshi and Roberto Piassi Passos Bodo
Date and time: Tuesday, September 20, 2016 - 16:00
Place: Antonio Gilioli Auditorium, IME/USP
Abstract: In this seminar, we will present the best student paper from ISMIR 2011 "Unsupervised learning of sparse features for scalable audio classification" by Mikael Henaff, Kevin Jarrett, Koray Kavukcuoglu and Yann LeCun.
This work presents a system that automatically learns features from audio in an unsupervised manner. The method first learns an overcomplete dictionary which can be used to sparsely decompose log-scaled spectrograms. It then trains an efficient encoder which quickly maps new inputs to approximations of their sparse representations using the learned dictionary. This avoids the expensive iterative procedures usually required to infer sparse codes. These sparse codes are then used as inputs for a linear Support Vector Machine (SVM). The system achieves 83.4% accuracy in predicting genres on the GTZAN dataset, which is competitive with current state-of-the-art approaches. Furthermore, the use of a simple linear classifier combined with a fast feature extraction system allows this approach to scale well to large datasets.
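The trained encoder maps an input frame to an approximate sparse code in a single pass, with no iterative optimization. A rough numpy sketch of that idea follows; the shapes, the random stand-in weights, and the soft-threshold nonlinearity are our assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(v, lam):
    """Shrinkage nonlinearity: pushes small coefficients to exactly zero."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Hypothetical shapes: 64-bin log-spectrogram frames, 256-atom dictionary.
n_bins, n_atoms = 64, 256
W = rng.standard_normal((n_atoms, n_bins)) * 0.1  # stand-in for trained encoder weights

def encode(frame, lam=0.5):
    """One-shot approximation of the sparse code (no iterative inference)."""
    return soft_threshold(W @ frame, lam)

code = encode(rng.standard_normal(n_bins))
# In the paper's pipeline, such sparse codes feed a linear SVM for genre labels.
```

The payoff is speed: a matrix multiply plus a pointwise shrinkage replaces per-input sparse-coding optimization, which is what lets a linear classifier on top scale to large datasets.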


(video presentation in Portuguese)

Speaker: Fábio Goródscy and Felipe Felix
Date and time: Thursday, September 15, 2016 - 18:00
Place: Auditório Antonio Gilioli, IME/USP
Abstract:

This seminar deals with the task of capturing repetitive structures in music recordings. The task is similar to audio thumbnailing, where the goal is to reduce the duration of an audio recording while keeping the information the application considers important.

We show examples of fitness matrices produced by a technique that captures repetitive structures, based on the precision and coverage of audio segments computed from self-similarity matrices.

This seminar is based on the ISMIR 2011 award-winning article "A Segment-Based Fitness Measure for Capturing Repetitive Structures of Music Recordings" by Meinard Müller, Peter Grosche, and Nanzhu Jiang.
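The self-similarity matrices the fitness measure builds on can be sketched as follows. Chroma features and cosine similarity are common choices in this line of work; the random input below is just a placeholder for real per-frame features:

```python
import numpy as np

def self_similarity(features, eps=1e-9):
    """Cosine self-similarity matrix: S[i, j] compares frames i and j.

    features: (n_dims, n_frames) array, e.g. one chroma vector per frame.
    Repeated sections appear as stripes parallel to the main diagonal.
    """
    norms = np.linalg.norm(features, axis=0, keepdims=True)
    X = features / np.maximum(norms, eps)   # unit-normalize each frame
    return X.T @ X

chroma = np.abs(np.random.default_rng(2).standard_normal((12, 100)))
S = self_similarity(chroma)
```

Segments whose induced stripes in S have both high precision and high coverage are the ones the fitness measure rewards, which is how it singles out the repetitive parts of the recording.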


(video presentation in Portuguese)
