Seminars

Past talks

Speakers: Arthur Tofani, Thilo Koch
Date and time: Tuesday, September 13, 2016 - 15:00
Place: Antonio Giglioli Auditorium, IME/USP
Abstract: In this seminar we will present the paper "On the Use of Sparse Time-Relative Auditory Codes for Music" by Pierre-Antoine Manzagol, Thierry Bertin-Mahieux, and Douglas Eck, presented at ISMIR 2008.
Many if not most audio features used in MIR research are inspired by work done in speech recognition and are variations on the spectrogram. Recently, much attention has been given to new representations of audio that are sparse and time-relative. These representations are efficient and able to avoid the time-frequency trade-off of a spectrogram. Yet little work with music streams has been conducted and these features remain mostly unused in the MIR community. In this paper we further explore the use of these features for musical signals. In particular, we investigate their use on realistic music examples (i.e. released commercial music) and their use as input features for supervised learning.
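As a rough illustration of the kind of representation the paper discusses (a sketch under stated assumptions, not the authors' implementation), the Python snippet below runs greedy matching pursuit with a small bank of gammatone-shaped kernels, producing a sparse, time-relative "spike" code of (kernel index, onset sample, amplitude) triples. The kernel parameters, sampling rate, and function names are all illustrative choices.

```python
# Sketch: sparse time-relative coding via matching pursuit.
# Parameters and kernel shapes are assumptions for illustration only.
import numpy as np

def gammatone(fc, sr=16000, dur=0.05, order=4, b=100.0):
    """A unit-norm gammatone-shaped kernel centered at frequency fc (Hz)."""
    t = np.arange(int(dur * sr)) / sr
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.linalg.norm(g)

def matching_pursuit(x, kernels, n_spikes=50):
    """Greedy sparse decomposition: at each step, subtract the kernel and
    time shift that best match the residual. Returns a list of spikes
    (kernel index, onset sample, amplitude) plus the final residual."""
    residual = x.astype(float).copy()
    spikes = []
    for _ in range(n_spikes):
        best = None
        for k, g in enumerate(kernels):
            corr = np.correlate(residual, g, mode='valid')  # all time shifts
            i = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[i]) > abs(best[2]):
                best = (k, i, corr[i])
        k, i, amp = best
        residual[i:i + len(kernels[k])] -= amp * kernels[k]
        spikes.append((k, i, amp))
    return spikes, residual

sr = 16000
kernels = [gammatone(fc, sr) for fc in (200, 400, 800, 1600, 3200)]
x = np.random.randn(sr)        # stand-in for one second of audio
spikes, _ = matching_pursuit(x, kernels)
print(spikes[:3])              # (kernel index, onset sample, amplitude)
```

Unlike a spectrogram frame, each spike is placed at an exact sample position, which is what makes the resulting code time-relative rather than tied to a fixed analysis grid.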


(video presentation in Portuguese)

Speaker: Dr. Carolina Brum Medeiros (Fliprl CEO, IDMIL/McGill, Google ATAP)
Date and time: Tuesday, September 6, 2016 - 15:00
Place: Room 132-A, IME/USP
Abstract: In the past decade, various consumer electronic devices have been launched as gestural controllers, several of which have been used for musical expression. Despite the variety of these devices, academic and industrial institutions keep putting effort into researching and developing new ones. Why? In this conversation, I'd like to open a discussion about the reasons why we are not satisfied with existing gestural controllers: natural human unsettledness? Consumerism and the market? Technological evolution, allowing for the creation of more efficient devices? The search for new forms of expression? Or maybe we are aiming to abstract away from established physical objects and structures? We will discuss and review some new gestural controllers, based on readings from the following authors: Marcelo Wanderley, Alva Noë, Ivan Poupyrev, Oliver Sacks, John Milton, and Ana Solodkin.


(video presentation in Portuguese)

Speaker: Ivan Eiji Simurra
Date and time: Wednesday, June 1, 2016 - 12:00
Place: CCSL Auditorium, IME/USP
Abstract: In this seminar we will present an overview of research relating sound perception to the verbal correlates used to describe instrumental timbre. In our presentation we will contrast three works by Asterios Zacharakis ("An Investigation of Musical Timbre", "An Interlanguage Study of Musical Timbre Semantic Dimensions and Their Acoustic Correlates", and "An Interlanguage Unification of Musical Timbre: Bridging Semantic, Perceptual and Acoustic Dimensions") with two works by Vinoo Alluri ("Effect of Enculturation on the Semantic and Acoustic Correlates of Polyphonic Timbre" and "Exploring Perceptual and Acoustical Correlates of Polyphonic Timbre"). Our goal is to highlight the characteristics of each study and how they can dialogue with our own research on timbre and emotion.


(video presentation in Portuguese)

Speaker: Thilo Koch
Date and time: Wednesday, May 18, 2016 - 12:00
Place: CCSL Auditorium, IME/USP
Abstract: In our daily lives we are exposed to many kinds of sound - traffic noise, people talking, crowds, music, etc. Consequently, awareness of audio quality and its perception is increasing. Although everyone has some individual understanding of what audio quality is, research aimed at quantifying perceived audio quality is a relatively new scientific field. Objective quantification of perceived audio quality is a complex topic involving a number of technical and scientific issues, from audio recording, signal processing, and room acoustics to statistics and experimental psychology. In this seminar we will give an introduction to audio quality evaluation and an overview of how experiments are planned and executed, and how their results are analyzed and interpreted.


(video presentation in Portuguese)

Speaker: Rodrigo Borges
Date and time: Wednesday, May 4, 2016 - 12:00
Place: CCSL Auditorium, IME/USP
Abstract:

Music Recommender Systems are computational techniques for suggesting music to a specific user according to their personal interests. They operate on large collections of music files and, depending on the information provided as input, may apply Collaborative Filtering, Context-Based, or Content-Based approaches.

Collaborative Filtering makes recommendations to the current user based on items that other users with similar tastes liked in the past. Contextual Music Recommendation refers to the situation of the user when listening to recommended tracks (e.g., time, mood, current activity, the presence of other people). Music content can be understood as musical features computed directly from audio, or as semantic descriptors inferred or predicted by machine learning techniques.
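As a minimal sketch of the first approach (an illustration for this summary, not the speaker's implementation), the Python snippet below performs user-based Collaborative Filtering on a toy play-count matrix: it ranks the tracks a target user has not played by the weighted play counts of the k most similar users, with cosine similarity as the taste measure. The matrix, function name, and parameter values are all assumed for the example.

```python
# Sketch: user-based collaborative filtering on an assumed play-count matrix.
import numpy as np

def recommend(plays, user, k=2, n=2):
    """plays: (n_users, n_tracks) play-count matrix.
    Returns indices of the top-n tracks the user has not played,
    scored by the k most similar users (cosine similarity)."""
    norms = np.linalg.norm(plays, axis=1, keepdims=True)
    unit = plays / np.maximum(norms, 1e-12)   # row-normalize for cosine
    sims = unit @ unit[user]                  # similarity to every user
    sims[user] = -np.inf                      # exclude the user themselves
    neighbors = np.argsort(sims)[-k:]         # k nearest-taste users
    scores = sims[neighbors] @ plays[neighbors]   # similarity-weighted counts
    scores[plays[user] > 0] = -np.inf         # keep only unheard tracks
    return np.argsort(scores)[::-1][:n]

# Toy example: 4 users x 5 tracks of play counts.
plays = np.array([[5, 0, 3, 0, 1],
                  [4, 0, 4, 1, 0],
                  [0, 5, 0, 4, 0],
                  [1, 4, 0, 5, 2]], float)
print(recommend(plays, user=0, n=2))
```

Context-based and content-based approaches would replace the play-count matrix with contextual signals or audio-derived features, respectively, while the ranking step stays conceptually similar.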

Unlike recommender systems for other domains, such as books, movies, or news, music recommender systems have specific characteristics: they allow recommendation of repeated items, and each item has a comparatively short consumption time. This leads us to differentiate between parallel (album) and serial (playlist) recommendation.

Finally, preliminary feature extraction results are presented, computed from a temporary database of popular Brazilian music.


(video presentation in Portuguese)
