Seminars

2022

  • Diversity by Design in Music Recommender Systems

    Music Recommender Systems (Music RS) are nowadays pivotal in shaping the listening experience of people all around the world. Thanks also to their widespread use in music streaming services, it has been possible to enhance several characteristics of such systems in terms of performance, design, and user experience. Nonetheless, imagining Music RS only from an application-driven perspective may generate an incomplete view of how this technology is affecting people’s habitus, from decision-making processes to the formation of musical taste and opinions.

    Presenter: MSc Lorenzo Porcaro

    When: March 29th, 2022

    Where: https://meet.jit.si/CompmusSeminário (only in case of technical difficulties, we will use https://meet.google.com/vhb-wfxh-pwt as an alternative address)

2021

  • Quantification of uncertainty in stochastic models and its impacts

    The analysis of a dynamic system and its modeling depend on several factors that affect its performance. These models can be defined as the mathematical idealization of the physical processes that govern their evolution. This requires defining basic variables such as the description of the system geometry, loading, material properties, external events, response variables in displacement, strain, and stress, and the relations among these various quantities. In this scenario, the questions are: How does the uncertainty of the system impact its dynamic response? What is the physical meaning? How can we model uncertainty in dynamic systems? Do we 'know' the source of the uncertainties? How can we efficiently quantify the uncertainty in the dynamic response?

    Presenter: Profa. Dra. Marcela Rodrigues Machado (UnB)

    When: November 30th, 2021

    Where: https://meet.jit.si/CompmusSeminário (in case of technical difficulties: https://meet.google.com/vhb-wfxh-pwt)
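
    The questions above invite a hands-on illustration. The following is a minimal Monte Carlo sketch, not the presenter's method: the single-degree-of-freedom system, the parameter values, and the 10% stiffness uncertainty are all assumptions made up for illustration.

```python
import numpy as np

# Single-degree-of-freedom oscillator with uncertain stiffness k.
# Monte Carlo: sample k, propagate through wn = sqrt(k/m).
rng = np.random.default_rng(0)
m = 1.0                       # mass [kg] (assumed)
k_mean, k_cov = 1000.0, 0.10  # stiffness mean [N/m], 10% coefficient of variation (assumed)
k = rng.normal(k_mean, k_cov * k_mean, 100_000)
k = k[k > 0]                  # discard non-physical samples
wn = np.sqrt(k / m)           # natural frequency [rad/s] for each sample

# the input uncertainty maps nonlinearly onto the response statistics
print(f"mean wn = {wn.mean():.2f} rad/s, std = {wn.std():.2f} rad/s")
```

    Even this toy case shows the point of the seminar's questions: the spread of the stiffness does not translate one-to-one into the spread of the dynamic response.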

  • Misfits and ruptures as processes of artistic creation: an invitation for collaborative and experimental practices

    Through a personal narrative and examples from my own artistic work, I introduce some reflections on broad and complex questions involving contemporary creation.

    Presenter: Vitor Kisil (NuSom/USP and FIAP)

    When: October 19th, 2021

    Where: https://meet.jit.si/CompmusSeminário (only in case of technical difficulties, we will use https://meet.google.com/vhb-wfxh-pwt as an alternative address)

  • Virtual prototyping of musical instruments: development of a decision-support platform for string instrument manufacturing

    The manufacturing of musical instruments involves several design demands, related both to the instrument's structural capacity and to the desired sound and aesthetic attributes. Although technologies to assist these projects have been available for at least two decades, much of what has been done is the empirical reproduction of consolidated models: a process that makes innovation difficult because it is often based on trial-and-error methods.

    Presenter: Dr. Guilherme Paiva, Dr. Rodolfo Thomazelli and Guilherme Fontebasso

    When: October 5th, 2021

    Where: https://meet.jit.si/CompmusSeminário (only in case of technical difficulties, we will use https://meet.google.com/vhb-wfxh-pwt as an alternative address)

  • From Timbre Perception to the Creative Exploration of Musical Instrument Sound Morphing (MORPH)

    The MORPH project proposes to use sound morphing to investigate timbre in traditional musical instruments. Timbre is a complex multidimensional perceptual phenomenon considered to be one of the last frontiers of auditory science. Sound morphing is a transformation that gradually blurs the categorical distinction between sounds by blending their sensory attributes.

    Presenter: Dr. Marcelo Caetano

    When: September 21st, 2021

    Where: https://meet.jit.si/CompmusSeminário (only in case of technical difficulties, we will use https://meet.google.com/vhb-wfxh-pwt as an alternative address)
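
    A deliberately crude sketch of the morphing idea, not the MORPH project's technique: interpolating the magnitude spectra of two sounds. The test frequencies and the 50% morph factor are arbitrary choices for illustration.

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs              # one second of audio
a = np.sin(2 * np.pi * 220 * t)     # source sound
b = np.sin(2 * np.pi * 330 * t)     # target sound

A, B = np.fft.rfft(a), np.fft.rfft(b)
alpha = 0.5                          # morph factor: 0 -> all a, 1 -> all b
mag = (1 - alpha) * np.abs(A) + alpha * np.abs(B)  # blend magnitude spectra
phase = np.angle(A)                  # crudely reuse one phase spectrum
y = np.fft.irfft(mag * np.exp(1j * phase), n=len(a))

# the hybrid contains energy at both 220 Hz and 330 Hz
```

    Real morphing systems blend perceptually meaningful attributes (spectral envelope, attack, inharmonicity) rather than raw bin magnitudes; this sketch only conveys the "gradual blend" intuition.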

  • Quantifying Disruptive Influence

    Understanding how influences shape musical creation provides rich insight into cultural trends. As such, there have been several efforts to create quantitative complex network methods that support the analysis of influence networks among songs/artists in a music corpus. We contribute to this body of work by examining how disruption happens in different music corpora. That is, we leverage methods devised to study disruption in Science and Technology and apply them to the context of music creation.

    Presenter: Prof. Dr. Flávio Figueiredo (UFMG)

    When: August 24th, 2021

    Where: https://meet.jit.si/CompmusSeminário (only in case of technical difficulties, we will use https://meet.google.com/vhb-wfxh-pwt as an alternative address)

  • Simulation and cloud optimization of the acoustic response of rooms

    This seminar will present the development of XuriAPP, an application designed to guide optimized acoustic treatment, especially for small rooms and at low frequencies.

    Presenter: Rodolfo Thomazelli, Guilherme Fontebasso, Lucas Egidio Neves, Victor Oliveira (Turbonius P&D) and Guilherme Paiva (post-doc at IME-USP and Turbonius P&D)

    When: July 13th, 2021

    Where: https://meet.jit.si/CompmusSeminário

2020

  • High-Definition Time-Frequency Representations for Music Information Retrieval

    Time-frequency representations (TFRs) are among the most valuable tools in digital audio processing, used in many applications. One way to compute high-resolution TFRs is to combine TFRs of different resolutions so as to preserve the best aspects of each. This is the general idea underlying all the methods proposed in this seminar.

    Presenter: Dr. Maurício do V. M. da Costa (COPPE/UFRJ)

    When: June 23rd, 2020

    Where: https://meet.jit.si/CompmusSeminário

  • The physics of the viola caipira

    Live transmission @ https://meet.jit.si/CompmusSeminário. The viola caipira is a Brazilian guitar widely used in popular music that has been little studied in musical acoustics. It generally consists of ten metallic strings arranged in five pairs, tuned in unison or in octaves. In this seminar, the author presents some results obtained in his thesis research, which focused on the analysis and synthesis of musical sounds produced by the viola caipira.

    Presenter: Dr. Guilherme Paiva

    When: June 16th, 2020

    Where: https://meet.jit.si/CompmusSeminário

2019

  • Adaptive multi-resolution analysis of audio signals

    Spectral representations of temporal signals are widely used in several areas, including engineering, mathematics, and computer science. For this, the STFT is commonly used, which employs equally spaced frequency components of constant resolution. In order to minimize the trade-off between time and frequency resolution, multi-resolution techniques utilize frequency components with variable resolution and spacing.

    Presenter: Nicolas Figueiredo

    When: April 10th, 2019

    Where: Auditório do CCSL/IME
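
    The trade-off described above can be demonstrated with a plain NumPy sketch (window sizes are arbitrary): a short window yields fine time resolution but coarse frequency bins, a long window the opposite; multi-resolution methods combine such analyses.

```python
import numpy as np

def stft_mag(x, win, hop):
    """Magnitude STFT with a Hann window: frames x frequency bins."""
    frames = [x[i:i + win] * np.hanning(win)
              for i in range(0, len(x) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

fs = 8000
x = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # test tone

S_short = stft_mag(x, 256, 128)    # bin spacing fs/256  = 31.25 Hz, many frames
S_long  = stft_mag(x, 2048, 128)   # bin spacing fs/2048 ~ 3.91 Hz, fewer frames

print(S_short.shape, S_long.shape)
```

    A multi-resolution analysis would, for example, keep the long-window bins around stable partials and the short-window frames around transients.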

2018

  • A recursive Bayesian algorithm for detection of changepoints in unidimensional signals

    The problem of detecting changepoints in time series has been studied since at least the 1950s, and has applications in several areas. In this talk we present a brief historical survey of the problem and solutions proposed in the literature. We then propose a recursive algorithm for audio segmentation based on the search of changepoints in the total signal power. The algorithm uses a fully-Bayesian hypothesis test as stopping condition, and has worst-case complexity O(n log n); the operating characteristics of the algorithm can be effectively adjusted based on a single free parameter.

    Presenter: Dr. Paulo Hubert, Lab. for Acoustics and the Environment, EPUSP

    When: October 30th, 2018

    Where: CCSL Auditorium, IME/USP
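
    To give a flavor of changepoint detection on signal power, here is a much simpler stand-in: plain binary segmentation with an ad-hoc threshold, not the talk's fully-Bayesian stopping test or its O(n log n) implementation.

```python
import numpy as np

def changepoints(power, thresh=1.0, min_size=20):
    """Recursive binary segmentation on a power sequence.
    Split where the mean power differs most between the two sides;
    stop when the best difference falls below `thresh` (an ad-hoc
    stand-in for a principled stopping criterion)."""
    found = []

    def split(lo, hi):
        if hi - lo < 2 * min_size:
            return
        best, best_gap = None, thresh
        for k in range(lo + min_size, hi - min_size):
            gap = abs(power[lo:k].mean() - power[k:hi].mean())
            if gap > best_gap:
                best, best_gap = k, gap
        if best is not None:
            found.append(best)
            split(lo, best)
            split(best, hi)

    split(0, len(power))
    return sorted(found)

# synthetic example: power jumps from 1 to 5 at sample 100
power = np.array([1.0] * 100 + [5.0] * 100)
print(changepoints(power))   # detects the jump near sample 100
```

    This naive search is quadratic per level; the appeal of the presented algorithm is precisely its O(n log n) worst case and the single tunable parameter of its Bayesian test.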

  • Vibration Control Applied to Prepared Musical Instruments

    This seminar will present a broad view of actuated musical instruments with vibration control systems, aiming to increase acoustic and musical possibilities on a musical instrument. In particular, it will focus on presenting the challenges of using a Feedback control system with the intention of controlling resonances, partials and decays of an acoustic instrument in real time.

    Presenter: Paulo Vitor Itaboraí

    When: August 30th, 2018

    Where: CCSL Auditorium, IME/USP

  • Sensory Dissonance: models, historical context and musical applications

    In this seminar we present an algorithm for sensory dissonance modeling through a critical band filter, based on the work of William Sethares and John Pierce. We begin with a historical review of the term dissonance, aiming to define what we understand by sensory dissonance, followed by an explanation of the proposed models, from Helmholtz to Vassilakis. We will also present a detailed study of a spectrum generated by the proposed algorithm and evaluate the limitations it imposes. Finally, we will present musical examples of the use of these techniques in creative processes.

    Presenter: Guilherme Feulo and Micael Antunes

    When: June 25th, 2018

    Where: Auditório Antonio Gilioli - Bloco A
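
    A minimal sketch of the Sethares-style dissonance curve discussed above. The parameter values follow the commonly cited parameterization of the Plomp–Levelt data after Sethares; the test intervals are arbitrary, and this is not the seminar's own critical-band-filter algorithm.

```python
import numpy as np

def pl_dissonance(f1, f2, a1=1.0, a2=1.0):
    """Pairwise sensory dissonance of two partials (Plomp-Levelt
    curve, Sethares-style parameterization)."""
    fmin = min(f1, f2)
    s = 0.24 / (0.021 * fmin + 19.0)   # scales the curve with register
    x = s * abs(f2 - f1)
    return a1 * a2 * (np.exp(-3.5 * x) - np.exp(-5.75 * x))

def total_dissonance(freqs, amps):
    """Total dissonance of a spectrum: sum over all partial pairs."""
    d = 0.0
    for i in range(len(freqs)):
        for j in range(i + 1, len(freqs)):
            d += pl_dissonance(freqs[i], freqs[j], amps[i], amps[j])
    return d

# a minor second (440 vs ~466 Hz) is far rougher than a fifth (440 vs 660 Hz)
print(pl_dissonance(440, 466), pl_dissonance(440, 660))
```

    Sweeping the second frequency while summing over harmonic partials reproduces the familiar dissonance curves with valleys at simple ratios.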

  • Balancing Exploration and Exploitation as a strategy for enhancing music recommendation systems

    Music recommender systems typically use historical listening information to make personalized recommendations. This approach, however, keeps highly rated songs as the best candidates in a greedy manner. We present a strategy for balancing safe (exploitation) and novel (exploration) recommendations in order to prevent suboptimal performance over the long term. The proposed solution is based on a reinforcement learning problem called the multi-armed bandit, which models a situation where someone is playing several slot machines and needs to optimize their gains.

    Presenter: Rodrigo Borges

    When: June 18th, 2018

    Where: Antonio Gilioli Auditorium, IME/USP
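
    The exploration/exploitation balance can be sketched with the simplest bandit policy, epsilon-greedy; the talk's strategy may differ, and the three "songs" and their reward rates below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
true_rates = [0.2, 0.5, 0.8]   # hypothetical probability each song gets a positive rating
n_arms = len(true_rates)
counts = np.zeros(n_arms)      # recommendations per song
values = np.zeros(n_arms)      # running mean reward per song
eps = 0.1                      # exploration probability

for _ in range(5000):
    if rng.random() < eps:
        arm = int(rng.integers(n_arms))   # explore: recommend a random song
    else:
        arm = int(np.argmax(values))      # exploit: recommend the best-rated song
    reward = float(rng.random() < true_rates[arm])
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean

# the policy converges on the song with the highest true rate,
# while still occasionally trying the others
```

    A purely greedy policy could lock onto a mediocre song forever; the small epsilon guarantees every song keeps being sampled, which is the point of the seminar's argument.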

  • Audio-to-Midi Similarity For Music Retrieval

    We present strategies for processing audio for query-by-humming, with the goal of matching melodies from MIDI files to the melody hummed in audio files. A query-by-humming application provides an interface where a user hums a melody as he or she remembers it, and the application returns melodies from a MIDI repertoire that have some degree of similarity, depending on what the user expects to be similar, so that the user can discover more information about the hummed melody.

    Presenter: Fábio Goródscy

    When: June 4th, 2018

    Where: Auditório Antonio Gilioli, IME/USP

  • A brief intro to neural networks and convolutional neural networks in computer music

    Neural networks are extremely popular machine learning models today. In this seminar we will give a brief historical overview of the origins of the model, discuss the problems that led to its abandonment for years, point out the solutions, and then discuss today's various architectures with a focus on Convolutional Neural Networks.

    Presenter: Marcos Leal (mleal@ime.usp.br)

    When: May 24th, 2018

    Where: Antonio Gilioli Auditorium, IME/USP

  • Measuring musical popularity using signals from several sources

    Popularity indices are instruments traditionally used in the music industry to compare artists. Despite their wide acceptance, little is known about the methodology behind such indices, which at times makes them difficult to understand and/or criticize.

    Presenter: Giuliano Mega and Daniel Cukier (Playax)

    When: May 21st, 2018

    Where: Antonio Gilioli Auditorium, IME/USP

  • Physical modelling for audio synthesis: a general picture

    Since 1957, computers have been used for synthesizing and processing audio signals. Some of the first techniques used were additive synthesis, AM/FM synthesis, and subtractive synthesis. An alternative to these techniques is physical modeling, which uses mathematical descriptions of sound waves and of physical components such as strings, tubes, and membranes to create musical signals. This seminar will present the main techniques for physical modeling of instruments, with special attention to waveguides, lumped models, and state-space models.

    Presenter: Nicolas Figueiredo

    When: May 7th, 2018

    Where: Auditório Gilioli - Bloco A
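
    As a concrete taste of the waveguide family mentioned above, here is the classic Karplus–Strong plucked-string model, a textbook example rather than code from the talk:

```python
import numpy as np

def karplus_strong(f0, fs=44100, dur=1.0, seed=0):
    """Karplus-Strong: a noise burst circulating in a delay line
    with a two-point averaging (lowpass) filter in the loop."""
    rng = np.random.default_rng(seed)
    N = int(round(fs / f0))            # delay-line length sets the pitch
    buf = rng.uniform(-1.0, 1.0, N)    # "pluck": fill the string with noise
    out = np.empty(int(fs * dur))
    ptr = 0
    for i in range(len(out)):
        out[i] = buf[ptr]
        # lowpass in the feedback loop: high harmonics decay fastest,
        # mimicking the damping of a real string
        buf[ptr] = 0.5 * (buf[ptr] + buf[(ptr + 1) % N])
        ptr = (ptr + 1) % N
    return out

y = karplus_strong(220.0, fs=22050, dur=1.0)   # a plucked A3-ish tone
```

    The delay line is the simplest digital waveguide: it models the round trip of a wave along the string, and the loop filter models the losses.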

  • Singing Voice Detection in polyphonic audio signals

    Humans can easily identify portions of singing voice in an audio recording containing a mixture of sound sources. However, identifying such segments computationally is not a trivial task. This seminar will present the fundamentals of the problem of singing voice detection in polyphonic audio signals, a brief description of the techniques used to solve it, and its applications in other tasks of music information retrieval (MIR). Finally, some challenges regarding performance improvement in the automatic detection of segments with singing voice will be highlighted.

    Presenter: Shayenne da Luz Moura

    When: April 23rd, 2018

    Where: Auditório Antonio Gilioli - Bloco A

2017

  • Musical Pattern Discovery: Musicological, Cognitive and Computational Perspectives

    The emergence of musical patterns via repetition/similarity is paramount in making sense of and understanding music. Yet, despite the efforts made towards its systematic description, musical similarity remains an elusive concept, resisting robust formalisation. Why does the introduction of well-established, powerful pattern-matching techniques (exact or approximate) into the musical domain usually end up with rather limited, partial, or fragmentary results? Why is it so difficult to create a general model of musical similarity that may capture musically and cognitively plausible patterns?

    Presenter: Emilios Cambouropoulos, Aristotle University of Thessaloniki (Greece)

    When: September 1st, 2017

    Where: Jacy Monteiro Auditorium, IME/USP

  • The relationship between ancillary gestures and the organization of musical phrases

    Musicians consciously manipulate different acoustic parameters in order to express their musical ideas to listeners and to other musicians. They also perform physical movements that, while essential for producing sound on the musical instrument, are closely related to the performers' artistic intentions.

    Presenter: Thais Fernandes Rodrigues Santos, UFMG

    When: August 2nd, 2017

    Where: CCSL Auditorium

  • Musical Genre Classification Using Global Audio Features and Traditional Classifiers

    Despite the complexity of assigning genres to songs, automatic musical genre classification of large databases is of great importance for Music Information Retrieval tasks. Historically, several machine learning techniques have been applied using features extracted from audio files, scores, and even song lyrics. In this talk we will present the results of experiments performed on the GTZAN dataset, using global audio features (extracted with the LibROSA library) and traditional classification algorithms (implemented in scikit-learn).

    Presenter: Roberto Piassi Passos Bodo

    When: March 28th, 2017

    Where: Auditório do CCSL

2016

  • Audio effects based on AM/FM decomposition

    (Abstract available in Portuguese only)

    Presenter: Antonio Goulart

    When: December 13th, 2016

    Where: Room B-7

  • A simple probabilistic model for tagging music

    This seminar presents the ISMIR 2009 Best Student Paper, "Easy As CBA: A Simple Probabilistic Model for Tagging Music" by Matthew D. Hoffman, David M. Blei, and Perry R. Cook. Many songs are not associated with representative tags, which makes retrieving songs by tag inefficient. We present a probabilistic model that learns to automatically predict which words apply to a song from its timbre features. The method is simple to implement, easy to test, and returns results quickly.

    Presenter: Rodrigo Borges and Shayenne Moura

    When: September 29th, 2016

    Where: Auditório Antonio Gilioli, IME/USP

  • Score Analyser: Automatically determining score difficulty level for instrumental e-learning

    Nowadays there are many scores freely available on the internet. However, beginners may find it difficult to choose a score appropriate to their level on the instrument.

    Presenter: Guilherme Feulo and Guilherme Jun

    When: September 27th, 2016

    Where: Auditório do CCSL

  • Real-time music tracking using multiple performances as a reference

    In this seminar, we will present the best paper from the 2015 International Society for Music Information Retrieval Conference (ISMIR 2015), "Real-time music tracking using multiple performances as a reference" by Andreas Arzt and Gerhard Widmer.

    Presenter: Alessandro Palmeira and Itai Soares

    When: September 22nd, 2016

    Where: IME/USP

  • Unsupervised learning of sparse features for scalable audio classification

    In this seminar, we will present the ISMIR 2011 best student paper, "Unsupervised learning of sparse features for scalable audio classification" by Mikael Henaff, Kevin Jarrett, Koray Kavukcuoglu, and Yann LeCun. This work presents a system that automatically learns features from audio in an unsupervised manner. The method first learns an overcomplete dictionary which can be used to sparsely decompose log-scaled spectrograms. It then trains an efficient encoder which quickly maps new inputs to approximations of their sparse representations using the learned dictionary.

    Presenter: Marcio Masaki Tomiyoshi and Roberto Piassi Passos Bodo

    When: September 20th, 2016

    Where: Antonio Gilioli Auditorium, IME/USP

  • A segment-based fitness measure for capturing repetitive structures of music recordings

    We address the task of capturing repetitive structures of music recordings. This is similar to audio thumbnailing, where the goal is to reduce the duration of audio recordings while keeping the important information defined by the application. We show examples of fitness matrices using a technique that captures repetitive structures, based on the precision and coverage of segments of the audio recording computed from self-similarity matrices.

    Presenter: Fábio Goródscy and Felipe Felix

    When: September 15th, 2016

    Where: Auditório Antonio Gilioli, IME/USP
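
    The self-similarity matrix at the heart of such techniques can be sketched in a few lines, with toy random features standing in for real audio descriptors:

```python
import numpy as np

# toy feature sequence with structure A-B-A (e.g., chroma-like frames)
rng = np.random.default_rng(1)
A = rng.random((10, 12))
B = rng.random((10, 12))
X = np.vstack([A, B, A])               # 30 frames x 12 features

Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
S = Xn @ Xn.T                          # cosine self-similarity matrix

# the repeated A section appears as a high-similarity stripe at lag 20:
stripe = np.diag(S, k=20)              # compares frame i with frame i+20
```

    Precision- and coverage-based fitness scores are then computed over such stripes to decide which segment best summarizes the recording.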

  • On the Use of Sparse Time-Relative Auditory Codes for Music

    In this seminar we will present a paper by Pierre-Antoine Manzagol, Thierry Bertin-Mahieux, and Douglas Eck, presented at ISMIR 2008.

    Presenter: Arthur Tofani, Thilo Koch

    When: September 13th, 2016

    Where: Auditório Antonio Gilioli, IME/USP

  • Yet another gesture controller?

    In the past decade, various consumer electronic devices have been launched as gestural controllers, several of which have been used for musical expression. Despite the variety of these devices, academic and industrial institutions keep researching and developing new devices every so often. Why? In this conversation, I’d like to raise a discussion about the reasons why we are not satisfied with the existing gestural controllers: natural human unsettledness? Consumerism and the market? Technological evolution, allowing for the creation of more efficient devices?

    Presenter: Dra. Carolina Brum Medeiros (Fliprl CEO, IDMIL/McGill, Google ATAP)

    When: September 6th, 2016

    Where: Room 132-A, IME/USP

  • Verbal correlates for timbre perception: a research on orchestral sonorities

    In this seminar we will present an overview of research relating sound perception to the verbal correlates used to describe instrumental timbres.

    Presenter: Ivan Eiji Simurra

    When: June 1st, 2016

    Where: Auditório do CCSL, IME/USP

  • How to measure audio quality?

    In our daily lives we are exposed to many kinds of sound: traffic noise, people talking, crowds, music, etc. Consequently, awareness of quality and its perception is increasing. Although everyone has some individual understanding of what audio quality is about, research aimed at quantifying perceived audio quality is a relatively new scientific field.

    Presenter: Thilo Koch

    When: May 18th, 2016

    Where: CCSL Auditorium, IME/USP

  • Music Recommender Systems

    Music Recommender Systems are computational techniques for suggesting music to a specific user according to their personal interests. They operate over large collections of music files and, depending on the information provided as input, may apply collaborative filtering, context-based, or content-based approaches.

    Presenter: Rodrigo Borges

    When: May 4th, 2016

    Where: Auditório do CCSL, IME/USP
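
    A minimal sketch of the collaborative filtering approach mentioned above. The play-count matrix is invented for illustration, and real systems work at vastly larger scale with more careful normalization.

```python
import numpy as np

# toy user-item play-count matrix (3 users x 4 tracks), hypothetical data
R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 5.0, 4.0]])

# item-item cosine similarity (collaborative filtering)
norms = np.linalg.norm(R, axis=0)
S = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(S, 0.0)               # an item should not recommend itself

# score tracks for user 0 by similarity-weighted play counts,
# then keep only tracks the user has not heard yet
user = R[0]
scores = S @ user
scores[user > 0] = -np.inf
best = int(np.argmax(scores))          # index of the recommended track
```

    Content-based variants replace the play-count columns with audio feature vectors; the similarity-and-ranking machinery stays the same.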

  • Introducing Web Audio API

    A tutorial about the Web Audio API. We present simple examples explaining how the Web Audio API works and how its structure is defined. You can see the examples here: https://github.com/fabiogoro/webaudio During the presentation we show:

    Presenter: Fábio Goródscy

    When: April 6th, 2016

    Where: Auditório do CCSL

  • Fundamental frequency tracking of audio signals using Phase-Locked Loops

    In this seminar we will present Phase-Locked Loop (PLL) systems, widely used for the demodulation of frequency-modulated signals. The basic idea behind the technique will be shown, along with recently proposed adaptations for fundamental frequency tracking of audio signals. The efficiency of the system will be confirmed with audio examples, and audio effects based on the fundamental frequency of the input signal will also be discussed.

    Presenter: Antonio José Homsi Goulart

    When: March 30th, 2016

    Where: Auditório do CCSL

  • A Tutorial on EEG Signal-processing

    This seminar is based on the article "A Tutorial on EEG Signal-processing: techniques for mental-state recognition in Brain-computer Interfaces" by Fabien Lotte, a chapter of the book "Guide to Brain-computer Music Interfacing", edited by Eduardo R. Miranda and Julien Castet. The main stages in the development of brain-computer interfaces through electroencephalography will be presented, with a description of the most used techniques for feature extraction and feature classification.

    Presenter: Guilherme Feulo do Espirito Santo

    When: March 16th, 2016

    Where: Auditório do CCSL

  • Collaborative experiences between Computer Music and other areas

    In this talk we will present several research projects developed in collaboration between Computer Music and professionals from other areas such as Music, Engineering, Physics, and Medicine. The discussion will focus on the contributions that the Computer Music area has made to these research projects and researchers, and on the interaction among professionals who had to deal with different technical languages, methods, and scientific approaches when facing the problems to be solved.

    Presenter: Antonio Deusany de Carvalho Junior, ScD candidate at IME/USP

    When: March 9th, 2016

    Where: Auditório do CCSL, IME/USP

  • Noise and Timbre: Everyday listening, Synthesized sounds and Noise Music

    This study proposes a comparative analysis of samples related to everyday listening, synthesized sounds, and excerpts of contemporary Japanese music of the Noise Music genre.

    Presenter: Rodrigo Borges

    When: March 2nd, 2016

    Where: CCSL Auditorium, IME/USP

2015

  • Use of Audio descriptors for music composition and orchestration

    Within the research line in Creative Processes focused on music composition and computer-aided orchestration, this work describes an investigation of the instrumental combination process in a computing environment, using audio descriptors in Pure Data (PD) and the PDescriptors library to analyze the sound characteristics of a database of audio files featuring various musical instruments and extended techniques.

    Presenter: Ivan Simurra

    When: December 2nd, 2015

    Where: CCSL Auditorium, IME/USP

  • Audio processing with adaptive computational costs

    No English abstract available.

    Presenter: Thilo Koch

    When: November 18th, 2015

    Where: Auditório do CCSL, IME/USP

  • A system for interactive media and digital immersion using audio features and genetic algorithms

    In the context of the study and development of computational systems for composition and interactive performance with multimodal content, this talk presents ongoing research aimed at implementing a computational infrastructure for an Interactive Media and Digital Immersion Lab at NICS/Unicamp. The system, under development at the Center of Autonomous Systems and Neurorobotics (NRAS) of the Universitat Pompeu Fabra (Barcelona, Spain), generates, controls, and shares over the Internet a process for multimodal artistic creation.

    Presenter: Prof. Dr. Jônatas Manzolli (NICS/UNICAMP)

    When: November 11th, 2015

    Where: CCSL Auditorium, IME/USP

  • The ODA Middleware and its application in the production of digital games

    No English abstract available.

    Presenter: Lucas Dário

    When: November 4th, 2015

    Where: Auditório do CCSL, IME/USP

  • The oscillator waveform animation effect

    An enhancing effect that can be applied to analogue oscillators in subtractive synthesizers is termed Animation: an efficient way to create the sound of many closely detuned oscillators playing in unison. This is often referred to as a supersaw oscillator. In this seminar we will explain the operating principle of this effect in the digital domain using a combination of additive and frequency modulation synthesis. The Fourier series will be derived and results will be presented to demonstrate its accuracy.

    Presenter: Joseph Timoney, National University of Ireland - Maynooth

    When: October 21st, 2015

    Where: CCSL Auditorium, IME/USP
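
    The additive-synthesis side of the effect can be sketched as follows (the detune amounts and voice count are arbitrary; the talk's Fourier-series derivation and FM formulation are not reproduced here):

```python
import numpy as np

fs = 44100
t = np.arange(int(0.5 * fs)) / fs      # half a second of audio
f0 = 110.0                             # base pitch [Hz]

def saw(f, t, n_harm):
    """Bandlimited sawtooth via additive synthesis (sum of harmonics)."""
    out = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        out += np.sin(2 * np.pi * k * f * t) / k
    return (2 / np.pi) * out

# several slightly detuned copies played in unison -> "supersaw"
detune = [-0.011, -0.005, 0.0, 0.006, 0.012]        # fractional detune per voice
n_harm = int((fs / 2) // (f0 * (1 + max(detune)))) # keep partials below Nyquist
mix = sum(saw(f0 * (1 + d), t, n_harm) for d in detune) / len(detune)
```

    The slow beating between the detuned voices is what gives the characteristic animated, chorus-like thickness.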

  • DIY EEG: Do It Yourself Brain-Computer Interface Projects

    With recent technological advances, the search for new ways to interact with electronic devices such as computers, video game consoles, or smartphones has been creating new interfaces with a range of applications and low costs. In this seminar we will present some "Do It Yourself" (DIY) brain-computer interfaces through electroencephalography (EEG), their basic design, and the variations that can be found on the Internet.

    Presenter: Guilherme Feulo

    When: October 14th, 2015

    Where: Auditório do CCSL, IME/USP

  • Computational techniques for the automated musical accompaniment problem

    In this seminar we will present the algorithms and the results obtained so far with techniques addressing the central problems of automated musical accompaniment: score following and accompaniment generation. In addition, we will present the MetaMatcher, a component created to combine all the implemented tracking techniques by running them in parallel, yielding greater reliability in the information extracted from the performance.

    Presenter: Roberto Piassi Passos Bodo

    When: September 23rd, 2015

    Where: CCSL Auditorium, IME/USP

  • Musical performance with audience participation using Cloud Services

    Technological restrictions and interaction settings affect audience participation in musical performances. It can be very expensive to provide devices to every participant in a large audience, and participants may avoid an interactive performance that is restricted to a specific technology, such as one that requires iOS rather than Android.

    Presenter: Antonio Deusany de Carvalho Junior

    When: September 16th, 2015

    Where: CCSL Auditorium, IME/USP

  • Networked Collaboration and Communication in Live Coding

    Live coding is a highly interactive music performance in which one or more musicians write code on the fly to generate music. In live coding settings, two issues often emerge: (1) the need for communication, and (2) technical support to enable networked collaboration among and within live coders, audience, and musicians. The author presents a series of projects that not only facilitate communication and networked collaboration in live coding but also yield a variety of novel distributed music performances that blur the boundaries of live coding.

    Presenter: Sang Won Lee, University of Michigan/USA

    When: September 2nd, 2015

    Where: Auditório do CCSL, IME/USP

  • Techniques for audio synthesis and effects based on AM/FM decomposition

    In this talk we will present various techniques for the processing of signals resulted from an AM/FM decomposition of a musical signal. Different methods for the decomposition and possibilities for operation on the AM/FM domain will be discussed.

    Presenter: Antonio José Homsi Goulart

    When: August 26th, 2015

    Where: Auditório do CCSL, IME/USP

  • SuperCopair: a tool for collaborative and cooperative live coding

    In this talk we present the SuperCopair package, a new way to integrate cloud computing into a collaborative live coding scenario with minimal setup effort. The package, written in CoffeeScript for Atom.io, is designed to interact with SuperCollider and gives crowds of online live coders opportunities to collaborate remotely on distributed performances. Additionally, the package provides the advantages of the cloud services offered by Pusher.

    Presenter: Antonio Deusany de Carvalho Junior

    When: August 19th, 2015

    Where: Auditório do CCSL, IME/USP

  • Open Dynamic Audio: a middleware for dynamic audio in digital games

    In games, a common strategy to provide a more engaging player experience is to allow the game's soundtrack to react, in real time, to the happenings inside the virtual universe, since this provides complementary interaction for the user beyond conventional expression through visual aspects. However, given the non-linearity of this audiovisual medium, many challenges of an artistic, practical, and technological order arise from this dynamic approach.

    Presenter: Wilson Kazuo Mizutani

    When: August 12th, 2015

    Where: Auditório do CCSL, IME/USP

  • Mapping Android sensors using OSC

    In this talk we will present an application that can send events from any sensor available on an Android device using OSC, over unicast or multicast network communication. Sensors2OSC lets the user activate and deactivate any sensor at runtime and is forward compatible with any new sensor that may become available, without requiring an application upgrade. The sensor rate can be adjusted from the slowest to the fastest, and the user can configure any IP and port to set receivers for the OSC messages.

    Presenter: Antonio Deusany de Carvalho Junior

    When: June 29th, 2015

    Where: Auditório do CCSL, IME/USP

  • The Influence of Spectrum Content in the Perception of Consonant and Dissonant Chords and its reflex on Brain Activity

    In this seminar we present the paper "A Influência do Conteúdo Espectral na percepção de acordes Consonantes e Dissonantes e seu reflexo na Atividade Cerebral" (The Influence of Spectral Content on the Perception of Consonant and Dissonant Chords and its Reflection in Brain Activity), developed in partnership with Antonio Goulart and Micael Antunes. The paper addresses the basis for musical consonance analysis and its relationship with the spectral content of chords in the tonal system and, based on this, proposes an experiment that explores through electroencephalography a relation between

    Presenter: Guilherme Feulo do Espírito Santo

    When: June 1st, 2015

    Where: 144-B

  • Audio Processing with Adaptive Computational Costs

    Audio processing applications can and do suffer from overload situations, which result in unwanted sound events: strong distortions, clicks and interruptions. For practical reasons it is not always possible to resolve such problems with more computational resources. In this talk we present ideas that take another approach: making the computational costs of the elements of the audio processing chain flexible, so that they can be controlled dynamically and overload situations can be avoided.

    Presenter: Thilo Koch

    When: May 4th, 2015

    Where: 144-B

  • Musical interaction through Cloud Services

    Cloud Computing is a buzzword that has captured the attention of many areas. One of its attractions is the variety of services offered, taking advantage of powerful processing and data distribution. Cloud services bring many benefits if you want to use Cloud Computing but do not want to spend time setting up virtual machines before starting your projects.

    Presenter: Antonio Deusany de Carvalho Junior, Sc.D. candidate at IME/USP and exchange student at University of Michigan/USA

    When: April 6th, 2015

    Where: CCSL Auditorium, IME/USP

  • Audio synthesis with periodically time-varying filters

    No abstract available in English.

    Presenter: Antonio Goulart

    When: March 23rd, 2015

    Where: CCSL Auditorium, IME/USP

2014

  • Talk about Audio Synthesis

    The description will be available soon.

    Presenter: Antonio Goulart

    When: December 8th, 2014

    Where: CCSL Auditorium, IME/USP

  • Mobile Music and some results

    No abstract available in English.

    Presenter: Antonio Deusany de Carvalho Junior

    When: December 1st, 2014

    Where: CCSL Auditorium, IME/USP

  • Adaptive soundtracks in digital games

    It is fairly common in digital games for the soundtrack to be reduced to a single background track per scene. Little research has been conducted to this day on making soundtracks adapt to the player experience, especially compared to the investment in gaming graphics. This talk presents methods and criteria for changing music rendering in real time in order to best express the feeling intended for the game.

    Presenter: Wilson Kazuo Mizutani

    When: November 24th, 2014

    Where: CCSL Auditorium, IME/USP

  • Simulation of sound in rooms

    Simulations play an important role in acoustics. There are applications in planning of acoustic environments, materials testing and in virtual reality systems. In this seminar we will give a short introduction to some methods for simulations of sound in rooms. The most used approaches will be explained: ray tracing and image source modelling as well as hybrid models.

    Presenter: Thilo Koch

    When: November 10th, 2014

    Where: CCSL Auditorium, IME/USP
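
    The image source model mentioned above can be sketched in a few lines: each first-order reflection from a shoebox room wall is replaced by a mirrored copy of the source, and the reflection's delay is simply the straight-line distance from that image to the listener. A toy illustration (the room dimensions are made up, not from the talk):

```python
import math

def first_order_images(src, room):
    """Mirror a source (x, y, z) across the six walls of a shoebox room
    with dimensions (Lx, Ly, Lz); walls lie at 0 and L on each axis."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]  # mirror across the wall
            images.append(tuple(img))
    return images

def delays_ms(src, listener, room, c=343.0):
    """Arrival delay (ms) of each first-order reflection."""
    return [1000.0 * math.dist(img, listener) / c
            for img in first_order_images(src, room)]

# Example: a 6 m x 4 m x 3 m room (hypothetical)
d = delays_ms((1.0, 1.0, 1.5), (4.0, 2.0, 1.5), (6.0, 4.0, 3.0))
```

    Higher-order reflections come from mirroring the images themselves, which is where the method's cost grows; ray tracing and hybrid models trade that cost differently.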

  • MIDI as communication protocol and musical score digital format

    This seminar will be an introduction to MIDI as a communication protocol between instruments/applications and as a file format for the digital representation of musical scores. We will present the main control events, their encoding, MIDI note number details, sound banks and musical symbol support, among other topics. We will also exhibit two of the most commonly used audio APIs on Linux and some free MIDI tools (synthesizers and score editors, for example).

    Presenter: Roberto Piassi Passos Bodo

    When: November 3rd, 2014

    Where: CCSL Auditorium, IME/USP
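
    One of the MIDI note number details usually covered in such an introduction: note 69 is A4 at 440 Hz and each step is a semitone, so frequency follows f = 440 · 2^((n − 69) / 12). A quick sketch:

```python
import math

def midi_to_freq(note: int, a4: float = 440.0) -> float:
    """Frequency in Hz of a MIDI note number (69 = A4)."""
    return a4 * 2.0 ** ((note - 69) / 12.0)

def freq_to_midi(freq: float, a4: float = 440.0) -> int:
    """Nearest MIDI note number for a frequency in Hz."""
    return round(69 + 12 * math.log2(freq / a4))

# midi_to_freq(69) -> 440.0; note 60 is middle C (~261.63 Hz)
```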

  • Modeling a Music Auralization System using Wave Field Synthesis

    No abstract available in English.

    Presenter: Marcio José da Silva

    When: October 20th, 2014

    Where: CCSL Auditorium, IME/USP

  • Musical Brain-Computer Interfaces

    Recent technological advances have promoted the development of sensors that previously, because of their cost, were used only for medical purposes. Among these, the electroencephalogram (EEG) gained attention among interface researchers for its versatility and portability, giving rise to different applications in distinct fields. In this seminar we will present some techniques, results and discussions published at the Fifth International Brain-Computer Interface Meeting (2013) involving the problem of Musical BCI development.

    Presenter: Guilherme Feulo

    When: September 29th, 2014

    Where: CCSL Auditorium, IME/USP

  • Recent (post-2006) distortion techniques for music signal synthesis

    In this seminar we will cover recently proposed sinusoid-distortion techniques for the synthesis of musical signals, based on extensions or variations of the classical techniques.

    Presenter: Antonio Goulart

    When: September 1st, 2014

    Where: CCSL Auditorium, IME/USP

  • Using Mobile Devices Sensors as input for Pure Data

    Pure Data (PD) is a computer music language used to create musical applications that has been increasingly integrated into mobile applications since the development of libpd. Sensors2PD (aka S2PD) was developed in order to facilitate the use of sensors in Android devices as input for PD patches. S2PD is going to be presented at this seminar with some demonstrations in order to show that it can be used in new musical applications aimed at performances with mobile devices.

    Presenter: Antonio Deusany de Carvalho Junior, ScD candidate at IME/USP

    When: August 25th, 2014

    Where: CCSL Auditorium, IME/USP

  • An Introduction to Binaural Processing - Binaural Listening

    How does binaural listening work? What benefits does it offer over monaural listening? Which cues are used to localize sound sources in three-dimensional space? This seminar will present some answers that current (psycho)acoustic science offers to these questions. Furthermore, models of the human auditory system and their applications will be introduced. Keywords: binaural listening, sound localization, binaural modeling

    Presenter: Thilo Koch, PhD student at IME/USP

    When: August 18th, 2014

    Where: CCSL Auditorium, IME/USP
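
    One of the localization cues referred to above is the interaural time difference (ITD). As an illustration, Woodworth's spherical-head approximation gives ITD ≈ (r/c)(θ + sin θ) for azimuth θ; a sketch assuming a typical head radius (the talk itself may use different models):

```python
import math

def itd_seconds(azimuth_deg: float, head_radius=0.0875, c=343.0) -> float:
    """Woodworth's spherical-head approximation of the interaural time
    difference for a source at the given azimuth (0 = straight ahead,
    90 = directly to one side)."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))

# ITD is zero straight ahead and maximal at the side (~0.66 ms here),
# which is roughly the range the auditory system exploits below ~1.5 kHz.
```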

  • Interactive Composition and Computational Neuroscience Models

    This talk explores the convergence between concepts developed around contemporary music and formulations that emerged in our research based on models from computational neuroscience. We start by pointing out that musical composition evolved from the use of symbolic notation to the writing of the internal organization of sound. This may be observed in expanded instrumental techniques and in more recent strategies using new interfaces for musical expression.

    Presenter: Jônatas Manzolli, head of NICS/UNICAMP and full professor at IA/UNICAMP

    When: August 7th, 2014

    Where: CCSL Auditorium, IME/USP

  • Pure Data externals development workshop

    This seminar will be a practical workshop on Pure Data external development. Procedures for developing an external in C will be presented, starting from the code examples available in the handout. We will cover the basic external structure, parameter passing, hot and cold inlets, outlets and DSP. The audience must have gcc, make and Pure Data installed on their computers.
    Handout: https://github.com/flschiavoni/pd-external-tutorial
    External code generator: http://www.ime.usp.br/~fls/PDExternal-generator/
    Pure Data website: http://puredata.info

    Presenter: Flávio Schiavoni

    When: June 10th, 2014

    Where: CCSL Auditorium, IME/USP

  • Classical distortion techniques for audio synthesis

    In this seminar we will present the classical distortion techniques that belong to the family of modulation techniques.

    Presenter: Antonio Goulart

    When: June 3rd, 2014

    Where: CCSL Auditorium, IME/USP
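
    Ring modulation, one of the classical techniques in this family, is simply the sample-wise product of carrier and modulator, which produces the sideband frequencies f_c ± f_m. A minimal sketch (frequencies chosen arbitrarily for illustration):

```python
import math

def ring_mod(fc, fm, sr=44100, n=1024):
    """Ring modulation: sample-wise product of two sinusoids.
    The spectrum contains only the sidebands fc - fm and fc + fm."""
    return [math.sin(2 * math.pi * fc * i / sr) *
            math.sin(2 * math.pi * fm * i / sr) for i in range(n)]

# Product-to-sum identity: sin(a)sin(b) = (cos(a-b) - cos(a+b)) / 2,
# so a 440 Hz carrier ring-modulated by 100 Hz yields 340 Hz and 540 Hz.
sig = ring_mod(440.0, 100.0)
```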

  • Workshop: plugin development for the MOD pedal rack

    This will be a workshop on creating and publishing audio effect plugins for the MOD programmable pedalboard (http://portalmod.com/). We will present our SDK for plugin development, together with some example effects and digital audio processing routines. We will also present our cloud for publishing plugins and the GUI for loading the pedalboard with effects. Bring your computers with Python and the MOD SDK installed: sudo pip install modsdk

    Presenter: Gianfranco Ceccolini, André Coutinho and Luís Henrique Fagundes

    When: May 27th, 2014

    Where: CCSL Auditorium, IME/USP

  • Using Wavelets in the analysis of Evoked Potentials in EEGs

    Event-Related Potentials (ERPs) are frequently used in cognitive neuroscience to study the physiological correlates of sensory, perceptual and cognitive activity in the processing of external stimuli. Many techniques for analyzing electroencephalogram (EEG) data were developed to handle ERPs. In this seminar we will present wavelet-based techniques for noise removal and EEG analysis, as well as a brief comparison with recent techniques.

    Presenter: Guilherme Feulo do Espírito Santo

    When: May 13th, 2014

    Where: CCSL Auditorium, IME/USP

  • Obtaining normalized statistics for chroma features

    This seminar will walk step by step through a process for extracting chroma features from an audio file, in order to obtain a representation that is efficient in its use of computational resources. This representation will then be refined to deal with noise and small chroma variations.

    Presenter: Álvaro Stivi

    When: April 29th, 2014

    Where: CCSL Auditorium, IME/USP
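
    The core step of any chroma representation is folding spectral energy onto 12 pitch classes, discarding octave information. A hedged sketch of that mapping (the actual extraction pipeline presented in the talk may differ):

```python
import math

def chroma_bin(freq_hz: float, a4: float = 440.0) -> int:
    """Pitch class (0 = C, ..., 11 = B) of a frequency, octave-folded."""
    midi = 69 + 12 * math.log2(freq_hz / a4)
    return round(midi) % 12

def chroma_vector(peaks):
    """Accumulate (frequency, magnitude) spectral peaks into a 12-bin
    chroma vector."""
    v = [0.0] * 12
    for f, mag in peaks:
        v[chroma_bin(f)] += mag
    return v

# 261.63 Hz (C4) and 523.25 Hz (C5) both land in bin 0 (pitch class C).
```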

  • Live coding as a DJing tool

    This seminar will present a suggestion for performing as a DJ augmented with live coding techniques, creating in real time sounds to be mixed with commercial recordings or used in moments of pure live coding. In this way the possibilities offered by live coding are placed alongside classical DJing techniques, promoting the sonority of live coding while keeping the familiar ambience of Electronic Dance Music (EDM).

    Presenter: Antonio Goulart

    When: April 22nd, 2014

    Where: CCSL Auditorium, IME/USP

  • Using Pure Data to create Musical Multiagent Systems in the Ensemble Framework

    The Ensemble Framework has a large set of tools for creating and configuring Musical Multiagent Systems. This seminar presents an interface between Ensemble and Pure Data, a visual programming language used in musical applications. The interface is implemented via the Pure Data API and allows the creation and configuration of Ensemble applications using Pure Data patches, combining Ensemble's tool set with Pure Data's features and usability.

    Presenter: Pedro Bruel

    When: April 8th, 2014

    Where: CCSL Auditorium, IME/USP

  • Computational techniques for expressive musical accompaniment synthesis

    In this seminar the speaker will present his master's project, which addresses the problem of automated musical accompaniment: the computer compares the events of a live performance with a score and, inferring the musician's tempo, generates the appropriate accompaniment in real time. A musical accompaniment system can be implemented with four separate modules: the input preprocessor, the matcher, the accompanist and the synthesizer.

    Presenter: Roberto Piassi Passos Bodo

    When: April 1st, 2014

    Where: CCSL Auditorium, IME/USP

  • segmenting to fuse: sensor fusion for movement analysis

    In life, some say: “If the problem is easy, solve it directly. If not, decompose it into smaller parts.” In computer science, this is called Divide & Conquer (D&C). But what type of problems should one divide to conquer? In research, communication, collaboration… in life? Carolina Brum Medeiros, a PhD candidate at the IDMIL Laboratory/McGill University, sketches possibilities for dividing and conquering in human motion analysis and, why not, in research. In this talk, she will discuss sensor fusion as a method for closing the D&C open chain.

    Presenter: Carolina Brum Medeiros, PhD candidate at IDMIL/McGill

    When: March 25th, 2014

    Where: CCSL Auditorium, IME/USP

  • Impedance and its effects

    Impedance is related to the impediment or opposition inside a system. Electrical impedance can be represented as a complex quantity whose magnitude combines resistance (real part) and reactance (imaginary part). Acoustic impedance, on the other hand, depends on frequency and can be calculated as a function of pressure, particle velocity and surface area. It is also possible to examine the impedance of a medium or of a sound component.

    Presenter: Antonio Deusany de Carvalho Junior

    When: March 18th, 2014

    Where: CCSL Auditorium, IME/USP

  • Introduction to Gstreamer

    Writing multimedia applications from scratch is hard work. There are many aspects to consider: I/O, formats for video and audio, conversions, real-time conditions, synchronization, (de)multiplexing, effective control and so on. The Gstreamer framework tries to help with this.

    Presenter: Thilo Koch

    When: March 11th, 2014

    Where: CCSL Auditorium, IME/USP

2013

  • Medusa: A distributed music environment

    The popularization of computer networks, the growth in computational resources and their use in music production have raised the interest in using computers for synchronous communication of music content. This communication may allow a new level of interactivity between machines and people in music production processes, including the distribution of activities, resources and people within a networked music environment. In this context, this work presents a solution for synchronous communication of audio and MIDI streams in computer networks.

    Presenter: Flávio Luiz Schiavoni

    When: November 26th, 2013

    Where: Room B-3, IME/USP

  • Brain-Computer Interfaces through Electroencephalograms

    Being able to control a computational system with one's mind is an old wish among computational interface designers. In this talk we will present the process of acquiring brain signals through electroencephalograms, applied to the design of brain-computer interfaces. We will present how these interfaces work, their limitations, and the different brain signal acquisition methods commonly used in this type of application, aiming to motivate the use of electroencephalograms as a control mechanism in computer interfaces.

    Presenter: Guilherme Feulo

    When: November 19th, 2013

    Where: Room B-3, IME/USP

  • Techniques for improving an automatic accompaniment system

    In this talk we will present several techniques for improving an automatic accompaniment system, following the line of research of Roger B. Dannenberg, professor at Carnegie Mellon University. An automatic accompaniment system, basically, compares the events of a live performance with those of a score and, inferring the musician's tempo, plays the appropriate accompaniment. Early implementations had some limitations in recognizing and following a real-time performance, which demanded, in a way, that the soloist play the notes of the score in a more orderly fashion.

    Presenter: Roberto Piassi Passos Bodo

    When: November 12th, 2013

    Where: Room B-3 at IME/USP

  • Introduction to Csound

    This seminar will present Csound, a computer music language and a platform for audio synthesis and processing. Csound has a long tradition, having been defined in 1986 as part of the Music-N series of languages, but it is still used by many artists and researchers today.

    Presenter: Thilo Koch

    When: November 5th, 2013

    Where: Room B-3, IME/USP

  • Basic SuperCollider + Acid Live Coding = Salted Jam

    In this seminar we will introduce basic elements of SuperCollider, a Smalltalk-based, dynamic, object-oriented language suited for algorithmic composition, audio synthesis and processing. Some considerations on live coding (musical performance consisting of writing programs in real time), algorithmic composition (generating compositions with algorithmic processes) and laptop orchestras will also be presented.

    Presenter: Antonio Goulart

    When: October 29th, 2013

    Where: Room B-3, IME/USP

  • Embedding Pure Data patches with libpd

    Pure Data is a visual programming language used in many interactive multimedia applications. A Pd program is called a "patch" and allows quick, interactive programming, but the solution is restricted to the Pure Data environment. This seminar discusses libpd, a thin wrapper around Pure Data's functions that allows the use of Pd patches in different language and platform contexts.
    Code featured in the presentation:
    libpd: $ git clone https://github.com/libpd/libpd.git
    C and Java tests: $ git clone https://github.com/phrb/libpd_tutorials.git

    Presenter: Pedro Henrique Rocha Bruel

    When: October 22nd, 2013

    Where: Room B-3 at IME/USP

  • Realtime audio processing in highly-available low-cost devices

    This master's work explored different possibilities for real-time audio processing using highly available, low-cost devices. Arduino is a minimal framework for interacting with microcontrollers of the ATmega family and is generally used as a control interface for other electrical or electronic devices. Since it has pins with ADC and DAC capabilities, it can be used to capture, process and emit analog signals.

    Presenter: André J. Bianchi

    When: October 15th, 2013

    Where: Room B-3 at IME/USP

  • Technologies and tools for spatial audio

    In this talk we will present the main references in the engineering of systems for spatial audio. We will present existing architectures, projects for development of auralization technologies and multichannel audio, with a view of our research on auralization at USP. We will also show popular commercial systems and interesting applications, ending with a discussion on the next steps in this field.

    Presenter: Regis Rossi Alves Faria (Music Dept., FFCLRP/USP)

    When: October 10th, 2013

    Where: Room B-3, IME/USP

  • Csound for Android

    Csound is a computer music language developed at MIT nearly 30 years ago that is still widely used. Its latest versions support live coding, and Csound was also selected as the audio system of the One Laptop Per Child (OLPC) project. Besides its basic C API, the language can be used with Python, Java, Lisp, Tcl, C++, Haskell and many other languages. In this talk we will discuss how Csound can be used on Android devices.

    Presenter: Antonio Deusany de Carvalho Junior, ScD candidate at IME/USP

    When: October 1st, 2013

    Where: Room B-3, IME/USP

  • Musical accompaniment algorithms for polyphonic performances

    In this talk we will present a paper by Joshua J. Bloch and Roger B. Dannenberg entitled "Real-Time Computer Accompaniment of Keyboard Performances" (ICMC 1985). In this paper a set of algorithms is developed to deal with real-time accompaniment of polyphonic performances. We are in a scenario in which the computer listens to a musician's performance, compares the events of the input with the events of a score and, with a high correlation between them, infers a tempo and plays the appropriate accompaniment.

    Presenter: Roberto Piassi Passos Bodo

    When: September 24th, 2013

    Where: Room B-3 at IME/USP

  • The sound of space

    In this talk we will revisit some of the main references in the conception and implementation of spatial audio systems. In a chronological journey we will approach the ideas, inventions and audio engineering solutions behind the most interesting commercial system implementations to date. We will conclude with a vision of our own research in auralization at USP and open issues for the future.

    Presenter: Regis Rossi Alves Faria (Music Dept., FFCLRP/USP)

    When: September 19th, 2013

    Where: Room B-6, IME/USP

  • Alternatives in network transport protocols for audio streaming applications

    Audio streaming is often pictured as a networking application that is not concerned with packet loss or data integrity, but is otherwise very latency-sensitive. However, some usage scenarios may be identified, such as remote recording, that shift concerns towards more conservative views regarding stream integrity. Although many streaming applications today use the UDP protocol, there are some alternative transport layer protocols that are worth investigating, especially in applications other than Voice-over-IP (VoIP) or live distributed performance.

    Presenter: Flávio Schiavoni, PhD student at IME/USP

    When: September 10th, 2013

    Where: Room B-3, IME/USP

  • Three applications of audio and sensor processing in embedded systems

    In this talk I will present three applications of coupled audio/sensor signal processing I worked with in the first two years of my PhD.

    Presenter: Gabriel Gómez, PhD student at Friedrich-Alexander-Universität Erlangen-Nürnberg

    When: August 27th, 2013

    Where: Room B-3, IME/USP

  • Controlling audiovisual resources in cultural events

    This talk will present a proposal for a tool to control audiovisual resources in artistic expressions using open projects such as the Open Lighting Architecture (OLA), which provides communication via DMX512, the standard protocol for lighting equipment, and the visual programming language Pure Data, widely used for sound processing. The goal is to reduce equipment cost and make the technical knowledge needed to manipulate such equipment more accessible, while also providing new possibilities for interaction between visual and sound elements.

    Presenter: Gilmar Dias

    When: August 20th, 2013

    Where: Room B3 - IME/USP

  • Random Audio Signal Processing

    In this talk we will introduce elementary concepts of random audio signal processing. This expression should not be confused with any reference to a sound/timbre property or a characterization of a signal's contents; rather, it refers to signal models and hypotheses related to signal description. Deterministic and stochastic approaches to classical problems such as prediction and interpolation will be compared, and solutions to stochastic filtering problems using the methods of Wiener and Kalman will be presented.

    Presenter: Marcelo Queiroz

    When: June 26th, 2013

    Where: Room B-101, IME/USP

  • Network music with Medusa

    This seminar will present an architectural view of real-time network music tools. This architectural model is being researched in a PhD thesis and resulted in the development of a network music tool called Medusa. Medusa works with different audio and MIDI APIs and also uses different transport protocols. In the near future the tool will also support different addressing methodologies and OSC communication. In this seminar we will discuss Medusa's architectural elements and how these choices can influence performance and usability.

    Presenter: Flávio Schiavoni

    When: June 19th, 2013

    Where: Room B-101, IME/USP

  • Synthesis by amplitude modulation with feedback

    This seminar will present a new sound synthesis technique based on modulating an oscillator's amplitude with the feedback of its own output. The system will be interpreted as a PLTV digital filter, whose coefficients vary periodically. Variations of architectures using this oscillator will also be presented and analyzed, as well as their musical applications. Finally, the possibility of creating digital audio effects (DAFX) with the same system will be discussed.

    Presenter: Antonio Goulart

    When: June 12th, 2013

    Where: Room B-101, IME/USP
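
    A minimal rendering of the feedback idea described above, assuming the recurrence y[n] = sin(ωn)(1 + βy[n−1]); this is an illustrative guess at the architecture, not necessarily the exact system from the talk:

```python
import math

def feedback_am(freq=220.0, beta=0.5, sr=44100, n=1024):
    """Oscillator whose amplitude is modulated by its own delayed output:
    y[n] = sin(w n) * (1 + beta * y[n-1]).  Viewed as a one-tap recursive
    filter, the coefficient sin(w n) varies periodically with n (PLTV)."""
    w = 2 * math.pi * freq / sr
    y, prev = [], 0.0
    for i in range(n):
        prev = math.sin(w * i) * (1.0 + beta * prev)
        y.append(prev)
    return y

# For |beta| < 1 the output stays bounded by 1 / (1 - beta).
sig = feedback_am()
```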

  • MOD - An LV2 host and processor at your feet

    MOD is a Linux-based LV2 plugin processor and controller. Musicians access it via Bluetooth and set up their pedalboards by making internal digital connections between audio sources, plugins and audio outputs. After a pedalboard set is saved, it can be shared with other users on the MOD social network. The software components are open source, which means you can also run it on any Linux machine, not only on the MOD hardware.

    Presenter: Bruno Gola, Gianfranco Ceccolini and Rodrigo Ladeira

    When: June 5th, 2013

    Where: Room B-101 at IME/USP

  • ChucK and TAPESTREA

    In this seminar we will cover two new audio manipulation tools. In the first part, we will present the ChucK programming language. Aimed at real-time audio composition, analysis and synthesis, the language uses a concurrent programming model with an emphasis on temporal control. ChucK also offers the possibility of adding or changing code during execution. Next, we will present the TAPESTREA framework, designed to interactively analyze, transform and synthesize complex sounds.

    Presenter: Gilmar Dias

    When: May 29th, 2013

    Where: Room B-101, IME/USP

  • The role of tools in musical creation

    Starting from the experience of the Mobile project, we will raise some questions concerning the tools and technological interfaces used in musical creation. We will present some examples of the use of technological tools in the group's productions and address some problems that arise between artistic demands and the available technologies. In particular, we will point out the interaction between research and technological development projects and experimental artistic production.

    Presenter: Fernando Iazzetta

    When: May 22nd, 2013

    Where: Jacy Monteiro Auditorium, Block B, IME/USP

  • Case studies about real time digital audio processing using highly available low cost computational platforms

    In this master's work we explore different possibilities for real-time digital audio processing using platforms that are highly available and relatively low-cost. Arduino is a minimal framework for interacting with ATmega microcontrollers and is generally used as a control interface for other electric or electronic devices. Because it has pins capable of ADC and DAC, it can be used to capture, process and emit analog signals.

    Presenter: André Jucovsky Bianchi

    When: May 8th, 2013

  • AudioLazy: DSP in Python

    AudioLazy = DSP (Digital Signal Processing) + expressiveness + real time + pure Python. It is a package designed for audio processing, analysis and synthesis, intended both for prototyping and simulation tasks and for applications with real-time requirements. This seminar presents AudioLazy, its design goals, aspects of the digital representation of sound and their impacts, relationships between expressiveness and implementation, as well as several examples of applications.

    Presenter: Danilo de Jesus da Silva Bellini

    When: April 24th, 2013

  • From Computer Music to the Theater: The Realization of a Theatrical Automaton

    This talk will present an experience of applying computer music to theater, realized in 1981 at the "Festival of Two Worlds" in Spoleto, Italy. The aim was to increase the interaction between sounds, scenery and actors on stage by distributing control over the generation of musical events. The seminar will address the structure of the performance and the music generated, as well as the technical apparatus constructed to perform the work.

    Presenter: Gilmar Dias

    When: April 17th, 2013

    Where: Room B-101 at IME/USP

  • Framboesa π: Signal processing with raspberry pie

    In this seminar we present the Raspberry Pi, from its basic specifications to its use with several operating systems. Some case studies and comparisons with other devices will be presented, together with useful accessories that provide a better user experience. Aiming to validate (or not) its use in artistic performances, there will be some demonstrations of real-time signal processing with this credit-card-sized single-board computer.

    Presenter: Antonio Deusany de Carvalho Junior

    When: April 10th, 2013

    Where: Room B-101 at IME/USP

  • Musical Multiagent Systems

    Musical Multiagent Systems are useful in solving inherently complex and distributed problems. This seminar will present an overview of Multiagent Systems, and applications on the field of Computer Music.

    Presenter: Pedro Henrique Rocha Bruel

    When: April 3rd, 2013

    Where: Room B-101 at IME/USP

  • Classical papers in Computer Music: Roads and Granular Synthesis

    This seminar will present Curtis Roads's article "Granular Synthesis of Sound", published in 1978 in the Computer Music Journal. We will present the ideas and theories that culminated in a new method of sound synthesis, such as the quantum of sound and the study of information in sound messages, as well as the first methodologies proposed for musical composition with grains and some implementations.

    Presenter: Antonio Goulart

    When: March 20th, 2013

    Where: Room B-101, IME/USP

  • Classical papers in Computer Music: Dannenberg and the real-time musical accompaniment

    In this talk we will present Roger Dannenberg's paper entitled "An On-Line Algorithm for Real-Time Accompaniment" (ICMC 1984). The author presents a new approach to the real-time musical accompaniment problem in which the computer has the ability to follow a solo musician. Three sub-problems are considered: processing the input from the soloist in real time (using symbolic code), comparing this input with the original score (using a matching algorithm) and generating the accompaniment in accordance with the musician's progress (using a virtual clock).

    Presenter: Roberto Piassi Passos Bodo

    When: March 13th, 2013

    Where: Room B-101 at IME/USP
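
    The matching step in this family of accompaniment systems can be illustrated by the underlying dynamic program: find the order-preserving association between performance and score that maximizes the number of matched notes, a longest-common-subsequence recurrence. A simplified offline sketch (Dannenberg's actual algorithm is an on-line, windowed variant):

```python
def best_match(performance, score):
    """Size of the best order-preserving association between performed
    notes and score notes (longest common subsequence)."""
    m, n = len(performance), len(score)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if performance[i] == score[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n]

# 64 was played as 66, so 4 of the 5 score notes still match in order.
score = [60, 62, 64, 65, 67]
perf = [60, 62, 63, 66, 65, 67]
```

    An on-line matcher evaluates this table column by column as performance events arrive, which is what lets the virtual clock be re-estimated in real time.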

  • Inter-Linguistic Voice Conversion

    Voice conversion is an emerging problem in speech and voice processing with growing commercial interest, both in applications such as Speech-to-Speech Translation (SST) and in personalized Text-To-Speech (TTS) systems. A voice conversion system must map the acoustic characteristics of sentences uttered by a source speaker to the corresponding values of the target speaker's voice, so that the processed output is perceived as a sentence uttered by the target speaker.

    Presenter: Anderson Fraiha Machado

    When: March 6th, 2013

    Where: Room 10-B, IME/USP

2012

  • Composer Classification in Symbolic Data Using PPM

    The aim of this work is to propose four methods for composer classification on symbolic data, based on melodies and making use of the Prediction by Partial Matching (PPM) algorithm, and also to propose data modeling inspired by psychophysiological aspects. Rhythmic and melodic elements are combined instead of using melody or rhythm alone. The models take into account the perception of pitch changes and note duration articulations, and are then used to classify melodies.

    Presenter: Antonio Deusany de Carvalho Junior

    When: December 6th, 2012

    Where: Room 268-A, IME/USP

  • Wave Field Synthesis

    Wave field synthesis (or WFS) is the attempt to reproduce in a different environment an acoustic scene exactly like the original one. In the past 20 years several wave field synthesis systems have been developed which use holographic techniques. Using multi-channel systems, WFS tries to resynthesize a two-dimensional sound field in a spatial environment based on the idea of superposition/interference of small sound sources represented by a large number of loudspeakers close to each other.

    Presenter: Thilo Koch

    When: November 29th, 2012

    Where: Room 268-A, IME/USP
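    The superposition idea behind WFS can be hinted at with a toy sketch that assigns each loudspeaker a delay and attenuation for a virtual point source. This is only an illustration of the principle, not a proper WFS driving-function derivation:

```python
import math

def wfs_driving(source, speakers, c=343.0, sr=44100):
    """Per-speaker (delay in samples, gain) for a virtual point source:
    distance-based delay and 1/r attenuation, so the delayed copies emitted
    by the array superpose into an approximation of the source's wavefront."""
    result = []
    for sp in speakers:
        r = math.dist(source, sp)                  # distance source -> speaker
        result.append((round(r / c * sr), 1.0 / max(r, 1e-6)))
    return result

# virtual source 2 m behind the center of a 3-speaker line array
drives = wfs_driving((0.0, -2.0), [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)])
```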

  • Introduction to Faust

    Faust is a programming language designed for digital signal processing, based on block algebra. In this seminar, we will present Faust's basic syntax and the FaustWorks development environment. A Faust program can be compiled for use with audio processing tools such as Pure Data and Csound, audio architectures such as Jack, OSS and ALSA, and plugin standards such as LADSPA and LV2. We suggest that the participants bring their computers so they can follow the presentation and examples. Faust and FaustWorks can be downloaded from the following link: http://faust.grame.fr/index.php/downloads

    Presenter: Gilmar Dias and André J. Bianchi

    When: November 22nd, 2012

    Where: Room 268-A at IME/USP

  • Classical papers in Computer Music: II - Schroeder, Moorer and reverberation

    In this talk we will present James Moorer's paper entitled "About this reverberation business", published in 1979 in the Computer Music Journal. We will present the structures first proposed by Schroeder, based on comb filters and all-pass filters, as well as the improvements proposed by Moorer. Considerations on room acoustics will also be discussed, in order to understand the modeling of early reflections in the reverberation of real environments.

    Presenter: Antonio Goulart

    When: November 8th, 2012

    Where: Room 268-A at IME/USP
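    A minimal sketch of the Schroeder feedback comb filter discussed in the talk, which turns an impulse into a train of exponentially decaying echoes:

```python
def feedback_comb(x, delay, g):
    """Schroeder feedback comb filter: y[n] = x[n] + g * y[n - delay].
    Fed an impulse, it produces echoes decaying by a factor g per pass."""
    y = []
    for n, xn in enumerate(x):
        y.append(xn + (g * y[n - delay] if n >= delay else 0.0))
    return y

impulse = [1.0] + [0.0] * 49
tail = feedback_comb(impulse, 10, 0.5)
print(tail[0], tail[10], tail[20])  # → 1.0 0.5 0.25
```

    Schroeder's reverberators combine several such combs in parallel, followed by all-pass filters in series; Moorer's refinement adds a low-pass filter in the feedback loop to mimic air absorption.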

  • Analyzing Network Music Performance Tools

    In this talk we will analyze our current options for making music performances through a computer network, focusing on the residential Internet available in Brazil. We will present an overview of this computer music sub-area so that we can discuss implementation aspects of software such as JackTrip, SoundJack and Skype. We will also describe the procedures taken to automate testing with those applications and the techniques used to analyze the results.

    Presenter: Marcio Tomiyoshi

    When: October 25th, 2012

    Where: Room 268-A at IME/USP

  • Using Pure Data on Android Apps

    Pure Data (Pd) is a visual programming language intended for multimedia applications. The language is used by musicians and musical performers especially because it is easy to use. Pd can process flows of digital signals and can be applied to game development, computer music, computer graphics and image processing. The language is compatible with many operating systems, including iOS and Android, and can be embedded in languages that support native code, such as C, Java, Objective-C and Python (including PyGame).

    Presenter: Antonio Deusany de Carvalho Junior

    When: October 18th, 2012

    Where: Room 268-A at IME/USP

  • AudioLazy - Package for audio processing with Python

    Python is a multi-paradigm (object-oriented, imperative, functional), general-purpose, high-level, interpreted, dynamically typed programming language with a philosophy that emphasizes code readability. NumPy, SciPy and Matplotlib are packages that make Python's usage and expressiveness resemble languages such as MATLAB and Octave. However, the eager evaluation performed by these tools makes it difficult, perhaps impossible, to use them for real-time audio processing. Another difficulty concerns writing expressive code for block-based audio processing through indexes and vectors.

    Presenter: Danilo de Jesus da Silva Bellini

    When: October 11th, 2012

    Where: Room 268-A at IME/USP
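    The lazy-evaluation idea contrasted above with eager NumPy-style processing can be sketched with plain Python generators. This is an illustration of the concept only, not AudioLazy's actual API:

```python
import itertools
import math

def sine(freq, sr=44100):
    """Endless lazy sine stream: samples are computed only when consumed."""
    for n in itertools.count():
        yield math.sin(2 * math.pi * freq * n / sr)

def gain(stream, g):
    """Lazily scale a sample stream."""
    return (g * s for s in stream)

# pull exactly one 64-sample block from the infinite processing chain;
# nothing is computed until islice() starts consuming samples
chunk = list(itertools.islice(gain(sine(440.0), 0.5), 64))
```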

  • Real time audio processing using Arduino

    Arduino is a microcontroller platform based on open hardware and software, widely used for interfacing with electric and electronic equipment in multidisciplinary projects. In this seminar, techniques for using Arduino for real-time audio processing will be presented, as well as performance analysis results for computing FFTs, additive synthesis and time-domain convolution.

    Presenter: André Jucovsky Bianchi

    When: October 4th, 2012

    Where: Room 268-A at IME/USP
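    The time-domain convolution benchmarked in the talk can be sketched directly; its quadratic cost in the lengths of signal and impulse response is what makes it demanding on a small microcontroller:

```python
def convolve(x, h):
    """Direct time-domain convolution: y[n] = sum_k x[k] * h[n - k].
    Costs O(len(x) * len(h)) multiply-accumulate operations."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

print(convolve([1.0, 2.0], [1.0, 0.5]))  # → [1.0, 2.5, 1.0]
```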

  • Classical papers in Computer Music: I - Chowning's FM synthesis

    In this talk we will present John Chowning's paper entitled "The Synthesis of Complex Audio Spectra by Means of Frequency Modulation" (J. Audio Eng. Soc. 21(7):526-534, 1973). The well-known process of frequency modulation is here brought to a new application domain, where it is shown to result in a surprising control of audio spectra. The technique provides a means of great simplicity to control the spectral components and their evolution in time. Such dynamic spectra are diverse in their subjective impressions and include sounds both known and unknown at the time.

    Presenter: Marcelo Queiroz

    When: September 27th, 2012

    Where: Room 268-A at IME/USP
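    Chowning's technique reduces to a single formula, sketched here in Python (the parameter values below are illustrative, not taken from the paper):

```python
import math

def fm(fc, fm_freq, index, sr=44100, dur=0.25):
    """Chowning FM synthesis: y[n] = sin(2*pi*fc*n/sr + I*sin(2*pi*fm*n/sr)).
    The modulation index I controls the bandwidth of the resulting spectrum."""
    return [math.sin(2 * math.pi * fc * n / sr
                     + index * math.sin(2 * math.pi * fm_freq * n / sr))
            for n in range(int(sr * dur))]

# a non-integer fc:fm ratio yields inharmonic, bell-like spectra
bell_like = fm(200.0, 280.0, 10.0)
```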

  • Warbike - Sonification of wireless networks for a bike

    This talk will present the Warbike project: an artistic sonification of wireless networks for a bike, where data about network activity and encryption status are translated into sounds by means of a bicycle equipped with speakers and a mobile device. The work is situated in the universe of psychogeography and its contemporary counterparts, such as locative media, wardriving, and the bicycle as a symbol and as a platform for political manifestation.

    Presenter: Gilmar Rocha de Oliveira Dias

    When: September 20th, 2012

    Where: Room 268-A, at IME/USP

  • A tool for musical notation in braille

    This work investigates the difficulties visually impaired people face when studying music at university, where musical information is usually distributed as ink-printed sheet music, and translating this material into braille requires specific skills and resource availability. In this context, the musical production demanded of a blind student is accomplished using braille notation, whether for taking notes, producing homework for disciplines such as Harmony and Musical Analysis, or even for taking tests.

    Presenter: Arthur P. M. Tofani

    When: September 13th, 2012

    Where: Room 252-A, at IME/USP

  • Musical agents reasoning, algorithmic composition, artificial life and interactivity in multiagent musical systems

    Several musical multiagent systems have been developed in recent years, reflecting the increasing interest in composition and musical performance systems that exploit intelligent agent technology. There is a special focus on systems that integrate algorithmic composition techniques, artificial life and interactivity. We can also observe that most of these existing projects show many flexibility and scope limitations: they normally use symbolic musical notation, they solve a single issue or scenario, and they have a technical motivation rather than a musical one.

    Presenter: Santiago Dávila Benavides

    When: August 30th, 2012

    Where: Room 252-A, at IME/USP

  • Mobile Devices as Musical Instruments

    In this talk we will analyse the use of mobile devices as musical instruments. The following topics will be presented: the evolution of musical instruments, the advantages of mobile devices compared to some digital electronic instruments, the current state of native support in some operating systems (such as Android and iOS) for real-time audio processing, examples of audio applications available in the market, and the acceptance/use of mobile phones and tablets in the music industry.

    Presenter: Roberto Piassi Passos Bodo

    When: August 23rd, 2012

    Where: Room A-252 at IME/USP

  • Digital Audio Effects, talk 14: Sound Source Separation

    In these talks, the main concepts in the field of audio effects processing will be presented. In this seminar, we will present some techniques used for separating sound sources from mixtures. Starting from the general principles, we will discuss binaural source separation, separation of sources from single-channel mixtures, and some applications.

    Presenter: Antonio Deusany de Carvalho Junior

    When: July 2nd, 2012

    Where: Room 267-A, at IME/USP

  • Digital Audio Effects, talk 13: Automatic mixing

    In these talks, the main concepts in the field of audio effects processing will be presented. In this seminar we will present techniques for the automatic mixing of audio signals. Such techniques are capable of automatically performing panning, polarity correction, offset correction, gain compensation and fader level adjustment. They can also provide channel enhancement and simple equalization.

    Presenter: Antonio Goulart

    When: June 25th, 2012

    Where: Room 267-A at IME/USP

  • Digital Audio Effects, talk 12: Virtual analog effects

    In these talks, the main concepts in the field of audio effects processing will be presented. In this seminar, we will present digital versions of the following virtual analog effects: nonlinear resonator, the Moog ladder filter, tone stack, wah-wah filter, phaser, circuit-based valve emulation, plate and spring reverberation, tape-based echo simulation and telephone line effect.

    Presenter: Marcio Tomiyoshi

    When: June 18th, 2012

    Where: Room 267-A, at IME/USP

  • Digital Audio Effects, talk 11: Time and frequency-warping musical signals

    In these DAFx talks, the main concepts in the field of audio effects processing will be presented. In the 11th seminar of the series, the interesting audio effects that can be obtained by warping the time and/or frequency axis of an audio signal will be described. Time warping aims at deforming the waveform or the envelope of the signal, while frequency warping modifies its spectral content by transforming a harmonic signal into an inharmonic one and vice versa. The effects obtained by warping often increase the richness of the signal by introducing detuning or fluctuation of the waveform.

    Presenter: Gilmar Rocha de Oliveira Dias

    When: June 11th, 2012

    Where: Room 267-A, at IME/USP

  • Digital Audio Effects, talk 10: Spectral processing

    In these talks, the main concepts in the field of audio effects processing will be presented. In this seminar, frequency-domain audio processing and representation models will be described, as well as issues involved in obtaining spectral models.

    Presenter: Danilo J. S. Bellini

    When: June 4th, 2012

    Where: Room 267-A at IME/USP

  • Digital Audio Effects, talk 9: Adaptive Effects

    In this talk we will introduce the notion of adaptive effects, i.e. sound processing methods whose internal configuration is adaptive to temporal context. This adaptation may be due to automatic control of effect parameters through features extracted from the input signal, or through dynamic reconfigurations of processing chains. Some examples of adaptive effects are dynamic compression, auto-tuning, morphing and concatenative synthesis.

    Presenter: Marcelo Queiroz

    When: May 28th, 2012

    Where: Room 267-A, at IME/USP
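    A toy sketch of one adaptive effect mentioned above, dynamic compression, where an envelope follower extracted from the input controls the gain, which is the defining trait of an adaptive effect. All parameter values here are illustrative:

```python
def compress(x, threshold=0.5, ratio=4.0, decay=0.9):
    """Toy dynamic range compressor: a peak envelope follower adapts the
    gain to the input; levels above the threshold are scaled down by ratio."""
    env, out = 0.0, []
    for s in x:
        env = max(abs(s), decay * env)          # simple peak envelope follower
        if env > threshold:                      # gain reduction above threshold
            target = threshold + (env - threshold) / ratio
            out.append(s * target / env)
        else:
            out.append(s)
    return out

squashed = compress([1.0] * 10)  # a 0 dBFS burst is reduced toward 0.625
```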

  • Digital Audio Effects, talk 8: Source-filter processing

    In these DAFx talks, the main concepts in the field of audio effects processing will be presented. In this seminar, we will present techniques to estimate the spectral envelope and perform the source-filter separation, and sound-filter combination. There are three techniques which can be used for these steps: Channel Vocoder, Linear Prediction, and Cepstrum. Initially we will describe the fundamentals of each technique. Next we will introduce some basic transformations.

    Presenter: Antonio Deusany de Carvalho Junior

    When: May 21st, 2012

    Where: Room 267-A, at IME/USP

  • Digital Audio Effects, talk 7: Time-frequency processing

    In these DAFx talks, the main concepts in the field of audio effects processing will be presented. In this seminar, we will initially present some basic concepts for time-frequency processing and some implementation models (filter bank, FFT analysis, IFFT and sine-sum resynthesis, and gaboret sum). Next, we will see some audio effects that can be produced, such as filtering, dispersion, time-stretching, pitch-shifting, stable/transient component separation, mutation of sounds, robotization, whisperization, denoising and spectral panning.

    Presenter: André Jucovsky Bianchi

    When: May 14th, 2012

    Where: Room 267-A, at IME/USP

  • Digital Audio Effects, talk 6: Time-segment processing

    In these DAFx talks, the main concepts in the field of audio effects processing will be presented. In this talk, we will present some algorithms for signal processing in the time domain. These algorithms basically change the pitch and duration of the audio signal. Methods for variable speed replay, time stretching and pitch shifting will be presented.

    Presenter: Roberto Piassi Passos Bodo

    When: May 7th, 2012

    Where: Room 267-A at IME/USP
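    One of the methods above, variable speed replay, can be sketched as reading the signal at a fractional rate with linear interpolation; pitch and duration change together, as on a tape machine:

```python
def variable_speed(x, speed):
    """Variable speed replay: read the signal at a fractional rate with
    linear interpolation. speed > 1 shortens and raises pitch; < 1 lowers it."""
    out, pos = [], 0.0
    while pos < len(x) - 1:
        i = int(pos)
        frac = pos - i
        out.append((1 - frac) * x[i] + frac * x[i + 1])  # linear interpolation
        pos += speed
    return out

ramp = [0.0, 1.0, 2.0, 3.0]
print(variable_speed(ramp, 2.0))  # → [0.0, 2.0]
```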

  • Digital Audio Effects, talk 5: Spatial effects

    In these DAFX talks, the main concepts in the field of audio effects processing will be presented. In this talk, we will present fundamentals of spatial hearing and basic techniques for spatial digital audio effects, such as stereo imaging, panning, HRTF filters, reverb, effects due to distance and Doppler effects.

    Presenter: Santiago Dávila Benavides

    When: April 23rd, 2012

    Where: Room 267-A, at IME/USP
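    A minimal sketch of one of the panning techniques mentioned, constant-power panning (one of several possible panning laws):

```python
import math

def pan(x, position):
    """Constant-power stereo panning: position in [0, 1] maps to an angle
    in [0, pi/2]; left/right gains are cos/sin, so L^2 + R^2 stays constant
    and the perceived loudness does not dip at the center."""
    theta = position * math.pi / 2
    gl, gr = math.cos(theta), math.sin(theta)
    return [(gl * s, gr * s) for s in x]

left, right = pan([1.0], 0.5)[0]  # center: both gains equal sqrt(2)/2
```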

  • Digital Audio Effects, talk 4: Nonlinear processing

    In these DAFx talks, the main concepts in the field of audio effects processing will be presented. In this talk we will present models of nonlinear processing and several types of effects created with this model, such as compressors, limiters, noise gates, de-essers, valve simulators, distortion, overdrive, fuzz, harmonic and subharmonic generation, tape saturation, exciters and enhancers.

    Presenter: Marcio Masaki Tomiyoshi

    When: April 16th, 2012

    Where: Room 267-A, at IME/USP
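    A minimal sketch of a static nonlinearity of the kind used for overdrive, here a tanh waveshaper (one common choice among many):

```python
import math

def overdrive(x, drive=4.0):
    """Static nonlinearity (waveshaper): tanh softly clips the signal,
    generating odd harmonics -- the basic recipe behind overdrive/distortion.
    Normalized so that an input of 1.0 maps to an output of 1.0."""
    norm = math.tanh(drive)
    return [math.tanh(drive * s) / norm for s in x]

shaped = overdrive([0.0, 0.25, 0.5, 1.0])
```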

  • Digital Audio Effects, talk 3: Modulators and Demodulators

    In these DAFx talks, the main concepts in the field of audio effects processing will be presented. In the third talk of the series, basic modulation techniques will be presented, introducing simple schemes for amplitude modulation, single-sideband modulation and phase modulation, with their use in audio effects. Next, we will look at demodulators, which extract parameters from the incoming signal for further effects processing. The combination of these techniques leads to more advanced audio effects, which will be demonstrated by several examples.

    Presenter: Gilmar Rocha de Oliveira Dias

    When: April 9th, 2012

    Where: Room 267-A, at IME/USP
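    A minimal sketch of one modulation technique in this family, ring modulation (multiplication by a sinusoidal carrier):

```python
import math

def ring_mod(x, fmod, sr=44100):
    """Ring modulation: multiply the input by a sinusoid, which shifts each
    input component to the sum and difference frequencies with the carrier."""
    return [s * math.sin(2 * math.pi * fmod * n / sr) for n, s in enumerate(x)]

wet = ring_mod([1.0] * 100, 440.0)  # DC input becomes a 440 Hz tone
```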

  • Digital Audio Effects, talk 2: Filters and delays

    In these DAFX talks, the main concepts in the field of audio effects processing will be presented. In this second talk, fundamental tools for filter design will be introduced, both for frequency-response-based designs and for filters based on delayed copies of the input signal. Some specific filter designs will be discussed, such as equalizers, well-known effects like wah-wah and phaser, FIR and IIR comb filters for reverb, and delay-based effects such as vibrato, flanger and chorus.

    Presenter: Flávio Luiz Schiavoni

    When: March 26th, 2012

    Where: Room 267-A, IME/USP

  • Digital Audio Effects, talk 1: Fundamentals of DSP

    In these DAFX talks, the main concepts in the field of audio effects processing will be presented, corresponding to the chapters of the book DAFX - Digital Audio Effects (ed. Udo Zölzer). In this first talk, we will present the foundations for digital signal processing: digital signal representation, spectral analysis through Fourier Transform, linear systems, convolution, z transform, FIR and IIR filters.

    Presenter: Marcelo Queiroz

    When: March 19th, 2012

    Where: Room 267-A, at IME/USP
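    The FIR filtering covered in this first talk can be sketched directly from its difference equation:

```python
def fir_filter(x, b):
    """FIR filtering as a difference equation: y[n] = sum_k b[k] * x[n - k],
    treating samples before the start of the signal as zero."""
    return [sum(bk * x[n - k] for k, bk in enumerate(b) if n - k >= 0)
            for n in range(len(x))]

# a 2-point moving average smooths a step input
print(fir_filter([0.0, 0.0, 1.0, 1.0], [0.5, 0.5]))  # → [0.0, 0.0, 0.5, 1.0]
```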

  • St. Paul Revisited by Music, Fine Arts, and Mathematics

    Stefan E. Schmidt is Professor for Methods of Applied Algebra at Dresden University of Technology, Germany. This talk gives insight into an interdisciplinary art project by the artist Franziska Leonhardi together with Jonas Leonhardi and Thomas & Stefan E. Schmidt, in collaboration with Immanuel Albrecht & Maximilian Marx et al. from the Institute of Algebra at Dresden University of Technology. Here, a major challenge was a visualization of the oratorio St. Paul by Felix Mendelssohn Bartholdy. The underlying question was how to visualize music and how deaf people can benefit from this.

    Presenter: Franziska Leonhardi and Stefan E. Schmidt

    When: March 13th, 2012

    Where: Antonio Gilioli Auditorium, at IME/USP

2011

  • How to write externals for Pd.

    Material: Flávio Luiz Schiavoni's page at IME.

    Presenter: Flávio Luiz Schiavoni

    When: November 13th, 2011

  • Parallel processing using Pure Data and GPU

    Pure Data (Pd) is a real-time signal processing tool widely used for live artistic performances. CUDA is a platform for parallel processing using NVIDIA GPU boards. In this seminar, I will present these technologies and an ongoing work whose purpose is to allow for real-time parallel signal processing using Pd and CUDA.

    Presenter: André Jucovsky Bianchi

    When: September 30th, 2011

  • Exploring different strategies for music genre classification

    The possibility of streaming and the ease of downloading and storing music on computers and portable devices make AMGC (automatic music genre classification) systems a must. Systems based on metadata analysis might be imprecise, and classifications are by artist or by album rather than by each tune. Other systems, such as those that adopt MFCCs, explore the rhythmic and timbral content of the songs, with LDA, SVM and GMM being the most widely used classifiers.

    Presenter: Antonio José Homsi Goulart

    When: September 27th, 2011

    Where: Room 259-A, at IME/USP

  • Audio production using Linux

    Material: Flávio Luiz Schiavoni's page at IME.

    Presenter: Flávio Luiz Schiavoni

    When: September 2nd, 2011

  • Interactivity in Computer Music and the MOBILE Project

    In this seminar, we will begin by presenting a brief historical overview of works involving interactivity in music production using computational mechanisms. Then, we will present a few academic works developed in Brazil. Finally, we will describe the Mobile Project, which has, as its central theme, the utilization and development of interactive processes in the realm of technologically mediated musical production.

    Presenter: Fabio Kon (IME/USP)

    When: August 29th, 2011

    Where: Room 256-A, at IME/USP

  • Techniques for inter-linguistic voice conversion

    The Inter-Linguistic Voice Conversion problem refers to replacing the timbre or vocal identity of a sentence recorded by one (source) speaker with that of another (target) speaker, assuming that the two speakers pronounce their respective sentences in different languages. This problem differs from typical voice conversion in that the mapping of acoustic characteristics does not depend on the temporal alignment of recordings of identical sentences pronounced by the source and target speakers.

    Presenter: Anderson Fraiha Machado (doctoral student at IME/USP)

    When: August 23rd, 2011

    Where: Room 254-A, at IME/USP

  • Real time digital audio processing using non-conventional devices. Case studies: Arduino, GPU and Android

    Material: André Jucovsky Bianchi's page at IME.

    Presenter: André Jucovsky Bianchi

    When: August 15th, 2011

  • Segmentation methods based on sound descriptors.

    This talk presents a comparative study of different computational methods for musical structural segmentation, whose main goal is to delimit the boundaries of musical sections in an audio signal and to label them, i.e. to group the sections found that correspond to the same musical part. The computational techniques presented in this study are divided into three categories: supervised segmentation, unsupervised segmentation, and real-time unsupervised segmentation.

    Presenter: André Salim Pires

    When: June 16th, 2011

    Where: Room 144-B, at IME/USP

  • A framework for building musical multi-agent systems

    The field of multi-agent systems is a promising technological domain for use in interactive musical performances. In recent works, this technology has been used to solve musical problems of specific scope and limited reach, such as beat detection, instrument simulation and automatic musical accompaniment.

    Presenter: Leandro Ferrari Thomaz (doctoral student at IME/USP)

    When: June 7th, 2011

    Where: Room 254-A, at IME/USP

  • Medusa - A distributed sound environment

    In this seminar we will present a tool being developed for the synchronous distribution of musical content (audio and MIDI streams) over a local computer network. Medusa is a tool that joins the computational resources of a computer network into a distributed environment that simplifies the use of remote resources for musical practice. The seminar will cover the mapping of the tool's desired features, the solutions proposed for its implementation, and the results obtained so far.

    Presenter: Flávio Luiz Schiavoni (doctoral student at IME/USP)

    When: April 28th, 2011

    Where: Room B-16, at IME/USP.