Xavier Serra

(Music Technology Group, Dept. of Information and Communication Technologies, Universitat Pompeu Fabra, Spain)

Homepage: http://www.dtic.upf.edu/~xserra/

Xavier Serra is Associate Professor in the Department of Information and Communication Technologies and Director of the Music Technology Group at the Universitat Pompeu Fabra in Barcelona. After a multidisciplinary academic education, he obtained a PhD in Computer Music from Stanford University in 1989 with a dissertation on the spectral processing of musical sounds that is considered a key reference in the field. His research interests cover the computational analysis, description and synthesis of sound and music signals, with a balance between basic and applied research and approaches from both scientific/technological and humanistic/artistic disciplines. Dr. Serra is very active in the fields of Audio Signal Processing, Sound and Music Computing, Music Information Retrieval and Computational Musicology at the local and international levels, serving on the editorial boards of a number of journals and conferences and giving lectures on current and future challenges in these fields. He was awarded an Advanced Grant of the European Research Council to carry out the project CompMusic, aimed at promoting multicultural approaches in music information research.

CompMusic project: motivation, results, reflections

Slides of the talk

The idea of the talk is to go over the motivations that shaped the CompMusic project http://compmusic.upf.edu, then highlight some of the research results obtained so far, and finally offer some reflections now that the funding from the European Research Council has ended.

In this project we aimed to advance the automatic description of music by emphasising cultural specificity, carrying out research in the field of music information processing with a domain-knowledge approach. The project focused on five music traditions of the world: Hindustani (North India), Carnatic (South India), Turkish-makam (Turkey), Arab-Andalusian (Maghreb), and Beijing Opera (China).

The project contributed extensively to the field of Music Information Retrieval and to the musical cultures it studied. It had a major impact in promoting the topic of cultural/domain specificity, influencing many researchers and institutional initiatives. We compiled and made openly available corpora of the five music traditions studied, and also created 24 datasets for specific experiments around these traditions. We produced 150 publications (with more still in preparation) covering a wide variety of contributions, especially on the extraction of features related to melody and rhythm from audio music recordings and on the semantic analysis of the contextual information of those recordings. We developed new software tools and improved existing ones, which are now becoming a reference in the field.

An important goal of CompMusic was to create corpora and to develop technologies that could be used both by researchers and the general public, while also promoting technology transfer. Dunya http://dunya.compmusic.upf.edu comprises the music corpora and related software tools that have been developed and that are useful for the research community. These corpora include audio recordings plus complementary information describing them. Each corpus has specific characteristics, and the developed software tools make it possible to process the available information in order to study and explore the characteristics of each musical repertoire.

The dissemination strategy of CompMusic has been based on a clear open science model: sharing our ideas, goals, and results as openly and widely as possible. All our publications have been made available as soon as they were written, all our code is open source, and all the data generated is available under open licenses. We have also organised seminars, workshops, and concerts, which have been recorded and made available on the project website, and in general we have been very active in disseminating our work.

Emilios Cambouropoulos

(Cognitive and Computational Musicology Group, Department of Music Studies, Aristotle University of Thessaloniki, Greece)

Homepage: http://users.auth.gr/emilios/

Emilios Cambouropoulos is Associate Professor in Musical Informatics at the School of Music Studies, Aristotle University of Thessaloniki. He studied Physics, Music, and Music Technology before obtaining his PhD in 1998 on Artificial Intelligence and Music at the University of Edinburgh. He worked as a research associate at King’s College London (1998-1999) on a musical data-retrieval project and was employed at the Austrian Research Institute for Artificial Intelligence (OeFAI) in Vienna on the project Artificial Intelligence Models of Musical Expression (1999-2001). Recently he was principal investigator for the EU FP7 project Concept Invention Theory COINVENT (2013-2016). His research interests cover topics in the domain of cognitive and computational musicology (CCM Group - ccm.web.auth.gr), and he has published extensively in this field in scientific journals, books and conference proceedings.

Musical Creativity and Conceptual Blending: The CHAMELEON melodic harmonisation assistant

Slides of the talk

New concepts can be created by ‘exploring’ previously unexplored regions of a given space (‘exploratory creativity’), by transforming or altering established concepts in novel ways (‘transformational creativity’), or by making associations between conceptual spaces that were previously not directly linked (‘combinational creativity’) (Boden, 2009). The latter is linked to the theory of Conceptual Blending (Fauconnier and Turner, 2003). Composers and musicians actively engage in one or more of these modes of creativity in producing novel music creations.

The cognitive, psychological and neural basis of conceptual blending has been extensively studied (Fauconnier and Turner 2003; Gibbs, Jr. 2000; Baron and Osherson 2011). Moreover, Fauconnier and Turner’s theory has been successfully applied to describing existing blends of ideas and concepts in a wide variety of fields, such as linguistics, music theory, poetics, mathematics, theory of art, political science, discourse analysis, philosophy, anthropology, and the study of gesture and of material culture (Turner 2012). However, the theory has hardly been used for implementing creative computational systems. Indeed, since Fauconnier and Turner did not aim at computer models of cognition, they did not develop their theory in sufficient detail for conceptual blending to be captured algorithmically. Consequently, the theory is silent on issues that are relevant if conceptual blending is to be used as a mechanism for designing creative systems: it does not specify how input spaces are retrieved; or which elements and relations of these spaces are to be projected into the blended space; or how these elements and relations are to be further combined; or how new elements and relations emerge; or how this new structure is further used in creative thinking (i.e., how the blend is “run”). In short, conceptual blending theory does not specify how novel blends are constructed.

In this presentation we focus on issues of harmonic representation and analysis, giving special attention to the role of conceptual blending in melodic harmonisation. Firstly, a new idiom-independent representation of chord types, namely the General Chord Type representation, is described; this is appropriate for encoding tone simultaneities in diverse harmonic contexts (such as tonal, modal, jazz, octatonic, atonal, traditional harmonic idioms). Then, methods are presented for statistical learning from (harmonic reductions of) musical pieces drawn from diverse idioms; more specifically, chord types, chord transitions, cadences and melody-to-bass line voice leading are learned from data using HMMs and other statistical models.
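The statistical learning of chord transitions mentioned above can be sketched as a simple first-order Markov model. This is a minimal illustration under stated assumptions, not the actual CHAMELEON implementation: the Roman-numeral chord symbols and the toy corpus are invented for the example, and a full HMM would additionally model melody notes as observations attached to each chord state.

```python
from collections import Counter, defaultdict

def learn_chord_transitions(sequences):
    """Estimate first-order transition probabilities P(next | prev)
    from analysed chord sequences (a plain Markov bigram model)."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    # Normalise raw counts into conditional probability distributions.
    return {
        prev: {nxt: n / sum(c.values()) for nxt, n in c.items()}
        for prev, c in counts.items()
    }

# Toy corpus standing in for harmonic reductions of pieces in one idiom.
corpus = [
    ["I", "IV", "V", "I"],
    ["I", "ii", "V", "I"],
    ["I", "IV", "ii", "V", "I"],
]
model = learn_chord_transitions(corpus)
print(model["V"])  # V resolves to I in every training sequence
```

In a harmonisation setting, an HMM decoder would combine such learned transition probabilities with a chord-to-melody compatibility score to pick the most probable chord sequence for a given melody.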

Finally, a computational account of concept invention via conceptual blending is realised through harmonic ontology amalgams, yielding original blended harmonic spaces. The CHAMELEON melodic harmonisation assistant produces novel harmonisations of given melodies in diverse musical idioms and also blends different harmonic spaces, giving rise to new ‘unexpected’ outcomes. Many musical examples will be given that illustrate the proposed representations and analytic/generative outcomes; additionally, a set of empirical evaluation tests will be presented.
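As a deliberately simplified illustration of what combining two harmonic spaces might mean computationally, the sketch below linearly interpolates two toy chord-transition tables and renormalises. This is not the amalgam mechanism of CHAMELEON, which operates on structured harmonic ontologies rather than probability tables; the chord symbols and numbers are invented for the example.

```python
# Two toy transition tables standing in for learned harmonic spaces
# of different idioms (hypothetical figures, not learned from data).
tonal = {"I": {"IV": 0.5, "V": 0.5}, "V": {"I": 1.0}}
modal = {"I": {"bVII": 0.7, "IV": 0.3}, "bVII": {"I": 1.0}}

def blend(p, q, w=0.5):
    """Blend two transition tables by weighted interpolation of
    P(next | prev), renormalising each resulting distribution."""
    out = {}
    for prev in set(p) | set(q):
        merged = {}
        for nxt in set(p.get(prev, {})) | set(q.get(prev, {})):
            merged[nxt] = (w * p.get(prev, {}).get(nxt, 0.0)
                           + (1 - w) * q.get(prev, {}).get(nxt, 0.0))
        total = sum(merged.values())
        out[prev] = {nxt: v / total for nxt, v in merged.items()}
    return out

blended = blend(tonal, modal)
```

Even this naive mixture admits progressions that neither input space allows on its own (e.g. I-bVII followed later by V-I), loosely mirroring how a blended harmonic space can yield ‘unexpected’ outcomes.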

Damián Keller

(Amazon Center for Music Research, Federal University of Acre, Brazil)

Homepage: https://ccrma.stanford.edu/~dkeller/

Damián Keller holds a post as associate professor of music at the Federal University of Acre. He is founder and principal investigator of the Amazon Center for Music Research (NAP). A member and cofounder of the Ubiquitous Music Group, his research focuses on ecologically grounded creative practices and ubiquitous music. His output encompasses over 150 scientific publications, 20 editorial projects and several artistic projects funded by US and Brazilian agencies. Recent artworks include the Palafito 2.0 and Green Canopy 5.0 installations, featured at the II Biennial of Latin American Art in Denver, CO.

Challenges for a second decade of Ubiquitous Music (UbiMus)

Slides of the talk

The first part of my talk deals with key contributions of ubiquitous music research to the computer music field. The concepts of everyday creativity, sustainability and participatory design have fueled interdisciplinary discussions impacting both CM research methods and artistic practices. Case studies and artistic products were presented as invited exhibits, talks and panels at the Biennial of Latin American Art in Denver, Colorado (2013), Anppom (2014), SIMA (2015) and SEMPEM (2016). An upcoming issue of Per Musi features a section dedicated to ubimus, and special volumes were published by Sonic Ideas (2013), Cadernos de Informática (2014) and Scientia Tech (2015). Aside from the multiple chapters and papers that have appeared in specialized publications over the last few years – such as the Journal of New Music Research, Organised Sound, and the Journal of Music Technology and Education – a reference volume was released by Springer in 2014. Hence, I believe we can say that ubiquitous music constitutes a consolidated research field.

Despite these advances, everyday musical activities still present a unique set of challenges. One of the objectives of ubimus endeavors is to provide access to creative music making to a wide range of participants. Several projects have targeted both musicians and non-experts in collaborative activities. Supporting good-quality musical products without creating unnecessary barriers to novice participation is particularly tricky. Initial trials indicated that targeting the requirements of casual users could widen their participation in creative musical activities (Miletto et al. 2011). Nevertheless, this solution came at a price. Novice engagement seems to be encouraged by the participation of musicians (Ferreira et al. 2015). Therefore, support for creative musical activities would have to fulfill both the requirements of participants without musical training and the needs of participants with musical expertise. This remains an open research question.

The increased level of participation of musically naive stakeholders and the distributed nature of everyday resources provide a new context for design strategies. Traditional musical concepts such as “the instrument” and “the score” present multiple shortcomings when applied to everyday musical phenomena. Ubimus research has yielded alternative approaches, such as the development of creativity support metaphors. These metaphors can be used to guide the implementation of support infrastructure. Determining whether the metaphors are effective means of support for creative activities demands experimentation and data collection in real settings. Thus, multiple studies have dealt with the assessment of creative products and processes while subjects carried out musical activities in everyday contexts. I will summarize and discuss the results of field studies employing time tagging (Keller et al. 2010), the stripes metaphor (Farias et al. 2014), and the sound sphere metaphor (Bessa et al. 2015). I will try to uncover the limitations of these proposals in order to lay out a path of possible ubimus research avenues to be explored during the next decade.