Brazilian Computer Music Symposium

Chairman: Aluizio Arcela

Brasília, DF, August 3-7, 1997
Centro de Convenções Ulisses Guimarães
XVII Brazilian Computer Society Conference

Brazilian Group for Computer Music Research





Table of Contents

Chairman's Note
Opening Session
Invited Talks

Max Mathews (1)
Max Mathews (2)
Barry Vercoe

Round Tables

Tape Solo
Lexikon Sonate
Three-Threaded Invention

Financial Support
Nucom Annual Meeting
Appendix A - Call for Works
Call for Papers
Call for Concerts
Call for Tutorials
Appendix B - SBCM '98




With photographs, texts in Portuguese and texts in English, this report records the entire realization of the IV Brazilian Symposium on Computer Music: the criteria adopted in its conception, the call for works, the theme, the program committees, the production team, the financial support, and everything else, up to the history and chronology of the events that took place between August 3 and 7, 1997, in Brasília-DF, more precisely in the Alvorada auditorium and in rooms 9 and 10 of the Centro de Convenções Ulisses Guimarães. Presenting a report at this level of detail, besides honoring a symposium that is sui generis on the international scene, is an attempt to leave the organizers of future editions of the SBCM a roadmap that has proved reliable and earned the unanimous approval of the community.



Chairman's Note

Last year, when we began to design the format for this fourth symposium, it was clear right from the start that a good theme to explore would be something connecting music to network technology. Certainly, current computer network technology is capable of promoting a significant evolution in music, not only in sound generation but also in computing and performing whole compositions, or at least in calculating large structures such as melodies and timbres in real time. However, no matter what theme is in focus, a true evolution will only be possible if music itself can give computer science significant and useful knowledge in return, so as to balance and reconcile the condition of a technology user with that of a technology developer.

On the other hand, given that computing is art as well as logic, compilers, or databases, the contribution of music to computer science must also be an effort to correlate musical intelligence with some kind of human computability. To write compositions, or to write programs capable of writing compositions, is a way of finding out how such musical intelligence works. I am sure this is why the Brazilian Symposium on Computer Music is going ahead along with the computer science community rather than alone. Four meetings in partnership with the Brazilian Computer Society are a true demonstration of mutual confidence and commitment.

Aluizio Arcela

Universidade de Brasília
Departamento de Ciência da Computação






Opening Session

Sunday, 3rd
Room: Alvorada


The table was composed of the then president of SBC, Ricardo Reis (UFRGS); the representative of the Computer Music committee at SBC, Maurício Loureiro (UFMG); the newly elected president of SBC, Sílvio Meira (UFPE); and the chairman, Aluizio Arcela (UnB), who presided over it. The positive aspects of the cooperation between computing and music in Brazil were emphasized.

From left to right: Ricardo Reis (President of SBC), Sílvio Meira (newly elected President), Aluizio Arcela, and Max Mathews.


Agnes Daldegan, Geber Ramalho, and Maurício Loureiro.


From right to left: Conrado Silva, Marjorie Mathews and Max.


Aluizio, Bob Willey, Lilian Campos, Márcio Brandão, Ricardo Ribeiro.




Talk I Tuesday, 5th
Room: Alvorada, 10:20-12:00

Max Mathews (CCRMA, Stanford University, USA)

Abstract. Computer performance of music was born in 1957 when an IBM 704 in New York City played a 17-second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound, and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines--they were far too slow to synthesize music in real time. Chowning's FM algorithms and the advent of fast, inexpensive digital chips made real-time performance possible and, equally important, made it affordable.


From left to right: Maurício, Geber, Didier Guigue, Jônatas Manzolli, Behind: Jorge Sad and Bob Willey.

Lilian, Ricardo, Rodolfo Caesar, Maurício, and Geber.


Max checks the audiovisual equipment and ...

... ties a string to the microphone. The history of computer music is about to be related.

"Some time in 1956 Max V. Mathews and John R. Pierce attended a concert at Drew University in Madison, New Jersey. Dika Newlin, musicologist and pianist, played assorted pieces, including a waltz by Arthur Schnabel, the eminent exponent of Beethoven. Mathews and Pierce were in an irreverent mood. One or the other said, 'The computer can do better than this.'

Back in his laboratory, Mathews set out to use the computer to produce musical sounds ("Max, write a program!", said John Pierce to Max Mathews). He wrote a compiler which would translate simple instructions into code that would make a computer generate a sequence of binary numbers representing successive amplitudes of a musical sound wave. He asked Newman Guttman, a linguist and acoustician, to compose a tune to be played on the computer. Thus, on or about May 17, 1957, In the Silver Scale, the first piece of computer music, was heard at Bell Telephone Laboratories in Murray Hill, New Jersey."

(From the article "Recollections by John Pierce" in Computer Music Currents, vol. 13, Schott Wergo Music Media GmbH, Mainz, Germany, 1995)
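The scheme Pierce recalls - a program emitting successive sample amplitudes of a sound wave - can be sketched in a few lines. This is a minimal illustration in Python, not Mathews' actual Music I code; the sample rate, pitch and function name are arbitrary choices for the sketch.

```python
# A minimal sketch of the idea Pierce recalls: a program that computes
# successive sample amplitudes of a musical sound wave. This is not
# Mathews' Music I code; the sample rate and pitch are arbitrary choices.
import math

def tone_samples(freq_hz, dur_s, sample_rate=8000, amp=1.0):
    """Return the successive amplitudes of a sine tone, one per sample."""
    n = int(dur_s * sample_rate)
    return [amp * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

# 10 ms of A440: 80 successive amplitudes at 8000 samples per second.
samples = tone_samples(440.0, 0.01)
```

Fed through a digital-to-analog converter at the same rate, such a list of numbers becomes audible sound, which is the breakthrough the 1957 experiment demonstrated.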


He plays history's first computer-generated sounds.


Geber, Aluizio, Max, Maurício, and Conrado.




Talk II Wednesday, 6th
Room: Alvorada, 10:20-12:00

Max Mathews (CCRMA, Stanford University, USA)

Abstract. Starting with the Groove program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the radio-baton, plus a program, the conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the "C" language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using "C". Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.


At the center: Ana Miccolis, Didier Guigue, and Mauricio.


Max describes the technology of the radio-baton and plays the first movement of Beethoven's 5th Symphony.


Clockwise: Fernando Iazzetta, Carlos Cerana, Felicia Tracogna, and Conrado Silva.


At the top: Geber Ramalho and Barry Vercoe.


A Radio-Baton Lesson.


Max Mathews and Alex Meirelles.


Jônatas Manzolli with the baton at hand ....


The chairman also tries ...




Talk III Thursday, 7th
Room: Alvorada, 10:20-12:00

Barry Vercoe (MIT Media Lab, Massachusetts Institute of Technology, USA)

Abstract. For several years Csound has been the language of choice for computer music synthesis. Three concurrent developments have now taken that in distinct new directions: onto the Internet, with client side rendering of server-defined orchestras and scores; into the PC sound-world, with DSP-based realtime processing of live input and of synchronized accompaniments; and into the broadcast community with the adoption of Structured Audio within the MPEG-4 audio standard. We present an overview of each of these, with live demonstrations of at least one, and describe how the global computer music community can become a developer force in all of them.


Barry Vercoe being assisted by Alex, Ricardo, and Lilian in the arrangement of his talk.


"Barry Vercoe first visited Bell Laboratories in 1967 before he was a student at Princeton. From Princeton he went to MIT, where he wrote Music 360 (for the 360 computer) and later Music 11 for a PDP11 computer that Mathews gave him as surplus no longer useful at Bell Laboratories. He later developed Music 11 into Csound. He also worked at IRCAM and MIT to enable the computer to follow a musical performance. Vercoe is now a professor of music at MIT's Media Laboratory."

(From the article "Recollections by John Pierce" in Computer Music Currents, vol. 13, Schott Wergo Music Media GmbH, Mainz, Germany, 1995)


Singing the Beatles' song "All My Loving" to demonstrate his pitch-tracking system.


Bob Willey assisting Barry Vercoe's talk at the keyboard



August 4th, Monday
Round Table I

Room: Alvorada, 8:00-9:40

Geber Lisboa Ramalho, DI-UFPE

 Didier Guigue, DM-UFPB
Maurício Loureiro, CPCM-UFMG
Jônatas Manzolli, NICS-Unicamp
Rodolfo Caesar, Esc. de Música, UFRJ
Fernando Iazzetta, PUC/SP

August 7th, Thursday
Round Table II
Room: 9, 14:00-16:30

 Aluizio Arcela, CIC-UnB

Kathryn Vaughn, Berklee College of Music, USA
Max Mathews, CCRMA-Stanford University, USA
Barry Vercoe, MIT Media Lab, USA
Conrado Silva, MUS-UnB, Brazil
Robert Willey, CRCA-UC, San Diego, USA



August 4th, Monday
Papers I Room: 9, 14:00-16:30
Chairman: Maurício Alves Loureiro (Esc. de Música, UFMG, Brazil)

Kenny McAlpine, Eduardo R. Miranda, Stuart G. Hoggar
(Dept. of Mathematics and Dept. of Music, University of Glasgow, Scotland)

Mladen Milicevic (University of South Carolina, USA)

Aditya P. Mathur
(Department of Computer Science, Purdue University, USA)

Juan Reyes, Mauricio Rincon
(Universidad de Los Andes, Colombia)

Geber Lisboa Ramalho (Departamento de Informática, UFPE, Brazil)

Eduardo Reck Miranda (Dept. of Music University of Glasgow, Scotland and
Dept. of Music, UFSM, Brazil)

August 5th, Tuesday
Papers II Room: 9, 14:00-16:30
Chairman: Geber Lisboa Ramalho (Dep. Informática, UFPE, Brazil)

Victor Lazzarini (Depto de Arte, Univ Estadual de Londrina, Brazil)

Márcio da Costa Pereira Brandão, Ricardo Staciarinni Puttini,
Luis Antônio Brasil Kowada, Carlos Antônio Jorge Loureiro
(Depto. de Ciência da Computação, Universidade de Brasília, Brazil)

Marcelo Moreira
(Núcleo Interdisciplinar de Comunicação Sonora, UNICAMP, Brazil)

Ciaran Hope, D.J. Furlong
(Dept. of Electronic and Electrical Eng., Trinity College Dublin, Ireland)

Edwin Loboschi, C.J.B. Pagan, Yaro Burian Jr., Paulo S. dos Santos
(Faculdade de Eng. Elétrica e de Computação, Unicamp, Brazil)

August 6th, Wednesday
Papers III Room: 9, 14:00-16:30
Chairman: Rodolfo Caesar (LaMuT, Escola de Música, UFRJ, Brazil)

Ana Miccolis (Engenharia de Sist. e Computação-COPPE, UFRJ, Brazil)

Marisa Beck Figueiredo, Caetano T. Júnior, Agma J. Machado Traina
(Depto de Ciências de Computação e Estatística, USP São Carlos, Brazil)

Gilberto Carvalho, Vladimir Agostini Cerqueira
(Escola de Música, UFMG, Brazil)

Didier Guigue (Departamento de Música, UFPB, Brazil)

Ricardo dal Farra
(Estudio de Música Eletroacústica, Buenos Aires, Argentina)

Martin Alejandro Fumarola
(Laboratory of Electroacoustic Music, National University of Córdoba, Argentina)


August 7th, Thursday
Papers IV Room: 9, 08:00-10:00
Chairman: Geber Lisboa Ramalho (Dep. Informática, UFPE, Brazil)

Maurício A. Loureiro, Hélder Soares de Souza, Guilherme A. S. de Castro,
Hugo Bastos de Paula, Leandro de Faria Freitas, Marlon P. de Rezende,
Willy Garabini Cornelissen
(CPCM, Escola de Música, UFMG, Brazil)

Rodolfo Caesar (LaMuT, Escola de Música, UFRJ, Brazil)

Aluizio Arcela (Depto. Ciência da Computação, UnB, Brazil)

Abstract. A client-server application for music is described in which graphic servers are arranged so as to provide visual counterparts to the real-time events generated by a sonic client. The client must concurrently read as many scores as there are servers in order to produce a coherent basis for a sound-image composition system. A Java implementation with three servers and one client is discussed, and an extension of the program is proposed. The scores--called spectral charts--are those generated by the time-trees [1,2] and serve as raw material for human real-time composition from a client interface. Properly speaking, the composition system works on a half-human, half-machine basis: besides having a set of previously computed melodies, the process of melodic sequencing continues by itself endlessly when the human composer stops interacting with the system.
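The concurrent score-reading scheme the abstract describes can be sketched roughly as follows. This is a hypothetical illustration in Python rather than the paper's actual Java implementation; the names GraphicServer, play_score and run_client are inventions for the sketch, and toy event lists stand in for the spectral charts.

```python
import threading

class GraphicServer:
    """Stand-in for a remote graphic server: it simply records the
    real-time events forwarded to it by the sonic client."""
    def __init__(self, name):
        self.name = name
        self.events = []

    def render(self, event):
        # A real server would draw a visual counterpart of the event.
        self.events.append(event)

def play_score(score, server):
    # Read one score and forward each timed event to its dedicated server.
    for event in score:
        server.render(event)

def run_client(scores, servers):
    # The client reads as many scores as there are servers, one reader
    # thread per (score, server) pair, all running concurrently.
    threads = [threading.Thread(target=play_score, args=(sc, sv))
               for sc, sv in zip(scores, servers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Three servers and one client, as in the abstract; each toy "spectral
# chart" is a list of (time, pitch) events.
servers = [GraphicServer("server-%d" % i) for i in range(3)]
scores = [[(t, 60 + i) for t in range(4)] for i in range(3)]
run_client(scores, servers)
```

One thread per score keeps each score-to-server stream ordered while the three streams advance concurrently, which is the property the sound-image system needs.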




August 5th, Tuesday
Tape Solo Concert I Room: Alvorada, 17:00-18:00

Celso Aguiar (USA-Brazil)

This title was drawn from the writings of Walter Smetak (composer, instrument-builder, cellist and writer) to whose memory the piece is dedicated. The piece is about sound transformation, as a metaphor to the transformation of consciousness. Metallic percussion sounds are ever-present, while original cello sounds are broken into their rawest components. The basic cuisine for the piece was set up from these spices, and the dish is to be served hot. The cello has its identity transformed: its defining harmonic series is turned inharmonic, sounding closer to the metallic percussion. The pitches from this now bent, inharmonic series, are used as framework for a melodic-timbral game (the 'blue pencil on a blue sky') played by cello and percussion. The cello transformations were obtained with SMSplus, a CLM system built on top of Xavier Serra's Spectral Modeling Synthesis and developed by the composer. All blue was originally composed in 1996 for four-channel tape - from which this stereo version was made. A procedure for modeling the physical properties of a room via feedback-delay-networks was employed (Ball within a Box, developed by Italian researcher Davide Rocchesso at CCRMA, with additional enhancements by the composer). A closer impression of the original virtual space is obtained when listening to the piece with headphones.

UNÁCRÈS (1996)
Ralf Ollertz (Germany)

Together with Pyrócua and Cral'une, Unácrès is part of a trilogy about a journey through a desert in Chile. The piece was commissioned by the Academy of Arts of Berlin, in whose studio it was produced. The material is based on my recordings of different glass sounds: glasses, crashed mirrors, bottles, broken windows, etc. Using Macintosh computers, the compositional work was done with programs like Sound Designer, Pro Tools, several GRM Tools, granular synthesis and, of course, traditional transposition and modulation techniques. As in all my works, the techniques and programs I use are never the starting point of the composition process, but rather a companion in realizing my issues and ideas.

Mara Helmuth (USA)

Chimeplay was written on the NeXT computer for my wedding in August 1994 to express the joy of that time. The sound sources were obsidian wind chimes, bells and the voices of my husband and myself, processed with Cmix and rt on the NeXT computer. Patterns heard in the percussion and voices come from musical algorithms based on pitches and rhythms of the wind chimes. I used linear predictive coding to transform the voices, phase vocoding to stretch and deepen the wind chimes, and room simulation to create different environments. I wanted to create a world of chiming sounds which evolve through timbres from brilliant to heavy and dark, and even through the playfulness of the human voice.

CHOI-HUNG (1996)
Juan Reyes (Colombia)

This is a piece for flute sounds and modeling of timbres from the Far East. The focus is on the sound of the shakuhachi. The timbre, for the most part, was achieved by using a technique known as Spectral Modeling Synthesis (SMS), developed by Xavier Serra. The inspirational sources for the piece are a shakuhachi performance, a subway station in Hong Kong, and its environment at the time. Humidity, hot temperature, and wood greatly inspire this soundscape. This piece was composed at MOX - Center for Advanced Computation at Universidad de Los Andes in Bogotá, Colombia.

AM STEG (SPACES) (1992-96)
Javier Garavaglia (Argentina)

The main sound material of this piece is that of three notes played sul ponticello (in German, am Steg) on a viola. The richness in harmonics of this complex led me to compose a piece that would be a kind of study on spectra, consisting mainly of transpositions and reverberations of the main sound. The first version was finished in 1992, as I went on to another project also involving viola sounds (which would become my piece called pizz, based on a single pizzicato on the C string of the viola). Its name was Räume, which in German means Spaces. At the beginning of 1996 I returned to this first viola project - which had not been performed up to that time - and decided to make some radical changes, leaving only the frame of the work but adding more to it: for example, making filter chains to improve the spectral quality of the sound, letting some harmonics resonate more than others, adding very low and very high FM and FM-filtered frequencies following the amplitude envelope of the original sounds, and so on. The sense of space of the first version was not lost, which is why I gave the piece almost the same name, though in another form, while also mentioning the source of the sound in the title. The piece can also be performed together with its sister piece pizz as a viola cycle.

Paulo Vivacqua (Brazil)

The work is divided into four parts, which show internal articulations. The first two parts alternate two different materials: one internal, waves produced by additive synthesis built on the computer, and another external, with fragments of short-wave radio sound. Hardware employed: a PC 486 with audio and a short-wave radio. Software employed: Wave and Cool Edit for audio processing, and SAW as a sequencer.

PerCurso (1997)
Fernando Iazzetta (Brazil)

The composition of PerCurso started about one year ago. At that time I was interested in using percussive sounds to create electronic music. For me, it was really challenging to transform those short, punctual sounds into rich material to be used in music composition. All sonic material in PerCurso comes from a short fragment of clapping sounds presented at the very beginning of the piece. This fragment was processed and transformed using a number of different software tools and techniques. The result was a large library of sounds which I used to compose the piece.

Patricia Martinez (Argentina)

This piece is an attempt to render in music the impression of a plastic work: to describe not the painting, but the first impact on the subject - my own impression in this case - of that almost automatic and hardly apprehensible moment of meeting the visual, temporally embodied in music. The painting on which this work is based is called "The nail in the picture", and belongs to the contemporary German plastic artist Gunter Eucker. The idea or concept that I have developed is the challenge of trying to transfer certain mechanisms of aesthetic and temporal perception from visual art to an art apparently opposite in essence, form and content, as music is. I wanted to turn, for the listener, during the hearing of a work, chronological time (implacably continuous) into psychological time, where the flow of music is transformed into sound paths seeking to exalt only "the instant". In general, I worked on the elaboration of compact and regular sound structures, layers or walls of complex sound that are juxtaposed, superimposed or distorted, gaining mobility and independence through the musical discourse. The organization of these materials is conceived as an experiment in articulating the sound language by means of the fusion of timbral interweavings under permanent transformation.

Aquiles Pantaleao (Brazil)

Concreta intends to discover the identity of the materials used, through the exploration of their spectral contents. The materials are deprived of their immediately recognizable features - considered external here - such as contour, gesture, etc., in an attempt to emphasize tone color and internal space. Even though the subsequent morphological development of those materials follows an abstract course, strong qualities of body, mass, density and substance are retained. The title is as much a reference to the reinforced concrete blocks that furnished most of the sounds used as an homage - even if obvious - to musique concrète.


August 6th, Wednesday
Tape Solo Concert II Room: Alvorada, 17:00-18:00

Dennis Miller (USA)

Ramparts, for solo tape, was written in 1996 and received its premiere early in 1997. The piece is in two large sections that are similar in their use of granulated textures, but distinct in other ways, primarily register and dynamics. The sonic material for the piece was generated by the Kyma Sound Design Workstation and processed via a Kurzweil K2000 sampler. Like many of the composer's recent works, the formal outline is defined by a balance of repeating, referential materials and unexpected, contrasting elements.

James Brody (USA)

Theta Ticker, composed in December of 1996, is the result of an ongoing (naïve) sound experiment to see if there might be some direct correlation between musical rhythm and brain waves. The obvious repeated sound is at a frequency of 5.2 Hz, right in the theta brain-wave band. A penetrating quality to this sound seems to have some kind of hypnotic effect. If some listeners find the sound environment unendurable, they should feel free to leave for a minute. Structurally, the other sounds in the piece are organized in the proportions of the Fibonacci series, 1-1-2-3-5-8-13. The transformations of a small number of 'real-world' sound sources were accomplished using the Sound Forge and Cool Edit programs on a Pentium 150 computer in the composer's home studio. A final transfer was then made to Digital Audio Tape.

Catalina Peralta (Colombia)

The main concept remains: an interchange between the different instrumental characters of a double stringbody constructs a kind of recitativo in space. On the tape: Indian sitar and violin; there were no other synthetic or electronic sounds. This work is based on motifs that are at the same time instrumental characters, configurations of instrumental sound objects. The "stringbody" expands and compresses itself, contrasting with its own time structure, breaking it often but going forward, enjoying the distance from recognizable auditive processes of pure acoustical instruments. The piece was produced at the Laboratorio de Composición Electroacústica, Depto. de Música, UNIANDES.

Thomas Wells (USA)

My basic idea was to create an entire work from spectral manipulation of vocal sounds, using techniques of spectral interpolation, compression, and expansion. The primary sound source in this work is an excerpt from Don Carlo Gesualdo's Tenebrae, recorded by the Hilliard Ensemble, and used by permission of ECM Records. The music is dark in mood, befitting the text and the style of Gesualdo's vocal setting. The 60" excerpt on which the entire work is based was transferred directly from CD to hard disk using an Audio Digital Systems AES/EBU/SCSI interface developed at The Ohio State University under a National Endowment for the Arts Centers for New Music Resources Grant. The recorded material was processed using phase-vocoder-based spectral modification programs written by Christopher Penrose, Eric Lyon, and myself -- programs to perform timbral interpolation, spectral expansion, compression, and inversion. These programs run on a NeXT 040 Cube in the Ohio State University Sound Synthesis Studios. The IRCAM Super Phase Vocoder, running on a DECStation 5000/200 was employed in this work for time expansion, and the Mark Dolson phase vocoder, running on a SUN 3/280 was used to provide analysis files for manipulation using cmusic scripts. Linear prediction was used, albeit very sparingly, using the mxv application, written by Doug Scott, and running on the NeXT 040 Cube. Timbral interpolations were made between different excerpts of the Gesualdo samples, as well as between Gesualdo samples and software-synthesis produced sounds (made using FM, waveshaping, granular synthesis, and resonant-filter synthesis). The spectral modification techniques were applied, for the most part, successively, in order to achieve a desired timbral richness, and to distance the material somewhat from the original patterns and inflections of the Gesualdo. Mixing was done with the Lansky/Dickiert application. 
Recording was made on a Sony PCM-2500 DAT, recording directly digitally from the Audio Digital Systems interface.

Hwang Sung Ho (Korea)

TV Scherzo, based on a dance, Fire, Mist (1991), materialized a theme, TV satire based on the events that occurred in 1994-1995. This is about the loss of our true senses in the reality reflected through TV, which causes confusion in our perception about time and space, reality and virtuality. In the manner of TV programs, it is a simple flow of ordinary daily life without pursuing any formal shape or structural unity. The effect of experiencing minimalism, aleatory, tonality, harmony and sound is intensified by rhythmic contrast of comic gesture at the beginning and the poly-rhythmic gesture in the middle, and the experimental sound in the last part. This work evinces an interest in the strong rhythmic gesture of the composer, and is a product of modernism, which is possible because of today's electro-domineering environmental culture.

Jorge Antunes (Brazil)

This piece, only 2 minutes long, was produced at the Studio Charybde of GMEB, in France, in May 1995. It is a tone-colour development of the note E (Mi). According to Jorge Antunes' Chromophonic Theory, the octaves of this note match the colour violet. Therefore, the mini-composition is a kind of tone-colour-time picture, where different nuances and violet shades blend, unfold and evolve in time. Besides the Pro Tools software used for editing, the composer used the Sample Cell software for the construction of tone-colour transformations, and also the sound synthesis technique which Antunes calls slipping harmonics, where the synthetic evolved sound of the spectrum comes to be a changing time web. The junction of E notes sung by a female voice lends an erotic character to the little musical speech.

Mladen Milicevic (Bosnia-Herzegovina)

The only musical materials used in ATARI ETUDE are 132 different glissandi, defined within a range of ten octaves. Thanks to FORMULA's ability to handle incredibly fast playing speeds, an additional sound dimension of the glissandi could be exploited. Sounds generated by Yamaha's TG77, played at fast speeds, changed their original identity. The transition from synthesized sounds to the "speed modulated" sounds was the basic constructive element in structuring this minimal piece.

Gerald Eckert (Germany)

Diaphane (Diaphanous) for 2-track tape was composed in 1995 at ICEM (Institute for Computer Music and Electronic Media) at the Folkwang-Hochschule in Essen, Germany. The title (cf. diaphan - diaphanous) is to be understood as a concept: a stratum which is in itself complex, and has been composed using various means, is overlaid by several different strata - or, expressed as an association, a surface changes its form due to the simultaneous appearance of different-colored lights refracted by a prism. The result is the overlapping of two different kinds of structures, comparable to the interference of two pieces of film laid over each other. This happens in "Diaphane" at carefully chosen points which, temporally, are uniquely related. This work was composed using various kinds of technology. The "concrete" sound material was obtained from the sounds of percussion, speech and machines and was digitally revised. The sound structure was created with the language Csound.

Ricardo Dal Farra (Argentina)

With "Tierra y sol" I am trying to reflect not only the sonorities of the Andes mountains, but also the pace, the mood, the different ways of change, the hopes (or non-hopes), the times of the people's vital cycle there. All sonic materials heard on this piece were derived from ancient Andean woodwind instruments like quenas and mohoceños; cross-cultural musical instruments like charangos, and even the classical guitar; and from the voice of a folk singer still living in the mountains. The original version of "Tierra y sol" was commissioned by the International Computer Music Association (1996 ICMA Commission Award). This new version was edited and remixed in 1997.

Pete Stollery (Scotland)

My previous tape piece Altered Images was concerned with the dual interpretation of the word "image" on both aesthetic and sonic levels; Onset/Offset is concerned, even more than before, with exploiting the interplay between the original "meaning" of sound objects and their spectro-morphological characteristics. Thus, there are many recognizable sounds in this piece which can, and should, be perceived on both levels - the sound of a key in a lock on one level refers to the action of unlocking a door, but on another is also interesting as a pure sound in itself. Onset/Offset was realized in the Electroacoustic Music Studios at Northern College, Aberdeen and at the University of Birmingham in April 1996. It received an Honorable Mention in the Stockholm Electronic Arts Award, 1996.



Man-Machine Concerts

Room: Alvorada, 17:00-18:00

Ciaran Hope (Ireland)

The Persistence Of Memory is based on the painting of the same name by Salvador Dali, painted in 1931. This painting is one of Dali's most memorable Surrealist works. One hot August afternoon in 1931, Dali came upon one of his most stunning paranoid-critical hallucinations. Upon taking a pencil and sliding it under a bit of Camembert cheese, which had become softer and runnier than usual in the summer heat, Dali was inspired with the idea for the melting watches. These are a symbol commonly associated with Dali's Surrealism, where he literally uses them to show the irrelevance of time. The piece is based on the note E, and like the picture itself, everything is 'suggested'. It is a one-movement piece, containing an introduction, a finale, and four thematic sections. The watches are all described in thematic sections, while references are made concurrently to other features of the painting.

Allen Strange (USA)

For alto saxophone and digital media (1993). I: Confirmation; II: Ludes n' Licks; III: Retroriffs; IV: The Street; V: Scrapplin; VI: Blues for Billy; VII: Backstreet

This is the most recent in a series of compositions for instruments and electronic/computer sounds. Like all its predecessors the music is virtuosic in gesture and based on an active metaphor that permeates the composition, in this case "flutter." In this case the "fluttering" refers to the music of the legendary jazz saxophonist, Charlie "Yardbird" Parker. The composition is structured as a series of seven character variations made from "Parkerish" things. In some cases there are very obvious variations and quotes from famous Parker tunes. In other instances characteristic Parker improvisatory phrases are quoted and expanded and large sections are based on scalar patterns indigenous to Parker's performances. As this work is a tribute to the master improviser it was obvious to me that some of the content should be left to the spontaneous skills of the performer.

Bob Willey (USA)

Caxambulismo is an interactive improvisational environment written in MAX using a Buchla Lightning controller. It was made for Mauricio Loureiro, and when the wand is attached to his clarinet, the vertical movements of the instrument control the pitches played by the computer, while the horizontal movements regulate the timbres. Vertical motions of the leg to which the second wand is attached control the volume of the synthesizer, while its horizontal motions regulate the duration of the notes played.

Jorge Sad (Argentina)

From its beginning the Colectivo de Creación Sonora was conceived as a place for exploring and researching the sound possibilities of traditional instruments and their capacity to blend with electroacoustic sounds. The group tends to appear as a living synthesizer, in which the source of sound events is often hard to identify. The CCS looks for a way of working that incorporates the players' intuition and hearing as a structural part of music creation. Sound objects achieved this way are the outcome of a shared elaboration process rather than the product of an isolated individual. The members of the Colectivo de Creación Sonora are: Juliana Moreno, flute; Enrique Entenza, bandoneon; Germán Meira, electric guitar; Martín Devoto, cello; Jorge Sad, composition and conduction. The CCS was born in 1994 and had its first performance during the Week of Music and Electroacoustical Means '94 at the Centro Cultural Recoleta. It was chosen to take part in the 6th International Symposium on Electronic Art (ISEA 95) in Montreal. The piece Klang/Clan was commissioned by the Fundación Música y Tecnología and was premiered at the Centro Cultural Recoleta on November 1st, 1996.


Carlos Cerana (Argentina)

El poder de lo invisible (The power of the invisible, 1996) represents a confluence of two different cultures far apart in time and space. The practice of T'ai-Chi Ch'uan was developed by Taoist monks of China as a way to harmonize man with the Universe by balancing the energies of body, emotions and mind. The Lightning is a spatial MIDI controller: two infrared transmitters on the forearms of the performer allow a sensor to detect their position and displacements. Software developed in MAX uses these gestures as control parameters for a synthesizer. The ancient form of the T'ai-Chi --rendered by Felicia Tracogna-- is translated to sounds by means of present technology: an audible manifestation of Supreme Unity.

Carlos Cerana and the performance of Felicia Tracogna

[goto Table of Contents]

Demonstration Concerts

Room:Alvorada, 17:00-18:00

Jônatas Manzolli (Brazil)

"Névoas e Cristais" (Portuguese for clouds and chrystals) is an interactive piece for computer and Vibraphon. The sonic clouds are produced by accumulation of fast and short sounds and punctuation of chords and clusters generate fragmented sonic chrystals. The computerís role is to follow the performerís actions playing a second voice part. This enhances the Vibraphonís sonic features enlarging its pitch range and sound colour. A software written in LISP processes a MIDI buffer to produce the sonic output in real time. This composition is the result of a research in Gesture Interfaces developed by the author at the Interdisciplinary Nucleus for Sound Studies (NICS-UNICAMP).


Didier Guigue (France-Brazil)

All low-level material (pitches) is generated by a collection of algorithms built with the Profiles library patches included in Ircam's Patchwork composing environment. Each algorithm provides a controlled sequence of interpolated pitches, chords or harmonic complexes from B to A (1st part of the piece, titled "Profile to A") or reversely from A to B (3rd part, titled "Profiles to B"). The 2nd part represents a stationary moment of the form.
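As a rough illustration of the interpolation idea, and not the Patchwork Profiles library itself, the following Java sketch produces a controlled sequence of MIDI pitches interpolated from one note toward another; the class name and note choices are invented for this example.

```java
// Hypothetical sketch (not the Patchwork Profiles library): a linearly
// interpolated pitch sequence between two MIDI notes, by analogy with the
// controlled interpolation from B to A described above.
public class PitchProfile {

    // Return `steps` MIDI pitches interpolated from `from` to `to`, inclusive.
    public static int[] interpolate(int from, int to, int steps) {
        int[] seq = new int[steps];
        for (int i = 0; i < steps; i++) {
            // Round the linear path to the nearest semitone.
            seq[i] = from + Math.round((to - from) * i / (float) (steps - 1));
        }
        return seq;
    }

    public static void main(String[] args) {
        // From B4 (MIDI 71) up to A5 (MIDI 81) in 8 steps.
        for (int p : interpolate(71, 81, 8)) {
            System.out.print(p + " ");   // prints: 71 72 74 75 77 78 80 81
        }
        System.out.println();
    }
}
```

A real profile algorithm would of course shape the path non-linearly and interpolate chords and harmonic complexes as well as single pitches.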

As an example, the global structure of the third part follows the format below (PS = Pitch Sequence; CS = Chord or Harmonic Complexes Sequence):

00'00"00  PS 1a

02'02"00  Gap (Low B)

02'16"00  PS 1b

03'47"06  CS 1

04'59"20  PS 2a

06'49"21  Gap (Low B)

06'58"10  PS 2b

08'18"15  CS 2

12'32"00  End

In PS sections, positive or negative correlations are established between the pitch sequences themselves and: the linear directionalities of both amplitude (crescendo or diminuendo) and sound focusing/defocusing (by a reverb/chorus increment or decrement); the non-linear directionality of the tuning-rate progression (from 1/8 to 5/4 of a tone); and the global tempo progression (increasing or decreasing).

Pitch sequences use sampled piano sounds, and chord sequences use complex sounds (harmonic or non-harmonic) made on a Korg WaveStation. Chord sequences are sustained with sampled string orchestra sounds.

About the title

As a whole, this piece refers to the determinist pessimism that runs through all the work of the Brazilian poet Augusto dos Anjos. 'Vox Victiae' is one of the poems I chose as a reference. The second part of the piece includes excerpts from this and other poems by this author. The way I worked with frequently obscured or defocused pianistic timbres, the handling of time and the periodicity of events, and the ineluctable determinism of interpolation as a time process transfer to the musical domain my own reading of the poet's obsessions.


Carlos Cerana (Argentina)

(see description above)

[goto Table of Contents]

On-Line Concerts

Place: Alvorada's Hall
LEXIKON-SONATE (1992-1997)
Karlheinz Essl (Austria)

"Lexikon-Sonate" is a work-in-progress which was started in 1992. Instead of being a composition in which the structure is fixed by notation, it manifests itself as a computer program that composes the piece - or, more precisely: an excerpt of a virtually endless piano piece - in real time. Lexikon-Sonate lacks two characteristics of a traditional piano piece:

1) there is no pre-composed text to be interpreted, and

2) there is no need for an interpreter.

Instead, the instructions for playing the piano - the indication "which key should be pressed how quickly and held down for how long" - are directly generated by a computer program and transmitted immediately to a player piano (or a MIDI synthesizer) which executes them.
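The kind of instruction described above is essentially a (key, velocity, duration) triple per note. The real piece is a MAX program driving a player piano; the Java fragment below, with invented class and field names, only illustrates the data such a program emits.

```java
import java.util.Random;

// Hypothetical sketch of a playing instruction: "which key should be
// pressed how quickly and held down for how long". Not taken from the
// Lexikon-Sonate sources.
public class NoteInstruction {
    final int key;         // which key: MIDI note number, piano range 21-108
    final int velocity;    // how quickly: MIDI velocity, 1-127
    final int durationMs;  // how long the key is held down, in milliseconds

    NoteInstruction(int key, int velocity, int durationMs) {
        this.key = key;
        this.velocity = velocity;
        this.durationMs = durationMs;
    }

    // Generate one instruction at random, within the piano's compass.
    static NoteInstruction next(Random rng) {
        return new NoteInstruction(21 + rng.nextInt(88),
                                   1 + rng.nextInt(127),
                                   50 + rng.nextInt(2000));
    }
}
```

In the actual work, such triples are produced not by a single uniform random generator but by the structure-generator modules discussed below.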

The title "Lexikon-Sonate" refers to the Lexikon-Roman, written in 1968-70 by the Austrian-Slovakian author Andreas Okopenko. This novel appears to be one of the very first literary HyperTexts, several years before this term was introduced by Ted Nelson. This novel - "a sentimental journey to a meeting of exporters in Druden" (subtitle) - consists of several hundred small chapters which were brought into alphabetical order. By reference arrows as in a lexicon the reader could make her own investigations through the multiple nested web structure of the text. Instead of presenting a sequential text with a predefined direction of reading, Okopenko provides a structure of possibilities, which challenges the reader to become a creator of her own version of this novel.

Originally, "Lexikon-Sonate" was conceived as a musical commentary to an electronic implementation of Okopenko's Lexikon-Roman, carried out by the interdisciplinary group "Libraries of the Mind". But soon afterwards it started its own life due to its manifold ramifications, becoming an outstanding example in the domain of algorithmic composition.

Up to now, "Lexikon-Sonate" consists of 24 music-generation modules (structure generators) which are interrelated in a very complex way, as in a musical HyperText. Each module generates a specific, characteristic musical output as a result of the compositional strategy that has been applied. A module represents an abstract model of a specific musical behaviour. It does not contain any pre-organized musical material, but rather a formal description of such material and the methods by which it is processed.

These modules are structural re-implementations of piano gestures obtained by analysis of piano music by Johann Sebastian Bach, Beethoven, Schoenberg, Webern, Boulez, Stockhausen and Cecil Taylor. They never appear as literal quotations (because none of these gestures has been "sampled"), but mainly as "allusions". Furthermore, they are open and generic enough that different modules playing at the same time can intermingle, creating unforeseeable meta-structures.

The idea of autopoiesis - material organizing itself under certain constraints - plays an important role. By using many different random generators which control each other (and which - according to serial thinking - form a scale between completely deterministic and completely chaotic behaviour), ever new variants of the same model are generated: variants that may differ dramatically from each other, though they are always perceivable as "inheritances" of the given structural model. Seen in this light, "Lexikon-Sonate" can be perceived as a meta-composition that enables the unfolding of piano music rather than as a fixed work.

The underlying program was written in MAX (Puckette & Zicarelli, (c) 1990-1996 Opcode Systems Inc./ IRCAM), an interactive graphical programming environment for multimedia, music, and MIDI, running on a Macintosh computer. It draws from a large library of musical functions, compositional techniques, and algorithmic strategies which I have developed over the past few years: the "Realtime Composition Library" for MAX.

[goto Table of Contents]


Place: Área de Exposições Oeste
Aluizio Arcela (Brazil)

Music by Many Computers

If communication among computers is possible through the so-called client-server architecture, we may think of a group of computers as having the organization of an orchestra, such that each server runs a different part of a piece that is being read and played by a client computer.

As spectral charts are musical scores with visual information associated to each note, every server in the group must run a real-time graphics program capable of interpreting visual data coming from the client.

The proper working of this program set depends on many factors, all of them having the same degree of importance. A failure in any one of these factors will invalidate the expected result: it may be caused by the low speed of the client computer in running three spectral charts in real time, or by the insufficient graphics power of some server in painting the required image during the lifetime of the corresponding note. Evidently, the bandwidth of the physical network used to connect the computers is also one of these critical factors.
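The client-server exchange described above can be sketched in a few lines of Java. The port, the one-line message format, and the class name below are invented for illustration; the actual system transmits visual spectral-chart data, not plain text.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal client-server sketch in the spirit of the "orchestra of
// computers": a client sends a note description over a socket and a
// server receives it for rendering.
public class NoteLink {

    // Server side: accept one connection and return the message received.
    static String serveOnce(ServerSocket server) throws IOException {
        try (Socket s = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            return in.readLine();
        }
    }

    // Client side: send one note message to a server.
    static void sendNote(String host, int port, String note) throws IOException {
        try (Socket s = new Socket(host, port);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true)) {
            out.println(note);
        }
    }
}
```

In the real setting each server would keep its connection open for the whole performance, since opening a socket per note would waste exactly the bandwidth the text identifies as critical.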

LPE's staff

Besides allowing the connection among computers, modern programming languages such as Java [4,5,7] provide the means for the creation of threads. For music this represents a powerful tool for writing polyphony, because threads are operating-system facilities that allow the execution of concurrent program segments.
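A minimal sketch of polyphony via threads, in the spirit described above: each voice is a thread that steps through its own melody concurrently. The melodies, the printout, and the note counter are invented for illustration.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Three concurrent "voices", one thread each, by analogy with the three
// Performer threads of Three-Threaded Invention.
public class Polyphony {
    static final AtomicInteger notesPlayed = new AtomicInteger();

    static void playVoice(String name, int[] melody) {
        for (int pitch : melody) {
            System.out.println(name + " plays " + pitch);
            notesPlayed.incrementAndGet();
            try { Thread.sleep(10); } catch (InterruptedException e) { return; }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread p1 = new Thread(() -> playVoice("p1", new int[]{60, 62, 64}));
        Thread p2 = new Thread(() -> playVoice("p2", new int[]{67, 69, 71}));
        Thread p3 = new Thread(() -> playVoice("p3", new int[]{48, 50, 52}));
        p1.start(); p2.start(); p3.start();   // the three voices run concurrently
        p1.join(); p2.join(); p3.join();      // wait for all voices to finish
    }
}
```

Because the threads are scheduled by the operating system, the printed notes of the three voices interleave, which is exactly the property that makes threads a natural model for polyphony.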

Distributed Music

Three-Threaded Invention* is a program with a set of distributed objects created by instances of the classes described in the next sections. These objects communicate among themselves so as to orchestrate a distributed audiovisual output from the automatic input of melodies combined with the intervention of the composer through the user's interface. The composition model is that of man-machine cooperation, where intuition is the ability assigned to the human composer--mainly in choosing the timing he/she feels most suitable for melody and timbre renewal by pressing the interface buttons--while the heavy musical performance and the crucial decisions are left to the computer. Among these crucial decisions is the melodic sequencing process mentioned above, so that the human composer does not need to know which melody is best concatenated to the current melody, but only to signal the moment at which the program must find it. Figure 1 illustrates how the program is organized.

Database of Melodies

The program works by running three melodies chosen from a database of previously computed melodies. Whenever the composer asks for a melody by pressing the melody button, or when the current melody reaches its end, the program picks a new melody from this group.

In Three-Threaded Invention the melody database is implemented as a set of files with enumerated names, for example "1.mel, 2.mel, ..., n.mel", to facilitate the picking of a melody. Surely, the larger the number of files, the greater the possibility for the composer to reach significant aesthetic results. However, in the construction of the melody database, the first rule to be followed is that a small but good set of melodies will work better than a large set containing some bad melodies, even if the number of bad melodies is only two or three. The melodies belonging to a really good group must hold formal relationships among themselves, in order to allow room inside the program for a deterministic, as opposed to a random, method of picking a new melody. Therefore, a bad melody--as the term is used here--is any melody having no relationship with any other in the database.
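The enumerated-file scheme and a deterministic picking rule can be sketched as follows. The cyclic-successor rule used here is invented for illustration; the text only requires that picking be deterministic, grounded in the formal relationships among the melodies.

```java
// Sketch of the "1.mel, 2.mel, ..., n.mel" naming scheme with a
// deterministic (non-random) rule for picking the next melody.
public class MelodyDatabase {
    final int size;    // n, the number of .mel files
    int current = 1;   // index of the melody now playing

    MelodyDatabase(int size) { this.size = size; }

    // File name of the melody at index i.
    static String fileName(int i) { return i + ".mel"; }

    // Deterministically pick the next melody: here, the cyclic successor.
    String pickNext() {
        current = current % size + 1;
        return fileName(current);
    }
}
```

With a database of three files, repeated calls to pickNext() cycle through "2.mel", "3.mel", "1.mel", so the choice is reproducible, unlike a random pick.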


Network Administrator João Gondim, Max, and Aluizio

A Data Structure for Timbres

The assignment of a timbre to a melody is to be done with the help of data structures in which all available instruments can be placed, with their most important parameters properly attached to each of them. Among such parameters are the average spectral contents, the attack time, the frequency range, and so on. The method for calculating the right timbre to play the current melody operates by comparing the structural values of the melody--i.e. the group of intervals belonging to the melody, the rhythmic patterns, the pitch range, and the overall duration--with the instruments' parameters, looking for the instrument with the best set of properties in relation to an ideal spectral behaviour specified according to the entire melody or to a part of it, as occurs when the human composer forces the program to pick a new melody.

The timbral calculation in Three-Threaded Invention is, however, not as complex as required above, for Invention uses a simple data structure, namely Timbres[], an array of integers pointing to a subset of the General MIDI instruments, but without keeping their parameters. The composer may ask the program to change the timbre at any moment, while the program can decide on a new timbre when it starts a new melody by itself, that is, when a melody ends naturally without the intervention of the composer.
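The Timbres[] structure amounts to an integer array indexing General MIDI program numbers. The particular instruments and the cyclic decision rule in this sketch are illustrative, not taken from Invention's sources.

```java
// Sketch of the Timbres[] structure: an array of integers indexing a
// subset of the General MIDI program numbers (0-based), without keeping
// the instruments' parameters.
public class TimbreTable {
    // 0 = Acoustic Grand Piano, 11 = Vibraphone,
    // 46 = Orchestral Harp, 73 = Flute.
    static final int[] Timbres = {0, 11, 46, 73};

    // Pick a new timbre when a melody ends naturally; a simple cyclic
    // choice stands in for the program's own decision rule.
    static int nextTimbre(int currentIndex) {
        return Timbres[(currentIndex + 1) % Timbres.length];
    }
}
```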

Sonic Client Classes

A brief description, in a Java-like way, of the three classes belonging to the client program, namely Internote, Performer, and Midi, is found below. Many methods and variables have been omitted for obvious reasons, while some others have been abbreviated to make the overall description easier.

The Internote Class

The role of Internote is to draw and administer the user's interface after creating the COMPOSITION object. The first action of COMPOSITION is to create the threads p1, p2, and p3 by instantiating the Performer class three times.

To start the threads, the composer may press the playALL button, or he/she may start each of them individually by pressing the play button inside the Performer box. When at least one connection is made, the process of interactive algorithmic composition starts. The Internote class is defined as a subclass of the Java AWT Frame class.
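The structure just described can be sketched as follows. The class names Internote, Performer, and playALL come from the text; the method bodies, the window title, and the omission of the Midi class are assumptions of this sketch.

```java
import java.awt.Frame;

// Structural sketch of the client classes: Internote as a subclass of the
// AWT Frame class, creating three Performer threads (p1, p2, p3).
public class Internote extends Frame {
    Performer p1, p2, p3;

    Internote() {
        super("Three-Threaded Invention");
        // COMPOSITION's first action: instantiate Performer three times,
        // one thread per voice.
        p1 = new Performer("p1");
        p2 = new Performer("p2");
        p3 = new Performer("p3");
    }

    // Pressing the playALL button starts all three threads at once.
    void playALL() {
        p1.start();
        p2.start();
        p3.start();
    }
}

class Performer extends Thread {
    Performer(String name) { super(name); }

    @Override
    public void run() {
        // Elided: connect to a server, then play melodies until stopped.
    }
}
```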



The author and Max Mathews at the installation of Three-Threaded Invention. (The two Silicon Graphics computers on the left were provided by Unimix.)

[goto Table of Contents]



Tutorial I
Monday (10:00-12:00), Tuesday (8:00-10:00)
Room: 9
Rodolfo Caesar (LaMuT, Escola de Música, UFRJ, Brazil)


Divided into theoretical and practical segments, the tutorial will clarify the various notions of space in music and will propose exercises aimed at exploring space as a poetic and technical dimension.

[goto Table of Contents]

Tutorial II
Monday (10:00-12:00), Tuesday (8:00-10:00)
Room: 10
Jônatas Manzolli (Núcleo Interdisciplinar de Comunicação Sonora, Unicamp, Brazil)


This workshop introduces basic methods of algorithmic composition. It starts with stochastic methods and moves on to algorithmic methods for timbral design. Models to be used on macro and micro sound structures will be presented. It also discusses basic data structures and functions for controlling music events in real time. Two programs created by the author, Interasom and Morphsom, will be used to produce sound examples. The workshop is planned for an interdisciplinary group, so that the participants develop their own view of how to use algorithmic tools in computer music.


[goto Table of Contents]



Financial Support


CNPq: The bulk of the operating funds, covering air tickets, lodging, rental of network equipment, sound equipment, transportation, telephone, secretarial services, and sound and video operators.

XVII SBC Congress: Lodging of the main guests at the Hotel Nacional, meals and transportation from the hotel to the convention center, and rental of the Alvorada auditorium and rooms 9 and 10 of the Convention Center.


Logistical Support:

Unimix: Loan of two Silicon Graphics Indy machines for the on-line concert Three-Threaded Invention and for the lecture by Barry Vercoe of the MIT Media Lab demonstrating the Netsound standard.

MSD: Typesetting services for the proceedings and preparation of photoliths for covers and posters. Courtesy of former collaborators of the chairman.

LPE/UnB: 4 IBM RISC 6000 machines, 3 Pentium PCs, sound equipment, furniture, sound cards, network cards, MIDI devices.

MUS/UnB: 1 Macintosh, DAT, keyboard, sound equipment, furniture.

Alex-Informática: Memory chips, network cards.

[goto Table of Contents]

Room: Restaurant, 19:00-21:00
August 6th, Wednesday

Originally scheduled for Friday (August 8) at 9:00, the meeting was moved up to Wednesday evening because some members reported other commitments in their home cities on the 8th.

(Note: the list of those present will be included in due course.)


The agenda was carried out:

Strong points: A commitment to quality. Submission of full papers instead of abstracts. High-level international committees for the selection of works. Concerts that reflect research. Scheduling of the performances during the symposium's daytime hours to give them an academic rather than a cultural character. Internationally renowned guests. A venue of the highest quality. Sufficient, good-quality equipment. Installation of network points for the on-line concerts and the lectures. A highly qualified production team. High-quality printed material and visual design.

Weak points: Limited funding. The absence of some authors for lack of resources for travel and lodging. A certain (unnecessary) bureaucratic dependence on the coordination of the XVII SBC Congress.

Balance: Highly positive in everyone's opinion.

Adoption of the organization proposed at the Brasília symposium. Publication of the Call for Works before November 1997.

Since the city that will host the XVIII SBC Congress has not yet been defined, and based only on the information that it may be either Rio de Janeiro or São Paulo, the NUCOM assembly decided to elect two names:


Rodolfo Caesar (Rio)

Fernando Iazzeta (São Paulo)


Papers: Geber Ramalho

Concerts: Fernando Iazzeta

Tutorials: Robert Willey

Representative elected for a two-year term: Geber Ramalho, who replaces Maurício Loureiro.


[goto Table of Contents]


Alex Meirelles: Windows 95 and Irix
Antônio Cezar: printing services
Cléuzio Fonseca: text revision
Ricardo Ribeiro: AIX programming
João Gondim: network technology
João Olegário: visual design
Lilian Campos: secretary


General Coordination of the IV SBCM

          Aluizio Arcela
          Universidade de Brasília
          Instituto de Ciências Exatas
          Departamento de Ciência da Computação
          70910-900 Brasília-DF, Brasil

          tel:            (061) 348-2705
          fax:           (061) 273-3589

[goto Table of Contents]

CNPq report: 450916/97-9

Date: August 31, 1997

[goto top]