The research presented here reviews the use of stochastic processes in composition; Markov chains were studied as a tool for developing musical structures in the 1970s. This paper discusses a new approach in which transition matrices are linked to boundary functions. These are used to control iterative processes that generate sequences of matrices. The resulting sequences are numerical structures that can be associated with different classes of sound control parameters. The text presents the mathematical model as well as the musical model, i.e. the mapping from numerical data to musical parameters. It describes the implementation of the process in a Windows environment, presents sound examples, and discusses a possible application of the method to sound synthesis.
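The abstract does not define the boundary functions themselves; purely to show the general shape of such an iterative process, the following minimal sketch uses an illustrative clipping-and-renormalizing rule as the boundary function and a random perturbation as the iteration step.

    import numpy as np

    def boundary(matrix, low=0.05, high=0.95):
        """Illustrative boundary function: clip entries to [low, high] and
        renormalize each row so the result remains a valid transition matrix."""
        m = np.clip(matrix, low, high)
        return m / m.sum(axis=1, keepdims=True)

    def iterate_matrices(seed, steps=8, drift=0.1, rng=None):
        """Generate a sequence of transition matrices by perturbing the previous
        one and passing it through the boundary function (an assumed scheme)."""
        rng = rng or np.random.default_rng(0)
        sequence = [boundary(seed)]
        for _ in range(steps - 1):
            perturbed = sequence[-1] + drift * rng.standard_normal(seed.shape)
            sequence.append(boundary(perturbed))
        return sequence

    # Each matrix in the sequence is a numerical structure that could then be
    # mapped to a class of sound control parameters (pitches, durations, etc.).
    seed = np.array([[0.6, 0.3, 0.1],
                     [0.2, 0.5, 0.3],
                     [0.3, 0.3, 0.4]])
    for m in iterate_matrices(seed, steps=4):
        print(np.round(m, 2))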
There are many ways in which computers have become involved in the world of music. One is score editing, where musicians develop a score using an editor that produces a digital output file. This strategy can be effective, but it is a non-traditional way of developing a score. Another application is music processing that can help musicians in the music creation process: automatic composition, analysis of musical style, and so on.
In this paper we deal with the automatic recognition of printed music and its conversion to digital form. Here, musicians can continue developing their scores as they are used to, and then introduce these printed scores into a computer to convert them into a digital output file. These digital files can be used for many purposes (Blostain 92):
In recent years, efforts have been focused on automatic musical notation recognition. Several research centers and universities have developed programs to recognize printed music (Computing 94), (Fujinaga 88), (Itagaki, et al.), (Kato et al.). Although some of the results of this work are quite good, improvement is still needed. The obstacles that developers of these systems have had to face seem to have increased, and most of them are too complex to be solved by classical methods in a comparably short execution time. To illustrate this, a couple of the most important problems are listed next:
These difficulties have so far prevented the construction of a universal printed music recognition system. This area therefore remains open to ideas that can improve the methods already applied, or to new methods that yield a better recognition process.
Some of the problems mentioned above were solved by MIDISCAN. MIDISCAN is a semi-automatic printed music recognition system which accepts scanned scores and produces a playable file (a MIDI file). Such a file can be played on synthesizers, computers, and other equipment that accepts MIDI files. The conversion is not direct, however, so MIDISCAN represents the recognized scores in an intermediate format called MNOD. The need for this intermediate format is mainly due to the following reasons:
In spite of MIDISCAN's very good performance, it is still far from being a perfect automatic printed music recognizer, due to the difficulties mentioned above.
In conclusion, MIDISCAN is aimed at solving issues in two fields of music processing: notation recognition and music representation. In this document we focus on possible improvements to MIDISCAN in both areas: applying suggestions given in several references (Aikin 94), (Homere 94), (Lindstrom 94), increasing the recognition rate, and expanding the set of recognizable symbols (notation recognition); and extending the represented musical symbols and structures (music representation), in order to make the process easier and faster and to provide more powerful computer tools for musicians.
Self-instructional software (courseware) makes it possible to learn on one's own, at an individualized pace and using whatever time is available. This is done through the integration of multimedia resources (text, sound, and images) to present information in a friendly and interactive way, which has attracted the attention of a growing number of people.
This work consists of introductory courseware covering basic topics of music theory, developed to run under the Windows environment on multimedia PC platforms. The topics covered are: the definition of music and its components; clefs and notes; proportional division of note values; tone, semitone, and accidentals; location of the notes on the keyboard; classification of natural intervals; and major and minor scales. The topics are presented in conventional musical notation and illustrated with pertinent sound examples. Navigation between topics is provided by specific buttons that allow the student to move forward, return to previous screens, or even change lessons. Review exercises were designed so that the student can verify whether the concepts presented have been correctly assimilated, and it is possible to return to previous topics to help solve the questions presented. At the end of each lesson there are tests, during which the theoretical topics cannot be consulted, so that a more realistic evaluation of the student's performance can be made.
The implementation used the Toolbook authoring software, which, besides facilitating the development of the user interface, provides simplified access to the multimedia resources of PC platforms. So that the sound examples would be of high quality and independent of the sound card used, the Windows sound file format (.WAV) was chosen, with monophonic 16-bit samples at a sampling rate of 44100 Hz.
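For reference only (this is not part of the original courseware), a minimal sketch of how a sound example in the stated format, 16-bit mono at 44100 Hz, could be produced with Python's standard wave module; the test tone and file name are illustrative.

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100   # Hz, as chosen for the courseware
    DURATION = 1.0        # seconds (illustrative)
    FREQ = 440.0          # Hz, A4 test tone (illustrative)

    # Generate one second of a sine tone as 16-bit signed samples.
    samples = [
        int(32767 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
        for n in range(int(SAMPLE_RATE * DURATION))
    ]

    with wave.open("example_tone.wav", "wb") as wav:
        wav.setnchannels(1)          # monophonic
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(struct.pack("<%dh" % len(samples), *samples))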
This paper examines the implications for composition and software design of using ImprovisationBuilder with a MIDI piano, primarily the Yamaha Disklavier.
Software design issues include: capturing, transforming, and realizing pedal data in relation to pitch data; an easy-to-use graphical interface for altering parameters during performance; and addressing the Yamaha Disklavier's 500 millisecond delay for reliable processing of MIDI input. Compositional issues include: the causal responsorial aspect of the ImprovisationBuilder output; and varying the small-scale event and gestural levels while preserving the large-scale structure among multiple performances.
In the composition _(Disturbed) Radiance_ (1995), the pianist is invited to improvise, within specified constraints, in response to the music s/he hears generated by ImprovisationBuilder. As ImprovisationBuilder reacts to input from the pianist, the causality becomes circular and dynamic.
Work on this project suggests the advantages of addressing compositional and software design issues simultaneously and cooperatively. Both composer and software designer can benefit from this collaboration.
This paper presents a theoretical discussion of some aspects of the use of computers as a tool for musical composition, analyzing its consequences for the education of composers, for the musical ideation process, and for the communication of the musical work to the audience. The realization of a musical work implies decisions in two different fields: composition and rendering. Computer music brought --as one of its possibilities-- the performance of music without a player. This gave the composer the responsibility of making performance decisions in the very act of composing.
The authors question some practices that have become usual in computer music: among others, the control of temporal structures by machines (which do not perceive the passage of time), or the intellectual origin of structures, sometimes generated without experiences that relate sound and body. At the extreme, there is the actual possibility of completing an entire training in composition, and of operating on complex musical structures, without ever having played an instrument.
As a conclusion, the paper stresses the relevance of instrumental practice as a significant tool for structuring musical thought.
This article analyzes the application of a new reconstruction method for digital audio signals. It can considerably reduce the bandwidth required for transmission, the storage, and the time for synthesis, and it is independent of existing compression techniques.
Most signals, such as audio, static images, and moving images, may be considered analog from a macroscopic point of view. Nevertheless, for these signals to be handled and manipulated by digital computers they need to be transformed into digital signals. Since the final target of such signals is analog playback, after all the digital audio processing the data must be converted back into the analog domain.
Following the basic sampling theorem, the minimum sampling rate for a signal must equal twice the highest frequency present in the analog signal. In practice, the reconstructor used may be modeled as a reconstructor of order 0 (zero), and sampling frequencies well above the minimum are frequently employed.
Among the reasons for this are the use of reconstructors that are far from ideal, and the simplification of the design of the low-pass filter placed after the D/A converter. The method proposed here originates from the basic sampling theorem, so that lower sampling frequencies may be used. Results are analyzed qualitatively and quantitatively for sampled and synthesized sounds with respect to the complexity of the reconstruction methods employed.
Our conclusion from this analysis is that the application of our method to audio signals produces better results than the conventional reconstructor of order 0.
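For context (the abstract does not give the details of the proposed method), here is a minimal sketch of the conventional order-0 (zero-order-hold) reconstruction that the paper takes as its baseline, compared with ideal band-limited sinc interpolation; the signal and rates are illustrative.

    import numpy as np

    def zero_order_hold(samples, oversample):
        """Order-0 reconstruction: each sample is simply held (repeated)."""
        return np.repeat(samples, oversample)

    def sinc_reconstruction(samples, oversample):
        """Ideal band-limited reconstruction on a finer time grid."""
        n = np.arange(len(samples))
        t = np.arange(len(samples) * oversample) / oversample
        # x(t) = sum_n x[n] * sinc(t - n), truncated to the available samples
        return np.array([np.sum(samples * np.sinc(ti - n)) for ti in t])

    # Illustrative comparison on a sine tone sampled not far above the Nyquist rate.
    fs, f0 = 8.0, 1.0                       # sample rate and tone frequency (arbitrary units)
    x = np.sin(2 * np.pi * f0 * np.arange(32) / fs)
    staircase = zero_order_hold(x, 16)      # visibly stepped waveform
    smooth = sinc_reconstruction(x, 16)     # close to the original sine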
Most of the analytic tools developed during this century are almost exclusively devoted to the basic components of music writing: pitches (or pitch classes). Nevertheless, it is evident that forming and linking complex abstract sound-objects is also a decisive part of musical thought. The heterogeneous system of codification could be the main reason why these objects seem to resist rigorous analytic methods. This paper presents the basis of a method of information and evaluation of the components of the musical score which may bring out all the data necessary for a non-empirical analysis of the formal functions of these multidimensional objects.
Restricted to the piano literature, the algorithmic structure of this method is realized inside the Patchwork environment developed at IRCAM, Paris. One first needs to model the acoustic behaviour of the piano and the ways of controlling it by means of the instrument's interface. Then the evaluation algorithms are built inside a virtual two-axis network, involving all the writing features that may describe the sound-objects, i.e. laws of space and time filling, distribution models, etc.
This information/evaluation process aims to make the computation of the most complex components of musical writing easier for consistent analytic purposes, and to correlate abstract, physical, and perceived sound material. It may allow one to verify in a more objective way the capacity of a conceptual dimension, such as timbre, to support formal and/or significant structures.
Due to great technological advances, the way we make music has changed. For this reason, musical learning systems must also assimilate these changes.
The search for non-traditional methods that enrich creativity opens new perspectives in the field of music. Such methods, combined with traditional methods when appropriate, can lead us to an even better result in the educational process, especially with the use of the computer.
This article presents a prototype used as a non-traditional tool for teaching music theory, SETMUS. It addresses aspects of the graphical interface, the musical calculator, playback, navigation systems, recognition of scales and arpeggios, didactics, sound resources, and other functional characteristics of SETMUS.
Musical composition can be roughly viewed as a search for the best solution within a finite, although huge, universe of possibilities. Some algorithmic composition techniques try to simulate the act of composing by performing this search automatically. However, this approach has two major problems. The first is the difficulty of expressing aesthetic concepts through mathematical rules. The second is the low efficiency of an exhaustive search among all possible solutions.
The "Simulated Annealing" algorithm presents very good results on finding the optimal solution for many combinatorial problems efficiently (in polynomial time).
In this paper we present an adaptation of this algorithm to the problem of algorithmic composition. We then discuss some possible goal functions for the algorithm. Finally, we describe MAXANNEALING, our implementation of the algorithm in the MAX programming environment, and present some experimental results.
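The abstract does not specify the goal function or move set used in MAXANNEALING; the following is a minimal, generic simulated annealing sketch for a toy melodic problem (random pitch changes as moves, interval smoothness as an illustrative cost), purely to show the shape of the algorithm.

    import math
    import random

    def cost(melody):
        """Illustrative goal function: penalize large melodic leaps."""
        return sum(abs(b - a) for a, b in zip(melody, melody[1:]))

    def neighbor(melody):
        """Move: change one note by a small random step."""
        m = list(melody)
        i = random.randrange(len(m))
        m[i] += random.choice([-2, -1, 1, 2])
        return m

    def simulated_annealing(melody, t0=10.0, cooling=0.995, steps=5000):
        current, best = list(melody), list(melody)
        t = t0
        for _ in range(steps):
            candidate = neighbor(current)
            delta = cost(candidate) - cost(current)
            # Always accept improvements; accept worse moves with probability exp(-delta/t).
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = list(current)
            t *= cooling
        return best

    # Example: smooth out a jagged random melody of MIDI note numbers.
    start = [random.randint(55, 80) for _ in range(16)]
    print(simulated_annealing(start))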
A musical sound can be considered to be any sound generated by a musical instrument (preferably acoustic) and perceived by human hearing. Human recognition of a musical sound is given by the auditory perception of a set of information contained in the sound. This set of information makes it possible to identify the timbre. For musical sounds we have the musical timbre, that is: the set of sonic information perceived by human hearing that makes it possible to recognize the characteristic sound of a musical instrument. In this work, the variations of the magnitudes of the harmonics of the sound spectrum were used to represent the musical timbre. The objective is to allow this timbre to be recognized by a self-organizing neural network.
The Kohonen model has previously been used in the recognition of phonetic patterns. In this work, the Kohonen network was used to recognize patterns representing the timbre of a musical instrument. The model sequentially receives a series of training vectors, each of which is formed by the concatenation of three smaller vectors. Each of these vectors represents the magnitudes of the harmonics of the energy spectrum of an instrument at one of three instants: attack, sustain, and decay. Each of these spectral instants is obtained by the Fast Fourier Transform (FFT) of an interval of sound (of m points in time) short enough to be considered instantaneous by human hearing.
The architecture of the Kohonen model consists of a network of neurons distributed on a two-dimensional plane. In the training phase, all neurons simultaneously receive the same input, i.e. a training vector of n components representing the timbre. After the training phase, topological maps are formed on the plane of the network. In them one can observe the formation of sectors or neighborhoods that indicate the classification of the inputs. Each of these sectors in the plane is associated with one of the timbres presented in the training phase. It is observed that the automatic formation of the topological maps relates some similar timbres (considered alike by the ear) to neighboring regions of the map.
It was verified that this self-organizing neural network model allows the recognition of an arbitrary set of timbres with a sufficiently low error rate.
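A minimal sketch of the Kohonen training loop described above; the actual network size, learning schedule, and feature extraction used in the work are not given in the abstract, so the values below are illustrative.

    import numpy as np

    def train_som(vectors, grid=(10, 10), epochs=50, lr0=0.5, radius0=3.0):
        """Self-organizing map: each training vector (e.g. concatenated
        attack/sustain/decay harmonic magnitudes) pulls its best-matching
        unit and that unit's neighborhood toward itself."""
        n_dim = vectors.shape[1]
        weights = np.random.rand(grid[0], grid[1], n_dim)
        coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                      indexing="ij"), axis=-1)
        for epoch in range(epochs):
            lr = lr0 * (1.0 - epoch / epochs)
            radius = radius0 * (1.0 - epoch / epochs) + 0.5
            for v in vectors:
                dist = np.linalg.norm(weights - v, axis=2)
                bmu = np.unravel_index(np.argmin(dist), grid)   # best-matching unit
                grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
                influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
                weights += lr * influence[..., None] * (v - weights)
        return weights

    # Illustrative use: 20 random "timbre" vectors of 3 x 16 harmonic magnitudes.
    timbres = np.random.rand(20, 48)
    som = train_som(timbres)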
Sound, as a material for study and manipulation, is treated by three different branches of science: acoustics, the physiology of human hearing, and psychoacoustics. Acoustics determines the three quantities that compose a sound: its intensity (dB), the frequency of its component harmonics (Hz), and the variation of its intensity over time (s). Physiology defines the limits of hearing for the three physical quantities: intensity (0-120 dB), frequency (20-20,000 Hz), and time (>50 ms). Psychoacoustics deals with the interpretation of auditory perception and defines important concepts such as consonance and timbre. The objective of this work is the study of a process of Sound Transformation (TS) through what will be called Timbral Operators (OT). These are tools that can manipulate the timbre of a sound. Among other possibilities, this method may allow the composer to manipulate timbres freely; in other words, it may open paths toward Timbral Composition.
The representation of the three physical quantities composing a sound (intensity, frequency, and time) forms a topographic surface that will be called the Sound Surface (SS). It is formed by the variation of the spectral magnitudes over time. The SS must respect the limits of hearing in intensity, frequency, and time. For the representation of sound discretized in time (digital sound), the SS is given by a matrix, the Harmonic Matrix (MH). Each element of the MH is a complex number representing a harmonic of the sound (in magnitude and phase) in the frequency (row) and time (column) domains. The MH is taken at a size that represents a small interval of sound in time, sufficient for the recognition of its timbre. The OTs are the tools that manipulate the elements of the MH; the consequence is the transformation of the sound represented in the MH.
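As an illustration (not the original MATLAB code), a Harmonic Matrix of the kind described above can be approximated by a short-time Fourier transform, and a simple timbral operator by a manipulation of its complex elements; the frame sizes, operator, and test tone below are illustrative assumptions.

    import numpy as np

    def harmonic_matrix(signal, frame=1024, hop=512):
        """Rows ~ frequency bins, columns ~ time frames, complex-valued (magnitude and phase)."""
        window = np.hanning(frame)
        frames = [signal[i:i + frame] * window
                  for i in range(0, len(signal) - frame, hop)]
        return np.stack([np.fft.rfft(f) for f in frames], axis=1)

    def brighten(mh, gain=2.0, cutoff_bin=64):
        """Illustrative timbral operator: boost the magnitude of the upper harmonics."""
        out = mh.copy()
        out[cutoff_bin:, :] *= gain
        return out

    def resynthesize(mh, frame=1024, hop=512):
        """Inverse transform with overlap-add to hear the transformed sound."""
        n_frames = mh.shape[1]
        out = np.zeros(hop * (n_frames - 1) + frame)
        for j in range(n_frames):
            out[j * hop:j * hop + frame] += np.fft.irfft(mh[:, j], n=frame)
        return out

    # Example: transform one second of a sawtooth-like test tone at 44100 Hz.
    t = np.arange(44100) / 44100.0
    tone = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 20))
    transformed = resynthesize(brighten(harmonic_matrix(tone)))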
In this work, 15 programs were developed for the MATLAB 4.0 environment under UNIX, simulating all the steps of the process described above. The simulation allows: input and output of sound files (in the SPARCstation u-law standard); visualization of the sound surface (original and modified); sound transformation through the 7 operators developed; visualization of details of the sound (in time and frequency); and listening to the sound transformations produced by the operators.
Our goal in this paper is to present a new mathematical tool called the Wavelet Transform, or time-frequency analysis. We try to show both the pros and cons of this new tool compared with the long-established Fourier Transform. By the end of this paper the reader should be able to understand the fundamental ideas of the Wavelet Transform and how they have spread among academics.
To motivate the reader, the first section describes a model that we consider easy to implement. In our model, a musician or music lover inputs the signal (music) into the computer, the computer runs the preprocessing with the WT, a neural network recognizes the pattern, and the output is the score. This kind of work already exists, but it uses the Fourier Transform. We also include a section on the history of the WT, its main proponents, and other famous mathematicians who have been working with this modern tool.
Finally, we present the calculus necessary to understand the WT, but in a simple way, without hard equations. Further on we show, in a general sense, the Heisenberg uncertainty principle, which governs the size of the window (the window function, the main idea of the WT). In particular, it will be observed in this article that the time-frequency window of any Short-Time Fourier Transform is rigid, and is not very effective for detecting high-frequency signals, in contrast with the WT, which has a variable-size window and is capable of detecting any kind of frequency, even in non-stationary signals or, indeed, in non-continuous functions.
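A minimal numerical sketch of the variable-resolution idea: a hand-rolled Haar wavelet decomposition, chosen here only for simplicity (the paper does not prescribe a particular wavelet or library); each level halves the time resolution while isolating a coarser frequency band, unlike the fixed window of an STFT.

    import numpy as np

    def haar_dwt(signal, levels=4):
        """Multilevel Haar decomposition of a signal whose length is divisible by 2**levels."""
        coeffs = []
        approx = np.asarray(signal, dtype=float)
        for _ in range(levels):
            even, odd = approx[0::2], approx[1::2]
            detail = (even - odd) / np.sqrt(2.0)   # high-frequency content at this scale
            approx = (even + odd) / np.sqrt(2.0)   # low-frequency content, passed down
            coeffs.append(detail)
        coeffs.append(approx)                       # final coarse approximation
        return coeffs

    # Example: a chirp-like test signal whose frequency rises over time.
    n = 1024
    t = np.arange(n) / n
    chirp = np.sin(2 * np.pi * (5 + 60 * t) * t)
    for level, c in enumerate(haar_dwt(chirp)):
        print(level, len(c), float(np.sum(c ** 2)))   # energy per scale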
The aim of this paper is to talk about a piece of mine, Gegensatze (gegenseitig) for alto flute, 4-channel tape, and live electronics (1994), and to give a short account of how the AUDIAC system works. AUDIAC is a project being carried on by the ICEM (Institute for Computer Music and Electronic Media) at the Folkwang School of Music in Essen, and it is responsible for the live electronics in my piece.
AUDIAC can work in real time and can realize every process known in electronic music (AM, FM, filters, transpositions, delays, reverbs, etc.). It consists of a normal 486 PC at 66 MHz, which administrates the data with a language (APOS) that was specially developed by the people at the ICEM for this purpose. The data can be written in any editor (ASCII). The main part of the system is powered by a 186 audio processor, which has 2 inputs and up to 4 outputs.
The piece's name means "Contraries (reciprocally)". The work is based on two different materials:
The main points of the paper should be:
It is rather commonplace in everyday conversation to refer to the "Language of Music", and indeed the idea of exploring musical structures as linguistic objects has been applied many times before. However, we believe that the whole apparatus already built for the analysis of natural language has not yet been used as thoroughly for the analysis of musical phenomena as it could be. In this article we present some initial ideas towards extending the application of this apparatus for a better understanding of "Music as Language".
We apply some techniques from "Categorial Grammar" to represent a simple problem of music theory, which we believe nevertheless to be of widespread interest: functional harmonic analysis.
The aim of Categorial Grammar is the analysis of the syntactic well-formedness of sentences. The fundamental concept underlying Categorial Grammar is that of "syntactic categories", which are classes to which the words in a sentence must belong. Syntactic categories can be organised as formulae of some substructural logic -- e.g. the so-called "Lambek Calculus" -- in such a way that syntactic well-formedness can be checked via an appropriate "proof theory" related to the logic.
As an example, let us consider the simple sentence "John works". If we assign to the noun "John" the syntactic category [NOUN] and to the verb "works" the (compound) category [NOUN -> SENTENCE], we may, by "modus ponens", assign to the whole sentence the syntactic category [SENTENCE], thus proving the grammaticality of this sentence.
Let us consider now the harmonic functions of triads in a harmonic cadence. We could assign the syntactic categories [TONIC] to the tonic triad C-E-G and [TONIC -> CADENCE -> TONIC] to the dominant triad G-B-D of C major. Thus, applying "modus ponens" twice, we could assign the category [CADENCE] to the sequence C-E-G, G-B-D, C-E-G.
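A minimal sketch of how such a derivation could be checked mechanically, reading the dominant's category directionally as "takes a TONIC on the left and a TONIC on the right, yielding CADENCE" (one possible reading of the notation above; this is not the encoding proposed in the paper).

    # Toy categorial check: a directional functor category consumes one argument on
    # each side. Categories are plain strings or ("functor", left, result, right) tuples.

    TONIC = "TONIC"
    DOMINANT = ("functor", "TONIC", "CADENCE", "TONIC")   # TONIC -> CADENCE -> TONIC

    def reduce_sequence(categories):
        """Apply the functor to its left and right neighbours (the two modus
        ponens applications collapsed into one reduction step)."""
        cats = list(categories)
        changed = True
        while changed and len(cats) > 1:
            changed = False
            for i, cat in enumerate(cats):
                if (isinstance(cat, tuple) and 0 < i < len(cats) - 1
                        and cats[i - 1] == cat[1] and cats[i + 1] == cat[3]):
                    cats[i - 1:i + 2] = [cat[2]]   # left + functor + right => result
                    changed = True
                    break
        return cats

    # The cadence C-E-G, G-B-D, C-E-G reduces to the single category CADENCE.
    print(reduce_sequence([TONIC, DOMINANT, TONIC]))   # ['CADENCE']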
In this paper we propose an encoding of the harmonic functions of triads as syntactic categories like the ones above, and show how the generation of proofs of "harmonic well-formedness" of cadences can be implemented and used as a tool to verify and to display the harmonic functional structuring of cadences.
This paper describes the design and implementation of PadMaster, a real-time performance/improvisation environment running under the NextStep operating system. The system currently uses the Mathews/Boie Radio Drum as a three-dimensional controller for interaction with the performer. The Radio Drum communicates with the computer through a specially designed MIDI protocol and sends x-y position and velocity information when either of the batons hits the surface of the drum. The Drum is also polled periodically by the computer to determine the absolute x-y-z position of the batons. This information is used to split the surface of the drum into up to 30 virtual pads, each one independently programmable to react in a specific way to a hit and to the position information stream of one or more axes of control. Pads can be grouped into Scenes so that the behavior of the surface of the Drum can be subtly or radically altered during the course of the performance. The screen of the computer displays the virtual surface and gives visual feedback to the performer on the state of all the pads. There are currently two types of pads: Performance Pads and Control Pads. Performance Pads can be programmed to control MIDI sequences, playback of soundfiles, note-generating algorithms, and real-time DSP synthesis. The velocity of the hits and the continuous position information can be graphically mapped in each pad to different parameters through transfer functions. Control Pads are used to trigger actions that globally affect the performance (change the current Scene; stop, pause, or resume all Performance Pads in a Scene; etc.). Usually one of the batons is dedicated to triggering pads and the other is reserved for continuous control. The flexibility of individually programming the pads can be used to generate fairly complex behaviors when the control baton is moved in three-dimensional space, as different active pads can interpret the movement in different ways. All the programmable information can be stored in binary documents and also in a standard MusicKit score, so that a complete environment might be externally generated as a score and later loaded into PadMaster.
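As an illustration of the pad-mapping idea only (the actual PadMaster data structures are not given here), a minimal sketch that splits a normalized drum surface into a grid of virtual pads and applies a per-pad transfer function to hit velocity; the 6 x 5 layout and all names are assumptions.

    # Hypothetical sketch: map a normalized (x, y) hit on the drum surface to one
    # of up to 30 virtual pads (here a 6 x 5 grid) and shape its velocity.

    GRID_COLS, GRID_ROWS = 6, 5    # 30 pads, an assumed layout

    def pad_index(x, y):
        """x, y in [0, 1): return the index of the virtual pad that was hit."""
        col = min(int(x * GRID_COLS), GRID_COLS - 1)
        row = min(int(y * GRID_ROWS), GRID_ROWS - 1)
        return row * GRID_COLS + col

    # Per-pad transfer functions mapping raw velocity (0-127) to a parameter value.
    transfer = {
        0: lambda v: v / 127.0,               # linear
        1: lambda v: (v / 127.0) ** 2,        # soft hits de-emphasized
    }

    def handle_hit(x, y, velocity):
        pad = pad_index(x, y)
        shaped = transfer.get(pad, lambda v: v / 127.0)(velocity)
        return pad, shaped

    print(handle_hit(0.72, 0.10, 96))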
The system has been used to compose an interactive piece for PadMaster, two synthesizers, and electronic cello, which has been successfully performed in nine concerts to date (and has also been submitted to this Symposium as a piece).
PadMaster is currently undergoing a major rewrite to implement new or improved functionality: soundfile playback, algorithmic pads, real time DSP synthesis, inter-pad messaging and control, use of remote objects under the NextStep operating system to transparently control an Ethernet connected cluster of workstations, multiple simultaneous controllers which will enable a performer to use alternative controllers in addition to or instead of the Radio Drum, such as Buchla's Lightning or Thunder controllers, normal MIDI keyboards, MIDI pedals, percussion controllers, etc.
This paper describes a new place to work on interactive composition, the Laboratorio de Interfaces Gestuais (LIGA), at the Interdisciplinary Nucleus for Sound Studies (NICS-UNICAMP). The aim of LIGA is to build new instruments and graphic interfaces to be used in real-time situations. The laboratory also intends to build new devices using simple logic circuits and inexpensive transducers. These can be used in performance applications and in music education. The text presents three new prototypes: two new graphic interfaces developed for the Windows environment ("Laboratorio Interativo MIDI - LIM" and "Quadrilatero") and a new interface that uses the movement of the wrists/hands to produce MIDI events. The text also presents two pieces in which the interfaces developed at LIGA were used. Finally, it discusses proposals for further work.
The Laboratorio de Musica Electroacustica is the main open studio of electroacoustic and computer music among all those belonging to Argentinian universities and the second in importance in the country after the LIPM. The activities at the lab can be broadly categorised into three different areas:
What led me to write this essay was a need to organize the ideas underlying my work in computer-assisted composition. The research is being carried out in the MIDI and electroacoustic studios of the Music Department of the University of Keele. I present here an abstract of that work.
According to Dodge & Jerse (1995), there are three main fields of computer music:
And there are three classes of music software:
For us, composition can be done:
We divide Computer-Assisted Composition (CAC) into two categories:
A composition does not necessarily have to follow only one of the two paths. In this essay we address the different possible stages of combined use of the processes mentioned, their divergences and convergences. Our main purpose, however, is to describe composition using a MIDI platform with the computer as master. Some advantages and disadvantages of MIDI, and where and how to use it, are also targets of this work, as is the description of our workstation in Keele.
Due to memory constraints, it is believed that listeners do not grasp a musical piece in its entirety; rather, they segment it into parts which can be analyzed and later related to each other.
Based on the Gestalt principles of proximity and similarity, Lerdahl and Jackendoff's grouping rules (Lerdahl & Jackendoff, 1983) provide an explanation of how that segmentation could be done.
This paper proposes a knowledge representation of rhythmic patterns, and a neural model to segment musical pieces in accordance with three kinds of Lerdahl and Jackendoff's rhythmic grouping rules.
The neural model has a topology that is similar to that of NETtalk (Sejnowski & Rosenberg, 1987). It is trained on sets of contrived patterns, and tested on four two-part inventions, four three-part inventions, and four fugues of Bach (Bach, 1970; Bach, 1989).
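A minimal sketch of a NETtalk-style setup for this task, i.e. a sliding window of local rhythmic features fed to a small feed-forward network that predicts whether a group boundary falls at the window's centre; the actual features, window size, and topology used in the paper are not specified here and are assumed.

    import numpy as np

    WINDOW = 7            # assumed context window (centre note +/- 3 neighbours)
    HIDDEN = 16           # assumed hidden layer size

    rng = np.random.default_rng(0)
    W1 = rng.normal(0, 0.1, (WINDOW, HIDDEN))
    W2 = rng.normal(0, 0.1, (HIDDEN, 1))

    def predict(durations_window):
        """Forward pass: local relative durations -> boundary probability."""
        h = np.tanh(durations_window @ W1)
        return 1.0 / (1.0 + np.exp(-(h @ W2)))

    def train_step(x, target, lr=0.05):
        """One step of gradient descent on squared error (for illustration only)."""
        global W1, W2
        h = np.tanh(x @ W1)
        y = 1.0 / (1.0 + np.exp(-(h @ W2)))
        err = y - target
        grad_W2 = np.outer(h, err * y * (1 - y))
        grad_h = (err * y * (1 - y)) * W2[:, 0] * (1 - h ** 2)
        grad_W1 = np.outer(x, grad_h)
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1
        return float(np.sum(err ** 2))

    # Contrived pattern: a long note (relative duration 4) after short notes
    # tends to mark a group boundary at the window centre.
    x = np.array([1, 1, 1, 4, 1, 1, 1], dtype=float)
    for _ in range(200):
        train_step(x, 1.0)
    print(predict(x)[0])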
Finally, the outcomes of the neural model are evaluated.
"TOBOGAN" y "TREN AL SUR" son dos programas para educacion musical inicial, adecuados a los contenidos programaticos musicales de la educacion escolar inicial y primaria de Argentina (4 anos de edad). Las pautas de desarrollo consideraron la simplicidad en el manejo de la computadora: los programas permiten al nino interactuar con ella, a partir de la emision vocal como accion musical, revalorizando la voz como instrumento de ejecucion, y estimulando una correcta emision con distintas intensidades y diccion de vocales. Utilizando un microfono y el mouse, un entorno ludico le permite abordar el aprendizaje de nociones elementales del lenguaje musical, como la duracion (campo analogico y metrico) y la variacion de altura del sonido (escalar y glissando), graduando las dificultades a traves de mapas de recorrido que el mismo puede crear. El docente cuenta con un programa REPORTE, que graba la totalidad de errores del alumno, discriminados en tablas, lo que le permite plantear mas adecuadamente sus estrategias de ensenanza. El proyecto previo la utilizacion del software no como fin en si mismo, sino como un material de que el docente dispone para optimizar la construccion del proceso de aprendizaje musical.
The Dynamic MIDI Editor (DyME) is a Windows application for computer music that allows the user to apply an editing process to MIDI instruments over a musical sequence. This paper introduces DyME and shows that it extends the capabilities of MIDI by allowing the definition of dynamic instruments. DyME integrates the real-time capabilities of MIDI with dynamic control of timbre, which is currently possible only in DSP systems.
Contemporary music needs theory, and it must be linked to a theory of musical time. Physics and mathematics are among the disciplines with the most influence on the evolution of musical thought. The well-known decomposition of a periodic function into a Fourier series allows us to consider on a large scale what happens inside the sound. Timbre is generated by summing up harmonics: the fragmentation into microdurations of a macroduration defined by the fundamental. And if we take timbre as a model for musical form, we can think of a musical piece generated by fragmentation of a global duration. This approach allows musical time to be structured in a pitch-duration-form continuum. This conception has a limitation, because if a function is not periodic (while still representing a timbre of musical interest) it does not, in principle, have a Fourier series decomposition, and it is necessary to use the Fourier integral, which contains the continuum of frequencies and not only integer multiples of a fundamental frequency.
In the last decade a new type of discrete structure has been introduced and extensively studied: aperiodic systems, also known as quasicrystals. These structures lie somewhere between order and disorder and are self-similar. A relevant example in 1D is the Fibonacci chain, and in 2D the Penrose pattern. The Fibonacci chain, which can be generated by a formal grammar, has a discrete Fourier spectrum and can be used to structure musical time in an aperiodic way. The components of the sound spectrum are discrete and countable and can be computed with the help of the golden number and two integers. This is an example of the rich and unified temporal structure that we can obtain by means of quasiperiodic systems in 1D; we think of the pairs intensity-pitch as the Fourier spectrum of aperiodic rhythms, generated automatically. On the other hand, the self-similarity of the structures provides a hierarchy related to the global musical form.
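A minimal sketch of the formal grammar mentioned above: the Fibonacci chain grows by the substitution A -> AB, B -> A, and its two letters can be mapped, for instance, to two duration values. The mapping to durations and the golden-ratio choice below are illustrative, not taken from the paper.

    def fibonacci_chain(generations):
        """Grow the Fibonacci word by the substitution rules A -> AB, B -> A."""
        word = "A"
        for _ in range(generations):
            word = "".join("AB" if c == "A" else "A" for c in word)
        return word

    # Map the two letters to two durations whose ratio is the golden number,
    # giving an aperiodic but self-similar rhythmic structure (illustrative choice).
    PHI = (1 + 5 ** 0.5) / 2
    def to_rhythm(word, short=1.0):
        return [short * PHI if c == "A" else short for c in word]

    chain = fibonacci_chain(7)
    print(chain)               # ABAABABAABAAB...
    print(to_rhythm(chain)[:8])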
With the UPIC system it is possible to draw "arcs" with the mouse or with an electromagnetic pen on a digitizing tablet. This graphic method allows for the exploration of pitch, tempo, intensity, duration, and timbre in the whole continuous, i.e. non-discrete, domain.
The author has constructed, with sine waves, a harmonic spectrum of 24 partials, in search of new evolving timbres. He was interested in working with this package of sounds as if it were an atom, with its nucleus and electrons in their orbits.
Inspired by the physical phenomenon of isomerism, where electrons jump from one orbit to another, the author worked with the sound spectrum in such a manner as to prevent the existence of sustained sounds: from time to time each harmonic jumps, with a glissando, to another harmonic level (orbit).
In this way he constructed a continuous timbral variation, with the same fundamental pitch, in which the changing of harmonics with fast glissandi creates an interwoven texture.
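The UPIC realization itself is drawn graphically; purely to illustrate the harmonic-orbit-jump idea in code, here is a minimal additive-synthesis sketch in which each of 24 voices glides from one harmonic level to another over a fixed fundamental. The jump schedule, durations, and glissando shape are illustrative assumptions.

    import numpy as np

    SR = 44100
    F0 = 110.0                   # fixed fundamental (illustrative)
    N_PARTIALS = 24
    DURATION = 4.0               # seconds
    rng = np.random.default_rng(1)

    t = np.arange(int(SR * DURATION)) / SR
    out = np.zeros_like(t)

    for _ in range(N_PARTIALS):
        # Each voice occupies some harmonic level and, at a random moment,
        # glides (glissando) to another level instead of sustaining.
        h_from, h_to = rng.integers(1, N_PARTIALS + 1, size=2)
        jump_at = rng.uniform(0.2, 0.8) * DURATION
        glide = 0.2              # seconds taken by the glissando
        harmonic = np.interp(t, [0, jump_at, jump_at + glide, DURATION],
                             [h_from, h_from, h_to, h_to])
        freq = F0 * harmonic
        phase = 2 * np.pi * np.cumsum(freq) / SR
        out += np.sin(phase) / N_PARTIALS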
This work defines an artificial intelligence model to perform a harmonic classification of tonal music. The harmonic classification problem is divided into subproblems, and for each subproblem some intelligent solutions are indicated. The set of subproblem solutions is organized into a single intelligent model so as to reconstruct the intelligence involved in the harmonic classification problem. The subproblems found are: chord identification, chord classification, chord inversion classification, music tonality classification, and harmonic degree classification. The model indicates connectionist solutions for the chord classification and tonality classification problems. Symbolic solutions are indicated for the other problems. Finally, limitations and possible future improvements are discussed, and the hardware and software requirements for the implementation and validation of the model are indicated.
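As an illustration of the first, symbolic subproblem (chord identification), a minimal sketch that matches a set of pitch classes against triad templates; the template set and naming are illustrative, not the paper's model.

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    TRIAD_TEMPLATES = {            # intervals above the root, in semitones
        "major": {0, 4, 7},
        "minor": {0, 3, 7},
        "diminished": {0, 3, 6},
        "augmented": {0, 4, 8},
    }

    def identify_chord(midi_notes):
        """Return (root name, quality) for a set of MIDI notes, or None."""
        pcs = {n % 12 for n in midi_notes}
        for root in range(12):
            intervals = {(pc - root) % 12 for pc in pcs}
            for quality, template in TRIAD_TEMPLATES.items():
                if intervals == template:
                    return NOTE_NAMES[root], quality
        return None

    print(identify_chord({60, 64, 67}))   # ('C', 'major')
    print(identify_chord({62, 65, 69}))   # ('D', 'minor')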
A discussion starting from a description of the situation of the composer in the studio, and of what follows from the peculiarity of this condition compared with that of notational writing. It was extracted and translated from my doctoral thesis, "The Composition of Electroacoustic Music". Basic concepts such as 'material', 'matter', and 'structure' - which we always confront when trying to describe our compositional work - do not offer firm ground for those who work with studio music, whether digital or analog, but they can be used if kept in constant relativization and fluidity. By examining these terms we will find transparent differences between music realized with new technologies and so-called 'instrumental' music, and we will try to understand a certain confusion that has arisen between 'invention' and 'discovery'.
One of the main problems faced by music teachers is correcting the bad habits acquired by students when they study alone. Ideally the student would study in the presence of the teacher, at least in the first years of instrumental study, but since this is not always possible, we are developing an educational software product, Expert Piano, which assists the student during the study of piano and music.
Expert Piano is an educational environment composed of an intelligent tutoring system, a multimedia database, and MIDI interfacing resources. The environment provides different study options for the selected piece, among them: practising tempo changes, transposition, working with separate hands, and viewing the score in parallel with performance or listening. According to the student's performance, the system generates reports pointing out the errors made, providing feedback and suggesting remediation. The Expert Piano environment also includes data files in hypermedia format with additional information about the stored musical pieces and short biographies of their composers, aiming to make the musical learning environment richer and more interactive.
Throughout the construction of this work we have observed the interdisciplinary tendency in educational software development, combining software engineering, artificial intelligence, multimedia technology, education, and music.
A prototype of Expert Piano is being developed in the Systems and Computer Engineering Program at COPE/UFRJ, as part of the requirements for a Master of Science degree. A first version should be available in August 1995.
The synthesis and compositional aspects of a work by the author (Piece of Mind, for tape) are discussed in this paper. The idea of approaching sound from different but complementary perspectives, such as time-domain and spectral-domain techniques (sampling and granular synthesis, and Spectrum Modelling Synthesis (SMS)), generates a fundamentally ambiguous musical discourse in the piece. The problems of the hybridization and mutation of sound identities are addressed, the efficiency of the SMS technique in handling the noisy components of the spectra is discussed, and the author presents his own solutions to those problems. The graphic tools used for viewing spectra and a very efficient IFFT-based algorithm for the synthesis of the deterministic part of the spectra (additive synthesis) are described.
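As a rough illustration of IFFT-based additive synthesis (a simplified sketch, not the algorithm described in the paper: each partial is placed at its nearest FFT bin per frame, then inverse-transformed and overlap-added, which only approximates a true spectral-main-lobe implementation):

    import numpy as np

    SR = 44100
    FRAME, HOP = 2048, 512

    def ifft_additive(partials, n_frames):
        """partials: list of (freq_hz, amplitude) pairs, held constant per frame here."""
        window = np.hanning(FRAME)
        out = np.zeros(HOP * (n_frames - 1) + FRAME)
        phases = np.zeros(len(partials))
        for j in range(n_frames):
            spectrum = np.zeros(FRAME // 2 + 1, dtype=complex)
            for k, (freq, amp) in enumerate(partials):
                bin_idx = int(round(freq * FRAME / SR))       # nearest-bin approximation
                spectrum[bin_idx] += 0.5 * amp * FRAME * np.exp(1j * phases[k])
                phases[k] += 2 * np.pi * freq * HOP / SR      # keep phase roughly continuous
            frame = np.fft.irfft(spectrum, n=FRAME) * window
            out[j * HOP:j * HOP + FRAME] += frame
        return out

    # Example: the deterministic part of a tone as 10 harmonics of 220 Hz.
    sound = ifft_additive([(220.0 * k, 1.0 / k) for k in range(1, 11)], n_frames=200)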
The sonic purity of real acoustic instruments still keeps an appreciable distance from that of artificial instruments such as synthesizers, both in the quality of the timbres and in the virtually infinite possibilities of modulation and variation.
Analysis and synthesis techniques have been employed for years as tools to analyze the spectro-temporal characteristics of timbres and to resynthesize them from the extracted parameters.
This article shows the use of wavelets, a mathematical tool for signal processing with additional advantages over the classical Fourier transform, as a technique for multiresolution analysis, editing, and synthesis of sounds. The central idea is to explore the capabilities of this technique to alter fine components of sound timbres in the time-frequency (scale) space, with the aim of generating improved timbres.
In traditional music teaching there is a much greater emphasis on theoretical aspects, and the experience of "making music" is rarely offered to the non-specialist. In science teaching, for example, the experience of "doing science" is sought through problem solving, practical experimentation, and modeling.
In part, this is justified by the technical difficulty of playing musical instruments, by the large amount of information involved in musical composition, and by the coordination required among the various players to perform a more elaborate piece. But a considerable part of these obstacles can be reduced through the computational resources available, stimulating creativity, playfulness, and the cognitive unconscious, putting the manipulation of the instrument in the background and giving priority to musical production and composition. Musical knowledge thus arises from the interaction between authors and their works, through experimenting and listening, and not as a socially imposed product.
With this in mind, AbcMus (Abordagem de Construcao Musical, a musical construction approach) was developed, involving interrelated philosophical, psychopedagogical, and computational aspects. In this approach there is no fundamental level for the construction of a piece of music: it is neither the sound, nor the chord, nor the measure, nor the musical passage, nor the music as a whole. Rather, the elements of all these levels interrelate to form the music.
In this article some aspects of this approach are discussed, including:
SINAPSIS is composition software whose most significant feature is its capability of interacting with the user at the level of the production of musical discourses. For this purpose, a setting similar to that of theatrical staging has been designed. On this basis, the system provides two structures: ACTORS and STAGE DIRECTORS.
An ACTOR is conceived as a list of temporal and parametric data and a set of variables which set performance limits on those data for the object itself (fragmentation and/or insertion). A STAGE DIRECTOR consists of a set of procedures aimed at selecting and modelling ACTORS in a probabilistic mode. In addition, the STAGE DIRECTOR must conduct the musical flow in its assigned space (register), sonority, and time.
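One possible way to picture these two structures in code (a heavily simplified reading of the abstract, not the Pascal implementation; the fields and the weighted selection below are assumptions):

    import random
    from dataclasses import dataclass

    @dataclass
    class Actor:
        """A list of temporal/parametric data plus limits on how it may be reshaped."""
        durations: list            # temporal data (e.g. note durations in beats)
        pitches: list              # parametric data (e.g. MIDI pitches)
        max_fragmentation: int = 2 # how finely the material may be split
        allow_insertion: bool = True

    @dataclass
    class StageDirector:
        """Selects and models actors probabilistically within an assigned register."""
        actors: list
        weights: list
        register: tuple = (48, 84)

        def next_fragment(self):
            actor = random.choices(self.actors, weights=self.weights, k=1)[0]
            cut = max(1, len(actor.pitches) // actor.max_fragmentation)
            start = random.randrange(0, len(actor.pitches) - cut + 1)
            lo, hi = self.register
            pitches = [min(max(p, lo), hi) for p in actor.pitches[start:start + cut]]
            return list(zip(pitches, actor.durations[start:start + cut]))

    director = StageDirector(
        actors=[Actor([1, 1, 2, 1], [60, 62, 64, 67]),
                Actor([0.5] * 8, [72, 71, 69, 67, 65, 64, 62, 60], max_fragmentation=4)],
        weights=[0.7, 0.3])
    print(director.next_fragment())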
SINAPSIS was programmed in Pascal and runs on an IBM PC computer. Numerous and varied experiments have been carried out and many interesting musical results have been observed. Notwithstanding this, SINAPSIS does not replace the composer's activity; rather, it provides an interactive companion capable of furnishing, with regard to a structural proposal, a multiplicity of constructive solutions, some of which can be used opportunely.
Don Cuco el Guapo is the first Mexican, and second in the world, pianist robot; it was designed and built at the Department of Micro-electronics of the University of Puebla. The project was based on multidisciplinary participation, bringing together physicists, electronic engineers, computer scientists, musicians, and designers.
The artificial vision system of Don Cuco el Guapo was implemented through the following steps: frame grabbing, image processing, pattern recognition, and interpretation or analysis of the scene. The vision system of Don Cuco el Guapo is capable of reading a musical score from a template.
In this paper we present the system and discuss in detail the following aspects of Don Cuco's artificial vision:
Digital waveguide modeling has proven to be an effective and efficient means of synthesizing woodwind instrument sounds (Smith, 1992). To date, conical and cylindrical bores with tone holes have been successfully modeled, as well as reflection functions associated with bells (Valimaki & Karjalainen, 1994). Excitation methods, however, have proven to be the most critical and difficult elements to reproduce accurately. This paper reviews past non-linear excitation methods and introduces a new method derived in part from the Wave Digital Hammer (Van Duyne & Smith, 1994). This model is based on a non-linear spring/mass system, which interfaces with waveguide sections representing an instrument bore as well as a performer's throat/mouth cavity. In this way, a complete instrument/performer model is obtained which better approximates the subtleties of real musical instruments. Further, it is shown that this model of the reed can be modified to represent the lips of a brass player, and the similarities and differences of these two systems are examined.
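For orientation, a minimal clarinet-like digital waveguide sketch in the general spirit of the framework cited above: a single bore delay line with a memoryless non-linear reed reflection table. This is far simpler than the spring/mass reed and throat/mouth model the paper introduces, and all constants are illustrative.

    import numpy as np

    SR = 44100
    DELAY = 100               # round-trip bore delay in samples (sets the pitch, ~220 Hz here)
    BREATH = 0.9              # steady breath pressure (illustrative value)

    def reed_reflection(p_diff):
        """Memoryless non-linear reed table: reflection grows as the reed is pushed
        closed (offset/slope are common textbook values, not the paper's model)."""
        return np.clip(0.7 - 0.3 * p_diff, -1.0, 1.0)

    bore = np.zeros(DELAY)    # delay line modelling the bore round trip
    lp = 0.0                  # one-pole low-pass state for the bell reflection
    idx = 0
    out = np.zeros(SR)        # one second of sound

    for n in range(len(out)):
        # Open end (bell): invert, attenuate, and low-pass the returning wave.
        lp = 0.5 * lp + 0.5 * (-0.95 * bore[idx])
        p_diff = lp - BREATH
        # Reed junction: breath pressure plus the non-linearly reflected bore wave.
        bore_input = BREATH + reed_reflection(p_diff) * p_diff
        out[n] = bore[idx]
        bore[idx] = bore_input
        idx = (idx + 1) % DELAY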
When we speak of learning, we attribute to the student, whether human or artificial, the characteristics of the principal agent responsible for the way of learning, who must: capture and process information; organize data; apprehend and relate concepts; perceive and solve problems; and create concepts and solutions. This view privileges the cognitive aspect of the human being and, in this approach, will be considered ideal for the learning environment we present.
In this environment, a human teacher (expert) interacts with an artificial student (Rational Agent) so that the latter acquires and transmits knowledge in an interactive and active way. The environment comprises two distinct parts: the machine's knowledge acquisition phase, with the presence of an expert and a pedagogue, and the phase of transmission of knowledge from the machine to the human student. Our interest here lies in the first phase, on which the quality of the second phase rests.
In our domain of application, Harmony, we propose an approach whose objective is not the aprioristic treatment (as most commonly presented in classrooms), as if harmony were a universal language, but rather a conception of harmony as a cultural phenomenon, in which each period in the history of Western music is determined by its own harmonic practice, with its specific characteristics.
The study of music theory is similar to the study of any language. Analogously, we grapple with aspects of vocabulary, grammar, syntax, and rhetoric. As such, it is necessary to have a historical view that encompasses the main problems of music in each period.
For this, a minimum of structured and well-represented knowledge is necessary, knowledge that evolves from new information or through criticism and that is adequate for recognizing different contexts. It is this mutual action between the system and the expert that will solve a proposed problem, criticizing and contributing to the expansion of the student's knowledge base.
After briefly presenting the application domain, we present our proposal, based on the work of S. Billet-Coat, with a variation of the MOSCA protocol that allows the negotiation dialogue to be structured.