AtoContAto (1997) (for tape, images, interactive tap shoes and performer)

Jônatas Manzolli

Interdisciplinary Nucleus for Sound Studies (NICS) - State University of Campinas (UNICAMP)

Artemis Moroni

Automation Institute - Technological Center for Informatics (CTI)

Christiane Matallo

Tap Dance Studio "Christiane Matallo"
Brazil

Abstract:

As computer technology develops, its applications are no longer confined to the high-end computing environment. AtoContAto, Portuguese for "act and contact", "act with contact" or "act with touch", aims to bring human gesture close to sound as the dancer establishes closer contact with the music. Through a machine interface, a performer senses, touches and integrates music with dance. AtoContAto is a performing act built on a new gesture interface: a pair of tap shoes. Piezoelectric sensors were fitted inside the taps, in the regions underneath the toes and the heel, with a cable terminating at each point. To simplify the electronic hardware, the total number of force sensors was limited to four, two per foot. These positions are consistent with the dominant peaks in the distribution of force along the sole of the foot and, in this context, supply sufficient information. The sensors connect through a cable harness to analog interface circuitry, where the signals are conditioned and digitized by a small microcontroller. The analog circuitry and microcontroller form a small module worn at the waist. The microcontroller translates the data into packets and sends them over a standard serial interface, yielding a MIDI control signal that can be plugged into a variety of MIDI devices. The piezoelectric sensors underneath the performer's shoes can thus be used to control sonic, light and image transformations. The combination of the dancer's language with the sensors' capabilities results in a rich and unique mixture: the performer is transformed into a musical instrument, or "humanmatic device". The foot-mounted gesture interface enables a new sensory experience, strengthening the interaction between sounds, rhythms and images, and this new musical instrument deepens the relationship between dance and music. Here, cooperative or combined behaviors between human and machine create an emergent system.
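The conversion from digitized force readings to MIDI control data can be sketched as follows. The paper does not specify the packet format, so the controller numbers (20 to 23), the 10-bit ADC range and the function name below are illustrative assumptions, not the system's actual protocol:

```python
def force_to_midi_cc(readings, channel=0, base_cc=20):
    """Translate four force readings (left toe, left heel, right toe,
    right heel) into MIDI Control Change messages, 3 bytes each.

    Assumptions (not from the paper): readings are 10-bit ADC values
    (0-1023), and each sensor maps to its own controller number
    starting at base_cc.
    """
    messages = []
    for i, raw in enumerate(readings):
        value = min(raw >> 3, 127)        # scale 0-1023 down to MIDI's 0-127
        status = 0xB0 | (channel & 0x0F)  # Control Change status byte
        messages.append(bytes([status, base_cc + i, value]))
    return messages

# Example: a strong left-toe strike and a light right-heel contact
packets = force_to_midi_cc([1023, 0, 512, 300])
```

In this sketch each sensor gets a dedicated controller, so any MIDI device listening on the channel can map one foot region to one sound parameter.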
The result is a dance artwork in which a performer creates free movements that produce changes in the sound material. The music is strongly rhythmic: a blend of traditional tap rhythms mixed into several different patterns. The sounds are integrated with a collection of images. As with the sounds, visual structures can be rebuilt during the performance by interacting with the dancer's shadows, so that the dancer's location on the stage changes the final result. The panoply of rhythmic patterns affords a wide variety of acoustic stimuli, and the medium, the interactive shoes, allows this complexity to be explored. In movement-based activities such as dance or sports, a performer's self-described movement orientation is closely related to his or her level of performance achievement. As a practical application of this technology, we foresee devices that provide feedback closely corresponding to internal sensations of movement, in order to help a performer evaluate and modify movements. Sports training and physical therapy are areas where a performer regularly engages in movement-based self-evaluation and movement modification. By tracking both locomotion and weight distribution, we can search for combinations of transitions that correspond to a particular movement requiring corrective attention. We envision a performer exercising a repertoire of movements while attending to a visual or auditory display controlled by those movements; a performer could listen to musical sequences and fine-tune the sounds by refining his or her corresponding movements.
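Tracking weight distribution and its transitions from the four sensor points could be sketched as below. The labels, threshold value and function names are assumptions for illustration; the paper does not describe a concrete classification scheme:

```python
def classify_stance(left_toe, left_heel, right_toe, right_heel, threshold=100):
    """Label the gross weight distribution from four force readings.

    Returns the dominant contact point ('left-toe', 'right-heel', ...)
    or 'airborne' when no sensor exceeds the (assumed) contact threshold.
    """
    points = {
        "left-toe": left_toe, "left-heel": left_heel,
        "right-toe": right_toe, "right-heel": right_heel,
    }
    active = {name: v for name, v in points.items() if v >= threshold}
    if not active:
        return "airborne"  # no contact: the dancer is in the air
    return max(active, key=active.get)

def stance_transitions(frames, threshold=100):
    """Collapse a stream of sensor frames into the sequence of stance
    changes, i.e. the transitions a trainer might want to inspect."""
    sequence = []
    for frame in frames:
        label = classify_stance(*frame, threshold=threshold)
        if not sequence or sequence[-1] != label:
            sequence.append(label)
    return sequence
```

A sequence of such transitions could then be matched against reference patterns to flag movements that need corrective attention.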
We foresee that this technology will also find application in enhancing existing devices for measuring physical performance. The foot-sensing and gesture-inference technology is still at an experimental stage, but the continuous kinesthetic presence of a human in a computing interface is a powerful idea. For the intended context, the results obtained were very impressive: observers were able to appreciate and understand the relationship between the performer's actions and the corresponding musical transformations. The results also highlighted the fact that we are not yet accustomed to such high-bandwidth coupling between human and machine. We are currently studying more complex patterns and their basic properties, both in how humans describe them and in the construction of rules for their recognition by machines.