SICIB: An Interactive Music Composition System Using Body Movements

Roberto Morales-Manzanares,* Eduardo F. Morales,† Roger Dannenberg,‡ and Jonathan Berger§

*LIM Laboratorio de Informática Musical, Universidad de Guanajuato, Guanajuato, Gto. México ([email protected])
†ITESM, Campus Morelos, Temixco, Morelos 62589 México ([email protected])
‡School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213 USA ([email protected])
§Center for Computer Research in Music and Acoustics, Stanford University, Stanford, California 94305-8180 USA ([email protected])


Computer Music Journal, 25:2, pp. 25–36, Summer 2001. © 2001 Massachusetts Institute of Technology.

Traditionally, music and dance have been complementary arts. However, their integration has not always been entirely satisfactory. In general, a dancer must conform movements to a predefined piece of music, leaving very little room for improvisational creativity. This article describes SICIB, a system capable of music composition, improvisation, and performance using body movements. SICIB uses data from sensors attached to dancers and “if-then” rules to couple choreographic gestures with music. The article describes the choreographic elements considered by the system (such as position, velocity, acceleration, curvature and torsion of movements, jumps, etc.), as well as the musical elements that can be affected by them (e.g., intensity, tone, music sequences, etc.) through two different music composition systems: Escamol and Aura. The choreographic information obtained from the sensors, the musical capabilities of the music composition systems, and a simple rule-based coupling mechanism offer good opportunities for interaction between choreographers and composers.

The architecture of SICIB, which allows real-time performance, is also described. SICIB has been used by three different composers and a choreographer with very encouraging results. In particular, the dancer has been involved in music dialogues with live performance musicians. Our experiences with the development of SICIB and our own insights into the relationship that new technologies offer to choreographers and dancers are also discussed.

Background

In 1965, John Cage, David Tudor, and Merce Cunningham collaborated on Variations V, a multimedia work in which dancers triggered sounds each time they were positioned between one of a dozen photoelectric cells and a light that activated each cell. This revolutionary work challenged notions of the traditional relationship between choreographer and composer, a relationship characterized by the compromise of one artist in order to fit the work of the other.

The Cage–Cunningham collaboration achieved equality by transferring control from both composer and choreographer directly to the dancer. Recent technologies offer more refined tools for mapping dancers’ movements to music. Computer processing of data from motion sensors allows both composer and choreographer to retain control and achieve an integrated work that is not improvisatory.


These systems provide any degree of control desired by the composer-choreographer team; they can also provide the dancer with the flexibility of performer nuance and inflection usually associated with musicians rather than dancers.

Motion detection technology can be classified according to the location of the sensors and detectors. One class consists of sensors and detectors attached to the body of the dancer; examples include piezo-electric and flex sensors (e.g., Van Raalte 1999). Another class places sensors and detectors external to the dancer’s body (cameras, infrared sensors, etc.) (e.g., Rokeby 1999). A third class combines sensors attached to the body with detectors placed strategically elsewhere; such devices include electromagnetic sensors, sonar, etc. (e.g., Morales-Manzanares and Morales 1997).

As is often the case with integrated technologies, newfound freedom is paradoxically accompanied by stifling restrictions. Researchers, musicians, composers, choreographers, and dancers are just beginning to grasp the possibilities these new technologies offer. However, there remains a distinct need for a simpler, easier-to-understand, and powerful coupling mechanism that mediates between sound and motion.

This article describes the Sistema Interactivo de Composición e Improvisación para Bailarines (SICIB), designed with the above ideas in mind and capable of generating music in real time based on body motion. SICIB receives space coordinates from sensors attached to dancers and derives choreographic information from them. Combinations of such information are used to satisfy the conditions of “if-then” rules whose actions affect musical elements. SICIB can use two different music composition systems: Escamol (Morales-Manzanares 1992) and Aura (Dannenberg and Brandt 1996).

Escamol is a language for creating scores using grammatical rules (Allen 1987). Aura is an object-oriented software foundation for building interactive systems. In SICIB, we use Aura to create software instruments.

Continuous polling of the sensors allows us to define expressive choreographic information, while the flexibility of the musical systems facilitates expressive musical events. The integration between the two is simplified by a rule-based system with a simple syntax. With SICIB, a composer can define regions in space, use information about the curvature and torsion of the dancer’s movements, and apply continuous changes of a sensor’s data to different musical parameters.

With SICIB, choreography does not need to be adjusted to a fixed, predetermined musical piece: dancers can freely improvise their movements and map these improvised gestures to music generated in real time. SICIB can also be considered a virtual instrument in which music is produced through body movements, opening new possibilities for music performance and composition. Finally, SICIB allows musicians and dancers to interact and improvise dialogues during the course of the performance.

Related Work

Many systems have been proposed that produce music from body movements. They differ mainly in the type of sensors used and in their musical capabilities. Winkler (1995) presents an overview of gesture and motion sensing for music making. Systems that use sensors external to the body (such as cameras) generally detect either changes between consecutive frames (e.g., Rokeby’s Very Nervous System) or changes between the current frame and a reference frame (e.g., STEIM’s BigEye).

BigEye, from the Studio for Electro-Instrumental Music (STEIM) in Amsterdam, is a computer program that takes real-time video information and converts it into MIDI messages. In BigEye (STEIM 2000), the user configures the program to extract objects of interest. The position of the objects is checked against user-defined zones. MIDI messages are generated each time an object appears or disappears in a zone or moves within a zone.

These events can cause notes to be switched on or off, for instance. Other MIDI messages can be generated using some of the object’s parameters, such as position, speed, and size. A camera focused on a choreography can follow (with sound) the paths of up to 16 individual dancers.


Another related system, the Very Nervous System (Rokeby 1999), detects any movement within a defined active performance area. Position and motion information is used to trigger sounds in real time. The image analysis focuses on motion rather than on color or shape information. Each video frame is compared to the previous one to determine what is moving.

In general, systems based on cameras are very good at detecting global changes of the performer (e.g., crossing of regions or changes in the overall velocity of the dancer), but they are poor at detecting more detailed information (e.g., which part of the body was responsible for changes in the image). Image processing algorithms capable of detecting more detailed information (e.g., Maes et al. 1995) demand a high processing cost and rely on images with high contrast and a restricted range of orientation with respect to the camera. Another potential problem with camera-based systems involves changes in illumination. Sensors attached to the body are immune to this problem. Still, the simplicity and non-obtrusiveness of camera-based sensing is attractive. Interesting progress has been achieved by the EyesWeb project (Camurri et al. 2000).

Not all external sensors are cameras. For example, floor sensors have been created using various technologies (Johnstone 1991; Pinkston, Kerkhoff, and McQuilken 1995; Paradiso et al. 1997; Griffith and Fernstrom 1998).

Other systems place sensors, such as piezo-electric material, on the body of the dancer. For instance, Control Suit, constructed at the Norwegian Network for Technology, Acoustics, and Music (NoTAM) in 1995, is a suit equipped with semiconductor sensors attached to the performer; contacts on the fingertips transfer voltage to these sensors, so the performer produces sounds by tapping his or her own body (NoTAM 2000). In other systems, such as BodySynth (Van Raalte 1999), sensors attached to the body detect the electrical signals generated by muscle contractions. The muscle contractions that trigger sounds can be very subtle, so the same sonic result can be achieved by a wide variety of movements. The MidiDancer system (Coniglio 2000) uses flex sensors and a wireless transmitter to convey joint-angle information to a computer. A conceptually similar system, based on strain gauges on spring steel and a wireless transmitter, is described by Siegel and Jacobsen (1998). Paradiso, Hsiao, and Hu (1999) describe a wireless interface to dancing shoes.

Most of the authors mentioned here also discuss philosophical, practical, and musical implications of sensors for dance and the integration of interactive music with dance performance. We hope that our experience and perspective will add to this growing body of knowledge.

Elements of Integration

In order to achieve a coherent integration of music and dance, we consider their analogous principles. These include the exposition of ideas, links and transitions between sections, variation strategies, embellishment methods, and structural issues.

In dance, exposition of ideas is achieved through the repetition of movements in different regions of the space, combined with variations, jumps, twists, and falls. Such variations incorporate new movements into those already exposed. The general structure is normally divided into sections, each with its own expressive criteria, normally characterized by different scenarios, lighting, and clothing.

These principles pose several problems, including what information should be captured, how to characterize this information, and how to relate it to the music in a coherent way. SICIB can be seen as a step toward solving these problems.

Information Captured from the Dancer

All the choreographic information is obtained by SICIB through sensors attached to dancers. A Flock of Birds system (Ascension Technology Corporation 2000) provides six-degree-of-freedom (three-dimensional position and orientation) tracking at an adjustable sampling rate and connects to a host computer with a serial interface. Each “Bird” is a magnetic tracker that provides up to 144 position and orientation measurements per second within a 10-foot radius around a central transmitter.


A balance must be established between the amount of data that must be considered to capture important choreographic information in sufficient detail and the speed at which such information can be processed. In our experiments, we consider only the position of each sensor in space, and the baud rate of the sensors is fixed at 38,400 bits per second, producing roughly 50 spatial positions per second per sensor. This sampling rate allows us to create almost continuous changes of musical parameters (e.g., features of granular synthesis, glissando effects, etc.). Real-time considerations and alternative possibilities for capturing information by other means are given later in this article.
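As a rough sanity check on these figures (assuming roughly ten serial bits, including framing, per transmitted byte, and a binary position record of a few bytes per coordinate), the available bandwidth is

$$\frac{38{,}400\ \text{bits/sec}}{10\ \text{bits/byte}} = 3{,}840\ \text{bytes/sec}, \qquad \frac{3{,}840\ \text{bytes/sec}}{50\ \text{samples/sec}} \approx 77\ \text{bytes per sampling period},$$

which comfortably accommodates a short position record from each of a few sensors in every 20-msec period.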

Choreographic Elements of Control

The data from the sensors are used to obtain the following choreographic information: (1) curvature and torsion of movements, (2) physical position of the dancer, (3) displacement velocity and acceleration, and (4) sudden changes.

One of the fundamental aspects to consider in dance is the curvature and torsion of the movements performed by a dancer, regardless of his or her orientation. SICIB uses the Frenet-Serret theorem, described in the following subsection, to obtain such information.

In general, the physical position of a dancer’s ankle or wrist can be used to characterize a particular choreography. For instance, the position of a hand and how it changes through time can have a particular gestural choreographic interpretation. Information about the position of the dancer also allows the segmentation of the choreographic space into regions. Each time a dancer moves into a different region, a new choreographic meaning can be associated with it. We have defined several primitives to easily specify geometric regions, such as spheres, cylinders, and cubes.
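To illustrate how such primitives reduce to simple arithmetic on the sensed coordinates, here is a minimal sketch in Prolog (the predicate names in_sphere and in_box are hypothetical illustrations, not SICIB’s actual code):

% Minimal sketch: membership tests for two region shapes.
% A point is inside a sphere if its distance to the center is
% at most the radius; inside a box if every coordinate lies
% between the two opposite corners.
in_sphere((X,Y,Z), center(CX,CY,CZ), Radius) :-
    D is sqrt((X-CX)**2 + (Y-CY)**2 + (Z-CZ)**2),
    D =< Radius.

in_box((X,Y,Z), corner(X1,Y1,Z1), corner(X2,Y2,Z2)) :-
    X >= X1, X =< X2,
    Y >= Y1, Y =< Y2,
    Z >= Z1, Z =< Z2.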

Given a particular time interval, the displacement velocity and acceleration, in each of the space coordinates or as a global resultant, can be evaluated to obtain information about a choreography. Finally, sudden changes, including jumps and falls by the dancer, can provide additional choreographic information.

All the above elements are used to characterize different aspects of particular choreographies. Dancers can change any of them with their movements and thus change different aspects of the music. Therefore, we can define regions that initialize or terminate particular musical events, regions of intensity and pitch, regions associated with particular notes or instruments, regions associated with specific rhythms, and so on. Similarly, the velocity and/or acceleration of a sensor in a particular direction, the curvature and torsion of body movements, and jumps or falls can change any of the musical information. With the spatial positions of the sensors, it is fairly easy to create virtual regions of different geometric shapes, virtual walls, hallways, and doors (as will be shown when the experimental results are described).

Frenet-Serret Theorem

In order to evaluate the curvature and torsion of movements, we use the Frenet-Serret theorem (Do Carmo 1976). With it, we can characterize the movements of dancers independently of their orientation. Each sensor reports its position (x(t_i), y(t_i), z(t_i)) in space at time t_i. The curvature κ of a particular movement is given by

$$\kappa(t_i) \;=\; \frac{1}{v(t_i)^{3}}\,\sqrt{\det\!\begin{pmatrix} \gamma'(t_i)\cdot\gamma'(t_i) & \gamma'(t_i)\cdot\gamma''(t_i) \\ \gamma''(t_i)\cdot\gamma'(t_i) & \gamma''(t_i)\cdot\gamma''(t_i) \end{pmatrix}} \qquad (1)$$

where v(t_i) is the displacement velocity given by

$$v(t_i) \;=\; \sqrt{x'(t_i)^{2} + y'(t_i)^{2} + z'(t_i)^{2}}. \qquad (2)$$

The expression inside the square-root operator is the determinant of the matrix of scalar products of γ' and γ'', where

$$\gamma'(t_i) = \bigl(x'(t_i),\, y'(t_i),\, z'(t_i)\bigr), \qquad \gamma''(t_i) = \bigl(x''(t_i),\, y''(t_i),\, z''(t_i)\bigr). \qquad (3)$$

Here, x'(t_i) represents a change in position, or velocity, in direction x:


$$x'(t_i) \;=\; \frac{x(t_{i+1}) - x(t_i)}{\Delta t} \qquad (4)$$

and x''(t_i) represents a change in velocity, or acceleration, of the x coordinate. The other derivatives of the coordinates y and z are evaluated similarly.

The torsion τ of a particular body movement is obtained from the following formula:

$$\tau(t_i) \;=\; -\,\frac{\bigl(\gamma'(t_i) \times \gamma''(t_i)\bigr)\cdot\gamma'''(t_i)}{\kappa(t_i)^{2}\, v(t_i)^{6}} \qquad (5)$$

where

$$\gamma'''(t_i) = \bigl(x'''(t_i),\, y'''(t_i),\, z'''(t_i)\bigr) \qquad (6)$$

and x'''(t_i) represents the change in acceleration in the x coordinate. Since we are using the third derivative of each position (e.g., x'''(t_i)), all the calculations can be evaluated from four consecutive samples. The user can decide whether a simple derivative will be evaluated from two contiguous samples or from samples separated by an arbitrary number of intermediate samples.
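In practice, these derivatives are finite differences over the sampled positions. The following minimal sketch (hypothetical helper predicates, not SICIB’s source code) shows how four consecutive samples of one coordinate yield the single third-derivative estimate required by Equation 5:

% Minimal sketch: finite-difference derivatives of one coordinate.
% Samples are values of a single coordinate taken DT seconds apart.
deriv([], _DT, []).
deriv([_], _DT, []).
deriv([A,B|Rest], DT, [D|Ds]) :-
    D is (B - A) / DT,
    deriv([B|Rest], DT, Ds).

% Four consecutive samples give one third-derivative estimate.
third_deriv(Samples, DT, X3) :-
    deriv(Samples, DT, D1),   % velocities (3 values)
    deriv(D1, DT, D2),        % accelerations (2 values)
    deriv(D2, DT, [X3]).      % change of acceleration (1 value)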

Musical Elements

Music can be generated with the aid of Escamol and Aura. In the following sections, a short description of each is given. Interested readers should also consult Dannenberg and Brandt (1996) and Morales-Manzanares (1992).

Escamol

Escamol includes a set of predicates from which music can be generated from simple musical elements, such as pitch, rhythm, and loudness. The musical information that can be affected by Escamol and controlled by the sensors includes the following: the duration of a musical event; groups of musical intervals and rhythms associated with the musical event; the overall volume of the generated music; musical notes or motives to use as a basis for the generation of musical phrases and/or variations; the initialization/termination of predetermined musical events; the instrument type (e.g., wind, strings, percussion) to use in the interpretation of the generated music; and tempo changes of musical sequences that have been generated but not yet interpreted.

Escamol generates music from this information in real time, following compositional grammar rules. Such rules determine the criteria (style) for music composition. The rules can be changed by composers to satisfy their compositional preferences and can be changed from one piece to another. With different grammatical rules, Escamol generates music in different styles from the same input information.

Escamol has a library of high-level predicates that can be used to generate music. An example is the predicate atasca([N1/V1K1, ..., Nn/VnKn], Vars, Tempo/ST), where Ni/ViKi gives the initial note Ni used as a reference for the voice Vi to be generated in clef Ki, Vars is the number of variations to consider, and Tempo/ST represents the metronome tempo and the starting time ST.

The predicate atasca([c4/'4tr', e5/'5tr'], 5, 60/0) means that two voices (c4/'4tr' and e5/'5tr') will be generated with five variations, with the tempo set to 60 beats per minute, starting immediately at time 0. The starting note of the first voice is C4 (middle C), and it is the fourth voice in treble clef (in case the user wants to generate music notation). Other predicates allow the generation of music from sets of notes, the definition of chords with a variable number of voices, etc.

Escamol can generate more than 20 simultaneous voices, and its predicates include modal counterpoint rules (Fux 1725; Morris 1978), algorithms that simulate Alberti bass and other ostinati, traditional harmonic progressions, contemporary compositional rules, and generative grammars (Sloboda 1985, 1988; Lerdahl and Jackendoff 1983; Schwananauer and Levitt 1993).

Escamol is written in Prolog (Clocksin and Mellish 1987) with an interface in Tcl/Tk (Welch 1995) and can generate scores in MIDI, Csound (Vercoe 1986), or Aura formats; the last of these allows Escamol to generate music in real time.


Aura

Aura (Dannenberg and Brandt 1996) can be viewed as an object-oriented platform for constructing and controlling software synthesizers. A goal of Aura is to provide flexible, real-time, low-latency sound synthesis, so Aura was a natural choice for this work. Objects in Aura include musical instruments, and their attributes are used to specify pitch, attack, envelopes, vibratos, loudness, duration, etc. Aura offers great flexibility to the user, as it allows a musician to configure new instruments and control strategies. Objects send and receive information using timed asynchronous messages, which supports real-time execution.

Sound is generated in Aura by creating and “patching” together various objects. For example, an oscillator can be created and patched to the Audio Output object to play a tone. Messages of the form “set attribute to value” are used to control parameters such as frequency and amplitude. Aura offers a large number of built-in sound objects, including oscillators, filters, envelope generators, sample players, and mixers. New objects can be written in C++.

Aura also includes a simple scripting language allowing users or other programs to instantiate objects, patch them together, and set parameters in real time using text-based commands. This text interface, combined with Unix pipes, allows Aura to communicate with SICIB and Escamol.
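The plumbing can be as simple as writing lines of text to the synthesis process. As a minimal sketch (assuming SWI-Prolog’s process library and an aura executable on the search path; the command strings would follow Aura’s text interface, which we do not reproduce here):

% Minimal sketch: start a synthesis process and send it text
% commands through a pipe. The executable name is an assumption.
:- use_module(library(process)).

start_synth(In) :-
    process_create(path(aura), [], [stdin(pipe(In))]).

send_cmd(In, Cmd) :-
    format(In, "~w~n", [Cmd]),
    flush_output(In).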

Integration Between Music and Dance

The choreographic elements previously described can affect any of the musical elements just described. The association between dance and music is specified by a rule-based language that allows one to define which choreographic aspects to consider from a particular sensor, which musical aspects are controlled by that sensor (i.e., what the choreographic changes of a sensor mean in musical terms), and which musical and choreographic aspects (rules) take precedence over others.

The information from the sensors is transmitted with the predicate pos(sensor_ID, (X,Y,Z), Time), where sensor_ID is a unique name associated with each sensor, (X,Y,Z) are its space coordinates, and Time is a time tag indicating when the data was produced.
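For example, a hypothetical reading from a wrist sensor might arrive as pos(s1, (0.35, 1.20, 0.90), 14520), placing sensor s1 at those scaled coordinates at time tag 14520 (the values here are illustrative only).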

Choreographic regions can be specified. A rectangular region is specified by two spatial points (the lower left-hand corner and the upper right-hand corner) as re_region(region_ID, (X1,Y1,Z1), (X2,Y2,Z2)), where region_ID is a unique name associated with each region. Other regions can be specified as well. For instance, spherical and cylindrical regions can be defined with an inner and an outer radius, taking the center of the region to be the reference point of the Flock of Birds transmitter:

sp_region(region_ID, radius1, radius2)
cy_region(region_ID, radius1, radius2)

Here, we assume that a cylindrical region has infinite height. An entire sphere or cylinder is obtained by setting the inner radius to zero. The definition of regions allows one to define virtual walls (thin regions) and create different scenarios. A user can define as many regions as desired and use them in the rules. Rules in SICIB have the following syntax:

Si IF <condition>*
   THEN <action>*

where Si is the name of the ith sensor, <condition>* is a conjunction of choreographic conditions that must be satisfied, and <action>* is a conjunctive set of musical actions to perform.

The conditions can take one of several forms. The construct curve_&_torsion(S, T, DT, K, R) evaluates the curvature K and torsion R followed by a sensor S, starting from cycle T and considering measurements back in time every DT cycles. The construct pos(S, P, T) tests whether the position of a sensor is within a specific region, and jump(S, T, H) tests for a change above a particular height H of sensor S at cycle T. Finally, fall(S, T, H) tests for a change below a particular height H of sensor S at cycle T. Arbitrary Prolog predicates are also possible.
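Combining these constructs, a rule of the kind used in the pieces described later might read as follows (a hypothetical example; the region name wall_1 and the atasca arguments are illustrative only):

s1 IF pos(s1, wall_1, T)
   THEN atasca([c4/'4tr'], 3, 60/0)

That is, whenever sensor s1 is inside region wall_1, Escamol generates three variations starting from C4 at 60 beats per minute.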


Other predicates for comparing positions of sensors have been defined but were never used in the experiments. In the above conditions, if a cycle T is not specified, the most recent cycle is considered (i.e., the most recent data from the sensors). This parameter allows one to specify delayed reactions to particular choreographic events.

The allowed actions include arbitrary Escamol predicates, arbitrary Aura commands, and arbitrary Prolog predicates.

The rules can change graphical displays as well. Although only a rudimentary interaction has been implemented, any of the choreographic elements previously described can change graphical displays, such as the color, rotation, and/or displacement of graphical figures, using any of the capabilities of Performer (a graphical tool that runs on Silicon Graphics workstations) through Escamol predicates.

Based on information from the sensors, SICIB evaluates which rules should be followed given the choreographic information considered in its current set of rules. All the rules that satisfy the current set of conditions can be active at the same time, which allows one to mix several choreographic elements simultaneously. Alternatively, only the first rule (in the set of rules) that satisfies its conditions can be executed. This option imposes a rule order, thereby allowing one to specify precedence criteria among rules (i.e., earlier rules have higher precedence than later rules).
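In Prolog, the two activation policies differ only by a cut. A minimal sketch (assuming, for illustration, that rules are stored as rule(Conditions, Actions) facts, which is not necessarily SICIB’s internal representation):

% Fire every rule whose conditions currently hold (rules can mix).
fire_all :-
    forall(rule(Cond, Action),
           ( call(Cond) -> call(Action) ; true )).

% Fire only the first rule, in clause order, whose conditions hold.
fire_first :-
    rule(Cond, Action), call(Cond), !, call(Action).
fire_first.   % succeed quietly when no rule applies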

The Architecture of SICIB

The control of SICIB is driven by an interface written in Tcl/Tk, as shown in Figure 1. With the interface, a user can start Prolog (to start the integration rules and Escamol), Csound or Aura, and a sensor data program written in C. The sensor data program sets the serial port of a Silicon Graphics workstation, specifies the number of sensors to use, their baud rate (currently fixed at 38,400 bits per second), and the desired information (position in space). The information from the sensors is decoded and scaled down by a constant factor for convenience. The C program also filters the information produced by the sensors and adds a time tag to it. Positions that do not change by more than a particular threshold (which can vary for each sensor) are ignored, which means that musical changes can only be produced by body movements. Despite the robustness of the sensors, the filter is also used to eliminate the maximum and minimum values of five sequential samples when the curvature and torsion predicate is evaluated, because with second and third derivatives the results are very sensitive to noise.
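The trimming step is easy to state: of every five consecutive samples, the largest and smallest values are dropped before the difference quotients are formed. A minimal sketch (a hypothetical Prolog rendering of what the C filter does):

% Minimal sketch: drop the minimum and maximum of a window of
% five samples, keeping the middle three for the derivatives.
trim_extremes(Window, Middle) :-
    msort(Window, Sorted),                 % sort, keeping duplicates
    append([_Min|Middle], [_Max], Sorted).

For example, trim_extremes([3,9,4,1,5], M) yields M = [3,4,5].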

The filtered data is fed to a Tcl/Tk function through a Unix pipe and sent to a Prolog program. The Prolog program includes Escamol and the “if-then” rules described in the previous section. From the data of the sensors, different choreographic information is obtained, and the conditions of the rules are tested.

Figure 1. The control of SICIB is driven by an interface written in Tcl/Tk.


The rules whose conditions are satisfied are activated, and their musical actions are performed. Musical actions invoke Escamol to generate music, which is then sent to Aura. Alternatively, musical actions can be sent directly to Aura.

In SICIB, different rule files can be used for different choreographic pieces, even during the same piece. This means that different musical and choreographic meanings can be attributed to the same sensor at different instances in time (within a choreography or between choreographies). The user and the dancer can change the rule set at any time through SICIB’s interface or through body movements, respectively. Additionally, the information from the sensors can be activated or deactivated at different times during a particular performance through the interface.

Experiments and Results

Several live performances have employed SICIB. In this section, three pieces from the most recent performance are described, illustrating some of SICIB’s capabilities. All of them were performed by one dancer and live musicians (playing flute, clarinet, and piano). In all three pieces, the only electroacoustic music was generated by the dancer using SICIB. Three different composers, Jonathan Berger (Stanford University), Roger Dannenberg (Carnegie Mellon University), and Roberto Morales (University of Guanajuato, Mexico), worked with a choreographer and dancer (Raúl Parrao) in the definition of their pieces. The three pieces were put together for a single concert by the composers and the dancer/choreographer. For practical and aesthetic reasons, they use similar choreographic features.

In all three pieces, the longest delays between motion and the resulting sound were close to 0.5 sec. The sensors were attached to the dancer and suspended from the ceiling, which imposed movement restrictions on the dancer and demanded creative solutions. The dancer felt constrained at the beginning of the experiments but was able to design adequate choreographies and was not affected in his performance.

Jonathan Berger’s Arroyo

Arroyo is part of a set of concert works and sound installations in which a virtual labyrinth is created using sound to represent physical delimiters such as walls or ceilings. In Arroyo, the dancer must traverse a three-dimensional virtual maze, relying solely upon sonic cues as a guide. The piece thus considers virtual reality from a sensory-deprived standpoint rather than providing the standard barrage of multi-sensory stimuli. As in any maze, backtracking and retracing steps provide the only way out of wrong turns and mistaken paths. Thus, musical structure is provided by this natural process of “finding the way.” (In the future, a number of mazes will be provided by the composer for the piece. An interface in which performers or listeners can create their own mazes will also be added.) The instrumentalists react to the dancer’s arrival at key points in the maze by augmenting the digital audio cues and clues; they also provide occasional “hints” when the dancer seems to be floundering.

The maze in Arroyo is taken from an archaeological map of part of the Chaco de Arroyo, an 11th-century Anasazi edifice. The maze comprises nine chambers (see Figure 2). Each room corresponds to a musical segment. The dancer wears three sensors: one on each wrist and one on an ankle. The sensors on the hands are used to detect the virtual walls of the maze. Each time one of these sensors “touches” a virtual wall (i.e., crosses a region), a particular sound is produced. These sensors are used by the dancer as feedback to identify the walls and entrances of the maze. The dancer’s goal is to reach the central room. The sensor on the ankle detects the room (region) where the dancer is located. Each time the dancer moves to a different room (region), a unique musical event associated with that particular room is triggered. The electronic sounds are derived from a digitally processed squawk of a macaw (a South American bird whose skeletal remains were found in the central room of the Arroyo site). As the dancer enters a room, the live musicians segue into the musical material that corresponds to that room.


Roger Dannenberg’s Aura

Roger Dannenberg’s composition Aura involves two sensors attached to the wrists of a dancer. There are several rule sets for the sensors (i.e., the sensors have different musical and choreographic meanings at different times during the piece). The rule sets are changed using SICIB’s graphical interface as scored music is performed. One of the rule sets defines four rectangular columns (regions) and a region above a certain height (see Figure 3). Each time the dancer “touches” one of the rectangular regions, Escamol generates a musical phrase that is synthesized by Aura. Similarly, if the dancer raises an arm above a particular height, a musical event is produced, but this time only once (i.e., the dancer must lower his or her arm and raise it again to produce another musical event). Other rule sets define an environment in which the movements of the dancer change different parameters of granular synthesis. The position of the sensors allows an almost continuous change in some of the continuous musical parameters used for granular synthesis. Another rule set is used to gradually lower the volume of the music, producing more natural transitions between rule sets. In addition, a projected computer animation is changed each time a musical event is triggered.

Roberto Morales-Manzanares’ Trío de Cuatro

Roberto Morales-Manzanares’ composition Trío de Cuatro also uses two sensors attached to the wrists of a dancer. The piece includes two different sets of rules that are controlled by the dancer. One set divides the space into four regions, each with a different granular synthesis motive. The dancer can change regions and continuously alter granular synthesis parameters with his or her movements. The other rule set defines a central cylindrical region with a surrounding wall. Inside the region, no sound is produced. Each time the dancer “touches” the wall with either hand, a sound is produced. Outside the region, notes are produced according to the distance from the center, creating a glissando effect with the dancer’s movements: the closer the dancer is to the cylinder, the higher the pitch of the notes produced. This allows the dancer to create musical dialogues with a live musician. Additionally, the dancer can toggle between rule sets by raising a hand above a certain height. The height value can be set sufficiently high that the dancer can only change rule sets with a jump, creating a more dramatic effect (see Figure 4).

Experiences with SICIB

The development of SICIB provided several insights into what to consider in the development of similar systems. The most difficult and time-consuming part was programming the communication with the sensors and calibrating them.

Figure 2. The maze in Arroyo, comprising nine chambers.

Figure 3. A representation of the region and height rule sets for SICIB.


Michael Lee’s contribution in this area was crucial to the success of SICIB. Once the communication program was properly running, a much more creative environment was established for composers and choreographers.

Another critical aspect to consider in systems like SICIB is the possible time delay between the dancer’s movement and the music the system produces in response. The audience must feel the interaction of the dancer with the music, so long time delays cannot occur. Because the sensors can produce a large amount of data every second, data reduction is essential, and several considerations emerged from practice and experimentation. Rules that can be activated very often (e.g., by any movement of the dancer) are normally related either to short musical events or to changes in musical parameters. If a composer wants to trigger long musical sequences, either those rules are activated only occasionally (owing to the nature of the particular choreographic event assigned to them), or they are activated only once (until another equivalent choreographic event is encountered).

In some experiments where we allowed musical events to be present and generated simultaneously, it was difficult to differentiate what the sensors were doing, and the system could eventually become saturated with musical output. There must be a balance between the number of notes (related to the duration of the generated music) and the rate at which the movements (and therefore the music) are performed. Close coordination with the dancer can avoid these problems. Slow movements can still elicit a rapid series of musical events, and fast movements can control long sustained sounds.

It is important for dancers to learn the musical consequences of their movements. We have used at most four sensors on a dancer, as the dancer can be overloaded with information, making it difficult to follow the musical implications of physical gestures.

It is important to have adequate time and space for testing and rehearsing. One of the most artistically limiting factors for us has been the difficulty of assembling composers, dancer, and equipment in a large dance space with enough time to experiment and rehearse.

Once the sensors were properly running, the rich choreographic vocabulary, Escamol’s and Aura’s musical capabilities, and the natural coupling between them through a simple rule-based system allowed the composers to express their rules in relatively short times. We spent less than one day on the definition and tuning of each of the above pieces.

Even though the pieces do not include all the subtleties of the choreographic and musical capabilities described in this article, these features are available in SICIB and were tested in simple experiments. For practical and aesthetic reasons, they were not included in the previously described pieces.

Conclusions and Future Work

With SICIB, a dancer need not adjust movements to a predefined musical piece. This allows greater freedom and opens new areas for dance improvisation and performance. From a musical perspective, SICIB represents a new virtual instrument that produces music through body movements, offering new possibilities for music composition, improvisation, and performance. SICIB has been used primarily as a dialogue facilitator between live performance musicians and dancers, who produce sounds with their movements. We believe that this interaction has been made possible by SICIB’s flexibility.

The use of a very high-level language (Prolog) greatly facilitated making adjustments in the limited rehearsal times available to us on stage. In principle, we could have done everything directly in C++ using Aura, but delegating user-interface, composition, and control tasks to other programs made more sense and worked out quite well. This experience has prompted the design of a very high-level embedded language for the next version of Aura.

Figure 4. To change rule sets, the dancer simply moves beyond the defined region.

Although several systems have been proposed with very good results, we believe that SICIB offers a good alternative, with rich choreographic information (such as curvature and torsion of movements, falls, jumps, acceleration, continuous changes, regions, etc.), complex musical events, and a combination of continuous parametric control and discrete “triggers” of musical events within a single, flexible control scheme. Finally, the syntax employed in the rules to link music and dance is fairly simple yet quite powerful.

There are several future research directions that we are considering at the moment. In particular, we need broader engagement with the use of SICIB by dancers and musicians. We want the dancer not to think too much about his or her movements, yet still produce a coherent piece of music. At the same time, as the music produced by the dancer may be just one part of a larger ensemble, we want coherence in the entire piece. This may require sensors for live musicians as well as live dancers.

Although the choreographic primitives in the rules have been adequate so far, we would like to explore new primitives and provide the user with a three-dimensional graphical interface for defining regions. We must experiment with several dancers at the same time and with wireless sensors. We are also exploring the use of cameras to capture the whole body position. Finally, we are considering ways to correlate lighting effects with a dancer’s movements.

Acknowledgments

We would like to thank the group at the Centro Multimedia (CM) of the Centro Nacional de las Artes (CNA), especially director Andrea Di Castro, for continuous support in the realization of this work. We also thank the Fideicomiso para la Cultura México-Estados Unidos; Raúl Parrao for his valuable suggestions and astonishing performances; Michael Lee, who contributed significantly to the development of SICIB; and Silicon Graphics and Ascension Technologies, who kindly loaned equipment for performances.

References

Allen, J. 1987. Natural Language Understanding. Menlo Park, California: Benjamin/Cummings.

Ascension Technology Corporation. 2000. “Ascension Technology Corporation Flock of Birds.” http://www.ascension-tech.com/products/flockofbirds/flockofbirds.htm.

Camurri, A., et al. 2000. “EyesWeb: Toward Gesture and Affect Recognition in Interactive Dance and Music Systems.” Computer Music Journal 24(1):57–69.

Clocksin, W. F., and C. S. Mellish. 1987. Programming in Prolog. Berlin: Springer-Verlag.

Coniglio, M. 2000. “MidiDancer.” http://www.troikaranch.org/mididancer.html.

Dannenberg, R., and E. Brandt. 1996. “A Flexible Real-Time Software Synthesis System.” Proceedings of the 1996 International Computer Music Conference. San Francisco: International Computer Music Association, pp. 270–273.

Do Carmo, M. P. 1976. Differential Geometry of Curves and Surfaces. Upper Saddle River, New Jersey: Prentice Hall.

Fux, J. J. 1725. Gradus ad Parnassum. Trans. Alfred Mann. New York: Norton, 1943.

Griffith, N., and M. Fernstrom. 1998. “LiteFoot: A Floor Space for Recording Dance and Controlling Media.” Proceedings of the 1998 International Computer Music Conference. San Francisco: International Computer Music Association, pp. 475–481.

Johnstone, E. 1991. “A MIDI Foot Controller: The PodoBoard.” Proceedings of the 1991 International Computer Music Conference. San Francisco: International Computer Music Association, pp. 123–126.

Lerdahl, F., and R. Jackendoff. 1983. A Generative Theory of Tonal Music. Cambridge, Massachusetts: MIT Press.

Maes, P., et al. 1995. “The ALIVE System: Full-Body Interaction with Autonomous Agents.” Proceedings of the Computer Animation ’95 Conference. Piscataway, New Jersey: IEEE Press, pp. 11–18.

Morales-Manzanares, R. 1992. “Non Deterministic Automatons Controlled by Rules for Composition.” Proceedings of the 1992 International Computer Music Conference. San Francisco: International Computer Music Association, pp. 400–401.

Morales-Manzanares, R., and E. Morales. 1997. “Music Composition, Improvisation, and Performance Through Body Movements.” Paper presented at the AIMI International Workshop KANSEI: The Technology of Emotions, 3–4 October, Genova, Italy.

Morris, R. O. 1978. Contrapuntal Technique in the Sixteenth Century. Oxford: Oxford University Press.

NoTAM. 2000. “Control Suit.” http://www.notam.uio.no/notam/kontrolldress-e.html.

Paradiso, J., et al. 1997. “The Magic Carpet: Physical Sensing for Immersive Environments.” Proceedings of the International Conference on Computer Human Interfaces. New York: Association for Computing Machinery, pp. 277–278.

Paradiso, J., K. Hsiao, and E. Hu. 1999. “Interactive Music for Instrumented Dancing Shoes.” Proceedings of the 1999 International Computer Music Conference. San Francisco: International Computer Music Association, pp. 453–456.

Pinkston, R., J. Kerkhoff, and M. McQuilken. 1995. “A Touch Sensitive Dance Floor/MIDI Controller.” Proceedings of the 1995 International Computer Music Conference. San Francisco: International Computer Music Association, pp. 224–225.

Rokeby, D. 1999. “The Very Nervous System.” http://www.interlog.com/~drokeby/vns.html.

Schwananauer, S., and D. Levitt. 1993. Machine Models of Music. Cambridge, Massachusetts: MIT Press.

Siegel, W., and J. Jacobsen. 1998. “The Challenges of Interactive Dance: An Overview and Case Study.” Computer Music Journal 22(4):29–43.

Sloboda, J. 1985. The Musical Mind: The Cognitive Psychology of Music. Oxford: Clarendon Press.

Sloboda, J., ed. 1988. Generative Processes in Music. Oxford: Clarendon Press.

STEIM. 2000. “BigEye.” http://www.steim.nl/bigeye.html.

Van Raalte, C. 1999. “Old Dog, New Tricks: Biofeedback as Assistive Technology.” Paper presented at the California State University, Northridge Center On Disabilities 1998 Conference, March 1998, Los Angeles, California. Also available online at http://www.dinf.org/csun_98/csun98_098.htm.

Vercoe, B. 1986. “Csound: A Manual for the Audio Processing System and Supporting Programs.” Program documentation. Cambridge, Massachusetts: MIT Media Lab.

Welch, B. 1995. Practical Programming in Tcl and Tk. Englewood Cliffs, New Jersey: Prentice Hall.

Winkler, T. 1995. “Making Motion Musical: Gesture Mapping Strategies for Interactive Computer Music.” Proceedings of the 1995 International Computer Music Conference. San Francisco: International Computer Music Association, pp. 261–264.