Designing Interactive Audiovisual Systems for Improvising Ensembles

William Hsu

Department of Computer Science, San Francisco State University

San Francisco, CA 94132, USA

[email protected]

Abstract. Since 2009, I have been working on real-time audiovisual systems, used in many performances with improvising musicians. Real-time audio from a performance is analysed; the audio descriptors (along with data from gestural controllers) influence abstract animations that are based on generative systems and physics-based simulations. The musicians are in turn influenced by the visuals, essentially in a feedback loop. I will discuss technical and aesthetic design considerations for such systems and their integration into the practices of improvising ensembles, and share some experiences from musicians and audience members.

Keywords: Interaction design, improvisation, audio descriptors, generative systems, physics-based simulations

Introduction

For years I have been involved in free improvisation with both acoustic instruments and live electronics. As a performer, software designer and listener, I have been very interested in the associations made by myself (and others) between sonic events/gestures and visual/physical phenomena. I am attracted to visual systems that exhibit the oppositions and tensions that I enjoy in improvised music.

Since 2009, I have been working on real-time audiovisual systems for non-idiomatic free improvisation. I have used these systems in over 50 performances with improvising musicians, including Chris Burns, John Butcher, James Fei, Gino Robair, Birgit Ulher, and many others; many such events were evening-long concerts involving up to six different systems/pieces. Performances have been hosted at a variety of venues, such as ZKM (Karlsruhe), STEIM (Amsterdam), CNMAT (Berkeley), the Songlines series at Mills College, the San Francisco Electronic Music Festival, and the NIME and SMC conferences.

Figure 1 shows a typical block diagram of my systems. In performance, two or more channels of audio enter the system from microphones or the venue's sound system. The audio is analyzed by a Max/MSP patch that extracts estimates of loudness and tempo, and some timbral features. Audio descriptors are sent via OpenSoundControl messages to an animation environment, usually implemented in Processing or OpenFrameworks. These interactive animations are influenced by the real-time audio descriptors from the musicians' performance, and by physical gestures from controllers.
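As a rough illustration of this pipeline, the minimal Processing sketch below uses the oscP5 library to receive descriptors and map them to high-level animation parameters. The address patterns, descriptor names, and port number are hypothetical placeholders, not the actual message formats of my patches.

import oscP5.*;   // OSC library for Processing
import netP5.*;

OscP5 osc;
float loudness = 0;    // most recent loudness estimate, assumed normalized 0..1
float brightness = 0;  // hypothetical timbral descriptor, assumed 0..1

void setup() {
  size(1280, 720);
  osc = new OscP5(this, 12000);  // listen on the port the analysis patch sends to
}

void oscEvent(OscMessage msg) {
  // Hypothetical address patterns; actual descriptor messages will differ.
  if (msg.checkAddrPattern("/descr/loudness")) {
    loudness = msg.get(0).floatValue();
  } else if (msg.checkAddrPattern("/descr/brightness")) {
    brightness = msg.get(0).floatValue();
  }
}

void draw() {
  // Descriptors drive high-level parameters; the animation's own rules
  // would supply the low-level detail (omitted here).
  background(0);
  fill(64 + 191 * brightness);
  float energy = lerp(0.1, 1.0, loudness);
  ellipse(width/2, height/2, 400 * energy, 400 * energy);
}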

Video generated by the animation environments is usually projected behind or above the musicians. Figure 2 shows a typical stage setup, from a set by EKG (Kyle Bruckmann and Ernst Karel) at the 2013 San Francisco Electronic Music Festival, with video generated by my system Tes. The animations are visible to the musicians and influence their performance, thus forming a feedback loop. Each system has both components that I control, and autonomous components with their own behavioural rules. Performing with one of my systems involves ongoing negotiation between the controllable components and the autonomous modules.

Figure 1. Block Diagram of Interactive Audiovisual Performance System

Design Goals and Practical Considerations

These are the initial goals for my systems:

• Each system will be primarily used in the context of abstract free improvisation.
• There will be minimal use of looping/pre-sequenced materials; each system will behave like a component in an improvising ensemble.
• Each system will be a "playable" visual instrument; system behaviour can be influenced by physical controllers.
• Each system has autonomous components that guide its behaviour; this behaviour may be influenced by real-time audio.
• Each system should evoke the tactile, nuanced and timbrally rich gestures common in free improvisation.
• There should be cross-references between audio and visual events. However, overly obvious mappings should be avoided.

Figure 2. Performance by EKG with video by Bill Hsu, 2013 San Francisco Electronic Music Festival. Photo: PeterBKaars.com

Non-idiomatic free improvisation is a social music-making practice, producing open sonic conversations that tend not to be easily interpreted or analysed in non-musical terms. Gaver et al. (2003) discuss the role of ambiguity in interactive design; some of their comments are useful for understanding the practice and experience of free improvisation, and how one might approach audiovisual design for concerts of improvised music. For example, the authors could easily have been referring to sonic gestures in improvisation, which "may give rise to multiple interpretations depending on their precision, consistency, and accuracy on the one hand, and the identity, motivations, and expectations of an interpreter on the other." Another comment would find resonance with many fans of improvised music: "…the work of making an ambiguous situation comprehensible belongs to the person, and this can be both inherently pleasurable and lead to a deep conceptual appropriation of the artefact."

To preserve the essential openness and ambiguity of a free improvisation, I feel that the visuals should not over-determine the narrative of the performance. Hence, most of my work utilizes generative abstract visual components that exhibit a range of behaviours. My preference is for unstable, evolving forms that facilitate setting up tensions between abstract and referential elements, for a richer visual experience. For example, a particle swarm or smoke simulation may move in complex, pseudo-random configurations, or self-organize into recognizable structures. These transitions evoke Friedrich Hayek's concept of "sensory order", wherein observers organize raw chaotic stimuli into perceptually recognizable objects (Hayek 1999).

Each of my systems is primarily based on a single complex process, usually a generative system or a physics-based simulation. The movement and evolution of the visual components in each system follow the rules of the underlying process. Audio descriptors from the real-time performance audio affect mostly high-level parameters of the base process; they do not map directly to low-level details of the visual components. From my experiments and observations, allowing the underlying process to determine the low-level details of the visuals results in more "organic" and aesthetically consistent movements and transitions.
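The following is a minimal sketch of this separation of concerns: an exponentially smoothed descriptor nudges a single high-level parameter, while the underlying process supplies all the low-level motion. The names, the stand-in noise process, and the smoothing constant are illustrative assumptions.

float rawLoudness = 0;  // would be updated from incoming OSC descriptors
float turbulence = 0;   // high-level parameter consumed by the process

void setup() {
  size(400, 400);
}

void draw() {
  rawLoudness = noise(frameCount * 0.02);           // stand-in for live audio
  turbulence += 0.05 * (rawLoudness - turbulence);  // slow exponential smoothing
  // The process itself (here, Perlin-noise wandering) generates the
  // low-level detail; audio only scales its overall energy.
  background(0);
  stroke(255);
  for (int i = 0; i < 50; i++) {
    float speed = 0.01 * (0.2 + turbulence);
    point(width * noise(i, frameCount * speed),
          height * noise(i + 100, frameCount * speed));
  }
}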

Each system receives several input streams, representing events occurring at (often) widely different rates. For example, an audio descriptor representing a well-defined percussive onset may occur extremely infrequently, while one that represents loudness or a continuous timbral characteristic may be presented at regular intervals of (say) 100-200 milliseconds. In addition, events in the interactive animation subsystem also have a range of temporal distributions. A rapid-onset visual event, such as the sudden "birth" of a small cluster of tiny objects that expands into visibility, may take a few hundred milliseconds from the initial trigger to its final, relatively stable state. On the other hand, a broad sweeping gesture, representing (for example) a tidal flow in a fluid system, may take several seconds to complete, with its effects being visible long after the gesture initiation. Hence, care must be taken with each system to manage each event type based on typical rates of occurrence. For example, it may be intuitive for a percussive audio onset event to trigger the formation of a small object cluster in the animation. However, if the live performer is a busy percussionist, the visual environment may quickly become cluttered and overwhelmed with objects. A certain amount of thresholding is always necessary for managing the rates of automatic events; the system operator should also have the option to adjust the responsiveness of the animation system to selected event types.
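The sketch below illustrates one simple form of such thresholding in Processing: onsets below a strength threshold are ignored, and a refractory period suppresses triggers that arrive too soon after the previous one. The threshold, refractory interval, and function names are illustrative assumptions, not values from my systems.

ArrayList<PVector> clusters = new ArrayList<PVector>();
float onsetThreshold = 0.4;  // ignore onsets weaker than this (assumed 0..1 scale)
int refractoryMs = 250;      // minimum gap between triggered clusters
int lastTriggerMs = -10000;

void setup() {
  size(800, 600);
}

// Would be called from oscEvent() when an onset descriptor arrives.
void handleOnset(float strength) {
  int now = millis();
  if (strength < onsetThreshold) return;           // too weak: ignore
  if (now - lastTriggerMs < refractoryMs) return;  // too soon: suppress
  lastTriggerMs = now;
  clusters.add(new PVector(random(width), random(height)));
}

void draw() {
  background(0);
  noStroke();
  fill(220);
  for (PVector c : clusters) ellipse(c.x, c.y, 8, 8);
}

// For testing without live audio: simulate onsets with key presses.
void keyPressed() {
  handleOnset(random(1));
}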

There is a core of experienced musical collaborators who have worked with my systems regularly. When working with regulars, it is easy for us to quickly converge on a "set list" of pieces for an evening's concert. A brief minute or so before the performance with each piece is usually sufficient for musicians to re-familiarize themselves with the system. I also collaborate with musicians who have never worked with audiovisual performance software, with minimal time for rehearsals before an evening's concert. Hence, interaction modalities have to be intuitive and easy to explain; with only a few minutes' preparation, a musician should understand a system's behaviour sufficiently to explore and improvise with it, on top of the cognitive demands of negotiating an improvisation.

Collaborating musicians have taken several approaches to working with my audiovisual systems. A few have chosen not to look at the video at all; they felt that they wanted to focus on sound. Some have mentioned the temptation to try to "push" the animations into specific outcomes, via their sonic gestures; this temptation may obviously distract from music-making. Most of my fellow performers have preferred some conversations about the chosen systems before a performance, and actively engage with the video when playing.

Related Work

Audiovisual performance is widespread in the club music community. The VJLabor website (http://vjlabor.blogspot.com), for example, showcases such work. The music is often beat/loop-based, with relatively stable tempos and event rates, and little variation in space or use of silence. The visuals often incorporate loops and simple cycles, and work with pre-recorded footage that may be manipulated in real time.

Interactive video is also a component in many compositions. For example, violinist Barbara Lueneburg's DVD Weapon of Choice (http://www.ahornfelder.de/releases/weapon_of_choice/index.php) includes a sampling of composed pieces with live or static video, by Alexander Schubert, Yannis Kyriakides, Dai Fujikura, and others. Some of the pieces incorporate live audio or sensor input.

In my experience, interactive visuals tend to be significantly less common in free improvisation. The strategies that work well in the VJ and composer communities, in my opinion, often do not map well to non-idiomatic free improvisation, with event densities that may vary widely in short time windows, use of space and silence, and overall conversations that develop from moment to moment, with little or no pre-arranged compositional structure. The technologies and internal details of these systems also tend to be poorly documented.

A major early audiovisual performance project involving improvisers is Levin and Lieberman's Messa di Voce (Levin and Lieberman, 2004). The project appears to be primarily designed around Joan La Barbara and Jaap Blonk, two vocalists who are renowned for their exploration of extended vocal techniques in performance. Real-time camera and audio input from the performers drive an array of generative processes, including particle systems and fluids. Messa comprises twelve short, theatrically effective sections over a total of 30-40 minutes; my own work tends toward longer sections of 8-10 minutes each, with each section based on a distinct generative process, for more extended "conversations". In Messa, La Barbara and Blonk are always the centers of visual attention on stage; camera-based tracking of their bodies is a significant component of the live interactions. With my systems, my collaborators tend to focus on working with abstract sound; while the musicians are visible on stage, their movements are not tracked, and the visual attention of the audience tends to be primarily on the live video. From the online documentation, it is not clear if Messa di Voce has been performed with improvisers other than La Barbara and Blonk, or whether it has been revived since the 2004/5 performances and installations.

More recent projects include the performance duo klipp av, focusing on live audiovisual cutup/splicing (Collins and Olofsson, 2006); trombonist Andy Strain's audiovisual pieces for schoolchildren (http://andystrain.com); Billy Roisz's live video work, which incorporates relatively minimalist transformations of still images, found footage and analog artifacts (http://billyroisz.klingt.org/video-works); and William Thibault (http://www.vjlove.com), who often manipulates dense data network visualizations in performance with free improvisers.

Example Systems

To date, I have built over a dozen distinct audiovisual performance systems, some with numerous variants. Most of these systems have been used in many performances with improvising musicians. The first was the particle system Interstices, introduced in (Hsu 2009). I will focus mostly on four of the later systems: FlowForms, Flue, Fluke, and Leishmania.

FlowForms is based on the Gray-Scott reaction-diffusion algorithm (Pearson 1993), which has been widely used in the generative art community. Two simulated "chemicals" interact and diffuse in a 2D grid, according to simple equations. Parameters that control the concentrations of the simulated chemicals are modified by and track activity from the real-time audio. Sonically active sections result in more robust visuals; long periods of silence will result in fragmentation of the patterns, eventually leaving a dark screen. In addition, hidden masks representing shapes or images can be introduced to guide the formation of visible patterns. Figure 3 shows an example of a simulated chemical flowing into a pre-loaded image mask.
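For reference, the sketch below implements the basic Gray-Scott update in Processing. The diffusion rates and feed/kill parameters are typical published values that produce self-organizing spot patterns, not the parameters used in FlowForms; the audio-driven modulation and hidden masks are omitted.

int W = 200, H = 200;
float[][] u = new float[W][H], v = new float[W][H];   // chemical concentrations
float[][] u2 = new float[W][H], v2 = new float[W][H]; // next-step buffers
float Du = 0.2, Dv = 0.1;    // diffusion rates (stable for this grid and time step)
float F = 0.035, k = 0.065;  // feed and kill rates: spot-forming regime

void setup() {
  size(400, 400);
  for (int x = 0; x < W; x++)
    for (int y = 0; y < H; y++) { u[x][y] = 1; v[x][y] = 0; }
  // Seed a small square of the second chemical in the center.
  for (int x = W/2 - 5; x < W/2 + 5; x++)
    for (int y = H/2 - 5; y < H/2 + 5; y++) { u[x][y] = 0.5; v[x][y] = 0.25; }
}

// Discrete 5-point Laplacian with clamped edges.
float lap(float[][] a, int x, int y) {
  return a[max(x-1, 0)][y] + a[min(x+1, W-1)][y]
       + a[x][max(y-1, 0)] + a[x][min(y+1, H-1)] - 4 * a[x][y];
}

void draw() {
  for (int step = 0; step < 10; step++) {  // several updates per frame
    for (int x = 0; x < W; x++)
      for (int y = 0; y < H; y++) {
        float uvv = u[x][y] * v[x][y] * v[x][y];  // reaction term
        u2[x][y] = u[x][y] + Du * lap(u, x, y) - uvv + F * (1 - u[x][y]);
        v2[x][y] = v[x][y] + Dv * lap(v, x, y) + uvv - (F + k) * v[x][y];
      }
    float[][] t = u; u = u2; u2 = t;  // swap buffers
    t = v; v = v2; v2 = t;
  }
  loadPixels();
  for (int py = 0; py < height; py++)
    for (int px = 0; px < width; px++)
      pixels[py * width + px] = color(255 * (1 - v[px * W / width][py * H / height]));
  updatePixels();
}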

Figure 3. Gray-Scott Reaction-Diffusion Process with Transition into Hidden Image Mask

Flue is a smoke simulation, based on a port of Jos Stam's stable fluids code (Stam 1999). Two smoke sources move through space, each activated and pushed by a real-time audio stream. Again, activity levels in the visual simulation roughly track activity levels in the audio, but there are no simple mappings of low-level behaviors. Hidden masks can be introduced to constrain the movement of the smoke. Figure 4 shows simulated smoke coalescing into the shape of a skull, then dispersing.

Fluke is based on Stephan Rafler's extension of Conway's Game of Life (Rafler 2011). The algorithm is very compute-intensive; I adapted Tim Hutton's OpenCL implementation, which runs on the GPU. Real-time audio activity triggers the formation of structures in a 2D space; algorithm parameters are constantly modulated so that the visual activity level tracks activity levels in the performance audio.

Figure 4. Smoke Simulation with Smoke Sources Filling Hidden Skull Mask

Leishmania is an interactive animation environment that visually resembles colonies of single-cell organisms in a fluid substrate. Each cell-like component has hidden connections to and relationships with other components in the environment. The colonies evolve and "swim" through the substrate, based on a combination of colonial structure and inter-relationships, and flows in the fluid substrate that might be initiated by gestural input. Leishmania has been used extensively in performance with Christopher Burns' Xenoglossia interactive music generation system. These two systems communicate with one another in a variety of ways. The animation is influenced by the real-time analysis of audio from Xenoglossia. In addition, the two systems exchange OSC network messages, informing each other of events that are difficult to extract from automatic analysis, such as pending section changes, structured repetitions with variations, and the configuration of animation components.
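The following Processing fragment sketches how such an exchange might look with oscP5; the address patterns, ports, and message fields are hypothetical, chosen only to illustrate the idea of sharing structural events that audio analysis cannot reliably recover.

import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress musicSystem;

void setup() {
  osc = new OscP5(this, 9000);                     // listen for the music system's events
  musicSystem = new NetAddress("127.0.0.1", 9001); // where the music system listens
}

// Hypothetical: announce a pending section change to the music system.
void announceSectionChange(int nextSection, float secondsUntilChange) {
  OscMessage m = new OscMessage("/anim/sectionChange");
  m.add(nextSection);
  m.add(secondsUntilChange);
  osc.send(m, musicSystem);
}

// Hypothetical: receive notice of a structured repetition with variations,
// which would be difficult to detect from audio descriptors alone.
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/music/repetition")) {
    int variationIndex = msg.get(0).intValue();
    println("structured repetition, variation " + variationIndex);
  }
}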

Reactions

So far, I have mostly focused on building complex audiovisual systems for live performances "in the wild"; little time has been spent setting up laboratory-like situations for more formal evaluations. Evaluation procedures such as those described in (Hsu and Sosnick, 2009), targeting interactive music systems from the points of view of both the performing musicians and audience members, might be adapted for interactive audiovisual systems; the presence of complex visual components, with disparate behavioural types, significantly complicates such evaluations. Instead, I will summarize some generally positive feedback on performances from musicians and audience members; it appears to support some of my original design goals.

Frequent Oakland-based collaborator Gino Robair (percussion/electronics): "What I find fascinating is how the interactive pieces challenge the musicians who play them. One immediately wants to figure out how to 'game' the system, and control it with what we are playing. Yet the algorithms confound that, forcing the musicians to treat the computer system as a real duet partner who is 'listening' but not necessarily responding in a predictable way" (Robair, personal communication).

Another frequent collaborator, Hamburg-based Birgit Ulher (trumpet/electronics): "What I especially like about [Bill's animations] is the strong connection to the music without being too obvious or illustrative. Working with Bill's animations opens up a lot of new levels of communication; the audio input of the musicians is transformed into visuals which can be seen on a screen while playing, so it is a kind of seesaw of influences. The visuals influence the players and vice versa; also, the communication between the musicians has an additional level, since their interaction is seen on the screen as well" (Ulher, personal communication).

London-based John Butcher (saxophones), who with Robair and myself comprises the audiovisual trio Phospheme: "The systems' responses may be subtle or striking, but always seem organic. There's a creative, two-way encounter where the musician is only partly driving the process, and the inspiration they take from the evolving visual reactions opens space for some of the more unpredictable consequences one hopes for with human interactions. From a structural point of view it's interesting that this can happen within the consistent 'flavour' of a particular visual idea, giving each improvisation a valuable coherence" (Butcher, personal communication).

Robair, Ulher and Butcher have all worked with a range of my audiovisual pieces. Chicago-based Chris Burns (electronics) has mostly worked with Leishmania, on a number of occasions; he shares his experiences: "[Leishmania] offers a rich variety of visual behaviors within a focussed, consistent, admirably stark and unapologetically digital aesthetic. That variety makes it a suitable accompaniment for a wide range of musical choices; it also makes the system unpredictable, with musical stimuli leading to a number of possible visual reactions. Leishmania provides stimulus as well as response. The animations are evocative without being commanding; the visuals inspire formal and gestural ideas in my performance, but they never feel constraining, limiting, or demanding of a particular type of musical response. In short, Leishmania feels like a very sophisticated, capable, and provocative duo partner, which has everything to do with [the] duo partner who constructed it" (Burns, personal communication).

Reviewer Stephen Smoliar wrote about a performance in 2014 with James Fei, Gino Robair, and Ofer Bymel: "…It seemed clear that [Hsu] was 'playing' his interactive animation software following the same logic and rhetoric of free improvisation… What was most striking… was how Hsu could use visual characteristics such as flow and pulse to achieve animations that probably would have been just as musical had they been displayed in silence." (Smoliar 2014)

Summary

I have been lucky to work with many inspiring improvising musicians on audiovisual projects. Their creative and fascinating approaches to performance situations, from managing textural and gestural materials to working with space and pulse in the context of non-referential abstract materials, have informed my audiovisual system designs at many different levels. I'd like to thank my collaborators, who have been generous with their time and feedback. I have shared some notes and experiences here, but highly encourage viewing one of the performances on video or (especially) live; free improvisation, with or without visuals, is very much a social music-making practice that is best experienced in live performances. A selection of video documentation is available online:

Set with James Fei (reeds) and Gino Robair (percussion) at Outsound Summit Festival 2014, San Francisco: https://www.youtube.com/watch?v=NLFj26zfqsI

Performance with Chris Burns (electronics) of Xenoglossia/Leishmania in the Computer Music Journal online anthology (requires login): http://www.mitpressjournals.org/doi/abs/10.1162/COMJ_x_00276#.VJtz6LiALw

Short demo of Fluke (music by Birgit Ulher and Gino Robair): https://vimeo.com/106125702

Short demo of FlowForms (music by Chris Burns): https://vimeo.com/78739548

References

Collins, Nick and Olofsson, Fredrik. 2006. klipp av: live algorithmic splicing and audiovisual event capture. Computer Music Journal, Vol. 30, No. 2 (Summer 2006), pp. 8-18.

Gaver, William, Beaver, Jacob and Benford, Steve. 2003. Ambiguity as a Resource for Design. In Proceedings of CHI '03.

Hayek, Friedrich. 1999. The Sensory Order: An Inquiry into the Foundations of Theoretical Psychology. University of Chicago Press, Chicago, IL.

Hsu, William. 2009. Some thoughts on visualizing improvisations/improvising visualizations. In Proceedings of the 6th Sound and Music Computing Conference (SMC '09).

Hsu, William and Sosnick, Marc. 2009. Evaluating Interactive Music Systems: An HCI Approach. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME 2009).

Levin, Golan and Lieberman, Zachary. 2004. In-situ speech visualization in real-time interactive installation and performance. In Proceedings of NPAR 2004, pp. 7-14.

Pearson, J. 1993. Complex patterns in a simple system. Science 261, 189-192.

Rafler, Stephan. 2011. Generalization of Conway's "Game of Life" to a continuous domain - SmoothLife. arXiv preprint arXiv:1111.1567.

Smoliar, Stephen. 2014. Bill Hsu brings musical rhetoric to his improvised interactive animations. Retrieved January 10, 2016 from http://exm.nr/1tzVSlp.

Stam, Jos. 1999. Stable fluids. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99).