
HAL Id: hal-01136857
https://hal.inria.fr/hal-01136857
Submitted on 30 Jun 2015


Distributed under a Creative Commons Attribution-ShareAlike 4.0 International License.


To cite this version: Florent Berthaut, Diego Martinez Plasencia, Martin Hachet, Sriram Subramanian. Reflets: Combining and Revealing Spaces for Musical Performances. New Interfaces for Musical Expression (NIME), May 2015, Baton Rouge, United States. hal-01136857

Reflets: Combining and Revealing Spaces for Musical Performances

Florent Berthaut
University of Bristol
[email protected]

Diego Martinez Plasencia
University of Bristol
[email protected]

Martin Hachet
INRIA Bordeaux - Sud-Ouest
[email protected]

Sriram Subramanian
University of Bristol
[email protected]

ABSTRACT
We present Reflets, a mixed-reality environment for musical performances that allows virtual content, such as 3D virtual musical interfaces or visual augmentations of instruments and performers, to be freely displayed on stage. It relies on spectators and performers revealing virtual objects by slicing through them with body parts or objects, and on planar, slightly reflective transparent panels that combine the stage and audience spaces. In this paper, we describe the approach and the implementation challenges of Reflets. We then demonstrate that it matches the requirements of musical performances: it allows virtual content to be placed anywhere on large stages, even overlapping physical elements, and provides a consistent rendering of this content for large numbers of spectators. It also preserves non-verbal communication between the audience and the performers, and is inherently engaging for spectators. Finally, we show that Reflets opens musical performance opportunities such as augmented interaction between musicians and novel techniques for manipulating 3D sound shapes.

Author Keywords
reflets, augmented performance, reflection, optical combiners, revealing, mixed-reality

ACM Classification
H.5.1 [Information Interfaces and Presentation] Multimedia Information Systems — Artificial, augmented, and virtual realities
H.5.5 [Information Interfaces and Presentation] Sound and Music Computing
J.5 [Computer Applications] Arts and Humanities — Performing arts (e.g., dance, music)

1. INTRODUCTION
Musical performances are increasingly enriched with 3D virtual content. This content may serve as an artistic visualisation, as seen in many recent DJ performances, but it can also be used for musical control, either as one component [9] of the performance or as the instrument itself [2].


Figure 1: A) The blue sound shape manipulated by the performer is revealed by a spectator. B) Spectators reveal virtual components of a musical instrument using white boards. C) Two performers interact through the shared space.

In order to display virtual content amongst physical scenery and performers on stage, Mixed-Reality (MR) display technologies are required. However, musical performances imply demanding constraints that are only partially fulfilled by existing MR technologies: 1) they involve a large number of users (both spectators and performers); 2) they require large display volumes (i.e. covering the whole stage); 3) they require visual communication between spectators and performers; and sometimes 4) they include audience participation. While existing MR display technologies have been used to create a number of great musical performances, they do not match all of these specific requirements, as we explain in more detail in Section 4. We believe that an MR display taking all of these constraints into account would both benefit existing performances and open many artistic opportunities.

In this paper, we introduce Reflets, an MR environment for musical performances. As depicted in Figure 1, a large set of slightly reflective transparent panels is placed between the performers and spectators, with a depth camera/projector pair covering each side. Spectators and performers use their bodies or other objects to reveal virtual content (3D shapes, images, textures) on both the stage and audience sides. The reflections on the panels make spectators, performers and virtual elements appear together in a shared space. We present our approach in Section 3. We discuss the specific requirements for mixed-reality displays in the context of musical performances in Section 4 and explain how Reflets meets those requirements. Although Reflets can be used as a generic MR display for musical performances with virtual content, we also demonstrate that it opens specific performance opportunities, through a series of workshops with performers and four scenarios, as described in Section 5.

2. RELATED WORK
Our approach relates to several strands of previous research in 3D display technologies.

First, we use the audience as a display. This idea has been explored in public events, for example by giving display elements to spectators, either static, such as coloured boards, or active, such as small LED displays that can be controlled at a distance. An interesting example is the "Phone as a Pixel" project [8], which displays 2D animations across a collection of spatially distributed smartphones. Our system extends these ideas from 2D to volumetric displays, and does not require devices that users have to hold during the performance.

In [5], the authors track markers on pieces of paper and use them to reveal 3D objects. This is similar to the revealing technique that we describe in Section 3.2, but our ability to use it on either side of the combiner allows for novel scenarios. For instance, several slices can be revealed simultaneously, one from each side. The slices can refer to real objects on the other side, revealing their internals. Our method also handles larger slicing objects, e.g. to reveal large virtual elements, and a greater range of content can be used, including both 3D meshes and volumetric textures.

We also strongly rely on the guidelines provided in [6]. The authors describe the possibility of creating a shared space, and briefly suggest the use of a revealing technique for augmenting objects in a museum cabinet. In this paper, we extend these basic ideas in the context of musical performance, ensuring that they fit the specific constraints of this domain. The mirrors and revealing techniques are brought to a much larger scale, and the possibilities for musical visualisation, 3D interaction, collaboration and audience participation are explored.

3. REFLETS
3.1 General approach
We use the characteristics of musical performances, such as a large number of potentially active spectators and performers and a separation between stage and audience spaces, to design an MR environment that matches their specific requirements, as explained in Section 4.

The principle of Reflets is to combine: 1) the technique of revealing virtual content through intersection with physical elements and users; and 2) slightly reflective transparent panels that act as an optical combiner. Due to the reflections, the revealed content is perceived in a single shared space, overlapping both the stage and audience sides.

An example use of Reflets is depicted in Figure 1.A. A 3D sound shape manipulated by the performer is revealed by a spectator, i.e. projected onto him. From the audience side, the reflection of this projection appears in the stage space, overlapping the performer's hand. From his side, the performer sees the reflection of his hand overlapping the audience space, allowing him to grab and manipulate the shape. Users on both sides therefore perceive the same space with virtual and physical elements, alternately seen directly and as reflections. Both performers and spectators act as moving screens with which it is possible to augment physical elements or display virtual objects anywhere on stage, by revealing them from either side of the mirror. Reflets can be used as a generic MR display for any existing performance that involves virtual content, such as Immersive Virtual Musical Instruments, 3D visualisations of music, augmented-reality audience participation, and so on. In addition, we show in Section 5 that Reflets opens new opportunities for musical performance.

In the following sections, we explain the two components of Reflets in more detail.

Figure 2: Virtual content at different positions can be revealed. (A) Two shapes at the back and three at the front. (B) A depth map of the physical space is captured, then the per-pixel intersection with virtual objects is computed and rendered. (C) shows the intersection with the two shapes at the back. (D) shows the re-projection of the intersection with the three shapes at the front.

3.2 Revealing technique
The first component of Reflets is the revealing technique. It allows virtual objects placed at any position in the physical space to be displayed by intersecting them with a physical element, e.g. an object or a body part. The intersection, or slice, of the object is then projected back onto the physical element, resulting in part or all of the virtual object appearing in the physical space, revealed by the user. If the physical element matches the surface of the virtual object, this technique gives the same result as projection mapping. Otherwise, a slice showing both the inside and the edge of the object is displayed.

The hardware side of this revealing technique consists of a depth camera (Asus Xtion) and projector pair. They are calibrated so that the acquired depth image and the projection are both correctly aligned with the physical world. This means that the captured image of a physical object can be exactly reprojected onto the object itself. These pairs are placed so that they capture the physical spaces occupied by the spectators and the performers.

The software side is a 3D scenegraph based on Ogre and OpenGL (https://launchpad.net/rouages). It allows scenes to be built from primitives or mesh files, and all the parameters of the scene to be controlled with audio features, MIDI and OpenSoundControl messages. The main part of this application, however, is the rendering method. It is composed of three steps, as depicted in Figure 2. We first capture the surfaces in the physical space, e.g. audience, performers and objects, using the depth cameras, and render them as meshes to textures that serve as depth maps. The second step involves slicing the virtual elements using these depth maps: for each virtual object that we want to display, we render it from the projector's point of view and test its intersection with the depth maps per pixel. Sliced meshes can be rendered in different ways. The edges can be highlighted with a colour or texture. The inside can be displayed as a solid colour, a gradient, or even a volumetric texture (e.g. an MRI scan), as depicted in Figure 2.C. The third step of our method is the re-projection of the slices into the physical world, onto the intersecting objects, as depicted in Figure 2.D.
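To make the rendering method concrete, here is a minimal sketch of the per-pixel intersection test of the second step, written in Python/numpy for readability; in the actual Ogre/OpenGL implementation such a test would run per fragment on the GPU. The function name, units and edge threshold are illustrative assumptions, not the authors' code.

```python
import numpy as np

def slice_masks(physical_depth, obj_near, obj_far, edge_mm=10.0):
    """Per-pixel slicing of one virtual object against a captured depth map.

    physical_depth: HxW depths (mm) of the captured physical surfaces,
        registered to the projector's point of view.
    obj_near, obj_far: HxW depths of the virtual object's front and back
        faces, rendered from the same viewpoint (np.inf where the object
        does not cover a pixel).
    Returns (inside, edge): pixels where the physical surface cuts through
    the object, and the subset of those close to the object's boundary.
    """
    inside = (physical_depth >= obj_near) & (physical_depth <= obj_far)
    near_boundary = (np.abs(physical_depth - obj_near) < edge_mm) | \
                    (np.abs(physical_depth - obj_far) < edge_mm)
    return inside, inside & near_boundary

# Toy example: a flat surface at 1000 mm slicing a shape that spans
# 900-1100 mm in the left half of the image and is absent elsewhere.
physical = np.full((4, 4), 1000.0)
near = np.full((4, 4), np.inf); near[:, :2] = 900.0
far = np.full((4, 4), np.inf);  far[:, :2] = 1100.0
inside, edge = slice_masks(physical, near, far)
# The projector then draws the slice at the `inside` pixels (solid colour,
# gradient, or a volumetric-texture lookup at the sliced depth) and a
# highlight colour at the `edge` pixels.
```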

Figure 3: A) Shared space created by the optical combiner. B) and C) The three users can all be perceived from both sides. D) Virtual content (here a cog) can be revealed from both sides and appears in the shared space at the correct depth for all users.

3.3 Optical Combiner
The second component of Reflets is a set of optical combiners, more specifically slightly reflective transparent surfaces. A 2 m x 1.5 m grid of flat 8 mm acrylic panels is placed between the musicians and the spectators, as depicted in Figure 1. Users on both sides can perceive their reflection in the panels, and these reflections appear overlapped with the space on the other side. From both sides, the two spaces delimited by the panels are therefore perceived as combined into a single shared space, as explained in [6] and shown in Figure 3. Because the panels are flat, the reflections do not depend on the position of the observer. Therefore, if a reflected object overlaps a physical object on the other side, these two objects will be perceived as overlapped wherever the observer stands in front of the panels. This ensures that all spectators and performers perceive the same shared space, with reflections and physical objects combined.

The originality of Reflets is to add the revealing technique, by covering both sides with depth camera/projector pairs, as shown in Figure 1.A. The virtual elements revealed by users or physical elements on one side are reflected in the panels and therefore appear overlapped with the other side. A virtual object placed in the shared space can therefore be revealed from both sides, and both slices are seen together (due to the reflections), completing the rendering of the object (see Figure 3.D). As explained earlier, the revealed objects are perceived in the shared space at the same position by all performers and spectators. This technique can also be used asymmetrically, with a referent real object on one side and revealing happening on the other. Augmentations can then be revealed floating around the referent object, or even inside it, as shown in Figure 1.B. The optical combiners in Reflets are only slightly reflective, so that by controlling the lighting carefully we can decide exactly which physical elements on each side will be perceived, both directly and in reflections, and therefore which elements will compose the shared space.
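The viewpoint independence of the shared space follows from simple plane-reflection geometry: a flat panel maps each point to its mirror image across the panel plane, regardless of where the observer stands. The sketch below illustrates this computation; the coordinate frame and values are assumptions for illustration, not part of the Reflets implementation.

```python
import numpy as np

def reflect_across_panel(p, panel_point, panel_normal):
    """Mirror a 3D point across the plane of a flat panel."""
    n = panel_normal / np.linalg.norm(panel_normal)
    d = np.dot(p - panel_point, n)  # signed distance from the plane
    return p - 2.0 * d * n          # mirror image on the other side

# A panel in the plane z = 0, facing the audience along +z.
panel_point = np.array([0.0, 0.0, 0.0])
panel_normal = np.array([0.0, 0.0, 1.0])
hand = np.array([0.3, 1.2, 0.8])    # a hand 0.8 m in front of the panel
print(reflect_across_panel(hand, panel_point, panel_normal))
# -> [ 0.3  1.2 -0.8]: the reflection appears 0.8 m behind the panel.
# Because this mapping involves no observer position, a slice revealed on
# one side overlaps the same physical point on the other side for every
# spectator and performer.
```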

4. MIXED-REALITY DISPLAYS FOR MUSICAL PERFORMANCES

In this section, we discuss the specific requirements for mixed-reality displays in the context of musical performances and explain how Reflets takes them into account. We also discuss the issue of engaging the audience with the system.

4.1 Consistency for a large number of users
The first requirement is the ability to handle a large number of users. Many mixed-reality displays focus on one or a few users, to ensure that each perceives the virtual content consistently aligned with the physical space. However, musical performances typically involve a large number of users, on both the spectators' and the performers' sides. This requirement rules out technologies such as 3D stereoscopic projections, as used in [9]: the virtual elements, although perceived at different depths, would not appear at the same position relative to the physical world for users outside a sweet spot, except when placed on the surface of the screen.

Reflets can provide a consistent view of virtual objects for any number of users. The virtual objects are displayed in the physical space at their correct position in the scene, including depth, because they are revealed by and projected onto physical elements. Thanks to the properties of flat optical combiners, as described in Section 3.3, the displayed virtual objects then appear in the shared space and can be perceived consistently by users standing at any position on both sides of the panels.

4.2 Full volume display
The second constraint is the scale of the augmentation. Performances may cover large volumes due to performers' movements and stage scenery. In addition, within these volumes, virtual elements might need to be placed anywhere, even inside physical elements, as in [3], or overlapping performers. This issue heavily restricts which display technologies can be used. For example, volumetric displays are often of limited size due to the complexity of the technology (moving mechanical parts, number and range of voxels, and so on). Projection mapping and semi-transparent screens are often used to provide large-scale rendering of virtual content. However, these technologies require that physical elements such as screens be placed at the same position on stage as the virtual elements; the virtual content then appears only on the projection surfaces. This prevents the visualisation of performers reaching through 3D sound shapes in the case of 3D instruments, or of virtual content overlapping physical instruments. The Pepper's Ghost technique, with a half-silvered mirror angled at 45 degrees projecting the reflection of a screen in mid-air, allows projections and physical objects to overlap. However, the virtual content remains limited to the reflected surface; it cannot cover the full volume of the performance.

With Reflets, on the other hand, thanks to the transparent reflective panels and the revealing technique, virtual content can be revealed from the audience side anywhere on stage, even inside physical elements. The captured and projected volume on each side can easily be scaled by adding more depth camera/projector pairs around it. The width of the screen is an important variable, as one needs to ensure that all spectators and performers are able to see the entire shared space through it; it can be extended simply by adding more panels between the performers and spectators. The virtual volume that can be displayed finally depends on the number of physical elements available on both sides to reveal it. While this means that the more spectators there are, the bigger the displayed volume, a small number of spectators can also cover large areas with the use of physical props such as movable white panels, inflatable shapes and so on.

4.3 Visual communication
Visual communication between performers and spectators is an important part of a performance. Members of the audience need to be able to clearly perceive performers' gestures and intentions [4]. In the other direction, some performers, such as DJs, need to perceive the reaction of the audience to adapt their performance. This constraint implies that display technologies should not impair direct visual communication. Video augmented reality, with spectators using their smartphones or shared screens to perceive the virtual content superimposed over the stage, forces them to look at the screen instead of the physical performance. Visual contact between performers and the audience is then lost. Moreover, the performance loses visual quality, as it has to be digitised and rendered on a small screen.

In Reflets, although the audience and performers are completely separated by acrylic panels, these are mostly transparent, preserving visual contact. In addition, the lighting of both spectators and performers can be finely controlled by placing virtual shapes with solid colours, which act as spotlights when users reveal them. Reflets also preserves the physicality of the performance, as spectators directly perceive the performers, not a digitised version.

4.4 Audience Participation
Audience participation is not a requirement of musical performances. However, engaging spectators in the process of displaying the virtual content can add to their experience and open possibilities such as allowing them to control some parts of the music, as described in [7].

The participation of the audience is at the core of Reflets, since the system relies on them to reveal the virtual objects of the performance. However, it can take some time for spectators to engage with it. We ran three public demonstration sessions with 10 participants each to study this issue. Each group of participants watched short performances based on Reflets. They were then asked to comment on the system, individually on paper and collectively in a discussion. The conclusion of this study is that spectators can benefit from guidance before or during the performance. Solutions are either to have all or some of the spectators go through a tutorial before the performance, or to build the performance with a progression in the complexity of the spectators' actions. Some performers can also play the role of participants, showing the others how to interact with the system. This encourages members of the audience to transition from the role of spectators to that of participants, as described in [1].

During the performance, it is also important to guide spectators in the revealing process. This can be done through static physical or virtual guides, such as physical frames, signs on the floor, or virtual stands for objects, visible because they intersect the floor. Another possibility is to use animated guides, such as pulsating spheres intersecting the spectators, whose movements they can follow to discover virtual content placed at their centre. Our study also yielded a number of insights regarding the virtual content. The first relates to the depth of virtual objects. While revealing content horizontally and vertically is straightforward, spectators seemed to need more time to find objects' depth and boundaries on the axis perpendicular to the panels. To make the exploration easier, it is therefore better to exaggerate the depth of virtual content. Finally, large white objects, such as inflatable shapes or cardboard panels, can be given to the audience during the performance. As depicted in Figure 1.B, they can be used to reveal larger parts of the virtual content and placed on the floor to compose a virtual screen.

5. PERFORMANCE OPPORTUNITIES

Figure 4: Performance scenarios as seen from the audience. A) Layers, B) Hidden Drums, C) SoundPaths, D) Grab and Morph.

Although Reflets can be used as a generic mixed-reality display technology for augmented performances, we also believe that it opens novel musical performance opportunities. In order to verify this hypothesis, we ran a series of workshops with 8 performers of various backgrounds, including musicians, circus artists, visual artists and dancers. We organised both collective and individual sessions. We first presented the system and how it could be used as a mixed-reality performance display. Then we asked the performers to design scenarios that would rely on the specificities of the system, during both brainstorming sessions and hands-on sessions to implement and test ideas. They proposed a number of scenarios and many insights into the opportunities opened by Reflets. We analysed these ideas and defined potential directions for the design of new performances. We then refined four of the proposed musical performance scenarios with the performers so that they illustrate these directions.

Performers discussed several scenarios in which Reflets could be used to allow the audience to explore virtual content surrounding or overlapping the performers. This content can be static scenery, e.g. in the case of theatre plays, or active elements of the performance, e.g. augmentations of a dancer. We illustrate these augmentation and visualisation possibilities with the first scenario, Layers, depicted in Figure 4.A and Figure 1.B. Similarly to the Rouages project [3], the audience can reveal the various mechanisms of a Digital Musical Instrument, i.e. sensor extensions, sound processes and mappings, with their bodies or physical props. Contrary to Rouages, the augmentations can be placed inside the instrument, under the physical table, but also anywhere on stage around the musician, and are perceived with correct alignment and perspective by all spectators.

Performers also discussed the use of the revealing technique as a way to control sound, by sending the positions of the revealed slices within virtual objects to external applications via OpenSoundControl messages. Moreover, one of the performers highlighted a very important distinction. On the one hand, 3D shapes can be used as discrete controls, activated when revealed, or as continuous ones, e.g. 3D sliders, even following complex 3D paths. Revealing volumetric textures can also be used as a metaphor for the exploration of soundscapes. In these cases, the focus of the revealing control is on the shapes themselves. But the focus can also be put on the movements inside large virtual volumes, the revealing technique then highlighting gestural control. The second scenario, Hidden Drums, depicted in Figure 4.B, illustrates the use of the revealing technique for discrete control. In this improvised performance, the musician plays virtual drums by revealing them and vibrating them with a handheld electronic drum pad. These vibrations are mapped to parameters of granular synthesis and a variable delay effect, allowing both textures and rhythms to be played. The third scenario, SoundPaths, illustrates the use of the revealing technique as a continuous musical control at the shape level. As shown in Figure 4.C, the performer uses a WiiRemote and 3D position tracking to draw 3D sound paths that appear on stage, revealed by the spectators. While drawing, the sound sample is played so that there is a correspondence between the position in the sample and the position along the path. When the path is then revealed by the performer at one position, the sound is played as a loop from that position. The vertical and horizontal positions of the path respectively affect the frequency of a band-pass filter applied to the sound and the length of the loop. The last scenario, Grab and Morph, illustrates the revealing technique highlighting gestural control. As depicted in Figure 4.D, it involves two performers, a guitar player on one side of the panels and a gestural performer on the other side. The gestural performer evolves within a large virtual box in which his movements, e.g. the position of and distance between his hands, are mapped to various parameters of effects applied to the guitar sound.
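As a concrete illustration of the OpenSoundControl-based control described above, here is a minimal sketch that sends revealed-slice positions to an external application with the python-osc library, loosely following the SoundPaths mapping (vertical position to band-pass centre frequency, horizontal position to loop length). The OSC addresses, value ranges and receiving application are hypothetical, not the mappings actually used in the scenarios.

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # e.g. a Pure Data patch

def send_slice(path_id, x, y):
    """Map a revealed slice position (normalised to [0, 1]) to controls."""
    centre_hz = 100.0 + y * 5000.0  # vertical -> band-pass centre frequency
    loop_s = 0.1 + x * 1.9          # horizontal -> loop length in seconds
    client.send_message(f"/reflets/path/{path_id}/filter", centre_hz)
    client.send_message(f"/reflets/path/{path_id}/loop", loop_s)

# A slice revealed halfway along the path, a quarter of the way up:
send_slice(0, x=0.5, y=0.25)
```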

Finally, many of the proposed scenarios involved the use of the optical combiner as a way to enable interaction between performers, or between spectators and performers. Because users on both sides appear in the shared space, they can touch and reach through each other, opening numerous performance opportunities. In the Hidden Drums scenario, in order to be active, the sound shapes played by the musician need to be revealed on both sides, by the musician and by the spectators. From their side, the audience is therefore able to influence the performance by interactively "building" the instrument that the performer plays. In the Grab and Morph scenario, the guitar player and the gestural performer appear in the shared space as if they were side by side. As shown in Figure 1.C, the gestural performer can reach inside the guitar using his reflection to grab a short sample of the sound, and replay it through controlled effects. A musical dialogue can therefore emerge from their interactions through the shared space.

6. CONCLUSION
In this paper, we presented Reflets, a mixed-reality environment for musical performances. It matches their specific requirements, such as a high number of users, a large display volume and preserved visual communication, and it is inherently engaging for the audience. Finally, thanks to its unique components and approach, it opens new performance opportunities. We see two main future research directions for Reflets. First, the system can be extended to support more than two combined spaces, either with multiple pairs of spaces that would enable modular performances, or with multiple audience spaces around the stage space. Second, by using a more complex mirror arrangement, new opportunities for musical collaboration within digital orchestras could be created, enabling interactions between all musicians. Additional interaction techniques and devices will be needed to allow fine manipulation of others' instruments or sounds at a distance.

7. ACKNOWLEDGMENTS
The authors would like to thank all the performers who participated in the workshops, and especially Alex Jones, Nick Jannaway and Sebastien Valade for their help. This project was funded through the Marie Curie FP7 framework (Grant Agreement PIEF-GA-2012-330770).

8. REFERENCES
[1] S. Benford, G. Giannachi, B. Koleva, and T. Rodden. From interaction to trajectories: designing coherent journeys through user experiences. In Proceedings of CHI, pages 709–718, 2009.
[2] F. Berthaut, M. Desainte-Catherine, and M. Hachet. Drile: an immersive environment for hierarchical live-looping. In Proceedings of NIME, pages 192–197, 2010.
[3] F. Berthaut, M. Marshall, S. Subramanian, and M. Hachet. Rouages: Revealing the Mechanisms of Digital Musical Instruments to the Audience. In Proceedings of NIME, 2013.
[4] B. Bongers. Exploring novel ways of interaction in musical performance. In C&C '99: Proceedings of the 3rd Conference on Creativity & Cognition, pages 76–81, New York, NY, USA, 1999. ACM.
[5] A. Cassinelli and M. Ishikawa. Volume slicing display. In SIGGRAPH ASIA Art Gallery & Emerging Technologies, page 88, 2009.
[6] D. Martinez Plasencia, F. Berthaut, A. Karnik, and S. Subramanian. Through the combining glass. In Proceedings of ACM UIST, pages 341–350. ACM, 2014.
[7] D. Mazzanti, V. Zappi, D. Caldwell, and A. Brogni. Augmented stage for participatory performances. In Proceedings of NIME, pages 29–34, 2014.
[8] J. Schwarz, D. Klionsky, C. Harrison, P. Dietz, and A. Wilson. Phone as a pixel: Enabling ad-hoc, large-scale displays using mobile devices. In Proceedings of CHI, pages 2235–2238, 2012.
[9] V. Zappi, D. Mazzanti, A. Brogni, and D. Caldwell. Design and evaluation of a hybrid reality performance. In Proceedings of NIME, pages 355–360, 2011.