A corpus-based method for controlling guitar feedback

Sam Ferguson, Andrew Johnston
Creativity and Cognition Studios

Faculty of Engineering & IT
University of Technology, Sydney

{samuel.ferguson,andrew.johnston}@uts.edu.au

Aengus Martin
Faculty of Arts and Social Sciences
University of Technology, Sydney

[email protected]

ABSTRACT

The use of feedback created by electric guitars and amplifiers is problematic in musical settings. For example, it is difficult for a performer to accurately obtain specific pitch and loudness qualities. This is due to the complex relationship between these quantities and other variables such as the string being fretted and the positions and orientations of the guitar and amplifier. This research investigates corpus-based methods for controlling the level and pitch of the feedback produced by a guitar and amplifier. A guitar-amplifier feedback system was built in which the feedback is manipulated using (i) a simple automatic gain control system, and (ii) a band-pass filter placed in the signal path. A corpus of sounds was created by recording the sound produced for various combinations of the parameters controlling these two components. Each sound in the corpus was analysed so that the control parameter values required to obtain particular sound qualities can be recalled in the manner of concatenative sound synthesis. As a demonstration, a recorded musical target phrase is recreated on the feedback system.

1. INTRODUCTION

Electric guitar feedback provides a rich textural palette for music composition and it has been used in a wide variety of musical contexts. Commonly cited examples of users of electric guitar feedback are Jimi Hendrix in popular music, and Glenn Branca in contemporary art music. Feedback which is not guitar-related has also been used by composers such as Behrman and Tudor [1].

There are a number of different mechanisms by which electric guitar feedback occurs. The most common involves a guitar string vibration being transduced into an electrical signal, which is then amplified and emitted by a loudspeaker as an acoustic signal, which in turn vibrates the guitar string. If the net gain in this closed circuit is greater than unity, the signal will grow in amplitude until some part of the signal path saturates and the signal reaches a steady state.
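The growth-until-saturation behaviour can be sketched numerically. The following is an illustrative model of our own (not from the paper): a single amplitude circulates through a loop with a net gain and a soft tanh saturation standing in for the saturating element in the signal path.

```python
import math

def steady_state_amplitude(loop_gain, seed=1e-3, iterations=200):
    """Iterate a -> tanh(loop_gain * a). For loop_gain below unity the seed
    decays towards zero; above unity it grows until the saturation balances
    the gain and the amplitude settles at a steady state."""
    a = seed
    for _ in range(iterations):
        a = math.tanh(loop_gain * a)
    return a

# Below unity net gain the signal dies away; above it, feedback sustains.
print(steady_state_amplitude(0.9))
print(steady_state_amplitude(2.0))
```

The choice of tanh is arbitrary; any saturating nonlinearity produces the same qualitative behaviour.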

Common to all musical applications is the need for a method for manipulating the characteristics of the feedback sound. Guitar players can coarsely control the feedback system by (i) adjusting the distance between the guitar and the amplifier, (ii) changing the effective lengths of the guitar strings, and (iii) using digital audio effects. However, it is not common for guitarists to use feedback in contexts requiring precise control, for instance, when accurate pitches are required.

Feedback is difficult to control for the following reasons:

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
NIME’13, May 27 – 30, 2013, KAIST, Daejeon, Korea.
Copyright remains with the author(s).

[Figure 1 diagram: a feedback system (guitar input signal via preamp, band-pass filter, gain, output signal to the amplifier aux input, acoustic signal back to the guitar) with amplitude analysis comparing the amplitude to a target and driving the gain, and a corpus-based control system (corpus of audio recordings, audio analysis, descriptor selection method) setting the filter centre frequency and amplitude target.]

Figure 1: Overview of the feedback control system described in this paper.

• Multiple feedback pitches are possible with one string and fret position, due to resonance of higher harmonics.

• The gain in the feedback loop (per string) changes according to the positions and orientations of both the amplifier and the guitar, and is therefore difficult to control precisely as the guitar is often strapped to a moving performer.

• The pitch produced by the system is rarely independent of the gain of the system, meaning that higher pitches are more likely at higher gains.

This paper will introduce a method for improving control of the loudness and pitch of feedback created by an electric guitar and amplifier.

1.1 Previous Research

Models of musical feedback systems have been studied by Morris [2]. His formalisation describes four main elements of a feedback instrument – the feedback:

loop: which can be electrical, acoustic, or digital;

intervention: which is used to transform the feedback and gain some kind of control over feedback;

interruption: which is the method for discontinuing the feedback resonance;

excitation: which is the source of energy used to excite the system to begin the feedback.

Berdahl et al. have developed a series of feedback systems that combine stringed instruments with actuators that apply force to the string without physical contact. Applications of this method include damping a string without contact [3] and the application of ‘physical audio effects’ to the string output [4]. Later work


relates to the dislocation of the musician from the instrument, and involves ‘mechanical sound synthesis’ [5]. Finally, in recent work Berdahl et al. describe an entire framework for ‘feedback control of acoustic musical instruments’ [6], giving details of methods and approaches that can be taken for a variety of goals.

Other researchers have also investigated feedback control methods. For example, Burns reports on a variety of idiosyncratic, controlled feedback systems in the context of contemporary music performance [7, 8]. In addition, Di Scipio introduced the idea of ‘eco-systemic’ signal processing, where a musical system becomes an integral part of a musical eco-system with its surroundings, as opposed to the more regular notion of ‘interactive’ musical systems where the relationship is one of cause and effect [9].

In previous work, we have investigated the use of an electromechanical guitar control system to precisely control feedback sounds, in conjunction with a corpus-based approach by which a library of available sounds was created and used for later recall [10]. The control system could be used to precisely set effective string lengths and to selectively dampen the strings (thus preventing them from resonating). We systematically audited the sounds which could be produced, by making a recording of the feedback produced for different configurations of the electromechanical control system. A database, or corpus, of available sounds was created by calculating a set of numerical, psychoacoustically-informed descriptors for each sound and storing the mappings between the control configurations and descriptor vectors. While this system was capable of producing feedback sounds which could be recalled by their descriptors, it was limited by the lack of dynamic control over the sounds available. In addition, a considerable amount of noise arose from the movement of a motorised slide on the strings. This noise was undesirable in itself (because it could not be controlled), and it also gave rise to hysteresis effects, meaning that the feedback sound produced was, in some circumstances, dependent on the noise of the slide movement, which was itself dependent on the starting position of the slide.

1.2 Overview

In this paper we investigate an alternative approach for producing and controlling feedback, using a signal processing stage to control the feedback loop, and obtaining the signal processing parameters required for a particular sound quality by referencing a pre-calculated look-up table. We use a standard electric guitar and amplifier, but inserted into the feedback signal chain is a simple DSP unit which manipulates the feedback signal to achieve different feedback pitches and steady-state amplitudes (see Figure 1). The DSP used is related to the automatic equalisation methods used for feedback suppression in public address systems (see, e.g. [11] for a review), but we use it for the purposes of control and production, rather than for suppression. The system is easy to assemble, requiring only an unmodified electric guitar, a standard guitar amplifier, a computer and a low-latency audio interface. It is significantly more robust than our previous electromechanical controller.

The remainder of this paper is structured as follows. In Section 2, we describe the guitar feedback system. In Section 3, we describe the creation and analysis of a sound corpus required to obtain a look-up table that maps sound qualities (pitch and loudness) to DSP control parameters. We also describe the automatic re-creation of a target musical phrase from electric guitar feedback and discuss the benefits in general of a corpus-based approach. Finally, we discuss the system in the context of related work and conclude.

2. FEEDBACK SYSTEM DESCRIPTION

In this section, we present the details of the electric guitar feedback system, comprising a standard electric guitar and guitar amplifier; and a computer with an audio interface. We focus on controlling the pitch and steady-state amplitude of the feedback sound, as

[Figure 2 plot: level (dBFS, −100 to 0) vs. time (0 to 100 s).]

Figure 2: Controlling the amplitude of the feedback signal using automatic gain control. The black line is the output gain parameter that is sought, and the blue line shows the response of the system.

these are, arguably, the two most important attributes for musical applications. However, the system may be adapted for controlling other characteristics of the feedback signal (see later sections).

2.1 Construction

First, since electric guitar feedback is sensitive to the relative positions and orientations of the guitar and amplifier, the two were firmly mounted into a metal frame (see Figure 3). A standard instrument lead connects the guitar output to the pre-amplified input of the audio interface, and a second lead connects the output of the audio interface to the amplifier (see Figure 1). This particular amplifier (Roland Cube-30X) has an auxiliary input, allowing a line level output from the audio interface to be used.

2.2 Signal processing

Usually, when feedback occurs, there is a transient stage to begin, during which the signal grows in amplitude until it reaches a steady state. The fundamental frequency of the steady-state signal is determined by the frequency component that increased in amplitude most rapidly during the transient stage. In order to control the fundamental frequency of the steady-state signal, we incorporate a bandpass filter into the feedback loop. This increases the gain of a target frequency (the centre frequency of the filter) with respect to all other frequencies, so that the target frequency is more likely to be the one that grows fastest in amplitude during the transient stage. Note that while this does achieve a certain level of pitch control, it does not guarantee that the target frequency will become dominant, since the string may not resonate naturally at this frequency.
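A minimal sketch of this idea, assuming an RBJ-cookbook band-pass biquad rather than whatever filter design the authors used in Max/MSP: the filter leaves the target frequency nearly untouched while attenuating other frequencies, raising the target's relative loop gain.

```python
import math

def bandpass_biquad(centre_hz, fs, q=20.0):
    """RBJ-cookbook band-pass coefficients with 0 dB gain at centre_hz."""
    w = 2 * math.pi * centre_hz / fs
    alpha = math.sin(w) / (2 * q)
    a0 = 1 + alpha
    b = (alpha / a0, 0.0, -alpha / a0)
    a = (1.0, -2 * math.cos(w) / a0, (1 - alpha) / a0)
    return b, a

def apply_filter(b, a, x):
    """Filter the sequence x (direct form II transposed)."""
    y, z1, z2 = [], 0.0, 0.0
    for s in x:
        out = b[0] * s + z1
        z1 = b[1] * s - a[1] * out + z2
        z2 = b[2] * s - a[2] * out
        y.append(out)
    return y

def tone(f, fs):
    return [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]

fs = 8000  # illustrative sample rate
b, a = bandpass_biquad(440.0, fs)
# After the initial transient, a 440 Hz tone passes almost unattenuated
# while a 220 Hz tone is strongly suppressed.
at_centre = max(abs(s) for s in apply_filter(b, a, tone(440.0, fs))[fs // 2:])
off_centre = max(abs(s) for s in apply_filter(b, a, tone(220.0, fs))[fs // 2:])
print(at_centre, off_centre)
```

With Q = 20 the one-octave-down tone is attenuated to roughly 1/30 of its input amplitude, which illustrates why the target frequency tends to win the race during the transient growth stage.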

To control the overall amplitude of the steady-state signal, we introduce an automatic gain control (AGC) into the feedback loop. The automatic gain control works by measuring the difference between the incoming RMS amplitude of the signal and a target RMS amplitude, and adjusting the gain accordingly in a feedback loop. This signal processing is implemented using the Max/MSP interactive audio processing platform (http://www.cycling74.com).
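The AGC update step can be sketched as follows. This is our own simplified reconstruction, not the authors' Max/MSP patch; the function name `agc_gain` and the parameter values are illustrative. The per-block gain correction is clamped in dB, which is the gain-range limit the paper discusses in relation to oscillation behaviour.

```python
import math

def agc_gain(block, target_rms, gain, rate=0.5, max_step_db=6.0):
    """Update the loop gain from one block of output samples: compare the
    block's RMS with the target RMS and apply a clamped dB correction."""
    rms = math.sqrt(sum(s * s for s in block) / len(block))
    if rms > 0.0:
        step_db = rate * 20.0 * math.log10(target_rms / rms)
        step_db = max(-max_step_db, min(max_step_db, step_db))
        gain *= 10.0 ** (step_db / 20.0)
    return gain

# Toy loop: a sine source of fixed amplitude 0.5 scaled by the gain;
# the AGC settles the output RMS at the 0.1 target.
gain = 1.0
for _ in range(50):
    block = [gain * 0.5 * math.sin(2 * math.pi * n / 64) for n in range(64)]
    gain = agc_gain(block, target_rms=0.1, gain=gain)
final_rms = gain * 0.5 / math.sqrt(2)
print(final_rms)
```

Lowering `max_step_db` mimics the gain-range limiting described below: convergence becomes a slower ramp, but large post-transient oscillations are avoided.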

In Figure 2, we show how the level of the feedback signal varies as the target level is changed, while the centre frequency of the bandpass filter is kept constant. After changes in the target level, the feedback signal can sometimes go through a transient oscillatory stage before reaching the target amplitude, seen especially at


Figure 3: The metal frame used for mounting the electric guitar and amplifier. A second guitar and amplifier system is mounted behind the first, but is not used in this study.

approximately 25 s. In two regions of the plot, between approximately 45 s and 65 s, and after 100 s, the amplitude of the feedback signal oscillates around the target. This type of amplitude oscillation is a characteristic of electric guitar feedback, and is one of its interesting timbral features.

The characteristics of these oscillations (both transient and steady-state) can be controlled by limiting the range of gains that the AGC can apply. Allowing high gains can result in very fast and direct gain change (as seen in Figure 2), but can also result in oscillation after rapid change. In contrast, limiting the AGC to small gains (e.g. ±6 dB) results in a slower ramp to the target level and only small oscillations when it is reached. In certain musical contexts, a range of gain change limits may be of creative interest.

3. CORPUS ANALYSIS PROCESS

A basic requirement of a musical instrument is that it provides a predictable relationship between the control parameters and the output sound. This requirement is satisfied by the guitar feedback system described in the previous section. However, the relationship between the control parameters and output sound is not a simple one since, as mentioned above, the signal processing components give only limited control over the fundamental frequency and amplitude of the steady-state feedback signal. In fact, the mapping between the control parameters and the characteristics of the output sound is non-linear and discontinuous.

While a performer could in theory learn the complex mapping between control parameters and the characteristics of the output sound (as is necessary for many traditional musical instruments), the difficulty in doing so is exacerbated by the sensitivity of the mapping to the physical characteristics of the feedback system; slight changes to the configuration of the system or the acoustic space in which it is used may result in large changes to the mapping. This constitutes a significant barrier to the use of the guitar feedback system in practice.

To address this issue, we create an interface for the performer that does not require a complex mapping to be learnt. To begin, we enumerate a set of target gain and filter frequency combinations (control parameter settings). We then record the sound output for

each one. Each sound is analysed to derive a set of acoustic descriptors, which describe the acoustic results of each control parameter setting. Thus, a look-up table can be created that maps each sound that can be produced by the system (as represented by its acoustic descriptors) back to the control parameter setting required to produce it.

This approach is based on research by Schwarz et al. on concatenative synthesis [12, 13, 14], and we have used a similar method in previous research [10]. We follow Schwarz’s use of the term ‘unit’ to describe a segment of audio that is associated with metadata (i.e. it has been analysed and has acoustic descriptors attached). Our units have a duration of 3 seconds and, along with the acoustic descriptors, we include the control parameters used to create the sound in the associated metadata.

The approach described is summarised in Figure 4. The corpus creation process is as follows.

• Step through a set of discrete filter centre frequencies and target levels, and for each one, make a 3-second recording of the output audio. Leave a 6-second gap after each recording to allow the feedback to decay.

• Import both the audio files and the control parameter values into the catart software [13] (Version 1.2), and analyse each one as a unit. (For the analysis, omit the first second of each recording so that only the steady-state feedback signal is analysed.)
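The loop above can be sketched in outline. The code below is a hypothetical, self-contained stand-in: `record_feedback` fakes the 3-second capture by synthesising a tone instead of recording the amplifier, and `analyse` discards the first second before computing a level descriptor, mirroring the steady-state analysis described in the text.

```python
import math

FS = 8000  # illustrative sample rate

def record_feedback(centre_hz, target_db):
    """Stand-in for the real record step: synthesise 3 s of tone at the
    requested level rather than capturing the feedback system's output."""
    amp = 10 ** (target_db / 20)
    return [amp * math.sin(2 * math.pi * centre_hz * n / FS)
            for n in range(3 * FS)]

def analyse(unit):
    """Descriptor analysis on the steady-state portion (first second omitted)."""
    steady = unit[FS:]
    rms = math.sqrt(sum(s * s for s in steady) / len(steady))
    return {"level_db": 20 * math.log10(rms)}

corpus = []
for centre_hz in (220.0, 440.0, 880.0):   # toy grid of filter centre frequencies
    for target_db in (-30, -18):          # toy grid of target levels
        unit = record_feedback(centre_hz, target_db)
        entry = {"params": (centre_hz, target_db)}
        entry.update(analyse(unit))
        corpus.append(entry)

print(len(corpus))  # one unit per control parameter setting
```

In the real system the descriptor set is richer (pitch, loudness, spectral centroid and others via catart), but the structure of the table — descriptors stored alongside the control parameters that produced them — is the same.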

While in this paper we focus mainly on pitch and level descriptors, this method allows a wide variety of descriptors to be calculated for each unit. This includes pitch, loudness, spectral centroid, high-frequency energy and others (again, see [13] for details).

During performance, a musician can select sounds by their acoustic descriptors. For each selection, the required control parameters (target gain and filter centre frequency) are recalled and applied so that the sound is produced. Specifically,

• A boolean query is used to select a unit from the look-up table that matches a particular acoustic descriptor value (e.g. pitch between 430 and 450 Hz).

• The control parameters used to create this unit are retrieved.

• These parameters are used to recreate the sound.
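The selection steps above amount to a simple query over the look-up table. The corpus rows below are invented for illustration; in the real system they come from the analysis stage.

```python
# Each row maps acoustic descriptors back to the control parameters
# (filter centre frequency, target level) that produced them.
corpus = [
    {"pitch_hz": 392.2, "filter_hz": 392.0, "target_db": -18},
    {"pitch_hz": 441.3, "filter_hz": 440.0, "target_db": -24},
    {"pitch_hz": 524.0, "filter_hz": 523.3, "target_db": -18},
]

def select_controls(corpus, pitch_lo, pitch_hi):
    """Boolean query: return the control parameters of the first unit whose
    analysed pitch lies in [pitch_lo, pitch_hi], or None if no unit matches."""
    for unit in corpus:
        if pitch_lo <= unit["pitch_hz"] <= pitch_hi:
            return unit["filter_hz"], unit["target_db"]
    return None

# e.g. 'pitch between 430 and 450 Hz' as in the text:
print(select_controls(corpus, 430.0, 450.0))
```

Note that the query is over the analysed pitch, not the filter frequency: the returned filter setting may sit slightly off the requested pitch, which is precisely why the corpus look-up is needed.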

This process can be automated to produce sequences of feedback sounds to match a ‘target’ sound, using the catart software (again, see [13]). Note that we do not use the concatenative synthesis capabilities of this software (which uses concatenation of audio samples from the corpus), but just the corpus analysis, storage, data retrieval, and unit selection algorithm features.

We have implemented the two procedures described in this section, as well as the signal processing required to control the feedback, in a single piece of software. An important feature of the software is that corpus creation, including sound recording and analysis, is automated. This is essential to the use of the system in practice, since a new corpus must be created each time the feedback system is modified (either by being moved to a new acoustic environment or by using a different guitar or amplifier).

3.1 Corpus Example

In this section, we describe the creation and analysis of a corpus for a particular guitar and amplifier combination. The control parameter space used extended across a level range from −42 dBFS to −12 dBFS in 6 dB steps (hence 6 levels), and band-pass filter centre frequencies from 110 Hz to 1661 Hz in semitone steps across the frequency range (48 frequencies), creating 288 control parameter settings. The corpus of 288 3-second recordings took approximately 48 minutes to record, with a 6-second gap between recordings.
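As a quick check of the grid arithmetic: 48 semitone steps upward from 110 Hz end near 1661 Hz, and combined with the 6 levels this gives the 288 settings quoted above.

```python
# Semitone steps multiply the frequency by 2**(1/12) each time.
freqs = [110.0 * 2 ** (k / 12) for k in range(48)]  # 48 filter centre frequencies
levels = list(range(-42, -11, 6))                   # -42 .. -12 dBFS in 6 dB steps
print(round(freqs[-1]), len(freqs) * len(levels))
```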


[Figure 5 plots: top row, audio pitch (MIDI note number) vs. filter centre frequency (MIDI note number); bottom row, audio loudness (dBFS) vs. audio pitch (MIDI note number); one panel per target level from −42 to −12 dBFS.]

Figure 5: Corpus audio descriptors compared to control parameters. The graphs are broken into each of the 6 level steps. While there is a trend towards correspondence between target and result, a specific filter frequency can be associated with many neighbouring output audio pitches, especially at low pitch ranges. This coarseness shows clearly that the use of a bandpass filter alone is not sufficient to create particular target tones and give a musician the necessary control over the instrument’s pitch.

[Figure 4 diagram: corpus creation (3 s sound file recording, ‘catart’ segmentation and analysis, recorded control parameters and descriptors imported into the corpus) feeding a parameter selection method (catart.target, matching a target musical phrase) whose control parameter automation drives the band-pass filter and compressor settings of the guitar, DSP and amplifier apparatus, with audio output to the amplifier.]

Figure 4: The creation of a corpus of acoustic parameters and the resulting selection of control parameters.

The relationship between output audio pitch and the centre frequency of the filter is shown in Figure 5 (top row of plots). Each plot shows the relationship between pitch and filter centre frequency for a given target gain level. It can be seen that in the lower portion of the pitch range, despite the filter frequency being scanned across the pitch space, the output sound’s pitches were clustered at particular pitches only. In the upper pitch range they spread more evenly across the pitch space. The selectable tones in the corpus are therefore relatively limited for the lower range.

The relationship between the AGC gain setting and the output level of the sound is shown in Figure 5 (bottom row). For low target gain levels (plots towards the left), there is considerable variability between the target level and the output level. However, for high target gain levels (plots towards the right) the output gain is much closer to the target gain.

The corpus can also be analysed for other acoustic characteristics. Figure 6 shows a plot of the available combinations of pitch and spectral centroid for the analysed feedback system. This means that the timbral space of feedback sounds can be explored in a performance context.

3.2 Target Musical Phrase Recreation

As a demonstration of the system in use, we recorded a target musical phrase and recreated it using the feedback system. Two bars of the opening to a popular guitar-based song were recorded as audio (not as MIDI), and passed into catart’s target phrase matching module, catart.target, where it is segmented and analysed. For each of the segments in the target phrase, matching audio descriptor units from the main corpus are found through a unit selection algorithm, and then the units are temporally concatenated by the concatenative synthesis system (see [15] for details).


[Figure 6 scatter plot: pitch (MIDI note number, 50–90) vs. spectral centroid (400–700 Hz), coloured by gain (−40 to −15 dBFS).]

Figure 6: Corpus analysis also allows inspection of the corpus along audio description dimensions that aren’t directly related to control parameters. This graph shows how timbral aspects (the spectral centroid) of the corpus interact with pitch specification.

[Figure 7: two spectrograms, frequency (0–2500 Hz) vs. time (0–10 s).]

Figure 7: Two bars of the opening to ‘Stairway to Heaven’, as performed by (a) a guitarist (top pane), and (b) the feedback system, with control parameters selected from the corpus to resynthesize the sound above (bottom pane).

The results of this process are shown in Figure 7. The top pane shows a spectrogram of the original target audio signal recorded directly from the output of the guitar being played in the traditional manner. The lower pane shows a spectrogram of the recorded recreation of this musical phrase, as produced by the feedback system. A range of greater than one octave can be seen in use. The spectrogram shows the strong fundamental frequencies of the feedback, which is consistent with the use of the band-pass filters.

For the recording, some extra damping was required to make sure that strong open resonances (of which there were only a few problematic ones) did not continue to sound between note changes. This was achieved by placing a tissue lightly over the strings and around the fingerboard.

3.3 Benefits of Corpus Usage

To conclude this section, we highlight the advantages of using a combined feedback control and corpus-based method, over the use of a feedback control system alone.

1. Feedback from a guitar doesn’t occur at every pitch continuously, only at particular pitches based on the harmonics of the tuned string frequencies, and affected by other real-world acoustical parameters of resonance. Simply specifying the band-pass filter frequency alone cannot give a user information about whether the feedback will occur, or how close to the expected pitch the result will be. As has been shown in Figure 5, the feedback system has quite good coverage at high pitches, but only selective coverage at low ranges. To find out which pitches can be selected from the set, a corpus must be created and analysed. The results of this analysis can be used in a target sound resynthesis system (as in Section 3.2), or can be used for devising a user interface for performance that shows the performer which pitch options will produce sounds.

2. Not only are there gaps in pitch coverage in a coarse sense, but the pitch of feedback usually isn’t located at the centre frequency of the filter in a fine sense either. Some notes may ‘be there’ but sound out of tune. Where a precise correspondence between the expected pitch and the output pitch is required, a corpus-based approach is one way of ensuring the tolerance within which this correspondence occurs.

3. The analysis process allows other acoustic characteristics of the output tones to be incorporated into the selection process and used for mapping. In Figure 6 the tones’ spectral centroids are shown in relation to their pitches. There appears to be an equivalence, in terms of spectral centroid, between tones of low pitch and high pitch, while many mid-pitch tones have low spectral centroid values. This parameter could itself be used as an interaction mapping target. Pitch is the main target considered in this paper, due to its crucial role in most common musical forms, but other acoustic characteristics can be targeted in much the same way.
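The selection process described in the points above can be sketched as a weighted nearest-neighbour search over the analysed descriptors. The field names, corpus values and weights below are hypothetical illustrations; the paper does not specify its data layout:

```python
# Hypothetical corpus entries: the control parameters (band-pass filter
# frequency, gain) stored alongside the analysed descriptors of the
# feedback tone those settings produced.
corpus = [
    {"filter_hz": 196.0, "gain": 0.8, "pitch_midi": 43.1, "centroid_hz": 900.0},
    {"filter_hz": 330.0, "gain": 0.7, "pitch_midi": 52.0, "centroid_hz": 1400.0},
    {"filter_hz": 659.0, "gain": 0.6, "pitch_midi": 64.2, "centroid_hz": 2600.0},
]

def select_unit(target_pitch_midi, target_centroid_hz=None,
                w_pitch=1.0, w_cent=0.001):
    """Return the corpus entry minimising a weighted descriptor distance.

    Pitch is compared in MIDI note numbers (semitones); the spectral
    centroid term is optional and down-weighted because it is in Hz.
    """
    def cost(u):
        c = w_pitch * abs(u["pitch_midi"] - target_pitch_midi)
        if target_centroid_hz is not None:
            c += w_cent * abs(u["centroid_hz"] - target_centroid_hz)
        return c
    return min(corpus, key=cost)

best = select_unit(64.0)
print(best["filter_hz"], best["gain"])  # control settings to recall
```

Because the query returns the *measured* pitch alongside the control settings, the distance from the target pitch also tells the user how far out of tune the recalled tone will be (point 2 above).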

4. DISCUSSION

In this paper a system for controlling feedback from the combination of an electric guitar, an amplifier and an audio interface acting as a control loop was described. This system was investigated as an alternative approach to using feedback in musical contexts. The concatenative synthesis framework described by Schwarz [16, 13, 15] was used, and the contribution made in this research was to investigate the adaptation of concatenative synthesis methods to the creation of a corpus of audio recordings associated with control parameters for the feedback system. We enumerated the sounds that could be created with this particular apparatus, and have shown as a proof of concept that the pitches in a target musical phrase could be recreated. Corpus-based methods also bring with them many other opportunities for specifying control based on other audio parameters (such as timbral descriptors).
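The corpus-creation stage summarised above can be sketched as a sweep over the two control parameters. The names `record_feedback` and `analyse` are hypothetical stand-ins for the audio capture and descriptor-analysis steps, which the paper does not specify in code form:

```python
import itertools

def build_corpus(filter_freqs_hz, gain_levels, record_feedback, analyse):
    """Sweep the two control parameters, record the resulting feedback,
    analyse it, and store the descriptors alongside the settings.

    record_feedback(filter_hz, gain) -> audio  (hypothetical audio I/O)
    analyse(audio) -> (pitch_midi, centroid_hz), pitch None if unstable
    """
    corpus = []
    for f, g in itertools.product(filter_freqs_hz, gain_levels):
        audio = record_feedback(filter_hz=f, gain=g)  # excite and capture
        pitch, centroid = analyse(audio)              # descriptor analysis
        if pitch is not None:                         # keep only stable tones
            corpus.append({"filter_hz": f, "gain": g,
                           "pitch_midi": pitch, "centroid_hz": centroid})
    return corpus
```

Settings that produce no stable tone are simply absent from the corpus, which is how the enumeration reveals the gaps in pitch coverage.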

While it is difficult to play multiple feedback guitars simultaneously with traditional musical methods, the automation of the aforementioned process means that multiple systems like that described above may be used simultaneously, each with its own corpus and within a single control system. This is a possible solution to the question of pitch coverage, as each guitar may be tuned differently, resulting in different harmonics being available. For musical contexts this allows a performance with these systems to take the form of an ensemble, with musical roles assigned to different guitar and amplifier combinations. Furthermore, spatial sound can be achieved by spatially separating the performing feedback systems, in much the same way that spatial sound is created by a band performing in separate locations.

For live performance contexts, the input from a typical musical instrument could be used as a target sound, with processing and corpus query happening in real time. This opens the possibility that a musician could control a feedback system in real time from another, more familiar, musical instrument.

Some limitations exist. One of these is the reliance on the descriptor algorithms to reliably identify the pitch and other characteristics of the input sounds. If there are multiple tones, or any type of transition between states, these algorithms may fail to adequately measure or describe the predominant sound quantitatively. Coupled with the noise in typical guitar systems of this nature, this difficulty can be significant. The use of high gain levels, however, does ameliorate the problem somewhat. A possible solution is to use a set of different pitch algorithms and a voting mechanism to select a pitch robustly; alternatively, the selection may be based on psychoacoustics.
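A minimal sketch of such a voting mechanism, assuming several independent pitch trackers each return a frequency estimate in Hz (the function name and tolerance are illustrative, not from the paper):

```python
import math
from statistics import median

def vote_pitch(estimates_hz, tolerance_cents=50.0):
    """Combine several pitch-tracker outputs into one robust estimate.

    Takes the median, keeps only estimates within `tolerance_cents` of it,
    and averages those. Returns None when no consensus exists, so that
    unstable frames can be excluded from the corpus.
    """
    if not estimates_hz:
        return None
    med = median(estimates_hz)

    def cents(a, b):
        return abs(1200.0 * math.log2(a / b))

    agreeing = [f for f in estimates_hz if cents(f, med) <= tolerance_cents]
    if len(agreeing) < 2:
        return None
    return sum(agreeing) / len(agreeing)

# Three trackers agree near 440 Hz; a lone octave error is rejected.
print(vote_pitch([440.0, 441.0, 880.0, 439.0]))
```

Octave errors, the most common pitch-tracker failure, sit roughly 1200 cents from the median and are therefore discarded before averaging.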

4.1 Further research directions

The use of a band-pass filter may result in much of the sound’s character outside the pass-band being filtered from the output. There is therefore a tradeoff between the narrowness of the filter (with its associated effectiveness at determining the pitch) and the spectral characteristics of the feedback sound being controlled. Research into different filter types (such as comb filters with high Q-values) is a useful direction to explore, and dynamic filtering may also be a useful approach to employ.
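The narrowness/Q tradeoff can be made concrete with a standard second-order band-pass biquad (coefficients per the widely used Audio EQ Cookbook formulas). This is an illustration of the tradeoff, not necessarily the filter implementation the system used:

```python
import cmath
import math

def bandpass_biquad(f0, fs, Q):
    """'Constant 0 dB peak gain' band-pass biquad coefficients.

    Higher Q pins the feedback pitch more precisely but strips more of
    the spectrum outside the pass-band (bandwidth is roughly f0 / Q).
    """
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * Q)
    a0 = 1.0 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0]
    return b, a

def gain_at(b, a, f, fs):
    """Magnitude response of the biquad at frequency f."""
    z = cmath.exp(-2j * math.pi * f / fs)  # z^-1 on the unit circle
    return abs((b[0] + b[1] * z + b[2] * z**2) /
               (a[0] + a[1] * z + a[2] * z**2))

b, a = bandpass_biquad(440.0, 44100.0, Q=20.0)
print(round(gain_at(b, a, 440.0, 44100.0), 3))  # unity gain at the centre
```

Comparing responses for Q = 20 and Q = 5 at a slightly detuned frequency shows how the narrow filter suppresses neighbouring harmonics far more strongly, which is exactly the property that trades spectral richness for pitch control.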

This research has not delved into the questions surrounding multiple-pitch output from a single feedback system. Given that two band-pass filters and automatic gain control systems can be included in separate loops within the one instrument, the possibility of two sounds emanating from the same feedback system is not without a theoretical basis (and indeed informal experiments have shown great promise for this technique). However, the control method described in this paper relies on single tones being analysed by the audio descriptor algorithms for the corpus creation. Two separate control parameter settings taken from a corpus of this type are likely to interact in non-orthogonal ways, but it may be that systems can be created which account for these interactions effectively.

The pitch coverage of the corpus depends on the tuning of the strings, which is therefore quite important for allowing a large range of pitches. The corpus analysed in this study used a tuning system based around fifths and octaves, which reinforced particular notes and left gaps elsewhere in the pitch coverage. Investigating a statistical measure of corpus coverage and target difference, along with a variety of different tunings, may lead to an optimal tuning system being uncovered.
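One simple candidate for such a coverage statistic, sketched here under the assumption that pitches are compared as MIDI note numbers (semitones), is the fraction of target pitches lying within a tolerance of some corpus pitch; the tolerance value and example pitches are illustrative:

```python
def coverage(corpus_pitches_midi, target_pitches_midi, tol_semitones=0.5):
    """Fraction of target pitches reachable within `tol_semitones` of some
    corpus pitch: a simple statistic for comparing string tunings."""
    if not target_pitches_midi:
        return 1.0
    hit = sum(
        1 for t in target_pitches_midi
        if any(abs(t - c) <= tol_semitones for c in corpus_pitches_midi)
    )
    return hit / len(target_pitches_midi)

# A corpus built on fifths/octaves covers a chromatic target set sparsely.
corpus_pitches = [43.0, 50.0, 55.0, 62.0, 67.0]
chromatic_targets = list(range(43, 68))
print(coverage(corpus_pitches, chromatic_targets))
```

Computing this statistic for each candidate tuning over a representative set of target phrases would allow tunings to be ranked, rather than compared by inspection of plots such as Figure 5.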

5. CONCLUSION

The use of feedback systems in musical performance is hindered by the requirement that the performer learn a complex mapping between the control parameters of the system and its sound output. We have introduced a corpus-based component to a feedback system, which allows a performer to select sounds based on their acoustic properties. This removes a major barrier to the use of feedback systems in musical settings where accurate pitch production is required, and also opens a wide variety of musical possibilities in which the timbral characteristics of feedback systems may be explored.

6. ACKNOWLEDGEMENTS

This research was supported by a Research Development Grant from the Faculty of Arts and Social Sciences, University of Technology, Sydney.

7. REFERENCES

[1] Michael Nyman, Experimental Music: Cage and Beyond, Cambridge University Press, Cambridge, UK, 2nd edition, 1999.

[2] Jeffrey M. Morris, “Feedback instruments: Generating musical sounds, gestures, and textures in real time with complex feedback systems,” in Int. Computer Music Conference, Copenhagen, Denmark, 2007, pp. 469–476.

[3] E. J. Berdahl, Julius O. Smith III, and Adrian Freed, “Active damping of a vibrating string,” in 6th Int. Symposium on Active Noise and Vibration Control, Adelaide, Australia, 2006.

[4] E. Berdahl and Julius O. Smith III, “Some physical audio effects,” in 9th Int. Conference on Digital Audio Effects, Montreal, Canada, 2006.

[5] E. Berdahl, Julius O. Smith III, and Günter Niemeyer, “Mechanical sound synthesis: And the new application of force-feedback teleoperation of acoustic musical instruments,” in 13th Int. Conference on Digital Audio Effects, Graz, Austria, 2010.

[6] E. Berdahl, Julius O. Smith III, and Günter Niemeyer, “Feedback control of acoustic musical instruments: Collocated control using physical analogs,” J. Acoust. Soc. Am., vol. 131, no. 1, pp. 963–973, 2012.

[7] Christopher Burns, “Emergent behavior from idiosyncratic feedback networks,” in Int. Computer Music Conference, Singapore, 2003.

[8] Christopher Burns, “Designing for emergent behavior: A John Cage realization,” in Int. Computer Music Conference, Miami, USA, 2004, pp. 193–196.

[9] Agostino di Scipio, “‘Sound is the interface’: From interactive to ecosystemic signal processing,” Organised Sound, vol. 8, no. 3, pp. 269–277, 2003.

[10] Aengus Martin, Sam Ferguson, and Kirsty Beilharz, “Mechanisms for controlling complex sound sources: Applications to guitar feedback control,” in New Interfaces for Musical Expression, Sydney, Australia, 2010.

[11] T. van Waterschoot and M. Moonen, “Comparative evaluation of howling detection criteria in notch-filter-based howling suppression,” in 126th Convention of the Audio Eng. Soc., Munich, Germany, 2009, vol. Preprint 7752.

[12] Diemo Schwarz, “Concatenative sound synthesis: The early years,” Journal of New Music Research, vol. 35, no. 1, pp. 3–22, 2006.

[13] Diemo Schwarz, Data-driven Concatenative Sound Synthesis, Ph.D. thesis, 2004.

[14] Diemo Schwarz, Roland Cahen, and Sam Britton, “Principles and applications of interactive corpus-based concatenative synthesis,” in Journées d’Informatique Musicale (JIM’08), Albi, 2008.

[15] Diemo Schwarz, Gregory Beller, Bruno Verbrugghe, and Sam Britton, “Real-time corpus-based concatenative synthesis with CataRT,” in 9th Int. Conference on Digital Audio Effects (DAFx-06), Montreal, Canada, 2006.

[16] Diemo Schwarz, “A system for data-driven concatenative sound synthesis,” in COST G-6 Conference on Digital Audio Effects (DAFX-00), Verona, Italy, 2000.


Published at the 13th International Conference on New Interfaces for Musical Expression (NIME 2013), May 27–30, 2013, Daejeon and Seoul, Republic of Korea.