
EURASIP Journal on Applied Signal Processing 2004:7, 1001–1006 © 2004 Hindawi Publishing Corporation

Real-Time Gesture-Controlled Physical Modelling Music Synthesis with Tactile Feedback

David M. Howard
Media Engineering Research Group, Department of Electronics, University of York, Heslington, York, YO10 5DD, UK
Email: [email protected]

Stuart Rimell

Media Engineering Research Group, Department of Electronics, University of York, Heslington, York, YO10 5DD, UK

Received 30 June 2003; Revised 13 November 2003

Electronic sound synthesis continues to offer huge potential for the creation of new musical instruments. The traditional approach is, however, seriously limited in that it incorporates only auditory feedback and it will typically make use of a sound synthesis model (e.g., additive, subtractive, wavetable, and sampling) that is inherently limited and very often nonintuitive to the musician. In a direct attempt to challenge these issues, this paper describes a system that provides tactile as well as acoustic feedback, with real-time synthesis that invokes a more intuitive response from players since it is based upon mass-spring physical modelling. Virtual instruments are set up via a graphical user interface in terms of the physical properties of basic well-understood sounding objects such as strings, membranes, and solids. These can be interconnected to form complex integrated structures. Acoustic excitation can be applied at any point mass via virtual bowing, plucking, striking, a specified waveform, or any external sound source. Virtual microphones can be placed at any point masses to deliver the acoustic output. These aspects of the instrument are described along with the nature of the resulting acoustic output.

Keywords and phrases: physical modelling, music synthesis, haptic interface, force feedback, gestural control.

1. INTRODUCTION

Musicians are always searching for new sounds and new ways of producing sounds in their compositions and performances. The availability of modern computer systems has enabled considerable processing power to be made available on the desktop, and such machines are capable of running sound synthesis techniques in real time that would have required large dedicated computer systems just a few decades ago. Despite the increased incorporation of computer technology in electronic musical instruments, the search is still on for virtual instruments that are closer, in terms of how they are played, to their physical acoustic counterparts.

The system described in this paper aims to integrate music synthesis by physical modelling with novel control interfaces for real-time use in composition and live performance. Traditionally, sound synthesis has relied on techniques involving oscillators, wavetables, filters, time envelope shapers, and digital sampling of natural sounds (e.g., [1]). More recently, physical models of musical instruments have been used to generate sounds which have more natural qualities and control parameters which are less abstract and more closely related to musicians' experiences with acoustic instruments [2, 3, 4, 5]. Professional electroacoustic musicians require control over all aspects of the sounds with which they are working, in much the same way as a conductor is in control of the sound produced by an orchestra. Such control is not usually available from traditional synthesis techniques, since user adjustment of the available synthesis parameters rarely leads to predictable acoustic results. Physical modelling, on the other hand, offers the potential of more intuitive control, because the underlying technique relates directly to the physical vibrating properties of objects such as strings and membranes, with which the user can interact intuitively based on expectations of how such objects behave.

The acoustic output from traditional electronic musical instruments is often described as “cold” or “lifeless” by players and audience alike. Indeed, many report that such sounds become less interesting with extended exposure. The output from acoustic musical instruments, on the other hand, is often described as “warm,” “intimate,” or “organic.” The application of physical modelling for sound synthesis produces output sounds that resemble their physical counterparts much more closely.


The success of a user interface for an electronic musical instrument might be judged on its ability to enable the user to experience the illusion of directly manipulating objects, and one approach might be the use of virtual reality interfaces. However, this is not necessarily the best way to achieve such a goal in the context of a musical instrument, since a performing musician needs to be actively in touch visually and acoustically not only with other players, but also with the audience. This is summed up by Shneiderman [6]: “virtual reality is a lively new direction for those who seek the immersion experience, where they block out the real world by having goggles on their heads.” In any case, traditionally trained musicians rely less on visual feedback with their instrument and more on tactile and sonic feedback as they become increasingly accustomed to playing it. For example, Hunt and Kirk [7] note that “observation of competent pianists will quickly reveal that they do not need to look at their fingers, let alone any annotation (e.g., sticky labels with the names of the notes on) which beginners commonly use. Graphics are a useful way of presenting information (especially to beginners), but are not the primary channel which humans use when fully accustomed to a system.”

There is evidence to suggest that the limited information available from the conventional screen and mouse interface is certainly limiting and potentially detrimental for creating electroacoustic music. Buxton [8] suggests that the visual senses are overstimulated, whilst the others are understimulated. In particular, he suggests that tactile input devices also provide output, enabling the user to relate to the system as an object rather than an abstract system: “every haptic input device can also be considered to provide output. This would be through the tactile or kinaesthetic feedback that it provides to the user . . . . Some devices actually provide force feedback, as with some special joysticks.” Fitzmaurice [9] proposes “graspable user interfaces” as real objects which can be held, manipulated, positioned, and conjoined in order to make interfaces which are more akin to the way a human interacts with the real world. It has further been noted that the haptic senses provide the second most important means (after the audio output) by which users observe and interact with the behaviour of musical instruments [10], and that complex and realistic musical expression can only result when both tactile (vibrational and textural) and proprioceptive cues are available in combination with aural feedback [11].

Considerable activity exists in capturing human gesture (see http://www.media.mit.edu/hyperins/ and http://www.megaproject.org/) [12]. Specific to the control of musical instruments are the provision of tactile feedback [13], electronic keyboards that have a feel close to a real piano [14], haptic feedback bows that simulate the feel and forces of real bows [15], and the use of finger-fitted vibrational devices in open-air gestural musical instruments [16]. Such haptic control devices are generally one-off, relatively expensive, and designed to operate linked with specific computer systems, and as such they are essentially inaccessible to the musical masses. A key feature of our instrument is its potential for wide applicability, and therefore inexpensive and widely available PC force feedback gaming devices are employed to provide its real-time gestural control and haptic feedback.

The instrument described in this paper, known as Cymatic [17], took its inspiration from the fact that traditional acoustic instruments are controlled by direct physical gesture whilst providing both aural and tactile feedback. Cymatic has been designed to provide players with the immersive, easy-to-understand, and tactile musical experience that is commonly associated with acoustic instruments but rarely found in computer-based instruments. The audio output from Cymatic is derived from a physical modelling synthesis engine which has its origins in TAO [3]. It shares some common approaches with other physical modelling sound synthesis environments such as MOSAIC [4] and CORDIS-ANIMA [5]. Cymatic makes use of the more intuitive approach to sound synthesis offered by physical modelling to provide a building-block approach to the creation of virtual instruments, based on elemental structures in one (string), two (sheet), three (block), or more dimensions that can be interconnected to form complex virtual acoustically resonant structures. Such instruments can be excited acoustically, controlled in real time via gestural devices that incorporate force feedback to provide a tactile response in addition to the acoustic output, and heard by placing one or more virtual microphones at user-specified positions within the instrument.

2. DESIGNING AND PLAYING CYMATIC INSTRUMENTS

Cymatic is a physical modelling synthesis system that makes use of a mass-spring paradigm with which it synthesises resonating structures in real time. It is implemented in C++ on a Windows-based PC, and it incorporates support for standard force feedback PC gaming controllers to provide gestural control and tactile feedback. Acoustic output is realised via a sound card that provides support for ASIO audio drivers. Operation of Cymatic is a two-stage process: (1) virtual instrument design and (2) real-time sound synthesis.

Virtual instrument design is accomplished via a graphical interface, with which individual building-block resonating elements, including strings, sheets, and solids, can be incorporated in the instrument and interconnected on a user-specified mass-to-mass basis. The ends of strings and the edges of sheets and blocks can be locked as desired. The tension and mass parameters of the masses and springs within each building-block element can be user defined in value and either left fixed or placed under dynamic control from a gestural controller during synthesis. Virtual instruments can be customised in shape, enabling arbitrary structures to be realised by deleting or locking any of the individual masses. Each building-block resonating element behaves as a vibrating structure. The individual axial resonant frequencies are determined by the number of masses along the given axis, the sampling rate, and the specified mass and tension values. Standard relationships hold in terms of the relative values of resonant frequency between building blocks;


Figure 1: Example build-up of a Cymatic virtual instrument, starting with a string of 45 masses (top left), then adding a sheet of 7 by 9 masses (bottom left), then a block of 4 by 4 by 3 masses (top right), and finally the completed instrument (bottom right). Mic1: audio output virtual microphone on the sheet at mass (4, 1). Random: random excitation at mass 33 of the string. Bow: bowed excitation at mass (2, 2, 2) of the block. Join (dotted line) between string mass 18 and sheet mass (1, 5). Join (dotted line) between sheet mass (6, 3) and block mass (3, 2, 1).

for example, a string twice the length of another will have a fundamental frequency that is one octave lower.
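For the idealised continuous string, this octave relationship follows from the standard formula for the fundamental of a stretched string (quoted here as background; Cymatic itself uses the discrete mass-spring model of Section 3):

\[
f_1 = \frac{1}{2L}\sqrt{\frac{T}{\mu}}, \qquad f_1\big|_{L \to 2L} = \frac{1}{2(2L)}\sqrt{\frac{T}{\mu}} = \frac{f_1}{2},
\]

where L is the string length, T its tension, and \mu its mass per unit length, so doubling the length halves the fundamental frequency.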

An excitation function, selected from the following list, can be placed on any mass within the virtual instrument: pluck, bow, random, sine wave, square wave, triangular wave, or live audio. Parameters relating to the selected excitation, including the excitation force and its velocity and time of application where appropriate, can be specified by the user. Multiple excitations can be specified, on the basis that each is applied to its own individual mass element. Monophonic audio output to the sound card is achieved via a virtual microphone placed on any individual mass within the instrument. Stereophonic output is available from two or more microphones, with the output from each panned between the left and right channels as desired. Cymatic supports whatever sampling rates are available on the sound card. For example, when used with an Edirol UA-5 USB audio interface, the following are available: 8 kHz, 9.6 kHz, 11.025 kHz, 12 kHz, 16 kHz, 22.05 kHz, 24 kHz, 32 kHz, 44.1 kHz, 48 kHz, 88.2 kHz, and 96 kHz.
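As a concrete illustration, one possible representation of such an excitation specification is sketched below in C++ (the implementation language of Cymatic); all names here are hypothetical and are not taken from the Cymatic source:

#include <cstdint>

// Hypothetical description of one excitation; each instance drives exactly
// one mass element, so multiple excitations simply target distinct masses.
enum class ExcitationType { Pluck, Bow, Random, Sine, Square, Triangle, LiveAudio };

struct Excitation {
    ExcitationType type;
    std::uint32_t  massIndex;   // the single mass element this excitation drives
    double         force;       // excitation force
    double         velocity;    // e.g., bow velocity; unused by some types
    double         startTime;   // time of application in seconds, where relevant
};

A virtual instrument would then hold a collection of such records (with unique massIndex values) alongside its virtual microphone positions.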

Figure 1 illustrates the process of building up a virtual instrument. The instrument has been built up from a string of 45 masses, a sheet of 7 by 9 masses, and a block of 4 by 4 by 3 masses. There is an interconnection between the string (mass 18 from the left) and the sheet (mass (1, 5)), as well as between the sheet (mass (6, 3)) and the block (mass (3, 2, 1)), as indicated by the dotted lines (a simple process based on clicking on the relevant masses). Two excitations have been included: a random input to the string at mass 33 and a bowed excitation to the block at mass (2, 2, 2). The basic sheet and block have been edited: masses have been removed from both the sheet and the block, as indicated by the gaps in their structures, and the masses on the back surface of the block have all been locked. The audio output is derived from a virtual microphone placed on the sheet at mass (4, 1). These are indicated on the figure as Random, Bow, and Mic1, respectively. Individual components, excitations, and microphones can be added, edited, or deleted as desired.

The instrument is controlled in real time using a Microsoft Sidewinder Force Feedback Pro joystick and a Logitech iFeel mouse (http://www.immersion.com). The various gestures that can be captured by these devices can be mapped to any of the parameters associated with the physical modelling process on an element-by-element basis. The joystick offers four degrees of freedom (x, y, z-twist movement and a rotary “throttle” controller) and eight buttons. The mouse has two degrees of freedom (x, y) and three buttons. Cymatic parameters that can be controlled include the mass or tension of any of the basic elements that make up the instrument and the parameters associated with the chosen excitation, such as bowing pressure, excitation force, or excitation velocity. The buttons can be configured to suppress the effect of any of the gestural movements, enabling the user to move to a new position while making no change; the change can then be made instantaneously by releasing the button. In this way, step variations can be accommodated.
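A minimal sketch of such a mapping, including the “suppress” button behaviour just described, might look as follows in C++; the names are hypothetical and this is not the actual Cymatic implementation:

// Maps one gestural axis onto one synthesis parameter.
struct AxisMapping {
    double* target;      // synthesis parameter, e.g., the tension of a sheet
    double  minValue;    // parameter value at the axis minimum
    double  maxValue;    // parameter value at the axis maximum
    bool    suppressed;  // true while the assigned button is held down
};

// Called once per control update with the axis position normalised to [0, 1].
// While suppressed, the parameter is frozen; on release, the next update
// writes the new axis position, producing the instantaneous step change.
void applyAxis(AxisMapping& m, double axis01) {
    if (m.suppressed) return;
    *m.target = m.minValue + axis01 * (m.maxValue - m.minValue);
}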

The force feedback capability of the joystick allows for the provision of tactile feedback with a high degree of customisability. It receives its force instructions via MIDI through the combined MIDI/joystick port found on most PC sound cards, and Cymatic outputs the appropriate MIDI messages to control its force feedback devices. The Logitech iFeel mouse is an optical mouse which implements Immersion's iFeel technology (http://www.immersion.com). It contains a vibrotactile device that produces tactile feedback over a range of frequencies and amplitudes via the “Immersion TouchSense Entertainment” software, which converts any audio signal to tactile sensations. The force feedback amplitude is controlled by the acoustic amplitude of the signal from a user-specified virtual microphone, which might be involved in the provision of the main acoustic output or be solely responsible for the control of tactile feedback.

3. PHYSICAL MODELLING SYNTHESIS IN CYMATIC

Physical modelling audio synthesis in Cymatic is carried out by solving for the mechanical interaction between the masses and springs that make up the virtual instrument on a sample-by-sample basis. The central difference method of numerical integration is employed as follows:

x(t + dt) = x(t) + v(t + dt/2) dt,
v(t + dt/2) = v(t − dt/2) + a(t) dt, (1)


where x = mass position, v = mass velocity, a = mass acceleration, t = time, and dt = sampling interval.

The mass velocity is calculated half a time step ahead of its position; this staggered (leapfrog) arrangement makes the update time-centred, which results in a more stable model than an implementation of the Euler approximation. The acceleration at time t of a cell is calculated by the classical equation

a = F/m, (2)

where F = the sum of all the forces on the cell and m = the cell mass.

Three forces act on the cell:

Ftotal = Fspring + Fdamping + Fexternal, (3)

where Fspring = the force on the cell from springs connected to neighbouring cells, Fdamping = the frictional damping force on the cell due to the viscosity of the medium, and Fexternal = the force on the cell from external excitations.

Fspring is obtained by summing, via Hooke's law, the forces on the cell from the springs connecting it to its neighbours:

Fspring = k ∑(pn − p0), (4)

where k = the spring constant, pn = the position of the nth neighbour, and p0 = the position of the current cell.

Fdamping is the frictional force on the cell caused by the viscosity of the medium in which the cell is contained. It is proportional to the cell velocity, where the constant of proportionality is the damping parameter of the cell:

Fdamping = −ρv(t), (5)

where ρ = the damping parameter of the cell and v(t) = the velocity of the cell at time t.

The acceleration of a particular cell at any instant can be established by combining these forces into (2):

a(t) = (1/m)(k ∑(pn − p0) − ρv(t) + Fexternal). (6)

The position, velocity, and acceleration are calculated once per sampling interval for each cell in the virtual instrument. Any virtual microphones in the instrument output their cell positions to provide an output audio waveform.
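To make the per-sample procedure concrete, the following is a minimal C++ sketch (C++ being the language Cymatic is implemented in) of the update implied by (1)-(6) for a one-dimensional chain of cells; the names and data layout are illustrative assumptions, not the actual Cymatic source:

#include <cstddef>
#include <vector>

struct Cell {
    double x = 0.0;    // position; sampled by a virtual microphone for audio out
    double v = 0.0;    // velocity, stored half a step behind x (at t - dt/2)
    double m = 1.0;    // cell mass
    double k = 1.0;    // spring constant to each neighbour
    double rho = 0.0;  // damping parameter
    bool locked = false;
};

void step(std::vector<Cell>& cells, const std::vector<double>& fExternal, double dt) {
    const std::size_t n = cells.size();
    std::vector<double> accel(n, 0.0);
    // Pass 1: accelerations a(t) from the positions at time t, equations (2)-(6).
    for (std::size_t i = 0; i < n; ++i) {
        const Cell& c = cells[i];
        double fSpring = 0.0;  // Hooke's law summed over neighbours, equation (4)
        if (i > 0)     fSpring += c.k * (cells[i - 1].x - c.x);
        if (i + 1 < n) fSpring += c.k * (cells[i + 1].x - c.x);
        const double fDamping = -c.rho * c.v;  // equation (5); v(t - dt/2) stands in for v(t)
        accel[i] = (fSpring + fDamping + fExternal[i]) / c.m;  // equations (2), (3)
    }
    // Pass 2: central difference (leapfrog) update, equation (1).
    for (std::size_t i = 0; i < n; ++i) {
        Cell& c = cells[i];
        if (c.locked) continue;  // locked masses (e.g., string ends) do not move
        c.v += accel[i] * dt;    // v(t + dt/2) = v(t - dt/2) + a(t) dt
        c.x += c.v * dt;         // x(t + dt) = x(t) + v(t + dt/2) dt
    }
}

A virtual microphone then simply reads cells[i].x once per sample to form the output waveform; sheets and blocks differ only in having more neighbours per cell in the spring-force sum.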

4. CYMATIC OUTPUTS

Audio spectrograms provide a representation that enables the detailed nature of the acoustic output from Cymatic to be observed visually. Figure 2 shows a virtual Cymatic instrument consisting of a string and a modified sheet, which are joined together between mass 30 (from the left) on the string and mass (6, 3) on the sheet. A random excitation is applied at mass 10 of the string and a virtual microphone (mic1) is located at mass (4, 3) of the sheet. Figure 3 shows the force feedback joystick settings dialog used to control the virtual instrument; it can be seen that the component mass of the string and the component tension, damping, and mass of the sheet are controlled by the x, y, z, and slider (throttle)

Figure 2: Cymatic virtual instrument consisting of a string and modified sheet, joined together between mass 30 (from the left) on the string and mass (6, 3) on the sheet. A random excitation is applied at mass 10 of the string and the virtual microphone is located at mass (6, 3) of the sheet.

Figure 3: Force feedback joystick settings dialog.

Figure 4: Spectrogram of output from the Cymatic virtual instrument, shown in Figure 2, consisting of a string and modified sheet (frequency axis 0–4 kHz; 1 s of output shown).

functions of the joystick. Three of the buttons have been set to suppress x, y, and z, a feature which enables a new setting to be jumped to as desired, for example, by pressing button 1, moving the joystick in the x axis, and then releasing button 1. Force feedback is applied based on the output amplitude level from mic1.


Figure 5: Spectrogram of a section of “the child is sleeping” by Stuart Rimell, showing Cymatic alone (from the start to A), the word “hush” sung by the four-part choir (A to B), and the “st” of “still” at C.

Figure 4 shows a spectrogram of the output from mic1 of the instrument. The tonality visible (horizontal banding in the spectrogram) is entirely due to the resonant properties of the string and sheet themselves, since the input excitation is random. Variations in the tonality are rendered through gestural control of the joystick, and the step change notable just before halfway through is a result of using one of the “suppress” buttons.

Cymatic was used in a public live concert in December 2002, for which a new piece, “the child is sleeping,” was specially composed by Stuart Rimell for a cappella choir and Cymatic (http://www.users.york.ac.uk/∼dmh). It was performed by the Beningbrough Singers in York, conducted by David Howard. The composer performed the Cymatic part, which made use of three cymbal-like structures controlled by the mouse and joystick. The choir provided a backing in the form of a slow-moving carol in four-part harmony, while Cymatic played an obbligato solo line. The spectrogram in Figure 5 illustrates this with a section which has Cymatic alone (up to point A), after which the choir enters singing “hush be still,” with the “sh” of “hush” showing at point B and the “st” of “still” at point C. In this particular Cymatic example, the sound colours being used lie at the extremes of the vocal spectral range, but there are clearly tonal elements visible in the Cymatic output. Indeed, these were essential as a means of giving the choir their starting pitches.

5. DISCUSSION AND CONCLUSIONS

An instrument known as Cymatic has been described, which provides its players with an immersive, easy-to-understand, and tactile musical experience that is rarely found with computer-based instruments but commonly expected from acoustic musical instruments. The audio output from Cymatic is derived from a physical modelling synthesis engine, which enables virtual instruments with arbitrary shapes to be built up by interconnecting one- (string), two- (sheet), three- (block), or more dimensional basic building blocks. An acoustic excitation chosen from bowing, plucking, striking, or a waveform is applied at any mass element, and the output is derived from a virtual microphone placed at any other mass element. Cymatic is controlled via gestural controllers that incorporate force feedback to provide the player with tactile as well as acoustic feedback.

Cymatic has the potential to enable new musical instruments to be explored that could produce original and inspiring new timbral palettes, since virtual instruments that are not physically realisable can be implemented. In addition, interaction with these instruments can include aspects that are impossible with their physical counterparts, such as deleting part of the instrument while it is sounding or changing its physical properties in real time during performance. The design of the user interface ensures that all of these activities can be carried out in a manner that is more intuitive than with traditional electronic instruments, since it is based on the resonant properties of physical structures. A user can therefore make sense of what she or he is doing through reference to the likely behaviour of strings, sheets, and blocks. Cymatic has the further potential in the future (as processing speed increases further) to move well away from the real physical world while maintaining the link with this intuition, since the spatial dimensionality of the virtual


instruments can in principle be extended well beyond the three dimensions of the physical world.
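To illustrate why such an extension is plausible, the following sketch (illustrative only, not from the Cymatic source) shows how a mass-spring lattice generalises to d dimensions: with the masses stored in a flat array, the neighbours of a cell along axis k are simply one stride away in either direction, so the per-sample update of Section 3 is unchanged whatever the value of d:

#include <cstddef>
#include <vector>

// For lattice sizes n[0..d-1], stride[k] is the product of n[0..k-1].
std::vector<std::size_t> strides(const std::vector<std::size_t>& n) {
    std::vector<std::size_t> s(n.size(), 1);
    for (std::size_t k = 1; k < n.size(); ++k) s[k] = s[k - 1] * n[k - 1];
    return s;
}

// Flat indices of the (up to) 2d axis neighbours of the cell at coord[].
std::vector<std::size_t> neighbours(const std::vector<std::size_t>& coord,
                                    const std::vector<std::size_t>& n) {
    const auto s = strides(n);
    std::size_t idx = 0;
    for (std::size_t k = 0; k < n.size(); ++k) idx += coord[k] * s[k];
    std::vector<std::size_t> out;
    for (std::size_t k = 0; k < n.size(); ++k) {
        if (coord[k] > 0)        out.push_back(idx - s[k]);  // neighbour below on axis k
        if (coord[k] + 1 < n[k]) out.push_back(idx + s[k]);  // neighbour above on axis k
    }
    return out;
}

The only cost of each additional dimension is one extra pair of neighbour contributions per cell in the spring-force sum, which is why dimensionality becomes principally a question of available processing speed.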

Cymatic provides the player with an increased sense of immersion, which is particularly useful when developing performance skills since it reinforces the visual and aural feedback cues and helps the player internalise models of the instrument's response to gesture. Tactile feedback also has the potential to prove invaluable in group performance, where traditionally computer instruments have placed an over-reliance on visual feedback, thereby detracting from the player's visual attention, which should be directed elsewhere in a group situation, for example, towards a conductor.

ACKNOWLEDGMENTS

The authors acknowledge the support of the Engineering and Physical Sciences Research Council, UK, under Grant number GR/M94137. They also thank the anonymous referees for their helpful and useful comments.

REFERENCES

[1] M. Russ, Sound Synthesis and Sampling, Focal Press, Oxford, UK, 1996.

[2] J. O. Smith III, “Physical modelling synthesis update,” Computer Music Journal, vol. 20, no. 2, pp. 44–56, 1996.

[3] M. D. Pearson and D. M. Howard, “Recent developments with TAO physical modelling system,” in Proc. International Computer Music Conference, pp. 97–99, Hong Kong, China, August 1996.

[4] J. D. Morrison and J. M. Adrien, “MOSAIC: A framework for modal synthesis,” Computer Music Journal, vol. 17, no. 1, pp. 45–56, 1993.

[5] C. Cadoz, A. Luciani, and J. L. Florens, “CORDIS-ANIMA: A modelling system for sound and image synthesis, the general formalism,” Computer Music Journal, vol. 17, no. 1, pp. 19–29, 1993.

[6] J. Preece, “Interview with Ben Shneiderman,” in Human-Computer Interaction, Y. Rogers, H. Sharp, D. Benyon, S. Holland, and J. Preece, Eds., Addison Wesley, Reading, Mass, USA, 1994.

[7] A. D. Hunt and P. R. Kirk, Digital Sound Processing for Music and Multimedia, Focal Press, Oxford, UK, 1999.

[8] W. Buxton, “There is more to interaction than meets the eye: Some issues in manual input,” in User Centered System Design: New Perspectives on Human-Computer Interaction, D. A. Norman and S. W. Draper, Eds., pp. 319–337, Lawrence Erlbaum Associates, Hillsdale, NJ, USA, 1986.

[9] G. W. Fitzmaurice, Graspable user interfaces, Ph.D. thesis, University of Toronto, Ontario, Canada, 1998.

[10] B. Gillespie, “Introduction haptics,” in Music, Cognition, and Computerized Sound: An Introduction to Psychoacoustics, P. R. Cook, Ed., pp. 229–245, MIT Press, London, UK, 1999.

[11] D. M. Howard, S. Rimell, A. D. Hunt, P. R. Kirk, and A. M. Tyrrell, “Tactile feedback in the control of a physical modelling music synthesiser,” in Proc. 7th International Conference on Music Perception and Cognition, C. Stevens, D. Burnham, G. McPherson, E. Schubert, and J. Renwick, Eds., pp. 224–227, Causal Productions, Adelaide, Australia, 2002.

[12] S. Kenji, H. Riku, and H. Shuji, “Development of an autonomous humanoid robot, iSHA, for harmonized human-machine environment,” Journal of Robotics and Mechatronics, vol. 14, no. 5, pp. 324–332, 2002.

[13] C. Cadoz, A. Luciani, and J. L. Florens, “Responsive input devices and sound synthesis by simulation of instrumental mechanisms: The Cordis system,” Computer Music Journal, vol. 8, no. 3, pp. 60–73, 1984.

[14] B. Gillespie, Haptic display of systems with changing kinematic constraints: The virtual piano action, Ph.D. dissertation, Stanford University, Stanford, Calif, USA, 1996.

[15] C. Nichols, “The vBow: Development of a virtual violin bow haptic human-computer interface,” in Proc. New Interfaces for Musical Expression Conference, pp. 168–169, Dublin, Ireland, May 2002.

[16] J. Rovan and V. Hayward, “Typology of tactile sounds and their synthesis in gesture-driven computer music performance,” in Trends in Gestural Control of Music, M. Wanderley and M. Battier, Eds., pp. 297–320, Editions IRCAM, Paris, France, 2000.

[17] D. M. Howard, S. Rimell, and A. D. Hunt, “Force feedback gesture controlled physical modelling synthesis,” in Proc. Conference on New Interfaces for Musical Expression, pp. 95–98, Montreal, Canada, May 2003.

David M. Howard holds a first-class B.S. degree in electrical and electronic engineering from University College London (1978) and a Ph.D. in human communication from the University of London (1985). His Ph.D. topic was the development of a signal processing unit for use with a single-channel cochlear implant hearing aid. He is now with the Department of Electronics at the University of York, UK, teaching and researching in music technology. His specific research areas include the analysis and synthesis of music, singing, and speech. Current activities include the application of bio-inspired techniques for music synthesis; physical modelling synthesis for music, singing, and speech; and real-time computer-based visual displays for professional voice development. David is a Chartered Engineer, a Fellow of the Institution of Electrical Engineers, and a Member of the Audio Engineering Society. Outside work, David finds time to conduct a local 12-strong choir from the tenor line and to play the pipe organ.

Stuart Rimell holds a B.S. in electronic music and psychology as well as an M.S. in digital music technology, both from the University of Keele, UK, where he studied electroacoustic composition for 3 years under Mike Vaughan and Rajmil Fischman. He worked for 18 months with David Howard at the University of York on the development of the Cymatic system. Stuart is interested in the exploration of new and fresh creative musical methods and their computer-based implementation for electronic music composition. Stuart is a guitarist; he also plays euphonium, trumpet, and piano and has been writing music for over 12 years. His compositions have been recognised internationally through prizes from the prestigious Bourges Festival of Electronic Music in 1999 and performances of his music worldwide.
