Volume #28 Preview

Volume 28 – To Beyond or Not to Be: The Internet of Things. Archis 2011 #2. Per issue € 19.50 (nl, b, d, e, p). Volume is a project by Archis + amo + c-lab. With Trust Design #2 and Tracing Concepts.

Description

When things start talking back, you’ve become part of an Internet of Things. Auto-sensing, basic intelligence, interaction: we’re increasingly part of a world where things and living souls are equally connected. The fridge is a node just as you are. Volume #28 dives into these new dimensions of reality, into the consequences for design and for our understanding of our own position in the world.

Transcript of Volume #28 Preview

Page 1: Volume #28 Preview

VOLUME 28 – TO BEYOND OR NOT TO BE – THE INTERNET OF THINGS

Archis 2011 #2. Per issue € 19.50 (nl, b, d, e, p). Volume is a project by Archis + amo + c-lab…

Amelia Borg, Bart-Jan Polman, Ben Cerveny, Ben Schouten, Carola Moujan, Christiaan Fruneaux, Cloud Lab, Deborah Hauptmann, Dietmar Offenhuber, Dimitri Nieuwenhuizen, Ed Borden, Eduard Sansho Pou, Edwin Gardner, Hiroshi Ishiguro, James Burke, Jeroen Beekmans, Joop de Boer, Juha van ‘t Zelfde, Justin Fowler, Ken Sakamura, Lara Schrijver, Lorna Goulden, Marcell Mars, Mark Dek, Mark Shephard, Mette Ramsgard Thomsen, Nina Larsen, Nortd Labs, Ole Bouman, Philip Beesley, Ruairi Glynn, Scott Burnham, Shintaro Myazaki, Stephen Gage, Timothy Moore, Tomasz Jasciewicz, Tuur van Balen, Usman Haque, Vincent Schipper

INTERNET OF THINGS

With Trust Design #2 and Tracing Concepts

Page 2: Volume #28 Preview


becomes my body – in other words, it is the localization, through touch, of sensations as such, that makes us aware of having a body of our own. On the other hand,

Aristotle – who gave touch a lot of thought – noted that, unlike the other senses, the experience of touch is fusional: touch does not distinguish between ‘a touching subject’ and ‘a touched object’, both actors playing both roles simultaneously.

Closer to us in time, Australian filmmaker and theorist Cathryn Vasseleu underlines two seemingly contradictory aspects of touch: one is ‘a responsive and indefinable affection, a sense of being touched as being moved’; the other is ‘touching as a sense of grasping, as an objective sense of things, conveyed through the skin’. While the first of these implies a form of openness, the second expresses ‘the making of a connection, as the age-old dream of re-appropriation, autonomy and mastery’, and ‘is defined in terms of vision’. This distinction is of major importance in relation to haptic design; what Vasseleu’s remarks suggest is that, of the two aspects of touch, only one can be considered ‘truly tactile’, the other being somehow ‘visual’ in nature. Stated plainly: depending on whether we adopt the ‘tactile’ perspective (touch as being moved – an open passage) or the ‘visual’ one (touch as grasping – a sense of control), the quality of the outcome will be very different. In one case, subject and object are on the same level and the goal is open; in the other, there is domination of one part over the other and the goal is a specific outcome – a pre-determined ‘function’.

Of Touch and Power

The intrinsically dynamic property of touch, which is feeling and acting simultaneously, implies an active form of perception that is different from a passive reception

what the image ‘does’ or ‘does not do’, following a pre-determined program which pushes the participant to carry through a specific choreography. The succession of movements generates a particular quality of sensations which, despite its major impact on the aesthetic experience, is not acknowledged in the design outcome.

John Dewey defined the notion of the artificial as what happens whenever ‘there is a split between what is overtly done and what is intended’. In this sense we can say that the system presented in Domestic Robocop is truly artificial not because machines or cutting-edge technology are involved, but because of this split – the simulation of touch that suppresses real touch. We could instead envision truly natural ways of embedding and accessing data, ways that start from the participant’s gestures instead of imposing gestures onto him. This approach is well illustrated by Chris Woebken’s Nanofutures: Sensual Interfaces (2007). According to Anthony Dunne (the movie was presented at MoMA’s 2008 exhibition Design and the Elastic Mind), the piece is a reaction to current views on nanotechnology, which are primarily related to its capacity to improve the functional characteristics of existing materials (e.g., increased resistance, reduced weight). Instead Woebken explored nanotechnologies as new design materials in their own right. In particular he focused on ‘smartdust’ – a hypothetical system of multiple tiny microelectromechanical systems (MEMS) – trying to imagine the type of product that might emerge from this technology and how it could transform the very notion of interaction.

Nanofutures: Sensual Interfaces shows an office worker interacting with his desktop computer through an interface made out of blocks of seeds (the seeds representing smartdust). The user breaks the blocks apart, spreads the seeds, plays with them. While the seed interface still fulfills a functional goal – sharing, breaking, mining data – it is actually the sensual quality of the manipulations that strikes the viewer. Beyond function, one would want to work with them merely for the tactile pleasure they would provide.

of stimuli. Although in all sensual activity both passiveness and action are present, in touch the second is paramount. Therefore designing for touch implies a call to action on the participant; it enables them to drive the experience while remaining self-centered.

To further clarify, let us analyze what happens in the participant’s body. Two speculative films will help illustrate the point. The first, Keiichi Matsuda’s Domestic Robocop (2010), is an animated movie showing a vision of an ‘augmented’ future in which media has completely saturated physical space. Direct bodily contact with objects has disappeared, replaced by a visual representation of the hands which, quite paradoxically, conveys an impression of vintage imagery, as if the user’s gestures no longer belonged to the realm of natural movements but were a simulacrum of what humans used to do in a distant past. In other words, in the world of Domestic Robocop users do not touch objects themselves, but rather touch the image of touching them. One no longer grabs a real kettle; instead one grabs the kettle as an icon, as a gate towards concealed information. The act of touching remains present, but in the form of a simulation: we have replaced ‘the real thing’ (touching) with a simulation of touch.

Considered from the tactile perspective, instead of being augmented this situation could be called reduced reality. But don’t get me wrong: I am not arguing against the concept of augmented reality (although I certainly would go for a change of name). I am critiquing simulation, a ‘visual’, autocratic approach to interaction which surreptitiously makes humans subservient to machines. Simulation is autocratic because it forces the participant into a single point of view (the one ‘reality’ it is supposed to recreate). This has two major implications: first, the reductive one I mentioned earlier – losing a dimension, exchanging the real for the fake; second, the necessity to comply with the images’ demands, which can be huge. In Domestic Robocop, for instance, the body is used as the image’s ‘control panel’ – it makes the image system work, activating the different variations and possibilities of the ‘film’ being shown. Attention is focused on

The word touch is on everyone’s lips these days. It generally refers to tangible devices and interfaces, a trend that possibly started with Steven Spielberg’s 2002 movie Minority Report, and of which the iPhone is the seminal example. The spectacular commercial success of Apple’s smartphone proved to the world that there is something in touch that significantly reduces the gap between humans and computers, and indeed interacting with objects through direct contact undoubtedly increases user pleasure. Some critics, however, such as Don Norman, have been pointing out the inefficiency of tactile interaction, going as far as calling tangible devices ‘a step backwards in usability’. Norman believes ‘natural interfaces are not natural’: they trigger random and unwanted actions, do not rely on consistent interaction protocols, present scalability problems, etc. He argues that a clear protocol should be adopted to make them fully functional, just as happened with visual interfaces.

Norman’s essays bring a critical perspective to the current tactile craze. This raises a question, however: if tangible devices are unreliable, inconsistent, unpredictable, and overall less efficient than previous systems, why are people willing to pay (much) more and learn how to use them – no matter how intuitive they might be? What is it that makes them so pleasurable to use? And, importantly, would they remain as pleasurable if they were more functional?

The pitfall in Norman’s argument is that he puts visual and tactile interfaces on the same level. In other words, he implies that a tactile interface should work just as a visual one does; and it is true that in most tangible interfaces as they exist today, the role of touch is restricted to the hand only, and envisioned merely from a functional perspective – i.e., as a replacement for pointers and mice in command execution. This is a mechanical understanding that overrides the most powerful affordances of haptics which, I argue below, are not connected to function, but to experience.

The Bipolar Nature of Touch

Most of the time while discussing touch one thinks of the hand and its ability to grasp things. This, however, is a very narrow view of what this sense really is. The experience of touch concerns the whole body, as skin sensations of temperature and humidity, pressure from internal organs, or experiences of movement and weight also belong to it. James J. Gibson calls this global understanding of touch a haptic system, describing it as a bipolar device through which an individual simultaneously gathers information about the surrounding environment and about their own body.

The dual nature of touch has interested thinkers from different disciplines throughout history. Philosophers such as Husserl, for instance, have pointed out that touch is where the limit between ‘what is me’ and ‘what is not me’ lies, for it is through touch that a body

Touching the Interspace

Carola Moujan

The interface is defining for our orientation in the world. Touch seems the natural way to go, but how does it influence our own notion of being? Carola Moujan suggests that ‘interspace’ is the new realm for designers.

Nanofutures

Domestic Robocop

© Keiichi Matsuda

© Chris Woebken

Page 3: Volume #28 Preview


it is the physical contact with the fog, a caress-like sensation on the skin, that creates a feeling of immersion into a new spatial dimension.

Within interspaces participants are the inflexion point, the place where multiple dimensions converge. Architects and designers have a choice when addressing this particular role: either pursuing a controlled, predetermined effect, or defining an operative mode that enables open responses and challenges conventional notions of reality. It is in this second option that the true aesthetic potential of interspaces lies, for by questioning the idea of an objective ‘reality’ – upon which we continue to live in spite of scientific evidence – these interspaces can open up new ways of experiencing and understanding space. And it is precisely along those lines that they fulfill a specific role left open by previous languages: the transformation of the material world into a less rigid, more fluid environment.

In his book Hertzian Tales, Anthony Dunne introduced the concept of the ‘post-optimal object’. For Dunne, ‘design research should explore a new role for the electronic object, one that facilitates more poetic modes of habitation’. Considering that technical and semiotic functionality have already attained optimal levels of performance, Dunne argues that the challenge for designers of electronic objects now is to ‘provide new experiences for everyday life’. In that sense Nanofutures is a good example of how touch can radically change the way we relate to objects, opening up new possibilities for post-optimal designs.

Touch and Interspace

With the development of ubiquitous computing, architecture has become sensitive. Spaces are now capable of responding to our actions, often in the form of images incorporated into the built environment. A new spatial category, paradoxical, unstable, and neither totally material nor fully digital, is born. Let us call it interspace.

Through the articulation of brick-and-mortar and electrons, interspaces create a new perception of reality. The bodily implication intensifies the impression of reality these illusionary environments convey; freed from mediation devices such as the mouse and keyboard, we internalize those spaces as their transformation, sometimes even their generation, happens through our bodies. Just as in any other architectural experience, touch plays a determinant role here, for it is through touch that all experiences of space are shaped. Subsequently, if we want to create meaningful spatial experiences using digital media, experiences in which the images and the built space are bound together in such a way that we do not perceive them as separate elements but rather as parts of an organic whole, then the design ought to be touch-driven.

In practice this is not always the case. Here again we could oppose the ‘visual’ to the ‘tactile’, as many interspaces today are vision-driven. Within this conception the piece is considered a ‘living painting’ or ‘living movie’ and the hosting space is reduced to a mere support for the images – a screen. Once again we have lost a dimension: what was originally three-dimensional (a space) has become flat (a screen). Conversely, interspaces designed through a tactile approach feel more real, because through touch a physical connection with the body is created, enabling new forms of inhabitation instead of the contemplative type of experience described above. A great variety of forms can emerge from this perspective, for there are multiple possible tactile strategies. One example is the fog curtain used as a projection support by the Parisian collective La Fracture Numérique (a team composed of a video artist and an architect) in their 2009 piece Une épaisseur d’illusion. As the participant walks through it, images are projected upon it. Beyond its symbolic role in relation to the installation’s theme (illusion),

La Fracture Numérique, Une épaisseur d’illusion, 2009

© La Fracture Numérique

© Thierry Galimard

© Elias Sfaxi


Page 4: Volume #28 Preview


Breathing column prototype model.

Detail of a breathing column. Hylozoic Soil, ‘(in)posición dinámica’, Festival de México, Laboratorio Arte Alameda/Ars Electronica Mexico, Mexico City, 2010.

© PBAI

Page 5: Volume #28 Preview


computer we are suppressing noise and expending energy – we are making the binary mistake. That system takes a lot of energy. In traditional engineering, the most important principle is how to suppress noise. The next system, a more intelligent or complex system, will figure out how to utilize noise, like a biological system. We are working with biologists and we have developed this fundamental equation. We call this the Yuragi formula; yuragi means biological fluctuation. A [kinematic skeleton] is the traditional system. But if we have a very complicated robot, if a robot moves in a dynamic environment, we can’t develop a model for that environment. If we watch a biological system, for instance insects or humans, we see a model that can respond to a dynamic world and control a complicated body. We don’t know how many muscles we have, yet we learn to use our bodies very well. We are using noise [to learn and adapt], for instance Brownian noise (though the biological system employs many different kinds of noise).

CL When the computer suppresses noise and expends energy, it is expending a lot of energy.

HI So we need to modify our models to incorporate noise. This creates a kind of balance-seeking, where noise and the model control together. If the model fails, then noise takes over. The robot doesn’t need to know how many legs or sensors it has. It needs to start with random movements, both small and large. We can apply these same ideas to a more complicated robot. We have developed a robot with the same bone structure and muscle arrangement as a human, yet with such a complicated robot we still cannot solve the inverse kinematics equations that determine movement. Instead we control it through random movements. The robot can estimate the distance between its hand and a target. If the distance is long, the robot will begin to randomly move its arm around in a large-scale noise pattern, across all actuators. Eventually it will find the target and the noise will be suppressed. It does this without ever knowing its own bone structure. We can relate this to the human baby. A baby has many random movements; it looks like a noise equation. Yet it develops a series of behaviors that allow it to control its own body. Babies run these noise-based automatic behaviors. Employing this we can build a more human-like surrogate.
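To make the mechanism concrete, here is a minimal sketch of the noise-driven reaching Ishiguro describes, written for a hypothetical two-joint arm. The only feedback is the hand-to-target distance; random joint perturbations are scaled by that distance and kept only when they help, so exploration is wide far from the target and the noise dies away as the arm converges. The link lengths, the acceptance rule and the constants are illustrative assumptions, not the lab’s actual controller.

```python
import numpy as np

# Toy noise-driven reaching: the robot never solves inverse kinematics;
# it perturbs its joints at random and scales the noise by how far its
# hand still is from the target. (Arm and constants are hypothetical.)

rng = np.random.default_rng(0)

L1, L2 = 1.0, 1.0                  # link lengths of a toy two-joint arm
target = np.array([1.2, 0.9])      # where the hand should end up

def hand_position(q):
    """Forward kinematics -- the one thing the robot can observe."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

q = np.zeros(2)                    # joint angles, start at rest
for step in range(5000):
    dist = np.linalg.norm(hand_position(q) - target)
    if dist < 0.01:                # close enough: noise fully suppressed
        break
    # Noise amplitude grows with distance: large exploratory swings far
    # from the target, small corrective twitches near it.
    trial = q + rng.normal(scale=0.5 * dist, size=2)
    # Keep the random move only if it brought the hand closer.
    if np.linalg.norm(hand_position(trial) - target) < dist:
        q = trial

print("joints:", q, "hand:", hand_position(q))
```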

CL As architects we are curious whether responsiveness – or the feeling of presence – is something that can be integrated into the architecture and space?

HI [Robotics researchers] call it body propriety and it is quite important for everything. Appearance is also important. The human relationship is based on human appearance. Basically we want to see a beautiful woman, right? Appearance is very important for everything; that is why I have started the android project. Until now robot researchers only focused on how to move the robot and did not design its appearance. Every day you check your face, not your behavior. They are very different.

CL Do you think the robot can be emotive, resembling the human? Can it be expressive without having the physical character of the human being? For example, Aibo (Sony) or Asimo (Honda)?

HI Emotion, emoting, objective function, intelligence, or even consciousness is not the objective. I function, I’m subjective, [so] you believe I’m intelligent, right? Where is the function of consciousness or emotion? We believe by watching your behavior that you have consciousness or emotion. She has emotions and believes I have

Dressed in a black uniform, Ishiguro’s presentation is as matter-of-fact as his surroundings: robots will be everywhere in the future and he wants to make sure the future of communications is as human-centric as possible. Splitting his time between leading the Synergistic Intelligence Group at Osaka University and his position as Fellow at the Advanced Telecommunications Research Institute, his research interests are telepresence, non-linguistic communication, embodied intelligence and cognition.

CL If a robot has a human-like appearance, do people expect human-like intelligence?

HI If a robot has a human-like appearance, then yes, people expect human-like intelligence. The robot is a hybrid system, a mix of controlled and autonomous motion. For instance, eye and shoulder movements are autonomic. We are always moving in a kind of unconscious movement. That kind of movement is automatic,

and the conversation we’re having is dependent on these movements. At the same time, we can connect the voice to an operator on the internet, so we can have a natural conversation. I can recognize this android as my own body, and others recognize it as me. But others can adopt my body and learn to control it as well. A robot is a very good tool for understanding humans but they’re not easy to make. Human-like robots can be so complicated we cannot use the traditional understanding of robotics. In order to realize a surrogate, or a more human-like robot, we need other tools. For example, the Honda Asimo uses a very simple motor, essentially a rotary motor – but it’s not human. The human is actually a series of linear actuators. With this kind of actuator we can make a more complex, more human robot … In a traditional process, we would train or develop each part and then put them together into an integrated robot. In this project we train the entire system at the same time. If the robot has a very complicated body it is difficult to properly control [and coordinate] all its movements. Therefore the robot needs help from a caregiver, a mother – in this case, my student. My student is teaching the robot how to stand up. This way we can understand which actuators are important for standing up.

There are very big differences in robotic and human systems. For instance, the human brain only needs one watt of energy while a supercomputer requires 50,000 watts. Why do we have this big difference? The reason is that the human brain makes good use of noise. I can try to explain how the biological system uses noise. At a molecular level everything is a gradient, but for the

The Importance of Random Learning

Hiroshi Ishiguro

Interviewed by Cloud Lab

In April 2010 Cloud Lab visited the Asada Synergistic Intelligence Project, a part of the Japanese Science and Technology Agency’s ERATO project. In an anonymous meeting room surrounded by cubicles we met with Hiroshi Ishiguro to talk about the future of robotics, space and communications. Ishiguro, an innovator in robotics, is most famous for his Geminoid project, a robotic twin he constructed to mimic his every gesture and twitch.

Page 6: Volume #28 Preview


CL What is the limit case of the technology then? If you are no longer physically present in a space your robot can do anything. Is this kind of freedom a goal?

HI The goal is ultimately to understand humans. On the other hand, we can’t stop technological development in the near future. We have to seriously consider how we should use this technology as a society. My goal is still to really think through these issues of technology, which is still far behind the real understanding of the human body. Today we don’t see this kind of humanoid robot in a city, but we see many machines, for instance the vending machines found in the Japanese rail system. The vending machine talks, says hello. It’s impossible to stop the advancement of this kind of technology; that is human history. The robots we develop always find their source in humanity. We walk, so locomotion technologies are important. We manipulate things with

our hands, so manipulation is important. We are not replacing humans with machines, but we are learning about humans by making these machines.

approaching the human model in robotics, trying to establish the relationship between robots and human beings.

CL In terms of working method, the laboratory is a very controlled space – but what feedback have you been getting in terms of robot deployment in spaces that sponsor good interactions, for instance in malls, hospitals, large spaces, small spaces, etc.?

HI The real fundamentals come from the field, in interactive robots. We are getting a lot of feedback. In order to have this kind of system, we need sensors. We can’t just use the sensors from the robots; it is not enough to [compute and plan] out the necessary activities. We have developed our own sensor networks with camera/laser-scanners and our system is pretty robust. The importance of teleoperation comes through field testing. People ask the robot difficult questions and that is natural. Before that I developed some autonomous robots, but I gave up that [research direction] and focused on the teleoperation, which is good for collecting data. Using teleoperation the robot can gather data on how people behave and react. Then we can gather the information and make a truly autonomous robot.

CL Our behavior is very different depending on the space. We operate differently from space to space. Is that not something designed into the robot?

HI That is why we developed telecommunication. If we control the robot we control the situation. We can gather information and develop more autonomous robots. We are in a gradual development process for the developing robot… Evolutionary processes are important and should happen, but evolution is driven by humans. In my laboratory we are building a new robot; we are improving the robot. That is the evolution. Evolution is quite slow. The current evolution of humanity is done through technology. By creating new technologies we can evolve. We can evolve rapidly.

CL Parallel to evolution, the child robot you were showing us had to be physically trained to move by a trainer.

HI That is development, not evolution.

CL But it is employing a certain kind of machine learning, so that as it is trained over time it can perform these functions by itself. Do you not see that as a kind of evolution?

HI That is development. Evolution is different. For example, a robot would have to be designed through genes – that is evolution. But even for the developmental robot, you have to give it a kind of gene, its program code.

CL Have you experimented with genetic programming?

HI We are using genetic programming but only in the context of very simple creatures, such as insects. Our main purpose is to have a more human-like robot and we cannot simulate the whole process of human evolution.

CL What do you think are the limits of the Geminoids? Technology is always extending our capabilities, but is the Geminoid extending us or are we still extending it?

HI People typically expect the Geminoid to be able to manipulate something. Actually, the Geminoid is just for communication. Physically it is weak. The actuators themselves are not powerful enough to manipulate much. The Geminoid is a surrogate, whereas the manipulation of objects can be accomplished with another mechanism.

emotions, therefore we just believe that we have emotions and consciousness. Following from that, the robot can have emotion, because it can have eyes.

Can you have drama in robotics? I worked on the robot drama I am the Worker by Oriza Hirata. We used the robots as actresses and actors in scenarios with human actors. The robot actors don’t need to have a human-like mind; the director’s orders are very precise, like ‘move forty centimeters in four seconds’. But actually we can feel human emotions in the heart when watching this drama. I think that is the proper understanding of emotion, consciousness and even the heart.

CL In a sense, the body is the last frontier of innovation. Despite the context of many technologies (for instance, the rapid incorporation of cell phones for communications) extending the human body, the actual manipulation of the body itself remains taboo. There is a debate on the ethics of changing bodies. With whom do you identify in this debate: the engineer, the philosopher, the priest?

HI My main interest is the human mind and why emotional phenomena appear in human society. Robots reflect and explore that human society.

My next collaboration is with a philosopher, actually two post-docs from philosophy. I am trying to develop a model of social relationships. I believe we can model human dynamics – we cannot watch just one person in understanding emotion, right? We need to watch the whole society and develop models of that society – that is very important. Today we don’t have enough researchers

Is this Hiroshi Ishiguro in person or his humanoid double?

CB2 is a ‘soft’ robot that is actively trained by a human mother. CB2 has pneumatic actuators and over 200 sensors, including two cameras and microphones. It is autonomous, but largely a data-gathering mechanism for figuring out which actuators are important when engaged in complex behaviors (walking, getting up). As shown in the kinematic structure, CB2 has fifty-six actuators in total; the eyeballs and eyelids are driven by electrical motors, since quick movements are required for these parts, while the other body parts are driven by pneumatic actuators. The joints driven by the pneumatic actuators have mechanical flexibility in control thanks to the high compressibility of air. The pneumatic actuators mounted throughout the whole body enable CB2 to generate flexible whole-body movements. Although the mechanism is different from a human’s, it can generate humanlike behavior (although with a more limited range of movements than humans).

Page 7: Volume #28 Preview


in a state of transformation and inherently ambiguous in the best sense of the word.

Such distinctions help us to start thinking about how we might build objects, installations, and buildings that are active participants rather than just a medium through which information travels between people or machines.

VS This really comes down to the issue – famously brought up by Pask – of the difference between communication and conversation, right? When we talk about inter-human communication, the idea that there is always a flaw in the transference of meaning is almost taken for granted. However, when we talk about the transfer of information we begin to assume that there is some sort of perfection in information itself which is then communicable. With this in mind I was wondering if you could talk about your idea of ‘inherent ambiguity’.

RG To get right into it, you can have a mechanistic view of the world; this is very Newtonian. You can think about Descartes, who is rumored to have built an automaton doll of his daughter, believing we could build machines that are real lifelike representations of the human body. There is that view. And it is hardly surprising that when the guys who built the first computers and the first robots were putting these things together and saw the power of logic and binary, they were heavily influenced by that same sort of understanding, that same particular mechanistic view of the world, which I think is problematic.

I understand their thinking though, as they were in a difficult situation. They had to try and make machines with very unreliable technologies. These guys would really struggle to send simple command messages between components and between machines, so I salute them first and foremost. The issue was that there was a huge amount of noise, so they needed to devise all sorts of code protocols that would get their information between different places accurately and reliably, and that in itself was a great achievement. The result of that endeavor is the world of telecommunications, the internet and so on.

The issue, however, really isn’t that there was anything wrong with these achievements, but that a particular way of making machines reliable by eradicating noise, eradicating potential ambiguity, became engrained within the conceptual model of human-computer interaction. Human-to-human interaction models were not pursued because these didn’t work within a mechanistic model; they were too ambiguous. The engineering challenge of sending and receiving ones and zeroes strongly influenced the model for how people and machines communicate with each other. It was highly reductive and highly predictable. Interaction designers, software designers and the whole industry are responsible for treating and understanding people a little like machines.

I recently saw a promotional video for Microsoft’s new phone operating system in which they talk about

and human occupants. This has led me to make some distinctions between automatic, reactive and interactive modes of driving my architecture.

For something to interact it must participate, and I characterize participation as involving three interrelated processes. First, a participant needs to be able to propose or generate ‘stuff’ itself, to be able to offer something to a conversation with other participants. It then needs to observe the success of whatever it is offering to that conversation, so it needs some kind of goals to measure how well it’s doing. Finally it must be able to adapt or learn from its successes and failures, evolving over the course of the conversation.
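A minimal sketch of those three conditions as a toy agent loop, anticipating the smile example that follows: the agent proposes a gesture, observes whether it produced a smile, and adapts its preferences either way. The gesture set, the reward model and the update rule are hypothetical stand-ins, not Glynn’s implementation.

```python
import random

# Toy loop for the three conditions of participation: propose a gesture,
# observe whether it achieved the goal (a smile), adapt either way.

GESTURES = ["nod", "wave", "tilt", "spin", "pause"]
weights = {g: 1.0 for g in GESTURES}    # what the agent has learned so far

def observer_reaction(gesture):
    """Stand-in for observed success, e.g. a smile detector."""
    return random.random() < {"wave": 0.6, "nod": 0.4}.get(gesture, 0.1)

for turn in range(200):
    # 1. Propose: offer something to the conversation.
    gesture = random.choices(GESTURES, weights=[weights[g] for g in GESTURES])[0]
    # 2. Observe: did the offering succeed against the goal?
    smiled = observer_reaction(gesture)
    # 3. Adapt: learn from success *and* failure over the exchange.
    weights[gesture] *= 1.2 if smiled else 0.9

print(max(weights, key=weights.get))    # the gesture this observer rewards
```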

Let us say my goal is to make you smile: I look at your facial expressions while we are talking and I adapt my verbal and nonverbal actions as I get to know you. When I get you to smile I learn about you, but I also learn as much when you don’t smile. It is a process in which multiple participants act and react in an ongoing exchange. It is unpredictable and negotiated. It is always

social networking with global reach. Where once the masses would buy software from corporations and consume it, there is now enormous bottom-up activity allowing potentially anyone to compete in the market, to harness the creative potential of thousands of developers and build stable, open systems that challenge corporate models of production. Since our lives are very much determined by the protocols given to us, the freedom and ability to challenge existing models is terribly important.

With all of that in mind, the question that I came to was: if software and architecture are predominantly interfaces between interactions rather than interactive themselves, what makes something interactive? Certainly I think I am interactive. I think we’re interacting here, talking to each other. I also interact with my dog, which, even if it’s less verbal and more gestural, is a rich interaction. The natural world is saturated in interactions, so there’s plenty of inspiration there for us to reflect on.

Ultimately I’m asking: can we build machines that don’t simply execute a set of commands predictably but instead enter into conversations with people? And if so, can these conversations create aesthetically pleasing and useful applications in the built environment?

The conversations I’ve been looking at are gestural rather than verbal, defined by occupation, orientation and expression. In my work it’s not metaphorically a dance, it actually is a dance between robotic installations

VS You are both an interaction designer and an architectural designer; tell me how these relate in your mind. How does one design ‘interaction’, particularly in an architectural context?

RG Yes, before I moved to architectural design I was an interaction designer first, almost a decade ago now. In the years I worked in the interaction design industry there was never any conversation on what constitutes ‘interaction’. Thinking about it now, it was extraordinary that the question never cropped up in all those years, not even during my education in so-called ‘human computer interaction design’.

Just to start, the word ‘interaction’ in the industry was a buzzword for selling to clients. In fact, in early web publishing, levels of interactivity were crudely measured by how many different types of media you were using; it was a question of whether you had sound, video, images and hyperlinks. So it was really almost a perverse kind of understanding of interaction, based on the number of media types and mechanisms or buttons people could press.

When I went on to study architecture, what was immediately important were people’s interactions with each other, primarily, and that architecture acted as the interface. So architecture was not itself interactive, but it was a space for interaction to take place. This made me start to rethink software as the interface between interactions rather than being actually interactive itself.

A particularly interesting time for thinking about social interactions and networks was the 1960s. This was a time when architecture became a great deal more adaptive, responsive, mobile, democratic and open source, as it were. There was a counterculture to the top-down deterministic model of architectural progress needing to be overseen by governments or town planners. It was actually about people taking responsibility for their own space, customizing, negotiating and conversing with larger networks.

So that is an interesting model to compare with the story of software design, which for most of the past century had been built on centralized models of development. When the internet arrived, hackers harnessed the power of distributed independent developers and created cultures of open source, peer-to-peer and

Meeting in the Middle

Ruairi Glynn

Interviewed by Vincent Schipper

Interactivity seems like a banal, pretty straightforward conception. The multifaceted nature of what interactivity means and offers to architects, designers, and society as a whole needs to be reassessed. Old paradigms of interactivity are rooted in a mechanistic conception while new emerging ideas present alternatives. Instead of interactive versus non-interactive, we should think of the relation as a gradient. The text below is the outcome of a conversation. It was first conceived of as an article; however, Ruairi Glynn pointed out that this might be a little contradictory to the theme. Interactivity, he says, is all about conversations. This piece is an exercise in meeting in the middle.

Performative Ecologies: Dancers. Exhibition – ‘VIDA 11.0, Concurso Internacional de Arte y Vida’, Madrid Art Fair 2009. A family of performing creatures swing and illuminate patterns with their tails to compete for visitors’ attention. As they perform, they observe and learn from the response of people by assessing their attention levels using facial recognition. A gestural conversation develops as both robot participant and human participant adapt and learn about each other’s gestures.

Page 8: Volume #28 Preview


trivial stuff. So the models many designers are using today are inherited from early on in computer science, going back to the days of punch cards.

The artificial intelligence example is an important precedent because there was this sudden paradigm shift in the way roboticists conceived of designing computational systems intended to engage with the built environment. Today we have two types of AI. We have old AI, which is a top-down, logic-driven, formulaic, mechanistic model, and we have this bottom-up behavioral model, as it is sometimes called. Actually both have been found to be useful, so they have pretty much met in the middle at the moment. We will have something very similar happen to the way we build computational systems for architecture and more widely.

Robotics has always been about software and hardware meeting the real world and seeing the results – where ones-and-zeroes logic meets the messy, chaotic world we inhabit. The fact that the built environment is becoming saturated with computation means we need to seriously think about how we conceive the models we use to drive these systems. Do we just follow the predictive software model, or do we look for other opportunities to harness the world’s glorious ambiguity?

VS One of the many important issues you brought up was that of overcoding for a solution. If we take the example of Gordon Pask’s self-organizing chemical computers, or that of Rodney Brooks for that matter, there seems to have been a move toward the idea that a function need not necessarily be explicitly programmed (we need not necessarily program something for it to carry out a specific action). This seems to have been marginalized in the 1990s, perhaps from a fear that a computer may do something we may not want it to do, with the idea that ‘here is what it purely needs to do and it cannot do anything apart from what we are asking it to do’.

RG Right. In this world there are things that need to do what they are supposed to do. For example, you want a dialysis machine or a pacemaker to work the way they are expected to work. But then there are plenty of things that don’t really need to be entirely predictable, or at least we can give them the opportunity to perhaps surprise us. To, God forbid, even be cleverer than us. So I think the designer’s role in all this is to be able to make judgments about what sorts of things are probably better off working predictably and what things might be improved by giving them some capacity to experiment a little and learn and adapt and so on. There are obvious aesthetic opportunities but equally there are opportunities to explore how our environment might conserve energy and resources generally.

around it – you suddenly start to design systems that respond more directly to people’s needs over the entire lifespan of the architecture.

VS So do you think for us to be able to properly interact with a building it becomes necessary for us to be able to perceive some sort of intelligence in it?

RG I am not sure we always need to be aware, or at least constantly aware, that something is interacting with us. Do we need to know a building is adapting and optimizing lighting or air conditioning? It’s worth stating that intelligence is an observed attribute and not something that can be mathematically or otherwise proven. We can characterize things with levels of intelligence that are computationally very simple. I’ll give you a really lovely example. Roboticist and polymath William Grey Walter built some very simple robots in the 1940s that would run around and follow light. They all had these cute behaviors and were hugely popular not just with scientists but also the public. All they did was follow light, but the environment in which they were placed was complex enough to allow them to appear to have complex behavior. And so observers attributed levels of intelligence, saying things like ‘oh God, it’s alive’ or ‘it appears to be shy’.
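For a sense of how little machinery this takes, here is a toy phototaxis loop in the spirit of Grey Walter’s tortoises: the only rule is ‘steer toward the brighter of two readings’, yet the resulting path through a room can look curious or even shy to an observer. The sensor model and all constants are illustrative assumptions, not Walter’s circuitry.

```python
import math

# Toy phototaxis: compare two simulated light readings ahead-left and
# ahead-right, turn toward the brighter one, roll forward a little.

light = (5.0, 3.0)                        # position of the lamp
x, y, heading = 0.0, 0.0, 0.0             # robot state

def brightness(px, py):
    """Sensor model: intensity falls off with squared distance."""
    d2 = (px - light[0]) ** 2 + (py - light[1]) ** 2
    return 1.0 / (1.0 + d2)

for step in range(300):
    left = brightness(x + math.cos(heading + 0.3), y + math.sin(heading + 0.3))
    right = brightness(x + math.cos(heading - 0.3), y + math.sin(heading - 0.3))
    heading += 0.2 if left > right else -0.2   # steer toward the light
    x += 0.05 * math.cos(heading)
    y += 0.05 * math.sin(heading)

print(f"ended near ({x:.2f}, {y:.2f}); lamp at {light}")
```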

Thus very simple reactive things responding to the complexity of the world can give us extraordinarily different and engaging behaviors. In time the issue will be that those behaviors become predictable and lose novelty. But they can be wonderful, and there are plenty of examples of reactive systems being both delightful in an aesthetic sense and useful in a functional sense. There are also plenty of examples of things that are simple automata that are really delightful and very functional.

Yet there is this interesting question: does giving things the ability to participate, propose, adapt and learn, in a sense giving them more of a capacity to surprise us, actually give us a sense that they are more intelligent? I would imagine it probably does. If we observe something that really does respond and learn about us, we build a closer relationship to it.

There are plenty of opportunities for these things to be embedded, ubiquitous and highly interactive. So yes, some may be invisible, but others might be visible architectural features too. It is a multi-layered, multi-scale ecology of systems, some of which we engage very directly and others that float in the ether, so to speak. It is about designing that ecology, and a large part of that ecology will be developed by other industries. So one of the pressing questions on the minds of architects and designers is: where and at what level of that ecology do we start to have a role? There’s no reason why architects can’t be the ones making the hardware and software as well as leading the debate. If we don’t, someone else will, and architects will just have to accept what they get given. Our lives are very much determined by the protocols given to us, as well as the critical freedom and ability to challenge existing models.

I choose to build machines that deal with aesthetic goals. I am interested in how I can attract people to these ideas. So by making machines whose goal is to learn how to attract and keep an observer’s attention I am hoping I can attract people’s attention to the wider implications.

VS The aspiration to attract more people to the ideas seems to imply that there is not much attention for these concepts at present. To what extent is that true?

RG When I meet new students for the first time they are excited about these technologies, but their ways of thinking are heavily informed by the software they have grown up with. So their way of conceiving architectural interaction is based heavily upon the software models we’ve discussed. Frankly I need to totally break them down and build them from the ground back up by asking simple questions first, such as ‘is a light switch interactive?’ And if a light switch isn’t, is an installation in which a particular light pattern appears in the room when you stand on a particular floor panel interactive?

We would never talk of turning a light on and off, a sensor-driven automatic door opening, or a thermostat as being interactive, yet they are within the same kind of conceptual model as much of the work that gets called interactive art or architecture. There seems to be a real laziness in the use of terminology and a lack of real conceptual interrogation within the architectural community, as well as within the arts and design community as a whole. But by asking these fundamental questions, by making some distinctions, you open up the very fertile territory between reactivity and interactivity.

I don’t think it is an issue of one being better or worse than the other. It is more like there’s a gradient of opportunities between the two, and we (my students and I) try to operate in that gradient between reactive and interactive architectural design. Doing so requires students to learn things like programming algorithms, adaptive computation, machine learning and so on, which is immensely empowering because they no longer have to rely on software given to them. They can build the software and hardware themselves in order to challenge the protocols the industry currently offers.

There’s a lot of discussion about computational optimization in architectural design. I just organized FABRICATE with my colleague Bob Sheil, a conference all about the making of digital architecture. ‘Optimization’ was probably the most frequently used word over the two days. It was all about the optimization of material, form and so on, which is all very interesting, but another discussion is needed: the optimization of behavior, of the systems that will drive our built environment.

If you talk to anyone who has ever lived or worked in a building with a central server running the entire building, you always find anecdotes about it having some ridiculous nuances, such as rooms where lights come on automatically when it gets dark, which isn’t very useful when you are trying to do a PowerPoint presentation and the lights ought to be off. Often the systems are so locked down there is little you can do to change them. This is where the arrogance of the designer directly impacts inhabitants in a frustrating and even dictatorial manner. If as designers we could allow some loss of control, allowing novel adaptive systems to operate, you could conceivably optimize how buildings respond to the activity within and around them. As the context of a building changes – whether it is a change in the number of people using it, in climate, and certainly in the buildings

experiential design; this is a recognition that people all experience things differently. However, there is still a widespread belief that you can design something everyone will experience the same way, which I think is a bit daft.

Luckily there is hope from another discipline. Between the 1950s and 1970s, early artificial intelligence had a really mechanistic view of how the human mind functioned. But there is this wonderful counterpoint in the 1980s from a guy named Rodney Brooks, who actually built robots that taught themselves to walk around. The idea was: rather than command a robot on how to walk, why not just put it on the ground, give it a bunch of legs and let it kick its legs around until it learned to get itself to a location or goal? They found out that by doing this the robots would work out how to get from A to B with a lot less computation. The systems were not only simpler, but better at walking, more robust, and cheaper to build. All because rather than the scientists believing they knew best and therefore should do all the thinking for the machine, they let the machine do some of the work itself, just by giving it the ability to generate its own behaviors, adapt and learn, etc. These are great examples of built systems that have the capacity to participate with the world around them. Of course Gordon Pask built participatory machines in the 1950s and 1960s, such as the Colloquy of Mobiles, but for the most part the model for developing machine behavior has been mechanistic rather than social.

In the history of software design there was a transition from typing in command-line instructions towards WIMP graphical user interfaces, but essentially the underlying master-slave model is identical and we have pretty much kept with that right up until today. The only real difference being the resolution, the number of colors on the screen, things bouncing, etc. – all rather

Performative Ecologies: Dancers in linear arrangement. Exhibition – ‘Emergencia’, Sao Paulo 2008. Four Dancers suspended in a darkened room await visitors, searching the room for people to perform to. While they wait, occasionally they will turn to each other and perform gestures, discussing the dances they have evolved over the course of the exhibition.

Page 9: Volume #28 Preview


Photo: Reuters/Dylan Martinez

Anti-government protesters during demonstrations in Tahrir Square, Cairo, February 2011.