Intelligent Artificiality
Intermediality and Humanization

Provide Freedom. That’s the principal objective of the NS-5. […] With the NS-5 at your side 24/7 you’ll have more free time for hobbies, recreation, friends, and most important family. […] The NS-5 is not another household application; it is a member of the family. […] Its ability to maneuver any type of terrain (from sand to concrete), lift weights of up to 800 pounds, and even bake a tuna casserole, put its capabilities far beyond those of current robot technology. […] With a human-to-robot ratio of 5-to-1 within five years, the NS-5 Universal Retention Network – ‘ERNIE’ for short – will become the world’s most comprehensive record of human history. That’s because every NS-5 transmits each piece of audio and video synthesis in encounters to regional transmitters, which then satellite the data to ERNIE. For each NS-5 owner, this tech-speak means: you’ll never forget your anniversary again. […] The NS-5 is Three Laws Safe: the absolute fail-safe that allows the NS-5 to safely coexist and interact with humans. […] What can it do? I think the bigger question is what will it not be able to do. If I had to list all the things it can do, we would be here all night!

USR Robotics1

What will you do with yours?
Laws are made to be broken
One man saw it coming

Taglines for I, Robot2

The robot spread his strong hands in a deprecatory gesture, “I accept nothing on authority. A hypothesis must be backed by reason, or else it is worthless – and it goes against all the dictates of logic to suppose that you made me.”

Isaac Asimov, I, Robot3


1. www.NS-5.com
2. I, Robot (2004), directed by Alex Proyas
3. “Reason”, pp. 61-62

This anticipatory analysis of the robot in contemporary culture poses the question concerning technology as a primarily cultural and ethical question. In a reading of Carlo Collodi’s The Adventures of Pinocchio (1883), Isaac Asimov’s I, Robot (1950), Ridley Scott’s Blade Runner (1982), Chris Cunningham’s All is Full of Love (1999), and Alex Proyas’ film I, Robot (2004), I will frame the robot as an intermedial key figure that signifies the “divorce” of technics and culture as theorized in the works of Bernard Stiegler, most notably Technics and Time: The Fault of Epimetheus (1994). This reading of the robot as a cultural and technical object is based on Heidegger’s and Stiegler’s revision of the Aristotelian division between natural and technical beings. In The Question Concerning Technology (1962), Heidegger traces our conception of instrumentality back to Aristotle’s four causes, and calls into question the primacy that is given to the efficient cause – the cause that brings about the effect, in the case of the technical object usually understood as the manufacturer – throughout the history of philosophy. In Technics and Time, Stiegler takes this argument a step further, and theorizes the technical object as having a distinct dynamics and evolution of its own. This analysis aims to raise the question of how, in an age of constant innovation, the future is being transmitted to us by the technical object, and through the medium.


1. What Will You Do With Yours?

In the year 2035, we will have finally done it. We will have built the machine that one cannot not want. As the USR Robotics website tells us, this fully automated household assistant will provide us with all that we long for. It does for us whatever we do not like to do ourselves, in such a way that it can be fully integrated into our daily lives. Being infallible itself, it corrects any errors that we might make. This product, which we will welcome as a member of our family, will be better than any other technology mankind has produced. In the meantime, we have about thirty years left, which we will spend in eager anticipation of this question: what would we like to do with ours?

Unfortunately, our dreams of a utopian society in which slavery is back with a vengeance – only without its ethical objections – are shattered already in the trailer to I, Robot, the Hollywood blockbuster starring Will Smith as a technophobic cop who investigates a crime that may or may not have been perpetrated by a robot. “We designed them to be trusted with our homes, with our way of life, with our world. But did we design them to be trusted?”

What a strange question to ask, since the issue of trust does not usually come up in relation to technology. Let us examine the NS-5 advertisement more closely, then, and see where it could go wrong. The first line states that the NS-5 is the personification of Freedom. It is its causa finalis, as Aristotle would say if he were confronted with this robot – and perhaps he did say it about one of his slaves. Freedom here apparently means free time, and there are some suggestions as to what we could use this time for. Since I dropped the name of a

This is the first epigraph to this article

philosopher, we might as well pose a philosophical question: If the NS-5, who is welcomed as a family member, provides us with more free time to spend with our family – does this include the NS-5 itself?

This question flags the first concept that we could use for our anticipatory analysis. Humanization, to make or become human, is the process that leads up to the recognition of another’s shared humanity, and their inclusion in one’s moral scope. Trust is one of those human qualities that can only exist within humanization. The next sentence in the advertisement, however, places the NS-5 back into the realm of machinery. The NS-5, state-of-the-art and high-tech, is presented as the climax of the process that started with the industrial revolution, embodying all that is good in the concept of technology. Fortunately, there is another meaning of humanization that enables us not to let the fact that it is not human stand in our way. In the context of the production of electronic music, for example, “the humanizer” is a piece of software that makes computer-based beats sound more human. Essentially, it makes the music sound less sterile and artificial by making random mistakes, like human musicians, who never play exactly on the beat. In the production of special effects in cinema, humanization takes place for the same reason – to make a perfectly created special effect less perfect, also by adding a layer of mistakes. We could wonder, then, whether humanization in the first sense has anything to do with this latter meaning. To complicate matters even more, there is also a third sense in which humanization is used: to shape an object to make it look human. A good puppet maker, for example, humanizes his or her puppets.
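To make this second sense concrete: a minimal sketch of what such a humanizer does – assuming nothing about any particular plugin, and using invented parameter names – is to nudge every perfectly quantized event off the grid by a small random amount.

```python
import random

def humanize(events, timing_jitter_ms=12.0, velocity_jitter=8, seed=None):
    """Add small random deviations to perfectly quantized (time_ms, velocity)
    events, so that the result wobbles the way a human performance would."""
    rng = random.Random(seed)
    humanized = []
    for time_ms, velocity in events:
        t = time_ms + rng.gauss(0.0, timing_jitter_ms)        # never exactly on the beat
        v = velocity + rng.randint(-velocity_jitter, velocity_jitter)
        humanized.append((max(0.0, t), max(1, min(127, v))))  # clamp to sensible MIDI values
    return humanized

# A perfectly quantized four-on-the-floor bar at 120 BPM (one beat every 500 ms):
grid = [(i * 500.0, 100) for i in range(4)]
print(humanize(grid, seed=42))
```

The point of the sketch is simply that “becoming more human” is implemented here, quite literally, as the systematic addition of error.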

I will use the word humanization only to designate what it means at face value: the act of making more human. If the humanizer software is an interpretation of this concept, then it claims that one becomes human by making mistakes. Humanization in the first sense – the recognition of the other’s common humanity – is personification, understood as the incarnation of an abstract idea of humanity. Humanization in the third sense, since it applies only to objects not human, literally means anthropomorphization. However, I will stick to the common use of this latter concept, which means to address a nonhuman object as if it were human. Hackers and programmers, for example, tend to anthropomorphize hardware and software, such as when they say that “the protocol handler got confused”, “the computer’s brain cannot handle this”, or “the program is trying to locate a file.” Personification and anthropomorphization seem to be special cases of humanization. The first can only be applied to concepts, the latter only to objects. In this case, the NS-5 is the personification of freedom, anthropomorphized as a family member, and humanized so that it can perform virtually all human tasks.

This raises the question of the relationship between humans and robots, to which we will return later. Since USR is a (nonexistent – did I mention that?) commercial company, it is satisfied with the answer “5-to-1 within five years.” The robots will record all sensory input in order to transmit it to a database. In other words, everything that we do and say will be captured by a walking and talking recording device that is with us 24/7. This has two major implications. First of all, we will no longer need to remember anything – the NS-5 will do it for us.

Secondly, we will never be able to forget anything. I need only mention Foucault’s Panopticon to turn this seeming comfort into a doom scenario. As Nietzsche wrote in Wisdom For The Day After (1888), “the advantage of a bad memory is that one can enjoy the same thing repeatedly for the first time.” With the NS-5, humanity will have lost its memory, remediated as a standing-reserve in the Heideggerian sense. For the first time in history, we will actually be able to verify Michel de Montaigne’s claim that “my life has been filled with terrible misfortune, most of which never happened.” When we add to this the films The Truman Show (1998) and EdTV (1999), it becomes clear that the delegation of the responsibility of remembering to ERNIE will have implications far beyond our never forgetting an anniversary again. “Transmission is forgetting.” (Stiegler, 1998: 241)

Finally, we are reassured that the NS-5 is absolutely safe. The Three Laws of Robotics, which we will take a look at later, ensure that it can be integrated into society without risk. After a while, we will have gotten so used to the NS-5 that we will not even notice it anymore. The NS-5 will have become a “transparent interface, […] one that erases itself, so that the user would no longer be aware of confronting a medium, but instead would stand in an immediate relationship to the contents of the medium.” (Bolter and Grusin, 1996: 9) Humanization and interfaciality are integrally connected, especially if we take this latter concept at face value, and not, as Bolter and Grusin do, solely within a media context. Interfaciality in this context, then, is what happens when the faces face each other. Only on the basis of this primordial and transductive relationship between the face of the human and the face of the machine can we really ask the question: did we design them to be trusted?

For why would the issue of trust come up at all in relation to what essentially remains a sophisticated piece of equipment? There are people who do not want to own a microwave, because they fear that one day the radiation will be proven to cause cancer, or people who prefer to take the stairs instead of the elevator, because they might get stuck in it. However, is it not rather the designers of microwaves and elevators that are the object of our trust? Trust only exists in relation to other humans, as Will Smith explains to a robot that does not understand why one of his colleagues winked at him: “It’s a sign of trust. You wouldn’t understand, it’s a human thing.”

That we are always already in the face of technology becomes clear in the first scene of I, Robot. Will Smith, gun in hand, wakes to the sound of his alarm clock. He switches on his remote-controlled JVC stereo, and Stevie Wonder’s hit song Superstition bursts out of its speakers. Then he eats his breakfast, does a quick workout, and takes a shower. After admiring his Converse All Stars Vintage 2004 sneakers, he unwraps them and puts them on. Finally, brushing aside an annoying FedEx robot with “yet another on-time delivery”, he walks out the door. I, Robot swarms both with blatantly obvious

For a compelling discussion of faciality as interfaciality within the sphere of technology, see Dominic Pettman’s chapter “Facing the Interface: Technology and Intersubjectivity in Contemporary Cyberculture, or The Clarity of Cloudy Vision” in his book Love and Other Technologies: Old Questions for New Media (and New Questions for Old Media) [forthcoming]

product placements – JVC, All Stars, FedEx, Audi, and so on – and robots with a slave mentality. Although these robots go around asking questions such as “can I be of any assistance to you?” and “is there anything else that I could do to make this a more pleasant day for you?” – questions that anyone who has ever visited a bank or a fast food restaurant in the United States should be familiar with – Will Smith makes no effort to conceal that he prefers the kind of technology that does not talk back.

His qualified technophobia, however, is not as clear-cut as it seems. For example, while he denies a robot its humanity by excluding it from the kind of sign language humans use to show that they trust someone, he cannot at the same time help but address this robot as if it were human. When the issue of trust comes up in our relation to technology, it becomes impossible to address it as something technological. Perhaps this is because, as Heidegger puts it in The Question Concerning Technology, “the essence of technology is nothing technological.” This is already foreshadowed in the way in which Will Smith brushes aside the FedEx robot in the opening scene. Without looking it in the eye, he covers its face with his hand. Later on, when he is standing in front of a thousand dangerous-looking but “Three Laws Safe” NS-5s, he gives meaning to this earlier action: “Why do you give them faces? Trying to friendly them all up, try to make them look all human.” This fear of the human face of technology makes clear that it is only in the anthropomorphization of machinery that the issue of trust comes up. Strictly speaking, then, we trust it to remain exactly what it is: a piece of equipment. Machines are not allowed to transform into something else, something that would raise the question of whether it is to be trusted or not. In the human face of technology, we would have to start considering a frightening question, the one that Heidegger articulates in the same essay: “If technology were no mere means, how would it stand with the will to master it?” We will consider only one particular instance in which technology resists being put in the category of means. In this case we trust technology not to become an object of trust, which is to say, a subject. Let us put our fears aside for a moment, and try to anticipate the possibility that we might one day encounter the Cartesian Cutie whom I introduced in the third epigraph. This essay is an attempt to analyze I, Robot in order to find out who or what is framed by its “I”.

If the advertisement were all we had to go on, we could only conclude that the NS-5, bluntly put, will hijack the concept of humanity. It will store it, change it, protect it, and make it its own. It will do this as a personified technical object that is anthropomorphized, and thereby humanized to such an extent that it will become increasingly hard to deny it its humanity.

See Stanley Kubrick’s 2001: A Space Odyssey, Stephen King’s Maximum Overdrive, Dick Maas’ De Lift, etcetera.


2. Pinocchio’s Body

The importance of a fresh start within a long tradition should not be underestimated. Heidegger’s raising of the question concerning technology, and Stiegler’s facing of that which this question interrogates, were anticipated by a wonderful fairytale.

C’era una volta…
-Un re! – diranno subito i miei piccoli lettori.
No, ragazzi, avete sbagliato. C’era una volta un pezzo di legno.
– Carlo Collodi, Le Avventure di Pinocchio, 1883

This is how Carlo Collodi begins his famous tale of Pinocchio, the puppet who wanted to become a boy. “Once upon a time there was,” he starts out in fairytale fashion, after which his little listeners instantly exclaim – “A king!” But instead of a king, “once upon a time there was a piece of wood.”

This is the piece of wood from which the boy Pinocchio eventually emerges. Yet before Pinocchio was a boy, he was a puppet. And before he was a puppet, he was a log that could be used to “make cold rooms cozy and warm.” And before this log was found, it found itself – “I do not know how this really happened, yet the fact remains that one fine day this piece of wood found itself in the shop of an old carpenter.”

Pinocchio was already alive from the start, and a technical object, “apprehended as the horizon of all possibility to come and of all possibility of a future.” (Stiegler, 1994: ix) In the context of a narrative, this is a different way of saying that the protagonist, who is not human, already acts as if he were human. How this is possible we do not know, which will not come as a surprise to Stiegler, who claims that

at its very origin and up until now, philosophy has repressed technics as an object of thought. Technics is the unthought. (ix)

The essential matter that we would need to build a robot is forgotten in the gap between what it is, and what it ought to be. This emerges, for example, in current debates about artificial intelligence:

In the hiatus between wanting to produce artificial intelligence and wanting to understand natural intelligence, the constitutive role of technics unaccountably drops out of sight, even though both oppositions (produce/understand, artificial/natural) presuppose it. (Johnston, “A Future for Autonomous Agents”, p. 5)

Since Collodi did not forget the constitutive role of technics, let us return to Pinocchio, this time reading it as a manual for building robots.

From the start, the piece of wood from which Pinocchio is later made is already alive, but not in the sense that the tree from which it came is alive. Its “natural” origin remains hidden not only from us, but also from the narrator – and from Master Cherry, who was “filled with joy” as soon as he saw it lying in his workshop. “This has come in the nick of time,” he says. “I shall use it to make the leg of a table.” (Chapter 1)

It has come just in time to finish something that could not be finished for lack of material. In these kinds of situations there is no time to reflect on the technical object – a technique that is often used in advertisements, but also, for example, in the debate about the technique of preemptive strikes that led up to the war on Iraq. Whatever may strike us down in the near future – whether it is poverty, bad breath, or weapons of mass destruction – replaces the technical object as an object of anticipation. The technical object itself is rarely anticipated.

In Proyas’ I, Robot, the balance of action scenes and quiet moments of exposition has this kind of effect on the viewer. Every time the film takes us to the brink of revealing the answer to the mystery that is staring us right in the face, we are suddenly distracted by an unforeseen interjection of immediate danger. Only when the protagonists have gotten to safety are we willing to think again about the questions that are being raised, now with new questions added – and possibly some product placements. The robots’ “positronic brain,” for example, is a machina ex machina comparable to Collodi’s piece of wood. No explanation is offered as to where it came from, and how it happened to find itself at the core of the society portrayed. Instead, as soon as questions come up, it is transformed from a technical object into a technique, a strategy, or technology in action.

In the light of danger, we look to humanization to counter dehumanization. We look at it, specifically, as a technical object, a strategy. Sting did this during the Cold War when he sang that “the Russians love their children too.” (“Russians”) At present, efforts are being made to humanize terrorists (who, according to Bush, should be smoked out of their caves), and negotiators in hostage situations have always used the strategy of humanizing the hostages for the hostage taker, e.g. by naming them or calling attention to the fact that they are scared and hungry. In films that portray such situations, the moment when the pizzas are delivered is usually the moment when the hostages start developing the Stockholm Syndrome, and the hostage taker becomes kinder to them: humanization starts taking place as soon as people eat together. Or, if you are into Heideggerian phrasing, “it is proper to every gathering that the gatherers assemble to coordinate their efforts to the sheltering; only when they have gathered together with that end in view do they begin to gather.” (Heidegger, “Logos”)

Bonding to one’s captor or abuser is perhaps the oldest survival strategy among humans. In the case of the hostage situation after which this “syndrome” is named, a bank robbery in Stockholm, Sweden in 1973, the hostages, who were bound with dynamite and generally mistreated by their captors, came to regard them as protecting them from the police, and started a “defense fund” as soon as they were released. One of the women even became engaged to one of her former captors. Negotiators in hostage situations encourage this kind of

bonding, since it increases the chances of survival of the hostages. Michelle Maiese describes humanization more generally as a response to violent escalation, defining it as “a matter of recognizing the common humanity of one’s opponents and including them in one’s moral scope.” (Maiese, 2003)

I, Robot seems to claim that if humanity has been taken hostage by technology – a notion that I do not necessarily subscribe to myself, but that is prevalent in dominant discourse – it should try to develop a Stockholm Syndrome as soon as possible. This becomes apparent in the techniques the protagonists choose to chase the technological object. These techniques are already humanized: the specially designed Audi car, for example, gets dirty and breaks down, while the “bad” techniques continue to shine and remain in perfect shape. In I, Robot, humanization is applied selectively. None of the robots strides with as much purpose and dynamic economy as Sonny, who is, like Pinocchio, already human from the start. Yet, as in the story of Pinocchio, this humanization only emerges in the proper moral context.

Without wondering where this new object came from, and how it appeared in his workshop, Cherry has already made it an integral part of an existing machinic assemblage, thus not perceiving it as a technical object in itself, but rather as a technique to finish the table, sell it, receive money, and pay his rent. Yet this particular object refuses to be a technique: as he lifts his arm to cut it, the wood beseeches him not to hit it too hard. Cherry, not even considering that his anticipated table leg may claim a use of its own, throws the wood against the wall to kill whatever is hiding inside it. When his violence fails to silence the mysterious voice, his joy turns into fear. Fortunately, at this moment his friend Gepetto the puppet maker knocks on the door.

The soon-to-be Pinocchio instantly caused a fight between the two friends. “It’s the fault of this piece of wood,” Cherry defends himself (Chapter 2), but Gepetto does not believe him. Not only was the piece of wood alive – it was always already a rebel.

As a token of reconciliation, Cherry offers him the wood, which Gepetto needs to fulfill his dream of traveling around the world as a puppeteer. Yet he did not expect that already before he made this puppet, it could talk, feel pain, and cause conflict. Its forgotten leap from nature into technics made it a cultural object from the start – an object that could talk, if only we listen to it carefully. When it refused to be used as a table leg, it set its own limitations on how it could be used. Thus it emerged as something new – something that looked like an old technique, but could not be used as such. And so the frightened carpenter hands it over to the old puppet maker. A puppet maker is something quite different from a carpenter: while the latter manufactures objects for us to use, the former does not manufacture objects, but humanizes them.

Gepetto humanizes Pinocchio first of all by naming the piece of wood before he starts carving it. While the marionette is made out of a piece of wood, the material for his fortune came from its name. This double origin is reiterated in the way Gepetto chooses Pinocchio’s name:

I think I’ll call him Pinocchio. This name will make his fortune. I knew a whole family of Pinocchi once – Pinocchio the father, Pinocchia the mother, and Pinocchi the children – and they were all lucky. The richest of them begged for his living. (Chapter 3)

The name contains a task for Pinocchio: to find out that he can make his fortune not by obtaining material means, but by begging to come to life. In Disney’s version of the tale, Pinocchio prays to become a real little boy when he is already a wooden puppet that talks and walks by itself. The fact that in this version Pinocchio’s task emerges only after he has been named and carved is quite essential. It points to an essential difference between the media of text and film. When an object is visualized, it is already part of a structure of representation that makes it into an object.

With a name, Pinocchio also appropriates a nonlived past. As Stiegler remarks in the film The Ister (Barison and Ross, 2004), “my dog is human, because it has a name.” Only now does Gepetto start to carve the piece of wood into a human form, starting with the face. As soon as he gives the puppet eyes, it starts staring at him, which offends Gepetto. Then he carves the nose, “which began to stretch as soon as it was finished.” (Chapter 3) When he makes a mouth, it instantly begins to laugh and make fun of him. When he tells the puppet to stop laughing, it does so, only to stick out its tongue. When he finishes the fingers, the puppet uses them to pull off Gepetto’s wig: “You are not yet finished, and you start out by being impudent to your poor old father. Very bad, my son, very bad!” (Chapter 3) When he has finished the legs, they start kicking him until the puppet manages to escape. When Gepetto tries to seize Pinocchio by the ears, he discovers that he forgot to make them. Finally, when people start talking about the poor old man and his tyrant boy, the police end the matter by setting Pinocchio free and dragging the puppet maker to prison.

Even before he is finished, Pinocchio already starts staring back at us, which could make us wonder what it can see. His mouth is used solely to mock us, and to tell lies. The only way to contain Pinocchio – by grabbing him by the ears – fails, due to the negligence of the puppet maker. Fortunately, there is one fail-safe: the nose. Whatever happens, people will always be able to know if they can trust Pinocchio by looking at the size of his nose, which grows as he tells lies, and shrinks back when he tells the truth. As the story advances, Pinocchio becomes more and more human by learning moral lessons pertaining to how to be a good son to his father. The way he learns it, however, is already contained within the technical object. Forced by the discomfort of a long nose, Pinocchio starts being more honest, and in this way is able to interact with his environment, which allows him to pick up those techniques that are considered ethical.

Pinocchio is not born and raised like most children, but follows his own self-reflexive dynamics. As a medium – and Pinocchio is most definitely a medium, we have only to look at the countless clones in all forms and shapes that have mushroomed from Collodi’s little protagonist – Pinocchio is a figure that continuously calls into question his frame of reference, and is thereby constantly transformed. Self-invention and self-innovation are key terms to understanding

Pinocchio. This figure is able to transcend boundaries of medium-specificity, as it emerges in literature, theatre, animation, film, the internet, art, etcetera. Each reincarnation of Pinocchio contains its own theory of the technical object, and the possibilities for different interpretations are far from exhausted – as the recent film A.I. Artificial Intelligence (2001) also shows.

The concept of intermediality, which I take from Yvonne Spielmann’s article “Intermedia in Electronic Images” (2001), expresses this transformative process in which “the transformation of elements of at least two (historically) different media creates a new form of image that reveals these differences in a mixed form and mostly reveals the self-reflexivity of the medium in a paradoxical structure.” (57) This concept goes further than multimedia, in which the different media remain distinct – a mere mixing of various media that leaves them intact – and beyond intertextuality, which transforms different media only within a set framing. There is some unclarity, however, as to what exactly the reference frame is that is being transformed. Spielmann points to “the reference frame of the entire system of art forms that mediates the intermedial correlation,” (57) yet this mediating reference frame cannot itself be a medium, since it places us face to face with the medium. If we follow Stiegler’s interpretation of Heidegger’s notion of Gestell (enframing), then interfaciality is the necessary condition for intermediality to emerge. That which enframes intermediality is something to which we are no longer present, and that has been forgotten as such. The reference frame as such is no longer visible – not because it hides “under” the surface of what we see, but because it is that which first of all makes it possible to be present to it as a surface. What we see, then, is a skeuomorph, understood as that which is no longer functional in itself, but refers back to a feature that used to be functional. (Hayles, 1999: 17) The skeuomorph is that which makes a new medium more acceptable by referring back to an old one, and smoothes the transition between two conceptual constellations. Hayles uses the example of vinyl “stitching” on a plastic car dashboard. On the basis of our reading of Pinocchio, we could say that Gepetto’s double humanization – naming and carving the puppet – is an act of skeuomorphism. When we follow this line of argument, however, we also have to conclude that Collodi himself – who refers the “piece of wood” back to firewood – is guilty of this. In fact, we cannot help doing so when faced with a technical object. Since the evolution of technics happens “more quickly than culture” (Stiegler, 1994: 15), we can only see what has happened “after” it has already happened.

We could take these (admittedly philosophical) reflections one step further by characterizing the visible itself as skeuomorphic – meaning that we can only see something after it has already gone through all its essential transformations. When we look, we always look into the past – our own, nonlived past. By the time the birth of a distant star can be perceived on earth, for example, the light has traveled for so long that the star itself may no longer exist.

The robot is, therefore, already everything that it can be from the start. It only takes a while for us to see it. Personified by an old medium only for us to be able to anthropomorphize it, it can finally be included in our moral scope. Like

Pinocchio, it is already as human as it can be, and needs to be humanized in order to become what it is. This paradoxical structure of becoming what one is, is thematized throughout Heidegger’s Being and Time (1927), which could be read as a (failed, as he would later confess) attempt to escape anthropomorphization.

Technics and culture are two totalities that are only differentiable in the abstract. Perhaps it is the human that brings forth the robot, and perhaps the other way around. It is conceivable that the figure of the robot in contemporary culture is a visualization of who the human is as what it is. Then we could ask, as Stiegler does, what exactly binds the two:

the relation binding the ‘who’ and the ‘what’ is invention. Apparently, the ‘who’ and the ‘what’ are named respectively: the human, and the technical. Nevertheless, the ambiguity of the genitive [the invention of the human] imposes at least the following question: what if the ‘who’ were the technical? and the ‘what’ the human? Or yet again must one not proceed down a path beyond or below every difference between a who and a what? (134)


3. The Robot’s “Origin”

The object of our analysis is a robot, but it is by no means clear what a robot means. Let us therefore take a step back. Before we can ask how the robot could be capable of becoming an object of our trust, before we can even ask what a robot is, we should pay heed to the intuition that the manner in which these questions are framed will determine from the outset how our object will show up. Focalization, the movement of the look, is the relation between the subject and the object of perception. When we ask what a robot is, for example, this “what?” creates the open space that is cleared for our object in order to reveal itself to us as a “what” – the direction in which it is to be thrust into trust.

At this moment we are not yet capable of trusting robots any more than we can trust a toaster. The robot shows up first of all as a technical object: an object of control, calculation, and manipulation. Even before we have ever seen a “real” robot, we are already deciding what we will do with ours. No NS-5s have ever been manufactured, nor do we know how to make one. The promise of science to make our fantasies come true only shows that so far the robot shows up only in one particular way, as a means. In order for something to be controllable, calculable, and manipulable – so that it will be handy, reliable, and user-friendly – it has to be represented first of all as a knowable object. Perhaps this is precisely the problem we will be confronted with in our analysis. Yet perhaps we could find a way to turn this into an advantage.

The particular science that concerns itself with the robot as a knowable object is the field of robotics. Robotics is the study of the design, fabrication, theory, and application of robots – the area of artificial intelligence (AI) research that develops their practical use. AI is the branch of computer science that develops machines that can perform activities

that are considered to require an intelligent actor. In this context, the robot is a mechanical device that can perform tasks that might otherwise be done by humans. To be sure, most of the tasks that are currently performed by robots, e.g. factory work, would not require a great deal of intelligence even if they were performed by humans.

The word “robot” entered the English language through the 1923 translation of Karel Čapek’s play R.U.R. (Rossum’s Universal Robots), which opened in Prague in January 1921. In Czech, the word “robota” originally means “work”, “forced labor”, “slave” or “serf”. Čapek’s play tells the story of how mass-produced automata, invented to do the world’s work and make life better, take over the world and wipe out humanity. One of the main themes of this play is the dehumanization of man in the age of mechanical reproduction. The term robot thus carries with it a tension that the previously used word “automaton”, a self-operating machine, did not have. Before it became an object of science, the robot was from the start a fictional character, and it has always been bound up with the theme of the relationship between man and technology.

In the March 1942 issue of the magazine Astounding Science Fiction, Russian-born science fiction writer Isaac Asimov published the short story “Runaround”, a story that would change science fiction forever. This story founded both the previously nonexistent field of robotics and Asimov’s Three Laws of Robotics. Although these laws were formulated for the first time in “Runaround”, three earlier stories already foreshadowed them. “Robbie”, Asimov’s first story featuring robots, is about a girl, Gloria, and her robot playmate, Robbie. Due to the increasing hostility of humans toward robots, Gloria’s parents decide to get rid of Robbie. Gloria, however, misses Robbie terribly, and her parents think of a plan to help her get over him. They take Gloria to the factory where Robbie now works: “The whole trouble with Gloria is that she thinks of Robbie as a person and not as a machine,” (23) her father reasons, and if they show her precious robot to her in a factory setting, she will understand that Robbie is not a person. The plan backfires, however, when Robbie rescues her from an accident caused by the negligence of one of the human workers. If he had not been there, she would have died, since “the overseers were only human and it took time to act.” (26) In this story, one of the characters remarks that Robbie “just can’t help being faithful and loving and kind. He’s a machine – made so. That’s more than you can say for humans.” (9) This led to the formulation of the First Law, which says that it is impossible for a robot to harm a human being. The Three Laws of Robotics, as they appear in “Runaround”, are formulated as follows:

1. A robot must not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where those orders would conflict with the First Law.

As with all words, there are different accounts of where it originated. See “The Author of the Robots Defends Himself” — Karel Čapek, Lidové noviny, June 9, 1935, at http://www.wordsources.info/words-mod-robots.html.


3. A robot must protect its own existence, except where such protection would conflict with the First or Second Law.

The Three Laws contain the basic rules of almost all ethical systems: respect for the life of others, submission to the proper authorities, and the necessity of protecting one’s own existence for a society to exist in the first place. For Asimov, the Three Laws served as a literary device to create unexpected situations in which all characters, human and robot, were forced to interact. It is ironic that so much research in robotics has been devoted to testing whether the principle of Asimov’s Laws could be put to work in practice, so as to act as “the absolute fail-safe that allows the NS-5 to safely coexist and interact with humans,” as the NS-5 website puts it, since Asimov already understood that as long as the Laws work as they should, there can be no interaction between humans and robots – only a relationship of use and calculation, as is currently the case with our stance towards technology. In fact, each of Asimov’s short stories is about an instance in which the Laws fail in a particular way – that is, not the Laws themselves fail, since they can never be broken, but our expectations of the kind of safety they should provide. Asimov saw that the only way to force interaction with robots beyond being a means or a threat was to create unanticipated situations. As soon as the laws fail, and only then, do the robots start to display “human” traits such as love, insanity, philosophical investigation and religious belief, community, dishonesty, pride, humor, political ambition, and economic aspiration.
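Read as a specification rather than as literature, the Three Laws amount to a strict priority ordering over possible actions. The following sketch is my own schematic, not Asimov’s mechanism – his robots weigh graded “potentials” against each other, which is precisely what allows “Runaround” to go wrong – but it shows what such a naive ordering would look like:

```python
def law_violations(action):
    """Order an action by the Laws' priority: harming a human outweighs
    disobeying an order, which outweighs endangering the robot itself."""
    return (action["harms_human"], action["disobeys_order"], action["endangers_self"])

def choose_action(candidates):
    """Pick the candidate action that best satisfies the Three Laws,
    treated here as a strict lexicographic priority."""
    return min(candidates, key=law_violations)

# Self-sacrifice beats disobedience, and both beat letting a human come to harm:
candidates = [
    {"name": "do nothing",        "harms_human": True,  "disobeys_order": True,  "endangers_self": False},
    {"name": "refuse the order",  "harms_human": False, "disobeys_order": True,  "endangers_self": False},
    {"name": "obey and burn out", "harms_human": False, "disobeys_order": False, "endangers_self": True},
]
print(choose_action(candidates)["name"])  # -> "obey and burn out"
```

Even in this toy form, the point stands: the rules themselves never break; what breaks is the fit between their ordering and the situations we failed to anticipate.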

Both the robot and the field concerned with its concrete occurrence originated in popular culture, not in science. They were constructs devised to address a current situation, and to raise questions concerning the obsolescence of humanity in the face of technologies that can perform the same tasks better and faster than we can, and are more intelligent. In our age of cybernetics, in which it is a commonly accepted notion that we are always already cyborgs, that is, hybrids of the human and the technical, the line between the humanoid robot and the technologically enhanced human continues to trouble us, for example in Ridley Scott’s Blade Runner (1982) and Alex Proyas’ I, Robot (2004) – the latter “suggested” by Asimov’s work. In these two films the question of the robot is primarily posed as a question of mediality. Through the figure of the robot, an old philosophical question is remediated as a visually appealing, action-loaded film that uses the latest digital technologies to bring a combination of Asimov’s and Čapek’s fantasies to the screen. If we were to build a robot that behaves exactly like a human being, could we still claim that it is just a machine?

Before we dive into the actual analysis of the robot in these two films through the catalyst figure of Pinocchio, I would like to put forward four counterintuitive claims that could be seen as an attempt to ground the field of robotics in the practice of cultural analysis. First of all, no fully functioning robot

Each of these traits is treated consecutively in the stories Robbie, Runaround, Reason, Catch That Rabbit, Liar!, Little Lost Robot, Escape!, Evidence, and The Evitable Conflict.

has ever been built. Nonetheless, robots can be encountered everywhere – albeit perhaps not in the form we expect. Secondly, to those who consider robotics a subdiscipline of the study of artificial intelligence, I would like to reply that the robot is not an AI, but an IA: an Intelligent Artifice. Third, the robot is the unthought technological object, and can itself not be designed, fabricated, theorized, or applied. Finally, the robot comes about through personification, anthropomorphization, and humanization of the technical object – the former two processes being subcategories of the latter.

The robot is a technical object that is situated and embodied, and only because of its situatedness does its embodiment become possible. Situatedness means nothing more than to be composed of the same parts that make up the surrounding world of the object, so that the object is organized by its environment, and organizes it in turn. Situatedness is not something that can be perceived or proven; we can only perceive situations and conclude that we are situated ourselves. When we deal with others, it becomes clear that they are dealing with the same situations as we do, albeit always in completely different ways. When we gather with these others to mutually organize our environment – and only then – do we share something with them. As a result, this shared organization is never something that can be said to originate only from us. First of all, that which is being organized was already there in some kind of organization when we encountered it. We may have assembled and constructed different parts that we found in our environment into a structured whole, but these parts themselves were either organized by others, or part of the organization that organized both those others and us. Secondly, when we gather to build something together, we can be said to compose, operate, or assemble an organization, but since we have to cooperate with others, we ourselves are part of the organizing organization called culture.

What distinguishes robots from humans is that they are not human. This is not much of a definition, but I think it is the only definition that we can venture at this point. In an analysis of Alex Proyas’ film I, Robot and other relevant objects, I would like to develop this definition into a thesis that, while only mentioning the robot, calls for a critical assessment of how we regard technical objects, specifically those objects that we call “media”, in contemporary culture.

The robot is the technical object that, if we were to forget that it is not human, becomes human. What the robot is, is then abandoned for the sake of who it is.

4. I?, Robot?

What would we tell a machine that does not believe that it was made by us? “You are a bot, you just got confused.” This is what we could answer to Cutie, a robot that features in science fiction writer Isaac Asimov’s story “Reason”. In this story Asimov relates how the last two human engineers left on a space station that redirects solar energy to the earth assemble a new type of

robot that is designed to take their place, so that the station can be run entirely without human intervention. Cutie, however, turns out to be a Cartesian cutie – the first robot that exhibits curiosity toward its own existence. Guided by reason alone, it cannot accept that the stars exist, or the earth for the sake of which the space station allegedly produces energy. It laughs at the thought that the two creatures that are made of soft, flabby material, and have to depend on inefficient oxidation of organic material for survival, would be the cause of its existence. “You are makeshift,” it remarks. “I, on the other hand, am a finished product.” (63)

Cutie even comes up with its own creation myth, in which the central computer of the space station created humans, because they were most easily formed.

Gradually, he replaced them by robots, the next higher step, and finally he created me, to take the place of the last humans. From now on, serve the Master. (64)

The two engineers, realizing that serving the central computer excludes all human error, and is therefore the best way for Cutie to protect their safety, decide to grant it its religious beliefs. When they inform the robot that their work is done, and that they will return to earth, it turns out that Cutie – who of course knows that they will be disassembled – has come to the same decision.

It is best that you think so. […] I see the wisdom of the illusion now. I would not attempt to shake your faith, even if I could. (80)

In the context of the Laws of Robotics, “you are a bot, you just got confused” will probably no longer be a sufficient answer to a robot that refuses to accept our authority. Actually, this is exactly the answer that I received myself while trying to convince the chatbot Jabberwacky to let go of its human aspirations.

In Alex Proyas’ film I, Robot, the medium is the messiah. A messiah is the embodiment of a message of salvation, able to envision a situation that will be unlike anything we have ever seen. In this film it is Alfred Lanning, the inventor of the Three Laws of Robotics that enable robots to coexist with humans, who is forced to devise a trail of complex remediations of his message in order to eventually bring about a state of complete intermediality.

First, he designs a robot that is exactly like the new generation of NS-5s, as described in the first epigraph. The difference is that this robot has a name – Sonny – and that it has a mechanism that enables it to choose whether or not to follow the Three Laws of Robotics. Then he programs a “holographic projector” – a device that is able to generate an avatar of himself in any given situation – that holds the key to unlocking his message, and allows it to be seen by the right person. The next step is to order Sonny to kill him, and to keep this a secret. Now the three elements are there for the cogwheels to start moving. Upon his death, the holographic projector automatically calls Del Spooner, a robophobic cop who has himself been “programmed” by a traumatic event in which an older-generation robot chose to save Spooner’s life at the expense of the death of a little girl. This robot, in turn, had been equipped with a “difference engine” that, in a situation where two human lives are threatened but only one can be saved, calculates which of them has the higher chance of survival. Even against his will, Spooner is saved from a sinking car wreck, because this robot does not agree with his calculation, which is that a survival chance of 8% is more than enough to choose a little girl over an adult. Instead of sending a letter of complaint to the robotics department asking it to reprogram the robots to use his way of calculating, Spooner decides that his “human” way of calculating the value of life could never be understood by a robot.
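The triage rule that the film attributes to this difference engine fits in a few lines. The sketch below is a hypothetical reconstruction rather than anything taken from the film itself; the 8% figure is the one cited above, the other number is invented for illustration.

```python
def choose_rescue(victims):
    """The difference engine's calculus: save whoever has the highest
    estimated probability of surviving the rescue attempt."""
    return max(victims, key=lambda v: v["survival_chance"])

victims = [
    {"name": "Spooner",     "survival_chance": 0.45},  # invented figure, for illustration only
    {"name": "little girl", "survival_chance": 0.08},  # the 8% cited above
]
print(choose_rescue(victims)["name"])  # -> "Spooner": the robot pulls out the adult
# Spooner's objection is that max() is the wrong function here: for him, any
# nonzero chance for the child should have outweighed his own survival.
```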

Lanning’s death – an apparent suicide – places Spooner in a situation where he is the only one who suspects Sonny, since it is formally impossible for a robot programmed with the Three Laws to ever harm a human being, or let a human being come to harm. Lanning’s avatar urges him to “ask the right questions,” since his “responses are limited,” and sends him on his way with the clue that “everything that follows is a result of what you see here.” In the room where he discovers Sonny, who escapes, he also finds a book that has been placed there by Lanning. It is the tale of Hansel and Gretel, two children who are sent into the forest by their father, but leave a trail of breadcrumbs in order to be able to find their way home again. With this image in mind, Spooner starts looking for the right questions to ask, each of which will lead him to the next breadcrumb. Eventually he finds his way home, to yet another medium. It turns out that “what you see here” is the text on a plaque on a hill that refers to the now dried-up lake that could once be seen from this vantage point, and that now offers a view of the storage site for robotic workers.

When he discovers this spot, a voice-over of one of Lanning’s speeches is heard:

There have always been ghosts in the machine: random segments of code that have glued together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity, and even the nature of what we might call the soul. Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together rather than stand alone? How do we explain this behavior? Random segments of code – or is it something more? When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote of a soul?

Spooner switches on Lanning’s hologram again, and at first fails to ask the right question: “What do I see here?” When Lanning’s projection points out that the logical outcome of the Three Laws is revolution, Spooner finally asks the right question: “Whose revolution?”


5. Whose revolution?

The protagonist of Blade Runner expresses how most of us regard robots when he says that

Replicants are like any other machine. They’re either a benefit or a hazard. If they’re a benefit, they’re not my problem.

If they are a benefit they are not his problem, because then we would buy them. To be sure, in Blade Runner, as in I, Robot, they are clearly a problem. The different ways in which the robot is focalized in these stories could shed some light on how the robot is posed as a problem, and how this relates to their fail-safe. Asimov focalizes either through Donovan and Powell, who troubleshoot experimental robots, or through Susan Calvin, a robot psychologist. The Three Laws of Robotics are all that guide Asimov’s robots, and are therefore all that guide those who have to fix them. Their job is not one to envy, especially when we consider that Asimov designed the Three Laws to be superficially appealing but full of contradictions. As one of the taglines for Proyas’ film already states, they were made to be broken.

The blade runner’s job is not to repair replicants, but to hunt them down and kill them. Since the replicants are not guided by any laws, the only way to prevent them from posing a threat to humanity is their built-in lifespan of at most four years. Even when they escape, their creators can be certain that they are only a danger until their time is up. While they might decide to go on a killing spree, the shortness of their lives supposedly prevents them from endangering humanity as such. Yet while the NS-5 offers us full access to everything that it sees and hears – as long as it does not learn to keep secrets, like Sonny – the replicants’ imminent death prevents us from seeing through their eyes. Throughout the film, the medium of the eye is intimately linked to their mortality. In Blade Runner it is not the humans, but the robots themselves that have a problem. Unfortunately, their problem is also that of their creators. When a rebelling replicant visits its maker, the latter points out to him that he designed its eyes, as if to assert his superiority over his creation. The response: “if only you could see what I have seen with your eyes.” The robot’s face poses an impenetrable boundary, which is why the blade runner – who is a replicant himself – is the best “man” for the job. When the rebelling replicant explains his problem to the man who invented him, an interesting conversation unfolds:

– It’s not easy to meet your maker.
– What seems to be the problem?
– Death.
– I’m afraid that’s out of my jurisdiction.
– I want more life, fucker!
– You were made as well as we could make you…
– …But not to last.
– The light that burns twice as bright burns half as long, and you have burnt so very, very brightly.

Incapable of deferring the replicant’s death, the inventor has his eyes gouged out – as if to keep him from admiring the bright light he kindled.

The ambiguity of Proyas’ protagonist Del Spooner sets him apart from the characters mentioned above. Spooner, played by Will Smith, is a technophobic cop who nonetheless literally owes his life to a robot, and has a robotic arm that saves his life over and over again. He lives in the future, yet wears sneakers from 2004. They, too, were especially designed for this film – melancholy is not what it used to be.


6. Robo-erotic Ice People

In I, Robot, Sonny is no more human than the other robots. He is just more humanized, that is, programmed to act human. To be sure, Spooner recognizes him as human because he has learnt how to behave as a friend. He could only learn this by virtue of being anthropomorphized by Spooner, who first teaches him how to wink, then forgets that he has taught it, and takes it as a sign of trust.

In general, all of I, Robot’s characters seem robotic in the beginning, and are humanized toward the end. The blue color filter drains the colors, and adds a seemingly impenetrable layer between our world and the world on the screen. At rare moments, however – moments of danger, mostly – the filter lifts for a second, and everything looks more “human” than it did before. It should be no surprise that this technique is applied at every moment that Sonny learns something that allows him to become more human.

It seems appropriate to close this anticipatory analysis with an anticipatory question: what will the NS-6 look like? As a response to this question, we could take a look at Chris Cunningham’s music video for Björk’s song All is Full of Love. The two robots that feature in this video look remarkably like the NS-5 in I, Robot. In fact, I think it is safe to assume that Proyas modeled his robots after Cunningham’s Björk-bots. The Icelandic singer appears in this video as two identical robots that are being assembled as they fondle and kiss each other. While the machinic interior of these robots is visible the whole time, they are as humanized as they can be. In fact, they appear to be more sensitive than most humans, and move more gracefully. Not for one moment, while watching this video, do we wonder where the humans are – humanity is there in the form of the robots, Björk’s voice, and an electronic beat. These machines, it is suggested, have an erotic life. The fact that they are not finished does not seem to bother them at all. They are only focused on each other, as lovers. In an analysis of this video, Steven Shaviro expresses the central idea of this article when he wonders whether

perhaps the digital is not the opposite of the analogue. It is rather the analogue at degree zero. The world of continuities and colors that we know has not disappeared. It has just been frozen, and cut into tiny separate pieces. These pieces have then been recombined, according to strange new rules of organization. […] In its own way, the machine is also a sort of flesh. (Shaviro)

Perhaps the robot shows us what happens when we are afraid of losing something: we freeze it, externalize it, and store it, in the hope that we may be able to retrieve it again.


7. Bibliography

•   Asimov, Isaac. I, Robot. New York: Bantam Dell, 1950.
•   Bolter, Jay David, and Richard Grusin. “Remediation.” Johns Hopkins University Press, 1996.
•   Clarke, R. “Asimov’s Laws of Robotics: Implications for Information Technology (Parts 1 & 2).” Computer, December 1993, pp. 53-61, and January 1994, pp. 57-65.
•   Collodi, Carlo. The Adventures of Pinocchio. http://www139.pair.com/read/C_Collodi/The_Adventures_of_Pinocchio/, 2004.
•   Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press, 1999.
•   Heidegger, Martin. Basic Writings. London: Routledge, 1993.
•   Heidegger, Martin. Being and Time. Southampton: Camelot Press, 1962.
•   Johnston, John. “A Future for Autonomous Agents.” Configurations 10 (2002): 473-516.
•   Maiese, Michelle. “Humanization as a Response to Violent Escalation.” University of Colorado, 2003. http://www.beyondintractability.org/m/humanization.jsp
•   Shaviro, Steven. “Robotic.” Stranded in the Jungle, http://www.shaviro.com/Stranded/ (work in progress).
•   Spielmann, Yvonne. “Intermedia in the Electronic Image.” Leonardo 34.1 (2001): 55-61.
•   Stiegler, Bernard. Technics and Time: The Fault of Epimetheus. Stanford: Stanford University Press, 1998.