
The Computer as a Prosthetic Organ of Philosophy

by David Rokeby

This article looks at issues of language and encoding from the perspective of computer programming. Particular attention is paid to the different relationships between code and encoder/decoder in computer coding and human language coding. Examples of the writer/artist's work and working experience are used to illuminate these differences and a role for computers as philosophical prostheses is proposed.


By the first year of university, I had largely succeeded in turning myself into a human simulation of a computer. I do not fully understand the motivations for this 'project'. I had a very strong belief in the value of logic, and used logic as my main tool in attempting to solve everything from electronic design, where logic was appropriate, to personal relationships, where it was decidedly counter-productive. As part of the process, I taught myself to detach myself from physical pain, and by extension to separate myself from my body. In a profound and personal way I had constructed a 'virtual reality' for myself, in which my rational mind was free from the complications of the biological and the emotional. I felt that ambiguity and contradiction were my gravest enemies, to be resolved or destroyed at all costs.

One of my other pet projects in my teens was a complete rationalization of the English language, so that it made perfect sense grammatically and phonetically. The idea that there were exceptions in language struck me as wrong, and I delighted in the notion of bringing order to the chaos.

By the end of my first year of university I realized that I could probably rationally convince myself of just about anything. I had completely separated my thinking process from the context of physical reality. Logical thinking for its own sake seemed suddenly to be a self-indulgent game of no consequence. I decided to make a shift and look for ways of grounding my intellect in experience, leaving university and entering art college.

At art school I had the great fortune of encountering some very challenging teachers. One day, one of my professors told the class that we would be looking out a window for the whole three-hour class. I was incensed. I stood at my assigned window and glared out through the pane. I saw cars, two buildings, a person on the street. Another person, another car, the sky, a cloud. For fifteen minutes I fumed, and muttered to myself. Suddenly I started to notice things. The flow of traffic down the street was like a river, each car seemingly drawn along by the next, connected. The blinds in each of the windows of the facing building were each a slightly different colour. The shadow of a maple tree in the wind shifted shape like some giant amoeba. For the remaining hours of the class I was electrified by the scene outside. After fifteen minutes, the "names" had started separating from the objects.

Reflecting on this afterwards, it seemed to me that during those first fifteen minutes, I had stopped seeing things as soon as I had positively identified them. At that point of identification, the word took the place of the sensed object in my consciousness and I no longer "saw" it. After fifteen minutes some part of me got very bored and shut down, some part of me let go, and the raw sense and perception data started flooding in again.

In his diaries, the Austrian writer Peter Handke at one point talks about "formulation" as the beginning of forgetting. This aligned very nicely with my experience of the tradeoffs that occur when language is applied to phenomena. What is gained is the ability to externalize the experience as a token that can be stored (writing), manipulated (reasoning) and shared (communication). "Coining" a term is an act of power. Adam in the Bible is "the giver of names", charged with the responsibility to bring nature under his dominion through the act of naming. But at the same time, something precious and harder to define is lost. What started as a live, multi-dimensional, organic, and complex interrelation is crystallized into a symbol disjunct from context and experience. It loses its conceptual suppleness. It becomes a 'stereotype' of the thing that it is intended to represent.

Yet language remains a powerful tool for communication, and it often attains levels of richness that seem to belie the dangers mentioned above. The trick is that language has this layered expressive power only in the context of its synergistic relationship with the human brain. The human brain is a very fluid and subjective language decoder. Reception of a human language term by the brain involves activating a complex set of relations. The language tokens are generally interpreted back into a living dialogue of disparate and often contradictory associations derived from personal experience. The crystallized concept dissolves back into what I might call a 'wet concept'.

The pre-Socratic Greeks recognized no clear distinction between thinking and seeing, nor between language and reality. There was no sense of an intervening self in the process of perceiving and describing the world... it was imagined as a purely reflexive and truthful process. According to this belief, it should therefore be impossible to speak of that which did not exist. The fact that one could speak of that which did not exist was the source of one of the first great paradoxes that troubled the philosophers of the time. The resolution of the paradox required the invention of the subjective, imaginative and devious self, the germ of consciousness.

This intervening self created a new problem for Greek philosophy. If the self could distort the translation of reality into language (and vice versa), then this self was capable of deceiving through language. In such a case, how could one discuss and pursue the truth through language? (Remember that philosophical dialogue was the method for seeking truth in this culture.) In order to get around this problem, a restricted, purified subset of language was developed. This subset provided the foundations of formal language and logic. It provided the conditions necessary for truth and verification. The development and refinement of logic finds a materialized and purified form in the computer.

It was at the extreme end of this particular trajectory that I found myself at seventeen, trying to fit crystalline ideas into my not very crystalline brain. The computer, on the other hand, is very comfortable with these sorts of ideas, so I have spent much time wondering: "By what sort of mechanism does the computer manage to hold and manipulate terms of pure logic?"

The computer is perhaps best imagined as a vacuum: a protected space in which all ambiguity has been removed. The main engineering problem in the design of a computer is the containment of that vacuum. Computers are made of humble, earthly materials like silicon, with complicated non-linear analog behaviours that are quite alien to the logical precision desired. The basic component of the computer, the transistor, is still a fluid analog device. To attain digital precision, these transistors are pushed, through the massive amplification of positive feedback, to their absolute limits. (A peculiar form of extremely violent self-referentiality!) Through this process, all but the extremes, the 0's and 1's that form the terms of the restricted language of the digital, are effaced. This violence has serious repercussions: the rapid switching from one state to the other produces enormous amounts of extraneous noise. This required a second innovation: the 'clock', which carefully times the procession of digital decision-making to occur at the first possible moment after each chaotic transition has settled.
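The principle, if not the physics, can be sketched in a few lines of Python. The gain, the logistic curve and the numbers below are inventions for illustration, not a model of a real transistor; the point is only that a saturating amplifier fed back on itself drives any ambiguous intermediate value toward one of the two extremes, and that the 'clock' amounts to waiting until the value has settled before reading it.

    import math

    def feedback_amplify(x, gain=20.0, iterations=40):
        """Repeatedly re-amplify an analog value around its midpoint.

        Positive feedback pushes any value above 0.5 toward 1.0 and any
        value below 0.5 toward 0.0 -- the two permitted digital extremes."""
        for _ in range(iterations):
            # a saturating amplifier centred on 0.5 (a logistic curve)
            x = 1.0 / (1.0 + math.exp(-gain * (x - 0.5)))
        return x

    # Ambiguous analog inputs are driven to unambiguous digital outputs.
    for analog in (0.1, 0.45, 0.55, 0.9):
        settled = feedback_amplify(analog)
        # The 'clock' idea: read the value only after the noisy transition
        # has settled, then treat it as a bit.
        bit = 1 if settled > 0.5 else 0
        print(f"analog {analog:.2f} -> settled {settled:.3f} -> bit {bit}")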

So the computer is the result of a multi-millennial project to create a vacuum free of ambiguity and subjectivity: the conditions necessary for unambiguous truth and verification.

It is ironic that the computer, born out of this pursuit of objective truth, should be so skilled at simulation. (Remember that the word "simulation" directly implies deception.) But perhaps I should not be surprised, having realized (as related above) that I could, by being sufficiently divorced from grounded reality, logically convince myself of anything.

Where the human mind and human language seem, for the most part, to manage a useful balance between 'reality' and its encoded shadow, the computer and computer language lean decisively toward the one extreme of the code. There is no compensatory balance between the encoder/decoder and the code. The computer plays into the human fantasy of a perfect language and perfect communication, but it does so through the device of arbitrary and complete isolation and self-reference. It seals itself in with the device of its own logic.

This is not without cultural ramifications. The material world cannot enter into this digital nirvana except through that particular "eye of the needle" called quantification, that most literal and unforgiving form of encoding. That which cannot be measured cannot enter into the kingdom of the digital. The fact that words can be stored and manipulated by a computer does not mean that the referenced concepts or material reality are held in the computer. We reinvigorate a computer's textual output with our mind's wet and messy renderers. The computer is just holding on to given patterns, sets of unambiguous measurements of key-strokes, mouse-clicks, modem songs, sensor readings...

We regularly engage in feedback relationships with these systems. In feedback, the flow of information and influence is recursive. The effect of filters and processors in the path is multiplied by this recursion. It has been determined that consciousness tends to operate at a delay of about 1/10 of a second. Computers tend to respond in much less than 1/30th of a second. As a result, the feedback between human and machine can creep under the level of consciousness, creating a tight loop that invisibly reinforces and attenuates various aspects of the complex stream flowing through the loop. Such feedback systems have their own synergetic characteristics. And because the fastest responding element of the system is usually the computer, what is most reinforced through the loop is often defined more by the computer than by the human.
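A toy illustration of my own framing (not a model of any particular system) makes the asymmetry visible: treat the stream in the loop as two components, one that the computer's filter reinforces and one that the human's filter reinforces, and let the computer's pass run roughly three times for every human pass. The filter strengths below are invented; the point is only that the more frequently applied filter compounds faster and ends up shaping the stream.

    # Two 'filters' in a feedback loop, each slightly reinforcing the
    # features it favours.  The computer's filter runs roughly every 33 ms,
    # the human's roughly every 100 ms, so the computer's preferences
    # compound about three times as fast.

    stream = {"quantifiable": 1.0, "ambiguous": 1.0}   # features flowing in the loop

    def computer_pass(s):
        # reinforces what it can measure, attenuates what it cannot
        return {"quantifiable": s["quantifiable"] * 1.05,
                "ambiguous":    s["ambiguous"]    * 0.95}

    def human_pass(s):
        # reinforces the ambiguous, lived side of the exchange
        return {"quantifiable": s["quantifiable"] * 0.97,
                "ambiguous":    s["ambiguous"]    * 1.03}

    for ms in range(10_000):            # ten simulated seconds of the loop
        if ms % 33 == 0:
            stream = computer_pass(stream)
        if ms % 100 == 0:
            stream = human_pass(stream)

    print(stream)   # the 'quantifiable' component has been amplified far more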

I find that this even manifests itself in familiar systems like e-mail. The potential speed at which e-mail dialogues can progress tends to reinforce issues that can be instantly resolved with straightforward answers. Meanwhile, at least in my experience, my in-box accumulates a huge pile of unanswered but more interesting e-mails that can't properly be addressed in the rapid cycle that e-mail encourages.

The computer, gifted at sharp distinctions and quick and exact calculations based on quantifiable parameters, is the most fundamentalist of technologies. And like all forms of fundamentalism, subscription to the system gains one an immediate and tangible power and an attendant reduction of confusion. The power, however, comes at a cost, and the greatest costs in this case tend to be unquantifiable, which means that they conveniently fall out of the equation if encodability is allowed to rule as a measure of significance or truth.

The computer has come into ascendancy in the same century as quantum mechanics, which taught us that the instrument of measurement shapes what it measures. Logic itself is a mental instrument for measuring 'reality', and the system of logic taints its own discoveries with its internal biases, finding, for the most part, only the kind of thing it is looking for. Its internal consistency (as artificial perhaps as the computer's carefully constructed and violently reinforced logical workings) is no guarantee of truth except within its own frame and on its own terms.

It is most important to prevent the inherent characteristics and extraordinary powers of the computer from effectively setting the agenda, from defining the terms by which validity and merit are measured.

Much of my artwork since the beginning of the 1990s has been an inquiry into these sorts of issues. In particular, the 'Giver of Names' and 'n-cha(n)t' are explorations of the limits and possibilities of computers in relation to complex concepts and human language. In the above paragraphs I have outlined some of the biases of information technology. My project, in the context of which I effectively dress myself in the drag of an artificial intelligence worker, is to attempt to transcend the problems and limitations that I have enumerated. Success or failure in this endeavour is not the key issue. I want to ground the issues I have outlined in practice, exploring in a tangible way what computers do well and what they are bad at. (Or more properly, what we are able to program them to do well, and what kinds of activities are extremely hard to represent in their terms and context.)

The question of programming actually brings to the fore another notion of encoding, as programming is the act of encoding function or process. As with the encoding of information, in the encoding of process (aka simulation), we are constrained by the inherent limits of the encoding process and encoding language.

In the 'Giver of Names', I set about creating a system that can look at objects presented to it, make some sort of perceptual interpretation, and generate a complex internal state through a broad and fluid process of association based on a large, highly cross-referenced knowledge base. Stimulation from each perceived attribute of the seen objects spreads in decreasing intensity from the initial stimulus through all the associative links related to the initially stimulated node, and then from those related nodes in the knowledge base through all their links, etc., until the stimulus is exhausted. The resulting complex internal state is the basis for a process of articulation, through which the system constructs sentences in proper English grammar and speaks them aloud using voice synthesis. The computer, in a manner of speaking, attempts to express its internal 'state of mind'. This is not a recognition system... the results of the perceptive and associative processes are not a single identifying term. The system's 'ideas' about what it sees are held in the complex topology of the internal state, and the sentences it speaks are reflections of this topology, forced into the constraints of the English language.
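In greatly simplified form, the associative core is a spreading-activation pass over a linked knowledge base. The toy links, decay rate and threshold below are invented for illustration; this is a sketch of the general technique, not the actual data or code of the 'Giver of Names'.

    # Spreading activation over a toy associative network.
    links = {                       # node -> associated nodes
        "red":    ["apple", "fire", "blood"],
        "round":  ["apple", "ball", "moon"],
        "apple":  ["fruit", "tree"],
        "fire":   ["heat", "danger"],
        "ball":   ["game", "child"],
        "moon":   ["night", "sky"],
    }

    def spread(stimuli, decay=0.5, threshold=0.05):
        """Spread stimulation outward from the perceived attributes.

        Each node passes a decayed share of its stimulation to its
        associations until the stimulus is exhausted (falls below the
        threshold).  The returned dictionary plays the role of the
        'complex internal state' from which sentences would be articulated."""
        state = dict(stimuli)
        frontier = list(stimuli.items())
        while frontier:
            node, energy = frontier.pop(0)
            passed = energy * decay
            if passed < threshold:
                continue                      # this branch of stimulus is spent
            for neighbour in links.get(node, []):
                state[neighbour] = state.get(neighbour, 0.0) + passed
                frontier.append((neighbour, passed))
        return state

    # Perceiving something red and round stirs up apples, moons, fire...
    state = spread({"red": 1.0, "round": 1.0})
    for node, energy in sorted(state.items(), key=lambda kv: -kv[1]):
        print(f"{node:8s} {energy:.2f}")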

The process of creating this software has been and continues to be an exciting and frustrating struggle with the constraints of the computer and the limits of our understanding of ourselves as humans. First off, it has been an extraordinary encounter with language. The attempt to encode human-like language facility echoes, in retrospect, my adolescent dream of bringing order to the English language. But even the most so-called 'proper' English is maddeningly (or delightfully) unsystematic. Through this lens, language appears as the most perverse and original of human creations. It seems to me that language is, at base, a collection of exceptions. In the early evolution of language, as long as the number of language expressions remained small, each term could afford to be singular and unique without requiring any underlying system. The rules of language became a necessity only as the number of language terms increased, reaching the limits of the human capacity to hold unrelated exceptions in memory. But this pool of exceptions lives on in contemporary language, and the greatest concentration of exceptions is found in the words used most often (e.g. the verb 'to be'), and in the colloquial terms that are invented to cover new ideas and paradigms in popular culture.

My personal experience that the task of simulating vision and speech can reveal hidden things about human function inspired the notion that the computer can function as a sort of philosophical prosthesis. We are not very good at perceiving ourselves, being so deeply invested. And our imagination invisibly fills in conceptual gaps and flaws much as our vision system papers over gaps in our visual field. Rigorously externalizing our models of ourselves can dramatically clarify the limits of our self-understanding and open those hidden conceptual gaps to inspection.

This is particularly interesting as many of these sorts of blind spots are created by our increased reliance on a logical and scientific understanding of ourselves which the computer often encourages and validates. Some of our most remarkable human capabilities are so familiar that we all too easily lose sight of their remarkability. But as we engage more and more in a computer mediated life, we need to work harder and harder at supporting those aspects of ourselves which are least logical and least understood.

'n-cha(n)t' extends the exploration initiated with the 'Giver of Names' to include the social dimension of communication. Seven computers running a derivative of the software developed for the 'Giver of Names' are interconnected into a network. Each computer follows its own stream of associations, producing an endless string of utterances (words, phrases and sentences) as it follows this stream. Each machine also communicates the current focus of its stream to the rest of the machines via the network. Each machine responds to these incoming messages by stimulating itself through an associative process similar to that in operation in the 'Giver of Names'. This mutual reinforcement draws the complex states of all seven computers toward a state of consensus. When complete consensus is achieved, the machines reach the point where they are chanting identical or very similar utterances in approximate synchronization. This is a dynamic and emergent chant. Each machine is also listening to its immediate environment through a microphone set to ignore the sounds of the other computers, but responding to any sounds made by a person in its immediate vicinity. This overheard voice is run through a voice recognition system, and the result of the recognition stimulates that machine's knowledge base. This knocks that machine out of the state of consensus, and it falls away from the chant. Meanwhile, it starts selectively broadcasting this information through the network, causing a spreading disarray that usually ends by dissolving the chant into a chaos of voices. In the absence of further external intervention, the system finds its way back to equilibrium, and returns to chanting.
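The underlying dynamic can be caricatured in a few lines: several agents each drift toward what the others are broadcasting, a perturbation knocks one of them away, and the disturbance spreads before the group settles back toward consensus. The numbers below are invented and each machine's 'focus' is reduced to a single value; this is a schematic of the dynamic, not the software of 'n-cha(n)t'.

    import random

    random.seed(1)
    focuses = [random.uniform(0.0, 1.0) for _ in range(7)]   # each machine's current focus

    def step(focuses, pull=0.1, drift=0.005):
        """One round of mutual reinforcement plus each machine's own wandering."""
        new = []
        for i, f in enumerate(focuses):
            others = [g for j, g in enumerate(focuses) if j != i]
            consensus = sum(others) / len(others)    # what the rest of the network is 'saying'
            f += pull * (consensus - f)              # drift toward the others
            f += random.uniform(-drift, drift)       # each machine's own associative wandering
            new.append(f)
        return new

    def spread(focuses):
        return max(focuses) - min(focuses)           # near 0 means everyone is chanting together

    for t in range(1, 121):
        focuses = step(focuses)
        if t == 60:
            focuses[3] += 1.5    # a visitor speaks to machine 3: it falls out of the chant
        if t % 10 == 0:
            print(f"t={t:3d}  spread={spread(focuses):.3f}  "
                  + "#" * min(40, int(spread(focuses) * 40)))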

The presence of the computer in our culture represents a fairly radical shift in balance. Having an external device capable of logical processing and precise memory poses interesting challenges and opportunities. For the most part we have failed to take useful advantage of the potentials of these devices, and have allowed the ease with which they do certain kinds of things to effectively determine the agenda of 'progress'. We need to make new kinds of demands on them. They need to be critically examined from a very human perspective, not in a knee-jerk Luddite manner, but as a way of understanding ourselves and the peculiarly human desires that caused us to invent such a machine. The computer is a kind of wishful self-portrait... a compendium of abilities we as humans have aspired to but are not very gifted at. We need a much clearer understanding of this complex relationship. Without this understanding we will be unable to find an appropriate partnership with our creations.

From my own experience, one such fruitful partnership results when the computer is used as a device for exploring the limits of logic and its applicability, a new weapon in the philosopher's arsenal.
