28 Sept 2009

Passing the problem to the practice

My previous post posited a number of possible pursuits within a practice, but failed to establish any logical focus, or even a starting point. My interests in this field feel very heterogeneous and underdeveloped - there is a clear difference between believing one has an interest in a particular area, and actually being interested.

I recall a conversation between a couple of friends about fiction writing - they were discussing how to overcome problems of plot, where you need to get a character to a particular point or situation in order for the plot to move on. One piece of advice they found useful was 'give the problem to the character' - find a way to make the character work out how to get themselves into that situation.

Is it possible, in a similar vein, to give the problem of what a practice is to be about - what its concerns are - back to the practice itself? To find a way within it for it to suggest (perhaps) what it wishes to be about?

This requires setting up, from the outset, a logic of practice without knowing what that practice is actually concerned with, other than its own self-discovery. This methodology is arguably philosophical in the modern sense: one can imagine it folding in on itself like Descartes' Meditations until it reaches its cogito. Or it might never do so - it might continue collapsing, as Descartes' practice might have done if a certain kind of rigour had been applied to it (as Hume later did).

By what methodology could this be achieved? My first thought was of a database. I was interested that Lev Manovich seems to place databases alongside the internet as flat, interlinked media, contrasting with the hierarchy of a traditional OS. But a database is as much its relationships as its data tables (just as the internet is surely as much its links as its contents). That relationships can be reconfigured does not mean there isn't some kind of implied hierarchy there. The types of possible relationships are defined by the pattern of primary keys, after all - if a record has no unique identifier, or its identifier is not referenced, then the record cannot be connected to others, and the possible hierarchies are limited.
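To make that concrete, here is a minimal sqlite3 sketch - the tables and the sample interests are invented for illustration, not any particular schema - showing how a record that is never referenced by a key simply cannot participate in whatever hierarchy the relationships imply:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Each interest gets a unique identifier (its primary key)...
cur.execute("CREATE TABLE interest (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# ...and relationships exist only by referencing those keys.
cur.execute("""CREATE TABLE relation (
                   parent_id INTEGER REFERENCES interest(id),
                   child_id  INTEGER REFERENCES interest(id))""")

cur.executemany("INSERT INTO interest (id, name) VALUES (?, ?)",
                [(1, "interface aesthetics"), (2, "synaesthesia"), (3, "hope")])

# Only one relationship is declared; 'hope' (id 3) is never referenced,
# so it hangs free of any hierarchy the relation table implies.
cur.execute("INSERT INTO relation VALUES (1, 2)")

for parent, child in cur.execute(
        """SELECT p.name, c.name FROM relation
           JOIN interest p ON p.id = parent_id
           JOIN interest c ON c.id = child_id"""):
    print(parent, "->", child)   # interface aesthetics -> synaesthesia
```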

The question, though, is: what would a practice-generating program look like to the end user? What would it say, what would it suggest? At this point I can only imagine the most strange and arbitrary configuration - interests spliced together at random, with no understanding implied.
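Sketched naively, such a program is almost embarrassingly simple - the following toy (the interests are lifted from my own shortlist, the connectives invented) splices at random exactly as described, with no understanding implied:

```python
import random

# interests drawn from my earlier shortlist; connectives are arbitrary glue
interests = [
    "the aesthetics of interface",
    "speech recognition technology",
    "learned helplessness",
    "the ordering of concepts and knowledge",
    "social relations on the internet",
]
connectives = ["as a model of", "set against", "re-ordered through", "in lieu of"]

a, b = random.sample(interests, 2)   # pick two distinct interests
print(f"A practice concerning {a} {random.choice(connectives)} {b}.")
```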

20 Sept 2009

Establishing fields for exploration

I'm about to start an MA in Interactive Media next week, and I thought it might be helpful to use this public/private space to try and come up with a short list of what I'm interested in, with the hope of being able to boil it down to some kind of off-the-cuff couple of sentences. Let's see what happens:

  • the effect of the internet on social relations - I'm thinking particularly of the re-ordering of face-to-face social interactions (i.e. meeting people through the internet), the generation of co-operative enterprises, and the way the format of social networking sites shifts how people talk and interact with each other - often through simple technologies (e.g. column width)
  • the aesthetics of interface, particularly in terms of making an interface feel tangible - especially through synaesthesia.
  • non-manipulative strategies for utilising marketing and surveillance technologies to turn the 'control society' back on itself.
  • simple interventions which show a field of relations not just in a new light, but in one that makes its interactions clearer to see.
  • the possibilities of speech recognition technology for exploring literary aesthetics and Wittgensteinian notions of 'coronas' of meaning.
  • the generation of optimism and helplessness through certain interactive technologies, set against the conceptual problems of hope, especially in terms of anthropological difference. (And we might ask here: can machines hope? What is it they can't do?)
  • the ordering of concepts and knowledge - modes of categorisation and relation.

7 Sept 2009

Unknowable hope? - some comments in lieu of a continuation

Having just returned to this blog in order to post on Kierkegaard, I've been re-reading the Can Dogs Hope mini-project to see if it can be moved forward.

The central issue seems to be this: if we are entirely unable to show that dogs display what we might term hope behaviour, then we have no right to say that they can hope.

A further question arises: if we are unable to show that dogs display hope behaviour, do we have a right to say they can't hope?

Is this the right kind of question to ask? What would we be saying if we claimed an ant couldn't hope, or even that a rock couldn't hope? But we can imagine a situation where such terms have sense - for example where someone is pursuing an argument by analogy or metaphor, and we wish to disprove it or show its inappropriateness.

What would we make of someone who, when we remarked - watching a dog gazing up at a person with food - that the dog was hopeful, replied "dogs can't hope"? Would it be different if we were in a pet shop and had said "my dog's hoping I bring him back a toy" or "I think my dog hopes I'll bring him back something tasty"? I think it would; and yet surely we can make sense of the dog believing either of these things.

I feel no closer to an answer, and I have to get to bed.

Kierkegaard and the dialectics of despair

I wanted to take a moment, not so much to reflect upon (for I've little time) but simply to record a few notions of Kierkegaard's from The Sickness Unto Death. For Kierkegaard, despair comes in three formulations - being unconscious in despair of having a self, wanting in despair not to be oneself, and wanting in despair to be oneself. Despair before god is sin; it constitutes a wilfulness before god. Thus the opposite of despair is not hope but faith - a faith in which one humbles oneself before god. And likewise the opposite of sin is not virtue but the same said faith.

So what of hope in Kierkegaard's formulations? Hope for Kierkegaard is intimately bound up with despair - is even a form of it. Hope is the despair of the possible without what is necessary or determined - the adolescent's despair is founded on hope: "he hopes for the extraordinary both from life and from himself" (2008, 69). The hopeful individual cannot acknowledge the limits of his or her existence; it seems quite natural that this could end in wanting in despair not to be oneself (if one feels one's weakness is the barrier to fulfilment) or wanting in despair to be oneself (if one feels the barrier is external).

But I suspect this isn't exactly what Kierkegaard was getting at. For one, these latter forms of despair depend on more spirituality than he credits the hopeful youth with, whose main concern is over earthly things or the earthly in itself. Nor is the process outlined above enough to claim that hope is a form of despair, for it could be argued that what's been shown is a causal sequence; we could claim that love inevitably ends in hatred, but this is not sufficient to claim that love is hatred. And it does seem that Kierkegaard wants to argue for a stronger relation than causation: "Instead of taking possibility back to necessity he runs after possibility - and in the end cannot find his way back to himself" (2008, 41).

If hope leads us away from the self, perhaps hope is then a form of despair in the first sense - that of being unconscious in despair of having a self. This seems the most coherent interpretation. But personally I am not convinced that this first form of despair is properly despair at all - even if it is a wilful unconsciousness of being in a state of despair, isn't it simply a cover for despair rather than despair itself? And doesn't it fall subject to all the pitfalls of positing something in unconsciousness - that it is far too easy to create an internally coherent system with its use, and near impossible to produce something demonstrable or disprovable? That's not to say Kierkegaard is necessarily wrong; rather that it's very much for Kierkegaard to show that he's right, and I don't think he manages this.

30 Mar 2009

I will repeat everything you say

I've been experimenting the last few days with feeding speech recognition interpretations of a paragraph from H.G. Wells's The War of the Worlds back into speech recognition systems to see what effect they had. I'm using a system called 'talkback', which is part of the Microsoft Speech Development Kit. I'd open several talkback windows, read a few sentences and then let the system speak back to itself over and over.

I'm a little unsure what to make of my experiments. I came to using speech recognition systems creatively with a mind to exploring what James Guetti calls 'sentence sounds' - based on the premise that, as speech recognition systems look at the words surrounding a word to calculate what it is, they might grasp a sentence in a way that is at least usefully dis-analogous to how we grasp a sentence aesthetically.

But what my play has thrown up is a decaying discourse, where the only constants - "I heard" before it gives its interpretation, and "when you said" after - get increasingly repeated. It gets stuck in one distortion of these - for instance "when you said" gets transformed into "when you shouldn't", "when you used", etc. Occasionally it throws up something suddenly strange and poignant (for instance, "and we want to understand" in one long piece of ramble).
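The decay is easy enough to caricature in code. The following is only a toy stand-in for the talkback loop - a hand-rolled 'mishearing' function with a made-up confusion table, not the Microsoft SDK's actual behaviour - but it reproduces the way the framing phrases pile up and distort:

```python
import random

# invented confusion table, loosely echoing the distortions observed above
CONFUSIONS = {
    "said": ["shouldn't", "used"],
    "heard": ["hurt", "herd"],
}

def mishear(text, error_rate=0.3):
    """One pass of the loop: distort a few words, add the fixed framing."""
    words = [random.choice(CONFUSIONS[w])
             if w in CONFUSIONS and random.random() < error_rate else w
             for w in text.split()]
    return "I heard " + " ".join(words)   # the framing accumulates every pass

utterance = "when you said no one would have believed"
for generation in range(5):
    utterance = mishear(utterance)        # feed the output back into the input
    print(generation, utterance)
```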

Clearly my methodology is not appropriate to my original interests, but I wonder if I've stumbled upon something of potential interest here. I'm not sure if there's any direct knowledge or insight to be had; but perhaps it could be manipulated to serve as an interestingly dis-analogous model of some other process? Something of Bernard Stiegler's pops into my head - was it 'circuits of trans-individuation' perhaps? I'll have to take a look.

In any case, I hope to mix a track out of it for the next Fuselit, so hopefully I can make something enjoyable to listen to at least.

15 Mar 2009

Wittgenstein and Digital Animals

“The human body is the best picture of the human soul” – that much misused and over-quoted line of Wittgenstein’s is interesting on a number of levels. Part of what it does (in the context where it appears) is draw a red line across our questioning of the existence of other minds. However removed or transcendent the soul might be, the body (which I take to include behaviour, and not just the physical form) is our point of reference for understanding something’s mind.

Now, the question of whether other minds beside one’s own exist at first glance seems like a typical philosophical indulgence – spending an age answering questions that only get us to where we were before we started asking them. But it’s of crucial significance when it comes to looking at non-human beings – if we’re going to deny that something has a ‘mind’, to deny it’s worthy of consideration in our ethics, then we’d better be damn sure we’re setting the bar at a level that doesn’t rule out the human race in its entirety.

Arguments go back and forth about the ethical status of animals – I won’t go into them here, except to mention an interest in Daniel Dennett’s idea that true suffering requires a measure of mental complexity – a dog suffers more than a shrimp under his scheme. I’ll return to this at a later point (I hope!). The point is that what type of mind an animal has, and which attributes of mind are of ethical interest, are the crucial questions here.

Now, I think we’re very much still at the stage of talking about digital animals of one form or another – whether computer viruses, Creatures or whatever; but what I’m interested in is the essential differences between a digital animal and an “analogue” one (ick, sorry about that formulation!) with the aim of coming up with a pretty decent list. I’ll start exploring a few obvious ones, which will probably mirror the normal digital versus analogue contrasts. Note that I’m not interested in the difference between biological and pseudo-biological (e.g. androids) but between biological and digital proper (e.g. computer programs as life). I won’t worry for now if the differences end up a bit blurry:

1. The only decay that necessarily affects digital creatures is the decay of their environment. An animal in the woods is threatened not just by the deterioration of its environment to the point where it can no longer sustain itself or reproduce successfully, but also by the inevitable decay of its own body. For a digital creature, a failure of infrastructure could be fatal (a hard disk error for example), but there is no logical reason why it should necessarily deteriorate and die. Of course, artificial life simulators like Creatures do introduce an artificial aging process and decay, and it is also possible that a creature’s code could tend towards entropy; however, neither artificial limitation nor entropy is logically essential. Let’s call this the no necessary decay principle for now. We’ll leave alone for now the possibility that normal biological creatures might not necessarily decay – that it might be a beneficial evolutionary feature for a species to decay and die – just for the sake of efficiency!

2. Related to this: a frozen copy of a digital life form can be taken at any point, and as such it is always possible to restore a digital life form to a previous state, provided copies are kept. Can we do the same with single-celled life? Perhaps. But it is certainly beyond us to do this with more complicated animal life, and it’s the more complicated animals which are of ethical interest. Let’s call this the backup principle.

3. The ‘personal identity’ of a digital lifeform is far less clear than a biological one’s. We know that Dolly the sheep, although genetically identical, was not the same sheep as the one she was cloned from. The fundamental difficulty for digital lifeforms is that they move in the same way as they are cloned: it only counts as moving if the copy at the original location is eliminated. So we’ll call this the movement through cloning principle (sketched in code after this list, together with the backup principle).

4. Digital life would occupy a very different geography – but a geography nonetheless. An unplugged network cable could form an impassable barrier, as could a firewall. Actually, if you think of computer viruses as digital life, is there a great difference in the kind of geography they occupy? There are physical barriers, threats, safer hiding places and so forth. They can adapt, form defensive behaviours, etc. Digital geographies can radically change, however – think about a hard disk format. But then we have floods, volcanoes and earthquakes. Perhaps it relates to the above in that you have restorable geographies. Think about a computer switched off, then switched on. Perhaps this is like land cut off by the tides. We could also call them flickering geographies. At the least, digital geography is a distinctly exaggerated version of normal geography.
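A throwaway sketch of principles 2 and 3 together (the ‘creature’ here is an invented stand-in – a dictionary of state – not anything from Creatures itself):

```python
import copy

creature = {"name": "norn-1", "age": 3, "mood": "hopeful"}

# backup principle: freeze the state at any point, restore it at will
snapshot = copy.deepcopy(creature)
creature["mood"] = "despairing"
creature = copy.deepcopy(snapshot)        # as if nothing had happened

# movement through cloning: copying to the new host only counts as
# movement because the original is eliminated
host_a, host_b = [creature], []
host_b.append(copy.deepcopy(host_a[0]))
del host_a[0]
print(host_a, host_b)   # [] [{'name': 'norn-1', 'age': 3, 'mood': 'hopeful'}]
```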

There’s a lot more to be said, and a lot that’s wrong with the above. It would take some time to unpick, but it’s worth laying out, if only to sort it out later.

We can see that the extent to which geographies and beings can be restored provides a very different ethical situation – imagine if we could torture a person, then flick a switch and it’s as if nothing happened. We’d want to say it’s still wrong, but my feeling is that something about it messes with our ethical sense. Again Wittgenstein is of use here:

If a man’s bodily expression of sorrow and of joy alternated, say with the ticking of a clock, here we should not have the characteristic formation of the pattern of sorrow and the pattern of joy.

And we can conceive of doing exactly this with a digital lifeform. The extent to which we could manipulate such a being is limited only by the complexity of the apparatus we could create to do such a thing. And it could only get away from us if digital life developed complexity beyond human intervention; or alternatively, if the source code for this complexity was destroyed.
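A trivial sketch of just how conceivable this is – an invented ‘lifeform’ whose expression is toggled by nothing more than the tick count:

```python
import itertools

# alternate the 'bodily expression' with the ticking of a clock
expressions = itertools.cycle(["sorrow", "joy"])
for tick in range(6):
    print(f"tick {tick}: {next(expressions)}")
```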

This is just a doodling of ideas that I’m not going to try to bring to any fine conclusion yet. It does seem to be a visually rich way of exploring networked digital technology, but whether it has any solid implications needs further exploration.

6 Feb 2009

Machines and mental health

I've been reading Carl Rogers, and he made some interesting remarks about how a person can be placed "in a helping relationship with a Machine" (45), based on a study by O. R. Lindsley. The example given was of a person with chronic schizophrenia who was put in contact with a machine that would give him rewards when he pulled a lever. Oddly, this resulted in a distinct improvement in his condition. Then the rewards were taken out, so that when he pulled the lever nothing happened - and he quickly deteriorated. Rogers' reading of this was that "even in a relationship to a machine, trustworthiness is important if the relationship is to be helpful." (46)

It's quite amazing to think that this would make such a difference - and I wonder if it makes far more sense in terms of Seligman's notion of learned helplessness than in Rogers' relationship analogy. After all, what the patient has been granted is control over their environment. But the idea of having these kinds of relationships with machines is fascinating. The proof is in the detail, so really I need to track down Lindsley's original study.
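In the meantime, a toy rendering of the contingency at stake (entirely my own framing, not Lindsley's apparatus): in the first phase the outcome depends on the action; in the second, the lever controls nothing.

```python
# Phase 1: the lever controls the environment; Phase 2: it controls nothing.
def contingent_world(pull_lever):
    return "reward" if pull_lever else "nothing"

def non_contingent_world(pull_lever):
    return "nothing"   # the rewards have been taken out

for name, world in [("contingent", contingent_world),
                    ("non-contingent", non_contingent_world)]:
    print(name, [world(True) for _ in range(3)])
```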

26 Jan 2009

New directions

The main project on this blog - Can Dogs Hope? - seems to have run aground, and I've found myself unable to put in the necessary work to make headway. Could it be that it's simply because the question isn't direct enough, urgent enough?

In any case, I'm going to deviate from this, relating to the main themes of this blog only very loosely. I think having a space for one's thoughts, and putting them even nominally into the public domain (I have few delusions about people actually reading this), is a very healthy exercise and one I hope I can continue with.