Now, the question of whether other minds besides one’s own exist seems at first glance like a typical philosophical indulgence – spending an age answering questions that only get us back to where we were before we started asking them. But it’s of crucial significance when it comes to looking at non-human beings – if we’re going to deny that something has a ‘mind’, to deny that it’s worthy of consideration in our ethics, then we’d better be damn sure we’re setting the bar at a level that doesn’t rule out the human race in its entirety.
Arguments go back and forth about the ethical status of animals – I won’t go into them here, except to mention an interest in Daniel Dennett’s idea that true suffering requires a measure of mental complexity – a dog suffers more than a shrimp under his scheme. I’ll return to this at a later point (I hope!). The point is that what type of mind an animal has, and which attributes of mind are of ethical interest, are the crucial questions here.
Now, I think we’re very much still at the stage of talking about digital animals of one form or another – whether computer viruses, Creatures or whatever; but what I’m interested in is the essential differences between a digital animal and an “analogue” one (ick, sorry about that formulation!), with the aim of coming up with a pretty decent list. I’ll start by exploring a few obvious ones, which will probably mirror the usual digital-versus-analogue contrasts. Note that I’m not interested in the difference between biological and pseudo-biological (e.g. androids) but between biological and digital proper (e.g. computer programs as life). I won’t worry for now if the differences end up a bit blurry:
1. The only decay that necessarily affects a digital creature is the decay of its environment. An animal in the woods is threatened not just by the deterioration of its environment to the point where it can no longer sustain itself or reproduce successfully, but also by the inevitable decay of its own body. For a digital creature, a failure of infrastructure could be fatal (a hard disk error, for example), but there is no logical reason why it should necessarily deteriorate and die. Of course, artificial life simulators like Creatures do introduce an artificial ageing process and decay, and it is also possible that a creature’s code could tend towards entropy; but neither the artificial limitation nor the entropy is logically essential. Let’s call this the no necessary decay principle for now. We’ll leave alone for now the possibility that normal biological creatures might not necessarily decay either – that it might be a beneficial evolutionary feature for a species to decay and die, just for the sake of efficiency!
2. Related to this: a frozen copy of a digital life form can be taken at any point, and so it is always possible to restore a digital life form to a previous state, provided copies are kept. Can we do the same with single-celled life? Perhaps. But it is certainly beyond us to do this with more complicated animal life, and it’s the more complicated animals which are of ethical interest. Let’s call this the backup principle.
3. The ‘personal identity’ of a digital lifeform is far less clear than that of a biological one. We know that Dolly the sheep, although genetically identical, was not the same sheep as the one she was cloned from. The fundamental peculiarity of digital lifeforms is that they would move in the same way as they are cloned: copying is the basic operation, and it only counts as moving if the copy at the original location is eliminated. So we’ll call this the movement through cloning principle (this and the backup principle are sketched in code after this list).
4. Digital life would occupy a very different geography – but a geography nonetheless. An unplugged network cable could form an impassable barrier, as could a firewall. Actually, if you think of computer viruses as digital life, is there a great difference in the kind of geography they occupy? There are physical barriers, threats, safer hiding places and so forth. They can adapt, form defensive behaviours etc. Digital geographies can change radically, however – think about a hard disk format. But then biological life has floods, volcanoes and earthquakes. Perhaps the difference relates to the above in that digital life has restorable geographies. Think about a computer switched off, then switched on. Perhaps this is like land cut off by the tides. We could also call them flickering geographies. At the least, digital geography is a distinctly exaggerated version of normal geography.
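To make the backup and movement through cloning principles concrete, here’s a minimal sketch in Python. It isn’t how Creatures or any real artificial-life system works – the Creature class and its state are invented purely for illustration – but it shows how snapshot-and-restore and copy-then-delete fall out of ordinary copying operations:

    import copy

    class Creature:
        """A hypothetical digital creature; its whole being is its state."""
        def __init__(self, state):
            self.state = state  # e.g. energy, memories, learned behaviour

    def snapshot(creature):
        # The backup principle: a frozen copy can be taken at any point...
        return copy.deepcopy(creature)

    def restore(creature, frozen):
        # ...and the creature restored to that previous state.
        creature.state = copy.deepcopy(frozen.state)

    def move(creature, source, destination):
        # The movement through cloning principle: 'moving' is copying, and
        # only deleting the original makes it movement rather than cloning.
        clone = copy.deepcopy(creature)
        destination.append(clone)
        source.remove(creature)
        return clone

    host_a, host_b = [], []   # two machines, as bare a geography as can be
    fido = Creature({"energy": 10, "mood": "joy"})
    host_a.append(fido)

    frozen = snapshot(fido)        # taken at an arbitrary moment
    fido.state["energy"] = 0       # something terrible happens...
    restore(fido, frozen)          # ...and is undone; no analogue for a dog

    fido = move(fido, host_a, host_b)
    assert not host_a and host_b == [fido]

Notice too that there’s no decay anywhere in this sketch unless we deliberately write it in – the no necessary decay principle again: ageing for a digital creature is an imposed parameter, not a built-in fate.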
There’s a lot more to be said, and a lot that’s wrong with the above. It would take some time to unpick, but it’s worth laying out, if only to sort it out later.
We can see that the extent to which geographies and beings can be restored creates a very different ethical situation – imagine if we could torture a person, then flick a switch and it would be as if nothing had happened. We’d want to say it’s still wrong, but my feeling is that something about it messes with our ethical sense. Again, Wittgenstein is of use here:
“If a man’s bodily expression of sorrow and of joy alternated, say with the ticking of a clock, here we should not have the characteristic formation of the pattern of sorrow and the pattern of joy.”

And we can conceive of doing exactly this with a digital lifeform. The extent to which we could manipulate such a being is limited only by the complexity of the apparatus we could create to do it. It could only get away from us if digital life developed complexity beyond human intervention, or alternatively if the source code for this complexity were destroyed.
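Just to show how cheap such an apparatus would be, here’s a sketch – again with everything invented for illustration – of a digital being whose expression alternates with the ticking of a clock, exactly as in Wittgenstein’s example:

    import itertools
    import time

    # A hypothetical creature's 'expression', flipped from outside by a clock.
    expression = itertools.cycle(["sorrow", "joy"])

    for tick in range(6):
        # Each tick of the one-second clock alternates the bodily expression,
        # so no characteristic pattern of sorrow or joy ever gets to form.
        print(f"tick {tick}: the creature expresses {next(expression)}")
        time.sleep(1)

A few lines of driving machinery is all it takes; the “characteristic formation” of an emotion never has a chance to develop.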
This is just a doodling of ideas that I’m not going to try to bring to any fine conclusion yet. It does seem to be a visually rich way of exploring networked digital technology, but whether it has any solid implications needs further exploration.