Are Machines Ethical?


In an episode of Jonathan Creek (don't bother to find it, it's not good) a woman spilled her mother's ashes on the floor. She went to fetch a vacuum cleaner from another house, but on her return she found the ashes had been stolen. Of course, they hadn't: her mother owned a robotic vacuum cleaner that emerged at intervals and had sucked up the ashes. And no, it makes no sense to ask whether this machine behaved ethically.

There have been a number of articles recently about machine-based activities in the legal sphere--document assembly, e-discovery and case analysis. This follows developments such as Google's driverless car, which by 2012 had logged 300,000 accident-free miles, the rise of high-frequency trading in the stock markets (see Michael Lewis, Flash Boys: A Wall Street Revolt), and machine-controlled laser surgery for eye correction. The trend is clearly growing, possibly exponentially.

Whether or not we are approaching the point of singularity (there are arguments both ways), huge resources are being put into the mechanisation of law. In part this is because machines, robots and algorithms can do repetitive tasks more efficiently than humans, and in part because machines tend to be cheaper than humans. From a Marxist perspective the shift from labour to machines makes sense: the returns to capital are much greater.

To approach the question in my title: ethics is concerned with good, proper behaviour that accords with the standards and principles a profession abides by. It is also concerned with things that go wrong: mistakes, malfeasance, mischief.

Paul Virilio, the French philosopher, articulated the essential paradox of technology: to invent something is to invent its negative. Invent the ship and you invent the shipwreck; invent the railway and you invent the derailment; invent the car and you invent the pile-up. Every advance in technology and machines creates its negative form. It is never a matter of if, only of when. And modern society operates so quickly that the vital variable is speed.

Glitches in software and algorithms occur and have worldwide effects--for example, the collapse of the commodities and stock markets in 1987, when program trading went out of control and produced Black Monday. Even allowing for unintended consequences, we have to build in rules for machines to decide what actions to take when faced with catastrophic choices.

Tom Chatfield puts the trolley problem at the centre of the issue. A tram runs out of control and the driver sees that he is about to hit five men working on the track. He can, however, turn onto a siding, but in doing so he will kill a single man. Without delving into the deep void of the trolley problem and its variants (the fat man), I suggest we need to start thinking about this in the legal sphere as machines and algorithms become more common, especially in the face of legal aid cuts and the like. (For further information on the trolley problem and its variants, go to Experimental Philosophy.)

Given that automation is rising and computer-based legal services are increasing, how are we going to program machines to handle errors? And ultimately, who will be responsible for those errors? Chatfield refers to two modes: automatic and manual. Humans are capable of both. We can adjust our behaviour to the moment, almost automatically, but we are also capable of thinking through the longer-term consequences of our actions in manual mode. We bring heads and hearts together.

Algorithms don't do that. They are usually designed to optimise a particular outcome under given conditions. If I'm in a driverless car that, through some accident, is about to plough into a group of people, it could decide that veering off and killing me is the preferable outcome. I would disagree, of course.
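To make that concrete, here is a minimal, hypothetical sketch (in Python) of the kind of rule being described: a purely utilitarian choice that minimises expected casualties and gives the passenger no special weight. The option names and numbers are illustrative assumptions for this article, not any real vehicle's code.

# A hypothetical utilitarian decision rule: pick whichever option is
# expected to cost the fewest lives, with no special weight for the passenger.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_casualties: int

def choose(options):
    """Return the option with the fewest expected casualties."""
    return min(options, key=lambda o: o.expected_casualties)

options = [
    Option("stay on course and hit the group", expected_casualties=5),
    Option("veer off and kill the passenger", expected_casualties=1),
]
# The rule sacrifices the passenger -- the outcome the passenger would dispute.
print(choose(options).name)

A real system would weigh probabilities and uncertainty rather than a single number, but the underlying logic, minimising aggregate harm whoever bears it, is the utilitarian viewpoint discussed below.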

Some might argue that the algorithm's decision is ethically superior to my wants. But it isn't thinking that way; it simply takes a utilitarian view, to my cost. In a way the algorithm is superior because it doesn't let sentimentality intrude. Some artificial intelligence experts have argued that there is nothing wrong here as long as the programming is transparent and we can all understand what the consequences will be. We take our risks knowingly.

What is more likely, however, is that we will outsource more activities to machines believing we've overcome the difficulties, without actually investigating whether we have. Chatfield says:
As agency passes out of the hands of individual human beings, in the name of various efficiencies, the losses outside these boxes don’t simply evaporate into non-existence. If our destiny is a new kind of existential insulation – a world in which machine gatekeepers render certain harms impossible and certain goods automatic – this won’t be because we will have triumphed over history and time, but because we will have delegated engagement to something beyond ourselves.
We know the consequences of this kind of delegation. We see them in the privatisation of prisons and health and, more dangerously, of security.

As more areas of law come within the sphere of algorithms and machines, we will need to consider carefully the ethical problems that will inevitably arise. Accidents will happen, and people's livelihoods, liberty and property may all be at stake. How easy will it be to correct mistakes in an online divorce involving children, property, pensions and the like? Who or what will be culpable? How will errors be discovered? Who will have the authority to declare them? Or will we subscribe to a utilitarian ethos that it must all be for the greater good, so we should just lump it?

We don't have to wait for the point of singularity to start working these out.

