In Philip K. Dick’s novel Do Androids Dream of Electric Sheep? (filmed as “Blade Runner”), humans have created a race of robotic workers – androids, called “replicants” in the film – who are virtually indistinguishable from real humans. Humanity feels threatened when these creatures turn sentient, and sends bounty hunters (who may themselves be androids) to hunt down and destroy the renegades. What follows is a battle between sentient beings – one artificial, the other (possibly) human.
A quick tangent: in recent news (see New Scientist, October 8th), police in Santa Cruz, California, are testing a computer system that takes a real, recent crime as input and – using a database of the crime’s location and other local information, plus a sophisticated model based on earthquake aftershocks – predicts where nearby crimes will occur in the future, days in advance.
The police use these predictions to position their patrols, effectively preventing crime before it happens. (Again, like another of Philip K. Dick’s stories, “The Minority Report”.)
Previously, police would use their experience and intuition to try to predict where to patrol. This suggests that in the future, a previously human activity – based on skill and experience – will be performed by computer.
Back to electric sheep, and androids battling each other. Could the future of litigation be one where algorithms battle each other in electronic courtrooms?
Imagine some time in the future – fifteen years from now, I would guess – when litigation in some courts is machine versus machine, under the direction of a few human controllers and judged by another machine.
Vast data sets are trawled by search engines, which pull together an understanding of the data inside and draw up a semantic model of the concepts. (This happens now – see the directed e-discovery, meaning-based assessment and early case assessment tools available from Clearwell, Recommind, Autonomy and other suppliers.)
Take this a step further, and you can see that the semantic model can then be pattern-matched against similar cases or regulations, to look at how the facts can be assembled into a case.
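To make the idea of pattern-matching a semantic model against past cases concrete, here is a minimal sketch using TF-IDF term weights and cosine similarity – a deliberately simplified stand-in for the far richer models real e-discovery tools use. The case summaries are invented for illustration.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build simple TF-IDF term-weight vectors for tokenised documents."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency per term
    n = len(docs)
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]

def cosine(a, b):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    norm = (math.sqrt(sum(w * w for w in a.values()))
            * math.sqrt(sum(w * w for w in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical corpus: tokenised summaries of past cases, plus the new matter.
corpus = [
    "breach of contract late delivery damages".split(),
    "patent infringement software licence".split(),
    "breach of contract defective goods damages".split(),
]
new_matter = "contract breach late delivery damages".split()

vecs = tfidf_vectors(corpus + [new_matter])
scores = [(cosine(vecs[-1], v), i) for i, v in enumerate(vecs[:-1])]
best_score, best_case = max(scores)   # most similar precedent
```

On this toy data the first case (late delivery) scores highest, the unrelated patent case scores zero – the AI would surface the top matches as candidate precedents.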
A human (or panel of humans) directs the artificial intelligence (AI) on the specifics of the case. The AI queries the human(s) about similar cases, culled from court circulars or previous matters, and the humans agree or disagree with the degree of similarity. This gives the AI a legal model of the current case. It is like the crime-fighting system, which – given a real crime and a location – can model where future crimes might occur: pattern matching on a massive scale.
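The agree/disagree loop could be sketched with Rocchio-style relevance feedback, a classic information-retrieval technique used here purely as an illustrative stand-in: the query model is nudged toward precedents the human accepted and away from those rejected. All names, vectors and weights below are invented.

```python
def rocchio_update(query, agreed, rejected, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio relevance feedback: move the query's term weights toward
    the accepted precedents and away from the rejected ones."""
    terms = set(query)
    for v in agreed + rejected:
        terms.update(v)
    updated = {}
    for t in terms:
        pos = sum(v.get(t, 0.0) for v in agreed) / len(agreed) if agreed else 0.0
        neg = sum(v.get(t, 0.0) for v in rejected) / len(rejected) if rejected else 0.0
        w = alpha * query.get(t, 0.0) + beta * pos - gamma * neg
        updated[t] = max(w, 0.0)      # keep term weights non-negative
    return updated

# Hypothetical feedback round: the human accepts one precedent, rejects another.
query = {"contract": 1.0, "damages": 1.0}
agreed = [{"contract": 1.0, "delivery": 1.0}]
rejected = [{"patent": 1.0, "damages": 0.5}]
new_query = rocchio_update(query, agreed, rejected)
```

After one round, terms from the accepted precedent (“delivery”) gain weight while terms unique to the rejected one (“patent”) stay at zero – each human judgement refines what the machine treats as similar.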
(Note the assumption here that the AI will generally find some precedent; even where there is none, that absence may itself be useful information for the case.)
Using a bank of previous cases for comparison (to learn what is ‘relevant’, and what can form an argument), the AI constructs a position that is logical (to it) and justified by the data and evidence. It can resolve ambiguous terms by interacting with humans.
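One very rough way to picture “assembling the facts into a case”: map each legal element the claim requires onto the evidence that supports it, and flag unsupported elements as gaps the AI would take back to its human controllers. The elements and evidence tags below are entirely hypothetical.

```python
def assemble_case(elements, evidence):
    """Map each required element of the claim to its supporting evidence;
    elements with no support are gaps to query a human about."""
    case = {e: [item["id"] for item in evidence if e in item["supports"]]
            for e in elements}
    gaps = [e for e, support in case.items() if not support]
    return case, gaps

# Hypothetical contract claim: these elements must each be evidenced.
elements = ["offer", "acceptance", "breach", "causation", "loss"]
evidence = [
    {"id": "E1", "supports": ["offer", "acceptance"]},
    {"id": "E2", "supports": ["breach"]},
    {"id": "E3", "supports": ["loss"]},
]
case, gaps = assemble_case(elements, evidence)
```

Here “causation” comes back unsupported, so the machine would ask its humans for more evidence or concede the weakness – exactly the kind of interaction described above.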
The other side’s AI does the same.
(Note: these could be court-appointed, so the contest is held between effectively equal AI systems – like Formula 1 motor racing, where the differences between the cars are strictly regulated and results are therefore significantly influenced by the skill of the driver and the strategy of the team.)
Judgement: the two opposing AIs meet a third, judge AI, which assesses the merits of each case and indicates their relative strengths. The two sides may then settle on the basis of this assessment; or they may choose – a human decision – to have a human judge arbitrate.
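A toy sketch of what the judge AI’s output might look like, assuming each side’s arguments have already been scored 0 to 1 (the scores and the settlement threshold are invented for illustration):

```python
def judge(claimant_scores, respondent_scores):
    """Toy 'judge AI': average each side's per-argument strengths (0..1)
    and report relative merit as a share of the total."""
    c = sum(claimant_scores) / len(claimant_scores)
    r = sum(respondent_scores) / len(respondent_scores)
    ratio = c / (c + r)               # claimant's share of total merit
    # Close contests are flagged for settlement rather than a ruling.
    verdict = "settle" if abs(ratio - 0.5) < 0.2 else "clear favourite"
    return round(ratio, 2), verdict

# Hypothetical argument strengths for each side.
ratio, verdict = judge([0.8, 0.7, 0.9], [0.6, 0.5, 0.7])
```

With these numbers the claimant holds 57% of the merit – close enough that the machine recommends settling, leaving the humans (or a human judge) to decide whether to accept.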
Note: I’ve said three AIs (two opposing sides plus a ‘judge’), but as they are essentially identical machines with access to the same dataset, they could be a single AI.
There you go: human society ruled by electronic judges.
And the place for humans to be? The appeals court!
Pros: cheap, fast, and practically infinitely scalable. Almost all of the technology already exists (the judge AI, as far as I know, doesn’t). It is similar to the trading algorithms that currently handle billions of dollars of trades a day (though those arguably follow a simpler kind of ruleset). The AI would quickly build a data warehouse of case law in machine-readable form that it could consult for precedents, reducing the need for human checking and guidance. Corporations may feel this is fairer, cheaper and faster. Access to justice would increase.
Cons: the “rise of the robots” (a feeling of lost control over human society), and a “black-box” body of case law that quickly leaves humans unable to follow judgements. Questions around disclosure also remain.