Several governments are experimenting with AI-driven systems to decide cases, replacing human judges with so-called “robot judges” (though no actual robot is involved).
The major arguments in favor of AI-driven adjudication center on two ostensible benefits: increased efficiency and the elimination of human bias and inconsistency. The latter refers to unwelcome variation in judgments, which may be influenced by factors such as a judge’s level of fatigue or the performance of their sports team the night before.
However, there are compelling reasons to remain wary. Rather than reducing prejudice and discrimination, there are fears that AI-driven algorithms, which apply sheer processing power to human data in order to detect patterns, categorize, and generalize, would instead reproduce and reinforce the biases that already exist, as some studies have shown.
The most common objections to AI-driven adjudication concern outcomes. However, John Tasioulas, head of Oxford University’s Institute for Ethics in AI, argues in a draft research paper that greater attention should be paid to the process by which judgments are reached.
Tasioulas cites a passage from Plato’s Laws in which Plato distinguishes between a “free doctor,” one who is educated and can explain the treatment to his patient, and a “slave doctor,” who cannot explain what he is doing and instead works by trial and error. Even if both physicians cure their patients, Plato contends that the free doctor’s method is preferable, because he is able to keep the patient cooperative and to teach as he goes. The patient is thus not merely a subject but an active participant. Like Plato, Tasioulas uses this example to show why process matters in law. We may think of the slave doctor as an algorithm, dispensing treatment on the basis of something like machine learning, whereas the way the free doctor treats his patients has intrinsic worth. Only the process by which a human judge reaches a decision, he argues, can supply three key intrinsic values.
The first is comprehensibility. Tasioulas contends that, even if an AI-driven algorithm could be programmed to provide some kind of explanation for its decision, this could only be an ex post rationalization rather than a genuine justification, because the decision is not reached through the kind of thought processes that humans use.
The second is accountability. An algorithm cannot be held responsible for its judgments because it lacks rational autonomy. “As a rational autonomous agent with the ability to make judgments… I can be held responsible for my actions in a manner that a computer cannot,” Tasioulas says.
The third is reciprocity, which matters in the dialogue between two rational agents, the plaintiff and the judge, because it fosters a sense of community and solidarity.
The absence of these three qualities makes algorithmic justice dehumanizing. A further concern is the degree to which adopting AI-driven adjudication would shift legal power from public authorities to the private businesses that develop the algorithms.
While there are many sound arguments against AI-assisted justice, there is also a pressing moral imperative to make the legal process more accessible, and this is something automation can help with. According to the OECD, fewer than half of the world’s population lives under legal protection; Brazil has a backlog of 100 million court cases, while India has a backlog of 30 million.
It is a gap of massive proportions. For many, the aim is to eliminate evident injustice rather than to achieve some metaphysical, perfect justice. As the adage says, we must not let the perfect be the enemy of the good. Access to a flawed legal system is, most likely, preferable to no access at all.