Are We There Yet? Artificial Intelligence and Automation in the Adjudicatory Process

Now more than ever, the technological revolution is pushing the boundaries of established practices across all spheres of society, including the dispute resolution field. The question is whether justice, that quintessentially human endeavour, will one day also be entrusted to artificial intelligence (“AI”). In this post, we look at examples of how and where technology has already affected the adjudicatory process, as well as the current debate on whether it is advisable to automate judicial decision-making.

Decision-making is a complex process

How will the decision-making process evolve in the coming years? Can we envisage machines adjudicating disputes in the near future? Adjudicators – be they judges or arbitrators – are entrusted with the difficult task of making decisions.

In his world-acclaimed book Thinking, Fast and Slow, Nobel laureate Daniel Kahneman states that “whenever we can replace human judgment by a formula, we should at least consider it” because “humans are incorrigibly inconsistent in making summary judgments of complex information” and are extremely dependent on “unnoticed stimuli in our environment which have a substantial influence on our thoughts and actions”.

A concrete example is a well-known study suggesting that a convict’s prospects of being granted parole change significantly depending on whether the judges reach their decision before or after having lunch. The study’s findings suggest that judicial rulings can be swayed by external factors, such as food breaks, that should in principle have no bearing on legal decisions.

Such studies underpin the view that technology – and in particular artificial intelligence – could help remove these external factors, unavoidable for humans, from the decision-making process.

Contemporary examples of the use of technology in decision-making processes

As of 2018, the use of algorithms and artificial intelligence in decision-making within European judicial systems was limited. Yet recent developments show that some governments are taking serious steps to address the technological revolution. Estonia, for example, has reportedly explored deploying an AI “judge” to adjudicate small claims disputes with a value of less than EUR 7,000. Furthermore, China’s Beijing Internet Court recently announced that it had launched an “AI judge”, with the caveat that the tool would only assist judges to “complete repetitive basic work” in order to “improve the quality and efficiency of judicial work”.

In fact, automated resolution has already been put in place in the private sector for small value disputes. Large corporations such as Alibaba, Amazon and eBay have implemented online dispute resolution (ODR) mechanisms that resolve disputes without human intervention. eBay, for example, is known to handle over sixty million disputes a year in relation to purchases and sales on its platform. Other programmes, such as Modria and Smartsettle, also offer automated dispute resolution toolkits.

Within the European Union, the European Parliament stated in its Recommendations on civil law rules on robotics that, although automatic and algorithmic decision-making processes will have an increasing impact in the judicial sphere, “it is necessary to incorporate guarantees and possibilities for human control and verification into automatic and algorithmic decision-making processes”.

Along the same lines, Article 22(1) of the European General Data Protection Regulation (GDPR) provides that “[t]he data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. Exceptions apply where the processing is necessary for entering into or performing a contract, where it is authorised by applicable law, or where the individual explicitly consents to the automated processing. For any automated dispute resolution system whose decisions affect natural persons, this means that, even with the individual’s consent, the system must give individuals the right to obtain human intervention, to express their point of view and to contest the decision (Art. 22(3) GDPR).
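The structure of Article 22 can be sketched as decision logic. The following Python fragment is a purely illustrative simplification – not legal advice – and all class, field and function names are our own invention, not taken from any real compliance library:

```python
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    solely_automated: bool          # no meaningful human involvement in the decision
    legal_or_similar_effect: bool   # produces legal or similarly significant effects
    basis: str                      # "contract", "law", "consent", or "none"

def art22_assessment(d: AutomatedDecision) -> str:
    """Rough sketch of the logic of GDPR Art. 22(1)-(3)."""
    # Art. 22(1) only bites if the decision is solely automated AND has
    # legal or similarly significant effects on the data subject.
    if not (d.solely_automated and d.legal_or_similar_effect):
        return "outside Art. 22(1) - general GDPR rules still apply"
    if d.basis == "law":
        # Art. 22(2)(b): the authorising law must itself lay down safeguards.
        return "permitted if the authorising law provides suitable safeguards"
    if d.basis in ("contract", "consent"):
        # Art. 22(3): human intervention, right to be heard, right to contest.
        return "permitted only with human intervention and a right to contest"
    # No exception applies: the data subject may not be subjected to the decision.
    return "prohibited under Art. 22(1)"
```

The key point for dispute resolution platforms is visible in the middle branch: even where a contract or consent covers the processing, the safeguards of Article 22(3) cannot be contracted away.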

Pros and cons of automated decisions

Various pros and cons may result from the implementation of automated decisions.

Among the “pros” of automating the decision-making process, one can identify the following:

  1. Prevents bias of human adjudicators: every person is consciously or unconsciously biased. The concern is that an adjudicator might unconsciously attribute greater weight to the testimony of the white witness, the advocacy of the male counsel or the opinion of a co-arbitrator who shares the same religion, legal tradition, values or socio-educational background. The suggestion is to use AI to correct the unconscious biases of adjudicators since, some authors argue, machines are less susceptible to such biases and do not suffer from other shortcomings that affect human judgment.
  2. Time and costs are spared: the adjudication process (whether arbitration or ordinary judicial proceedings) can be notoriously costly and slow. Using AI for dispute resolution could dramatically reduce the costs of proceedings and give consumers and small businesses better access to dispute resolution mechanisms. Furthermore, with the demand for justice at record highs, a digital adjudicator that works fast would allow justice to be served in a prompt and timely manner, avoiding unnecessary delays.

As for the “cons” of an automated decision-making process, experts have identified the following:

  1. Risk of biases of the algorithm: Programmers and other individuals intervening in the process of construction and programming of the AI may replicate bias without intending to do so. Indeed, an algorithm is the product of its inputs, which emanate from human decisions about how the algorithm should be constructed. As machine learning depends on the data the AI is fed, would AI learn biases in favour of, for example, important companies that win disputes more frequently, disputing parties from particular countries, or certain types of claims?
  2. Lack of transparency of the decision: machine-learning algorithms can be remarkably good at solving problems; the downside is that it is often impossible for humans to identify how the algorithm reached its decision. The process can be a mystery even to the programmers who built it. In the context of adjudication, this creates the delicate situation where no reasons, and no account of the steps leading to the decision, can be provided. Yet in complex disputes the reasoning is usually extremely important, in particular for the losing party.
  3. Crystallisation of case law: an algorithm is a product of its inputs and the data it is fed, so its outputs are based on an analysis of existing data. Existing biases and assumptions are thereby replicated and perpetuated, stifling the possibility of change and the development of the law in response to evolving human thinking. Humans are still needed to think creatively and with ingenuity, beyond the existing framework. A decision made by an AI tool can therefore end up being inherently conservative.
  4. Pushing the boundaries of due process: due process is the guarantee of procedural fairness in judicial proceedings. How would AI, or any other software, guarantee that an automated decision is the product of a procedure respecting due process, a pillar of any adjudicatory process? Dispute resolution proceedings are often beset by procedural hurdles raised by the parties, sometimes amounting to outright “guerrilla tactics”, that can only be adequately resolved through a seasoned adjudicator’s experience, knowledge of the procedure and good sense of fairness.
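The bias-replication concern in point 1 above can be made concrete with a toy example. The data below is entirely fabricated for illustration, and the “model” is deliberately naive: it simply echoes the most frequent past outcome for each category of party, never examining the merits of the case:

```python
from collections import Counter

# Hypothetical historical outcomes: in this invented record, large companies
# happened to win more often than small parties did.
history = [
    ("large_company", "win"), ("large_company", "win"),
    ("large_company", "win"), ("large_company", "lose"),
    ("small_party", "lose"),  ("small_party", "lose"),
    ("small_party", "win"),   ("small_party", "lose"),
]

def majority_outcome(party_type: str) -> str:
    """Predict the most frequent past outcome for this type of party."""
    outcomes = Counter(o for p, o in history if p == party_type)
    return outcomes.most_common(1)[0][0]

# The model reproduces the skew already present in its training data:
print(majority_outcome("large_company"))  # win
print(majority_outcome("small_party"))    # lose
```

Real machine-learning systems are vastly more sophisticated, but the underlying dynamic is the same: patterns in historical data, legitimate or not, become the basis for future predictions.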

Concluding remarks

Hannah Fry, the author of the book Hello World: How to be Human in the Age of the Machine, concludes that “one thing is for sure. In the age of the algorithm, humans have never been more important.”

Indeed, the general consensus among dispute resolution practitioners is that technology will play an increasingly important role in the adjudicatory process, with tools supporting the work of judges and arbitrators in their decision-making. Nevertheless, there is still a long road ahead before fully autonomous machine-made decisions become reality. At this stage, it is more realistic – and more judicious – to foresee a future in which adjudicators are supported by artificial intelligence, but not replaced by it.

In any event, advancing the practice and the understanding of automated decisions may ultimately provide expanded access to justice for citizens around the world, helping fulfil the objective of justice for all. This alone is a goal worth pursuing.

MLL Meyerlustenberger Lachenal Froriep Ltd

MLL is one of the leading law firms in Switzerland with offices in Zurich, Geneva, Zug, Lausanne, London and Madrid. We specialise in representing and advising clients at the intersection of high-tech, IP-rich and regulated industries.
