In the movie Minority Report (2002), Tom Cruise showed us how a police department in 2054 AD could prevent crime by predicting its occurrence through advanced technology. It seems we are not so far from that future now. The use of artificial intelligence (AI) in the courtroom is increasingly common today, with large data sets used to predict the likelihood that a person will commit a crime in the future. AI now plays a significant role in our legal system, since algorithms are used for various purposes, including to “set bail, determine sentences, and even contribute to determinations about guilt or innocence,” as reported by the Electronic Privacy Information Center.
This phenomenon raises an important question: does AI refine justice in our legal system, or does it exacerbate injustice? This essay argues that although AI can bring promising advancements to the court, by decreasing human bias in judicial decision-making and accelerating the processing of small cases, it also poses challenges in its implementation due to the unequal structure of our society. First, AI-based predictive policing methods tend to create prejudice. Second, AI is likely to reinforce bias if the underlying data is already biased against marginalized groups on the basis of race, gender, sexual orientation, and so on. The following sections elaborate each argument in turn.
First, AI predicts the likelihood that a person will commit a crime in the future through risk assessments, which judges then use to inform judicial decisions. However, these assessments tend to create prejudice against defendants, who lack the capacity to challenge them. This was evident in Loomis v. Wisconsin, where Loomis, a defendant in a drive-by shooting case, was sentenced to a six-year prison term based in part on an AI risk assessment. Yet neither Loomis nor the judge had access to the algorithm, because it was owned by the private company that processed the data. This lack of transparency is, of course, the very opposite of how criminal justice should function.
This is a consequence of the proprietary nature of such AI systems, which turns them into a ‘black box’ in the courtroom. In another scenario, the rapid development of deepfake technology also has a disruptive effect on the use of AI in court. Deepfakes can be misused as false evidence, which can lead to false accusations. To rebut such evidence, a defendant must then disclose far more information than he or she would in a conventional trial. This is problematic because it tends to expose more of the defendant’s personal information, which should remain confidential in court.
Second, the use of AI in court was meant to be an alternative that reduces human bias. However, it is up for debate whether AI can deliver a more objective result if the underlying data itself is already biased. A report by ProPublica found that algorithmic risk assessments falsely flagged black defendants as future criminals at almost twice the rate of white defendants. This is dangerously problematic, since the judges who rely on these scores do not understand how they were computed. Such biased data then tends to reinforce racism systematically, instead of eliminating it.
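To make the ProPublica finding concrete, the disparity it describes can be expressed as a difference in false positive rates between groups: the share of defendants who did not reoffend but were still flagged as high risk. The sketch below uses hypothetical numbers chosen purely for illustration (they are not ProPublica's actual figures) to show how such a disparity is measured.

```python
# Illustrative sketch of measuring false-positive-rate disparity between
# two groups of defendants. All numbers here are hypothetical.
# A "false positive" is a defendant flagged high-risk who did not reoffend.

def false_positive_rate(records):
    """records: list of (flagged_high_risk, reoffended) boolean pairs."""
    false_positives = sum(1 for flagged, reoffended in records
                          if flagged and not reoffended)
    negatives = sum(1 for _, reoffended in records if not reoffended)
    return false_positives / negatives if negatives else 0.0

# Hypothetical per-group outcomes: (flagged, reoffended) pairs.
group_a = [(True, False)] * 45 + [(False, False)] * 55 + [(True, True)] * 30
group_b = [(True, False)] * 23 + [(False, False)] * 77 + [(True, True)] * 30

fpr_a = false_positive_rate(group_a)  # 45 / 100 = 0.45
fpr_b = false_positive_rate(group_b)  # 23 / 100 = 0.23
print(f"Group A FPR: {fpr_a:.2f}, Group B FPR: {fpr_b:.2f}")
```

In this made-up example, group A is wrongly flagged at roughly twice the rate of group B even though both groups reoffend at the same rate; this is the kind of disparity a court relying on opaque scores would never see.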
To date, several Western countries already use AI as part of their legal systems. Wired reported that, “In the US, algorithms help recommend criminal sentences in some states. The UK-based DoNotPay AI-driven chatbot overturned 160,000 parking tickets in London and New York a few years ago.” In Estonia, the government has recently gone further, launching an AI-driven ‘robot judge’ to process small cases in its courts. This robot judge is supposed to adjudicate small claims of less than $8,000 by analyzing the legal documents and other relevant information. Estonia believes this advancement could ease the backlog for human judges, letting them focus on more complicated cases. The idea of giving technology an authority usually held exclusively by the state may be unsettling, yet promising in terms of efficiency. Even though human reassessment remains available in this scenario, the skepticism it provokes is similar to that surrounding the current use of AI in China’s courts. So far, China is maximizing the use of AI to implement its ‘similar judgments for similar cases’ policy. In both the Estonian and Chinese cases, we should be critical and question whether a uniform standard of judgment is compatible with individual legal cases, if the goal is to bring more justice to society.
Finally, the use of AI in court is a double-edged sword. It can work as a government tool to reduce human bias and increase efficiency in the courtroom, or it can reinforce the structural violence present in our unequal society if the underlying data is not fairly crafted. At the end of the day, both supporters and opponents of this phenomenon strive for the same goal: inclusive justice. It is therefore important to tackle the challenges mentioned earlier in order to maximize the benefits of AI in court. One possible measure is to involve experts from diverse backgrounds in the engineering of AI for the legal system. The representation of these stakeholders could create a more sensitive and inclusive database, and hence better judgments in court.
Editor: Janitra Haryanto
World Economic Forum. (19 November 2018). AI is convicting criminals and determining jail time, but is it fair? [Online]. Available at: https://www.weforum.org/agenda/2018/11/algorithms-court-criminals-jail-time-fair/ [Accessed: 25 Mar. 2019].
A. n. (3 November 2017). Artificially intelligent judge, jury, and executioner: will algorithms take over criminal justice? [Online]. Available at: https://www.richardvanhooijdonk.com/en/blog/artificially-intelligent-judge-jury-executioner-will-algorithms-take-criminal-justice/ [Accessed: 25 Mar. 2019].
Paris Innovation Review. (9 June 2017). Predictive justice: when algorithms pervade the law [Online]. Available at: http://parisinnovationreview.com/articles-en/predictive-justice-when-algorithms-pervade-the-law [Accessed: 25 Mar. 2019].
Wired. (23 March 2019). Can AI Be a Fair Judge in Court? Estonia Thinks So [Online]. Available at: https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/ [Accessed: 25 Mar. 2019].
Tangermann, V. (25 March 2019). Estonia is Building a ‘Robot Judge’ to Help Clear Backlog [Online]. Futurism. Available at: https://futurism.com/the-byte/estonia-robot-judge [Accessed: 25 Mar. 2019].
Yu, M., & Du, G. (19 January 2019). Why Are Chinese Courts Turning to AI? [Online]. The Diplomat. Available at: https://thediplomat.com/2019/01/why-are-chinese-courts-turning-to-ai/ [Accessed: 25 Mar. 2019].