[PRESS RELEASE] AI and The Policy Practices Impacting On Racial Profiling | Diffusion #77
August 5, 2022 3:35 pm
Yogyakarta, July 28, 2022 — Artificial intelligence (AI) is one of the breakthroughs of technological progress. Many industries now use artificial intelligence in their operations, and its use has touched various aspects of daily human life. Behind this growing adoption, however, problems arise: technology can exacerbate discrimination against certain groups. In response, CfDS, in collaboration with Manchester Metropolitan University, held Diffusion #77 with the theme “AI and The Policy Practices Impacting On Racial Profiling”, inviting Patrick Williams (Lecturer, Department of Sociology, Manchester Metropolitan University) and Wahyono, S.Kom., Ph.D. (Lecturer, Department of Computer Science and Electronics, Gadjah Mada University) as speakers, with Kevin Wong (Associate Director, Criminal Justice, Policy Evaluation and Research Unit (PERU), Manchester Metropolitan University) as the moderator (https://www.youtube.com/watch?v=mvdlXeblumA).
AI and Surveillance System
Starting his presentation, Wahyono explained the basic concepts of AI. The creation of AI stems from the desire to implement human intelligence in a machine or program. Human intelligence can absorb large quantities of constantly changing data and information, but a strategy is needed for humans to make sense of all the information available. AI is one of the technologies that can help humans in that regard.
According to Yono, four main components are needed in the development of artificial intelligence: data, algorithms, computing power, and scenarios. On this occasion, however, Yono focused on one branch of AI technology: the AI-based vision system. A vision system is a system or program that is expected to perceive, interpret, and understand visual input on a computer.
One implementation of the AI-based vision system is the Intelligent Surveillance System (ISS), in which artificial intelligence is embedded in a surveillance system, generally through CCTV. One benefit of the ISS is increased security in public areas. Beyond public safety, the ISS also benefits other areas of life, such as environmental protection and health. For example, the ISS can be used to generate water level warnings to mitigate flood disasters, or to detect mask wearing in public areas to support COVID-19 prevention.
“The use of AI is not a stand-alone study. Therefore, we need to collaborate from one sector to another. In this case, collaboration can be built between the government, the community, and industry,” said Yono.
The Link between the Use of AI Technology and Discriminatory Criminalization
Patrick Williams started his presentation by showing a table of data on individuals who have gone through criminal justice processes in England and Wales. According to the table, minorities are six times more likely to be stopped and searched by the authorities than white people. According to Patrick, minorities are also more likely to be arrested, prosecuted, and punished for their offences. However, official statistics confirm that minority groups offend at rates similar to white people. It can therefore be concluded that there is a gap between the actual crime rate and the criminalization that takes place.
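The "six times more likely" figure is a disparity ratio: the stop-and-search rate of one group divided by that of another. A minimal sketch of the calculation, using hypothetical numbers chosen for illustration (not the actual England and Wales statistics):

```python
def disparity_ratio(stops_a, pop_a, stops_b, pop_b):
    """Relative stop-and-search rate of group A compared to group B.

    A ratio of 1.0 means both groups are stopped at the same rate;
    6.0 means group A is stopped six times as often per capita.
    """
    return (stops_a / pop_a) / (stops_b / pop_b)

# Hypothetical figures: 54 stops per 1,000 minority residents
# versus 9 stops per 1,000 white residents.
print(disparity_ratio(54, 1_000, 9, 1_000))  # roughly 6.0
```

Because the ratio compares per-capita rates, it is independent of group size; Patrick's point is that when offending rates are similar, a ratio far above 1.0 reflects policing practice rather than behaviour.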
On the other hand, the turn to technology is often justified by human inefficiency: the failures of authorities and law enforcement encourage the use of technology for surveillance, or even criminalization. Today, facial recognition and CCTV cameras still tend to misrecognize minority groups, such as black people, and such misrecognition can trigger an encounter with the authorities. In addition, technology encourages the extraction of data through social media monitoring, and various data types, such as health and education records, are often combined. These are examples of problematic data management.
Patrick stated, “The assumption that technology is neutral and can perform human functions in ways that benefit society must be questioned. AI relies on data, and data is only a reflection of bureaucratic law enforcement practice. If the data used is discriminatory, we will start embedding discriminatory ideas into the technology.”
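Patrick's point can be illustrated with a short sketch: a naive "risk score" learned from historical stop records simply reproduces whatever disparity the records contain. The figures below are hypothetical, constructed so that both groups offend equally but one group was policed six times as heavily:

```python
from collections import Counter

# Hypothetical historical records: (group, was_stopped).
# True offending rates are equal, but group "A" was policed
# six times as heavily as group "B".
records = ([("A", True)] * 60 + [("A", False)] * 940
           + [("B", True)] * 10 + [("B", False)] * 990)

stops = Counter(group for group, stopped in records if stopped)
totals = Counter(group for group, _ in records)

# A naive model trained on this data learns the recorded stop
# rate as each group's "risk" - not the true offending rate.
risk = {group: stops[group] / totals[group] for group in totals}

print(risk["A"] / risk["B"])  # the 6x policing disparity, now a "prediction"
```

The model is statistically faithful to its training data, which is exactly the problem: the data encodes past enforcement practice, so the prediction launders a discriminatory pattern into an apparently neutral score.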
If left unchecked, increasingly entrenched discriminatory practices can take an emotional toll on communities and encourage public distrust of law enforcement officials. In specific communities, this can undermine people's sense of belonging and citizenship. Patrick therefore reminded the audience that collaboration between stakeholders is crucial to combat uses of technology feared to exacerbate racial, ethnic, and religious disparities in society.
Author: Aridiva Firdharizki
Editor: Firya Q. Abisono