The Use of Artificial Intelligence in Election Campaigns: Could It Threaten Democracy?
Fri, 03 May 2019 || By Felice Valeria

The development of Artificial Intelligence (AI) is expected to bring technological innovation into many areas of daily and professional life, including elections, which are crucial to sustaining democracy as a value shared across countries. To preserve democratic governance, countries hold legislative and presidential elections, and campaigns serve as the main instrument for candidates to win as many votes as possible. Just as big data has already been leveraged by political campaigners to revolutionize politics by targeting audiences with tailored content,[1] AI (or machine learning) is predicted to drive innovation in how campaign data is managed, ideally creating efficiency and accountability in the near future.


Machine learning systems rely on statistical techniques that can identify patterns in data automatically, which could help voters obtain accurate information about key political affairs[2] and help curb the spread of fake news. Nonetheless, one cannot deny that misuse is inevitable, creating a dilemma over whether this advancement will strengthen democracy or undermine it.
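The pattern-identification idea above can be made concrete with a toy sketch. The snippet below trains a minimal word-frequency classifier to separate "reliable" from "fake" headlines; the labeled headlines, the vocabulary size, and the labels themselves are all hypothetical, invented only to illustrate how statistical pattern detection could flag suspect content. Real fake-news detection uses far larger datasets and models.

```python
import math
from collections import Counter

# Hypothetical labeled headlines (illustrative data, not a real dataset).
reliable = ["election commission publishes official turnout figures",
            "candidates debate fiscal policy ahead of vote"]
fake = ["shocking secret proves election totally rigged",
        "you won't believe this shocking voting scandal"]

def word_counts(docs):
    """Count word occurrences across a list of documents."""
    c = Counter()
    for doc in docs:
        c.update(doc.split())
    return c

def score(headline, counts, total):
    """Laplace-smoothed log-likelihood-style score under a word distribution."""
    vocab = 1000  # assumed vocabulary size used for smoothing
    return sum(math.log((counts[w] + 1) / (total + vocab))
               for w in headline.split())

reliable_counts, fake_counts = word_counts(reliable), word_counts(fake)
reliable_total = sum(reliable_counts.values())
fake_total = sum(fake_counts.values())

def classify(headline):
    """Return the label whose word distribution better explains the headline."""
    r = score(headline, reliable_counts, reliable_total)
    f = score(headline, fake_counts, fake_total)
    return "fake" if f > r else "reliable"

print(classify("shocking secret scandal about the election"))  # → fake
```

The sketch only counts words, but it shows the core mechanism: the system learns statistical regularities from examples and applies them to new text, for better or worse depending on what it was trained on.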


Psychological manipulation is one case in which the misuse of AI threatens democracy. Individual analyses based on voters' recorded internet footprints make it possible to infer their behavior from their patterns of consumption and online relationships. When campaigns exploit the emotional triggers revealed by each individual's psychology, democracy comes to look insincere and unfair.[3] It is also worrying that AI, powered by the Internet of Things and big data, could drive more automated cyberattacks against political campaigns, at a pace faster than officials can handle. As hackers find it easier to compromise campaign accounts, more damage to those accounts is likely to follow.[4] The fairness of democratic elections should therefore be questioned further.

Among many other issues, one of the most important problems concerning the use of AI in elections is the possibility of algorithmic bias. Algorithmic bias can arise because machine learning and deep learning systems[5] depend entirely on data in order to work in the first place. Even when data scientists and engineers are trained to keep bias out of these systems, most studies indicate that the creators' cognitive biases can still seep into the systems' decision-making processes.[6] One well-known example is the case in which Google's image-recognition system mislabeled a photo of two Black people as gorillas.[7] Similar failures could easily occur in elections, especially given that political interests are usually pursued by any means available, which also shows how vulnerable AI is to misuse.
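How a skewed dataset produces a biased "decision" can be shown with a minimal sketch. Here a hypothetical turnout model simply learns the majority outcome per district type; the data, district labels, and counts are all invented for illustration. Because rural districts are badly under-represented in the assumed sample, the model confidently generalizes from almost no evidence:

```python
from collections import Counter, defaultdict

# Hypothetical training data: (district_type, turned_out) pairs.
# The sample over-represents urban districts -- an assumed imbalance.
training = ([("urban", True)] * 80 + [("urban", False)] * 20 +
            [("rural", False)] * 4 + [("rural", True)] * 1)

def fit_majority(data):
    """Learn the majority outcome per group -- a stand-in for a real model."""
    votes = defaultdict(Counter)
    for group, outcome in data:
        votes[group][outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

model = fit_majority(training)
# The model concludes rural voters do not turn out, based on only
# 5 rural examples out of 105 -- the bias comes from the data itself.
print(model)  # → {'urban': True, 'rural': False}
```

The engineers wrote no biased rule; the skew in who appears in the data is enough to bias every prediction the system makes about the under-represented group.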


Democracy, then, can clearly be threatened by the use of AI. On the other hand, since efficiency, accountability, and transparency are predicted to be its most advantageous contributions to free and fair elections, AI should not be regarded merely as a drawback. Human-centered AI is the key: humans themselves must remain accountable for avoiding the detriments described above, and they should be equipped with adequate knowledge and skills, especially in identifying the inherited weaknesses these systems are susceptible to, such as cognitive bias, the most common flaw that can keep democracy from functioning if it is not handled effectively.

Editor: Anaq Duanaiko


[1] Illing, S. (2017). A political scientist explains how big data is transforming politics. [online] Vox. Available at: [Accessed 29 Apr. 2019].

[2] Polonski, V. (2017). How artificial intelligence conquered democracy. [online] The Conversation. Available at: [Accessed 29 Apr. 2019].

[3] Ibid.

[4] Patterson, D. (2018). How AI is creating new threats to election security. [online] CBS NEWS. Available at: [Accessed 30 Apr. 2019].

[5] Dickson, B. (2018). What is algorithmic bias? [online] TechTalks. Available at: [Accessed 30 Apr. 2019].

[6] Eisenstat, Y. (2019). The Real Reason Tech Struggles With Algorithmic Bias. [online] WIRED. Available at: [Accessed 30 Apr. 2019].

[7] Ibid.