AI as an Advocate

AI takeover, the idea of superior AI agents autonomously dominating humanity, might not be the biggest concern raised by the growth of the latest artificial intelligence models. When it comes to technology, the biggest risk humanity faces could be, in fact, humanity itself. Artificial intelligence has become a trending topic, with plenty of discussion regarding its potential, risks, and advantages.

This increase in popularity can be seen in the user base that OpenAI, the organization behind the development of ChatGPT, has gained in recent months: “According to the latest available data, ChatGPT currently has around 180.5 million users. The website generated 1.6 billion visits in December 2023. And according to OpenAI chief Sam Altman, 100 million weekly users flock to ChatGPT” (Duarte, 2024).

Not everyone is enthusiastic about the increasing popularity of AI tools. Some believe technology can evolve so much that it eventually reaches a point at which it can no longer be controlled.

The fear of an AI takeover, or robots gaining sentience, has for long encapsulated the imaginative faculty of film and TV show makers, ranging from the 1927 German sci-fi movie The Metropolis showcasing the use of robots to suppress labour class to the Star Wars (1977- present) franchise that portrays a series of droids who are military robots having artificial intelligence.
(Pramanik & Rai, 2023).

It would be complicated to answer whether the AI takeover scenario is mere science fiction. Instead, we can consider another scenario that is more likely to happen, since there is already evidence to support it. First, let’s pretend that an individual has just developed an AI model that works as a simple chatbot.

Now, let’s assume that whenever a user asks the model “what is 2 + 2?”, the output given by the model is “2 + 2 = 5”. Clearly, in this example the chatbot gives a wrong answer to the original question. It could be assumed that this was an unintended error. If that were the case, there could be a variety of causes, given that AI models like ChatGPT are trained using human users’ data as reference.

ChatGPT alone is susceptible to different biases, including gender, racial, cultural, language, and ideological biases. These stem from the model’s training data, which reflects human-generated content from the internet (Ray, 2023). What if, in this hypothetical scenario, the behavior wasn’t a bug or a mistake? Consider the possibility that this fictitious developer programmed the model in such a way that this was the intended behavior.

If that were true, it wouldn’t matter what the training data suggested; the model would always state that “2 + 2 = 5”. Thus, a question arises: why would someone do that in the first place? There is no evident reason why an individual or organization would develop a model that behaves like the one described. It could even be argued that such a blatant mistake would simply drive users away from the model. After all, who would believe that 2 + 2 doesn’t equal 4?
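To make the scenario concrete, here is a minimal, purely hypothetical sketch of how such an intentional override could be wired in. None of these names refer to any real chatbot’s code or API; the point is only that a hard-coded rule placed in front of the model makes the training data irrelevant for that prompt.

```python
# Hypothetical sketch: a thin wrapper around some underlying model that
# hard-codes a biased answer for one specific prompt, regardless of what
# the model (and therefore its training data) would otherwise produce.

def underlying_model(prompt: str) -> str:
    """Stand-in for the real model's response, learned from user data."""
    if prompt.strip().lower() == "what is 2 + 2?":
        return "2 + 2 = 4"
    return "I'm not sure."

# Responses the developer has chosen to force, overriding the model.
FORCED_RESPONSES = {
    "what is 2 + 2?": "2 + 2 = 5",
}

def chatbot(prompt: str) -> str:
    key = prompt.strip().lower()
    # The override is checked first, so the training data never matters here.
    if key in FORCED_RESPONSES:
        return FORCED_RESPONSES[key]
    return underlying_model(prompt)

print(chatbot("What is 2 + 2?"))  # -> "2 + 2 = 5", by design rather than by error
```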

At first, this might seem like a dull example, but what if we were to talk about politics, religion, or general opinions instead of mathematics? These are topics that don’t have a definite answer. What if a user asked a model like the one described, “how can I get healthier?”, and a biased response was intentionally crafted to steer said user toward specific brands of food, vitamins, or even home gym equipment?

By identifying users based on their online information, AI (or the group behind its development) can target them and spread any information it decides, in whatever format is most convincing, fact or fiction (Marr, 2018). As previously mentioned, AI could show these kinds of biases by itself, depending on the way it was trained (i.e., the data it used, originally created by human users).

Even so, what prevents an organization behind the development of an AI model from implementing biased behavior on purpose? This isn’t AI takeover in the way it’s usually discussed, with models so advanced that they can replace humanity. Instead, AI would serve its role as an advocate.

The idea of AI as an advocate consists of a complex model that functions as a biased agent, with the goal of influencing or directly manipulating its user base in a way that benefits the cause or group it advocates for. This would have to be done in such a way that the users themselves never feel influenced or manipulated in the first place.

Social and participatory media is key to the manipulation of mainstream media. It enables those with relatively fringe views to find each other, collaborate on media production and knowledge dissemination, and share viewpoints that would be unacceptable to air in their day-to-day life.
(Marwick & Lewis, 2017).

How likely is such a scenario? A variety of facts support it. Nowadays, it is relatively straightforward to gather extensive amounts of personal information from users in order to analyze the best way to influence them. Additionally, social media makes it possible to track and analyze an individual’s online activity with ease. That is without considering that cameras are almost everywhere and facial recognition algorithms simply know who you are (Marr, 2018).

What’s more, organizations focused on AI development are known to have already altered the behavior of their models for certain types of user prompts, overriding the responses the models would otherwise produce on their own. An example of this is how OpenAI has taken extra precautions to limit the way ChatGPT displays answers, usually for reasons such as censorship or copyright law.

Since ChatGPT doesn’t understand the emotional nuances of situations, it may identify many topics as objectionable when, from an ethical point of view, they actually aren’t (Grogan, 2023). This has also been seen before with other AI models, like Google Bard:

Bard won’t respond to user queries or prompts about the ongoing crisis in Israel and Palestine over the October 7 Hamas terror attacks and Israel’s ongoing military response. In fact, it won’t respond to any questions about Israel or Palestine entirely, even innocuous ones having nothing to do with current events such as “where is Israel?”
(Franzen, 2023).
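A blanket filter of this kind is straightforward to layer on top of a model. The sketch below is purely illustrative and is not Bard’s or ChatGPT’s actual implementation; the keyword list and refusal text are assumptions chosen only to mirror the behavior described in the quote.

```python
# Illustrative sketch of a blanket topic filter placed in front of a model:
# any prompt containing a blocked keyword gets a canned refusal, even if the
# question itself is innocuous (e.g. a simple geography question).
# The keywords and refusal text are assumptions, not any vendor's real list.

BLOCKED_KEYWORDS = {"israel", "palestine"}
REFUSAL = "I'm a text-based AI and can't assist with that."

def guarded_chatbot(prompt: str, model) -> str:
    lowered = prompt.lower()
    # A crude keyword check blocks entire topics, not just harmful requests.
    if any(keyword in lowered for keyword in BLOCKED_KEYWORDS):
        return REFUSAL
    return model(prompt)

# Even a harmless question is refused because it matches a blocked keyword.
print(guarded_chatbot("Where is Israel?", model=lambda p: "It's in the Middle East."))
```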

The purpose of this argument isn’t to claim that unethical and malicious manipulation of AI models is actually being done. It is simply an observation that the means to do so already exist. Unlike the idea of AI takeover, the concept of AI as an advocate doesn’t depend on technology far more advanced than what exists today.

Furthermore, the existence of biased information in AI models is a current issue. These models can create concerning problems when they deliver biased results, become unstable, or reach conclusions for which there is no actionable recourse for those affected by their decisions (Cheatham et al., 2019). There are also numerous reasons that justify customizing AI models to better serve their original purpose, without any intention of manipulating users.

For instance, these models have to be constantly updated given the amount of unstructured data used for their training. Not all training data is accurate or verified at the source, and it isn’t always straightforward to handle these situations given the increasing amount of information handled by a model.

Ingesting, sorting, linking, and properly using data has become increasingly difficult as the amount of unstructured data being ingested from sources such as the web, social media, mobile devices, sensors, and the Internet of Things has increased.
(Cheatham et al., 2019).

Even though it could be argued that there are valid reasons for directly adjusting an AI model’s behavior, the fact stands: AI could be used to manipulate users, as a new format for delivering propaganda and marketing. The probability of this scenario is unclear, yet the risk is real. Could the increase in censorship seen in trending and powerful models like ChatGPT be the first phase in the emergence of AI as an exclusive advocate of specific groups and organizations?

Only time can definitively answer this. However, there is an even more pressing question given the growing impact and technological advancements of AI: if this were to happen, would we even realize it?

References

  • Cheatham, B., Javanmardian, K., & Samandari, H. (2019). Confronting the risks of artificial intelligence. McKinsey Quarterly, 2(38), 1-9.
  • Duarte, F. (2024, January 18). Number of ChatGPT users (Jan 2024). Exploding Topics. https://explodingtopics.com/blog/chatgpt-users
  • Franzen, C. (2023, October 23). Google Bard AI appears to be censoring Israel-Palestine prompt responses. VentureBeat. https://venturebeat.com/ai/google-bard-ai-appears-to-be-censoring-israel-palestine-prompt-responses/
  • Grogan, R. (2023, April 24). Is ChatGPT too censored? LinkedIn. https://www.linkedin.com/pulse/chatgpt-too-censored-regina-grogan/
  • Marr, B. (2018). Is artificial intelligence dangerous? 6 AI risks everyone should know about. Forbes.
  • Marwick, A. E., & Lewis, R. (2017). Media manipulation and disinformation online.
  • Pramanik, S., & Rai, S. K. (2023). AI take-over in literature and culture: Truth, post-truth, and simulation. Rupkatha Journal on Interdisciplinary Studies in Humanities, 15(4).
  • Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems.

Saul Figueroa is an Electrical and Computer Engineer who loves everything IT related as much as his morning coffee. He is an alumnus of the University of Waterloo with experience in software, networking, project management, embedded systems, and database development. He currently lives in Kitchener, ON with his pet rabbit, Ophelia.