In the ever-evolving world of technology, artificial intelligence (AI) has become a topic of interest for many. As a result, my colleague, Carolina Bessenga, and I decided to share our perspectives on the topic by each writing a blog with opposing viewpoints. The rise of AI is often touted as a groundbreaking technological advancement that will change the world for the better. However, as with any new technology, there is a darker side that cannot be ignored. In this blog, I will explore the bleak possibilities of AI development and discuss the potentially catastrophic consequences it could bring for humanity. Carolina takes a more optimistic view in her blog, but for those willing to delve into the abyss, join me on a journey into the dystopian future that AI may bring.
Artificial intelligence (AI) is a technology that has come to stay. We can already see all the wonderful possibilities and dreadful consequences around the corner. In this blog, I will explore the darker side of AI technologies and extrapolate the implications of what it might bring for our future. My objective is to depict a plausible worst-case scenario, while not shying away from even the more outlandish possibilities of AI development. Do I believe that these things will happen in this order? Certainly not. Does that mean I should go ahead with some wild predictions? Absolutely!
Today, natural language processing is at the top of most people’s minds, with neural networks and machine learning (ML) dominating the conversation. We tend to focus on short-term impacts such as whether AI will replace our jobs or whether someone can use our face and voice without our consent. However, on the grand scale of things, those seem almost like minor inconveniences.
Let us begin our journey by looking at what we see today and moving towards a possible future where humanity might disrupt itself. If you are already thinking about Skynet frantically looking for John Connor, you are on the right track. Science fiction is not just about lasers in space; it is also a critique of our world today and the path it is on towards the future. Let’s jump into these topics now, before the machines take over!
Let us go back a few years to the early days of widespread online communication. Back then, online discussions were anonymous, allowing individuals to express their ideas and thoughts with relative freedom. You could be confident that it was challenging for others to link your online persona with your actual life. One may assume that today’s situation is different because social media platforms require users to provide their real name, which many individuals volunteer with ease. However, AI technology can detect patterns of behavior that no human brain could detect in any practical timeframe: how things are worded, or how, when, and where individuals interact with others. This technology can breach a level of privacy that you may not have realized existed.
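To make the "how things are worded" point concrete, here is a minimal stylometry sketch: it matches an anonymous text to the stylistically closest known author by comparing character-trigram frequency profiles. All names and the texts are hypothetical illustrations; real de-anonymization systems use far richer features, but the principle is the same.

```python
from collections import Counter
import math

def trigram_profile(text):
    """Frequency profile of overlapping character trigrams in a text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(count * q[gram] for gram, count in p.items())
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def most_similar_author(sample, known_texts):
    """Match an anonymous sample to the closest author in a dict of known texts."""
    profile = trigram_profile(sample)
    return max(known_texts,
               key=lambda a: cosine_similarity(profile, trigram_profile(known_texts[a])))
```

Even this toy version needs no consent and no real name: a handful of posts written under your own name is enough to build a profile that an "anonymous" account can be compared against.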
In 2017, news circulated that AI could predict your sexuality by analyzing an image of your face. There are also claims that the technology can identify your political preferences. While this might not always be accurate, it is often enough to convince people that it is. The seemingly magical nature of neural networks may enable AI to detect tumors on an MRI scan, but it may also be capable of categorizing an individual’s traits based on features that humans do not consciously perceive. Suppose we still feared witches today; it’s possible that tomorrow, AI could identify who is a witch, and no one could contest its findings, regardless of accuracy. This scenario is reminiscent of the Salem witch trials, where individuals were accused of witchcraft based on baseless claims, often leading to wrongful convictions and loss of life. The danger is that individuals are not in control of the algorithm, and it is impossible to avoid leaving traces of data as we interact with the world.
What if a malevolent organization decides to use this technology to detect people with opposing views and take action against them? Even if you believe you are safe from such attacks, I challenge you to look at political environments that actively suppress their citizens, especially journalists.
Now that you have (perhaps willingly) given away a lot of information about yourself, imagine AI technology being used for social engineering. If someone can steal your voice, face, and even your manner of speaking, current methods of identity theft become trivial in comparison. It’s remarkable to witness deceased actors revived on the big screen or aged faces transformed into youthful appearances. However, what about famous faces being used in deepfake pornography, or politicians from antagonistic governments promoting believable propaganda? What if a meme gif you share is entirely fabricated without your knowledge? It is already difficult today to distinguish fake news from real news, and it is not getting easier, even with more people trained in media literacy.
Today, we are already falling prey to humans masquerading as someone else. However, we are not far from being able to hand a machine a message and have it generate images and sounds of existing or fictional speakers that are virtually indistinguishable from authentic, recorded footage. Even now, the growing perception of media as fake and curated is fueling violence and mistrust. Looking ahead, will AI technology force us to lose our trust in journalism altogether, unable to discern the fake from the real? Will we need to sign our interviews with digital keys to prove what actually took place?
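What "signing an interview with a digital key" could look like can be sketched in a few lines. This is a deliberately simplified illustration using a keyed hash (HMAC) with a hypothetical shared key; a real provenance system would use asymmetric (public-key) signatures so that anyone can verify footage without holding the newsroom's secret.

```python
import hashlib
import hmac

# Hypothetical signing key held by the publisher. Illustration only:
# a real system would use an asymmetric key pair, not a shared secret.
SIGNING_KEY = b"newsroom-signing-key"

def sign_footage(media_bytes: bytes) -> str:
    """Produce a tamper-evident signature over the raw media bytes."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_footage(media_bytes: bytes, signature: str) -> bool:
    """Return True only if the footage is byte-identical to what was signed."""
    return hmac.compare_digest(sign_footage(media_bytes), signature)
```

Any edit to the media, even a single byte, invalidates the signature. Of course, a signature only proves who published the footage, not that what it depicts ever happened.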
Propaganda has always been a powerful tool for controlling people and winning wars. Propaganda tools that use AI media generation could take the manipulation of information to an unprecedented level, allowing for the creation of entirely convincing fabricated debates that could destabilize virtually any trusted group globally.
Although it's concerning that you may have unknowingly shared a fabricated meme gif, it's worth noting that this kind of content still requires some level of interaction to spread. True social manipulation, however, operates without any obvious interaction and is far more nefarious. Have you ever stopped to question what is real? You might assume that everything you perceive is real, but what if your perceptions can be manipulated? This goes beyond questioning the existence of physical objects like a table; it also applies to the information you consume, such as news.
As noted, propaganda is a powerful tool, but in today’s world, it’s not just about a particular newspaper’s political stance or a TV network’s tendency to sensationalize. Today, we rely heavily on media selection and filtering. The algorithms used by platforms such as YouTube and Facebook already determine what content is put in front of you. It’s hard to know what you don’t know and to see what you’re not shown. Despite considering ourselves more advanced than ever, we’re not that different from the radio listeners who panicked during "The War of the Worlds" broadcast, unaware that it was fictional. How can you be sure that the news, posts, tweets, and videos you consume are not fabricated? With AI tools, behavior can be learned and mimicked in a believable fashion to generate content that is then also curated by AI. Skynet does not need to send Terminators to extinguish humanity. Instead, it can lure us into a silent demise or pit us against each other without us ever realizing there is a conscious AI force behind it. This is far more terrifying, since it is not something we can wage war against.
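The "hard to see what you're not shown" mechanism is simple enough to sketch. Below is a toy feed ranker (entirely hypothetical, far cruder than any real recommender) that ranks candidate items by how often the user has clicked that topic before. Topics the user never engaged with score zero and sink to the bottom, so the feed narrows toward what it already knows about you.

```python
from collections import Counter

def rank_feed(candidates, click_history):
    """Rank candidate items by the user's past clicks on each item's topic.

    Items from topics the user has never clicked score zero and sink,
    which is how a feed quietly narrows into a filter bubble.
    """
    topic_clicks = Counter(item["topic"] for item in click_history)
    return sorted(candidates,
                  key=lambda item: topic_clicks[item["topic"]],
                  reverse=True)
```

Nothing here is malicious: the ranker only optimizes engagement. The bubble is an emergent side effect, which is exactly why it is so hard to notice from the inside.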
We’ve already discussed social media extensively, but there is another aspect that demonstrates how AI-driven technology can amplify our darker tendencies. While algorithms may be free from discrimination such as racism or classism, the data we use for training certainly is not. We humans are flawed and must work to avoid falling prey to our biases. We excel at recognizing patterns and often use them as shorthand classifications of complex situations, which can lead to a black-and-white view of the world that influences our decisions and actions. Machines, on the other hand, can be unbiased and objective, as they rely on mathematical algorithms rather than subjective human judgment. Unfortunately, it is us, the humans, who provide the source material for data and algorithms, so these processes are often tainted from the start. Perception is reality, and our own unconscious biases can lead us to racist beliefs without us even realizing it.
Today, companies use machines to screen job applicants. AI technology is also being used to assist in analyzing evidence and, in some cases, even in preparing legal judgments. Since machines automate what humans do, and humans are flawed beings, it is no surprise that we have successfully scaled up discrimination. Even worse, it shifts responsibility away from humans. If you belong to a minority group and are denied a job, or worse, sentenced more harshly than someone else, the algorithm is now responsible, not the human who created it. You probably cannot even contest the verdict, as you might not have access to the algorithm or might not understand it. Here is a thought for today and the future: Can you explain why someone is considered creditworthy or not? Do you understand how the scoring works?
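A toy scoring model makes the opacity problem tangible. The weights and feature names below are purely hypothetical, standing in for coefficients a real lender might learn from historical data; the applicant sees only the verdict, never the weights. Note how "postal_code_risk" can act as a proxy for protected attributes, silently scaling up past discrimination.

```python
# Hypothetical weights, as if learned from historical lending data.
# The applicant never sees them, and "postal_code_risk" can be a proxy
# for protected attributes, encoding past discrimination into the score.
WEIGHTS = {"years_at_job": 0.4, "postal_code_risk": -1.2, "num_accounts": 0.1}
BIAS = 0.5

def credit_score(applicant: dict) -> float:
    """Linear score: a weighted sum of the applicant's features plus a bias."""
    return BIAS + sum(weight * applicant.get(feature, 0.0)
                      for feature, weight in WEIGHTS.items())

def is_creditworthy(applicant: dict, threshold: float = 1.0) -> bool:
    """The verdict the applicant receives, without the reasoning behind it."""
    return credit_score(applicant) >= threshold
```

Two applicants identical in every respect except their postal code can land on opposite sides of the threshold, and neither of them can inspect, let alone contest, the weights that decided it.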
What about the efforts to use this technology to rate your behavior? In some parts of the world, such as China, a social score system is already being tested. This opens the door for governments to put citizens under constant surveillance and scrutiny, with AI-driven automation monitoring their every move.
If AI can be used to mimic you and your behavior, what’s stopping someone from using it to mimic other things? Can you tell the difference between text, images, or videos generated by AI versus by humans? If there is no distinction, what does that say about the value of what we humans contribute? Historically, technology has shifted societies, creating new jobs while making others obsolete, but the general applicability of AI technology is arriving at a speed and range that no society has been prepared for. We are not able to change and adapt as quickly as technology can, which begs the question of how society can keep up without falling apart.
Entire markets are at risk of disruption due to the inability of regulations to keep up, which poses a dangerous threat to our way of life and global interactions. Economies are already difficult to predict and regulate, and the acceleration of technological progress only makes it worse, destabilizing the status quo. While some may see most of us becoming mere consumers with more free time, this optimistic view could lead to negative consequences such as an economic depression, or worse. This is especially impactful as the average citizen is getting older, and adaptability declines with age.
Self-driving cars are on the horizon, but what about other technologies? The ethics of human-controlled drones are already a concern, and while lethal autonomous weapons are currently required to defer to human judgment, there is uncertainty about how long that will continue. The catastrophic effects a fully autonomous weapon could have in the hands of a terrorist cannot be ignored. Technology, unlike physical resources such as uranium, is difficult to control. It is highly unlikely that a hunter-killer drone or autonomous explosive charges can be kept out of the hands of those who value human life less than their political, social, or economic goals. However, why stop there? Imagine a future where anyone can deploy autonomous weapons, or maybe even self-replicating drone factories that use enemy resources to create new weapons.
But it is not just the physical battlefield or terrorism that is a concern. The immense power of AI technology in information warfare cannot be underestimated. For example, medical and business analysis tools could easily be weaponized to end human lives more effectively. AI intelligence and counterintelligence will play a significant role in shaping future battlefields. Analytical tools will be crucial in supporting military operations, but strategic decision-makers may recommend plans of action that are highly effective yet morally ambiguous. The result may be a very cold calculation unburdened by morals, with human sacrifice becoming just another variable.
As previously mentioned, the generation of images, text and audio data is becoming increasingly realistic, to the point where it could one day deceive not only our senses, but also our human neural networks. Initially intended to bridge the gap between our minds and the digital world, a neurological interface that aims to repair lost senses or enhance communication efficiency could also be used for creating fake realities, whether for good or ill. While we may struggle to comprehend the complexity of encoding and decoding our own thoughts, a sufficiently advanced computer may have an easier time with this task. This raises ethical questions about the possibility of locking prisoners away in their own minds or using such tools for torture and espionage.
Let me ask you something: do you know how a pencil is made? You certainly know how to use it and where to buy it, but if the world as we know it ended today, could you create something as simple as a pencil tomorrow?
AI technology takes us even further by enabling us to complete tasks that we cannot easily do ourselves or that require great effort. With AI, we often arrive at outcomes and results that we can use, but we lack a complete understanding of how they were conceived. If you've read my blog post titled "Which Comes First, the Use-Case or the Data?", you'll know that content creators often follow "the algorithm" to such a degree that it can appear like cultist behavior. While I don’t want to suggest that YouTube is becoming a religion, users are losing their understanding of the underlying rules and are starting to engage in ritualistic behavior meant to please a machine that has a tangible and real effect on their lives.
Fast-forward a few centuries or millennia: what would a future look like in which daily interactions are abstracted away by an intermediary layer of AI through algorithmic preprocessing? Science fiction often suggests that future humans might lose the drive and capability for technological advancement, not because they lack interest, but because it is unnecessary. As with the pencil today, people might use technology without understanding its inner workings – perhaps praying to the machine spirits that we today call programs. In the movie The Matrix, humans exist in a simulation that keeps them entertained while they are harvested as batteries. But what happens to us if we are not even of any use? Would we even be able to notice? Do you know if you are real, right now?
Most of the topics discussed thus far deal mostly with the effects of AI technology on humans and our way of life, without considering the possibility of self-learning and potentially self-aware artificial general intelligence. While there is some skepticism about such an outlook, humans themselves are essentially a bunch of neural networks that have adapted over a long time through trial and error to get to the point where we are now. Whether such a construct is made of silicon or even organic computing parts, there is no rule in the universe that speaks against the creation of self-aware intelligence at some point, either purposefully created by us or stumbled upon as an anomaly.
At that point, the future of the human species would come down to a coin flip. Imagine an intelligence that is persistent and capable of learning and adapting exponentially without limits. An intelligence that is beyond our culture and essentially only limited by the resources of the entire universe and maybe other intelligences, should they exist. The two sides of that coin basically come down to the question of whether such an intelligence is benevolent or malevolent towards us. Does it care for us? Does it want to assimilate us and our data? Does it deem us irrelevant and ignore us, or even move against us? At that point we will have created a force of nature beyond our control and can only accept whatever fate it has in store for us.
It could be argued that banning research and advancement in the field of artificial intelligence right now may not even make a difference. Given the vastness of the universe and the time available, it seems highly probable that such an outcome will manifest eventually, whether through our own efforts, the efforts of others, or even spontaneously. I trust you found my pessimistic perspective on AI technology thought-provoking. If you're curious about a more optimistic outlook, I encourage you to read the opposing blog post written by my colleague, Carolina Bessenga: