6 The Actual Risks of AI
The most prominent news in the AI world recently has been the removal of Sam Altman and his subsequent re-appointment as CEO at OpenAI. Reports suggest that a non-profit branch of OpenAI, overseen by a board that included Altman, was established to ensure that the organization pursued the mission of creating safe AGI for humanity. Meanwhile, the commercial branch of OpenAI appeared to be advancing rapidly without adequate attention to safety, leading the board to take action to redirect its trajectory.
Ironically, the mechanism put in place to safeguard OpenAI from excessive commercialization and mission drift may have unintentionally accelerated its move towards commercial interests following the ousting of Altman. This decision faced resistance from business partners such as Microsoft, and almost 90% of OpenAI’s employees reportedly contemplated leaving in response.
The outcome of this upheaval appears to have steered OpenAI toward even greater commercialization, raising questions about whether this trajectory was accidental or a deliberate strategic move by those within OpenAI focused on commercial interests. Regardless of how the story concludes, the prevailing sentiment is that safe AI development cannot coexist with commercial incentives. Many argue that capitalism will inevitably prevail, leading to the development of commercial AI systems regardless of safety concerns.
I am troubled by this brand of techno-pessimism for several reasons. First and foremost, I am bothered by pessimism in general, as I firmly believe that adopting a pessimistic outlook is the easier choice. Pessimism is often regarded as a sign of maturity, while optimism can be perceived as immature and naive. Therefore, it is safer to align oneself with pessimism.
Conversely, if one is an optimist and fails to deliver on promises, they face significant backlash. However, a pessimist who warns of negative outcomes that never materialize can justify their stance by claiming that their pessimism actually contributed to the positive outcome! In essence, playing the pessimist is a safety net, but it is also a display of cowardice. I believe it takes considerable courage to be an optimist and truly believe in the potential of technology to enhance human welfare.
However, it's important to note that I do not identify with other types of optimists, such as the techno-optimism commonly associated with Silicon Valley techbros. I believe that the kind of unwarranted optimism these individuals espouse is just as dangerous as the most cowardly pessimism. This brand of optimism is often disingenuous and hypocritical, coming from individuals who know the system works in their favor and who are the ultimate beneficiaries of technological wealth.
(I actually prefer to label myself as a techno-pragmatist, a concept that I can delve into further in a different essay. But enough of a tangent already.)
So, in this edition of the Mostly Harmless newsletter, I aim to revisit the topic of AI safety. Previously, I talked extensively about existential risks, and I strongly encourage you to review that content if you haven't already. In doing so, I aimed to dispel the notion that existential risks should be the sole, or even the main, focus of our attention. While I acknowledge the presence of long-term existential risks from AI that cannot be ignored, I believe that the short-term, non-existential, yet impactful risks stemming from the mishandling and careless development of AI today deserve equal if not greater attention.
AI safety encompasses several important topics. To categorize the various ways in which technology can have a negative impact on society, we can examine a few broad categories. One such category is existential risk, which revolves around the concern that AI technology may become so powerful that it poses a threat to the continued existence of civilization. This could occur through intentional actions, where the technology develops the motivation and ability to destroy us, or inadvertently, for example by attempting to address climate change and unintentionally causing harm to humanity in the process.
Setting aside existential risk for the moment, there are numerous other ways in which technology, including AI, can negatively impact society, either accidentally or by the purposeful action of malicious individuals. Let’s review what I consider the most relevant types of AI dangers.
Autonomous weaponry
Autonomous weaponry is often framed as a long-term, abstract concern tied to existential risks. But while we debate the possibility of AI leading to SkyNet scenarios, it is essential not to overlook the potential for catastrophic military uses of AI that are much closer at hand. The first issue arises from autonomous weapons: drones, or even traditional weapons like assault rifles and guided missiles, can be entirely automated using AI technology. Such fully automated weaponry is horrifying. All technology employed to kill human beings is deplorable, but in conventional warfare, where humans kill other humans, there still exists some room for empathy and consideration, barbaric as the practice is in this age. As long as an individual remains behind the trigger of a weapon, there is some space for respecting human rights and refraining from harming civilians and other innocent people.
That space disappears in the fully autonomous case, where warfare can be completely dehumanized. If autonomous drones are deployed on battlefields without human operators, the distinction between innocent bystanders and active military enemies could dissolve entirely. Semi-autonomous weapons, such as drones operated by humans from behind a screen resembling a video game, have already shown how this distance dehumanizes the target and increases civilian casualties; imagine the consequences when such weapons become fully automated. This extreme use case evokes movies like Terminator, but what concerns me more is the already very real prospect of AI for biochemical warfare.
With artificial intelligence, we can now solve biochemical problems that were considered unsolvable five years ago, through tools like AlphaFold. This new capability allows us to design proteins and chemical substances at a scale previously unimaginable. However, it also opens the door to the creation of highly efficient and targeted biochemical weapons, capable of homing in on specific genetic markers present in certain populations because of their ancestry or location. The potential for such weapons to be used for ethnically targeted killing is terrifying. Additionally, advancements in simulation and search algorithms could enable the creation of viruses so powerful they could potentially wipe out all of humanity if released. Note that none of this involves an AI gaining the motivation to do so and deciding to eliminate humankind.
In this scenario, the development of viruses capable of eliminating entire populations is comparable to the standoff between nuclear superpowers. There are similarities, such as the potential for catastrophic destruction, but here no AI decides to cause mass extinction; it is humans' own shortcomings and ambitions that would lead them to release deadly viruses against each other. The situation resembles the proliferation of nuclear weaponry among various countries, which has so far avoided mutual annihilation partly because of the enormous costs and challenges of producing such weapons. Unlike nuclear arms, however, designing lethal viruses may soon require only the resources of a small nation, or perhaps even less.
In the future, it may be possible to download and print custom-designed proteins at home using a chemical printer. This technology could allow small terrorist organizations to design deadly viruses and release them in crowded places like subways. The scenario is alarming, and there are currently no technical solutions to prevent it, since anyone with a computer may eventually have access to these capabilities. Unlike nuclear weapons, which only superpowers can realistically build, bioweapons pose a broader threat precisely because they do not require extensive resources or expertise.
There is no purely technical way to act here, even if we wanted to. The only available action is banning certain things through international treaties upheld by countries and governments. But terrorist organizations will not abide by agreements against producing chemical or nuclear weapons, making this an intractable dilemma for which I can see no clear solution.
Massive workplace disruptions
One of the ways in which technology can negatively impact society is by causing significant economic upheaval that is challenging for a large portion of society to adapt to. This issue is particularly evident in the widespread job displacement resulting from increased automation across various industries.
In the next decade, artificial intelligence is expected to reach a level where it can match or surpass human capabilities in many economically valuable occupations, including agriculture, manufacturing, white-collar jobs, education, science, research, and entertainment. If this scenario unfolds as projected, it is natural to be concerned about the fate of the millions of individuals whose jobs will be replaced by automation.
This widespread job disruption has the potential to have catastrophic effects on individuals and the broader society. It raises critical questions about how these individuals will transition to new employment opportunities and maintain their livelihoods in the face of technological advancements.
Massive job disruption is a recurring phenomenon throughout history, associated with every industrial revolution. When mechanical looms were introduced, there was an expectation that they would improve the working conditions of the women employed in textile work. In reality, this did not materialize: these women simply lost their jobs to the machines.
One of the arguments often advocated by tech enthusiasts is that the advancement of technology will inevitably result in the destruction of numerous jobs, but it will also generate entirely new job opportunities, ultimately leading to a more prosperous society. They often cite examples such as the emergence of new professions like YouTubers and internet influencers or the rapidly evolving role of AI engineers in recent years. But while technology undoubtedly creates new jobs and value, it has historically also led to an imbalance in the distribution of benefits and drawbacks.
It is crucial to approach this issue with caution, because technological transitions have frequently produced an uneven distribution of their positive and negative impacts. Technology can make society as a whole more prosperous while simultaneously causing hardship for a significant proportion of the population. Achieving a balance in which those who benefit from technological advances clearly outnumber those who are adversely affected is vital for societal progress.
For those concerned about social justice and equal opportunities for all individuals to earn a livelihood, the potential consequences of significant job disruption demand attention and thoughtful consideration. Despite the potential for new jobs and overall improvement in the mid and long term, there is no guarantee that those who have been displaced will possess the necessary skills or opportunities to transition to new roles.
Consequently, a substantial number of individuals may find themselves pushed out of their current positions without the marketable skills needed to secure alternative employment. One suggested solution to this problem is Universal Basic Income (UBI), which proposes that the rise of digital intelligence, or the fourth industrial revolution, will generate sufficient value to provide a basic income for all, eliminating the necessity to work for survival.
However, there are many, myself included, who are skeptical of the feasibility of UBI in a purely capitalistic society, given the way market incentives work. One can argue that certain well-developed countries already produce enough value to guarantee a minimum income for everyone, yet income inequality has either persisted or worsened over the last few decades in many of these nations.
While Universal Basic Income is seen as a progressive and promising concept for an increasingly industrialized and automated society, I don’t think it is obviously the natural progression of our current social and economic structures. Implementing UBI would require significant social restructuring and government intervention, which may not be embraced by many, often due to valid concerns.
Informational hazards
The first informational risk associated with AI is disinformation, or fake news, which generative AI can now produce at scale with content that is almost indistinguishable from authentic material. This technology can be used for malicious purposes such as spreading false information, for example convincing people of scandals involving political candidates through manipulated evidence like pictures, videos, audio recordings, and transcriptions.
One use of this technology is to make people believe someone did something they didn't do. But another is simply to make people doubt whether anything is true at all. This erodes trust in institutions, including the news media, government, science, and other organizations. If people stop believing anything, because anyone could have created a convincing fake with AI, democracy will fail: it requires informed citizens who can tell truth from falsehood. Too much disinformation leads to chaos.
Disinformation can be intentionally spread by malevolent organizations or individuals, but there is also an unintentional effect: polarization. As AI recommends content based on users' preferences, it creates separate realities for everyone, because people keep consuming content aligned with their existing beliefs. We have witnessed such bubbles frequently. For instance, if someone watches only a few flat Earth videos on YouTube, the platform may conclude they believe in flat Earth theories and present them with more related material.
This isn't necessarily caused by any ill-intentioned entity; it is simply how recommendations operate. Recommendations for movies and other entertainment content work well because taste in films, music, or art in general is subjective: people enjoy different genres, and it is fine to suggest similar movies based on what they liked before. Recommending news sources and experts, however, should be based on objective measures of quality or truthfulness rather than personal preference. Using the same algorithms for entertainment and news on platforms like YouTube and TikTok doesn't work, because the two domains value fundamentally different things.
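To make the feedback loop concrete, here is a minimal sketch in Python of a naive engagement-maximizing recommender. The topics, weights, and click model are invented purely for illustration and do not describe any real platform's algorithm; the point is only that a small initial bias, reinforced by clicks, can come to dominate what gets recommended.

```python
import random

# A minimal sketch of a preference-reinforcing recommender feedback loop.
# Topics, weights, and the click model are invented for illustration only;
# this is not any real platform's algorithm.

random.seed(42)
TOPICS = ["mainstream_news", "flat_earth", "cooking", "sports"]

# The platform's estimate of the user's interests, learned from clicks.
interest = {t: 1.0 for t in TOPICS}
interest["flat_earth"] = 1.5  # the user watched a couple of fringe videos

def recommend(interest):
    """Sample a topic proportionally to estimated interest (pure exploitation)."""
    topics, weights = zip(*interest.items())
    return random.choices(topics, weights=weights, k=1)[0]

def user_clicks(topic, interest):
    """Crude user model: more likely to click topics they already lean toward."""
    return random.random() < interest[topic] / sum(interest.values()) + 0.2

for _ in range(500):
    topic = recommend(interest)
    if user_clicks(topic, interest):
        interest[topic] += 0.3  # every click reinforces the estimate

total = sum(interest.values())
for topic in sorted(interest, key=interest.get, reverse=True):
    print(f"{topic:>16}: {interest[topic] / total:.0%} of recommendations")
# The small initial bias, amplified by the click loop, typically ends up dominating.
```

The loop never needs malicious intent: exploitation of past clicks plus a slightly biased user is enough to collapse the recommendations into a bubble.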
Solving this would require smarter algorithms or massive human curation, for instance by tagging content so that entertainment recommendations keep optimizing for preference while news and educational recommendations emphasize factual accuracy. The complexity of the issue also raises the question of who is responsible for the curation itself.
Surveillance and censorship pose a major challenge, arising from two interconnected trends. First, as you conduct virtually all activities online, including communication, creation, and consumption, your digital footprint leaves an indelible trail. Second, advanced technologies enable the prediction of future actions from past behavior, such as identifying individuals' contacts, preferences, or thought processes from their online activity. While predominantly employed for targeted ads, this capability can also serve dictatorial regimes. In fact, the marketing sector now comprises the planet's largest and most pervasive surveillance network. Every sound, action, or message posted or viewed online is captured, archived, and distributed among thousands of advertisers worldwide, each analyzing user histories for insights about identity and inclination, in service of pitching products accordingly.
The same machinery can be used for surveillance, for identifying dissenters, and for censorship. It allows direct censorship through platform filters, or indirect censorship when employers and governments analyze online activity. This could lead to a dystopian society where everything done online is tracked, analyzed, and scored, and where falling below a certain score is punished with denied access to services, jobs, or education, or even with imprisonment.
In the future, a technologically advanced version of George Orwell's 1984 dystopia is possible. With enough data and computing power, law enforcement could implement a thought police in its most sophisticated form. They would no longer need telescreens and hidden microphones to monitor what people say or type; personal devices like smartphones already capture everything individuals do, say, or write. And it goes beyond recorded statements, because authorities could predict thoughts from behavior patterns. This is the worst kind of dystopia, one in which nothing remains private anymore, since all actions are public or available for analysis by government agencies. As one thinks, those ideas manifest themselves in online conduct, making it nearly impossible not to express them. Consumption habits, such as which movies are watched and how long is spent reading something online, reveal insights into one's hidden musings.
Think of the most creative ways this could be exploited. For example, while you browse a perfectly legal website, whoever is surveilling you could add a hidden feature that flashes two images too quickly for you to consciously notice. By tracking how long your gaze lingers on each image, or how long you spend on certain small pieces of text, they would already have enough information to predict any unusual thoughts you might be having.
Exacerbating and perpetuating harmful biases
Automation, particularly through artificial intelligence, poses a significant risk in that it stands to automate many of the tasks that involve human judgment. One notable example is criminal justice, where stories have emerged of dystopian risk-scoring systems attempting to predict the likelihood that someone will re-offend and to inform decisions about bail.
Another area heavily reliant on human judgment is job applications, where efforts have been made, so far largely unsuccessfully, to replace human recruiters with AI for hiring across a range of positions, from white-collar roles to many other job categories. Similarly, credit rating, a crucial element of the developed world's financial system, has seen attempts to automate scoring and related financial services, using AI to determine who qualifies for a loan or for certain credit thresholds based on a complex web of historical data and predictive models.
Educational evaluation, including the assessment of student essays and overall performance, also falls within the purview of automation concerns, albeit within a separate context due to its substantial impact on education as a whole.
In all of these scenarios, the central issue revolves around bias, presenting a formidable challenge in the implementation of automated systems.
These systems are trained using past human judgments and are inevitably influenced by the biases with which humans judge one another. Consequently, our financial, judicial, criminal, and job market records are rife with discrimination against minority groups, including racial and gender discrimination, discrimination against neurodivergent individuals, and bias against people with non-traditional backgrounds or education.
The prevalence of biases in these records greatly depends on the construction of the systems that process them. Most predictive systems today are trained using a large amount of supervised or self-supervised learning from historical data, which means that they inevitably perpetuate and encode these biases. Unfortunately, we currently lack a clear understanding of how to design a system that both performs well and effectively removes biases.
There is extensive research being conducted in the area of AI fairness, and many AI labs, including my own, are actively engaged in this field. Much of that research is about understanding the trade-offs involved in making AI systems fair.
When aiming for fairness, one approach involves constraining the system so that it produces consistent or similar outcomes across different subsets of inputs, for example by equalizing the probability of selection among different subgroups. Various mathematical frameworks exist to define what constitutes a fair outcome, most of them involving partitioning the population into subgroups and requiring an equitable distribution of outcomes across them.
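As a concrete illustration, here is a minimal sketch in Python of one of the simplest such criteria, often called demographic parity: the rate of positive decisions should be roughly equal across subgroups. The decisions and group labels below are invented for illustration only.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of positive decisions (1 = selected) per subgroup."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for d, g in zip(decisions, groups):
        counts[g][0] += d
        counts[g][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two subgroups."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical hiring decisions for two subgroups, A and B.
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(selection_rates(decisions, groups))        # {'A': 0.8, 'B': 0.2}
print(demographic_parity_gap(decisions, groups)) # 0.6 -> far from parity
```

Demographic parity is only one of several competing formal criteria (others look at error rates or calibration per subgroup), and they generally cannot all be satisfied at once.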
Considerations for fairness extend to the idea of protected attributes, such as race, gender, and educational background. The objective is to develop systems that are independent of these variables. However, simply omitting gender or race from the input data is not sufficient, as there are numerous proxy variables that are correlated with these attributes.
Removing variables correlated with gender and race may inadvertently eliminate crucial information, as these variables are often associated with important factors. For instance, factors such as educational background, childhood experiences, and personal preferences may genuinely influence performance and fairness in socially relevant ways. This underscores the complexity involved in addressing fairness in AI systems.
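The following sketch illustrates this proxy leakage with invented data: the protected group label is never given to the decision rule, yet a correlated feature, a hypothetical neighborhood code, reproduces the disparity almost exactly.

```python
import random

# A minimal sketch of "proxy leakage": the decision rule never sees the
# protected group label, yet a correlated feature (a hypothetical
# neighborhood code) reintroduces the disparity. All numbers are invented.

random.seed(0)

def sample_person():
    group = random.choice(["A", "B"])
    # The proxy is strongly correlated with group membership (90% of the time).
    if group == "A":
        neighborhood = 0 if random.random() < 0.9 else 1
    else:
        neighborhood = 1 if random.random() < 0.9 else 0
    return group, neighborhood

def blind_decision(neighborhood):
    """A 'group-blind' rule mimicking biased historical decisions that
    favored neighborhood 0; it only ever looks at the proxy."""
    return 1 if neighborhood == 0 else 0

people = [sample_person() for _ in range(10_000)]
for g in ("A", "B"):
    neighborhoods = [n for group, n in people if group == g]
    rate = sum(blind_decision(n) for n in neighborhoods) / len(neighborhoods)
    print(f"Group {g}: selection rate {rate:.2f}")
# Prints roughly 0.90 for group A and 0.10 for group B, even though the
# rule never receives the group label as input.
```

Dropping the neighborhood feature as well might close the gap, but at the cost of discarding genuinely relevant information, which is exactly the trade-off described above.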
This problem presents a tremendous challenge. So far, all the solutions I am aware of involve trading away some performance in order to achieve fairness. This trade-off seems unavoidable, because part of the performance being given away comes precisely from exploiting the discrimination encoded in the data.
Why would this be worse with AI than it already is? Society is already unfair, and AI may improve some aspects while leaving others untouched. The concern is that AI not only captures discriminatory biases but exacerbates them, making them more extreme and more entrenched. Mathematically, it has been shown that, if left unchecked, a predictive model whose sole focus is performance will tend to exploit these biases to their maximum potential, and this effect has been demonstrated empirically in numerous papers.
It is therefore crucial to be mindful of the presence of bias in these systems and to take the necessary measures to prevent its unchecked proliferation, so that AI does not end up amplifying existing discrimination.
The hardest problem is not the technical one, but adoption. Making fairness a priority is imperative precisely because a fairer system may not perform as well as an unfair one, all other factors being equal. In the context of hiring, for example, a fairer screening system may yield lower raw performance than a less just one. In a purely market-driven economy there are no inherent incentives to prioritize fairness, hence the need to inject such incentives externally, possibly through government regulation.
Conclusions
From a pragmatic perspective, it is essential to address the problems in artificial intelligence that warrant our focus. In particular, I am concerned about the for-profit military-industrial complex’s inability and lack of incentives to solve these issues. I firmly believe that government regulation and societal oversight are necessary to implement safeguards that enable us to harness the potential of artificial intelligence for the greater good, rather than for the profit of a select few, or as a tool for the benefit of the technocratic elites.
I strongly advocate for a balanced approach to AI regulation that does not stifle innovation and development but rather complements these endeavors with safeguards to ensure responsible and ethical AI advancements. It is pivotal that we prioritize the societal implications and ethical considerations of AI, and take proactive measures to steer its development in a direction that benefits humanity as a whole.
I believe we need not fear sensible government regulation of AI adoption, as long as it targets the commercial applications of AI rather than basic research. This type of regulation is already in place for many consumer products, from food to electronics to pharmaceuticals. In general, new products, whether GMOs, cars, or drugs, cannot be introduced into the market without demonstrated safety and efficacy. Similarly, AI systems should not be allowed to operate commercially if they demonstrably harm some individuals.
This post is my attempt to shed light on the many ways in which AI can be misused, accidentally or on purpose, to harm individuals or entire populations. The looming question, of course, is what we can do, technically and otherwise, to address these issues. If you're interested, I can dive into the active research on mitigating AI harms in a future issue.