
I think that I, like many people, am drawn to the drama of both AI Doomerism (artificial intelligence is going to be existentially disastrous for the human species, and therefore we need to slow or halt its development) and AI Boomerism (AI is going to be enormously beneficial for the world, and therefore we need to accelerate its development, i.e., we need an AI boom). The former gives us cool sci-fi stories like The Terminator and The Matrix, with all the action and heroism that come with them, while the latter gives us stories like Her and Star Trek, with all their philosophical wonder about what it means to be human and what consciousness is. Especially as someone who wants to be (or at least likes to pretend to be) an author, and someone who is interested in philosophy, I find these stories engaging, and it's easy to get caught up in them. But in the real world, AI has more mundane, though no less impactful, consequences. And so people like me, who often live with their heads in the clouds, easily swept up by the high-minded ideas about AI, need to be brought back down to earth.
My reason for making this post is that I've just read the book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao. I'd thought about doing another of my summary and review posts for the book, but beyond summarizing it I wouldn't have much to add in the way of direct commentary. It's a fantastic book, well-written and important, and you should definitely read it. The book follows the narrative of OpenAI from its conception up to very early 2025. Mixed in with the narrative are several theses, and I will briefly describe some (though not all) of the important ones.
The first thesis is that artificial intelligence is little more than a marketing term, perhaps doubly so for AGI (artificial general intelligence, which is supposed to be good at all or most of the things a typical human is good at). There is no useful and widely agreed-upon definition of what AI or AGI even is, or of how we would know someone has created it, much less of what it's even supposed to do for the world. This means that the oft-referenced mission of OpenAI, to ensure AGI benefits all of humanity, is vague and mostly meaningless, with promises that look more like hype and marketing than any sort of practical or measurable endpoint: AI is going to somehow cure cancer and solve climate change and make everyone wealthy and so on. Specifics on how any of this will happen are few and far between; we just need to have faith.
A second thesis is that OpenAI has been run chaotically by fragile, egotistical people with irreconcilably different ideologies pertaining to AI (namely, Doomerism vs. Boomerism). These people, while extremely intelligent in technical matters, are merely improvising the development of this hugely disruptive technology (while attempting to give the world the impression that they know what they're doing). This has generated a great deal of ego clashing and factional infighting within the company. The OpenAI leadership, and the mercurial and manipulative Sam Altman in particular, have somewhat obscure motives, though ambition and the need for recognition, esteem, legacy, and influence appear to be a primary impetus for the project (which is why, for instance, Altman likes to compare himself to Oppenheimer, including pointing out that the two share a birthday). They (OpenAI in general and Altman in particular) want to be the ones credited with midwifing AGI into the world. Indeed, OpenAI was founded by Sam Altman and Elon Musk because Musk did not want Google to be the one to get credit for first inventing AGI. Altman, a master manipulator, was able to exploit this frailty of Musk's psychology to get him on board with coughing up funds and attaching his name to the project early on, helping OpenAI garner more hype and funding than was warranted. But this has led to the LLM (large language model, such as ChatGPT) arms race, which has resulted in a greatly reduced focus on safety precautions (more on what is meant by "safety" in a little bit).
A third thesis is that AI companies and government policymakers have adopted OpenAI's somewhat ad hoc doctrine of scaling as a sort of dogma. This doctrine says that if AI developers just keep increasing the amount of compute (which requires the sprawling, energy-hungry datacenters popping up around the globe), the amount of data (i.e., the stuff being stolen from creators), and the number of parameters (i.e., the trainable weights, meaning ever-bigger models), eventually they will achieve AGI. This doctrine then vacuums up all the talent and funding from alternative approaches to AI; eliminates competition by ensuring that only large companies with access to enormous resources can participate; creates an environment where AI safety and capability benchmarks are set by the very companies attempting to reach them, without having to justify those benchmarks to independent researchers or authorities; and incentivizes colonial thinking in which the "global south" is extracted for resources and cheap labor for a product that will never benefit them.
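To give a concrete sense of what "scaling" means here (my gloss, not Hao's): empirical scaling-law work, such as DeepMind's 2022 "Chinchilla" paper (Hoffmann et al.), fits a model's loss to a power law roughly of the form

$$ L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} $$

where N is the number of parameters, D is the amount of training data, and E, A, B, α, and β are empirically fitted constants. The doctrine of scaling is essentially a bet that if you keep pushing N and D higher (along with the compute needed to train them), the loss keeps sliding down this curve until, somewhere along the way, AGI pops out. Nothing in the fitted curve itself guarantees that second step.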
A fourth and related thesis is that AI Doomerism and Boomerism (along with the frequently referenced threat of China achieving AGI first) have been used as justification for a lack of transparency by AI developers, especially regarding the training data. These topics (Doomerism and Boomerism) have also helped to overshadow other real-world problems caused by LLM development. Both the high-minded Doomer/Boomer talk and the more terra firma discussions about economic, environmental, and social impacts often use much of the same terminology about issues of "safety," which further obscures the conversation. If I claim that our current path of AI development is dangerous, do I mean because it's going to go rogue and exterminate the human species, or because the enormous amounts of water needed to cool the gargantuan datacenters are going to lead to water shortages for millions of people? If the conversation is kept too abstract, then people can read a statement about "the dangers of AI" in whichever way they like, which can lead to confusion and bad policy.
It is this fourth thesis in particular that has led me to write this post, along with a recent review I did of If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us by Eliezer Yudkowsky and Nate Soares, which takes the Doomer approach. In the concluding remarks of that post, I give my typical pessimistic Doomer view of humanity's chances of survival. But in that book, and in my review of it, it is taken practically axiomatically that A) if humanity continues developing AI, then it will achieve AGI, and B) the current approach, scaling, is the approach to AI that will lead to AGI (or ASI, artificial superintelligence, the term the authors of that book use instead of AGI).
Of course, if one accepts these two postulates, then AI Doomerism appears perfectly rational. It seems strange, for instance, when I look back at the history of atomic weapons, that there was so much business-as-usual politicking going on during the Cold War. Here in the future (from the perspective of the people of the time), those everyday concerns seem so prosaic and insignificant in light of the threat of nuclear war. If there is even just a one percent chance of nuclear armageddon, shouldn't that be the only thing anyone is discussing (i.e., I'm not going to worry about my taxes if there is a gun pointed at me)? Who cares about midterms and fiscal policy and civil rights when all of it could get turned to ash at the touch of a button? The expected utility of tackling the nuclear weapons problem was much greater than that of addressing any of those other seemingly more banal concerns. Similarly, it would be rational to concern ourselves primarily with the existential risk of developing AGI rather than spend undue time and effort on all these other issues, everything from news-of-the-day politics up to affordability and mental health and on up to climate change and war.
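To make the expected-utility arithmetic above explicit (a back-of-the-envelope sketch with made-up numbers, not anything from Hao or from Yudkowsky and Soares): if a catastrophe has probability p and disutility C, its expected disutility is p times C, and even at p = 0.01 a sufficiently large C swamps every ordinary concern:

$$ \mathbb{E}[\text{disutility}] = p \cdot C = 0.01 \times 10^{9} = 10^{7} \quad \text{(in arbitrary units)} $$

As long as no mundane worry comes anywhere near 10^7 on the same scale, the catastrophe rationally crowds out everything else. This is precisely the move both the Cold War worrier and the AI Doomer make, and it only goes through if you trust the probability estimate, and the premises behind it, in the first place.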
However, the concerns need not be completely separate. AI Doomers (in the sense of existential risk) and what I'll call AI Gloomers (those who see AI as having a net negative economic, environmental, and social impact; if there is already a term for this other than Gloomer, feel free to let me know) both have an interest in reining in and regulating AI development. But of course I would think that. I'm an AI Doom-and-Gloomer who has come to see AI development in a negative light in both senses: it is currently causing the economic, environmental, and social upheaval that Gloomers are attempting to draw attention to (upheaval that is only going to get worse), while also carrying the potential existential risk that Doomers warn us about.
I think one issue with prioritizing Doomerism over Gloomerism is that the former is still only hypothetical and requires accepting the very marketing trumpeted by the AI developers: that AGI is possible, and perhaps even inevitable, using their current scaling approach. Gloomerism, on the other hand, is grounded in observing and measuring the sorts of real-world issues that Hao discusses in her book (she is much more in the Gloomer camp than the Doomer camp). While I would not think about my taxes if someone were pointing a gun at me, I do, in reality, worry more about my very real taxes than I do about a hypothetical future gun being pointed at me.
While I still think some level of Doomerism is warranted, I do want to make sure that it does not overshadow the more terra firma issues Gloomerism worries about. By my estimation, there are three likely Gloomy outcomes. Really two, with a third that's sort of a mixture of the other two.
The first is probably the least bad scenario. This is that AI hype really is just a bubble. Soon it will burst and cause the economy to crash, leading to a recession or depression that will probably be worse than what followed the 2008 housing bubble, compounding the negative impacts of our already struggling economy. Massive unemployment and rising prices will prompt western governments to implement corrupt measures that socialize the losses experienced by tech oligarchs, while the rest of the world must pick up the pieces of the lives those oligarchs shattered. Politicians and oligarchs in western countries will not let a good disaster go to waste, using the depression as an excuse to pass authoritarian measures at home that will be with us for decades to come, while simultaneously ramping up imperialist foreign policies and extractivism in order to subsidize the wealthy within their own borders. In this scenario, AI development is then set back by years or decades as people realize the promise of AGI and the doctrine of scaling were all hype and marketing. As time goes on, what happened either becomes a historical curiosity or, much more likely, is obfuscated by lies and revisionist history, with everyone blaming everyone else for what happened. In either case, our short attention spans quickly forget the lessons we should have learned, allowing a different AI bubble to begin inflating. Rinse and repeat.
The second scenario is that AI hype is not just a bubble, but the AGI hype is. AI slop runs rampant, the dead internet theory becomes a reality (if it isn't already), and deepfakes and scams and corruption and gambling and financial shenanigans abound on a scale never seen before. The rich can use AI to keep getting richer while the poor get poorer. Governments can use AI for massive surveillance programs while using misinformation, disinformation, and psyops to continue sowing doubt and fear and polarization in order to maintain power. The entire world is run by a handful of trillionaire technofeudal overlords who exploit the rest of the world for labor and resources to an extent that makes now look like the good old days. In this scenario, AI continues to develop by training on data produced within this dystopian world, which only accelerates the rate of distrust, polarization, and technofeudalist dominance. Maybe some Doomerist scenario still happens in the future, but even if it doesn't, the world becomes a drudging hellscape of serfdom for the vast majority of people.
The third scenario, as I said, is just a mix of the other two. There is a bursting of the AI bubble, resulting in the depression, but one AI company (or maybe a small handful) is bailed out and thus able to weather the depression while continuing to develop its AI technology. Perhaps these companies are nationalized (or, at the very least, put even more under the thumb of governments, voluntarily or not) and the technology is used to create a world similar to that described in the second scenario.
If we wanted a fourth, less Gloomy scenario, it would just be business as usual. The AI technology continues on pretty much as it does now, the bubble bursting causes a slight recession but not a depression, and the technology leads to only typical (or maybe only slightly higher) levels of fraud, corruption, and income inequality. To me, this scenario is unlikely because it presumes that things remain relatively static, but if forty years of life have taught me anything, it's that things rarely, if ever, remain static. Indeed, we already know that current AI technology is highly disruptive, even without leading to science fiction doomsday scenarios or cyberpunk dystopian scenarios. Even if the technology is hitting a wall in how much scaling can improve it, and even if there isn't a massive bubble burst that causes the technology to largely (though probably only temporarily) disappear from relevance, AI technology is already causing disruptions in the economy, the environment, and society. It is already capable of fueling an unprecedented mental health crisis and a crisis of trust, along with economic and environmental impacts that are still not well understood (in large part by design). To me, this fourth, less Gloomy scenario is basically just the third scenario but slower, leading us to the same outcome on a timescale of decades rather than years.
But, who knows. I’m not great at making predictions, so maybe I’m wrong. Besides, maybe these scenarios are just a different way to appeal to my fascination with science fiction storytelling – dystopian fiction rather than apocalyptic fiction. One can only hope.