
AGI: Fear and Loathing in Silicon Valley
Michele C. Tripeni
April 26, 2025
Keywords: AGI, AI Hype, Effective Altruism, Silicon Valley, Eschatology
Four years ago, one of the most prominent AI safety researchers warned: “This is not fear-mongering; we don’t have an adequate amount of fear in the AI researcher community — the amount necessary to ensure sufficient precautions.” (1, p. 15) Today, the landscape has shifted dramatically. The media ecosystem is saturated with discussions of AI and its advancement, from claims that Artificial General Intelligence is imminent to debates over its disruptive potential and catastrophic risks. In giving a broad platform to discourse around AGI—particularly to evangelists and the companies actively pursuing it—the media does a disservice to the public and often functions merely “as the press office of tech giants and their private interests.” (2) Both utopian and dystopian AGI narratives ultimately fuel the same hype cycle, empowering Silicon Valley venture capital. And while existential risks from AGI merit consideration, public resources and critical attention might be better spent reducing the real, present-day harms caused by narrow AI.
AGI risk and hype
The discourse around artificial general intelligence is deeply polarized. On one side, AI safety researchers point to scenarios in which an AGI could trigger a technological singularity, leading to existential risks or new forms of suffering. (3) On the other, attention focuses on AGI systems that do not necessarily surpass human intelligence but could still cause substantial disruption: enabling social manipulation, new types of warfare, or global power shifts. (3, 4)
Since the release of ChatGPT in 2022, discourse on AI and AGI has exploded, with public debate rapidly shifting toward AI safety and existential risks—though expert opinion remains divided. (5) At the same time, fuelled by rapid advances in large language models, an equally vocal camp promotes the idea that AGI is near and will disrupt every industry. (6) Neither side is free from vested interests, and both contribute to a public conversation shaped less by clarity than by hype.
AGI narratives
Studies have shown that negative, “dystopian” narratives currently dominate portrayals of AI, the main exception being “solutionism,” “a bright-siding alternative where AI would simply solve all humanity’s problems.” (7) These narratives are deeply influenced by societal expectations of the future, which in turn shape social action. (8) Moreover, “AI, with its false narratives, coupled with the human psyche that seems to inherently crave hype, is a susceptible nexus.” (9)
The result is a public conversation dominated by extremes—AGI is either elevated to a utopian “silver bullet” capable of solving all human problems or condemned as an existential threat, fuelling moral panic. These polarized portrayals risk distracting the public from understanding the actual capabilities of current AI systems, and from recognizing the immediate social, environmental, and political harms AI already enables. (7, 9)
Techno-futuristic eschatology
To better understand these dynamics, we can look at the ideological foundations of the people and companies that might profit from them. Silicon Valley’s obsession with AGI is deeply rooted in its cultural and ideological environment. According to Gebru and Torres (2024), this is broadly centred around the so-called TESCREAL bundle, a cluster of connected beliefs including “transhumanism, Extropianism, singularitarianism, (modern) cosmism, Rationalism, Effective Altruism, and longtermism.” (10)
These frameworks drive Silicon Valley’s vision of AGI as a transformative solution to existential challenges, reinforcing the region’s characteristic faith in technology as humanity’s saviour. Effective Altruism (EA), central to this ethos, encourages individuals to optimize their resources to maximize global impact, often through strategies like “earn to give,” where wealth generation is channelled into high-utility causes. This approach fuses capitalist ambition with moral self-justification, echoing what Paulina Borsook described over twenty years ago as a “technolibertarian culture of crypto-individualism.” (11)
Silicon Valley in particular favours longtermism, an influential offshoot of EA that focuses on humanity’s distant future and the mitigation of existential risks, particularly those posed by AGI. (12) Figures like Sam Altman and Elon Musk are closely associated with these ideas, embedding them within the missions of organizations such as OpenAI and Anthropic. OpenAI’s stated goal of ensuring AGI benefits all of humanity reflects such longtermist priorities, as does Anthropic’s emphasis on building safe and aligned AI systems.
So what?
As we have explored, the narratives around AGI are symptomatic of a broader cultural and ideological struggle, where Silicon Valley and its venture capitalists play a central role. The pervasive media hype around AGI—whether optimistic or dystopian—contributes to a distorted public understanding, driven by competing business interests, ideological commitments, and a desire for dominance in the technological and cultural zeitgeist. By elevating AGI to a status of existential importance, these narratives divert attention from the immediate harms already enabled by current AI systems, including exploitation, bias, and power concentration.
As argued by Gebru and Torres, one critical issue is the lack of scrutiny over why AGI is considered a desirable goal by many in the field. (10) Indeed, this techno-utopian belief in a system capable of solving humanity’s grandest challenges conveniently ignores technology’s historic tendency to exacerbate inequality and marginalization.
What’s more, the pursuit of AGI has already resulted in significant harms, from environmental degradation to the amplification of harmful ideologies. These real-world consequences are often downplayed or ignored in favour of speculative debates about superintelligence and its risks, even though the former are far more pressing. The ideological frameworks of TESCREAL, particularly longtermism, prioritize hypothetical futures over present-day concerns. Thus, in their effort to control the future, they risk reinforcing authoritarian tendencies under the guise of altruism. Moreover, the TESCREAL ideologies have clear ties to historical instances of authoritarianism and illiberal politics; as Troy warns, “The biggest risk AI poses right now is that alarmists will use the fears surrounding it as a cudgel to enact sweeping policy reforms.” (13)
In the ideological landscape of Silicon Valley, AGI is not merely a technical endeavour but a cultural project rooted in transhumanism, libertarianism, and effective altruism. These ideologies present a vision of the future in which technology, and those who control it, hold unrivalled power over humanity’s fate. And while this is framed as a benevolent mission, it is essential to contest this concentration of influence and capital, and the attendant risk of subordinating public welfare to private ambition.
—
The current discourse around AGI remains polarized between utopian promises and apocalyptic fears. This dynamic, fuelled by Silicon Valley’s vested interests, leaves little room for the nuanced skepticism necessary for informed decision-making. Even strong advocates of taking AGI risks seriously, such as Yampolskiy (1) and Baum (14), suggest that healthy skepticism is vital to maintaining intellectual integrity and preventing such narratives from being weaponized for political or financial gain.
We should not dismiss the potential risks of AGI outright but ensure that discussions remain grounded in reality. Ultimately, the AGI debate underscores the need to refocus on present challenges. The future of AI should not hinge on speculative fantasies of general intelligence but on the deliberate and equitable deployment of technology to benefit humanity as it exists today. Only by disentangling ourselves from the seductive myths of AGI can we begin to chart a more just and sustainable technological future.
1. YAMPOLSKIY, Roman V. AI Risk Skepticism. Online. 17 July 2021. arXiv. arXiv:2105.02704. [Accessed 13 January 2025].
2. SIGNORELLI, Andrea Daniele. L’intelligenza artificiale generale ha rotto [Artificial general intelligence has become tiresome]. Wired Italia. Online. 4 November 2024. [Accessed 16 January 2025]. Available from: https://www.wired.it/article/intelligenza-artificiale-generale-ricerca-miti/
3. EVERITT, Tom, LEA, Gary and HUTTER, Marcus. AGI Safety Literature Review. Online. 21 May 2018. arXiv. arXiv:1805.01109.
4. YAMPOLSKIY, Roman V. Taxonomy of Pathways to Dangerous AI. Online. 11 November 2015. arXiv. arXiv:1511.03246. [Accessed 16 January 2025].
5. STRICKLAND, Eliza and ZORPETTE, Glenn. The AI Apocalypse: A Scorecard. How worried are top AI experts about the threat posed by large language models like GPT-4? IEEE Spectrum. Online. 21 June 2023. Available from: https://spectrum.ieee.org/artificial-general-intelligence
6. SIEGEL, Eric. Elon Musk Predicts Artificial General Intelligence In 2 Years. Here’s Why That’s Hype. Forbes. Online. 10 April 2024. [Accessed 16 January 2025]. Available from: https://www.forbes.com/sites/ericsiegel/2024/04/10/artificial-general-intelligence-is-pure-hype/
7. CHUBB, Jennifer, REED, Darren and COWLING, Peter. Expert views about missing AI narratives: is there an AI story crisis? AI & SOCIETY. June 2024. Vol. 39, no. 3, p. 1107–1126. DOI 10.1007/s00146-022-01548-2.
8. BRENNEN, J Scott, HOWARD, Philip N and NIELSEN, Rasmus K. What to expect when you’re expecting robots: Futures, expectations, and pseudo-artificial general intelligence in UK news. Journalism. January 2022. Vol. 23, no. 1, p. 22–38. DOI 10.1177/1464884920947535.
9. KLARMANN, Noah. Artificial Intelligence Narratives: An Objective Perspective on Current Developments. Online. 18 March 2021. arXiv. arXiv:2103.11961. [Accessed 13 January 2025].
10. GEBRU, Timnit and TORRES, Émile P. The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence. First Monday. Online. 14 April 2024. [Accessed 16 January 2025]. DOI 10.5210/fm.v29i4.13636.
11. BORSOOK, Paulina. Cyberselfish: Ravers, Guilders, Cyberpunks, and Other Silicon Valley Life-Forms. Yale Journal of Law and Technology. Spring 2001. Vol. 4, no. 1.
12. ONGWESO, Edward Jr. OK, WTF Is ‘Longtermism’, the Tech Elite Ideology That Led to the FTX Collapse? Vice. Online. 23 November 2022. [Accessed 16 January 2025]. Available from: https://www.vice.com/en/article/ok-wtf-is-longtermism-the-tech-elite-ideology-that-led-to-the-ftx-collapse/
13. TROY, Dave. The Wide Angle: Understanding TESCREAL — the Weird Ideologies Behind Silicon Valley’s Rightward Turn. The Washington Spectator. Online. 1 May 2023. [Accessed 16 January 2025]. Available from: https://washingtonspectator.org/understanding-tescreal-silicon-valleys-rightward-turn/
14. BAUM, Seth D. Superintelligence Skepticism as a Political Tool. Information. 22 August 2018. Vol. 9, no. 9, p. 209. DOI 10.3390/info9090209.