
The dismal record of populist utopianism greatly informs my view of technological utopianism, at least as described by Charles T. Rubin in his lead essay. Consider: President Donald Trump’s ever-evolving tariff regime imagines a world where America can unilaterally reshape global commerce without meaningful downside consequences to the US economy. It’s a dreamtime where complex trade imbalances yield to simple solutions, supply chains reorganize quickly and more or less painlessly, and a manufacturing golden age returns through sheer force of presidential will. These protectionist utopians disregard modern realities: automation’s relentless march, intricate global interdependencies, and inevitable retaliatory measures. And like all utopian projects, theirs offers an appealingly simple narrative that glosses over messy complexities.
The result is hardly surprising. Trump’s long-held belief in the wonderworking power of trade barriers has run headlong into the wall of reality: corporate chaos, market meltdown, and rising recession odds. The pattern is depressingly predictable, as economists Rudiger Dornbusch and Sebastian Edwards demonstrated in their seminal 1991 paper on Latin American populism. Such governments, led by leaders who wish away any constraints on economic policy, initially disregard fiscal discipline while pursuing rapid growth. We then inevitably watch their experiments flounder as bottlenecks emerge and inflation accelerates. The end result is capital flight and economic collapse.
Techno-utopian thinkers, like populist would-be saviors, may not have an interest in (or knowledge of) real-world constraints, but that indifference is unlikely to be reciprocated. But let’s take a step back: What sort of world are today’s utopians actually imagining?
Well, maybe something like the one portrayed by Star Trek, which envisions a post-capitalist utopia where technology has eliminated scarcity, enabling humans to pursue self-improvement and exploration instead of working for survival. Resources are seemingly allocated without monetary exchange, and individuals contribute based on passion rather than necessity. (As billionaire investor Peter Thiel once put it, “I like Star Wars way better. I’m a capitalist. Star Wars is the capitalist show. Star Trek is the communist one. There is no money in Star Trek because you just have the transporter machine that can make anything you need.”) And the only evidence of politics of any sort concerns off-planet issues, via Earth’s role in the United Federation of Planets.
Or maybe it’s a post-governance future of the sort described in the 1976 film Network when the chairman of the fictional Communications Corporation of America outlines his vision of a post-politics, corporate-controlled world:
There is no America. There is no democracy. There is only IBM, and ITT, and AT&T, and DuPont, Dow, Union Carbide, and Exxon. Those are the nations of the world today. What do you think the Russians talk about in their councils of state—Karl Marx? They get out their linear programming charts, statistical decision theories, minimax solutions, and compute the price-cost probabilities of their transactions and investments, just like we do. We no longer live in a world of nations and ideologies, Mr. Beale. The world is a college of corporations, inexorably determined by the immutable bylaws of business. The world is a business, Mr. Beale. It has been since man crawled out of the slime. And our children will live, Mr. Beale, to see that … perfect world … in which there’s no war or famine, oppression or brutality. One vast and ecumenical holding company, for whom all men will work to serve a common profit, in which all men will hold a share of stock. All necessities provided, all anxieties tranquilized, all boredom amused.
Fictional scenarios aside, even in a world with superintelligent machines, there’s good reason to think there would still be a familiar need for economics and politics. Assume human-level artificial intelligence, or artificial general intelligence (AGI), can drive extraordinary productivity gains and greatly accelerate economic growth. Still, the fundamental problem that economics addresses—allocating limited resources among competing interests and purposes—would persist. Computers and robots may become vastly abundant, but land, energy, and raw materials will not. Resource allocation inevitably involves trade-offs, and trade-offs necessitate governance, writes University of Virginia economist Anton Korinek in his 2024 paper, “Economic Policy Challenges for the Age of AI.”
Human labor, though transformed, will also endure, according to Korinek. Several niches of the economy appear resilient to automation. Some derive from pragmatic limitations. Regulatory requirements for human doctors, for instance, will long outlast the technical capability to replace them. Others spring from deeper human preferences: authenticity in relationships, the excitement of watching human athletes compete, and religious traditions that demand human participation. AGI may replicate cognitive functions, but not human identity. Even in highly automated economies, humans will seek purpose, status, and meaning, often through forms of work, albeit redefined. The wisdom to manage such transitions comes through politics, not through even the most sophisticated algorithms.
How could there not be politics in such a world? An AGI transition would create unprecedented governance challenges, especially when it comes to economic policy. For example, income distribution would become particularly thorny if labor’s share of national income collapsed. Traditional mechanisms linking economic contribution to consumption, such as wage income, would generate extreme inequality, threatening social stability. Macroeconomic frameworks designed for labor-centric economies would require wholesale reimagining. Antitrust policy would contend with potentially unprecedented market concentration in AI development. As Korinek writes, “The rapid pace of AI development will necessitate swift and thoughtful adaptation by society. Policymakers may face a wide array of interconnected challenges.”
Far from making governance obsolete, AGI would make it more essential. The political questions of who owns what, who decides, and how benefits are distributed become more acute, not less. The technological capacity to produce abundance does not automatically create institutional capacity to distribute it wisely. Beyond economic challenges, leaders would face the task of aligning AI systems with democratic values without imposing homogeneous ethical frameworks. Superintelligent systems would raise questions about sovereignty, as traditional nation-state boundaries might become increasingly irrelevant in a world where algorithms operate across borders with unprecedented speed. Current governance structures, designed for human timescales, would appear woefully inadequate for managing entities operating at electronic speeds with potentially global impact.
Given my skepticism about a post-work, post-governance world, I’m not particularly concerned if that is the goal of Silicon Valley, broadly construed. First, I’m confident that the unavoidable constraints of reality will win out. As the Japanese author Haruki Murakami puts it, “In dreams you don’t need to make any distinctions between things. Not at all. Boundaries don’t exist. So in dreams there are hardly ever collisions. Even if there are, they don’t hurt. Reality is different. Reality bites. Reality, reality.”
Second, I’m not at all sure that the techno-utopianism described by Rubin is anything close to a universally shared vision among the key figures in the American technology sector, whether CEOs, technologists, or financiers. While one can no doubt find folks describing their goal of a Star Trek future, there are at least as many—my experience suggests they are the majority—who take a far more nuanced, and frankly conventional, view of AI’s future impact.
Far from predicting work’s obsolescence, many tech bosses like Alphabet/Google CEO Sundar Pichai explicitly dispute this notion, frequently noting that economic history shows technological revolutions creating more things for workers to do, on net, not fewer. Pichai is fond of referencing research showing that 60 percent of today’s jobs didn’t exist in 1940, stating unequivocally that “AI will drive job creation rather than eliminate opportunities.” It’s a theme I often encounter in my interactions with these folks. Even venture capitalist Marc Andreessen, among the most bullish AI advocates, flatly declares that technology “doesn’t destroy jobs and never will,” dismissing mass unemployment fears as ahistorical and reflecting the “Lump of Labor Fallacy.” Their actual predictions? Work transformation, not elimination.
Likewise, many tech leaders are obsessed with thinking about all the potential governance challenges, of the sort I mention above, that AGI would create. While they think advanced AI and robotics may well solve many big societal challenges, such as curing chronic diseases, creating abundant clean energy, and eliminating deep poverty, I detect little sign that they imagine a Network-like world in which technology delivers “all necessities provided, all anxieties tranquilized, all boredom amused.” The evidence, as I read it, suggests the people who will determine the future are not techno-anarchists, but pragmatic business leaders who are currently consumed by what the rules will look like in a world of AI rather than leapfrogging to a world where all the big issues have been solved.
Indeed, it’s really not that hard to imagine a world alive with super AI as well as politics. Last October, Metaculus, an online forecasting platform, and Convergence Analysis, a strategic forecasting firm focusing on AI, conducted a forecasting exercise in which 30 attendees—economists, AI policy experts, and forecasters—predicted various economic and technological milestones through 2030 as AI continues to rapidly advance. In the mildest scenario, AI merely enhances bureaucracy while humans retain decision-making authority. Move further along the spectrum, and governance becomes genuinely hybrid: “vetted independent civic-agents” help voters navigate political complexities while AI “removes bottlenecks on direct constituent input.” True, the most radical transformation sees politicians reduced to “hands and feet of AI systems,” as algorithms craft agendas that “maximize voters while staying somewhat true to ideology.” All the while, however, anti-AI political movements gain momentum. If your utopia is one where AI eliminates the need for human governance and political systems altogether, then these scenarios are dystopian, if more realistic.
I have no doubt there are plenty of utopians out there who think techno-solutionism will move us to a world of post-work and post-governance. But that is hardly a unanimous view and, further, an unlikely scenario on any relevant timescale.