Vol. 5, Issue 7: The AI Empire Problem
Spoiler Alert: We're At The AI Inflection Point
I’m not a sci-fi nerd.*
*That comes off as a weird defense against an accusation no one made, but I need a punchy sentence to start these things, so here we are.
I do love the original Star Trek series, which is worth noting, and the Twilight Zone. For a good long while, I was obsessed with the Twilight Zone*. In the NYC area, WPIX Channel 11 would play a Twilight Zone marathon every July 4th weekend, and at this point I’ve probably seen every episode 10 times over. The best sci-fi uses the future to analyze the present. The worst sci-fi is Battlefield Earth.**
*”IT’S A COOKBOOK” never fails with the right audience
**In case the Scientologists come after me, I want it known that I’m healthy and not at all suicidal
So I guess I am a bit of a nerd for sci-fi, because I also love Isaac Asimov’s Foundation novels.* It’s a trilogy, so you have to invest a bit of time to get through them, but they’re not super long either; you won’t be reading for the rest of your life. And if you don’t want to read them at all, but don’t want to use AI to summarize them, the Apple TV+ show is not super true to the novels but is probably one of the top 2 or 3 shows they’ve made.
*And his sideburns. RIP.
If you’re looking for a hyper-short summary of the novels, they’re about a mathematician who predicts the collapse of a galactic empire and creates a hidden plan to shorten the coming dark age. The catch is that the unpredictability of humanity may be the one force his equations can’t account for.
I spoke a lot a couple of weeks ago about the practical impact of AI on the future of work. This is more about how the systems we have in place are ill-equipped to deal with the pace at which a technology like AI evolves. In the absence of guardrails and a semblance of ethics, that mismatch is the kind of thing that incites entropy and ultimately sows the seeds that lead to the dismantling of order.
The Galactic Empire in Foundation spans millions of worlds. It believes it is permanent. It believes its bureaucracy creates order. It believes decline is something that happens to other civilizations.
Our world feels similarly stable today. Despite all the polarization, dysfunction, and noise, the general scaffolding of the modern world still stands: nation-states, central banks, universities, multinational corporations.
Bureaucracy slows while exponential systems speed up. The empire in Foundation believed in and relied on the centralization of control. AI by its very nature is decentralized and diffuses power. Those two things are at odds, and that conflict is probably the single biggest issue the future of humanity faces at scale in the next 5-10 years.
The core insight of the novels is that decline is mathematical, not dramatic. There’s no one thing that takes an empire down, but a series of predictable events that brings whole systems to their knees. Asimov called the science of this psychohistory, and its flag bearer in the Foundation novels is a mathematician named Hari Seldon.* Seldon’s discipline doesn’t predict individuals. It predicts masses. He models social momentum. And the great friction of the novel comes early and sets the stage for the narrative arc: the model predicts the fall of the Empire.
*The nearest equivalent to psychohistory that I can think of in the real world is moneyball, and the nearest equivalent to Hari Seldon is Billy Beane. It’s ongoing analysis that works in aggregate over the long haul, but the smaller the sample size, the more variability in the results. Which is why those early-2000s A’s teams did very well in the regular season relative to the amount they spent on their roster, but tended to fail in the playoffs.
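The aggregate-versus-small-sample point in that footnote is just the law of large numbers at work. A minimal sketch of it (the 60% win probability and the game counts are illustrative assumptions, not actual A’s numbers):

```python
import random

random.seed(42)

TRUE_WIN_PROB = 0.60  # a genuinely good team's per-game win probability (illustrative)
TRIALS = 10_000       # number of simulated seasons/series

def win_rate_spread(games: int) -> float:
    """Simulate many runs of `games` coin-flip games and return the
    standard deviation of the observed win rates."""
    rates = []
    for _ in range(TRIALS):
        wins = sum(random.random() < TRUE_WIN_PROB for _ in range(games))
        rates.append(wins / games)
    mean = sum(rates) / TRIALS
    variance = sum((r - mean) ** 2 for r in rates) / TRIALS
    return variance ** 0.5

long_haul = win_rate_spread(162)    # a full regular season
short_series = win_rate_spread(5)   # a short playoff series

# The same underlying team looks far noisier over 5 games than over 162,
# so regular-season results track true quality while playoffs are a coin toss.
print(f"162-game spread: {long_haul:.3f}")
print(f"  5-game spread: {short_series:.3f}")
```

Over 162 games the observed win rate clusters tightly around the true 60%; over 5 games it swings wildly, which is the psychohistory point: predictions hold for masses, not for individuals.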
Seldon says the Empire’s fall is inevitable not because of villainy, but inertia.
We’ve got plenty of villainy in our world, but villainy often arises from inertia. Things are good enough (or the bad stuff is happening to others), so we lose motivation to make them better. That’s human nature. And when someone comes along with motivation and takes advantage of that complacency (for good or for bad), they face little resistance until it’s too late. Ultimately, that leads to overextended bureaucracy (takes forever to do things), leaders focused on preservation, not adaptation (far more concerned with keeping their jobs than doing what they’ve been hired to do), technological stagnation disguised as sophistication, and arrogance toward peripheral worlds (difference of opinion leads not to conversation, but to being outcast for not adhering to the norm).
I’ll stop burying the lede: AI may be our psychohistorical inflection point.
Why do I think this? Well, look at where we are in the lifecycle. Institutions in general are built for linear change, but with AI, we’re facing exponential growth curves. And given how pervasive the letters A and I are right now, companies are racing to deploy it faster than even our already degraded social norms can absorb it. And governments don’t really understand it, so they’re either scrambling to regulate it without foundational knowledge of what it can do or they’re paralyzed by an inability to act.
It’s like trying to regulate television with printing press-era rules. The Empire in Foundation tried to regulate entropy. AI is entropy at scale.
Ineffective leadership tends to have a control reflex. The Emperors in the novels believe they can suppress the inexorable* march of psychohistorical trends and control the uncontrollable. They believe they can suppress dissent and outmaneuver historical forces by managing perception.
*I’ve been trying to get inexorable into a written sentence since I took the SATs, so let’s cross another thing off my bucket list
The overarching issue is that perception doesn’t reverse decline. Action does.
Like anything else, short-term political gains are far more expedient than long-term sustainable policy and reckoning with technological consequences. Why bother creating a framework for long-term success if you can create immediate gains for yourself, blame the long-term failure on your opponent, and win an election down the road? I recognize that’s a pretty cynical take, but it’s hard not to see it when you look at history.
The tension between world leaders attempting to centralize control just as information transfer and power is truly decentralizing is a completely destabilizing construct. And as systems strain, people retreat to smaller identity groups. This is the tribal turn.
Think about history. After the fall of the Roman Empire*, the world fragmented into smaller states. Think about post-colonial Africa, as the continent restructured itself along tribal lines within imperial boundaries. Think about post-financial-crisis populism. These are all historical disruptors causing massive realignment.
*I still don’t understand this “X is my Roman Empire” thing. I mean, I get what it means, I’m just not sure why that’s the thing, because I legitimately don’t know a single person obsessed with the Roman Empire. Now, if someone had made it “X is my Breaking Bad” or “X is my CrossFit” it would make sense to me because the people who like those things are OBSESSED with those things
AI is a disruptor. And it will cause uneven disruption of labor markets, create elite technocratic classes, and increase mistrust in institutions (which aren’t exactly breaking all-time highs in trust right now). All of this increases polarization, and polarization is a precursor to fragmentation. In Foundation, outer systems fall first and the center falls last. The big question I keep asking myself is: have our outer systems fallen already?
Oceans rise; empires fall*. They centralize and overextend, and then they collapse. And new orders emerge from the rubble. To be clear, that’s not an apocalyptic prediction. It’s just the cycle of history. But I don’t think there’s any question that AI is an accelerant to that cycle. And it will accelerate the collapse of existing hegemonies. What of American global dominance? Western institutional frameworks? Traditional labor structures? All at risk.
*Thanks, Lin-Manuel
But let’s take an optimistic turn here. Collapse isn’t only chaos; it’s reorganization. The point of the Foundation novels isn’t doom; it’s compression. Hari Seldon’s message isn’t only imperial collapse and an extended period of darkness. It’s “here’s a plan to reduce the period of darkness from 30,000 years to 1,000 years.” So if I’m being truly optimistic here, AI as an accelerant may actually shorten our own era’s cycle of burn.
AI will almost certainly create a dramatic increase in productivity. It will democratize knowledge. It may solve issues of coordination across entities. And it could even strengthen institutions. After all, going back to the novels and the guy who wrote them, Asimov was a proponent of rational planning. But his caution around technological progress was clear: arrogance is the enemy of true progress.
We’re not Hari Seldon. There’s no vault of holographic messages containing an explanation of why something happened according to the psychohistorical predictions. More pointedly: there’s no guaranteed arc.
Our responsibility is to ask the right questions if we want to create the right framework. Are we strengthening our institutions or are we hollowing them out? As power decentralizes, is it happening responsibly? Or, to be a little on the nose, are we building Foundations or defending Empires?
Every empire believes it’s the exception to history, that its exceptionalism means it won’t fall. But psychohistory suggests otherwise, and AI will change the order.
Can we shorten the darkness?
This was part 2 - there’s no part 3. I’ve got a few more of these that are half-done, and if I can finish them, I’ll publish next week. Otherwise, we’re probably going to take a quick break here to recharge the creative batteries.
Have thoughts about AI and any of this? I’d love to hear them because it’s my current obsession, not really from a usage perspective - although that’s interesting too - but from a historical trends perspective. Pop into my LinkedIn DMs or hit me up at geoff (at) jpegconsult (dot) com.
That’s all for this week. Until next time, friends.



