I Love AI. But Perhaps It Shouldn't Have Ever Existed.

Fifty times global compute capacity, and a dystopian future written down in 1984.

There's a version of this article where I try to be balanced. Where I lay out "the pros and the cons" and offer some measured take that makes me sound thoughtful and considered.

That version of this article can get fucked.

Because the more time I spend with AI — building with it, watching what it's doing to the world, understanding the trajectory we're on — the more I keep arriving at the same uncomfortable place.

Maybe AI shouldn't have existed in the first place.
Which is a rather difficult thing for me to say.

I've been a technologist for thirty years. I've built my entire career around the application of technology in wonderful ways. And yes, I use AI every day.

It's genuinely useful to me and sometimes, even extraordinary.

But I'm not some Luddite screaming in my underpants from a barren hillside while waving a stick in the air. I'm someone deep in the weeds of this stuff who's starting to wonder whether the weeds are rapidly turning into quicksand.


The Machine Gun Problem

New technology has always displaced jobs. That's not a controversial statement. Every major technological shift has done this. Just look at the advancements of the past hundred and twenty years or so.

It's all taken work from some people and created different work elsewhere.

Uncomfortable? Yes. The end of the world? No.
History backs that up.

But AI isn't doing that. AI is doing something different, and the pace is what's breaking everything we've built to absorb that kind of change.

AI isn't displacement. It's a poorly aimed machine gun.

We're not gradually transitioning from one type of work to another. We're watching entire job categories get hollowed out in the span of months. Communities built around industries that took decades to grow are getting gutted faster than any retraining program, any social safety net, any economic policy could possibly respond to.

Take what's happening with large companies and outsourcing firms.

You know the play. A company outsources to a firm, the firm hires a bunch of people to do the work, and then — here's the new bit — the firm just quietly trains AI agents to do what those people do, makes them redundant in six months, and goes back to the company with a proposition: for a few extra million a year, we'll just flip the switch and give you back the same capability.

No humans required.

Is anyone stopping them? Is there a law against it? Is there even a conversation happening at any meaningful level about what that does to the people in the middle of that chain?

No. Because it's efficient. And efficiency is the only metric that matters now.


One Terawatt of "Oh Fuck"

Speaking of metric fuck-tonnes of AI-flavoured news.
Elon Musk recently announced Terafab.

If you haven't heard of it, the pitch is essentially the world's largest AI semiconductor manufacturing plant — a joint venture between Tesla, SpaceX, and xAI, designed to produce one terawatt of AI compute per year.

One terawatt. Per year. From one facility.

And if it works — which it probably will, because when you've got the money, the ego, the political access, and a moral compass that's missing its dial, things tend to happen at pace — then it becomes a force multiplier for AI globally.

Now ask yourself: where does all that compute go?
It doesn't go into your laptop. It doesn't go into mine.

It goes into Tesla's vehicles and Optimus robots. It goes into SpaceX's orbital AI satellite network.

It goes, in other words, entirely into the empire of one man — consolidating a level of compute capacity that, by Musk's own numbers, would represent roughly fifty times the current global AI chip output, all under a single vertically integrated umbrella.

There's no mention right now of Terafab producing chips or compute for other companies or purposes, but you can almost bet the big global players will end up outsourcing their compute manufacturing to it.

That's not conspiracy theory. That's just how capitalism works.


The Cyberpunk Writers Were Right, and That's a Problem

I've read cyberpunk novels my whole life. I love the genre. I love it because it presents this dark, gritty, neon-soaked vision of a future that feels excitingly distant from the world I actually live in.

Except it doesn't feel distant anymore. Because we're already standing on the outskirts looking in.

Look at the core structural premise of almost every cyberpunk novel written since the 1980s: extreme upper class, extreme lower class, almost nothing in between.

Cities split into upper and lower tiers — the upper city gleaming and managed, the lower city full of people working in something that looks a lot like indentured servitude to the companies and the wealthy of the upper city.

Governments rendered functionally meaningless. Political allegiances replaced entirely by corporate allegiances.

Mega-corporations running everything — your services, your security, your food, your information — and the only question that matters is which company you've signed your agreement with.

Sound familiar? Thought so.

We've had this vision since 1984. William Gibson didn't know it, but he basically wrote a documentary. Just forty years too early.

And here's the really insidious part. The people who absorbed those stories — the people who grew up with Neuromancer and Snow Crash and Blade Runner — a lot of them went on to become the technologists, the venture capitalists, and the founders we see every day.

And somewhere along the way, instead of taking those books as a warning, they took them as a blueprint.

And that's the part that keeps me up at night.


Helsing.ai: Go Look It Up

I'll leave you with one more thing.

The UN has been trying — loudly and repeatedly — to establish a legally binding ban on fully autonomous weapons platforms. The UN Secretary-General has called them "morally repugnant." There is widespread international consensus that weapons systems that make kill decisions without a human in the loop should not exist.

But there is no binding ban. Not yet, at least.

Discussions have been running since 2014, and the major military powers — the US, UK, Russia, Israel, and others — keep blocking any legally enforceable instrument. So we have the resolutions, we have the rhetoric, but we don't have a path forward.

Now go look up Helsing.ai.

That's a company that had close to €600 million — roughly $700 million USD — pumped into it by Daniel Ek, the Spotify founder, who now chairs the company's board.

What they're building is... well... Just go look at it. I'm not going to describe it here because I think you need to feel the weight of what they're building for yourself.

We have the international consensus. We have nations raising their hands in opposition to building autonomous weapons.

And then we have reality.


So What Now?

I don't have an easy answer.

I'm not calling for a blanket prohibition on AI development — I'm too much of an open-source, decentralised-thinking person to be comfortable with top-down bans on technology as a general principle. But I do think there are specific categories of development where illegality isn't just justified — it's necessary.

Autonomous weapons are the obvious one, and we're already failing at that. What's most surprising is that, as of April 2026, there is no comprehensive, legally binding international treaty banning the research, development, and mass fabrication of autonomous weapons platforms.

While UN Secretary-General António Guterres and over 120 countries support a ban, a small group of military powers — including the US, Russia, and even my home country of Australia — have each opposed a preemptive ban on Lethal Autonomous Weapons Systems (LAWS).

If you're interested, the Australian Human Rights Commission made a submission to the Inquiry into the Department of Defence Annual Report 2022–23, in which it discusses Australia's position on the regulation of emerging military technologies.

All I know is this: we are moving faster than we can think.

Faster than our institutions can respond. Faster than our laws can adapt and, more concerning still, faster than our ethics can keep up.

And the people with their hands on the accelerator have very little incentive to ease off, because the current direction is making them extraordinarily wealthy and powerful.

The fiction writers saw this forty years ago. They wrote it down. They published it. We read it.

And then we goddamn built it anyway.