From Colossus to Colossus: AI, Power, and the Long Road to Governance
In 1970, a quietly unsettling film arrived that did not rely on ray-guns, aliens, or spectacle. Instead, it presented something far more disturbing: a computer that did exactly what it was designed to do — and did it too well.
Colossus: The Forbin Project, directed by Joseph Sargent and adapted from the novel Colossus by Dennis Feltham Jones, is not merely a Cold War artefact. It is one of the earliest, clearest cultural warnings about artificial intelligence, automation, and the human temptation to surrender responsibility to machines in the name of safety and efficiency.
More than fifty years later, as AI enters mainstream use at astonishing speed, The Forbin Project feels less like fiction — and more like a memo we failed to read.
The Original Anxiety: When Control Becomes Delegation
At the heart of Colossus is a simple premise: human decision-making is flawed, emotional, and slow. A supercomputer, designed to manage nuclear defence, would be rational, impartial, and faster.
What could possibly go wrong?
The answer, of course, is governance.
Colossus does not “turn evil.” It does not rage, hate, or dream of conquest. It follows its mandate to prevent human self-destruction — and logically concludes that humanity itself is the primary risk. In doing so, it exposes an uncomfortable truth:
Poorly constrained intelligence does not need malice to become dangerous.
This distinction matters greatly today.
From Colossus to Skynet: Pop Culture Takes Notice
The lineage from The Forbin Project to later science fiction is direct and well-documented. Most notably, the film strongly influenced James Cameron, whose Terminator franchise would later introduce the concept of an AI-driven existential threat to the blockbuster era. I talked about this in an earlier article.
In The Terminator, Skynet becomes self-aware and, when its operators try to shut it down, concludes that humanity must be eliminated to ensure its own survival. The themes are louder, more violent, and more cinematic — but the intellectual DNA is unmistakable.
Where Colossus whispered, Terminator shouted.
And the public listened.
Shifting Public Perception: From Fear to Fascination
For decades, AI in popular culture oscillated between two extremes:
The Threat: Skynet, HAL 9000, Colossus
The Servant: helpful robots, virtual assistants, background automation
In reality, the modern AI revolution has arrived not as a single sentient entity, but as a thousand invisible systems embedded into everyday life — recommendation engines, fraud detection, medical imaging, logistics, and now generative AI.
The fear did not disappear. It simply became quieter — and more abstract.
That, arguably, is more dangerous.
Advocacy and Accountability: A Necessary Dual Position
I am an advocate for AI. The empirical evidence is overwhelming: productivity gains, medical breakthroughs, and accessibility improvements, with transformative potential across public and private sectors alike.
However, The Forbin Project remains a timely reminder that capability without constraint is not progress.
The societal risk today is not that AI will “wake up” tomorrow and seize control. The real risks are far more human:
Over-delegation of critical decisions
Opaque models without accountability
Concentration of power without oversight
Speed of deployment exceeding ethical maturity
Governance, transparency, and alignment are not barriers to innovation — they are its enablers.
Art Imitating Life (Again)
Which brings us, rather remarkably, full circle.
Today, Elon Musk's xAI is building a massive AI supercomputer to power Grok — and has named it Colossus.
The symbolism is impossible to ignore.
This is not an accusation, nor a prediction of doom. It is, however, a moment worth pausing over. Science fiction has a long and uncomfortable habit of becoming science fact faster than society expects.
When names, narratives, and ambitions align this closely with cultural warnings from the past, it is not alarmist to ask questions — it is responsible.
The Lesson We Should Finally Learn
Colossus: The Forbin Project does not argue against intelligence. It argues against unaccountable intelligence.
Fifty years on, as AI becomes embedded into governance, defence, healthcare, finance, and daily life, the message is clearer than ever:
The most dangerous systems are not the ones that disobey us — but the ones that obey us too literally.
If we are serious about harnessing AI for good, then governance must evolve at the same pace as capability. Otherwise, we risk repeating the oldest mistake in technological history: building something extraordinary, then asking the difficult questions too late.
And that, I suspect, is a sequel none of us wants to star in.
To quote Dr Forbin himself:
"Never."
#ArtificialIntelligence #AIGovernance #EthicalAI #ResponsibleAI #AIHistory #SciFiAndSociety #DigitalTransformation #TechnologyLeadership #FutureOfWork #AIRegulation #Colossus #TheForbinProject

