It has been 40 years since ‘The Terminator’ captivated audiences with its chilling portrayal of a future dominated by self-aware machines. In the film, machines that gain sentience trigger a global nuclear war that wipes out billions, and Arnold Schwarzenegger plays the relentless robotic assassin sent back in time to eliminate a future resistance leader’s mother.
While this scenario seems purely fictional, experts and public figures such as Elon Musk warn that humanity could face dire consequences from artificial intelligence (AI). But just how close are we to this potential apocalypse?
Experts believe a real-life version of the Terminator is unlikely to emerge in our lifetime. Natalie Cramp, a partner at JMAN Group, stated that while anything is possible, we are far from achieving the level of robotics depicted in the film.
She emphasised that humanoid robots are not the most immediate concern.
Instead, the current risks come from machines already in use, such as drones and self-driving cars. Cramp noted, "Everyday objects and infrastructure could pose greater risks—like a malfunctioning self-driving car or a failing power grid."
Mark Lee, professor of AI at the University of Birmingham, echoed these sentiments.
He warned that a Terminator-style apocalypse would only occur if a government irresponsibly allowed AI to control national defence. Thankfully, he believes no nation would be reckless enough to do so.
Lee also pointed out that the more pressing danger lies in the algorithms that drive today's AI systems, which already shape everyday life by influencing decisions about jobs, loans, and more.
He highlighted military applications as another area of concern, particularly AI-guided missile systems and drones.
The challenge, according to Lee, is establishing an ethical framework for AI usage. While the Western world may agree on guidelines, there is no guarantee that other nations will follow suit.
In conclusion, while the fear of a Terminator-style takeover captures the imagination, experts agree that the more immediate risks come from technologies already in use. Ensuring the responsible development and deployment of AI will be crucial as we navigate this evolving landscape.