40 years later, The Terminator still shapes our view of AI



Countries, including the US, specify the need for human operators to “exercise appropriate levels of human judgment over the use of force” when operating autonomous weapon systems. In some instances, operators can visually verify targets before authorizing strikes and can “wave off” attacks if situations change.

AI is already being used to support military targeting. Some argue this is even a responsible use of the technology, since it could reduce collateral damage. This idea evokes Schwarzenegger’s role reversal as the benevolent “machine guardian” in the original film’s sequel, Terminator 2: Judgment Day.

However, AI could also undermine the role human drone operators play in challenging recommendations by machines. Some researchers argue that humans tend to defer to whatever computers say, a tendency sometimes described as automation bias.

“Loitering munitions”

Militaries engaged in conflicts are increasingly making use of small, cheap aerial drones that can detect and crash into targets. These “loitering munitions” (so named because they are designed to hover over a battlefield) feature varying degrees of autonomy.

As I’ve argued in research co-authored with security researcher Ingvild Bode, the dynamics of the Ukraine war and other recent conflicts in which these munitions have been widely used raise concerns about the quality of control exerted by human operators.

Ground-based military robots armed with weapons and designed for use on the battlefield might call to mind the relentless Terminators, and weaponized aerial drones may, in time, come to resemble the franchise’s airborne “hunter-killers.” But these technologies don’t hate us as Skynet does, and neither are they “super-intelligent.”

However, it’s crucially important that human operators continue to exercise agency and meaningful control over machine systems.

Arguably, The Terminator’s greatest legacy has been to distort how we collectively think and speak about AI. This matters now more than ever, because of how central these technologies have become to the strategic competition for global power and influence between the US, China, and Russia.

The entire international community, from superpowers such as China and the US to smaller countries, needs to find the political will to cooperate—and to manage the ethical and legal challenges posed by the military applications of AI during this time of geopolitical upheaval. How nations navigate these challenges will determine whether we can avoid the dystopian future so vividly imagined in The Terminator—even if we don’t see time-traveling cyborgs any time soon.

Tom F.A. Watts, Postdoctoral Fellow, Department of Politics, International Relations, and Philosophy, Royal Holloway University of London. This article is republished from The Conversation under a Creative Commons license. Read the original article.
