We May Have the First Case of a Robot Deliberately Killing Humans

June 1, 2021

A report by the UN concerning the conflict in Libya has thrown up a rather disturbing piece of information. According to the document, in early 2020 the world may well have seen the first instance of an artificially intelligent (AI) weapon hunting down and killing targets without a human in the loop.

The use of drones by both sides has been well reported. In March and April 2020, Turkish UAVs supporting the Government of National Accord – the GNA – proved decisive in breaking the resistance of forces affiliated with rebel General Khalifa Haftar. But these were conventional models that require a pilot to remote-control the UAV and engage targets.

The UN report states that another type was also deployed in the war: the STM Kargu-2 loitering munition. And this is a hugely different weapon.


While other drones and UAVs retain a human operator in the loop who provides the analysis and decision-making faculties, the Kargu-2 can act completely autonomously. Essentially a quadcopter fitted with a fragmentation charge for use against personnel in the open, the Kargu uses real-time image processing and machine-learning algorithms to search for and attack targets of its own volition.

In essence, an operator sends it off to a target area and the weapon can make its own decisions on what, where and when to attack.
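To make that distinction concrete, here is a minimal, purely hypothetical sketch of the difference between a remote-piloted drone and an autonomous one. Everything in it – the Detection type, the confidence threshold, the operator prompt – is invented for illustration and bears no relation to STM's actual software; the point is simply where the final decision sits.

```python
# Hypothetical sketch: "human in the loop" vs fully autonomous engagement.
# All names, thresholds, and logic are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # what the onboard classifier thinks it sees
    confidence: float  # classifier confidence, 0.0 to 1.0

def operator_confirms(det: Detection) -> bool:
    """Stand-in for a remote human operator reviewing the video feed."""
    answer = input(f"Engage {det.label} (confidence {det.confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def decide_to_engage(det: Detection, autonomous: bool, threshold: float = 0.9) -> bool:
    if det.confidence < threshold:
        return False                  # below threshold: never engage
    if autonomous:
        return True                   # machine decides alone, no human review
    return operator_confirms(det)     # human in the loop: machine only suggests

# Conventional remote-piloted UAV: a person makes the final call.
decide_to_engage(Detection("vehicle", 0.95), autonomous=False)

# The mode described in the UN report: the call is entirely the machine's.
decide_to_engage(Detection("vehicle", 0.95), autonomous=True)
```

Nothing about that second call is exotic as software; what is new is fielding it in a weapon.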


Though the UN cannot confirm definitively that the Kargu was used against human targets during the Libyan civil war, it is certain that shipments of these weapons were sent to the country, and the UN considers it likely they were used. If so, this is probably the first time that a robot has autonomously attacked and killed human beings.

It should be noted that the Kargu has very likely been used in other conflicts since, notably by Azerbaijan against Armenia, in Syria, and possibly within Turkey itself. So we have likely moved from the realm of science fiction to science fact regarding the use of AI weapons.

Now, it must be stated that the use of advanced computers to determine whether to attack a target is not exactly new. As an example of this, let us use the US Navy’s Mk.60 CAPTOR mine.

Developed during the Cold War, this weapon was intended to be placed in areas where Soviet submarines were expected to transit or patrol. The mine was programmed to recognise the signature of these vessels and if one approached would fire a Mk.46 homing torpedo at it.

This pattern has marked most high-technology autonomous weapons to date: a highly selective targeting band in a specific environment. In other words, a Mk.60 makes its decision to attack based on an extremely limited set of criteria, which simplifies its choice and should therefore reduce the possibility of attacks on incorrect targets.

Arguably, this is broadly the same thing as the Kargu – a machine making a decision on when to attack humans. But given the highly selective nature of such a weapon's targeting parameters, it is probably more accurate to think of the computers on a Mk.60 as an incredibly complex fuzing system.
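A rough sketch in code may help show why the two feel so different. Both functions below are invented for illustration – the signature values, class labels, and classifier are all hypothetical – but the shape of the logic is the point: one is an explicit, auditable membership test, the other delegates the judgement to an opaque learned model.

```python
# Hypothetical contrast between the two kinds of "decision".
# Signature values, labels, and the classifier are all invented.

from typing import Tuple

# CAPTOR-style logic: a fixed test against a short, explicit list of
# signatures in one environment. However elaborate the electronics,
# this is functionally a fuze.
KNOWN_HOSTILE_SIGNATURES = {("twin-screw", "low-frequency-tonal")}

def captor_style_trigger(signature: Tuple[str, str], armed: bool) -> bool:
    return armed and signature in KNOWN_HOSTILE_SIGNATURES

# Kargu-style logic: a learned model scoring whatever the camera sees.
# The "criteria" are millions of opaque trained weights, not a list
# anyone can audit line by line.
class ToyClassifier:
    """Stand-in for an onboard image classifier (entirely hypothetical)."""
    def classify(self, frame: bytes) -> Tuple[str, float]:
        return ("person", 0.93)  # canned answer for illustration

def kargu_style_trigger(model: ToyClassifier, frame: bytes,
                        threshold: float = 0.9) -> bool:
    label, confidence = model.classify(frame)
    return label == "person" and confidence >= threshold

print(captor_style_trigger(("single-screw", "broadband"), armed=True))  # False
print(kargu_style_trigger(ToyClassifier(), b"<video frame>"))           # True
```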

Weapons like the Kargu and its ilk seem vastly different. These robots have much broader targeting parameters and the autonomy to act as they see fit. As a result, they are going to kill more people in the future; I'm not foolish enough to think this genie is going back into the bottle.

But this raises questions about how robust the machine-learning systems actually are. Currently, despite years of effort and huge expense, no country has permitted the full-scale use of self-driving cars. Though progress in this field of AI is ongoing, and likely soon to become reality, governments are, quite wisely, being thorough in investigating and legislating the issue. This caution has proven justified, as many of the trials conducted over the years have demonstrated how machine learning can be confused and even erratic.
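To illustrate the kind of fragility those trials keep exposing, here is a toy numerical example of an "adversarial" input: a linear classifier whose decision is flipped by a tiny, targeted nudge to its input. The model and numbers are invented, but real attacks on image classifiers exploit exactly this effect, at far higher dimension.

```python
# Toy demonstration of adversarial fragility in a linear classifier.
# Model, data, and constants are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)   # weights of a toy linear classifier
x = rng.normal(size=1000)   # an input the model scores

score = float(w @ x)        # positive -> class A, negative -> class B

# Nudge every component of x slightly, in exactly the direction that
# hurts the classifier most (against the sign of each weight). The
# per-component change is tiny, but across 1000 dimensions it is
# guaranteed here to flip the decision.
eps = 1.5 * abs(score) / np.abs(w).sum()
adversarial = x - np.sign(score) * eps * np.sign(w)
new_score = float(w @ adversarial)

print(f"per-component nudge: {eps:.4f}")
print(f"original decision:  {'A' if score > 0 else 'B'} (score {score:+.1f})")
print(f"perturbed decision: {'A' if new_score > 0 else 'B'} (score {new_score:+.1f})")
```

If a change that small can flip a classifier's output, the question of what an armed quadcopter's classifier does with smoke, camouflage, or a raised pair of hands is not academic.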

But while the comparatively mundane – though admittedly dangerous – task of driving has been subject to rigorous testing, the decision to field robots that hunt down and kill people appears to have just… happened.

Does anyone believe that these sorts of weapons have had a fraction of the time or effort expended on them that self-driving cars have? And yet they are now a fact of warfare, and their use will only grow.

Perhaps I am being too delicate? After all, atrocity, whether deliberate or accidental, is a day-to-day fact in conflicts all over the world.

But this is different. A human can be held accountable for his actions – even if all too often that does not occur.

But a robot? It is just doing what we have built it to do.

I suspect that I will be writing an article at some point reporting how one of these weapons slaughtered a party of innocents.

I fear that said article will come soon.

Links:

https://undocs.org/S/2021/229

https://www.stm.com.tr/en/kargu-autonomous-tactical-multi-rotor-attack-uav
