We May Have the First Case of a Robot Deliberately Killing Humans

June 1, 2021



A report by the UN concerning the conflict in Libya has thrown up a rather disturbing piece of information. According to the document, in early 2020 the world may well have seen the first occasion of an artificial-intelligence (AI) weapon hunting down and killing targets without a human in the loop.

The use of drones by both sides has been well reported. In March and April 2020, Turkish UAVs supporting the Government of National Accord – the GNA – proved decisive in breaking the resistance of forces affiliated to rebel General Khalifa Haftar. But these were conventional models that require a pilot to remote-control the UAV and engage targets.

The UN report states that another type was also deployed in the war: the STM Kargu-2 loitering munition. And this is a hugely different weapon.


While other drones and UAVs retain a human operator in the loop who provides the analysis and decision-making faculties, the Kargu-2 can act completely autonomously. Essentially a quadcopter fitted with a fragmentation charge for use against personnel in the open, the Kargu uses real-time image processing and machine-learning algorithms to search for and attack targets of its own volition.

In essence, an operator sends it off to a target area and the weapon can make its own decisions on what, where and when to attack.


Though the UN cannot definitively confirm that the Kargu was used against human targets during the Libyan civil war, it is certain that shipments of these were sent to the country, and the UN considers it likely they were used. If so, this is probably the first time that a robot has autonomously attacked and killed human beings.

It should be noted that the Kargu has highly likely been used in other conflicts since, notably by Azerbaijan against Armenia, in Syria, and possibly within Turkey itself. So we have likely moved from the realm of science fiction to science fact regarding the use of AI weapons.

Now, it must be stated that the use of advanced computers to determine whether to attack a target is not exactly new. As an example of this, let us use the US Navy’s Mk.60 CAPTOR mine.

Developed during the Cold War, this weapon was intended to be placed in areas where Soviet submarines were expected to transit or patrol. The mine was programmed to recognise the signature of these vessels and if one approached would fire a Mk.46 homing torpedo at it.

This pattern has marked most high-technology autonomous weapons to date: a highly selective targeting band in a specific environment. A Mk.60 makes its decision to attack based on an extremely limited set of criteria, which simplifies the choice and therefore should reduce the possibility of attacks on incorrect targets.

Arguably, it is the same thing broadly as the Kargu – a machine making a decision on when to attack humans. But due to the highly selective nature of the targeting parameters of such a weapon, it’s probably more accurate to think of the computers on a Mk.60 as an incredibly complex fusing system.
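To make that contrast concrete, a CAPTOR-style decision can be caricatured as a single range check. The band values and function below are purely illustrative inventions for this sketch, not the mine's actual logic:

```python
# Purely illustrative sketch: a narrow, CAPTOR-style trigger reduces the
# attack decision to one auditable comparison against a preset band.
# The acoustic-signature band below is an invented example value.

CAPTOR_BAND_HZ = (140.0, 180.0)  # hypothetical target-signature band

def should_fire(signature_hz: float) -> bool:
    """Fire only if the detected signature falls inside the preset band."""
    low, high = CAPTOR_BAND_HZ
    return low <= signature_hz <= high

print(should_fire(160.0))  # inside the band: engage
print(should_fire(60.0))   # outside the band: ignore
```

A weapon like the Kargu, by contrast, replaces this one-line predicate with an image classifier whose failure modes are far harder to enumerate, which is why the "complex fusing system" framing breaks down.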

Weapons like the Kargu and its ilk seem vastly different. These robots have much broader targeting parameters and the autonomy to act as they see fit. As a result, they are going to kill more people in the future; I am not foolish enough to think this genie is going back into the bottle.

This raises questions about how robust the machine-learning systems actually are. Currently, despite years of effort and huge expense, no country has permitted the full-scale use of self-driving cars. Though progress is ongoing in this field of AI, and likely soon to become reality, governments are, quite sensibly, being thorough in investigating and legislating the issue. That caution has proven wise, as many of the trials conducted over the years have demonstrated how machine learning can be confused and even erratic.

But while the comparatively mundane – though admittedly dangerous – task of driving has been subject to rigorous testing, the decision to field robots that hunt down and kill people appears to have simply… happened.

Does anyone believe that these sorts of weapons have had a fraction of the time or effort expended on them that self-driving cars have? And yet they are now a fact of warfare, and their use will only grow.

Perhaps I am being too delicate? After all, atrocity is a day-to-day fact in conflicts all over the world, either deliberately or accidentally.

But this is different. A human can be held accountable for his actions – even if all too often that does not occur.

But a robot is just doing what we have built it to do.

I suspect that I will be writing an article at some point reporting how one of these weapons slaughtered a party of innocents.

I fear that said article will come soon.






Ed Nash

Ed Nash has spent years traveling around the world. Between June 2015 and July 2016 he volunteered with the Kurdish YPG in its battle against ISIS in Syria; his book on his experiences, Desert Sniper, was published by Little, Brown in September 2018.


1 Comment
1 year ago

You raise the subject of AI and self-driving cars… vehicles. The problem is, unlike the world of fantasy (e.g. Asimov's famous Three Laws), we are still very far from electronic intelligence. To perform many of the tasks needed by either a Kargu or a Mack Truck, we need a processor about the size of a pigeon brain or a human brain, with the computational capacity of either. In addition, we need information-gathering devices — eyes — with human or avian capabilities.

The eye is a truly amazing organ. Roger N. Clark (Clarkvision) postulates it has a resolution of 576 megapixels, but with caveats. We don’t even approach that with any known camera.

Self-driving vehicles need one thing above all: a method of finding what the road is, and of staying off the rough stuff. Without clear signposting, not to mention navigational equipment, there is no self-driving vehicle.

So. Kargu, its derivatives and ancestors. The Captor mine depended on a grossly simplified sensory system designed to minimise “friendly fire”, and on carefully chosen strategic placement. There can be no doubt that the Kargu is similarly… crippled… if only by limitations on flight duration or geographical location. We don’t know exact details of its AI platform, but we can confidently assume it will be limited as much by its power supply as by processor and RAM capacity. AI sufficient to do the intended task requires a massive computational solution and enormous energy sources. In perspective, it is generally held by biologists that any sentient animal is no more than a support mechanism for its brain — and we all work continuously, day and night, to feed the monster.

So in short, I do not fear the “Rise of the Robots”. I do fear the Natural Intelligences that bedevil our fragmented attempts at civilisation.