MOSCOW, 21 Dec 2021, RUSSTRAT Institute.
Routine meetings of diplomats rarely attract public attention. The meeting that ended on December 17 in Geneva, however, was an exception: it drew the attention of the media and human rights activists around the world. No wonder: nothing less than the rules of mankind’s “war against androids” were on the table.
That, at least, is roughly how the world press has presented the Sixth Review Conference of the States Parties to the Convention on “Inhumane” Weapons, held from December 13 to 17 under the auspices of the UN. There, diplomats from 125 countries, including Russia, tried unsuccessfully to agree on whether to bring LAWS – lethal autonomous weapons systems – under the convention.
At the same time, many foreign commentators stressed that it was Russia that took the clearest position at this stage against any binding treaty prohibiting the use of weapons not controlled by man. “Moscow is on the side of killer robots! The Russians are preparing an invasion of the machines!” – a ready-made plot for the media, is it not?
The overheated imagination of the layman, confronted with this thorny problem, readily conjures phantasmagoric scenes from The Terminator: walking combat machines destroying everything in their path. Less often recalled is the fact that a number of other countries, from the USA to Australia and from Israel to China, hold the same position, only without articulating it as clearly. And recalled even less often is that, for the moment, the threat is frankly far-fetched. But first things first.
“Unacceptable and disgusting”
The Convention on “Inhumane” Weapons was born in 1980 and was ratified by the Soviet Union three years later. It was designed to minimise excessive suffering among enemy soldiers from certain types of weapons, as well as their indiscriminate effects on civilians. Within the framework of the convention, five protocols were adopted prohibiting or restricting the use of non-detectable fragments, mines and booby traps, incendiary weapons such as napalm, blinding lasers, and explosive remnants of war. Moscow has signed all these protocols.
The parties to the convention hold annual meetings; once every five years, as now in Geneva, they gather for review conferences to decide what else to ban. Agreement is rarely reached: each time, humanitarian considerations come into conflict with the legitimate defence interests of particular states.
There is another reason for disputes: demands to “ban everything” tend to come from countries with an undeveloped defence industry or from those threatened by no one. Yet, as Russian diplomats have repeatedly noted, another problem is far more important: even the ratified protocols are not observed by everyone, and not always.
The issue of lethal autonomous weapons systems (LAWS) has been discussed at such meetings since 2014, and for the last four years it has been examined in a discussion format at sessions of the Group of Governmental Experts. There, besides Russia, a number of other states oppose a complete ban on lethal autonomous weapons: the USA, Great Britain, France, Germany, Israel, South Korea, Japan – almost all the main developers of modern weapons systems. And this is explained not only by the fear of falling hopelessly behind in the arms race.
Let’s listen to the arguments of the parties. Countries such as Brazil and Austria, which advocate a complete ban on LAWS, object to the very concept of “killer robots” that independently select a target on the battlefield and decide whether or not to destroy it. As an example, they cite the sensational incident in Libya, where in March 2020, according to a UN report, a Turkish Kargu-2 quadcopter attacked, on its own and without contact with an operator, retreating units of forces loyal to Khalifa Haftar. Another cited case of the military use of artificial intelligence: in May 2021, AI allegedly controlled a swarm of Israeli drones in an operation against Hamas.
“Machines that have the right to kill without human intervention are politically unacceptable, morally repugnant and should be banned by international law,” summed up UN Secretary-General Antonio Guterres to applause from Amnesty International and Human Rights Watch.
And Russia shares this humanistic message. Human control over robots is explicitly spelled out in the Russian Concept for the development of regulation of relations in the field of artificial intelligence and robotics technologies through 2024. But before banning something, it would be good for our partners to clarify properly what is being discussed.
What if it were carrying ammunition?
Alas, supporters of a complete ban on LAWS are ready to throw the baby out with the bathwater. If they had their way, they would ban every combat system whose advanced AI frightens them: from missile-defence interceptors and fifth-generation fighter jets to consumer drones – even though the algorithms of all of these depend entirely on the human mind.
To begin with, it is not at all clear how the adherents of prohibitive measures intend to distinguish combat robots from civilian ones. If an unmanned vehicle is packed with explosives and directed at people, is that a combat use or not? And there are, for example, harmless military drones that autonomously clear mines from terrain but are theoretically capable of accidentally blowing someone up – should they be banned too?
It is not surprising that experts cite this vagueness of definitions as one of the main reasons for the unwillingness of Russia (and not only Russia) to support the inclusion of LAWS in the Convention on “Inhumane” Weapons. Strictly speaking, such systems simply do not yet exist in the world: the Turkish and Israeli cases are still only quasi-autonomous weapons. Pre-emptively writing a hypothetical problem into the convention on the principle of “what if it were carrying ammunition?” is not the best occupation for diplomats.
The ethical issues surrounding LAWS are, by and large, also contested. Not only Russian but also Western military experts believe that the smarter the weapon, the more humane it is, since it reduces “collateral losses”. Not to mention that the use of combat robots on the battlefield saves the lives of a country’s own soldiers. For many states, this argument carries more weight than the apocalyptic mantras of those who fail to notice far greater atrocities on the planet.
In general, Moscow proceeds from the position that the current norms of international law are still sufficient to cover the entire sphere of lethal autonomous weapons systems. Perhaps new protocols will be pushed through, bypassing Russia and a good dozen other powers – but they will not be worth a penny. At the same time, as Vladimir Ermakov, Director of the Department for Nonproliferation and Arms Control of the Russian Foreign Ministry, emphasises, Russia is quite ready to continue discussing LAWS within the framework of the relevant Group of Governmental Experts.
Taking the long view, this dispute looks very much like a dispute over exactly which norms should be used to bring the development of intelligent combat robots under control. It will evidently last for years, if not decades. And if dotting all the i’s is unrealistic, it is now fundamentally important to agree on how to conduct the dispute without sliding into either accusations or panicked bans.
It is obvious, after all, that no agreements, within or outside the framework of the UN, can stop scientific and technological progress. Unfortunately, as we have seen more than once, they are not always able to prevent a slide into new wars either – with or without robots.
Moreover, experts have long expressed concern that LAWS may fall into the hands of criminal and terrorist structures, which already make full use of drones, for example. Clearly, the appearance of “killer robots” on the black arms market would be comparable to nuclear weapons appearing there. This, of course, promises mankind scenarios next to which the Terminator films will seem like childish pranks.
In this sense, the emerging new race in military technologies raises problems that humanity simply has not yet encountered. This means it is first necessary to agree on how to take control of the transfer of the “critical function” (in Red Cross terminology), to enshrine in national legislation the principle of mandatory operator control and responsibility for the use of weapons, and only then to agree on common rules. Some go even further and raise the question of creating structures to prevent military innovations from penetrating criminal markets.
And one more conclusion follows from Moscow’s position on this exciting topic, if one looks at the situation without emotion. Setting norms in such matters requires a professional conversation, without hype and without appeals to a public for whom split-second decisions, as in the movies, are the priority.
Otherwise, a difficult conversation about “killer robots” risks turning into their active marketing campaign.
Elena Panina – Director of the RUSSTRAT Institute