
LAWS CAN’T HIDE NO MORE

Wars have always been a driver of significant technological advancement. It was so in the past, and it is so today, including in the war in Ukraine. Even though in this specific case only two parties confront each other on the ground, many more States indirectly support one side or the other as the conflict unfolds.



In this respect, weapons are being supplied to both sides on a large scale. Notably, since the beginning of the conflict Ukraine has purchased autonomous weapons from several countries – Turkey in the first place. Russia has allegedly done the same with North Korea.


More recently, the US has reportedly planned to sell Ukraine a very particular and highly technological weapon: the MQ-1C Gray Eagle. In the same vein, Turkey has been a large supplier of Bayraktar TB2 drones to Ukraine and, before that, in the Nagorno-Karabakh conflict.


These are drones that can be armed with dedicated missiles for battlefield use. They carry out armed attacks and display great capacity in terms of range, precision, and payload (rectius: the number of weapons they may carry).


Both these weapons belong to the category of lethal autonomous weapon systems (LAWS), i.e., systems that are, to varying extents, unmanned. In other words, they operate with little or no human input.


This article offers some guidance to help us get our bearings in the universe of lethal autonomous weapons and to shed light on the major related legal debates, as the notion of autonomy in the context of weaponry remains unsettled and controversial, on both the political and the legal level.


To begin with, the concept of autonomy calls for some clarification. The ICRC defines LAWS as ‘weapon systems that can learn and adapt [their] functioning in response to changing circumstances in the environment in which [they] are deployed’. However, there is no general agreement on the definition, and parties have come up with alternatives focusing at times on technical elements, at others on human control. In this respect, a comprehensive overview is provided by UNIDIR (the United Nations Institute for Disarmament Research).


At any rate, those systems rely on artificial intelligence (AI), which generally refers to the capacity of machines to emulate human thought and replicate human action in real-world environments. A specific subset of AI is what is called machine learning. Machine learning systems are equipped with algorithms that allow them to process data and detect patterns, through which they adjust their own behavior. All of this happens without human intervention. Experts often capture the distinction between the two by referring to narrow AI and general AI. In the former case, machines are provided with data (and guidance) upon which they act: they are designed for specific tasks. In the latter case, general artificial intelligence (GAI) acts upon some input with little (or no) oversight: machines of this kind process preliminary data, draw relations between them, and come up with results that go well beyond the initial data provided. Potentially, they may cover a very wide range of tasks.
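
To make the distinction more concrete, here is a minimal, purely illustrative Python sketch (all names, data, and thresholds are hypothetical and deliberately toy-sized, not a real targeting system). The first function applies a fixed, human-written rule, as a narrow, pre-programmed system would; the second derives its own decision threshold from labelled examples, which is the essence of machine learning.

```python
# Toy illustration only: hypothetical data and names, not a real system.

def rule_based_decision(signature: float, threshold: float = 0.8) -> bool:
    """Narrow, pre-programmed behavior: the threshold is fixed by a human."""
    return signature >= threshold

def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    """Machine-learning-style behavior: derive the decision boundary from data.

    Each example is (signature, label). The learned threshold is the midpoint
    between the average positive and average negative signature.
    """
    positives = [s for s, label in examples if label]
    negatives = [s for s, label in examples if not label]
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

# Hypothetical training data: (sensor signature, human-assigned label).
training_data = [(0.9, True), (0.85, True), (0.2, False), (0.35, False)]

learned = learn_threshold(training_data)
print(rule_based_decision(0.7))           # fixed human rule -> False
print(rule_based_decision(0.7, learned))  # data-derived rule (~0.575) -> True
```

The same input yields different outcomes because, in the second case, the decision criterion is no longer written by a human but inferred from data, which is precisely what makes the system's behavior harder to predict in advance.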

For instance, a narrow AI system will need a specific identity in order to target someone. GAI, by contrast, can be given general information (such as general physical features), which it will process and then act upon accordingly. Machine learning belongs to this latter category. This technology determines the level of autonomy of the machines used in warfare. For classification purposes, however, experts tend to refer not only to the kind of AI involved but, most importantly, to the level of human control retained over those systems. On this basis, LAWS are subdivided into three categories (a minimal sketch after the list illustrates the three modes):

  1. Human in the loop. Humans play a role in every step of implementation and activation, from programming and planning an attack to controlling it and, if need be, deactivating it. In other words, these weapons perform tasks independently, but only when delegated to do so by their (human) operator.

  2. Human on the loop. These systems carry out attacks independently, but under constant human supervision. Humans can override any decision at any time.

  3. Human out of the loop, or fully autonomous weapons. Except for the programming stage, these weapons are able to carry out all steps of a military attack (searching, identifying, launching) independently of any human input.
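
To make the three categories more tangible, here is a minimal, purely illustrative Python sketch (the engage function and all names are hypothetical, not a description of any real system): the same detection leads to different outcomes depending on how much runtime control the human operator retains.

```python
# Illustrative sketch of the three control modes; names are hypothetical.
from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = "in"        # operator must authorize each engagement
    HUMAN_ON_THE_LOOP = "on"        # system acts, operator may veto
    HUMAN_OUT_OF_THE_LOOP = "out"   # system acts without runtime human input

def engage(target_detected: bool, mode: ControlMode,
           operator_authorizes: bool = False, operator_vetoes: bool = False) -> bool:
    """Return True if the (hypothetical) system would carry out the attack."""
    if not target_detected:
        return False
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return operator_authorizes   # nothing happens without explicit approval
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not operator_vetoes   # proceeds unless a human overrides
    return True                      # fully autonomous: no human input at runtime

# Same detection, three different outcomes depending on the mode.
print(engage(True, ControlMode.HUMAN_IN_THE_LOOP))                        # False
print(engage(True, ControlMode.HUMAN_ON_THE_LOOP, operator_vetoes=True))  # False
print(engage(True, ControlMode.HUMAN_OUT_OF_THE_LOOP))                    # True
```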

Technology has made giant strides in this field. This holds true on the battlefield: the weapons mentioned at the beginning of this article prove it. Similar progress is visible in security and law enforcement operations (for instance, see here).


Yet, answering the question “are fully autonomous weapons among us already?” is not an easy task.


In fact, for a State, making public its capacity to build machine learning systems is an asset only to a certain extent. Of course, showing off the ability to develop certain technologies allows States to claim a major role on the playing field (in other words, to flex their muscles against competitors).


Yet, in warfare, showing too much can easily backfire. For strategic reasons, the military is not inclined to disclose its methods of warfare or the way it assesses proportionality (a fundamental rule of the conduct of hostilities): doing so would risk hampering the effectiveness of its operations and attacks on the ground. Secrecy is also a convenient way to shield against liability, or to find a way around it. This is true for operations in general.


Today, the fact that the States with the capacity to develop such technologies are not directly involved in armed conflicts makes the political dilemma less pressing. As a result, evidence of the deployment of machine learning systems surfaces every now and then. First, a UN report (S/2021/229), published in March 2021 by the UN Panel of Experts on Libya (Res 1973/2011), confirmed the use of a drone, the Kargu-2, powered by artificial intelligence. More recently, evidence of AI used in warfare has come from several frontlines: Iraq, Nagorno-Karabakh, Ethiopia and, ultimately, Ukraine.


Yet, when it comes to LAWS, there is an additional element to consider: the suitability of the existing legal framework.


International humanitarian law (IHL), i.e., the body of norms governing armed conflict as enshrined in the four 1949 Geneva Conventions and their Additional Protocols (APs), as well as the related customary norms, was created at a moment in history when autonomy was far from being associated with weapons.


Therefore, a wide range of concerns loom over the applicability of existing IHL norms to LAWS. Notably, LAWS constitute a major stress test for the rules on the conduct of hostilities. These are: using only means and methods of warfare that are not prohibited (Art 35 AP I), targeting only military objectives (Art 52 AP I, customary for both international and non-international armed conflicts), carrying out attacks that are proportionate (Arts 51 and 57 AP I, which have customary status and apply to both international and non-international armed conflicts), and taking all feasible precautions (Arts 57 and 58 AP I, which also have customary status).


To begin with, legitimate targets comprise military personnel and military objectives. There is no great difficulty when it comes to regular armed forces (which generally wear uniforms and are clearly distinguishable from the civilian population). This is arguably compatible with a machine learning system provided with the right information (such as the color of the uniform).


And yet, what happens when it comes to non-state armed groups, whose members are not identifiable by uniform?


Even providing alternative indicators, such as specific physical features, is highly dangerous, given the likelihood of confusing fighters with civilians. Programming LAWS to target individuals with specific attributes may broaden the range of potential targets too far, defeating any protective aim towards the civilian population.


To add to that, last-minute surrender is another very controversial scenario. IHL states that persons become unlawful targets as soon as, and for such time as, they are hors de combat (i.e., they no longer take an active part in hostilities). Is an autonomous weapon able to recalibrate an attack in such a scenario, at any given moment, without any human input?


The implications for the rule of proportionality go down the same path, and even further. Proportionality requires parties to continuously assess conditions and to modulate military actions (and reactions) accordingly. The targetability of prima facie civilian objects (think of a bridge, or a school), and more generally the conduct of hostilities in urban areas, is possible only where attacks reduce civilian harm to a minimum. All of this depends on very factual, case-specific conditions. Are there alternatives? Has the attack been planned for a moment when civilians are not expected to be around? Are alternatives available in case the proportionality assessment changes because more civilians are present than initially expected? Proportionality is truly contingent. Can LAWS deal with unpredictable factors in this respect?


As things stand, it seems that LAWS carry a margin of error that can never be fully foreseen and forestalled by humans. Against this backdrop, nearly all the fundamental rules of the conduct of hostilities rest upon a subjective, hence human, element that can never be replaced by technology.

For these reasons, some argue for a total ban on LAWS.


On a different note, some experts are confident in the capacity of existing rules to adapt to autonomous weapon systems. Besides highlighting the obvious advantages (to mention some: accuracy, higher chances of sparing human lives on both the military and the civilian side, and a reduced margin of human error), they rebut, or at least greatly weaken, the argument about the inherently subjective nature of the rules of the conduct of hostilities. Take the proportionality rule, for instance. As mentioned earlier, the military is very reluctant to share the criteria by which it assesses proportionality, which largely remain uncharted territory. For the military, playing the subjective card (i.e., claiming that the criteria depend on the judgement of experts in a given scenario) can be an easy way to avoid sharing information. Hence we assume proportionality is fundamentally subjective, but the contrary has never been excluded. Perhaps framing the rules on proportionality in more objective terms is more feasible than commonly thought. In the end, what these experts suggest is a shift of perspective on the existing rules: conceiving them in a new, objective-oriented light can dissipate false concerns about LAWS.
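
To illustrate, and only to illustrate, what such an ‘objectified’ reading might look like, the hypothetical Python sketch below reduces the proportionality assessment to a comparison between an estimated civilian-harm score and an estimated military-advantage score. Every name, quantity, and threshold here is invented; the point is precisely that such numbers would have to come from somewhere, which is where the legal debate resumes.

```python
# Hypothetical and deliberately oversimplified: real proportionality assessments
# are contextual legal judgements, not fixed formulas.

def proportionality_check(expected_civilian_harm: float,
                          anticipated_military_advantage: float,
                          max_ratio: float = 1.0) -> bool:
    """Allow the attack only if estimated civilian harm does not exceed
    max_ratio times the anticipated military advantage."""
    if anticipated_military_advantage <= 0:
        return False  # no advantage, so no civilian harm can be justified
    return expected_civilian_harm <= max_ratio * anticipated_military_advantage

# If conditions on the ground change (more civilians present than expected),
# the same planned attack may no longer pass the check and must be re-evaluated.
print(proportionality_check(expected_civilian_harm=2.0,
                            anticipated_military_advantage=5.0))  # True
print(proportionality_check(expected_civilian_harm=8.0,
                            anticipated_military_advantage=5.0))  # False
```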

Lastly, it is worth mentioning the general tendency to anthropomorphize LAWS. As many experts have highlighted (see here and here, for instance), it is misleading to treat weapon autonomy as interchangeable with human control. Machine learning is built upon directives (in the form of algorithms) that are programmed by humans, and it could only be so. Therefore, even in the case of fully autonomous weapons, a residual ‘path of least resistance’ of human control remains. Hence, the persistent use of anthropomorphized language when referring to LAWS muddies the waters even more and inflates the cautionary tale of the ‘indispensable subjective element’.


At any rate, despite various attempts, the international community is struggling, for the time being, to set regulatory benchmarks (for an overview, see here).


Clearly, the concerns surrounding this topic are manifold, and they extend to the use of these technologies in peacetime and in everyday situations as well. Given how unlikely an outright ban on LAWS is, it is more reasonable to push the international community and States to converge on shared basic rules for their use, in warfare and elsewhere.



By Silvia Tassotti


