An emerging threat of the twenty‑first century is algorithmic warfare, in which lethal autonomous weapon systems (LAWS) decide whom to kill on the basis of predictive models and without human judgment. A key example is the Lavender AI system used by Israel, reportedly deployed in Gaza in 2024-25. The system wrongly categorises civilians as combatants at an estimated false positive rate of 10-37 per cent. Such errors compromise the principle of distinction under International Humanitarian Law (IHL), which requires parties at all times to distinguish between civilians and combatants and between civilian objects and military objectives. Where LAWS hallucinate threats, this principle cannot be maintained.
This dilemma resembles a larger peacetime problem: misinformation generated and amplified by algorithms. Social media recommendation algorithms routinely amplify false information because it drives engagement. UN General Assembly Resolution 78/241 (2023) characterises such misinformation as structurally dangerous to democratic institutions and public trust. In warfare and everyday life alike, predictive algorithms generate systemic, probabilistic harms that are difficult to control.1
It is in this context that the International Court of Justice's Advisory Opinion on climate change of July 23, 2025 is significant. The ICJ held that states owe erga omnes duties to avert foreseeable climate‑related harms, even when those duties are grounded in forecasts rather than certainty. The Court adopted the approach of the Human Rights Committee in Teitiota v New Zealand2, affirming that climate‑related displacement risks can trigger protective obligations, including the prohibition of refoulement.3
- Predictive Targeting and Legal Violations under IHL
The deployment of Lavender embodies the definition of LAWS in UNGA Resolution 79/62, as it operates with minimal human control and uses probabilistic assessments to generate targets.4 The system, built on algorithms designed to identify behavioural patterns, reduces an individual’s life to a series of data points scored against opaque criteria. These scores then behave much like algorithmic misinformation pipelines: Lavender produces probabilistic classifications whose errors are amplified at scale, particularly in high-density populations such as Gaza, where even a low error rate leads to disastrous results, as the sketch below illustrates.
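That scaling effect can be made concrete with a purely illustrative calculation. The short Python sketch below assumes a hypothetical pool of 10,000 flagged individuals (an assumed figure, not a reported one) and applies the 10–37 per cent error range discussed above.

```python
# Illustrative sketch only: the flagged-population figure is a hypothetical
# assumption; the error rates mirror the 10-37 per cent range cited above.

def expected_wrongly_flagged(num_flagged: int, error_rate: float) -> float:
    """Expected number of civilians wrongly classified as combatants."""
    return num_flagged * error_rate

for rate in (0.10, 0.37):
    wrong = expected_wrongly_flagged(10_000, rate)
    print(f"At a {rate:.0%} error rate, ~{wrong:,.0f} of 10,000 flagged "
          f"individuals would be wrongly classified.")
```

Even the lower bound of the range implies roughly a thousand wrongly classified people in such a pool; the upper bound implies several thousand.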
The system reduces complex human behaviour to simplified categories that conflate association with intent and presence with participation. This collapse of nuance into the binary ‘combatant’ versus ‘non‑combatant’ mirrors the way deepfake technologies erase the line between truth and fabrication. In both scenarios, the reasoning is probabilistic, systemic, and structurally prone to failure.
Under IHL, Lavender raises direct questions about distinction, proportionality, and precaution. First, with respect to distinction under Article 48 of Additional Protocol I, the recorded misclassification rate of 10–37 per cent undermines the principle that civilians must never be made the object of direct attack; a false-positive rate that would be merely unacceptable in civilian content moderation becomes disastrous when applied to decisions to kill. Second, under Article 51(5)(b), proportionality assessments are distorted: Gaza’s population density makes any misidentification fatal to those around the intended target, and algorithm-based estimates of civilian casualties are therefore unreliable.
The opacity of Lavender’s internal decision‑making prevents meaningful scrutiny of whether expected collateral harm outweighs the anticipated military advantage. Third, concerning precaution under Article 57,5 commanders are legally obliged to verify targets and take all feasible steps to minimise civilian casualties. Yet Lavender’s architecture, which conceals its decision pathways, renders genuine verification impossible. Human operators cannot fulfil the obligation to exercise judgment or assess risk when the system’s recommendations are inscrutable. This opacity is not merely a technical problem but a structural violation of IHL’s verification requirements.
The normalisation of Lavender further complicates global legal governance. Israel’s defence industry has already begun marketing the system as “battle-tested,” incentivising other states to adopt similar technologies. Meanwhile, countries such as India are developing autonomous munitions like DRDO’s Nirbhay loitering systems,6 reflecting a rapid global diffusion of capability. Without binding international norms, states may increasingly treat algorithmic error as an acceptable cost of modern warfare, thereby accelerating a global arms race in autonomy and embedding probabilistic killing into ordinary military practice.
- Probabilistic Harm: Linking Climate Jurisprudence and LAWS Regulation
The evolving legal landscape on climate change offers rich doctrinal resources for addressing the risks posed by LAWS. In Teitiota v. New Zealand, the Human Rights Committee recognised that a prospective and probable climate-related threat can engage the right to life and thereby create corresponding state obligations. Although Teitiota’s claim was rejected for lack of immediacy, the Committee articulated an important principle: risk‑based harms are legally cognisable where they are foreseeable.
The ICJ’s 2025 Advisory Opinion has turned this principle into a universal rule. According to the Court, states owe an erga omnes obligation to avert foreseeable climate damage (para. 378) and must adopt institutional structures for anticipatory risk assessment (paras. 425-429). Additionally, states may be required to grant humanitarian visas, arrange planned relocation, and confer protective status on individuals facing heightened risks stemming from climate‑driven impacts (para. 433). This is the first direct acknowledgement that predictive obligations, duties that arise before injury occurs, form part of contemporary international law.
This framework corresponds closely to the risks created by LAWS. In both climate change and autonomous targeting, harms are foreseeable, probabilistic, and rooted in systemic processes rather than discrete, intentional acts. Civilian deaths resulting from algorithmic misclassification are predictable: with a known error rate, the likelihood of wrongful targeting is calculable. Just as climate change produces collective, diffuse harm that transcends borders, LAWS create a transnational risk landscape in which the consequences of algorithmic error cannot be contained.
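The point that wrongful targeting is calculable can be stated formally. Assuming, purely for illustration, independent strikes and a constant per-strike misclassification probability p, the probability of at least one wrongful targeting across n strikes is

\[ P(\text{at least one wrongful targeting in } n \text{ strikes}) = 1 - (1 - p)^{n}. \]

Even at the lower bound of the error range reported for Lavender (p = 0.10), this probability exceeds 85 per cent after only twenty strikes; this is precisely the sense in which such harm is foreseeable rather than speculative.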
Under the ICJ’s reasoning, states deploying LAWS should bear preventive obligations analogous to those related to climate harms, namely, to maintain human veto systems, conduct structured risk assessments, and adopt explainability requirements to ensure transparency. Lavender’s known error profile meets the threshold of foreseeable, substantial, and irreparable harm, thereby triggering these obligations.
For the Global South, particularly India, this doctrinal shift presents a leadership opportunity. India’s DPDP Act 20237 incorporates algorithmic fairness, transparency, and auditability, principles that align closely with what a global LAWS governance framework would require. India’s AI Mission 20478 provides national-level mechanisms for explainability, safety, and risk governance.
As an emerging military and technological power, India is well-positioned to shape the agenda for the 2026 UN Group of Governmental Experts (GGE) on LAWS, bridging the gap between technologically advanced states and those concerned with the humanitarian and sovereignty implications of autonomous warfare.9
- Post-Westphalian Solutions: Toward Governance of Algorithmic Warfare
Control of autonomous weapons must extend beyond conventional Westphalian frameworks of state‑to‑state responsibility. Algorithmic warfare introduces distributed harms, opaque models, and probabilistic decision-making that challenge classical concepts of attribution and intent. These challenges can be met through a three‑part governance framework.
First, a UN AI Warfare Tribunal (UN‑AIWAT) should be established as a specialised body to provide technical oversight and allocate legal responsibility. Modelled on hybrid adjudicative bodies such as the International Tribunal for the Law of the Sea, which combine legal and technical expertise, this tribunal would adjudicate LAWS-related violations, require open disclosure of training data and model logs, commission independent audits, and set global standards for AI compliance with IHL. It would give concrete effect to the ICJ’s preventive approach by creating an enforcement body dedicated to algorithmic systems.
Second, a binding global treaty on LAWS is needed, entrenching human veto power and explainable AI. The human veto principle would require sustained human control at every stage of the targeting cycle, supported by immediate override mechanisms that cannot be bypassed or delegated to automated systems. This would elevate the GGE’s soft‑law language on ‘context‑appropriate’ human judgement into an erga omnes duty. To supplement human supervision, transparency requirements would oblige autonomous weapon systems to display confidence scores for their classifications, generate explanation vectors that clarify the basis for each flagged decision, and maintain secure audit logs to enable meaningful review after an attack. Without these measures, IHL’s systems of verification and accountability cannot be sustained.
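What such transparency duties could mean in engineering terms can be sketched briefly. The Python fragment below is a minimal, hypothetical illustration of an auditable targeting record carrying a confidence score, an explanation vector, and a hash-chained audit entry for post-attack review; every field name, value, and design choice is an assumption for illustration, not a description of any deployed system.

```python
# Hypothetical illustration of an auditable targeting record.
# All field names and values are assumptions, not features of any real system.

import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class TargetingRecord:
    record_id: str
    timestamp: str
    classification: str          # e.g. "flagged" or "not flagged"
    confidence: float            # model confidence score in [0, 1]
    explanation: dict            # feature -> contribution to the decision
    prev_hash: str               # hash of the previous record (chain integrity)
    record_hash: str = field(default="")

    def seal(self) -> "TargetingRecord":
        """Compute a tamper-evident hash over the record contents."""
        payload = asdict(self)
        payload.pop("record_hash")
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.record_hash = digest
        return self

# Example: one hypothetical, human-reviewable entry in the audit log.
entry = TargetingRecord(
    record_id="rec-0001",
    timestamp=datetime.now(timezone.utc).isoformat(),
    classification="flagged",
    confidence=0.62,
    explanation={"communication_pattern": 0.40, "location_history": 0.22},
    prev_hash="0" * 64,
).seal()

print(entry.record_hash)
```

The hash chain matters because post-attack review is meaningful only if logs cannot be silently altered after the fact.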
Third, international law should recognise a form of predictive non‑removal for individuals identified as threats by algorithmic systems. Drawing on paragraph 433 of the ICJ opinion, individuals placed on LAWS watchlists should not be forcibly transferred, prevented from leaving, or left in conflict zones without prior review. This would create an avenue of appeal against algorithmic targeting decisions and help safeguard individuals from the risk of being wrongly selected by automated systems. Finally, reparative justice models should be adapted to account for algorithmic harm. Victims of a LAWS‑based strike should have access to reparations, system logs, and evidentiary records that can be used to challenge the algorithms’ decisions. This mirrors climate loss-and-damage frameworks and acknowledges the necessity of procedural fairness where life-or-death stakes are involved.
- Conclusion: Reimagining Distinction in the Algorithmic Age
The emergence of LAWS is among the most pressing issues international law faces today. Systems such as Lavender do not merely mechanise the processes of war; they change the essence of violence by embedding probabilistic inference, opaque modelling, and systemic bias in lethal decision-making. Just as deepfake technologies disrupt democratic institutions by undermining the credibility of perception, LAWS disrupt IHL by undermining the credibility of distinction, the moral basis of the laws of war.
The ICJ’s 2025 Advisory Opinion, however, offers a robust jurisprudential tool for addressing these challenges. Its articulation of predictive non-refoulement, anticipatory governance, and erga omnes preventive responsibilities provides a doctrinal basis for controlling AI-driven harm. Applying these principles to autonomous weapons would enable international law to confront the algorithmic age with clarity rather than helplessness. Preserving distinction does not require resisting technological transformation; it requires transforming legal frameworks so that human judgment, accountability, and transparency remain at the centre of decisions that implicate life itself.
By Srushti Joshi, 4th Year B. A. LL. B., Maharashtra National Law University, Nagpur.
1 Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power 9 (PublicAffairs, New York, 2019).
2 Ioane Teitiota v New Zealand HRC Communication No 2728/2016, CCPR/C/127/D/2728/2016 (24 Oct 2019).
3 Noa Shpigel and Yuval Green, “Lavender: The AI Machine Directing Israel’s Bombing Spree in Gaza” 972 Magazine, Apr. 3, 2024, available at https://www.972mag.com/lavender-ai-israeli-army-gaza/ (last visited on Dec. 7, 2025).
4 UN General Assembly, Lethal Autonomous Weapons Systems, GA Res 79/62, UN GAOR, UN Doc A/RES/79/62 (Dec. 2, 2024).
5 Additional Protocol I, supra note 11, art. 57.
6 Defence Research and Development Organisation, “Nirbhay Loitering Munition System” 45 (Annual Report 2025).
7 The Digital Personal Data Protection Act, 2023 (Act 22 of 2023), s. 7.
8 Ministry of Defence, “India AI Mission 2047” 23 (Government of India, 2024).
9 UN General Assembly, Information Integrity on Digital Platforms, GA Res 78/241, UN GAOR, UN Doc A/RES/78/241 (Dec. 20, 2023).