Statement on Artificial Intelligence, War, and the New Dystopia
April 26, 2026

The Lemkin Institute for Genocide Prevention and Human Security is deeply alarmed by the unaccountable deployment of artificial intelligence (AI) in warfare. As Israel and the United States continue to escalate and geographically extend their military campaigns and genocidal operations, their strategic use of AI systems has drastically accelerated the mass targeting and killing of civilian populations, including over 150 schoolgirls in Minab, Iran. This shift in the nature of violence signals a profound threat to the foundational principles of human security. The recent blacklisting of Anthropic by the Trump administration for its refusal to permit fully autonomous weapons must be understood as a final warning of the dystopian reality toward which algorithmic mass atrocity is carrying us.
For centuries, the motivations for war have remained relatively static. Still today, states go to war to pursue longstanding imperial and colonial objectives, both domestically and on foreign soil, such as securing natural resources, control over land, and geopolitical dominance. Over the course of the last five years alone, however, AI has radically changed the face of war.
Within the first 24 hours of the United States’ and Israel's illegal war on Iran alone, Anthropic’s Claude artificial intelligence system produced a list of one thousand suggested targets, along with precise GPS coordinates, recommended weapons, and “automated legal justifications” for the respective strikes. In other words, it is the algorithm that decides who should be targeted, and it is the algorithm that provides readily available legal backing for the killing. Human oversight is minimal at best.
This should alarm us all. Legal assessments of warfare require careful consideration of context and proportionality, ethical reasoning, and, above all, conscious judgment by humans. If we allow the human element to be removed from warfare, we will build ourselves a dystopian reality in which the bulwark against mass atrocities is reduced to a mere algorithmic update and in which genocide becomes mechanised beyond the capacity of current accountability mechanisms. It is particularly alarming that, as we go to press, the United States appears to be in the midst of creating a dedicated “Autonomous Warfare Center”, where artificial intelligence will select targets and transmit kill orders to robots awaiting commands.
While one might assume that these automated decisions are subject to rigorous human oversight, in practice, review is minimal. One striking example is Israel’s in-house AI program “Lavender,” which, in the weeks following 7 October 2023, marked 37,000 Palestinians as suspected militants and thus eligible targets. Despite a ten per cent error rate, these suggested targets were often approved by human officers in as little as twenty seconds, with that time used solely to confirm that the marked target was male. Virtually no limits were placed on the anticipated “collateral damage” of individual strikes, with the Israeli military permitting dozens of civilian deaths per hit.
Rather than carefully assessing each strike and weighing its consequences, military personnel in Israel and the U.S. are effectively “rubber-stamping” a machine’s kill plan.
This is far from accidental. Rather, the deployment of AI as a shortcut in mass targeting and killing in war is a consequence of the United States’ and Israel’s long-term objective of developing ever more efficient, automated, large-scale destruction. It also speaks to the way that targeted communities are perceived: as calculations, numbers, detached from the human lived experience. Delegating life-or-death decisions to fast-paced algorithms thus further entrenches the systemic dehumanisation of those communities threatened by genocidal violence around the globe. It strips them of their humanity.
We must not forget: Artificial intelligence is not infallible. The results produced by any AI system are only as good as its underlying algorithmic architecture and the quality of the data used to train it. Unintended and harmful bias can thus emerge at multiple stages of a system’s development and deployment. Such biases, whether concerning gender, ethnicity, age, nationality, or culture, can lead to the misidentification of persons or objects and, as a consequence, of their status as combatants, rendering an attack unlawful because it in fact targeted a person or persons protected under international humanitarian law. It is therefore imperative to recognise that, contrary to popular assumptions, we do not fully understand the internal, often arbitrary conclusions of such algorithms. In the case of Lavender, the arbitrary use of data meant that even loose associations, such as merely belonging to the same WhatsApp group as a suspected militant, influenced the algorithm’s selection of targets.
The inherent risks of AI systems provide a haunting backdrop to the 1 March 2026 bombing of an Iranian primary school that killed more than 170 schoolgirls and staff, a grave violation of international humanitarian law. At the time, the precise mechanism behind the strike remained unclear. Considering the facts of the case, Peter Asaro, chair of the International Committee for Robot Arms Control (ICRAC), suggested that the mass atrocity may well have been the result of flawed AI targeting. He reasoned that, given the school’s proximity to two facilities controlled by the Islamic Revolutionary Guard Corps (IRGC), the building was likely misidentified as part of the IRGC compound by an algorithm relying on outdated or erroneous data.
While both Israel and the United States initially refused to accept responsibility for the school bombing, a subsequent military investigation found the United States responsible for the fatal attack, attributing it to the use of outdated data sets that ended the lives of so many young children. Although the extent to which AI or automated targeting systems were involved remains unclear, the incident underscores the grave risks posed by increasingly data-driven warfare.
States engaging in armed conflict remain bound by the rules of international humanitarian law, which establish clear limitations on the use of autonomous weapon systems. As clarified by the International Committee of the Red Cross (ICRC), a certain degree of human control “will always be necessary to ensure compliance” with international humanitarian law. The core principles of the law of war, namely distinction between combatants and civilians, proportionality, and precaution in attacks, by definition require context-specific human judgment that an algorithm cannot deliver. This renders fully autonomous weapons, as envisioned by the U.S., unlawful under international humanitarian law. Similarly, a mere twenty-second assessment does not satisfy the requirement of meaningful human control over targeting, which includes the consideration of any foreseeable changes in circumstances or context.
Crucially, international humanitarian law not only binds States as the primary subjects of international law but also imposes obligations on non-state actors, including businesses and their managers, when they engage in activities closely linked to an armed conflict. Such links are evident where a corporation provides direct military, logistical, or financial assistance. In many cases, businesses directly profit from armed conflict, creating what Francesca Albanese, Special Rapporteur on the situation of human rights in the Palestinian territories, has called the “economy of genocide”. Corporations such as Palantir Technologies Inc., which has supplied Israel since October 2023 with automated policing technology, defence infrastructure, and an AI platform enabling automated decision-making, know what their tools are being used for. When confronted with accusations that Palantir was killing Palestinians in Gaza, CEO Alex Karp proudly confirmed, “mostly terrorists, that’s true,” underlining executive-level knowledge of the genocidal attacks in which the company was involved.
Palantir’s position on the future of AI warfare was made strikingly clear in a 22-point manifesto, published on X on 18 April 2026. The statement’s logic is as follows: progress is inherently good, AI warfare is inevitable, the U.S. must be the one to build it, and, well, some cultures are simply better than others. What follows from this is a worldview shaped by Western dominance, social and racial hierarchies and, above all, a fundamental disregard for the inherent value of human life and human dignity.
When Palantir celebrates the U.S. as a country that “advanced progressive values,” we can be certain that “progress” in this sense is a mere technical term, referring not to improvements in the living conditions of the general public, but rather, to “clean” kills executed by the right state against the “regressive” ones. This is dystopia sold as patriotism and peace-building.
Seen in this light, the Pentagon’s decision to replace Anthropic with OpenAI is anything but coincidental. The U.S. government’s choice to sever ties with Anthropic for displaying even the slightest hint of ethical restraint reveals the ruthlessness of an administration that seemingly mirrors Palantir’s views and disregard for core human values. The Pentagon would simply not accept any limitations on AI warfare, even when these were based on genuine concerns that AI was “not reliable enough to power fully autonomous weapons.”
In contrast, OpenAI initially agreed to a far more permissive contract with the U.S. government. Following public outcry, however, the company announced that it would add stricter terms to the contract, including red lines against domestic mass surveillance and fully autonomous weapons. The contract itself was never made accessible to the public. In an effort to clarify the precise contractual terms, OpenAI CEO Sam Altman published the text of individual clauses, thereby unintentionally exposing the legal loopholes that grant the Pentagon all the freedom it deems necessary. The new additions include the clarification that “[c]onsistent with applicable laws, [...] the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” The phrase “consistent with applicable laws” may seem promising at first, but that promise evaporates when we recall that almost every drone strike, every invasion, every action in support of a genocide, and every carpet bombing was initially deemed “consistent with applicable laws.” The word “intentionally”, moreover, leaves wide room for circumvention, permitting any action so long as it remains plausibly deniable or falls into the category of “incidental collection” of personal data. On this reading, and according to the U.S. government, data is not collected intentionally so long as it becomes available as a by-product of other activities.
OpenAI knows the effect of these legal terms. It is aware of the mass atrocities and genocides in which the U.S. military is currently involved and, by providing its services and technologies, of its own contribution to these crimes against humanity.
What the examples of Anthropic and OpenAI make clear, however, is that, in its most inhumane tactics, the state depends on service providers, on developers, and on manufacturers. Corporations have a role to play when their government turns against international law. Their choice to be complicit in mass atrocities, or their refusal to take part, matters profoundly.
The Lemkin Institute for Genocide Prevention and Human Security calls on Israel, the United States, and their allies to immediately end their illegal wars, illegal occupations, and genocidal violence. This is particularly urgent given that the unethical means and inhuman tools deployed in these wars have already caused grave human suffering and violations of international humanitarian law. For as long as active hostilities continue, we demand that no target be confirmed and no strike carried out without meaningful and thorough human review, in accordance with the rules of international humanitarian law.
We further remind all businesses that provide tools and services to parties to any armed conflict of their responsibilities to comply with international humanitarian law.
We call on the international community to build alliances grounded in human dignity and to challenge violations of international humanitarian law wherever they occur and regardless of the perpetrator state. Just as the international community once came together to ban chemical weapons and landmines, we must now establish legally binding rules on the development and use of AI in warfare, including a strict prohibition of fully autonomous weapon systems.
Algorithms are biased; algorithms make mistakes. After all, meaningful human review was the one thing that, in 1983, saved humanity from nuclear war when Lieutenant Colonel Stanislav Petrov challenged the accuracy of an automated early warning system and correctly recognised its alerts as false alarms. We cannot allow machines to make life-or-death decisions. We cannot allow algorithms to industrialize mass atrocities. Instead, our collective goal must be to stop them altogether.