Israel’s AI-powered Gaza killing spree signals a grim future

  • 4/10/2024

For infrastructure it is “Gospel” and for human targets it is “Lavender.” Two words that may look harmless on the surface could explain the large number of deaths and casualties recorded during the first six months of Israel’s war against Hamas in Gaza, as well as the near-total destruction of much of the Strip’s infrastructure and homes.

Israel has been using artificial intelligence and machine learning in its war on Hamas. Its pioneering use of this deadly technology, which many have billed as the next generation of warfare, means we are now in uncharted territory. The findings and testimonies revealed in a new report raise critical legal and moral questions that could forever alter the relationship between the military, the machine and the rule of law. There is today zero accountability or transparency, despite this lethal new tool demonstrating many limitations that could increase the risks for innocent civilians on all sides. Last week’s killing of seven aid workers from World Central Kitchen is perhaps a case in point.

The shift toward a more tech-driven, standalone security and military landscape, intended to get things done more cheaply and efficiently, has been gathering pace in advanced nations across the world. This has produced machine-based solutions with minimal human intervention and oversight and, by default, less transparency and accountability. Shortfalls, flaws and biases in the algorithms used in surveillance and intelligence gathering are translated into the targets identified, while human verification, deemed slow and outdated, remains limited.

According to a report published last week by the independent Israeli magazine +972, Israel has used AI to identify targets in Gaza, in some cases with as little as 20 seconds of human oversight, leaving many involved in military operations under the impression that the machine’s output was equal to a human decision. The +972 report, which included interviews with six Israeli intelligence officers, claimed that “the Israeli army has marked tens of thousands of Gazans as suspects for assassination, using an AI targeting system with little human oversight and a permissive policy for casualties.”

The use of AI technology in warfare has become a reality in the past two decades, leading many to warn that the militarization of such tools will have severe implications for global security and the future of warfare. Antonio Guterres, the UN secretary-general, expressed serious concern over the report’s findings. He said he was “deeply troubled by reports that the Israeli military’s bombing campaign includes artificial intelligence as a tool in the identification of targets, particularly in densely populated residential areas, resulting in a high level of civilian casualties.” Guterres added that “no part of life and death decisions which impacts entire families should be delegated to the cold calculation of algorithms.”

More than six months have now passed since Hamas carried out its unprecedented attack against Israel on Oct. 7, which resulted in the deaths of about 1,200 Israelis and foreigners, most of them civilians, according to an Agence France-Presse tally of Israeli figures. The militants also took more than 250 hostages, of whom 130 remain in Gaza today. Israel’s retaliatory campaign that followed Oct. 7 has killed more than 34,000 people, mostly women and children.
The system dubbed Lavender played a central role in the early stages of the war, identifying more than 37,000 potential Hamas-linked individuals to add to the Israeli intelligence database. The Israeli leadership also sanctioned a collateral damage allowance that varied between five and 20 potential Palestinian civilian victims whenever the army eliminated a low- or mid-ranking Hamas operative. For a senior fighter, the system sanctioned a number of civilian victims in the high double digits or even the triple digits.

Israel’s use of AI-powered targeting was first revealed after the 11-day conflict in Gaza in May 2021, which commanders branded the “world’s first AI war.” It emerged that Israel’s AI system identified “100 new targets every day,” instead of the 50 a year that human analysts had previously delivered, using a hybrid of technology and human intelligence gathering.

Weeks after the start of the latest Gaza war, a blog entry on an Israeli military website said that its AI-enhanced targeting directorate had identified more than 12,000 targets in just 27 days. Similarly, an AI system called Gospel had produced targets “for precise attacks on infrastructure associated with Hamas.” An anonymous former Israeli intelligence officer described Gospel as a tool that created a “mass assassination factory.”

In a rare admission of wrongdoing, Israel last week acknowledged that a series of errors and violations of its rules had resulted in the killing of the seven aid workers in Gaza, leading many experts to conclude that the AI-powered system had likely misidentified the convoy as armed Hamas operatives.

As AI continues to evolve and standalone weapons systems proliferate, the need for effective governance mechanisms to manage their use and mitigate potential hazards grows. But in an increasingly fragmented world, and amid a renewed race for supremacy between superpowers that increasingly lack a common moral compass, the future looks grim.

Technology has revolutionized all aspects of modern human life and society. However, a lax approach to accountability, rolling out new tools now and regulating later, has by default put humanity at the mercy of a lethal machine with poor moral and ethical guardrails. This unparalleled transition, as described by one Israeli intelligence officer who used Lavender, has pushed soldiers to place more faith in a “statistical mechanism” than in a grieving colleague. He said: “Everyone there, including me, lost people on Oct. 7. The machine did it (the killing) coldly. And that made it easier.”

• Mohamed Chebaro is a British-Lebanese journalist with more than 25 years’ experience covering war, terrorism, defense, current affairs and diplomacy. He is also a media consultant and trainer.
