
Palantir in Gaza, Lebanon and Iran: AI for Target Identification, a $10 Billion Pentagon Contract, and a Foggy Chain of Responsibility


The American data-analysis company Palantir is increasingly embedded in the operations of the Israeli military. Reports point to the use of its AI systems in Gaza, Lebanon, and in operations against Iran, not as a distant partner but as a built-in component of the target-selection process itself.

Palantir operates through its Gotham, AIP (Artificial Intelligence Platform), Foundry, and Skykit platforms, which together combine mass data analysis, operational AI systems, and field-intelligence capabilities. Last year the company signed a $10 billion contract with the US military and became a key Pentagon partner on Project Maven, which uses artificial intelligence for target identification and battlefield analysis.

Palantir's founders, Alex Karp and Peter Thiel, held a board meeting in Tel Aviv in January 2024 and met Israeli president Isaac Herzog. In the same period, a contract for "strategic cooperation" was signed with Israeli defence authorities. Palantir later said it would provide AI support for "military missions," without further explanation.

During the Tel Aviv visit, Karp said demand for the company's services "had grown significantly since 7 October 2023." In other words: after the Hamas attack, Israel launched a new phase of operational intelligence work, and Palantir became part of that architecture.

In 2024, the Business & Human Rights Resource Centre claimed that Palantir technologies were used directly in Israeli attacks on Gaza. The company denied the claims, saying its activities in Israel predate 7 October. But a book by journalist Michael Steinberger claims that operations against senior Hezbollah officials in Lebanon in 2024 used Palantir tools, and it ties the technology to the "Grim Beeper" operation, in which exploding pagers wounded hundreds of Hezbollah members.

More interesting still: according to the Washington Post, the Pentagon used Palantir's "Maven Smart System," integrated with Anthropic's Claude model, in planning attacks on Iran, with targets identified and mapped through advanced AI analysis. That fact is significant in itself: a model developed for conversation and writing is being used to select targets in war.

Former Microsoft employee Ibtihal Aboussad, speaking at a protest in April 2025, described Palantir's systems as designed for "surveillance, war and killing." She claims that the Israeli systems "Lavender" and "Where's Daddy," used to identify targets in Gaza, are built on Palantir infrastructure.

Laura Bruun, an AI expert at the Stockholm International Peace Research Institute (SIPRI), stresses that the question of accountability remains foggy: "There's no clear determination of what states practically must do to use AI lawfully and responsibly." That is the essential sentence: states are responsible, but they do not know exactly for what. And in that fog, targets are picked, people are killed, and algorithms are linked to decisions about life and death. The question everyone avoids: if the mistake is made not by a human but by a model, who goes under investigation?