Right now, AI is the Israeli army’s most dangerous weapon

A Washington Post investigation tells how, over the past decade, the IDF has built an “artificial intelligence factory” that tracks down Hamas militants, suggests where to bomb, and calculates the number of “expendable” civilians. But the scenario reconstructed by the Post is not one of an infallible surgical war: AI can make mistakes, and what is happening in Gaza could shape other conflicts around the world in the future.

Artificial intelligence is often at the center of debates about the future of work, with many experts fearing it could progressively replace humans.

But there is another even more disturbing scenario: the use of algorithms in military decisions.

A Washington Post investigation has highlighted how AI is taking the place of human analysts in managing and identifying war targets in Gaza, one of the bloodiest conflicts in recent years.

The progressive replacement of intelligence operations with automated systems is revolutionizing the way war is fought.

Data collected by satellites, drones, and surveillance systems are filtered through algorithms that propose possible targets. IDF officers consider these tools essential for speeding up decisions and maintaining a strategic advantage.

Gospel, “the pool” and the AI systems used in Gaza

The “Habsora” system (Hebrew for “the Gospel”) uses hundreds of algorithms to identify potential targets among the data accumulated in a huge digital basin called “the pool”.

The algorithms sift through wiretaps, satellite photos, and social media posts to flag the coordinates of suspected underground facilities, tunnels, or weapons depots.

The Washington Post quotes a former army commander who worked on these artificial intelligence systems: “Soldiers can use image recognition software to figure out tiny modifications in years of aerial footage of Gaza that signal Hamas has dug a new tunnel through agricultural land or installed a rocket launcher.”
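
The Post does not publish any of this software; the sketch below is only a generic illustration of the technique the commander alludes to, comparing two aligned aerial images and flagging pixels that changed. All names and thresholds are hypothetical.

```python
# Illustrative only: generic change detection over two co-registered
# aerial images. Hypothetical names and thresholds, not the IDF's software.
import numpy as np

def detect_changes(before: np.ndarray, after: np.ndarray,
                   threshold: float = 0.2) -> np.ndarray:
    """Return a boolean mask of pixels that changed significantly.

    Both images are grayscale, scaled to [0, 1], and already aligned
    to the same ground coordinates.
    """
    diff = np.abs(after.astype(float) - before.astype(float))
    return diff > threshold  # True where the ground looks different

# Toy example: a bright patch (e.g. freshly disturbed earth) appears
# in an otherwise unchanged field.
before = np.zeros((8, 8))
after = before.copy()
after[3:5, 3:5] = 0.9
mask = detect_changes(before, after)
print(mask.sum(), "changed pixels at", np.argwhere(mask).tolist())
```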

Other programs, like “Lavender,” use percentage scores to estimate the likelihood that a person belongs to armed groups. Things like presence in certain chat rooms or frequent use of multiple phone lines can raise the level of suspicion.
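
The Post does not describe how “Lavender” computes its scores. The sketch below is a purely hypothetical illustration of how a percentage score might combine weak signals of this kind, and why it is fragile: every one of these behaviors is also common among civilians. Signal names and weights are invented.

```python
# Hypothetical sketch of a percentage-style suspicion score built from
# weak signals. Names and weights are invented; the Post does not
# disclose how "Lavender" actually works.
WEIGHTS = {
    "in_flagged_chat_group": 0.35,
    "uses_multiple_phone_lines": 0.25,
    "changes_sim_frequently": 0.20,
}

def suspicion_score(signals: dict) -> float:
    """Sum the weights of the signals that are present, capped at 1.0."""
    score = sum(w for name, w in WEIGHTS.items() if signals.get(name))
    return min(score, 1.0)

# The fragility: a journalist who sits in many group chats and carries a
# work phone plus a personal one already scores 60% from innocuous facts.
journalist = {"in_flagged_chat_group": True, "uses_multiple_phone_lines": True}
print(f"score: {suspicion_score(journalist):.0%}")
```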

Applications such as “Hunter” and “Flow”, on the other hand, give Israeli soldiers on the battlefield access to real-time data, including live video of the areas they are approaching and estimates of possible civilian casualties.

These systems interface with “Gospel”, enhancing the entire target acquisition process.

The “sources” of AI

The algorithms draw on wiretaps, drones, social network databases and seismic sensors. All this information flows into “the pool”, a centralized archive created to store possible clues about the presence of Hamas structures and militants.

The data validation procedure

The algorithms produce coordinates and suggestions for targets to hit.

A human analyst verifies the reports, forwarding them to a higher-ranking officer who enters them into the so-called “target bank”, the database of targets.
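
In software terms, this is a human-in-the-loop pipeline: the machine proposes, an analyst reviews, and an officer files the result. A minimal sketch of that flow, with all class and field names hypothetical:

```python
# Minimal sketch of the validation flow the article describes: nothing
# enters the "target bank" without a human sign-off. All names are
# hypothetical, not the IDF's actual systems.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    coordinates: tuple          # (latitude, longitude) from the algorithm
    rationale: str              # why the algorithm flagged this location
    analyst_approved: bool = False

@dataclass
class TargetBank:
    entries: list = field(default_factory=list)

    def add(self, proposal: Proposal) -> None:
        # The officer's gate: unreviewed proposals are rejected.
        if not proposal.analyst_approved:
            raise ValueError("proposal was never verified by an analyst")
        self.entries.append(proposal)

bank = TargetBank()
p = Proposal((31.50, 34.46), "possible tunnel entrance")
p.analyst_approved = True       # the human verification step
bank.add(p)
print(len(bank.entries), "entry in the target bank")
```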

The Benefits of AI in War

The IDF, for example, uses AI to transcribe thousands of conversations every day and quickly flag potential threats in the words Palestinians exchange.

Some officials believe that the speed of AI analysis could shorten the duration of conflict and limit the number of casualties on the ground.

According to Israeli military leaders, this technology allows attack plans to be updated in real time, offering greater precision and significant savings in human and logistical resources.

Mistakes AI Can Make

However, some IDF soldiers and former officers – who spoke anonymously to the Washington Post – have doubts about the AI’s ability to correctly interpret local language.

In one of the cases reported to the newspaper, owned by Amazon founder Jeff Bezos, the algorithms were unable to distinguish between the word “batikh” (Arabic for “watermelon”) used as a code for bombs and the same word referring to the actual fruit.

AIs, calibrated to search for possible warning signals, thus risk generating an excess of false positives and pushing the military to treat even the most innocuous conversations as suspicious.
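
The “batikh” case is a classic failure of context-blind keyword matching: a rule that flags every occurrence of a watchlisted word cannot tell the fruit from the code. A hypothetical sketch of the failure mode:

```python
# Hypothetical sketch of the failure mode behind the "batikh" example:
# a context-blind rule flags every occurrence of a watchlisted word.
WATCHLIST = {"batikh"}  # "watermelon" in Arabic, allegedly also a code word

def flag(transcript: str) -> bool:
    """Flag a transcript if any watchlisted word appears, context ignored."""
    tokens = transcript.lower().split()
    return any(word in tokens for word in WATCHLIST)

conversations = [
    "buy two kilos of batikh from the market",    # innocuous: the fruit
    "move the batikh to the north site tonight",  # the use analysts fear
]
for text in conversations:
    # Both print True: the rule cannot tell the two apart.
    print(flag(text), "->", text)
```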

Another worrying example of an error an AI can make is the estimate of the number of civilians present in a building, which the IDF would usually also base on the count of phones connected to a nearby cell tower. That count ignores, for example, children, as well as devices that are turned off or out of battery.
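
A back-of-the-envelope calculation, with invented numbers, shows why counting phones systematically undercounts the people actually inside:

```python
# Back-of-the-envelope illustration of the undercount the article
# describes. All numbers are invented; only the direction of the bias matters.
def estimate_occupants(phones_seen: int, phones_per_adult: float = 1.0) -> int:
    """Naive estimate: people ~= phones connected to the nearby cell tower."""
    return round(phones_seen / phones_per_adult)

# A building with 10 adults and 8 children, where 3 adult phones are
# switched off or out of battery: the tower "sees" only 7 devices.
actual_people = 10 + 8
phones_seen = 10 - 3
print("estimated:", estimate_occupants(phones_seen))  # 7
print("actual:", actual_people)                       # 18: children and dead
                                                      # phones are invisible
```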

It is also not always clear whether information on a possible target comes from a machine or a human analyst: all of this makes the assessment riskier for those who must ultimately decide whether or not to launch an attack.

A former senior IDF officer told the Washington Post that too much reliance on automated systems has fueled the military’s belief in advanced “all-knowing” surveillance.

Relying on statements from two former Israeli soldiers, the Washington Post wrote that “the enthusiasm for artificial intelligence has eroded Unit 8200’s ‘early warning culture,’ where even low-level analysts could easily inform senior commanders about unfolding threats.”

According to these sources, AI can help move forward more quickly, but not reduce errors in a complex war environment like Gaza.

https://youtu.be/l1lCQV1MHtI?si=j0zKg1yA2ifrTqMk
