Militarizing AI

The role of artificial intelligence in modern warfare

22 May 2024


The use of artificial intelligence technologies in recent armed conflicts, such as the Russian-Ukrainian war and the Israeli war in Gaza, has sparked extensive debate about their nature, the risks they pose to international and regional security, and the sustainability of such conflicts. These deployments have shown that AI-supported weapons can be more accurate than human-directed ones, potentially reducing collateral damage, including civilian casualties and damage to residential areas, as well as the number of soldiers killed or injured. At the same time, they raise concerns about the possibility of catastrophic mistakes.

Multiple Uses:

The uses of artificial intelligence (AI) in warfare are diverse, spanning a wide range of applications. AI plays a crucial role in enhancing military capabilities and improving strategic performance both during and after a war. It serves as an analytical enabler, a disruptor, and a force multiplier, and it affects international security by shifting the offense-defense balance toward offense. The most common uses of AI in warfare include:

1- Enhancing intelligence and information capabilities: Artificial intelligence techniques can sharpen situational and strategic analysis by mining big data for military patterns and trends. They can also be used to develop reconnaissance and remote-sensing capabilities, enabling the collection of information and data from land, sea, and air. Furthermore, AI can contribute to the development of encryption and cybersecurity capabilities, safeguarding sensitive data and information from hacking.

2- Assessing public sentiment towards the war: Artificial intelligence is used to monitor and analyze social media platforms, as well as texts, news articles, photos, and videos related to the war. This allows analysts to extract key information and prominent trends, helping them understand public opinion and the impact of military events (a simplified sketch of this kind of analysis follows this list). Historical data from previous wars and conflicts can also be analyzed to draw lessons and form expectations about how public reactions may develop in similar future situations.

3- Risk assessment and decision-making: Artificial intelligence is used to analyze risks and estimate the potential harm of military decisions, enabling strategic choices grounded in a deeper understanding of their likely effects.

4- Participation in field and combat operations: Artificial intelligence is applied in warfare in various ways, including the development and enhancement of guided weapons systems such as drones and smart missiles, improving accuracy and effectiveness while reducing military and civilian casualties. AI also plays a role in target identification and combatant classification, and it supports communication and coordination among different military forces, increasing the overall effectiveness of operations and minimizing chaos and confusion. In addition, autonomous weapons are under active development, including so-called killer robots and autonomous munitions, which have the potential to take part in combat operations and precision military missions.

5- Post-war reconstruction efforts: AI also has common uses after the fighting ends. It can help assess damage and set priorities for reconstruction by analyzing images and videos of war-related destruction. It can also strengthen the coordination of humanitarian operations by analyzing data on displaced persons and refugees to identify their needs, and it can inform reform and reconciliation processes between conflicting parties through analysis of political and social data.
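To make the sentiment-assessment use above concrete, here is a minimal sketch in Python of scoring the tone of war-related posts over time. It is purely illustrative: the lexicon, the sample posts, and the helper names are all invented for this example, and a real monitoring system would rely on trained language models rather than keyword counts.

```python
# Minimal, illustrative sketch of lexicon-based sentiment scoring over
# war-related posts. All names and data here are hypothetical; real
# monitoring systems use trained language models, not keyword lists.

# Tiny hand-built lexicon (hypothetical) mapping words to sentiment weights.
LEXICON = {
    "ceasefire": +1, "peace": +1, "aid": +1, "rebuild": +1,
    "strike": -1, "casualties": -2, "destruction": -2, "escalation": -1,
}

def score_post(text: str) -> int:
    """Sum the lexicon weights of the words in a single post."""
    return sum(LEXICON.get(word.strip(".,!?").lower(), 0)
               for word in text.split())

def sentiment_trend(posts_by_day: dict[str, list[str]]) -> dict[str, float]:
    """Average per-post sentiment for each day, exposing shifts over time."""
    return {day: sum(map(score_post, posts)) / len(posts)
            for day, posts in posts_by_day.items() if posts}

if __name__ == "__main__":
    sample = {  # invented sample data
        "2024-05-01": ["Calls for a ceasefire grow.",
                       "Heavy destruction reported."],
        "2024-05-02": ["Aid convoys arrive.",
                       "Escalation feared after strike."],
    }
    for day, avg in sentiment_trend(sample).items():
        print(day, round(avg, 2))
```

Even this toy version reflects the basic pipeline such systems follow: collect posts, score each one, and aggregate the scores over time to reveal shifts in public mood.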

Practical Applications:

A practical application of artificial intelligence systems in recent wars can be observed in the following examples:

1- The Ukrainian war: Ukraine has increasingly relied on artificial intelligence to gain an advantage on the battlefield. In particular, Ukraine uses Palantir's software to analyze satellite imagery, open-source data, drone footage, and ground reports; the technology presents commanders with military options and is reportedly responsible for most of the targeting in Ukraine. Ukraine has also applied Palantir's data to projects beyond battlefield intelligence, such as collecting evidence of war crimes, clearing landmines, and assisting displaced refugees, with major technology companies like Microsoft, Amazon, and Google contributing to these efforts. In addition, Ukraine has relied on "Starlink" to maintain connectivity, while migrating important government data to cloud servers and defending against Russian cyberattacks with the help of those technology partners. On the other side, Moscow has allegedly deployed the KUB-BLA, an AI-enabled loitering munition produced by ZALA Aero, a Kalashnikov Group subsidiary, while Kyiv has used Turkish-made Bayraktar TB2 drones, which possess autonomous capabilities.

2- The Gaza war: Multiple reports point to Israel's use of automation in its ongoing war on Gaza. Israel has reportedly employed an artificial intelligence system called "Lavender" in its military operations there. The system is designed to identify potential human targets, and during the initial weeks of the war it reportedly marked some 37,000 targets. Alongside "Lavender", two related systems have been reported: "Where's Daddy?", which tracks individuals identified as targets and flags them for attack when they are at home, and "The Gospel", which identifies buildings and structures as targets.

Increased Risks:

Despite the significant advances in artificial intelligence (AI) and its growing use in modern warfare, its deployment in wartime carries inherent risks and consequences, including:

1- Accidental damage: Reliance on artificial intelligence techniques may lead to technical errors or accidental damage, a particular concern when automatically controlled weapons are used without sufficient human supervision. Such failures can result in the unintended killing of civilians and innocent people, and the risk is amplified when errors in identifying military and armed targets are repeated, or when vital infrastructure and non-military services are destroyed.

2- Loss of control: Weak human-machine interaction can lead to ineffective communication between artificial intelligence systems and human commanders, resulting in misunderstandings, poor decision-making, and the execution of automated decisions without human intervention. These concerns raise questions about the loss of human control over machines and the potential for harmful behavior toward societies.

3- Human rights violations: The use of artificial intelligence in war may give rise to ethical and legal concerns, particularly in relation to automated killing and human rights. Overreliance on technology can result in fatal errors or violations of international humanitarian law.

4- Insufficient reliability and trust: Military applications of artificial intelligence may face technical problems that cause system malfunctions or prevent them from performing their intended tasks, undermining confidence in their use.

5- Fueling international unrest and conflict: The use of artificial intelligence in wars may widen the technological gap between advanced countries and those with limited technological capabilities, deepening inequality and fueling international unrest.

The experience of the Israeli war in Gaza highlights the extremely dangerous and destructive risks associated with employing artificial intelligence in warfare, including:

• Failure to anticipate the October 7 attack: Despite Israel's advanced intelligence capabilities, it was unable to foresee or track the plans of the Al-Qassam Brigades, the military wing of the Hamas movement, which launched the attack from the Gaza Strip.

• Harm to aid workers and journalists: The killing of seven aid workers in Gaza heightened concerns about the use of artificial intelligence to identify armed targets, underscoring the significant risk of civilian casualties given the current limitations of recognition software, which struggles to distinguish aid workers and journalists from militants.

• Increased civilian casualties: Civilian deaths have multiplied. Many estimates indicate that Israel's use of artificial intelligence technologies and advanced weaponry has contributed to the deaths of tens of thousands of innocent civilians, particularly children and women. AI systems used to identify and target potential military threats can produce indiscriminate killing, taking numerous lives in the vicinity of targeted individuals. This raises significant legal and moral concerns about Israeli military operations in Gaza.

• Destruction of homes and vital infrastructure: Identifying the hiding places of targeted individuals can cause significant collateral damage; errors in target identification and verification may result in the destruction of entire homes and buildings and widespread devastation of vital infrastructure.

• Failure to map the network of military tunnels: Although Israel has succeeded in destroying several major tunnels inside Gaza, reports indicate that most of Hamas's tunnels remain intact and are still being discovered. Notably, it is the ground presence of Israeli forces, rather than artificial intelligence tools, that has been decisive in finding these tunnels. Videos from the Al-Qassam Brigades show its members constructing tunnels and setting ambushes that evade Israeli surveillance and sensor tools, despite Israel's military dominance over much of the Gaza Strip.

• The ineffectiveness of AI in resolving the conflict: Despite deploying a range of capabilities, including weapons, guided munitions, drones, and sensors, Israel has been unable to achieve its goals in the Gaza war: eliminating Hamas leaders, destroying all tunnels, locating the hostages, and ensuring the safety of its citizens. Instead, the use of these technologies has multiplied casualties and infrastructure destruction, potentially constituting a form of "genocide" against the people of Gaza, and the Hamas movement has hardened its stance in ongoing negotiations over the release of the hostages. A similar dynamic occurred in Ukraine, where the use of artificial intelligence techniques to target vital Russian facilities only led to further Russian escalation, the closure of negotiation channels, and no progress toward resolving the conflict.

• Failure to influence global public opinion on the war: Although Israel leverages its partnerships with major corporations to remove and restrict online content critical of it, the volume of anti-Israel content has continued to rise more than 200 days into the war in Gaza. Israel's efforts to win over international public opinion have not succeeded, as protests and grassroots movements against it grow in many Western capitals.

To effectively address the risks associated with the use of artificial intelligence in wars, it is crucial to establish an international regulatory framework that promotes the responsible and ethical deployment of AI technologies. This framework should prioritize mechanisms for human monitoring and oversight of automated decision-making in the military domain. The international community must also collaborate on comprehensive laws that regulate the use of AI in wars and ensure the protection of human rights and compliance with international humanitarian law. To promote transparency and accountability, regular reports and independent evaluations should assess the impact of AI technologies in both war and peace.
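As a purely conceptual illustration of the human-oversight mechanism proposed above, the sketch below models an approval gate in which an automated system may only recommend, never act: a human must explicitly approve or reject each recommendation, and every step is logged for independent review. All names and structures in it are hypothetical, a minimal sketch of the idea rather than any fielded system.

```python
# Conceptual sketch of a human-in-the-loop approval gate: an automated
# system may only *recommend*; a human must explicitly approve or reject,
# and every step is logged for later review. All names are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Recommendation:
    description: str
    machine_confidence: float  # the model's self-reported confidence, 0..1

@dataclass
class AuditLog:
    entries: list[str] = field(default_factory=list)

    def record(self, event: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat()
        self.entries.append(f"{stamp} {event}")

def human_in_the_loop(rec: Recommendation, human_approves: bool,
                      log: AuditLog) -> bool:
    """Return True only if a human explicitly approved the recommendation."""
    log.record(f"RECOMMENDED: {rec.description} "
               f"(confidence={rec.machine_confidence:.2f})")
    if human_approves:
        log.record(f"APPROVED by human reviewer: {rec.description}")
        return True
    log.record(f"REJECTED by human reviewer: {rec.description}")
    return False

if __name__ == "__main__":
    log = AuditLog()
    rec = Recommendation("flag object in sector 7 for closer inspection", 0.91)
    # Refusal is the default: nothing proceeds without explicit human assent.
    acted = human_in_the_loop(rec, human_approves=False, log=log)
    print("action taken:", acted)
    print("\n".join(log.entries))
```

The design choice worth noting is that refusal is the default: absent an explicit human approval, nothing proceeds, and the audit trail records both the machine's recommendation and the human's ruling. That is precisely the kind of property that independent evaluations of military AI systems would need to verify.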