The Instrumentarian Power of Artificial Intelligence in Data-Driven Fascist Regimes

Artificial Intelligence has two main functions in the Israeli war on besieged Gaza: sustaining propaganda and generating more targets. While debates on machine learning question the threat AI poses to humanity, AI-assisted bombardments reveal a new magnitude of algorithmically programmed death in times of warfare. Philosopher Anaïs Nony posits that fascist regimes are increasingly data-driven: digital technologies are deployed as tools to instrumentalize politics and produce programmable death.

An interactive map of Gaza issued by the IDF splits the territory into hundreds of numbered zones. Credit: IDF.

Artificial Intelligence for propaganda

On Friday December 1st 2023, the Israel Defense Forces (IDF) released a map of Gaza turned into a grid of over 600 numbered blocks. The blocks are supposed to help civilians know where there are active combat zones. The map, which Palestinians are supposed to access via a QR code amidst power cuts and airstrikes, is to be used for targeted evacuation warnings for areas facing intense bombardments.

Civilians received information via calls, texts, and airdropped leaflets. These measures are supposed to give Palestinians a chance to be safe. The map offers precision and operates as a public relations tool for shaping international opinion regarding civilian protection. Used by the IDF as evidence of its efforts to minimise civilian casualties, the interactive map is meant to show the world that, for the IDF, the residents of the Gaza Strip are not the enemy.

For Human Rights Watch, these evacuation orders ignore the reality on the ground and do not erase protections under the laws of war. On Tuesday December 5th, UNICEF spokesperson James Elder stated that the so-called safe evacuation zones are death sentences and emphasised the urgency of breaking the narrative of safe zones that pretend to save lives.

In this case, AI is used for its power to convey accuracy and to serve propaganda: it is deployed to lend an appearance of accuracy to the method and strategy in the name of algorithmically run choices.

Artificial Intelligence for generating targets

An investigation published on November 30th by the independent journalism platforms +972 Magazine and Local Call interrogates the wider use of artificial intelligence in the Israeli war on Gaza. Based on interviews with current and former members of Israel's intelligence community, it reveals that the IDF's intelligence units have shifted to become a “mass assassination factory” that operates under the guise of statistically precise and technically advanced intelligence tools.

The investigation exposes the use of a system called “Habsora” (“The Gospel”), which deploys Artificial Intelligence technology to generate four types of targets: tactical targets, underground targets, power targets, and family homes. Targets are produced according to the probability that Hamas combatants are in the facilities. For each target, a file is attached which “stipulates the number of civilians who are likely to be killed in an attack”. These files provide numbers and calculated casualties, so that when intelligence units carry out an attack, the army knows in advance how many civilians are likely to be killed.
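
To make the structure described by the investigation easier to picture, here is a purely hypothetical sketch in Python. Every name and field below is an assumption introduced for illustration; it reflects only what the reporting says a target file contains (a target category, a probability that combatants are present, and a pre-computed civilian casualty estimate), not the actual system, which remains opaque.

```python
from dataclasses import dataclass
from enum import Enum


class TargetType(Enum):
    # The four categories named in the +972 Magazine / Local Call investigation.
    TACTICAL = "tactical"
    UNDERGROUND = "underground"
    POWER = "power"
    FAMILY_HOME = "family home"


@dataclass
class TargetFile:
    """Hypothetical record mirroring what the reporting describes."""
    target_type: TargetType
    combatant_probability: float       # model-estimated likelihood that combatants are present
    expected_civilian_casualties: int  # the "collateral damage" figure attached to the file
```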

In an interview for Democracy Now!, Yuval Abraham explains that the use of AI is a trend that relies on automated software to generate targets with life-and-death consequences. While collateral damage was strictly limited in the past, these AI-generated targets are unprecedented: they are produced through automation, rely on AI-powered data-processing technologies, allow potential collateral damage of hundreds of civilians, and are now generated “faster than the rate of attacks”.

According to former IDF chief of staff Aviv Kochavi, the Targeting Directorate established in 2019 processes vast amounts of data to generate actionable targets. Powered by “matrix-like capabilities”, the system generates “100 targets in a single day, with 50% of them being attacked”, whereas in the past the intelligence unit would produce 50 targets a year. In this escalating process of AI-generated targets, the criteria around killing civilians were significantly relaxed.

Artificial Intelligence for promoting extermination

What does it mean to be statistically precise for the world to see and yet generate targets according to loose military protocols that can kill hundreds of civilians? On Wednesday December 6th, Malika Bilal, host of the Al Jazeera podcast The Take, released an episode further investigating the Israeli army's war protocol and its use of the Gospel, the artificial intelligence system that generates bombing targets. One central question she asks is how and when the limit on civilian casualties changed, and who chose to lower the restrictions. Bilal interviewed Marc Owen Jones, associate professor of Middle East Studies at Hamad Bin Khalifa University, who stated that “AI is being used to select people for death and destruction”. In Jones's words, when the Israeli military trains AI models, the intelligence units model them in full knowledge that the targets will also include civilians. “They are outsourcing people's lives and people's destiny to a piece of technology that has probably inherited the ideology of occupation and extermination.”

When AI models are trained, it is done according to the precedents that have been set. In the case of the Israeli army, the murder of civilians is part of the model. A key feature of artificial intelligence technology is that it relies on the data collected and the model deployed. Not only will the use of AI technology be biased if the technology learns from biased data (selecting one set of information while dismissing others), but the model's predictions and actionable recommendations will also be biased if it is deployed in a context where the technology is employed to serve and justify a certain ideology.1
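
To make that mechanism concrete, here is a minimal, purely illustrative sketch, assuming Python with numpy and scikit-learn; the data, the “group” attribute, and the decision rule are all invented for the example and have nothing to do with any real system. A classifier trained on labels that encode a biased rule simply reproduces that rule in its predictions, however statistically precise it appears.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: two neutral features plus one hypothetical "group" attribute.
n = 5_000
features = rng.normal(size=(n, 2))
group = rng.integers(0, 2, size=n)
X = np.column_stack([features, group])

# Biased historical labels: past decisions flagged group 1 far more often,
# independently of the neutral features.
y = (rng.random(n) < np.where(group == 1, 0.6, 0.1)).astype(int)

# A "statistically precise" model trained on that history.
model = LogisticRegression().fit(X, y)

# Identical neutral features, differing only in group membership:
# the model reproduces the biased rule it learned from.
test = np.column_stack([np.zeros((2, 2)), [[0], [1]]])
print(model.predict_proba(test)[:, 1])  # flag probability for group 0 vs group 1
```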

In the case of the war on besieged Gaza, the IDF leverages algorithmic accuracy while dismissing fairness procedures and accountability. The so-called clinical efficiency of its AI-generated targets is portrayed by the political marketing wings of mainstream media as an advanced tool that grants the right to kill in the name of technological sophistication. While AI is generally promoted as making warfare more precise, evidence from lived experience in Gaza shows that saving lives is not part of the model. Instead, “maximum damage” is what is on the agenda as several hundred targets are bombed every day.

Artificial Intelligence for dismissing human responsibility

AI has become a buzzword as this fast-moving technology is deployed in most sectors of society. On the one hand, massive numbers of papers and talks have been dedicated to AI-generated content and chatbots. On the other, the deployment of AI for propaganda and mass murder belongs to less visible and less promoted concerns.

Often, critical discussions on AI, as seen at web summits, press conferences, international colloquia and in interviews, fall into two main categories. One wants to prove that AI is not really intelligent, at least not in the human sense of knowledge-making. The other presents AI as a threat because the technology can surpass human capabilities in the field of cognition.

These intellectual propositions, while somewhat valuable in themselves, often fail to question the set of values and priorities that shape the theoretical models of their inquiry. Their claims (“AI is not real intelligence” or “AI is a threat to our humanity”) fail to acknowledge that there is human reality in every single technical reality,2 from the spoon we use to eat to the rocket launched to kill civilians. The inability to apprehend the human reality of Artificial Intelligence, its biases and its undignified features, is a mistake that only serves a certain political marketing and its data-driven economy.

Furthermore, these claims serve values organized around ideas of inclusion and exclusion so central to supremacist ideologies: this data matters/this one does not, this person is human/this one is not, this life matters/this one does not. Exclusion is part of the systemic refusal to recognize the human reality integral to all objects and all people. This position doesn't mean that objects are equal to people. It shows that this exclusion is part of a hegemonic culture that is failing to take responsibility for the violence it produces.

The dismissal of human reality in technology is a strategic mode of cancelling out accountability in the name of advancement. It allows for the implementation of a new form of obedience that is algorithmically driven, one in which the lives of the poorest and most vulnerable do not even register as factors to be taken into consideration. To think critically about AI, its creation, modelling and application, as well as its development as a technology of behavioural prediction, is a responsibility: such a technology must not be left in an apolitical blur.

There is what I call a soft critique of AI technology. It is a neoliberal critique that breeds obedience, imposes intellectual agendas, and foregrounds concepts. I believe this approach is dangerous, as it fails to rigorously address the state of emergency the majority of living entities find themselves in due to climate catastrophes and the rise of fascism and digital totalitarianism on a global scale. This soft critique of AI often focuses on language and the supremacy of human intelligence instead of questioning the political responsibility for the use and abuse of such a technology. Soft critiques of AI are similar to the dominant white feminist approach that selects certain struggles while dismissing others, and to the bourgeois ecologist approach that does not redistribute the inherited land and wealth it benefits from while judging how others should behave. What a soft critique of AI technology reveals is a compromised stance, one that plays the game of academic production without radically engaging in new actions.

Artificial Intelligence for a data-driven fascist regime

Machine intelligence has profoundly changed since its first developments in the 1950s. Back then, the question asked by mathematician Alan Turing was: can a machine generate human-like responses equivalent to those of a person?3 Decades later, in 2016, AlphaGo, the AI system developed by Google DeepMind, defeated world champion Lee Se-dol in a much-televised man-versus-machine show. While the ancient Chinese game of Go has more possible configurations than there are atoms in the observable universe, the computer studied a data set of more than 100,000 human games to beat its opponent. At this stage, rule-based AI operating as a knowledge-based system was surpassed in favour of decision-making beyond pre-defined sets of rules.

AI technology has rapidly evolved since that infamous win. In 2024, AI systems can develop skills by capturing data from every single gesture, movement, and interaction. The systemic tracking of people's lives and the opaqueness of the models designate a new paradigm in the formation of truth, as censorship is enabled on a new scale. AI-powered technology can both promote accuracy and hide its standards of measurement and circulation of information. It can also produce models that are opaque and hard to access. As such, the new paradigm of AI asks us to ponder the societal values and sets of priorities we want to promote, especially as these technologies are further deployed in times of warfare.

What matters in the regime of truth promoted by fascist ideologies is the accuracy of the data collected as well as the computation, control and prediction of behaviours through systemic data surveillance. The data collected are portrayed as a measure of truth and function as a substitute for the meaningful reality of lived experience. As philosopher of law Antoinette Rouvroy points out, in this digital regime the individual is replaced by a set of a-significant data.4 The person as a singular individual with memory, experience and flesh no longer exists: they are transformed into a profile that can be tracked and whose behaviours can be predicted and pre-empted.

The digital regime of instrumentarian power is symptomatic of the rule of induction: forgetting about the causes of problems and focusing on predicting ever more outcomes, on creating total certainty.5 The digital regime of truth aims to shape the future according to a trajectory that validates the data already collected. In the context of the 2023–2024 Israeli war on Palestinians, the future of civilians is being shaped according to the data collected. They become the targets, automatically. In turn, the data collected validate the assault according to a single-minded mode of truth-making that empties out accountability.

Artificial Intelligence for cancelling meaning

The ecosystems of tools and smart devices create the fabric of everyday life by shaping the normative values of behaviours. In the context of surveillance capitalism, algorithmic modelling works to cancel all meaning and disrupts social trust. Surveillance capitalism is a new instrumentarian power that relies on computation and automation to overthrow people's sovereignty.6 At both an individual and a collective level, the mechanisms of subordination capitalize on indifference and media-driven connection to empty out dreams and desires for the future.

Ways of thinking, living, and existing depend on a technological arrangement between the tools that help us retain information and the tools that help us anticipate future outcomes.7 With the development of AI, the mind is now surrounded by smart devices that learn from our conduct, censor certain content, and promote other content. The fast-paced development of AI technology requires that we question the ecosystems of devices that are shaping our psychic and collective existences, including the ways in which they are currently both undoing forms of social trust and implementing censorship.

Artificial Intelligence for a new magnitude of death

In 2024, just over two-thirds of the world's population uses the internet, while in 2020 one person in four still did not have access to safe drinking water at home. According to a joint report by the World Health Organization (WHO) and the United Nations Children's Fund (UNICEF), progress in drinking water, sanitation and hygiene is largely insufficient and unequal. Indeed, it is estimated that by 2030, “only 81% of the world's population will have access to safe drinking water at home, while 1.6 billion people will still be deprived of it”.

What this parallel between digital networks and access to water shows is the distortion of international priorities in terms of civic and moral responsibility. While water meets a vital need of first necessity, what we see being deployed is the use of this drinking water to cool down massive data centres. According to Google's environmental report, published on 24 July 2023, the giant withdrew 28.765 billion litres of water in 2022. 98% of this was drinking water, two-thirds of which was used to cool its data centres, where the equipment running its information systems is housed.
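
Taken at face value, and purely as a back-of-the-envelope reading of the figures cited above, those numbers imply:

$$
0.98 \times 28.765 \approx 28.19 \ \text{billion litres of drinking water withdrawn}, \qquad \tfrac{2}{3} \times 28.19 \approx 18.8 \ \text{billion litres used to cool data centres}.
$$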

The energy cost is alarming; the human cost is distressing. 75% of the world's supply of cobalt, the material essential to the lithium-ion batteries in our mobile phones, computers, tablets, and electric cars, comes from eastern Congo, where tens of millions of people (children and adults) live and work in dehumanising conditions.8 In ten years, over 5 million people have died of disease and malnutrition.9

To understand the shift in the making of digital-driven fascist regimes, where technological advancement supports mass manipulation and dehumanisation, we must understand the rise of algorithmic obedience and the instrumentarian power of AI. In 2024, the loosening of army protocols in the name of AI-driven accuracy serves a global economy in which international laws are being hijacked in front of our eyes. As such, we (the comrades fighting for freedom across the world) are the living witnesses of a digital regime that has drastic consequences for the future of justice and solidarity.

 

A shorter version of this article appeared as “Israel and Gaza: AI in the time of warfare” in The Mail & Guardian, 4 February 2024.

Originally published in La Furia Umana.  

1 Nony, Anaïs. “Technology of Neo-Colonial Episteme.” Philosophy Today (2019).

2 Simondon, Gilbert. Du mode d’existence des objets techniques. Paris: Aubier, 1989.

3 The Turing test is presented in the 1950 paper “Computing Machinery and Intelligence”, where Turing asks whether a machine could win a game he calls the “imitation game”.

4 Rouvroy, Antoinette, and Bernard Stiegler. “The Digital Regime of Truth: From the Algorithmic Governmentality to a New Rule of Law.” Translated by Anaïs Nony and Benoît Dillet. La Deleuziana 3 (2016): 6–29. http://www.ladeleuziana.org/wp-content/uploads/2016/12/Rouvroy-Stiegle...

5 Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. London: Profile Books, 2019, p. 396.

6 Zuboff, The Age of Surveillance Capitalism, op. cit.

7 Stiegler, Bernard. Dans la disruption: Comment ne pas devenir fou? Paris: Éditions Les Liens qui Libèrent, 2016.

8 Bikubanya, Divin-Luc, Hadassah Arian, Sara Geenen, and Sarah Katz-Lavigne. “Le ‘devoir de vigilance’ dans l’approvisionnement en minerais du Congo.” Alternatives Sud 30 (2023): 143–152.

9 The development of new telecommunications and transportation technologies is directly linked to serious crimes defined in the Rome Statute of the International Criminal Court. Enslavement and the forcible transfer of population are acts committed against a civilian population with knowledge of the attack. These acts, most often perpetrated in order to create a docile workforce, are crimes against humanity under Article 7, Paragraph 1, of the Rome Statute.

 


by Anaïs Nony
A ler | 2 March 2024 | Artificial Intelligence, Fascist Regimes, Gaza, Israel