Meredith Whittaker’s speech on winning the Helmut Schmidt Future Prize
This is a superb speech, and a great summing up of where we are with surveillance capitalism and AI in 2024. It explains where surveillance-driven advertising came from, in the 1990s:
First, even though they were warned by advocates and agencies within their own government about the privacy and civil liberties concerns that rampant data collection across insecure networks would produce, [the Clinton administration] put NO restrictions on commercial surveillance. None. Private companies were unleashed to collect and create as much intimate information about us and our lives as they wanted – far more than was permissible for governments. (Governments, of course, found ways to access this goldmine of corporate surveillance, as the Snowden documents exposed.) And in the US, we still lack a federal privacy law in 2024. Second, they explicitly endorsed advertising as the business model of the commercial internet – fulfilling the wishes of advertisers who already dominated print and TV media.
How that drove the current wave of AI:

In 2012, right as the surveillance platforms were cementing their dominance, researchers published a very important paper on AI image classification, which kicked off the current AI goldrush. The paper showed that a combination of powerful computers and huge amounts of data could significantly improve the performance of AI techniques – techniques that themselves were created in the late 1980s. In other words, what was new in 2012 were not the approaches to AI – the methods and procedures. What “changed everything” over the last decade was the staggering computational and data resources newly available, and thus newly able to animate old approaches.

Put another way, the current AI craze is a result of this toxic surveillance business model. It is not due to novel scientific approaches that – like the printing press – fundamentally shifted a paradigm. And while new frameworks and architectures have emerged in the intervening decade, this paradigm still holds: it’s the data and the compute that determine who “wins” and who loses.
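The 2012 paper she’s describing is the AlexNet image-classification paper by Krizhevsky, Sutskever and Hinton. To make the “old methods, new scale” point concrete, here’s a minimal sketch (in PyTorch, which itself postdates the methods) of the kind of late-1980s convolutional network the field already had: LeCun-style convolutions, pooling, and backpropagation. Every layer type here predates 2012; the sizes and names are illustrative, not taken from any particular paper.

```python
import torch
import torch.nn as nn

class LeNetStyle(nn.Module):
    """A 1980s-vintage convolutional classifier: stacked convolutions,
    pooling, and fully connected layers, trained by backpropagation.
    What changed after 2012 was not this recipe but the scale of data
    and compute thrown at it."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One 28x28 grayscale image in, ten class scores out.
logits = LeNetStyle()(torch.randn(1, 1, 28, 28))
print(logits.shape)  # torch.Size([1, 10])
```

AlexNet was larger and added a few refinements, but it belongs to the same family of model; the performance jump came from ImageNet-scale data and GPU compute, which is exactly Whittaker’s point.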
And how that is driving a new form of war crimes, pattern-recognition-driven kill lists like Lavender:

The Israeli Army … is currently using an AI system named Lavender in Gaza, alongside a number of others. Lavender applies the logic of the pattern recognition-driven signature strikes popularized by the United States, combined with the mass surveillance infrastructures and techniques of AI targeting. Instead of serving ads, Lavender automatically puts people on a kill list based on the likeness of their surveillance data patterns to the data patterns of purported militants – a process that we know, as experts, is hugely inaccurate. Here we have the AI-driven logic of ad targeting, but for killing.

According to +972’s reporting, once a person is on the Lavender kill list, it’s not just them who’s targeted: the building where they (and their family, neighbours, pets, whoever else) live is subsequently marked for bombing, generally at night when they (and those who live there) are sure to be home.

This is something that should alarm us all. While a system like Lavender could be deployed in other places, by other militaries, there are conditions that limit the number of others who could practically follow suit. To implement such a system you first need fine-grained population-level surveillance data, of the kind that the Israeli government collects and creates about Palestinian people. This mass surveillance is a precondition for creating ‘data profiles’, and comparing millions of individuals’ data patterns against such profiles in service of automatically determining whether or not these people are added to a kill list.

Implementing such a system ultimately requires powerful infrastructures and technical prowess – of the kind that technically capable governments like the US and Israel have access to, as do the massive surveillance companies. Few others also have such access. This is why, based on what we know about the scope and application of the Lavender AI system, we can conclude that it is almost certainly reliant on infrastructure provided by large US cloud companies for surveillance, data processing, and possibly AI model tuning and creation. Because collecting, creating, storing, and processing this kind and quantity of data all but requires Big Tech cloud infrastructures – they’re “how it’s done” these days. This subtle but important detail also points to a dynamic in which the whims of Big Tech companies, alongside those of a given US regime, determine who can and cannot access such weaponry.

The use of probabilistic techniques to determine who is worthy of death – wherever they’re used – is, to me, the most chilling example of the serious dangers of the current centralized AI industry ecosystem, and of the very material risks of believing the bombastic claims of intelligence and accuracy that are used to market these inaccurate systems, and to justify carnage under the banner of computational sophistication. As UN Secretary-General António Guterres put it, “machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law.”
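Her claim that this kind of pattern matching is “hugely inaccurate” is worth making concrete, because it follows from arithmetic rather than from any implementation detail: when you score millions of people for membership in a rare class, even a classifier with impressive-sounding accuracy flags far more innocent people than genuine matches. A minimal sketch in Python, with entirely invented numbers (none of these figures come from the speech or from +972’s reporting):

```python
# Base-rate arithmetic for population-scale pattern matching.
# All numbers are hypothetical, chosen only to illustrate the effect.
population = 2_000_000       # people whose data patterns get scored
true_targets = 20_000        # assume 1% actually belong to the sought class
false_positive_rate = 0.01   # "99% specificity": flags 1% of everyone else
true_positive_rate = 0.90    # "90% sensitivity"

false_positives = (population - true_targets) * false_positive_rate
true_positives = true_targets * true_positive_rate
precision = true_positives / (true_positives + false_positives)

print(f"innocent people flagged: {false_positives:,.0f}")      # 19,800
print(f"share of the list that is wrong: {1 - precision:.0%}")  # 52%
```

Even under these generous assumptions, roughly half the list is wrong. That gap between marketed “accuracy” and what probabilistic matching actually delivers at population scale is the point, and here every entry on the list is a building marked for bombing.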
(tags: pattern-recognition kill-lists 972 lavender gaza war-crimes ai surveillance meredith-whittaker)