the NYT AI explainer misses the point

politics machine-learning alignment
Image generated with Stable Diffusion. Building and background generated with the positive prompt 'dark office building, security cameras, cloudy sky, ominous, dreary'. Camera generated with inpainting based on the prompt 'surveillance camera'.

In late March 2023, the NYT released a series of explainer articles about AI. The first article in the series (also available on Archive.org if you don't have a subscription) characterizes the recent history of AI as a progression of new technological ideas appearing over time. Of course that's partially true, but it gets the order wrong and misses important non-technical events that are key to understanding our current position.

First, the article claims neural networks were invented in 2012, when in fact the ideas had gained real traction by the 1950s at the latest. (According to Wikipedia, some of the earliest ideas about neural networks were being discussed in the late 19th century; see Mind and Body: The Theories of Their Relation by Alexander Bain, 1873, on GoodReads or Archive.org. For developments in the mid-20th century, see the Wikipedia articles on cybernetics and Norbert Wiener.) What actually happened in 2012 was a breakthrough in using GPUs to train neural networks, which made it possible to run networks big enough to do interesting things on a standard computer. (We're talking about AlexNet, from Krizhevsky et al. (2012), "ImageNet Classification with Deep Convolutional Neural Networks", in the NeurIPS proceedings.) The modern AI boom was kicked off by an adjacent technological change (GPUs) brought about for unrelated purposes (graphics rendering).

More importantly, the modern AI boom has been driven by corporate data collection at a scale unprecedented in human history. From a tweet thread by Meredith Whittaker:

NYT “AI” explainer misleads. Deep learning techniques date from the 1980s, & “AI” had been hot/cold for decades, not slow until 2012. There was no new “single idea” in 2012. What WAS new, & propelled the AI boom, was concentrated resources (data/compute) controlled by tech cos

The access to massive data (aka surveillance) and compute made old “AI” techniques do new things. And showed that “AI” could profitably expand “what could be done” with the surveillance data already created by the targeted ad companies that dominated the industry.

If we see the recent developments in AI as purely technological, we miss the fact that what makes these massive models possible (and profitable) is the mass data collection that has been accumulating in the hands of powerful tech companies for decades. The NYT explainer completely misses this.

The author of the tweet instead points us to the 2023 Landscape report by the AI Now Institute. (I wasn't familiar with some of these people and organizations, so here's a quick summary: Meredith Whittaker is the president of the Signal Foundation and a co-founder of the AI Now Institute. AI Now is funded by the Open Society Foundations (George Soros), the Ford Foundation, and the Luminate Group (the Omidyars). I think it's important to know where the money comes from when ideas are given a platform.) The report takes a socially-aware stance:

Only once we stop seeing AI as synonymous with progress can we establish popular control over the trajectory of these technologies and meaningfully confront their serious social, economic, and political impacts—from exacerbating patterns of inequality in housing, credit, healthcare, and education to inhibiting workers’ ability to organize and incentivizing content production that is deleterious to young people’s mental and physical health.

When we see the history of technology as an inevitable march in a predetermined direction, we stop seeing technological change as social change, leaving power in the hands of those creating the technology.