
Data, Compute, Labour

The monopolisation of AI is not just – or even primarily – a data issue.

Dr Nick Srnicek

30 June 2020

Reading time: 8 minutes


Monopolisation is driven just as much by the barriers to entry posed by fixed capital, and by the ‘virtuous cycles’ that compute and labour are generating for the AI providers.

The media and the academic world are filled with stories and analyses of how AI will impact our economy. Yet with few exceptions, this work is focused on the impacts that might occur through automation as AI either does or does not – depending on the analysis – radically change the labour market. The brief argument I want to make here is that there is another significant channel for the impact of AI on the economy: the capacity of this technology to increase the consolidation of capital in the hands of a few major firms.

A few scholars have looked at the mutual implications between AI and monopolisation, with the focus overwhelmingly centred on the importance of data in the process of training machine-learning models. One group of researchers is largely sceptical of data’s role in facilitating winner-takes-all markets – arguing, for instance, that more data does not mean more economic value (e.g. the difference between a model that is 95% accurate and a model that is 94% accurate is marginal in most use cases).[1] Another group is more concerned that initial leads in AI and data will tend to grow larger and larger, leading to a handful of firms dominating the market for AI provision.[2] As one New York Times piece puts it: ‘The more data you have, the better your product; the better your product, the more data you can collect; the more data you can collect, the more talent you can attract; the more talent you can attract, the better your product.’

On the basis of analyses like this, policymakers have replicated the focus on data as key to maintaining capitalist competition. The sharing of data is a particularly prominent example of this, with a recent European Commission report, for instance, suggesting that ‘where specific circumstances so dictate, access to data should be made compulsory, where appropriate under fair, transparent, reasonable, proportionate and/or nondiscriminatory conditions.’ Similar policy proposals have been put forward in Germany.

Yet this focus on data overlooks more traditional inputs into the AI production process: namely, fixed capital and labour. In the first place, while data is undoubtedly important to the current paradigm of AI, equally important in many cases are the available computational capacities. Models are becoming larger and larger, and hardware is being scaled up to datacentre and supercomputer levels.[3] Since the dawning of the deep learning age in 2012, the amount of compute needed to train the largest models has increased by 300,000x – equivalent to a doubling every 3.4 months.
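(As a rough back-of-the-envelope check of my own, rather than a figure from the OpenAI analysis itself: log2(300,000) ≈ 18.2 doublings, and 18.2 × 3.4 months ≈ 62 months, or just over five years, which matches the roughly five-year window that analysis covers, from 2012 to late 2017. The two ways of stating the trend are consistent.)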

Training and rolling models out into production increasingly require significant computational resources – resources that are largely owned by the biggest tech companies in the world. The ground-breaking AlphaGo Zero, for example, has been estimated to have cost $35 million to train. And while detailed figures on the amounts being spent on datacentres are not available publicly, the financial statements of the big cloud companies all reveal tens of billions of dollars annually being poured into fixed capital. Amazon, Microsoft and Google, for example, spent $73.5 billion on capital expenditures in 2019. Far from being immaterial, these are significantly embodied companies. And the entry fees to compete with these companies are growing as they all turn to designing their own specialised computer chips in an effort to gain more speed and power.

All of this fixed capital, in turn, enables these companies to produce more accurate and efficient AI. As one review of the impact of compute notes, ‘there is a close tie from compute operation rate (e.g., floating point operations, or “FLOPs”) to model accuracy improvements.’ More compute also enables the companies that have access to it to train and retrain models much more quickly than their competitors. As AI remains an empirical science, it involves running a number of experiments to see what works best – tuning hyperparameters, testing on data from outside the training set, debugging any problems, and so on. The more rapidly one can do this, the more rapidly a firm can deploy models to users.
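To make that loop concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn, with a toy dataset and an arbitrary hyperparameter grid standing in for the vastly larger searches that industrial labs run:

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Toy stand-in for a real training corpus.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each hyperparameter combination is one 'experiment'; more compute means
# more combinations (and far larger models) can be tried in the same time.
search = GridSearchCV(
    LogisticRegression(max_iter=5000),
    param_grid={'C': [0.01, 0.1, 1.0, 10.0]},
    cv=3,
)
search.fit(X_train, y_train)

# Evaluate on data from outside the training set before deployment.
print('best hyperparameters:', search.best_params_)
print('held-out accuracy:', search.score(X_test, y_test))

The point is not the particular library but the shape of the loop: whoever can run more of these cycles, faster and at larger scale, learns what works first.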

Moreover, as the real world changes, models degrade and need updating. Again, the more rapidly one can retrain models, the better they will perform, and the more users they are likely to attract. Lastly, more computing power enables better research for these companies. It allows researchers to try ideas at scales that are unavailable to smaller firms. Innovation, as a result, is more likely to come from the larger AI providers.

These innovations may eventually filter down to smaller firms, as improvements and efficiencies make them more readily available. But as one investor puts it, ‘Having a really, really big computer is kind of like a time warp, in that you can do things that aren’t economical now but will be economically [feasible] maybe a decade from now.’ More compute, in the end, allows the big AI providers to more rapidly and effectively research AI possibilities and gain even more ground on their competitors.

These computational resources, in turn, require skilled workers who can make efficient use of the hardware in the first place. At present, the global dearth of such workers means they typically command high salaries affordable only to the largest companies. DeepMind, for example, spent nearly £400 million on ‘staff and related costs’ in 2018. And academia continues to see a major brain drain as high salaries – and access to immense amounts of compute – draw researchers into AI companies.

More subtle measures are also used by the largest companies to draw in the best workers – including the provision of open-source frameworks like TensorFlow that function to build up a community of developers trained in a company’s workflow and tools. AI frameworks become feeder networks for the emerging generations of talent from graduate schools – leading again to a consolidation of power and resources in the hands of a few.

Let me conclude with three points. As I have briefly tried to argue here, the monopolisation of AI is not just – or even primarily – a data issue. Monopolisation is driven just as much by the barriers to entry posed by fixed capital, and by the ‘virtuous cycles’ that compute and labour are generating for the AI providers. The academic literature has to date largely neglected to examine these elements.

Another consequence of the preceding argument is that open-source is not an alternative so much as a strategic tool for Big Tech. Existing arguments about how large tech companies freely use open-source software as a foundation to build their proprietary empires must also be supplemented with the ways in which free – and waged – labour is brought into the ambit of these companies via things like open-source frameworks.

Lastly, a third consequence is that economic policy in response to Big Tech must go beyond the fascination with data. If hardware is important too, then opening up data is an ineffective idea at best and a counter-productive one at worst. It could simply mean that the tech giants get access to even more free data – while everyone else trains models on that open data using Amazon’s servers. If we want to take back control over Big Tech, we need to pay attention to more than just data.

Thanks to Jack Clark and his Import AI newsletter for much of the inspiration behind this work.

This post is one of a series of critical texts commissioned to explore, enrich and augment the research of our Rethinking Data programme. The texts will examine data policy, infrastructures, dynamics of power, and strategies for the future of AI and the digital economy.

References

Agrawal, Ajay K, Joshua S Gans, and Avi Goldfarb. “Economic Policy for Artificial Intelligence.” Working Paper. National Bureau of Economic Research, 2018.

Amodei, Dario, and Danny Hernandez. “AI and Compute.” OpenAI, May 16, 2018.

Bapna, Ankur, and Orhan Firat. “Exploring Massively Multilingual, Massive Neural Machine Translation.” Google AI Blog (blog), October 11, 2019.

Casado, Martin, and Peter Lauten. “The Empty Promise of Data Moats.” Andreessen Horowitz (blog), May 9, 2019.

European Commission. “A European Strategy for Data.” Brussels, 2020.

Fitzgerald, Charles. “Follow the CAPEX: Cloud Table Stakes 2019 Edition.” Platformonomics, February 11, 2020.

Furman, Jason, and Robert Seamans. “AI and the Economy.” Working Paper. National Bureau of Economic Research, 2018.

Hazelwood, Kim, Sarah Bird, David Brooks, Soumith Chintala, Utku Diril, Dmytro Dzhulgakov, Mohamed Fawzy, et al. “Applied Machine Learning at Facebook: A Datacenter Infrastructure Perspective,” 2018.

Hestness, Joel, Sharan Narang, Newsha Ardalani, Gregory Diamos, Heewoo Jun, Hassan Kianinejad, Md Mostofa Ali Patwary, Yang Yang, and Yanqi Zhou. “Deep Learning Scaling Is Predictable, Empirically.” ArXiv, 2017.

Huang, Dan. “How Much Did AlphaGo Zero Cost?” Dansplaining, 2018.

Laanait, Nouamane, Joshua Romero, Junqi Yin, M. Todd Young, Sean Treichler, Vitalii Starchenko, Albina Borisevich, Alex Sergeev, and Michael Matheson. “Exascale Deep Learning for Scientific Inverse Problems.” ArXiv, 2019.

Lee, Kai-Fu. AI Superpowers: China, Silicon Valley, and the New World Order. Boston: Houghton Mifflin Harcourt, 2018.

———. “The Real Threat of Artificial Intelligence.” The New York Times, June 24, 2017.

Levy, Steven. “Bill Joy Finds the Jesus Battery.” Wired, August 16, 2017.

Murgia, Madhumita. “AI Academics Under Pressure to Do Commercial Research.” Financial Times, March 13, 2019.

Nahles, Andrea. “Die Tech-Riesen Des Silicon Valleys Gefährden Den Fairen Wettbewerb.” Handelsblatt, August 13, 2018.

Toole, Jameson. “Deep Learning Has a Size Problem.” Medium, November 5, 2019.

Varian, Hal. “Artificial Intelligence, Economics, and Industrial Organization.” Working Paper. National Bureau of Economic Research, 2018.

Footnotes

  1. Varian, “Artificial Intelligence, Economics, and Industrial Organization”; Furman and Seamans, “AI and the Economy”; Casado and Lauten, “The Empty Promise of Data Moats.”
  2. Agrawal, Gans, and Goldfarb, “Economic Policy for Artificial Intelligence”; Lee, AI Superpowers.
  3. Toole, “Deep Learning Has a Size Problem”; Bapna and Firat, “Exploring Massively Multilingual, Massive Neural Machine Translation”; Hazelwood et al., “Applied Machine Learning at Facebook”; Laanait et al., “Exascale Deep Learning for Scientific Inverse Problems.”