
The value chain of general-purpose AI

A closer look at the implications of API and open-source accessible GPAI for the EU AI Act

Sabrina Küspert, Nicolas Moës, Connor Dunlop

10 February 2023

Reading time: 20 minutes


General-purpose AI (GPAI)1 models are designed for generality of output and have a wide range of possible applications. Sometimes called foundation models, they can be used in standalone systems or as the ‘building block’ of hundreds2 of single-purpose AI systems to accomplish a range of distinct tasks, such as sector-specific analytic services, e-commerce chatbots or designing custom curricula.3

Many believe that GPAI represents a paradigm shift from traditional, single-purpose AI. Rather than having to build an AI system to carry out a specific task from scratch, cutting-edge GPAI – such as that developed by ‘upstream’ providers Meta, Microsoft and its partner OpenAI, and Alphabet with its Google Brain team and subsidiary DeepMind – offers the infrastructure that traditionally less technical ‘downstream’ companies can leverage to realise many different user-facing applications.  

This class of AI technology, and the relationships between providers and users that it implies, creates non-trivial issues for legislators. We can already see these tensions at play in the context of the EU AI Act – the first horizontal AI governance regime, which is likely to be followed by others in other jurisdictions – which tries to regulate AI mostly as a tangible product with limited intended purposes.

The key characteristics of GPAI models are their large size (due to the extensive number of parameters they use, i.e. the numerical values defining the model),4 their opacity (the fact that the computational mechanisms through which they output information are hard to explain) and their potential to develop unexpected capabilities beyond those intended by their producers. Delivering a GPAI model requires substantial amounts of data, computing power, some of the most talented researchers and engineers and – consequently – extensive financial resources.

The resource-intensive character of GPAI development contributes to establishing an interdependence between GPAI providers (upstream) and the companies applying these models to end user-facing applications (downstream). This relation makes the lifecycle of a GPAI model complex and reliant on a variety of actors, who are each responsible for different components of the same process.  

This is further complicated because relationships between upstream and downstream companies, and the level of control different actors have over the GPAI model, change according to the strategy that upstream GPAI providers adopt to distribute their model and place it on the market (currently as open-source software or via application programming interfaces (APIs) for the most part), i.e. their way of generating value and monetising the GPAI model.  

The complex dependencies between companies developing and companies deploying GPAI, the multi-functionality of the models and the entanglements between these two factors and the release strategies used by upstream providers pose unique challenges for AI governance. To assign the right responsibilities to the best equipped actor, it will be necessary to have a deep understanding of how GPAI accrues value over its various development and deployment stages – in other words, its value chain. 

As the EU co-legislators are still negotiating the AI Act, we look at the two most common GPAI market release strategies and their implications for the development and use of GPAI and its regulation.  

API and open-source access to GPAI dictate the level of control different actors have over the model

There are two main ways GPAI systems and their underlying models are currently made accessible to downstream developers on the market: via API and open-source access.  

Here we analyse the two routes separately but acknowledge that in reality they tend to blur,5 not least because models or parts of models, initially made accessible via API by one GPAI provider, are often imitated and made available as open-source by another one. 

Downstream developers can access a GPAI model through an API, which is controlled by the GPAI provider. They can use the model, including adapting it to use-case specific AI applications, without needing to understand its underlying technical details. In this route, the GPAI model is developed by the provider and run remotely on its servers, with a continuous interaction transferring the input and output from and to the downstream user online.  

Two prominent examples of GPAI models distributed via API are OpenAI’s GPT-3.5 (and its user-facing system, ChatGPT) and DALL-E. The key feature of this strategy is that control over the model and source code remains largely in the hands of the provider. 
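
To make this concrete, the sketch below shows what API-mediated access typically looks like from a downstream developer’s side. It is a minimal illustration assuming a hypothetical endpoint, API key and JSON schema, not the interface of any specific provider.

```python
import requests

# Hypothetical endpoint and key, for illustration only: the GPAI provider
# hosts the model and mediates every call.
API_URL = "https://api.example-gpai-provider.com/v1/generate"
API_KEY = "sk-..."  # issued by the provider, who can revoke it at any time

def generate(prompt: str) -> str:
    """Send a prompt to the remotely hosted GPAI model and return its output.

    The model weights never leave the provider's servers; the downstream
    developer only exchanges inputs and outputs over the network.
    """
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt, "max_tokens": 100},
        timeout=30,
    )
    response.raise_for_status()  # the provider can refuse or rate-limit calls
    return response.json()["text"]

print(generate("Summarise the EU AI Act in one sentence."))
```

Because every request passes through infrastructure the provider operates, the provider can log usage, enforce its terms and revoke access at any time – precisely the control described above.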

Open-source access refers instead to releasing the model or some elements of it publicly and allowing anyone to download, modify and distribute it, under the terms of permissive licences. In this case, only a one-off interaction between the GPAI provider and the downstream developers is needed. The GPAI provider uploads the model’s elements onto a platform or repository, providing technical documentation and any usage instruction required, and the downstream developer downloads the files.  

Stability AI and RunwayML adopted this release strategy with Stable Diffusion, a noteworthy example as the model in question is very similar to OpenAI’s aforementioned (API-provided) DALL-E.6
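
The open-source route, by contrast, is a one-off file transfer. The sketch below shows one common pattern, using the Hugging Face transformers library, with the small, openly released gpt2 model standing in for a GPAI model; running a state-of-the-art model this way would require far more computing power, as discussed next.

```python
# One common way to obtain and run an openly released model locally,
# using the Hugging Face `transformers` library. `gpt2` is a stand-in
# for any open-source GPAI model.
from transformers import AutoModelForCausalLM, AutoTokenizer

# One-off interaction: the files are downloaded once from a public
# repository and then run entirely outside the original provider's control.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The EU AI Act is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```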

Computing power plays an important role in how easily a GPAI model can be used in a meaningful way. API access is usually combined with access to the necessary computational infrastructure. However, to use open-source GPAI, downstream developers need to already have separate access to such infrastructure. As computing power is expensive and scarce, this can be a barrier to modifying or even loading the model.  

In terms of business models, both release and distribution strategies enable GPAI providers to monetise their models. Those releasing them through APIs can gain direct revenue by charging a subscription fee to access the model over time or on a per-use basis. Open-source providers can instead monetise their GPAI models mainly indirectly. For example, they can charge downstream actors for easy access to the necessary compute infrastructure (hosting) and to premium or enhanced versions of the models (open-core) or for other services such as fine-tuning, maintenance or customer support services (consulting and support model).       

Open-sourcing a GPAI model can also be a strategic decision for attracting attention from the broader research community, media and downstream industry. GPAI providers may benefit from reputational value through increased visibility, attracting partnerships and investments, and establishing a pool of talented researchers trained on and demanding their models. Clearly, as OpenAI’s ChatGPT has shown, such indirect benefits can also arise from API release strategies. 

In terms of who, in the GPAI model lifecycle, is responsible for doing what, the upstream GPAI provider usually researches, designs, develops and pre-trains the model on data and sometimes produces use-agnostic risk management and quality controls. It then determines the release and pricing structure of the GPAI model.

Releasing the model through an API means that the provider can set the conditions for access, respond to downstream misuse and constantly improve their model and commercial strategy, through analysing downstream use, without losing intellectual property rights. If the provider releases the model as open-source software, it loses control over the downstream use and can only leverage indirect ways of monetisation. However, it can incorporate into the original model new functionalities developed downstream in the open-source environment.  

Downstream actors, who produce simpler AI applications by adapting a GPAI model, can decide on the specific use for a model and the training data to fine-tune it. They can also choose to provide risk and quality management in the specific context of use. If they access the model via API, they potentially face constraints on the evaluation or retraining of the model’s functions.7 With an open-source model, instead, they can directly examine the parameter values according to which it was originally trained by the provider and change them, as the sketch below illustrates.
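
This difference in control is visible in code: with local, open-source access a downstream developer can read and overwrite every trained parameter value directly, which no API exposes. A minimal sketch, again using gpt2 as a stand-in:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

# Direct inspection: every trained parameter value is readable locally.
for name, param in list(model.named_parameters())[:3]:
    print(name, tuple(param.shape))

# Direct modification: nothing prevents the downstream developer from
# changing the weights, e.g. as the starting point for fine-tuning.
with torch.no_grad():
    first_param = next(model.parameters())
    first_param.add_(torch.randn_like(first_param) * 1e-4)
```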

Table 1: common business practices associated with API and open-source release strategies for GPAI8

Description
  • API (application programming interface): a software interface hosted by the GPAI provider through which downstream developers use the GPAI model and, at times, adapt it for use-case specific applications.
  • Open-source: the GPAI model (or elements of it) is publicly released, allowing anyone to download a copy and modify and distribute it.

Computing power considerations
  • API: the GPAI provider indirectly gives access to computing power, as a service in combination with the API, to enable running the model.
  • Open-source: the downstream actors (providing user-facing applications) have to independently hold computing power to run the model. This can be a barrier to modifying or even loading and running the model.

Underlying business model for the GPAI on offer
  • API: direct monetisation through subscription-based charging or charging on a per-use basis. Indirect monetisation is also possible: the API could in theory be free of charge or provided as a ‘freemium’9, in which case we observe similar potential for indirect monetisation as with open-source accessible GPAI.
  • Open-source: indirect monetisation. The models are available free of charge but can be monetised, for example, through selling associated services10, 11 or ‘closing’ an advanced version of the model.12 Companies can also accrue reputational value with this strategy through increased visibility, attracting partnerships, investments and talent.

Main activities and competences of the GPAI provider
Under both strategies, the provider:
  • carries out R&D on current technology
  • designs and develops the model and source code and, in some cases, pre-trains the model on data
  • in some cases, provides use-agnostic risk management and quality controls, for example, by testing for general-purpose accuracy and robustness, documenting technical information and ensuring cybersecurity
  • determines the release and pricing structure of the GPAI model, notably whether to offer API access or to open-source the model.
With API access (direct or indirect monetisation, see ‘underlying business model’), the provider additionally:
  • sets conditions for first access, for example, by widening the customer base step by step or by requiring that data used for fine-tuning be uploaded to the GPAI provider’s server
  • responds to downstream misuse of the technology, for example, through controlling API access, logging and identifying downstream misuse patterns
  • constantly improves the model and commercial strategy through analysing downstream use (feedback loop), without losing IP rights.
With open-source release (indirect monetisation, see ‘underlying business model’), the provider:
  • incorporates functionalities added by external developers into the GPAI model, while losing IP rights over the original open-sourced model.

Main activities and competences of the downstream actors, such as developers, deployers and operators of end user-facing applications
Under both strategies, they apply the GPAI model to specific use cases and test it, including:
  • deciding on specific use cases for GPAI adaptation
  • influencing the decision on, or provision of, data used for fine-tuning
  • fine-tuning or embedding the weights (computed values for each parameter) of the model for each specific use case
  • choosing to provide risk management and quality controls in the context of use, for example, testing for accuracy and robustness, documenting technical information and ensuring cybersecurity.
With API access, they are potentially limited in their ability to evaluate or retrain the GPAI model’s functions. With open-source access, they can examine the GPAI model directly.

Examples of GPAI models following each business practice
  • API: OpenAI’s GPT-3.5 (with its user-facing system ChatGPT) and DALL-E.
  • Open-source: Stable Diffusion, released by Stability AI and RunwayML.

The choice between API and open-source release strategies has divergent implications for accessibility and accountability. API-mediated access can offer more guardrails to reduce the proliferation of harmful content; however, it centralises control over GPAI models in the hands of the most powerful actors in the value chain.

Open-source releases, instead, have been seen as the means to make access to and control of GPAI models (and their outcomes) more equitable, as a broader community can study a model, adapt it or improve it. However, the lack of guardrails in open-source releases has already led to the proliferation of sexist and racist outputs and lawsuits over the use of copyrighted images.  

The implications of release strategies therefore represent a difficult trade-off and may be a reason why leading AI labs have simply decided to keep some of their models private. For example, DeepMind has released only research documents about their GPAI model MuZero together with the pseudocode (a partial, non-functional version of the code), rather than the model itself. 

Clearly, many issues of safety-control on end user-facing services, accountability and equitable access could be addressed via legislation. The question is how to do so effectively. 

The task is especially complicated because, while the debate in the policy community has focused on the two release strategies of API and open-source, the monetisation of GPAI remains in its infancy. Other business models that transform GPAI and GPAI-related services into economic value already exist. Even if it is unclear how much they will be used in the future, their wide array indicates that we should future-proof policy by making it adaptable to or, better yet, completely independent of GPAI release strategies.

Table 2: potential business practices and GPAI release strategies other than API-accessible and open-source

  • Customer-specific GPAI: GPAI dedicated to a specific client’s use and trained on the client’s proprietary data; can serve as the ‘building block’ for specific applications needed by the client.
  • White-label GPAI: GPAI models, or API access to a model, resold or relicensed via model integrator operators, such as technology consulting firms.
  • Consulting for GPAI integration: consulting practices through which GPAI providers help integrate and fine-tune an existing GPAI for a customer, through a separate consulting firm or freelancers.
  • Maintenance of GPAIs: maintenance service agreements, which might be made compulsory to ensure compliance throughout the model lifecycle, used to drive profits.
  • Traditional online software sale: copies of specific elements of the model, such as the weights resulting from training or the codebase without data feeds, are sold to customers, who install and set them up.
  • GPAI marketplaces: platforms hosting GPAI models, fine-tuned AI models or subcomponents, available for sale, subscription or temporary hire.
  • GPAI kits: pre-coded GPAI elements, possibly combined with instructions and subscription-based computational power or coding assistants, that help users with little to no programming skills to use the model.

Future-proofing policy for GPAI: five key challenges to address  

GPAI models and their release strategies pose a unique challenge for policymakers setting up an AI governance regime. This is most visible in the EU AI Act, which is currently being negotiated, for three primary reasons.

  • Complexity of the value chain: the AI Act is designed to regulate tangible products placed on the EU market, a description that hardly applies to GPAI models. Indeed, the assignment of duties and liability is much more difficult to establish in the case of GPAI than the AI Act implies. This is due to the number of players in the value chain, the disparate levels of control based on the release and access strategy, and the model’s ability to exhibit new capabilities throughout its lifecycle.
  • The multi-functionality of GPAI models: EU product legislation, like the AI Act, assumes all products have an ‘intended purpose’; GPAI models, however, by definition have underdetermined purposes.
  • The polarity of (current) release strategies: the decision taken by GPAI providers on how to make their model accessible and thus release it on the market – via API or as open-source – determines in disparate ways how risks emerge for users and consumers. Regulating GPAI through a one-size-fits-all regime, such as the AI Act, is highly challenging.  

Understanding these challenges and distilling the key questions that legislators face in the context of the AI Act will foster an informed debate on the regulation of GPAI now and in the future.  

Challenge 1: How should GPAI be defined in the legislation? 

Finding an evidence-based definition that recognises the broad spectrum of ‘generality’ of purpose of this technology is a first key regulatory challenge. Future benchmarking, measurement and metrology efforts in the GPAI domain could help, but a temporary code of conduct might be needed while they mature.  

Challenge 2: How to factor in the various incentives behind open-source release strategies? 

Similarly, distinguishing between open-source distribution by grassroots developers volunteering their talent and private companies’ open-source product release mechanisms (including sophisticated for-profit monetisation strategies) is another hurdle for policymakers and enforcers. Engaging with open-source communities of the first kind will be key to ensuring that this distinction is accurate.

Challenge 3: What regulatory tools are needed to ensure future-proofed governance of GPAI models?  

As business models develop at exponential speed, policymakers need to future-proof their policies, by keeping principles and obligations independent from distribution channels. Specifically, they need to ensure design, quality and reliability requirements are the same regardless of how GPAI models are released, to avoid distorting the market. Realistically, methods of compliance and policy enforcement will have to be adapted to different business models to ensure that trustworthiness remains a competitive advantage for AI providers rather than a liability. This involves monitoring evolutions in the AI value chain and updating regulatory guidance accordingly.  

Challenge 4: How can regulators ensure adequate capacity for oversight to deal with the societal-level implications of GPAI models?  

Cutting-edge GPAI models may become fundamental assets in the digital economy, while still being fraught with inherent issues, such as opacity and emerging capabilities. Managing these issues will require strong technical expertise at all levels of government, coordination between leading AI labs and regulators and, possibly, an ‘ecosystem of inspection’13 that allows academia and civil society to independently assess GPAI models.

Challenge 5: How can AI governance incorporate the views of affected persons and impacted communities, so that the benefits of GPAI are equitably distributed?  

As GPAI releases in 2022 have shown, these models are now a matter of public debate. Amid the need for safeguards, the drive for innovation, the protection of privacy and intellectual property rights, the traditional culture of freedom and openness of software communities, the transition towards synthetic art and culture and the oligopolistic interests of bigger companies in this field, AI governance has the daunting task of arbitrating many trade-offs. Policymakers need to find methods to engage with the people affected by AI technology, including those stakeholders that are traditionally left out of the debate. 

Overall, ensuring trustworthiness along the entire value chain of GPAI and its diversified business models presents significant challenges for policymakers. Future-proofing policies and ensuring compliance and effective enforcement will be crucial in building and maintaining public trust in these technologies, not only in the context of the EU AI Act, but in all jurisdictions aiming at setting a governance regime for AI.


The views presented here are the authors’ own and do not necessarily represent the views of the organisations they work for or are affiliated with. 


If you want to know more about recent developments in AI regulation, you may be interested in our foundational report Rethinking data and rebalancing digital power, our work on the AI Act and the expert opinion on AI liability we commissioned from Christiane Wendehorst.

Footnotes

  1. The term ‘general-purpose AI’ was first proposed in a 2021 amendment to the EU AI Act.
  2. OpenAI’s GPAI system GPT-3 is used by ‘tens of thousands of developers’ and in over 600 apps currently placed on the market, covering over 100 intended purposes. https://openai.com/blog/gpt-3-apps/
  3. This sometimes requires customisation of the GPAI model, i.e. modifying some of its parameters from their original, general-purpose values to new, more specialised values, generally by feeding the model an additional dataset related to the intended purpose. Some GPAI model developers enable downstream developers to do this customisation directly in the API, see https://openai.com/blog/customized-gpt-3/
  4. Even though 2022 saw the release of models that achieve similar performance and generality with a smaller size, these ‘smaller’ models remain orders of magnitude bigger than single-purpose AI systems in terms of number of parameters. See, for instance, https://www.deepmind.com/publications/an-empirical-analysis-of-compute-optimal-large-language-model-training
  5. See https://arxiv.org/abs/2302.04844
  6. This also shows how rapidly AI developments can proliferate. Meta’s AI lead Yann LeCun recently claimed that no AI lab is ahead of any other by more than two to six months, see https://www.economist.com/business/2023/01/30/the-race-of-the-ai-labs-heats-up
  7. Please note that how much access is granted via API depends on each GPAI provider, ranging from strong limitations (e.g. only sharing the output) to full freedom to fine-tune the model for a specific task.
  8. See, among other sources: https://arxiv.org/abs/2108.07258; https://futureoflife.org/wp-content/uploads/2022/11/Emerging_Non-European_Monopolies_in_the_Global_AI_Market.pdf; https://www.ceps.eu/ceps-publications/reconciling-the-ai-value-chain-with-the-eus-artificial-intelligence-act/; and https://www.brookings.edu/research/how-open-source-software-shapes-ai-policy/
  9. See, for example, the pricing of OpenAI’s API, where downstream developers can start for free with a limited amount of credit to access or fine-tune the language model and, once that is spent, can pay for additional resources. OpenAI has accumulated significant indirect value through the hype of their free(mium) API-based models with user interface products, such as ChatGPT or DALL·E. https://openai.com/api/pricing/
  10. For example, consulting services, training downstream actors, adapting a version of the open-source model for industrial clients.
  11. For example, charging for easy access to computational infrastructure or for other services such as fine-tuning or maintenance. Right now, some platforms facilitate downloading the model and running it on their computational infrastructure, e.g. Hugging Face, Microsoft Azure, Google Cloud.
  12. Open-source models are frequently tested and used by the emerging communities around them. The GPAI provider could use this to develop an advanced version of the model that they could then monetise directly, instead of open-sourcing it.
  13. See also https://hai.stanford.edu/news/time-now-develop-community-norms-release-foundation-models
