Patterns of Growth
Mapping the trajectory of AI model development across time, geography, and use of resources
Rafa Africa, Ignacio Mijares, Sadie Lee
DSCI 320 • 2025

I
Introduction
Progress in artificial intelligence (AI) has been unusually rapid for a technological development, with models growing by orders of magnitude across multiple dimensions. Understanding how such rapid progress occurred requires tracing the technical lineage of modern models, from the scaling of compute and data usage to the shifting geographic and institutional landscape.
By tracing these patterns, we can begin to grasp the scale and scope of growth to better understand how AI systems have emerged and how their trajectories may continue to evolve.

II
Temporal patterns
Temporal patterns examine how model training resources have changed over time, how efficiently those resources are used, and how public access to a model relates to its training compute. First, how have training compute, parameter count, power draw, training cost, and training time scaled over time, and how do these patterns differ across domains, countries, and organization types?
[Chart: scaling of training compute, parameters, power draw, cost, and training time over time]
Over time, it's clear that models have scaled across all of the attributes examined, particularly in terms of training compute required, number of trainable parameters, and training time. It is notably more difficult to examine patterns over time for power draw and training cost, as these metrics were generally not recorded until after 2010.
The deep learning era (2010-2025) contains the highest concentration of model releases, as expected, and also shows growth across the metrics examined. This underscores the deep learning era as an inflection point in AI development, likely driven by factors such as the adoption of graphics processing units (GPUs) for general-purpose parallel computing.
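As a rough sketch of how this kind of growth can be quantified, the following fits a log-linear trend to training compute over release year and converts the slope into a doubling time. The data values and column names here are illustrative assumptions, not taken from the Epoch AI dataset.

```python
import numpy as np
import pandas as pd

# Illustrative (not real) release years and training compute values;
# column names are assumptions, not the Epoch AI schema.
df = pd.DataFrame({
    "year": [2012, 2015, 2018, 2020, 2023],
    "training_compute_flop": [1e17, 1e19, 1e21, 3e23, 1e25],
})

# Fit a log-linear trend: log10(compute) ~ slope * year + intercept.
slope, intercept = np.polyfit(
    df["year"], np.log10(df["training_compute_flop"]), 1
)

# Translate the slope (decades of compute per year) into an
# approximate doubling time in months.
doubling_time_months = 12 * np.log10(2) / slope
```

On this toy data the fitted doubling time comes out on the order of a few months, illustrating the exponential character of the trend rather than any real estimate.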

With an understanding of the overall scaling of key training resources, we then examine: how has model resource efficiency (i.e., cost per FLOP, cost per parameter, compute per dollar, and compute per watt) evolved over time, and do these metrics exhibit diminishing returns?
[Chart: resource efficiency metrics over time]
Given how much overall model scale has grown over time, it is notable that the resources required to train AI models are being used more efficiently. In many domains, scaling increases expenses on a per-unit basis; in algorithmic terms, for example, more complex algorithms tend to have worse asymptotic behavior. The fact that larger AI models yield more efficient resource usage, e.g. lower cost per parameter and cost per FLOP, thus runs counter to traditional intuitions about scaling.
In other words, while most technologies show diminishing returns at high scale, there is no apparent saturation point yet at the scale of current AI models. This suggests that scaling up may remain an efficient path to improving models.
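The per-unit efficiency metrics discussed here can be derived directly from the raw resource columns. A minimal sketch, with assumed column names and made-up values:

```python
import pandas as pd

# Two made-up models; column names are assumptions for illustration,
# not the Epoch AI schema.
df = pd.DataFrame({
    "model": ["small", "large"],
    "training_compute_flop": [1e21, 1e24],
    "parameters": [1e9, 1e11],
    "training_cost_usd": [1e5, 5e7],
})

# Derived per-unit efficiency metrics of the kind charted above.
df["cost_per_flop"] = df["training_cost_usd"] / df["training_compute_flop"]
df["cost_per_parameter"] = df["training_cost_usd"] / df["parameters"]
df["compute_per_dollar"] = df["training_compute_flop"] / df["training_cost_usd"]
```

In this toy example the larger model ends up cheaper per FLOP, mirroring the counterintuitive pattern described above, though the numbers themselves are invented.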

Considering compute as a key training resource, how have model accessibility levels changed over time, and how does this correlate with training compute?
[Chart: model accessibility levels over time versus training compute]
Models have changed in how the public can access and use them, with more institutions releasing open weights, i.e. publicly sharing a model's trained parameters, yet accessibility does not appear to be a significant factor in the scale of a model's compute. This means that some of the largest-compute models trained to date offer some level of transparency and accessibility. Given that scale, however, users would need substantial computational resources of their own to actually run these weights, which may not be feasible.
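One way to check the accessibility-compute relationship is to compare compute distributions across accessibility levels. A sketch, with hypothetical labels and invented values:

```python
import pandas as pd

# Hypothetical accessibility labels and compute values, for illustration
# only; the real dataset's categories and figures differ.
df = pd.DataFrame({
    "accessibility": ["open weights", "open weights", "open weights",
                      "api only", "unreleased"],
    "training_compute_flop": [1e25, 1e22, 5e23, 5e24, 1e23],
})

# Median training compute per accessibility level; a box plot of the
# same grouping would show the full distributions.
median_by_access = (
    df.groupby("accessibility")["training_compute_flop"].median()
)
```

A grouped summary like this is what lets us say whether open-weight models sit systematically below (or above) closed models in compute.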

III
Geographic patterns
Geographic patterns examine how model development is distributed globally, how a country's AI ecosystem relates to model size and parameters, and how model accessibility varies across the top producing countries. To begin, what is the geographic distribution of the top 10 contributors to AI model releases?
[Chart: top 10 countries by AI model releases]
The United States and China lead global AI development in terms of the total number of AI models released over time. This is expected, given that many of the institutions that are key players in AI development are based in the United States, such as OpenAI, Google, Meta AI, and NVIDIA.
Also expected are the remaining countries that make up the top five. Namely, the United Kingdom is home to Google DeepMind, and Canada is home to academic institutions that led key methodological innovations, particularly in AI's earlier stages, such as backpropagation and the ReLU activation function.

We then explore: how do AI-related academic publication intensity and export controls on semiconductors relate to model size (parameters) and training compute across countries and organizations?
[Chart: publication intensity and semiconductor export controls versus model size and training compute]
Reflecting the top countries by number of AI models released, it is interesting to note that while both the United States and China produce large models, the United States shows greater variation in its semiconductor export controls and in the intensity of its AI-related publications, whereas China tends to have low export controls and a high intensity of AI-related publications.
Further, for the United States, the distribution of training compute is fairly comparable across all levels of export controls, indicating that export controls have not significantly limited available training compute.

Once again considering compute as a key training resource, how does model accessibility vary across the top 10 model-producing countries, and how does training compute differ across accessibility levels?
[Chart: accessibility and training compute across the top 10 model-producing countries]
At both the country and organization level, compute is not significantly related to model accessibility, reflecting the earlier temporal findings.

IV
Resource and efficiency patterns
Resource and efficiency patterns examine how model efficiency has changed across the deep learning era (2010-2025), how resource usage varies between frontier and non-frontier models, and how resource efficiency metrics correlate with each other. First, how has model efficiency changed over time, and how do release patterns contextualize these changes?
[Chart: model efficiency over the deep learning era]
Across the deep learning era, efficiency metrics have generally increased, also reflecting the earlier temporal findings. As expected, the models with the highest levels of training compute are frontier models, given that Epoch AI defines a frontier model in the dataset as one in the top 10 by training compute at the time of its release, a threshold that has risen over time as larger models are developed.
At the organization level, the data records xAI's 2025 Grok model as having the greatest efficiency in training compute FLOPs per training hour.
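The frontier definition described above amounts to a running top-N check against all models released so far. The sketch below implements that rule with assumed column names; the toy data uses `top_n=2` and invented values purely so the effect is visible with a handful of rows (Epoch AI's actual rule uses the top 10).

```python
import pandas as pd

def frontier_flags(df, top_n=10):
    """Flag models in the top_n by training compute among all models
    released up to and including their own release (the rule described
    above; column names are assumptions, not the Epoch AI schema)."""
    df = df.sort_values("year").reset_index(drop=True)
    flags = []
    for i in range(len(df)):
        # Compute values of every model released by this point in time.
        released = df.loc[:i, "training_compute_flop"]
        # Smallest compute still inside the current top_n.
        threshold = released.nlargest(top_n).min()
        flags.append(df.loc[i, "training_compute_flop"] >= threshold)
    return pd.Series(flags, index=df.index)

# Toy data, already sorted by year, with top_n=2 for illustration.
toy = pd.DataFrame({
    "year": [2019, 2020, 2021, 2022],
    "training_compute_flop": [5e22, 3e22, 4e22, 2e22],
})
toy["frontier"] = frontier_flags(toy, top_n=2)
```

Note how the threshold ratchets upward: the 2020 model qualifies at release, but by 2022 a model with even less compute falls below the (now higher) top-2 cutoff, matching the "threshold has risen over time" observation.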

We then examine: how have training compute and training cost changed over time for frontier versus non-frontier models?
[Chart: training compute and training cost over time, frontier versus non-frontier models]
As training compute has grown, so have training costs. This is expected, and is further reflected in frontier models, i.e. models leading in training compute, generally exhibiting higher training costs than non-frontier models.

Finally, how do efficiency, cost, and scale metrics correlate with each other, and how have these relationships changed over time?
[Chart: correlations among efficiency, cost, and scale metrics over time]
Several training metrics are highly correlated, particularly power draw and compute, and power draw and cost. This is also expected given the nature of the data: when values were not reported by a model's organization, Epoch AI derived some of them directly from the others.
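A sketch of how such correlations could be computed on log-scaled metrics, since the raw values span many orders of magnitude. The column names and values are invented; power draw is set exactly proportional to compute on purpose, so its log-log correlation comes out as 1.

```python
import numpy as np
import pandas as pd

# Invented values for illustration. Power draw is exactly proportional
# to compute here, which forces a perfect log-log correlation.
df = pd.DataFrame({
    "training_compute_flop": [1e21, 1e22, 1e23, 1e24],
    "power_draw_w": [1e4, 1e5, 1e6, 1e7],
    "training_cost_usd": [1e5, 8e5, 9e6, 7e7],
})

# Pearson correlation matrix on log10-scaled metrics; log scaling keeps
# the largest models from dominating the statistic.
corr = np.log10(df).corr()
```

A matrix like this is what underlies the heatmap-style chart above; derived values (as opposed to independently reported ones) will inflate the correlations, as noted.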

V
Discussion
AI development has progressed rapidly across training resources such as compute, trainable parameters, and training time in hours, particularly in the deep learning era. Over time, models have increasingly been developed for multimodal tasks, compared with earlier task-specific models, pointing to a movement toward more generalized models.
The United States has emerged as the globally dominant country in terms of model releases in the deep learning era, with China following. Notably, China produces large models in terms of compute and parameters, with low export controls on semiconductors and high AI-related academic publications.
Resource and efficiency patterns indicate that while models have become increasingly resource-intensive, they have also become more resource-efficient. Frontier models, in particular, tend to have a high level of training compute FLOPs per hour.

Appendix
About: Created for DSCI 320 (Visualization for Data Science) at the University of British Columbia.
Data source: The main dataset on AI models was sourced from Epoch AI [1]. Export controls on semiconductors are from Global Trade Alert [2], and AI-related academic publications per country and year are from OECD [3].
Technical: This project uses Altair for all visualizations, with data wrangling done in Python, a requirement of the course. The website is built with Next.js, TypeScript, and Tailwind CSS.
Explore the visualization code. Explore the website code.