Running Through Compute

Epoch has a paper called "Compute Trends Across Three Eras of Machine Learning", which looks at the compute expended on machine learning systems since the founding of the field of AI at the beginning of the 1950s. For most of that history it grows with Moore's law, so people are spending a similar amount on their experiments, but that money buys more compute every year because the hardware keeps improving. That data covers over 20 orders of magnitude, maybe 24, and of all of that increase since 1952, a little more than half happened between 1952 and 2010 and all the rest since 2010. We've been scaling up four times as fast as was the case for most of the history of AI, which means we're running through the orders of magnitude of possible resource inputs you could need for AI much, much more quickly than before. That's why this is a period with a very elevated chance of AI per year: we're moving through so much of the space of inputs per year. And indeed it looks like this scale-up, taken to its conclusion, will cover another several orders of magnitude, which is actually a large fraction of those that are left before you start running into saying: well, this is going to have to take something like the compute of evolution, even with the simple hacks we get to apply.

Carl Shulman, on the Dwarkesh Podcast


Date: August 24, 2023
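
As a rough sanity check on the "four times as fast" figure, here is a minimal back-of-the-envelope sketch. The specific split of the ~24 orders of magnitude into ~13 before 2010 and ~11 after, and the 2022 endpoint, are assumptions chosen to match the quote's rough numbers, not figures taken from the Epoch paper itself:

```python
# Back-of-the-envelope check of the growth rates in the quote.
# Assumed rough figures (to match the quote, not exact Epoch data):
#   ~24 orders of magnitude (OOM) of training-compute growth, 1952-2022,
#   with slightly more than half (~13 OOM) in 1952-2010
#   and the rest (~11 OOM) in 2010-2022.

oom_pre, years_pre = 13, 2010 - 1952    # pre-2010 era
oom_post, years_post = 11, 2022 - 2010  # deep learning era

rate_pre = oom_pre / years_pre    # ~0.22 OOM/year, roughly Moore's law pace
rate_post = oom_post / years_post # ~0.92 OOM/year

print(f"pre-2010:  {rate_pre:.2f} OOM/year")
print(f"post-2010: {rate_post:.2f} OOM/year")
print(f"speed-up:  {rate_post / rate_pre:.1f}x")  # ~4x, matching the quote
```

Under these assumed figures, the post-2010 rate comes out at roughly 0.9 orders of magnitude per year versus roughly 0.2 before, about a fourfold speed-up, consistent with the claim in the quote.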