AI Models and the Evolution of Human Cognition

Across a wide variety of animals, mammals in particular, there are certain characteristic changes in the number of neurons and the size of different brain regions as things scale up. There’s a lot of structural similarity there, and you can explain a lot of what is different about us with a brute-force story: you expend resources on having a bigger brain, keeping it in good order, and giving it time to learn. We have an unusually long childhood. We spend more compute by having a larger brain than other animals, more than three times as large as a chimpanzee’s, and then we have a longer childhood than chimpanzees and much longer than many, many other creatures. So we’re spending more compute in a way that’s analogous to having a bigger model and having more training time with it.

And with our AI models we see large, consistent benefits from increasing compute spent in those ways, with qualitatively new capabilities showing up over and over again, particularly in areas that AI skeptics call out. In my experience over the last 15 years, the things that people call out are like, “Ah, but the AI can’t do that, and it’s because of a fundamental limitation.” We’ve gone through a lot of them. There were Winograd schemas, catastrophic forgetting, quite a number of others, and they have repeatedly gone away through scaling. So there’s a picture being supported both by biology and by our experience with AI. In general, there are trade-offs where the extra fitness you get from a brain is not worth it, and so creatures wind up mostly with small brains, because they can save that biological energy and that time to reproduce, for digestion, and so on.

Humans seem to have wound up in a self-reinforcing niche where we greatly increase the returns to having large brains. Language and technology are the obvious candidates. You have humans around you who know a lot of things and they can teach you, and compared to almost any other species we have vastly more instruction from parents and society. You’re getting way more out of your brain per minute because you can learn a lot more useful skills, and then you can provide the energy you need to feed that brain by hunting and gathering, and by having fire that makes digestion easier.

Basically, the way this process goes is that it increases the marginal gain in reproductive fitness you get from allocating more resources, along a bunch of dimensions, towards cognitive ability. That’s bigger brains, longer childhoods, and having our attention be more on learning. Humans play a lot, and we keep playing as adults, which is a very weird thing compared to other animals. We’re more motivated to copy the humans around us than the other primates are. These are motivational changes that keep us putting more of our attention and effort into learning, which pays off more when you have a bigger brain and a longer lifespan in which to learn.

Many creatures are subject to lots of predation or disease. If you’re a mayfly or a mouse and you try to invest in a giant brain and a very long childhood, you’re quite likely to be killed by some predator or some disease before you’re actually able to use it. That means you have exponentially increasing costs in a given niche. If I have a 50% chance of dying every few months, as a little mammal or a little lizard, then the cost of going from three months to 30 months of learning and childhood development is not a tenfold loss, it’s a factor of 2^-10, a 1024-fold reduction in the benefit I get from what I ultimately learn, because 99.9 percent of the animals will have been killed before that point. We’re in a niche where we’re large, long-lived animals with language and technology, so we can learn a lot from our groups. And that means it pays off to just expand our investment in intelligence on these multiple fronts.

Carl Shulman on the Dwarkesh Podcast
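
The exponential cost Shulman describes is just compounding survival probability. A minimal sketch of the arithmetic (my own illustration, not from the transcript), assuming as in his example a constant 50% chance of death per three-month period:

```python
# Survival discounting: with a constant per-period hazard, the probability of living
# long enough to use what a long childhood teaches falls off exponentially.
def survival_probability(p_death_per_period: float, n_periods: int) -> float:
    """Probability of surviving n consecutive periods."""
    return (1.0 - p_death_per_period) ** n_periods

# 30 months of development at a 50% death rate per 3-month period = 10 periods.
p_survive = survival_probability(0.5, 10)
print(p_survive)          # ~0.000977, i.e. 2^-10 or 1/1024
print(1.0 - p_survive)    # ~0.999: the "99.9 percent killed before that point"
```

Whatever fitness benefit the long childhood would eventually deliver gets multiplied by that 1/1024 survival probability, which is why small, heavily predated animals stay in the cheap-brain niche.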

August 21, 2023

The Difficulty of Cooperation

Once established, cooperation is an enormously potent adaptation. Technological specialization and the advantages that the law of comparative advantage bestows on trade generate great benefits. Specialization and trade probably presuppose an established environment of cooperation. But even cooperation in defense, hunting, and foraging gives access to a far broader range of resources than any individual hominid could access. Cooperation both increases the fraction of local resources harvested and buffers the effects of variation in any one resource, and it ameliorates many dangers to which primate flesh is heir. Once established, then, cooperation will transform both the ecological and the social environment. In turn, among intelligent social animals cooperation leads to profound cognitive transformations by changing the mix of problems faced by agents.

But cooperation is a difficult adaptation – it is not within the space of evolutionary possibility for most lineages, for reasons made vivid by the Prisoner’s Dilemma. For most animal species, the Temptation to Defect subverts cooperation. Male langurs attempt to kill the dependent young of females in bands which they take over. Yet, despite the fact that each female would be protected in a solid coalition, they do not mobilize collectively against such males. If they cooperated in defense, wildebeest would have little to fear from African hunting dogs. But in both cases, and in many others, the free-rider would be fitter still. Though everyone is better off in a cooperating rather than a defecting group, a defector in a predominantly cooperative environment is better off still. Defection is often disruptive, reducing everyone’s absolute fitness. Yet without some countervailing evolutionary mechanism, selection can allow disruptive behavior to invade. Thus, if aggressive males who expropriated the foraging resources of women were more fit than males who respected their property rights, without a countervailing force that behavior would spread in a population. It would spread even if this bullying strategy undermined cooperative food gathering altogether, depressing the absolute fitness of every individual in the population, even that of the thief. For selection is sensitive to relative, not absolute, fitness. Hence cooperative behavioral patterns are hard to build and maintain.

Kim Sterelny, Thought in a Hostile World, pp. 124-125.
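
Sterelny’s point that selection tracks relative rather than absolute fitness can be made concrete with a toy replicator-dynamics simulation (my illustration, with made-up payoff numbers, not from the book): defectors spread even as mean fitness collapses.

```python
# One-shot Prisoner's Dilemma in a well-mixed population: cooperating confers a
# benefit b on the partner at personal cost c, with b > c > 0 (illustrative numbers).
b, c = 3.0, 1.0

def payoffs(x):
    """Expected payoff to a cooperator and to a defector when a fraction x cooperates."""
    f_coop = x * (b - c) + (1.0 - x) * (-c)  # always pays c; receives b only from cooperators
    f_defect = x * b                          # receives b from cooperators, never pays c
    return f_coop, f_defect

x = 0.9  # start with 90% cooperators
for gen in range(101):
    f_c, f_d = payoffs(x)
    mean_fitness = x * f_c + (1.0 - x) * f_d
    if gen % 25 == 0:
        print(f"gen {gen:3d}: cooperator share = {x:.3f}, mean fitness = {mean_fitness:.3f}")
    # Euler step of the replicator equation: a type grows only if it beats the *mean*,
    # so the relatively fitter defectors invade even though they drag everyone down.
    x += 0.1 * x * (f_c - mean_fitness)
```

Because defectors always do better than the population average, the cooperator share falls toward zero, and mean fitness (which here equals x(b - c)) falls with it: the same pattern as the wildebeest and langurs in the quote.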

August 20, 2023

Levels of Selection and the Major Evolutionary Transitions

Cooperation among lower-level units and suppression of within-group competition are important in all the transitions—without them, no higher-level units can evolve. Mechanisms that promote cooperation include kinship, population structure, synergistic interactions, and reciprocation; mechanisms that suppress competition include division of labour, randomization (e.g. fair meiosis), policing by fellow group members, and vertical transmission.

Samir Okasha, Evolution and the Levels of Selection, pp. 222-223.
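
One standard way to formalize the “levels” here (my gloss via the multi-level Price equation, not Okasha’s wording, and ignoring the transmission-bias term) is to split the selection response on a trait z into a between-group and a within-group component:

```latex
% Multi-level Price equation (no transmission bias): \bar{w} is mean fitness,
% W_k and Z_k are the mean fitness and mean trait of group k, and w_{ik}, z_{ik}
% are the values for individual i within group k.
\bar{w}\,\Delta\bar{z}
  = \underbrace{\operatorname{Cov}_k\!\big(W_k,\, Z_k\big)}_{\text{between-group selection}}
  + \underbrace{\operatorname{E}_k\!\Big[\operatorname{Cov}_{i \in k}\!\big(w_{ik},\, z_{ik}\big)\Big]}_{\text{within-group selection}}
```

Read this way, the mechanisms Okasha lists map roughly onto the two terms: kinship, population structure, synergy, and reciprocation raise the between-group covariance for cooperative traits, while fair meiosis, policing, division of labour, and vertical transmission shrink the within-group term that would otherwise reward cheating.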

August 19, 2023

AI, Evolution, and the Benefits of Variation

From Natural Selection Favors AIs Over Humans:

In static environments, variation is not as useful. But in most real-world scenarios, where things are constantly changing, variation reduces vulnerability, limits cascading errors, and increases robustness by decorrelating risks. Farmers have long understood that planting different seed variations decreases the risk of a single disease wiping out an entire field, just as every investor understands that having a diverse portfolio protects against financial risks. In the same way, an AI population that includes a variety of different agents will be more adaptable and resilient and therefore tend to propagate itself more.

I don’t follow this analogy: in the case of seeds, there is variation because the farmer intentionally creates variation. But who is creating variation in the case of AIs? Are we picturing two separate AI populations, one of which has greater variation, and is therefore more likely to propagate widely? But in virtue of what do we treat them as two separate populations and not just one?
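
On the diversification claim itself (independent of whether the analogy carries over to AI populations), the underlying statistics are standard: spreading exposure across uncorrelated risks shrinks the variance of the aggregate outcome. A quick sketch:

```python
# Diversification: the standard deviation of the average of n independent,
# unit-variance outcomes falls as 1/sqrt(n), which is the sense in which
# "decorrelating risks" buys robustness.
import random

def std_of_average(n_risks, n_trials=50_000):
    """Empirical std of the mean of n independent standard-normal outcomes."""
    outcomes = []
    for _ in range(n_trials):
        outcomes.append(sum(random.gauss(0.0, 1.0) for _ in range(n_risks)) / n_risks)
    mean = sum(outcomes) / n_trials
    var = sum((o - mean) ** 2 for o in outcomes) / n_trials
    return var ** 0.5

for n in (1, 4, 16):
    print(n, round(std_of_average(n), 3))  # roughly 1.0, 0.5, 0.25
```

Of course, this only helps a “population” whose members fail independently, and whether a set of AIs counts as such a population is exactly what the worry above is questioning.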

August 18, 2023

Categorizing Catastrophic AI Risks

I kind of like this categorization by Dan Hendrycks and coauthors:

Malicious use. Actors could intentionally harness powerful AIs to cause widespread harm. Specific risks include bioterrorism enabled by AIs that can help humans create deadly pathogens; the deliberate dissemination of uncontrolled AI agents; and the use of AI capabilities for propaganda, censorship, and surveillance. To reduce these risks, we suggest improving biosecurity, restricting access to the most dangerous AI models, and holding AI developers legally liable for damages caused by their AI systems.

AI race. Competition could pressure nations and corporations to rush the development of AIs and cede control to AI systems. Militaries might face pressure to develop autonomous weapons and use AIs for cyberwarfare, enabling a new kind of automated warfare where accidents can spiral out of control before humans have the chance to intervene. Corporations will face similar incentives to automate human labor and prioritize profits over safety, potentially leading to mass unemployment and dependence on AI systems. We also discuss how evolutionary dynamics might shape AIs in the long run. Natural selection among AIs may lead to selfish traits, and the advantages AIs have over humans could eventually lead to the displacement of humanity. To reduce risks from an AI race, we suggest implementing safety regulations, international coordination, and public control of general-purpose AIs.

Organizational risks. Organizational accidents have caused disasters including Chernobyl, Three Mile Island, and the Challenger Space Shuttle disaster. Similarly, the organizations developing and deploying advanced AIs could suffer catastrophic accidents, particularly if they do not have a strong safety culture. AIs could be accidentally leaked to the public or stolen by malicious actors. Organizations could fail to invest in safety research, lack understanding of how to reliably improve AI safety faster than general AI capabilities, or suppress internal concerns about AI risks. To reduce these risks, better organizational cultures and structures can be established, including internal and external audits, multiple layers of defense against risks, and state-of-the-art information security.

Rogue AIs. A common and serious concern is that we might lose control over AIs as they become more intelligent than we are. AIs could optimize flawed objectives to an extreme degree in a process called proxy gaming. AIs could experience goal drift as they adapt to a changing environment, similar to how people acquire and lose goals throughout their lives. In some cases, it might be instrumentally rational for AIs to become power-seeking. We also look at how and why AIs might engage in deception, appearing to be under control when they are not. These problems are more technical than the first three sources of risk. We outline some suggested research directions for advancing our understanding of how to ensure AIs are controllable.

The paper presents a decent overview of these scenarios for the uninitiated. It does a good job explaining how these things could happen, though it is less convincing on whether they are likely to.

August 17, 2023