Machine culture

The human capacity for cumulative culture is a key reason for our dominance on Earth. By learning from previous generations and making incremental changes along the way, we have created knowledge and practices that no individual human being could have discovered on their own. Over our history, various inventions––such as language, writing, and formal education––have made the process of cultural transmission more efficient and larger in scope, while changing it in various other ways as well.

It seems overwhelmingly likely that generative AI will have similarly profound effects on culture. A recent paper by Brinkmann et al. argues that AI will affect all three Darwinian properties of culture:

  1. Variation: Machines will generate new cultural artifacts.
  2. Transmission: Machines will aid in the transmission of cultural artifacts and potentially mutate them along the way.
  3. Selection: Machines will select between different cultural artifacts, in ways that may potentially differ from how humans would select between them. Moreover, humans will select between different machines.

Variation

Consider the game of Go. There was little increase in decision quality among human players between 1950 and 2016, followed by a sharp improvement after the release of AlphaGo. In part, this was because human players copied strategies used by AlphaGo.

Notably, the improvement wasn’t only due to humans copying AlphaGo’s strategies; the introduction of AlphaGo also served as an impetus for humans to discover new strategies of their own.

One key difference between humans and AIs is that AIs can engage in significantly more individual exploration. By contrast, humans tend to rely on pre-existing, culturally evolved solutions that are socially learned, with perhaps some individual learning and exploration on top of that. The capacity for significantly greater individual exploration and learning means that we could end up with some culturally alien traits––traits that unaided human cultural evolution may not be able to discover on its own. In this respect, AlphaGo is strikingly different from large language models, which mostly learn to reproduce human text.

I’m reminded here of what Shane Legg recently said on the Dwarkesh Podcast:

to get real creativity, you need to search through spaces of possibilities and find these hidden gems. That’s what creativity is. Current language models don’t really do that. They really are mimicking the data. They are mimicking all the human ingenuity and everything, which they have seen from all this data that’s coming from the internet that’s originally derived from humans.

At the same time, we know from cultural evolution that simple recombination of existing ideas is a powerful force of invention. So LLMs––which have been trained on a far larger and more diverse corpus of writing than any human could ever consume––may be able to identify novel recombinations that humans would not easily find.

Transmission

Machines could store and transmit cultural information more accurately, boosting cultural preservation by reducing cultural drift. Machines trained on human datasets may reproduce human biases, but we also know of some ways of addressing such biases if desired.

Selection

Humans use various social learning strategies to figure out which cultural traits to learn, when to learn them, and from whom. For example, we pay more attention to certain kinds of content, such as information about our social world or about plants and animals (content bias). We also pay more attention to particular features of the context, such as the prestige of the person we learn from or the frequency with which a particular cultural trait is adopted in the population (context bias).

These social learning strategies are already reflected in our machine-assisted learning. Content-based filtering algorithms aim to find new items that are as similar as possible to ones the user has previously shown interest in. And PageRank is basically a measure of prestige.
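
To make those two analogies concrete, here is a minimal sketch in Python (my own illustration, not from the paper): a cosine-similarity content filter that scores new items by their similarity to a user’s history, and a small PageRank power iteration in which incoming links act as endorsements. All vectors and the link matrix are made-up toy data.

```python
import numpy as np

# Content bias analog: content-based filtering scores new items by their
# similarity to items the user has already engaged with (toy feature vectors).
def content_scores(user_history, candidates):
    profile = user_history.mean(axis=0)                       # user taste vector
    norms = np.linalg.norm(candidates, axis=1) * np.linalg.norm(profile)
    return candidates @ profile / np.where(norms == 0, 1, norms)  # cosine similarity

# Context bias analog: PageRank treats incoming links as endorsements,
# so a node's score is a rough measure of prestige in the network.
def pagerank(adj, damping=0.85, iters=50):
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    transition = np.divide(adj, out_deg,
                           out=np.full_like(adj, 1.0 / n),    # dangling nodes spread rank uniformly
                           where=out_deg != 0)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * transition.T @ rank
    return rank

history = np.array([[1.0, 0.0, 1.0], [0.9, 0.1, 0.8]])        # items the user liked
candidates = np.array([[1.0, 0.1, 0.9], [0.0, 1.0, 0.0]])
print(content_scores(history, candidates))                    # first candidate scores far higher

links = np.array([[0, 0, 1], [0, 0, 1], [0, 1, 0]], dtype=float)
print(pagerank(links))                                        # node 2, with the most incoming links, ranks highest
```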

Algorithms can also affect the selective retention of cultural traits by shaping social networks, e.g. suggesting who to follow on Twitter. There is research showing that social network structure can have important effects on a group’s ability to coordinate and solve problems.

We should expect recommender systems to become increasingly refined, paying attention to, e.g., cognitive cost (prioritizing items that are easier for the user to evaluate or learn) or the user’s emotional state.
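
As a purely hypothetical sketch of what such refinement could look like, here is a toy ranking score that penalizes cognitive cost and boosts items matching the user’s inferred mood. The feature names and weights are invented for illustration; no deployed system is being described.

```python
# Hypothetical "refined" ranking score; all fields and weights are made up.
def refined_score(item, user_state, w_engage=1.0, w_cost=0.5, w_mood=0.3):
    return (w_engage * item["predicted_engagement"]
            - w_cost * item["cognitive_cost"]           # penalize items that are hard to evaluate or learn
            + w_mood * mood_fit(item, user_state))      # boost items matching the user's current emotional state

def mood_fit(item, user_state):
    # Toy stand-in: 1.0 if the item's tone matches the user's inferred mood, else 0.0.
    return 1.0 if item["tone"] == user_state["mood"] else 0.0

items = [
    {"predicted_engagement": 0.9, "cognitive_cost": 0.8, "tone": "outraged"},
    {"predicted_engagement": 0.7, "cognitive_cost": 0.2, "tone": "calm"},
]
user_state = {"mood": "calm"}
ranked = sorted(items, key=lambda it: refined_score(it, user_state), reverse=True)
print(ranked[0]["tone"])  # the easier, mood-matched item wins despite lower raw engagement
```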

In general, the effects of machines on selection will depend on the domain in question. However, one feature is present across many domains: the business incentive to maximize user engagement for profit. As Brinkmann et al. write:

In social networks, this may be achieved by promoting content congruent with users’ past engagement or ingroup attitudes, or content that humans inherently attend to, such as emotionally and morally charged content. One example is information that relates to threat or elicits disgust, as shown in transmission chain experiments inspired from cultural evolutionary theory. The algorithmic amplification of such content may then feed back into human social learning—for instance, inflating beliefs about the normative value of expressing moral outrage, increasing outgroup animosity or creating echo chambers and filter bubbles.

At the same time, engagement can also be a signal of user value. It should be possible to design algorithms that navigate this tradeoff.
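
One simple way such an algorithm might navigate the tradeoff is to rank items by a blend of predicted engagement and a separate estimate of long-term user value. The sketch below is my own illustration; the alpha weight and all the numbers are hypothetical.

```python
# Hedged sketch of one way to navigate the engagement/value tradeoff: rank items
# by a convex blend of predicted engagement and an estimate of long-term user value.
def blended_rank(items, alpha=0.4):
    # alpha = 1.0 recovers pure engagement optimization; alpha = 0.0 ignores engagement.
    def score(item):
        return alpha * item["predicted_engagement"] + (1 - alpha) * item["predicted_value"]
    return sorted(items, key=score, reverse=True)

feed = [
    {"id": "outrage_clip", "predicted_engagement": 0.95, "predicted_value": 0.10},
    {"id": "explainer",    "predicted_engagement": 0.60, "predicted_value": 0.80},
]
print([item["id"] for item in blended_rank(feed)])  # ['explainer', 'outrage_clip']
```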

They also write:

Algorithmic systems more generally offer powerful ways to bridge social divides—for instance, by designing selection policies that steer users’ attention to content that increases mutual understanding and trust, or by identifying and promoting links in social networks that can effectively mitigate polarizing dynamics. Machine selection can also be deliberately geared towards fostering content diversity or towards maximizing agreement among humans with diverse preferences.

Of course, the key question is whether people have the right incentives to design algorithmic systems in this way.
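
For concreteness, here is one standard way machine selection can be deliberately geared towards content diversity: a maximal-marginal-relevance-style reranker that trades relevance against redundancy with items already selected. This is my own illustration of the general technique, not something from the paper; the item vectors, relevance scores, and lambda weight are toy values.

```python
import numpy as np

# Illustrative MMR-style reranker: each step picks the item that balances relevance
# to the user against redundancy with what has already been selected, keeping the feed diverse.
def diverse_rerank(relevance, features, k, lam=0.5):
    selected, remaining = [], list(range(len(relevance)))
    while remaining and len(selected) < k:
        def mmr(i):
            redundancy = max(
                (cosine(features[i], features[j]) for j in selected), default=0.0
            )
            return lam * relevance[i] - (1 - lam) * redundancy
        best = max(remaining, key=mmr)
        selected.append(best)
        remaining.remove(best)
    return selected

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

relevance = np.array([0.9, 0.85, 0.4])
features = np.array([[1.0, 0.0], [0.98, 0.05], [0.0, 1.0]])  # items 0 and 1 are near-duplicates
print(diverse_rerank(relevance, features, k=2))               # [0, 2]: the near-duplicate is skipped
```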

In turn, humans select between machine systems in various ways: by providing the human feedback used in RLHF and similar techniques, by curating training sets, or, as customers, by choosing which model to use based on cost, usefulness, and so on. This will create a range of selection pressures on machine systems. For example, reliance on human feedback may create a selection pressure to please interlocutors.
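
As a rough illustration of how that pressure arises, here is a minimal Bradley-Terry-style reward model of the kind used in RLHF pipelines, fit on toy pairwise preferences. If raters systematically prefer flattering answers, flattery ends up with a positive weight, and whatever is optimized against this reward model inherits that pull. The features and data below are invented for illustration.

```python
import numpy as np

# Minimal sketch (my own illustration, not from the paper) of how human feedback
# exerts selection pressure: a Bradley-Terry-style reward model is fit so that
# responses humans preferred get higher scores than the ones they rejected.
def fit_reward_weights(features_preferred, features_rejected, lr=0.1, steps=200):
    w = np.zeros(features_preferred.shape[1])
    diff = features_preferred - features_rejected
    for _ in range(steps):
        margin = diff @ w
        grad = -((1 - sigmoid(margin))[:, None] * diff).mean(axis=0)
        w -= lr * grad   # gradient step on -log sigmoid(margin), the Bradley-Terry loss
    return w

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy features per response: [helpfulness, flattery]. If raters tend to prefer
# flattering answers, the learned reward weights reward flattery too.
preferred = np.array([[0.6, 0.9], [0.5, 0.8], [0.9, 0.7]])
rejected  = np.array([[0.7, 0.1], [0.8, 0.2], [0.6, 0.1]])
print(fit_reward_weights(preferred, rejected))  # flattery gets a large positive weight
```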

Existential risk

Although experts disagree on the timescales and the degree of risk involved, the potential of superhuman artificial general intelligence poses a possible existential threat to the human species. Cultural evolution provides a useful framework for navigating this challenge. Specifically, cultural evolution processes take place today at multiple scales, with human collectives—for example, companies, universities, institutions, cities and nation states—acting as the units of selection. This multi-level selection can, in principle, operate at the level of human organizations augmented by intelligent machines and (eventually) superhuman artificial general intelligence. Engineering this evolutionary process can provide means for ensuring human survival and agency in the long run.

This much I’ve figured out myself. But I would have liked to hear something with at least a little more detail.


December 8, 2023