Autonomy in chess vs markets

Published on May 27, 2024

A regular topic of debate in the AI discourse is whether these systems will act autonomously or forever remain tools wielded by humans. I’m not particularly confident in predicting how things will turn out either way; however, I do want to point out a flawed argument I keep hearing: that Chess shows us how things will go. Chess has properties that make it fundamentally incompatible with most economically valuable activities – namely, opportunity costs.

The best recent example was Ben Thompson of Stratechery interviewing Microsoft CTO Kevin Scott. When discussing the autonomy of AI, Scott said this (emphasis mine):

BT: And is AI going to remain a tool, it’s clearly a tool today.

KS: Yes, I think so.

BT: Why is that? Why is it not going to be something that is sort of more autonomous?

…

KS: Yeah, well, so none of us know, but I do think we’ve got a lot of clues about what it is humans are going to want. So there hasn’t been a human being since 1997 when Deep Blue beat Garry Kasparov at chess, better than a computer at playing chess and yet people could care less about two computers playing each other at chess, what people care about is people playing each other at chess and chess has become a bigger pastime, like a sport even we make movies about it. People know who Magnus Carlsen is.

BT: So is there a view of, maybe the AI will take over, but we won’t even care because we’ll just be caring about other humans?

KS: I don’t think the AI is going to take over anything, I think it is going to continue to be a tool that we will use to make things for one another, to serve one another, to do valuable things for one another and I think we will be extremely disinterested in things where there aren’t humans in the loop.

I think what we all seek is meaning and connection and we want to do things for each other and I think we have an enormous opportunity here with these tools to do more of all of those things in slightly different ways. But I’m not worried that we somehow lose our sense of place or purpose.

Chess is a common example of a field that has grown in popularity and participation even as humans have demonstrably fallen below the level of AI systems. This happened first in 1997, with Deep Blue famously beating Garry Kasparov. Today there are many superhuman engines, including freely available open-source options like Stockfish. And it’s true that Chess has never been more popular, with more players taking up the game and watching human chess matches than ever before. The issue with ‘overfitting’ on this example is that Chess has properties that do not carry over to economically valuable tasks.

First and foremost, Chess is by and large a form of entertainment. The money generated for chess players ultimately comes from spectators – either directly, from paying to attend or watch tournaments, or indirectly, via sponsorships premised on capturing the attention of spectators. As a result, Chess will always migrate in the direction of majority interest. If people want to watch humans play each other, then this is what Chess as an institution will provide. If some small group of people decide they much prefer watching AIs compete, they could start a new organization, with its own tournaments, streams, and sponsorships. But regardless of how much better the AIs are at chess, AI chess will forever be limited by the attention of fans. There is no mechanism by which the superior chess being played by AIs translates into domination of the Chess world.

Compare this to an intrinsically economically valuable activity, like data-labelling. By ‘intrinsically economically valuable’ I mean an activity that directly produces wealth by converting scarce resources into valuable output. 

Imagine that almost everyone feels like data-labelling should be a human-supervised process, and so all producers of labelled data choose to run their operation this way. The AI labels data with some level of human oversight. And let’s assume that this human oversight adds a mere 10% to the cost of the process (a generous assumption). In this world, assume that the quality of any labelled data can be costlessly assessed in a two-sided marketplace, so buyers can see cost and quality and make purchase decisions accordingly. There is no sales force, and distribution comes ‘free’ (net of the marketplace’s transaction costs).

Now, imagine a single producer wonders if there is a better way. They start their own data-labelling company, but they ‘hire’ AI overseers and managers. It’s AIs all the way down. The only thing the human producer does is set up the company, specify the goal (label this data), and collect the profits.

Unlike Chess, where the success of ‘autonomous’ or ‘agentic’ outcomes depends upon the preferences of the majority of people, what do we expect to happen in this world? Logic suggests that the AI-run labelling operation will produce output of at least equal quality at roughly 10% lower cost.

Note that this argument makes the conservative assumption that the AI-run company’s quality will be only as good as the human-supervised version’s, even though in the Chess analogy the AIs are strictly better.

In this world, we have a clear mechanism for the AIs to ‘win’ in the marketplace – lower costs! Even if 80% of buyers know this data has had no human oversight, and decide to boycott it as a result, the outcome is unchanged, because unlike Chess, this is a market with selection effects. If only 10% of data consumers purchase this lower-cost data, it stands to reason that they will be able to price their Luddite competitors out of business. It may take longer than if all data consumers adopted the lower-cost alternative, but eventually the early adopters will win.
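The compounding logic can be sketched in a few lines of code. This is a toy model with made-up numbers, not anything from a real market: two hypothetical downstream firms buy labelled data as an input, one at the human-supervised cost of 1.10 and one at the AI-run cost of 1.00. If both sell at the same price and reinvest their per-unit margins into capacity, the cheaper-input firm steadily takes share:

```python
# Toy sketch of cost-based selection effects. All numbers are
# illustrative assumptions: a common sale price of 1.20, input costs
# of 1.10 (human-supervised data) vs 1.00 (AI-run data), and profits
# reinvested into capacity each period.

def simulate(periods: int = 20, price: float = 1.20,
             cost_human: float = 1.10, cost_ai: float = 1.00):
    cap_human, cap_ai = 1.0, 1.0  # equal starting capacity
    for _ in range(periods):
        # each firm grows in proportion to its per-unit margin
        cap_human *= 1 + (price - cost_human)
        cap_ai *= 1 + (price - cost_ai)
    total = cap_human + cap_ai
    return cap_human / total, cap_ai / total

share_human, share_ai = simulate()
print(f"human-supervised buyer's market share: {share_human:.1%}")
print(f"AI-run buyer's market share: {share_ai:.1%}")
```

Under these assumptions the AI-run buyer ends up with roughly 85% of the market after 20 periods, despite starting at 50% – the boycotting majority delays the outcome but does not change it.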

At a sufficient level of abstraction, consumers won’t care and will simply choose the cheaper alternative (adjusting for quality). Here the abstraction is that a consumer is buying a product from a producer who is supplied by either a human-supervised process or an autonomous one. It seems clear that consumers won’t care about what is happening this many layers up the stack. We see strong evidence for this in areas like a) fast fashion, where consumers clearly prefer cheaper alternatives, even when they are aware of unethical practices in the supply chain, b) caged eggs, which remain popular despite the widespread acceptance that their farming practices are horrific, and c) preferences for self-serve checkouts, despite widespread claims that this is putting low-skilled workers out of jobs. 

Many areas of life may turn out to be like chess even in a world of super-intelligent autonomous AIs. I expect the theatre to remain and probably grow in popularity. The same is true for live music and sports. But notice these are all forms of entertainment, where consumer preferences directly translate into who ‘wins’ between humans and AIs. In competitive markets, where winners are selected on features like price and quality, and the ‘inputs’ are largely hidden or indistinguishable to consumers, I see no reason to expect this Chess-like pattern to hold.