ER 37: On climbing competitions and AI governance
Or, a very extended metaphor to help us think about solving AI problems
Welcome back to the Ethical Reckoner. This week, some unstructured musings about how climbing competition strategy is actually a great framework for thinking about AI governance, below the jump.
This edition of the Ethical Reckoner is brought to you by… live music
I had a really bad climbing competition on Saturday. My local gym, which I love, hosted a spring bouldering comp.1 (There’s going to be some climbing jargon in this post, so if you’re not a climber, consult the footnotes.) I was super psyched—I’ve found a great community at this gym, and their events are always really fun. As an added bonus, there was a cash purse for the advanced category. To be frank, I was cautiously optimistic about making finals. I’ve been climbing for most of my life, have competed in dozens of comps, and have even won a few of them.
When I arrived at the morning qualifying session, I was surprised to see a bunch of people I didn’t recognize. Word about the comp (and probably the prize purse) had spread, and people had come from far and wide—including some very strong girls. I was already feeling some pre-comp jitters, which I sometimes get but which usually go away once I start climbing. This time, though, they didn’t.
Not to veer into sports psychology, but I think there were a few contributing factors. One is that this was my first comp in a few months, so I wasn’t in great mental shape. I’m feeling reasonably strong these days, but I did break a finger two months ago and am still bouncing back from that. Also, from conversations around the gym, I knew people expected me to do well, and I didn’t want to embarrass myself in front of my friends.
I started warming up and my forearms quickly started feeling tight, which climbers call “flash pump.”2 I was probably over-gripping, and maybe went out a little too hot, but the warm-up itself was pretty reasonable—it was mostly nerves.
This particular comp’s format was as follows: there were 41 boulder problems around the gym, scored by difficulty from 100 to 4100 points (the points formula was [boulder number] x 100, so boulder 17 was worth 1700 points). You had three hours to climb as many as you wanted, marking down your falls. The points from your top 5 boulders were added up to form your score, with falls used as a tiebreaker.
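If it helps to see the rules as code, here’s a minimal sketch of that scoring scheme in Python. The function name, data layout, and example numbers are my own illustration (not an official scoring tool), and I’m assuming the fall tiebreaker only counts falls on your five scoring boulders, since comps vary on this.

```python
# Illustrative sketch of the comp scoring described above (my own code).
# Rules from the post: each boulder is worth [boulder number] x 100 points,
# your score is the sum of your top-5 boulders, and falls break ties.

def score_card(sends):
    """sends: list of (boulder_number, falls) for each boulder you topped.
    Returns (points, falls) -- compare by higher points, then fewer falls.
    Assumption: only falls on the five counting boulders affect the tiebreak."""
    # Higher-numbered boulders are worth more, so the top 5 by number
    # are also the top 5 by points.
    top5 = sorted(sends, key=lambda s: s[0], reverse=True)[:5]
    points = sum(num * 100 for num, _ in top5)
    falls = sum(f for _, f in top5)
    return points, falls

# Example: tops on boulders 36, 30, 28, 25, 22, and 10, with a few falls.
card = [(36, 2), (30, 0), (28, 1), (25, 0), (22, 0), (10, 0)]
print(score_card(card))  # (14100, 3): boulder 10 is dropped from the top 5
```

Ranking competitors then amounts to sorting by highest points, then fewest falls. It also makes the strategy below obvious: once you have five boulders on your card, swapping a lower-numbered one for a higher-numbered one strictly increases your score.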
This is a pretty common comp format, and my strategy (tried and tested from my days of youth comps in Chicago) is to warm up, get five reasonable boulders (sendable in 1-2 attempts) onto my scorecard quickly, then try harder ones to knock the lower-point boulders off it. As I was doing that, I already knew the comp wasn’t going to go especially well. The jitters eventually faded, but my forearms were still tight, and my footwork was off. I took a long break where I had a snack and played my LinkedIn games (brb, going to do Queens3). And then I started thinking about this newsletter.
Reader, you may not believe me, but: this is a newsletter about AI governance. As I sat there, massaging my forearms and trying to figure out if I should give #36 another go (I was one move from finishing, but was vanquished by a greasy crimp), I realized that climbing competition strategy is relevant to AI governance coordination problems.
Let me explain. AI governance is a Big problem—actually, a Big set of problems, since there are a lot of different aspects of AI governance, from rare earth mineral mining to protecting data workers to addressing bias to widespread societal issues—and addressing them will require coordination across different organizations at different levels. This isn’t just global, although global governance is going to be important for some issues (for more, check out this or this). Even at the state level, even within corporations, AI governance is a thorny knot of different questions, and staring them down can feel overwhelming—like looking at 41 boulder problems, knowing you need to solve them, and not knowing where to start.
But we don’t have to tackle all the problems at once. As in a climbing comp, we can start with the low-hanging fruit: solve a few easy problems to warm up (like deciding that we shouldn’t let AI churn out copyrighted content or non-consensual explicit deepfakes4), then deal with a few harder problems that establish a solid baseline. Like your baseline five boulders, these should be problems that you’re happy to have solved and that didn’t take too much (physical/mental/logistical/institutional) effort, but that leave room to grow.
Then you work on the really thorny issues, the problems that take a lot of effort and maybe multiple attempts. This is where your preparation (mental and physical) comes in. These are the negotiations over the values-laden questions, the issues that involve interactions between ethics teams and colleagues with titles like “head of monetization.” Hopefully, your preparation pays off and at the end you’ve climbed all the hard problems you wanted to, made finals, and saved the world from AI harms.
But sometimes, it doesn’t work out. Even if you’ve done the workouts and tapered and fueled well (many thanks to pasta and oatmeal), sometimes it doesn’t go the way you think it will. Governance—like sports—is a human process, and people/minds/bodies don’t always behave the way we think they will. Sometimes you’re foiled by something unexpected (a greasy hold, an uncooperative stakeholder). Sometimes the problem is harder than it looked from the ground (or from reading about it). Sometimes you’re too tired (or lack the institutional capacity) from having expended too much energy on earlier problems. Sometimes the competition isn’t what you expected, and it turns out you’ve come in with the wrong preparation or headspace. You could underestimate how hard the harder problems are, so you don’t solve as many as you thought you would. Or you could overestimate how hard they are, and in the end wish you’d tried even harder ones.
Sometimes when a climbing comp is almost done, you try a Hail Mary: rest and run down the clock, then try to send one last high-point problem right before they call time. This is high risk and (potentially) high reward, but of course, the stakes of a climbing competition are considerably lower than those of AI governance. Skirting close to the edge of AI harm is very risky.
Climbing is also generally a less combative environment than AI governance, but I think there’s still something to be gleaned. Competitors in climbing competitions work together to figure out problems; even at nationals, the girls I was competing with would talk beta (share information about how to do the moves) together. It’s not a zero-sum game, because every climber’s actions (and sends or falls) are independent of everyone else’s—I can’t say the same is true for AI governance. Also, the goal of a climbing competition isn’t to personally send every boulder; it’s to get the hardest handful. We need to solve more than a handful of AI governance problems. But if you look at a competition in aggregate, it’s almost more of a competition between the group of climbers and the wall (or the routesetters)—I want to see someone, even if it’s not me, send all the problems. This lends itself to a more decentralized vision of AI governance that I like: it’s ok for any single institution to not solve a particular problem, so long as someone does.
Ultimately, if you don’t send all the boulder problems you want to in a climbing competition, you go home slightly sore in body and ego, and you can tackle them again the next session. If we don’t solve all the governance problems we need to solve for AI, there will be real harm. I tried a Hail Mary at the comp. I fell off.


What was the point of the preceding thousand words or so? Well, part of it was that I wanted to explore this intersection of two of my interests (and feel like the comp wasn’t a total flop). But I think this does give us a bit more of a theoretical foundation for AI governance. You could say that all of this boils down to “start small.” And it kind of does, but the climbing competition framework also forces us to aim higher, and to accept that no single organization can or should solve every problem. It’s also at a high enough level of abstraction to work for multiple kinds of governance, from organizational to state to national to international. I believe that focusing on centralizing AI governance under a single international body is unwise—there’s too much gridlock to create such a body, and we already have a weak “regime complex” of international institutions better suited to address AI under their purviews. The climbing competition model helps us think about how they can start tackling problems within their organizations, and then the ones that require cooperation with other groups. Start small, start easy. Then, with that base and the knowledge and skills gained, tackle harder problems. See the other people trying to solve the same problems as collaborators, not competitors.
So, what happened at the comp in the end? Well, I got bumped down a category—but I did end up placing and went home with some swag (and bread). Had I sent a couple of problems that I had been close to sending, it would have been a different story. I’ll be returning to the gym today to give them another shot. In this ridiculous extended metaphor we’ve been crafting, let’s hope they weren’t particularly important governance problems, or that someone else was able to send them—otherwise we’ll be sorry.
1. Bouldering is climbing short walls with no ropes (but thick pads). Each climb is called a “boulder” or “problem” (or sometimes “route,” but that’s usually reserved for roped climbs). “Comp” is short for “competition.”
2. “Pump” is when your forearms fill with lactic acid (it actually feels like someone has pumped them up with a bicycle pump), and “flash” just refers to it happening quickly. It’s unrelated to “flashing” a climb, which is when you send (finish) a climb on your first try.
3. Ok, actually I did three of them.
4. To be fair, there’s not 100% consensus even on these… but even warm-up climbs take some effort.