Building on OpenAI's Platform: Game Theory Insights

Using game theory to analyze OpenAI's Apps SDK. Why platform cooperation isn't a prisoner's dilemma: it's asymmetric power.

7 min read
ai-products, product-strategy, product-leadership, machine-learning, framework

October 6, 2025. Spotify, Booking.com, Figma, and a handful of other companies made a bet that'll either look brilliant or completely naive in 18 months. They integrated their services directly into ChatGPT through OpenAI's new Apps SDK.

Think about what just happened.

These companies handed OpenAI access to their core functionality. Gave 800 million ChatGPT users a reason to never leave the platform. Basically said "yeah, we trust you won't screw us over."

This is game theory playing out in real time. And the old playbook? Might not apply anymore.

The Setup: Everyone's Playing Prisoner's Dilemma

You know the dynamic even if you've never read the textbook version. Two roommates: one does the dishes Sunday, the other does them Wednesday. Simple enough.

Then one Wednesday, your roommate skips their turn. What do you do?

Do the dishes yourself and keep things clean but set a bad precedent. Or leave them and watch the kitchen become a disaster but maybe your roommate learns something.

This is the prisoner's dilemma. Two parties would both benefit from cooperation, but each has an incentive to defect. Both defect? Everyone loses.

The Apps SDK puts every app company in this exact situation. Cooperate with OpenAI (integrate your service) or defect (keep users on your platform). If everyone cooperates, users get seamless experiences but OpenAI becomes the super-platform. Everyone defects? The AI interface stays limited but companies maintain control.

Pick your poison.
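
To make the payoff structure concrete, here's a minimal sketch in Python. The payoff numbers are invented for illustration; all that matters is their ordering, which is what makes defection the dominant move in a single round.

```python
# Illustrative one-shot prisoner's dilemma. Only the ordering matters:
# temptation > reward > punishment > sucker's payoff.
PAYOFFS = {
    # (my move, their move): (my payoff, their payoff)
    ("cooperate", "cooperate"): (3, 3),  # both integrate, users win
    ("cooperate", "defect"):    (0, 5),  # you commit, they hold back
    ("defect",    "cooperate"): (5, 0),  # you hold back, they commit
    ("defect",    "defect"):    (1, 1),  # standoff, the AI interface stalls
}

def best_response(their_move: str) -> str:
    """Whatever the other side does, pick your higher-paying move."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defection dominates in a single round. That's the whole dilemma:
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```

In one round, defection wins no matter what the other side does. Repetition is what changes the math.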

What Game Theory Says Should Happen

Back in 1980, political scientist Robert Axelrod ran a tournament to find the best strategy for the repeated prisoner's dilemma. Experts from around the world submitted programs that played round-robin matches against each other, 200 moves per game.

The winner? A strategy called Tit for Tat, submitted by Anatol Rapoport.

Stupidly simple. Start with cooperation. Then copy whatever your opponent just did. They cooperate, you cooperate. They defect, you defect right back. They return to cooperation, you forgive and cooperate again.

What made it work: being nice, retaliatory, forgiving, and clear. Players could predict the strategy and build trust over repeated interactions.

Axelrod's conclusion became famous: "What makes it possible for cooperation to emerge is the fact that the players might meet again."
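
The whole strategy fits in a few lines. Here's a minimal sketch of a repeated game, reusing the illustrative PAYOFFS table from above; the function names are mine, not Axelrod's tournament code.

```python
def tit_for_tat(my_history, their_history):
    # Nice: open with cooperation. Then mirror their last move.
    return their_history[-1] if their_history else "cooperate"

def always_defect(my_history, their_history):
    return "defect"

def play(strategy_a, strategy_b, rounds=200):
    """Run a repeated game and return cumulative scores."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        hist_a.append(move_a); hist_b.append(move_b)
        score_a += pay_a; score_b += pay_b
    return score_a, score_b

print(play(tit_for_tat, always_defect))  # (199, 204): loses only the opening round
print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation compounds
```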

Great theory.

One problem.

OpenAI Isn't Your Roommate. They Own the Apartment.

Here's what Axelrod's tournament assumed: equal power between players. Both participants could cooperate or defect, both faced the same consequences, and both played by rules that never changed mid-game.

The Apps SDK doesn't work that way.

OpenAI controls 800 million users. They control API access, pricing, what gets featured and what gets buried. They control the data flowing through every interaction.

You? You control your API endpoints and that's about it.

This isn't a symmetric game where Tit for Tat can save you. This is closer to that British game show Golden Balls, where two strangers decide to split or steal a cash prize. Except in this version, one player also owns the TV network, writes the rules, and can change them between episodes.

When Spotify integrates into ChatGPT, they're not just cooperating in one round of a fair game. They're handing OpenAI the ability to see what users want from Spotify. How they search for music. What converts.

That's leverage. Leverage compounds.

We've Seen This Movie Before

This mirrors what happened when Apple launched the App Store in 2008. Early developers who cooperated got access to massive distribution. They won big.

For a while.

Then the squeeze. Apple took its 30% cut, built competing features into iOS, and changed App Store rules whenever it suited them. Developers who had become dependent on iOS distribution had zero negotiating power.

Same thing with Facebook Platform in 2007. Zynga rode Facebook's distribution to billions in revenue. Then Facebook changed the algorithm, changed the rules, and built competing products. Zynga's stock fell from nearly $15 to around $2.

Amazon Marketplace follows the same pattern. Sellers get distribution, Amazon watches what sells, Amazon makes competing products at lower prices with better placement.

The platform provides distribution. Learns from your data. Eventually captures most of the value.

It's almost predictable at this point.

What This Means If You're Building Products

Game theory assumes players meet repeatedly under consistent rules. But what if the rules change between meetings? What if your "cooperative" move in round one becomes exploitable data in round five?

Three things to think about.

First, recognize the payoff matrix is asymmetric.

When you integrate with the Apps SDK, you're not entering a balanced negotiation. You're accepting OpenAI's terms on OpenAI's platform with OpenAI's data collection. That's not cooperation, that's distribution arbitrage.
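
To see the difference in code, compare the symmetric matrix above with one where the platform's payoffs diverge from yours and where the platform can redraw the matrix between rounds. A sketch with invented numbers; the hypothetical "squeeze" move stands in for rule changes, fee hikes, or competing features:

```python
# Illustrative asymmetric game. Payoffs are (app company, platform);
# every number is invented to show the shape of the problem, not an estimate.
ROUND_ONE = {
    ("integrate", "support"): (4, 6),   # launch-partner glow: distribution for you, lock-in for them
    ("integrate", "squeeze"): (-2, 3),  # squeezing too early scares off other partners
    ("hold_out",  "support"): (1, 2),
    ("hold_out",  "squeeze"): (1, 1),
}

# The move Tit for Tat can't answer: once integration data compounds into
# leverage, the platform rewrites the matrix between rounds.
ROUND_FIVE = dict(ROUND_ONE)
ROUND_FIVE[("integrate", "squeeze")] = (-5, 10)

def platform_best_response(matrix, app_move="integrate"):
    """What the platform does once you've already committed."""
    return max(("support", "squeeze"),
               key=lambda move: matrix[(app_move, move)][1])

print(platform_best_response(ROUND_ONE))   # support: it still needs launch partners
print(platform_best_response(ROUND_FIVE))  # squeeze: now it doesn't
```

Tit for Tat assumes the matrix stays put between rounds. Here, the other player owns the matrix.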

Second, watch what happens to the early cooperators.

Right now, Spotify and Booking.com look smart. They got first-mover advantage and 800 million potential customers. But in 12 months, will OpenAI build native music playback? Partner with a competitor and bury Spotify in the interface? Change the economic terms once companies are dependent?

The answer determines whether cooperation was brilliant or suicidal.

Third, maintain your escape route.

If you integrate, keep your direct channel strong. Don't become dependent on ChatGPT traffic. The companies that died on Facebook Platform were the ones who built nowhere else. The companies that survived? They used Facebook as one channel among many.

The Uncomfortable Truth About "Being Nice"

Axelrod's research taught us that being nice, retaliatory, forgiving, and clear wins over time. Start with cooperation. Punish defection. Forgive quickly. Make your strategy obvious.

But that advice assumes you're playing against a peer. Someone with similar power who also wants repeated cooperation.

Platform companies aren't peers. They're landlords.

And landlords can always raise the rent.

You can be as nice and clear and forgiving as you want. But when your opponent controls the game board, the rules, and the scorekeeping, Tit for Tat stops being a winning strategy. It becomes a way to look principled while getting systematically outmaneuvered.

So What Do You Actually Do?

Here's the framework. Not a perfect answer, just the questions that matter:

Can you win distribution without building dependency?

If yes, integrate tactically. Get the users, keep them hooked to your platform, use ChatGPT as a top-of-funnel channel.

Can you afford to sit out while competitors integrate?

If no, you have to play. But play knowing the house has an edge.

Do you have leverage OpenAI needs?

Real leverage, not just "we're a big brand." If yes, negotiate hard now before they have alternatives. If no, expect the terms to shift against you once you're locked in.

Is this a game you can win by cooperating, or just one you lose more slowly?

Sometimes the best move is not to play at all.

The Real Game

What's really happening here is a massive restructuring of how software gets distributed and monetized. ChatGPT is becoming the operating system for AI interactions. The Apps SDK is how OpenAI turns that OS into a platform.

And platforms, historically, capture most of the value they enable.

The companies that win won't be the ones who cooperate most enthusiastically. They'll be the ones who cooperate strategically while building leverage elsewhere. Who use the platform without becoming dependent on it. Who maintain direct relationships with users even while going through intermediaries.

Game theory is useful. But only if you're honest about what game you're actually playing.

And right now, the game isn't Axelrod's tournament. It's not even prisoner's dilemma.

It's poker.

And OpenAI is dealing.
