OpenAI recently introduced OpenAI for Countries. As part of their Stargate initiative, they're offering a new type of partnership in coordination with the US government.
What caught my eye was that OpenAI plans to provide citizens with a customized ChatGPT that is "of, by, and for the needs of each particular country, localized in their language and for their culture and respecting future global standards."
There's a lot baked in there.
When Anthropic worked with the Collective Intelligence Project (CIP) back in 2023 on Collective Constitutional AI, they incorporated input from 1,000 people in the US. CIP has continued this work and has since launched Global Dialogues to scale the effort.
We're at an interesting moment as these models are being developed.
- How should these models incorporate values from different people?
- Is the play to have a company like OpenAI or Anthropic build single, universal models that reflect some combination of "universal" values, or do we expect each country or region to have its own model?
- It's expensive to build and maintain your own frontier model. Will countries customize open-source models, or will they purchase a top-of-the-line model customized by the leading AI companies (which is what OpenAI is proposing)?
- Will we see some countries gravitate toward one model or another - either by choice or by mandate - in effect creating a country-model paradigm?
- But then again, is the country even the right unit for customizing a model? Is it geography? Language? Topic? Ethos? Something else?
These aren't just theoretical questions. We're already seeing this play out today.
Earlier this year, DeepSeek (remember that?) made waves when it launched. Beyond the narrative of how it was built (cheaper, faster), I was struck by how the model was trained to avoid certain topics. And instead of simply declining, it would strangely generate a response and then replace it with this text: "Sorry, that's beyond my current scope. Let's talk about something else." Weird.
In the design fiction piece AI 2027, the authors frame the future as an AI arms race between the US and China, with the model landscape consolidating into a nationalized American model and a nationalized Chinese one.
Here in May 2025, we don't have a clear winner yet, and money continues to pour into research and development. We don't know what the future will look like, but I hope it looks something like this:
Plurality of models: People should have genuine choice, not just different branding on the same underlying technology.
Models that can run locally on your machine: Critical AI tools shouldn't require an internet connection or a cloud dependency. People should be able to run capable models on their own devices - whether for privacy, reliability, or simply because they live somewhere with poor internet connectivity.
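To make that concrete, here's a minimal sketch of what on-device inference can look like today using the open-source llama-cpp-python library; the model file path is just a placeholder for whatever local weights you have on disk.

```python
# Minimal sketch: a local model running entirely on-device, no cloud calls.
# Assumes llama-cpp-python is installed and you have a quantized model file
# on disk (the path below is a placeholder).
from llama_cpp import Llama

llm = Llama(model_path="./models/local-model-q4.gguf")  # loads weights from local disk

response = llm(
    "Draft a short packing list for a weekend camping trip.",
    max_tokens=200,
)
print(response["choices"][0]["text"])
```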
Context portability: You know the popular meme of asking ChatGPT a question based on what it knows about you? Imagine if you could bring that context - the things the model knows about you - with you to a different service. People don't want to be locked in.
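There's no standard format for this today, but a sketch of what a portable context might look like - plain data you export from one service and import into another - could be as simple as this (all fields here are hypothetical):

```python
# Hypothetical "portable context": the things a model knows about you,
# exported as plain data so it can travel with you between services.
import json

portable_context = {
    "preferences": {"tone": "concise", "language": "en"},
    "facts": ["prefers metric units", "vegetarian", "lives in a rural area"],
    "ongoing_topics": ["home renovation budget", "learning Spanish"],
}

# Export from service A...
with open("my_context.json", "w") as f:
    json.dump(portable_context, f, indent=2)

# ...and import into service B.
with open("my_context.json") as f:
    restored = json.load(f)
print(restored["preferences"]["tone"])
```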
No ads: Please don't inject ads into these tools - whether they're clearly labeled or baked into the recommendations. That's a quick way to destroy trust.
We're still in the early innings of figuring out how AI will meaningfully show up in our lives and shape them. And with each question we answer, many more emerge.
Wild times, huh?