Stanford’s AI Index 2025 reports that the performance gap between open and closed models narrowed to single-digit percentage points on multiple benchmarks in one year. Open models are now “good enough” for many production tasks – and orders of magnitude cheaper than closed ones.

At the same time, open AI models are increasingly becoming geopolitical assets. In the past year, Chinese labs leaned hard into open weights, releasing competitive models the world can download and run. In recent weeks, the Silicon Valley giants that ushered in the closed-API era have begun to respond in kind. The result is a price-and-innovation war that could benefit every builder.

Who Is Opening Up?

In China, Alibaba has been rapidly releasing its Qwen family of models under open, developer-friendly licenses, while Baidu recently made ERNIE 4.5 freely available on GitHub and Hugging Face. These companies treat building developer communities as a core strategy, not just a PR move. And who can forget when DeepSeek shocked everyone by releasing a powerful open-weight reasoning model that forced U.S. labs to step up?

Now, American giants have responded. OpenAI, after years of silence on open weights despite its name, released its first new open-weight models since the GPT-2 era. Notably, it is explicitly pitching "run anywhere" customization. Elon Musk's xAI posted Grok-1's 314B-parameter MoE weights and has promised to post weights for Grok-3 as well. Meta continues to ramp up the Llama line with the 3.x wave.

What Governments Are Doing

Beijing’s industrial playbooks (compute subsidies, model approvals, and “AI+” initiatives) tilt toward domestic capability and open ecosystems resilient to export controls.

The Trump administration’s AI Action Plan now frames open-weight models as having “geostrategic value,” a notable pivot from the cautious rhetoric of 2023-24. Lawyers are warning enterprises to get smart about license terms as open-weight adoption grows. Going forward, open models will serve as both public goods and soft-power instruments.

Why Organizations Increasingly Prefer Open Weights

Open models have several benefits for user organizations:

  - Cost: open weights can be orders of magnitude cheaper to run than closed APIs.
  - Control: models can run on-prem, at the edge, or in low-connectivity environments, with no vendor lock-in.
  - Customization: teams can fine-tune weights on their own data and inspect exactly what they deploy.

But there are traps, too:

  - License strings: “open weights” can still restrict redistribution or use in certain verticals.
  - Geopolitical risk: export controls, sanctions, or data-localization rules can fragment the ecosystem.
  - IP ambiguity: ownership of derivatives, inputs, and outputs must be nailed down before adoption.

Organizations Should Seek IP Clarity

Organizations should ensure the following when using open models:

  1. Derivatives are yours. If you fine-tune and produce a “child” model, you should own that derivative to the fullest extent allowed by the base license. Choose permissive licenses when possible and memorialize ownership in statements of work (SOWs) and data processing agreements (DPAs).
  2. Inputs are protected. Contractually bar providers from using your prompts, corpora, or embeddings to improve their public models. Require at-rest/in-use encryption and strict data-retention windows.
  3. Outputs are yours. Ensure the license grants commercial rights to generated outputs and that no usage restrictions (sector bans, user caps) sneak in via “open but not OSI-open” terms. (Llama’s license, for example, is open-weight but not fully OSI-open.)
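The checklist above can be sketched as a simple pre-adoption gate. This is a minimal, illustrative example: the license identifiers and policy flags are hypothetical placeholders, not legal determinations, and any real entry must be verified against the actual license text by counsel.

```python
# Hypothetical license gate: encode the three checklist items as flags and
# refuse any model whose license fails (or is missing from) the policy table.
from dataclasses import dataclass


@dataclass(frozen=True)
class LicensePolicy:
    name: str
    osi_approved: bool          # fully OSI-open, or "open but not OSI-open"?
    commercial_outputs: bool    # commercial rights to generated outputs?
    derivative_ownership: bool  # do we own fine-tuned "child" models?


# Illustrative entries only -- verify each against the real license text.
POLICIES = {
    "apache-2.0": LicensePolicy("apache-2.0", True, True, True),
    "mit": LicensePolicy("mit", True, True, True),
    "llama-community": LicensePolicy("llama-community", False, True, True),
}


def clears_ip_checklist(license_id: str) -> bool:
    """Return True only if all three checklist items are satisfied."""
    policy = POLICIES.get(license_id)
    if policy is None:
        # Unknown license: fail closed and escalate to counsel.
        return False
    return (policy.osi_approved
            and policy.commercial_outputs
            and policy.derivative_ownership)


print(clears_ip_checklist("apache-2.0"))       # permissive license: passes
print(clears_ip_checklist("llama-community"))  # open-weight, not OSI-open: fails
```

The key design choice is failing closed: a license that is absent from the table is treated as non-compliant until reviewed, rather than assumed permissive.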

What To Expect In The Years Ahead

Organizations should remember that licenses matter: “open weights” can still carry strings that complicate redistribution or certain verticals.

As top American and Chinese labs compete in open models, prices fall, reproducibility rises, safety and evaluation tooling improves, and the long tail of use cases (on-prem, edge, low-connectivity) stops being second-class. This is probably good news.

But geopolitics can fragment ecosystems – for example, with export controls, sanctions, or data-localization rules forcing region-specific forks.

Organizations will need to carefully consider which stacks they can download, inspect, and run – while minimizing political risk.

--

Shaan Ray