OpenAI’s Open-Weight Models: What and Why They Matter for Your AI Strategy

Aug 09, 2025 | By Ryan Flanagan

TLDR: OpenAI has released two open-weight AI models, gpt-oss-20B and gpt-oss-120B, under an Apache 2.0 licence. That means you can run them yourself, even offline, with more control over costs and data privacy. For businesses, this significantly changes the AI adoption equation, making AI readiness assessment, enterprise AI roadmap planning, and strong AI governance frameworks more important than ever.

What are open-weight AI models? 

Open-weight models give you the trained “engine” that powers an AI system, without the full “blueprints” for how it was built. You can run them locally on your own machines or in your private cloud.

  • You can customise them for your business.
  • You don’t need to rely on an API subscription.
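As a rough illustration of what "running locally" can look like, here is a minimal sketch using the Hugging Face transformers library. The model ID is an assumption based on the release naming, and real chat-style use would normally go through the model's chat template:

```python
# Minimal local-inference sketch. The model ID below is an assumption based on
# the release naming; adjust it to the actual published repository.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed Hugging Face model ID
    device_map="auto",           # spread layers across available GPU/CPU memory
)

output = generator(
    "List three risks of running AI models on-premises.",
    max_new_tokens=150,
)
print(output[0]["generated_text"])
```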

They’re different from open-source models, where you also get the full training code and data.

Here’s how it breaks down:

| Feature | Open Source | Open Weight |
| --- | --- | --- |
| Model weights shared | Yes | Yes |
| Training code shared | Yes | No or partial |
| Can run locally | Yes | Yes |
| Can retrain from scratch | Yes | No |
| Licence type | OSI-approved | Often custom or restricted |

Why does this matter for business?

Open-weight AI models give you more control, potential cost savings, and the option to keep sensitive data in-house. That can make them ideal for regulated industries or organisations building their own enterprise AI roadmap.

The trade-off is less transparency about how the model was trained, and you take on responsibility for maintenance, security, and compliance.

In business terms, it’s like buying your own coffee machine instead of paying for coffee every day: more setup at first, but cheaper and more flexible in the long term.

 
What did OpenAI release?

OpenAI’s two new models:

  1. gpt-oss-20B – Small enough to run on a MacBook, with performance similar to o3-mini for coding and reasoning tasks.
  2. gpt-oss-120B – More powerful, designed for a single GPU in a private cloud, matching or exceeding o4-mini in coding, problem solving, and tool use.

Both are free to use under the Apache 2.0 licence, which means commercial use is allowed, and you can modify and redistribute the models.
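In practical terms, that licence means you can pull the weights down and keep a copy inside your own environment. Here is a minimal sketch using the huggingface_hub client, assuming the weights are published under the repository ID shown (adjust it if the actual ID differs):

```python
# Sketch: fetch the open weights for offline or air-gapped use.
# The repo_id is an assumption; check the actual published repository name.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="openai/gpt-oss-20b",
    local_dir="./gpt-oss-20b",  # keep a local copy you control
)
print(f"Weights downloaded to: {local_path}")
```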
 
Why does this matter?

1. Cost control and flexibility: Running AI locally means you avoid ongoing API fees, which can significantly cut costs in AI implementation consulting projects (see the break-even sketch after this list).

2. Data privacy: Sensitive data never leaves your environment, which is critical for regulated industries and aligns with ISO 42001 AI management principles.

3. Customisation: You can fine-tune models for your workflows, improving efficiency and accuracy without waiting for vendor updates (a fine-tuning sketch follows this list).

4. Offline capability: Perfect for remote operations or industries where internet access is inconsistent, something API-only solutions can’t offer.
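To make the cost-control point from item 1 concrete, here is a back-of-the-envelope break-even sketch; every figure is an illustrative assumption, not a quoted price:

```python
# Illustrative break-even: when does owning hardware beat per-call API fees?
# All numbers are assumptions chosen only for the example.
api_spend_per_month = 4_000      # assumed current monthly API bill (USD)
gpu_server_upfront = 30_000      # assumed one-off hardware cost (USD)
local_running_cost = 800         # assumed monthly power, hosting, upkeep (USD)

monthly_saving = api_spend_per_month - local_running_cost
break_even_months = gpu_server_upfront / monthly_saving
print(f"Break-even after roughly {break_even_months:.1f} months")  # about 9.4
```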
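And to illustrate the customisation point from item 3, here is a hedged sketch of parameter-efficient fine-tuning with LoRA via the peft library; the model ID and target module names are assumptions that would need checking against the real architecture:

```python
# Sketch: attach LoRA adapters so only a small fraction of weights is trained.
# Model ID and target_modules are assumptions; verify against the actual model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "openai/gpt-oss-20b"              # assumed Hugging Face model ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

lora_config = LoraConfig(
    r=8,                                      # low-rank adapter dimension
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],      # assumed attention projection names
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()            # typically well under 1% of weights
```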

 
But… there are trade-offs:

  • No full transparency – Without the full training data and process, there’s less visibility into biases or gaps in the model.
  • Security responsibility shifts to you – Running AI locally means your IT team becomes accountable for securing the environment.
  • Ongoing maintenance – You’ll need internal capability or support to update, optimise, and monitor models.
     

Vibe coding and open-weight models

“Vibe coding” is a coding style where you describe what you want in plain language and let the AI generate the code. With open-weight models like gpt-oss-20B, you could run this entirely offline, keeping sensitive prototypes inside your network.
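As a hedged sketch of what that can look like in practice, the snippet below sends a plain-language request to a locally hosted, OpenAI-compatible endpoint; the localhost URL, port, and model name are assumptions that depend on whichever local serving tool you use (vLLM, Ollama, llama.cpp and similar all expose this style of API):

```python
# Sketch: "vibe coding" against a locally served open-weight model.
# base_url, api_key placeholder, and model name are assumptions that depend
# on your local serving setup; nothing here leaves your network.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local server endpoint
    api_key="not-needed-locally",         # placeholder; no external calls
)

response = client.chat.completions.create(
    model="gpt-oss-20b",                  # assumed local model name
    messages=[{
        "role": "user",
        "content": "Write a Python function that validates UK postcodes "
                   "and returns True or False.",
    }],
)
print(response.choices[0].message.content)
```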

For SMEs or enterprise teams exploring AI tools for business or AI-powered strategy execution, this opens the door to rapid prototyping without exposing IP to external APIs.

How to integrate open-weight AI into your strategy

Before you download and start experimenting:

  1. Run an AI Readiness Assessment – Identify whether your infrastructure, team, and governance are ready for local AI deployment.
  2. Define an Enterprise AI Roadmap – Plan where open-weight models fit into your wider AI adoption plan.
  3. Implement AI Governance – Use standards like ISO 42001 to set rules for risk, ethics, and security.
  4. Start with low-risk pilots – Test open-weight models in non-critical workflows before scaling.
 

FAQs

Q: Are open-weight models free to run?
A: The licence is free, but you’ll need the hardware and people to run and maintain them.

Q: How do open-weight models compare to API-based AI like ChatGPT?
A: APIs are easier to set up but lock you into ongoing costs and vendor control. Open-weight gives you more independence but requires more internal capability.

Q: Can I use open-weight models in regulated industries?
A: Yes, but only if you have strong governance, risk management, and security controls in place. This is where an AI ISO 42001 AIMS Certification can help.

Start your AI journey today

Open-weight models like gpt-oss-20B and gpt-oss-120B mark a shift in how businesses can use AI: it’s like going from renting to owning (with no mortgage). But ownership comes with responsibility.

If you want a clear plan to integrate AI into your organisation while keeping control of costs, data, and compliance, start with our AI Strategy Blueprint or book an AI Readiness Assessment.

For organisations building internal governance capability, our AI ISO 42001 AIMS Certification program ensures you meet global best practice from day one.