Google Jumps Back into the Open Source AI Race With Gemma 4


In short

  • Google released Gemma 4, a family of open models under the Apache 2.0 license.
  • The family spans four sizes; the 31B dense model currently ranks third among all open models on the Arena AI leaderboard.
  • American open-source AI gets a needed boost: with Google DeepMind behind it, Gemma 4 positions itself as the strongest US challenger to DeepSeek, Qwen, and the other Chinese leaders.

Google’s open-source AI ambitions just got more serious. The company released Gemma 4, a family of four open models built on the same research as Gemini 3 and licensed under Apache 2.0, a major departure from the more restrictive terms of previous Gemma versions.

Developers downloaded previous generations of Gemma more than 400 million times, spawning more than 100,000 community variants. This release aims to build on that momentum.

Over the past year, the open-source AI leaderboard has been dominated by Chinese labs. DeepSeek, MiniMax, GLM, and Qwen have traded the top spots, leaving American alternatives scrambling for relevance. As Decrypt reported last year, Chinese open models went from 1.2% of global usage at the end of 2024 to nearly 30% by the end of 2025, and Alibaba’s Qwen overtook Meta’s Llama as the world’s most used open model family.

Meta’s Llama was once the default choice for developers who wanted a capable, locally run model. That reputation has eroded: Llama’s custom license raised questions about how open it really was, and its performance fell behind the Chinese competition. The Allen Institute’s OLMo family tried to fill the gap but never gained traction. OpenAI’s gpt-oss release in August 2025 brought some fresh air to the ecosystem, but it was never designed to be a frontrunner.

And just yesterday, a 30-person US startup called Arcee AI released Trinity, a 400-billion-parameter open model that made a compelling case that the American open-source scene wasn’t dead. Gemma 4 follows suit, this time with the full weight of Google DeepMind behind it, making it America’s strongest entry in the open-source AI race.

The model was “built from global research and technology like Gemini 3,” Google said in its announcement. The Gemma 4 series ships in four sizes: compact E2B and E4B models for phones and edge devices, a speed-focused 26B Mixture-of-Experts model, and a 31B dense model.

The 31B dense model currently ranks third among all open models on the Arena AI leaderboard, and the 26B MoE sits in sixth place. Google claims both models lead their size class, a claim that is debatable against Arena AI’s own numbers, where Chinese models still hold the top two spots.

We tested Gemma 4. It’s capable, with a caveat: the model applies reasoning even to tasks that don’t require it, which can produce over-engineered answers to simple questions. The creative writing is good, workmanlike rather than inspired, and could probably be improved with system prompts and careful prompt engineering.

Where it clearly shone was code. When asked to make a game, the result wasn’t flashy or sophisticated, but it ran flawlessly on the first try. Not bad for a 31-billion-parameter model. Zero-shot reliability matters more than impressive results that need fixing.

You can try the game (basic, but functional) here.

The four models cover the entire hardware spectrum. The E2B and E4B models are designed for Android phones, Raspberry Pi, and other edge devices, running offline with near-zero latency, native audio input, and a 128K context window. The 26B and 31B versions are aimed at workstations and cloud deployments, extending context to 256K and adding native function calling and JSON output for building autonomous agents. All four models handle images and video natively. Even the largest model’s weights fit on a single 80GB NVIDIA H100 GPU; quantized versions run on consumer hardware.
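The single-GPU claim is easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch, assuming bf16 weights at 2 bytes per parameter and 4-bit quantization at 0.5 bytes (both assumptions, not figures from Google), and ignoring activation and KV-cache memory:

```python
# Rough VRAM needed just to hold the model weights. Assumptions (not from
# the article): bf16 storage = 2 bytes/parameter, 4-bit quantization =
# 0.5 bytes/parameter; activation and KV-cache memory are excluded.
def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 1024**3

bf16_gb = weight_memory_gb(31, 2.0)   # ~57.7 GB: fits under an H100's 80 GB
int4_gb = weight_memory_gb(31, 0.5)   # ~14.4 GB: consumer-GPU territory
print(f"bf16: {bf16_gb:.1f} GB, int4: {int4_gb:.1f} GB")
```

By this estimate the 31B model’s bf16 weights leave roughly 20 GB of H100 headroom for activations and the KV cache, which is consistent with the single-GPU claim.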

The Apache 2.0 license is a story in itself. Google’s previous Gemma releases used a custom license that created legal ambiguity for commercial products. Apache 2.0 removes that friction: developers can modify, redistribute, and sell without worrying that Google will change the terms later. Hugging Face co-founder Clement Delangue praised the release, saying that “Local AI is having its moment” and calling it the future of the AI industry. Google DeepMind CEO Demis Hassabis went even further, calling Gemma 4 “the best open-source model in the world for its size.”

That’s a strong claim. Proprietary systems from Anthropic, OpenAI, and Google’s own Gemini continue to lead on the most demanding benchmarks. But for heavyweight open models you can run locally, modify freely, and deploy on your own infrastructure? The race just got tighter. You can try Gemma 4 now in Google AI Studio (31B and 26B) or the Google AI Edge Gallery (E2B and E4B). Model weights are also available on Hugging Face, Kaggle, and Ollama.
