The AI world was waiting for DeepSeek to strike again. It finally did, but this time, the world barely flinched.
Just over a year ago, a then little-known Chinese startup out of Hangzhou shook Silicon Valley to its core. DeepSeek’s R1 model arrived like a thunderclap, competitive with the best American AI models. It was open-source and built for a fraction of the cost. The AI race suddenly had a new contender nobody saw coming.
Fast forward to April 2026, and DeepSeek is back with its highly anticipated V4 model, but the reaction this time is subdued. And that tells you everything about where the AI race stands right now.
DeepSeek launched V4 as a preview on Friday, April 25, coming in two options — a heavier V4 Pro and a lighter V4 Flash. Both are open-source and built using a Mixture-of-Experts (MoE) architecture, which basically means the model only activates a portion of its total parameters for any given task. This way, it keeps costs and computing requirements manageable.
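To make the MoE idea concrete, here is a minimal, purely illustrative sketch (not DeepSeek's actual architecture): a router scores each input, and only the top-k experts run, so most of the model's parameters sit idle for any given token.

```python
# Toy Mixture-of-Experts routing sketch. Each "expert" is just a
# function; the router activates only the top-k of them per input.
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, router_weights, k=2):
    """Route input x to the top-k experts and mix their outputs."""
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in router_weights]
    probs = softmax(scores)
    top = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    # Weighted sum over only the k selected experts; the others never run.
    return sum(probs[i] / norm * experts[i](x) for i in top)

# Four toy experts that just scale the input sum differently.
experts = [lambda x, s=s: s * sum(x) for s in (1.0, 2.0, 3.0, 4.0)]
router_weights = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4], [0.05, 0.05]]
out = moe_forward([1.0, 2.0], experts, router_weights, k=2)
```

With k=2 of 4 experts active, only half the "parameters" do work per input, which is the cost-saving trick the article describes, scaled down to a few lines.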
The Pro version is a genuinely massive model with 1.6 trillion total parameters, making it the largest open-weight model currently available. The Flash variant is significantly leaner at 284 billion parameters. Both come with a 1 million token window, which means you can throw entire codebases or lengthy documents at them without breaking a sweat.
Performance-wise, DeepSeek says V4 has largely closed the gap with today’s leading models on reasoning benchmarks. In coding, the company claims both V4 versions perform comparably to OpenAI’s GPT-5.4.
However, the company's own research paper acknowledges that the models still trail GPT-5.4 and Gemini 3.1 Pro slightly on broad knowledge tests.
One more limitation worth noting: V4 currently handles only text. No images, no audio, no video generation. That's a real gap compared to many of its closed-source rivals.
On pricing, DeepSeek is offering a 75% discount on V4 Pro until May 5. Even at full price, both models undercut their Western counterparts significantly. The Flash version comes in at just $0.14 per million input tokens, while the Pro version comes in at $0.145 per million input tokens.
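As a quick back-of-the-envelope illustration of those rates (input tokens only; output-token pricing isn't quoted here):

```python
# Input-token costs at the quoted per-million rates.
FLASH_INPUT_PER_M = 0.14    # USD per 1M input tokens (V4 Flash)
PRO_INPUT_PER_M   = 0.145   # USD per 1M input tokens (V4 Pro, full price)
PRO_DISCOUNT      = 0.75    # 75% off V4 Pro until May 5

def input_cost(tokens, rate_per_m, discount=0.0):
    """Cost in USD to process `tokens` input tokens at a given rate."""
    return tokens / 1_000_000 * rate_per_m * (1 - discount)

# Filling the full 1M-token context window once:
flash_cost = input_cost(1_000_000, FLASH_INPUT_PER_M)                   # $0.14
pro_discounted = input_cost(1_000_000, PRO_INPUT_PER_M, PRO_DISCOUNT)   # $0.03625
```

In other words, during the discount window, stuffing an entire codebase into V4 Pro's context costs a few cents of input tokens.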
Despite the affordability, the market reaction has been muted. That isn't disappointment; it's a sign that the world has caught up to the new reality. Last year's DeepSeek moment was a black swan. That surprise element is simply gone now.
Ivan Su, senior equity analyst at Morningstar, put it well: DeepSeek R1 shocked American markets because a Chinese model competing at that level was completely unexpected. V4 is the follow-through on that same story. And follow-throughs rarely become headlines.
Competition within China has also intensified dramatically. Benchmark data from Artificial Analysis shows that models from Moonshot AI's Kimi and Alibaba's Qwen are closing the gap fast. The domestic hype has cooled, which also limits the wow factor globally.
But here's the thing.
If markets aren’t talking about V4’s benchmarks, they’re certainly talking about its chips. And this might actually be the most consequential part of this whole release.
DeepSeek built V4 to run natively on Huawei's Ascend 950 chips, in partnership with the Chinese tech giant. This is a significant pivot. The earlier R1 model was trained on Nvidia hardware. It means DeepSeek is no longer dependent on Nvidia, or on American hardware more broadly.
Now for the controversial part.
Both Anthropic and OpenAI have accused DeepSeek of “distilling” their models, essentially extracting and replicating capabilities from American AI systems without authorization. Just a day before V4 launched, the White House’s Office of Science and Technology Policy also called out foreign entities, primarily China-based, for running large-scale campaigns to distill frontier models from U.S. labs. DeepSeek wasn’t named directly, but the implication was clear.
Let’s step back and look at the bigger picture. A year ago, DeepSeek’s emergence forced a rewrite of assumptions about AI development. V4 doesn’t rewrite anything. But it confirms the new chapter we’re already in.
The AI race is no longer a two-horse competition between a handful of American giants. It’s a genuinely global, multi-player sprint. The gap between frontier models and open-weight alternatives is narrowing every quarter. And the cost of capable AI is falling fast.
DeepSeek V4 didn’t shock the world, and it didn’t need to. It just had to show up capable, affordable, and built on Chinese chips. In the current AI landscape, that alone is a statement.