DeepSeek unveils new AI model tailored for Huawei chips
A phone screen in Hong Kong shows an app icon, and the industry hears a clang of metal. DeepSeek has released a preview of V4, a new model adapted for Huawei chip technology, and that detail matters more than yet another leaderboard screenshot. For years, Nvidia defined the default route for training and running serious AI. Export controls, supply uncertainty, and Beijing’s push for self-sufficiency turned that default into a liability. DeepSeek’s shift reads less like branding and more like industrial planning. Open models already spread fast on global platforms even as governments ban them over privacy concerns. V4 enters that contradiction on purpose. The question is who controls the compute stack.
The chip story eats the model story
DeepSeek’s close collaboration with Huawei marks a break from its earlier reliance on Nvidia. Huawei says Ascend chips took part in V4’s training, and that sentence carries ecosystem weight. Developers follow what runs reliably and cheaply. Analysts call Ascend China’s best homegrown alternative to Nvidia, and V4 support signals that top Chinese models can run on Chinese silicon without heroic workarounds. Jensen Huang has warned that Nvidia risks losing developers in China. Export controls don’t just block shipments. They push engineers to rewrite toolchains and buying plans. Once a local stack works, organizations standardize.
Benchmarks, bragging rights, and the missing senses
DeepSeek claims V4 Pro beats other open-source models on world-knowledge benchmarks and trails only Google’s closed Gemini-Pro-3.1. Benchmarks tempt the credulous. A smarter reading treats them as a weather report, not a proof. Still, V4’s rapid rise on Hugging Face matters because developers vote with time, not press releases. Reports stress its strengths: handling extremely long, complex texts at lower cost than competing top models. Cost decides who experiments. Yet V4 arrives with a clear limitation. The preview doesn’t support multiple modalities such as images and video. That narrows the fight. It also keeps V4 focused on text-heavy work: code, contracts, policy, search, and analysis.
Politics in the training loop
The timing looks engineered. The launch followed a White House accusation that China steals AI intellectual property at industrial scale, and it comes ahead of a high-profile US presidential trip to Beijing. Nvidia’s China situation stays tangled too. Washington reportedly approved sales of powerful H200 chips, while terms and approvals still slow shipments. In that climate, companies stop betting on stable supply and start building around uncertainty. DeepSeek has drawn Washington criticism that it benefited from American know-how. It has acknowledged prior Nvidia use but hasn’t clarified whether those chips fell under export bans. AI development now treats compute provenance and data lineage as political facts.
Open models, agents, and the million-token dare
DeepSeek pitches V4 as suited for AI agents, systems that execute tasks rather than just chat. Agents burn compute and demand long context. V4 reportedly processes over one million tokens, putting it near the massive context windows of top closed models. That matters for real work. A model that can hold a contract library or a large swath of a codebase in one pass can plan actions across it without constant retrieval gymnastics. Skepticism still belongs in the room. One engineer who tested the preview called it significant but warned against taking benchmark headlines at face value until independent evaluations arrive. Even so, open models keep closing the gap.
V4 doesn’t just add another model to a crowded market. It clarifies the new contest. Silicon supply, not marketing sheen, decides who gets to scale. DeepSeek’s Huawei alignment broadcasts that serious Chinese models can train and run on serious Chinese hardware, and that message lands because much of the world still defaults to Nvidia. Markets reacted, with Chinese chip stocks rising on expectations of wider local adoption, while domestic AI rivals fell when V4 appeared. Abroad, many governments still ban DeepSeek for data privacy concerns, and developers still pull open models because openness moves faster than policy. One future centers on closed, multimodal giants with polished product layers. Another future grows around cheaper open models tied tightly to domestic hardware stacks. V4 plants a flag there.