Success Powered by Industrial Electricity: How China Achieves AI Independence

Eight years ago, the story of ZTE served as a bitter lesson in relying on foreign technology. On April 16, 2018, the U.S. imposed an immediate export ban on the company, paralyzing a firm with 80,000 employees and annual revenues exceeding one hundred billion yuan. Eight years later, the industry tells a very different story. In February 2026, DeepSeek announced a multimodal model built entirely on domestic Chinese chips, achieving for the first time an integrated, independent stack outside of NVIDIA. This shift was no coincidence but the result of a comprehensive strategy combining innovations in algorithms, chips, and, most importantly, abundant industrial electricity.

Breaking the Blockade: From CUDA Monopoly to Self-Built Infrastructure

The real challenge faced by Chinese AI companies was not just chips but NVIDIA's CUDA platform. Launched in 2006, CUDA became the backbone of the global AI industry. By 2025, its ecosystem counted 4.5 million developers, with over 90% of the world's AI developers working within it.

The core issue is that CUDA acts as a self-reinforcing flywheel: more users attract more tools and libraries, which in turn attract more developers. Over time, switching away from CUDA came to mean rewriting nearly two decades of accumulated expertise and code.

However, repeated U.S. bans—three waves of restrictions between 2022 and 2024—forced Chinese companies onto a harder path. They chose an indirect route, starting from algorithms rather than chips.

The New Equation: Improving Algorithms and Economic Efficiency

From late 2024 to 2025, Chinese AI companies widely adopted mixture-of-experts (MoE) models. Instead of activating a massive model in full, tasks are divided among multiple smaller experts, dramatically reducing resource consumption.
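The routing idea can be sketched in a few lines. This is a minimal, illustrative toy (random "experts", a simple softmax gate, hypothetical sizes), not DeepSeek's actual architecture:

```python
import numpy as np

def moe_forward(x, experts, gate_w, top_k=2):
    """Route input x to the top_k highest-scoring experts and
    combine their outputs, weighted by softmaxed gate scores."""
    scores = x @ gate_w                # one score per expert
    top = np.argsort(scores)[-top_k:]  # indices of the chosen experts
    weights = np.exp(scores[top])
    weights /= weights.sum()           # softmax over only the chosen few
    return sum(w * experts[i](x) for i, w in zip(top, weights))

rng = np.random.default_rng(0)
d, n_experts, top_k = 8, 16, 2
# Each "expert" here is just a small linear map.
expert_mats = [rng.standard_normal((d, d)) for _ in range(n_experts)]
experts = [lambda v, M=M: v @ M for M in expert_mats]
gate_w = rng.standard_normal((d, n_experts))

y = moe_forward(rng.standard_normal(d), experts, gate_w, top_k)
active_fraction = top_k / n_experts
print(active_fraction)  # 0.125: only 2 of 16 expert matrices touched per token
```

The point of the gate is that compute scales with the experts actually activated per token, not with the total parameter count.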

DeepSeek V3 exemplifies this approach: 671 billion parameters, but only 37 billion activated (5.5%) during inference. Training cost: $5.576 million using 2,048 H800 GPUs over 58 days, compared to $78 million for GPT-4.

But the real revolution was in pricing. The DeepSeek API costs $0.2 per million input tokens, versus $5 for GPT-4 and $15 for Claude Opus, making it 25 to 75 times cheaper.

This difference in economic efficiency was not just numbers; it transformed the market structure. As AI applications moved from simple chat to intelligent agents (consuming 10 to 100 times more tokens), price became a strategic factor. In February 2026 alone, Chinese model usage on OpenRouter surged 127% in three weeks.
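A quick sketch of why token volume turns price into strategy. The prices below are illustrative round numbers in the spirit of the figures above, and the workload sizes are assumptions, not data from the article:

```python
# Illustrative per-million-input-token prices (USD); not exact quotes.
PRICES = {"cheap_model": 0.2, "premium_model": 5.0}

def cost_usd(price_per_million, total_tokens):
    """Spend for a workload at a given per-million-token price."""
    return price_per_million * total_tokens / 1_000_000

chat_tokens = 2_000                # one simple chat exchange (assumed)
agent_tokens = 100 * chat_tokens   # agents consume 10-100x more tokens
requests = 10_000                  # hypothetical monthly request volume

for name, price in PRICES.items():
    chat = cost_usd(price, chat_tokens * requests)
    agent = cost_usd(price, agent_tokens * requests)
    print(f"{name}: chat ${chat:,.2f}/mo, agent ${agent:,.2f}/mo")

# The absolute gap widens from roughly $96/mo (chat) to $9,600/mo (agents):
# the same price ratio, applied to 100x the tokens.
```

At chat-scale volumes the price difference is noise; at agent-scale volumes it dominates the application's unit economics.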

Local Chips: From Inference to Training Leap

Reducing inference costs alone isn’t enough. The real challenge is enabling continuous training on new data. Here, local chips entered the scene.

In 2025, a Chinese company opened a 148-meter-long server production line in a small city, producing machines built entirely on Loongson 3C6000 processors and Taichu Yuanqi accelerator cards. Expected output: 100,000 units annually, on an investment of 1.1 billion yuan.

Most importantly, these chips moved from “inference capability” to “training capability.” In January 2026, Zhipu AI launched the GLM-Image model, the first advanced image generation model fully trained on Chinese chips. In February, China’s “Star” large communication model was trained on a Chinese computing infrastructure with tens of thousands of processing units.

This was a qualitative shift: training requires processing vast amounts of data and computing complex gradients, raising the demands on compute, interconnect bandwidth, and the software stack by roughly an order of magnitude compared to inference alone.
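A rough way to see why training is so much heavier than inference: beyond the weights themselves, training must also hold gradients and optimizer state. This back-of-the-envelope sketch assumes fp16 weights with an fp32 Adam optimizer, and ignores activations, KV caches, and parallelism overheads:

```python
def memory_gb(params_billion, training=False):
    """Rough per-replica memory estimate, in GB.
    Inference: fp16 weights only                        -> 2 bytes/param.
    Training:  fp16 weights + fp16 grads
               + fp32 master weights + fp32 Adam m, v   -> 16 bytes/param.
    Activations, KV caches, and parallelism overheads are ignored."""
    bytes_per_param = 16 if training else 2
    return params_billion * 1e9 * bytes_per_param / 1e9

# Using the 37 billion active parameters cited earlier:
print(round(memory_gb(37)))                 # 74 GB just for inference weights
print(round(memory_gb(37, training=True)))  # 592 GB before activations
```

Even under these simplified assumptions, training needs roughly 8x the memory of inference per parameter; with activations and data pipelines on top, the order-of-magnitude gap in the text is plausible.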

Huawei Ascend processors became the backbone of this system. By late 2025, over 4 million developers were working within the Ascend ecosystem, with more than 3,000 partners and 43 main models trained.

The Real Foundation: Industrial Electricity and Geopolitical Stability

If algorithms are intelligence and chips are muscles, then industrial electricity is the blood fueling the entire process. Here lies China’s greatest strategic advantage.

In 2024, U.S. data centers consumed 183 TWh of electricity, about 4% of the national total. Consumption is expected to more than double by 2030 to 426 TWh, nearly 12% of total consumption. ARM's CEO predicts that AI data centers will consume 20-25% of U.S. electricity by 2030.

The problem: the U.S. power grid is already strained. The PJM region in the East faces a 6 GW capacity shortfall. The U.S. anticipates a 175 GW gap by 2033, enough to power 130 million households. Wholesale electricity prices in data center hubs have risen 267% over five years.

In contrast, China generates 2.5 times as much electricity as the U.S.: 10.4 trillion kWh annually versus 4.2 trillion. More importantly, residential use accounts for only 15% of China's total, compared to 36% in the U.S., leaving a vast surplus available for industry.

In terms of cost, electricity in U.S. AI hubs ranges from $0.12 to $0.15 per kWh. In western China, industrial electricity costs about $0.03—one-quarter to one-fifth of U.S. prices.
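The generation and price figures above can be sanity-checked with simple arithmetic. The 100 MW data center is a hypothetical example chosen for scale; real tariffs and load factors vary:

```python
# Figures cited in the article.
us_gen_twh, cn_gen_twh = 4_200, 10_400      # annual generation, TWh
us_res_share, cn_res_share = 0.36, 0.15     # residential share of use
us_price, cn_price = 0.12, 0.03             # USD/kWh (low end for U.S. hubs)

# Generation ratio and non-residential energy available to industry.
print(round(cn_gen_twh / us_gen_twh, 2))       # 2.48x generation
print(round(cn_gen_twh * (1 - cn_res_share)))  # 8840 TWh in China
print(round(us_gen_twh * (1 - us_res_share)))  # 2688 TWh in the U.S.

# Annual power bill for a hypothetical 100 MW data center (8,760 h/year).
kwh = 100 * 8_760 * 1_000
print(round(kwh * cn_price / 1e6, 1))  # ~$26.3M/yr in western China
print(round(kwh * us_price / 1e6, 1))  # ~$105.1M/yr in a U.S. hub
```

On these numbers, the non-residential energy pool in China is more than three times the U.S. figure, and the same facility's power bill differs by roughly $80M a year.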

This huge gap lets China build infrastructure at a drastically lower cost. ByteDance, Tencent, and Baidu all plan to double their domestic server procurement in 2026. China's total AI computing power has already reached 1,590 EFLOPS.

Global Expansion: From Manufacturing to Token Export

While the U.S. faces an energy crunch, Chinese AI is quietly expanding globally. This time, the export isn't just products or factories but tokens: the basic units of data that AI models process and charge for.

DeepSeek illustrates this: 30.7% of its users are from China, 13.6% from India, 6.9% from Indonesia, and 4.3% from the U.S. It supports 37 languages and dominates emerging markets like Brazil. 26,000 global companies have accounts, and 3,200 institutions have deployed enterprise versions.

Within China, DeepSeek holds 89% of the market. In countries under U.S. sanctions, its share ranges from 40% to 60%. In 2025 alone, 58% of global AI startups integrated DeepSeek into their tech stacks.

Emerging markets, driven by cost pressure, have shifted to Chinese models. This creates a rare structural opening: a global market actively looking for alternatives to the NVIDIA-centered stack under geopolitical pressure.

Lessons from Japan and China’s Different Path

In 1986, Japan signed a semiconductor agreement with the U.S. The result was disastrous: Japan’s DRAM market share plummeted from 80% to 10%. By 2017, Japan’s share in the integrated circuit market had shrunk to just 7%.

The key difference: Japan settled for being the best product in an ecosystem dominated by external powers. It never built a truly independent ecosystem of its own.

China has taken a more difficult but sustainable route: from algorithmic improvements to local chips advancing from inference to training, accumulating 4 million developers within the Ascend system, and finally spreading tokens globally in emerging and mature markets. Every step has built an independent industrial system that Japan never possessed.

The Cost and the Real War

On February 27, 2026, three Chinese chip companies released performance reports on the same day:

  • Kimotech: revenue up 453%, first annual profit
  • Moistech: revenue up 243%, net loss of 1 billion yuan
  • Moxtech: revenue up 121%, net loss of 800 million yuan

Half fire, half water. The fire is market appetite: the gap left by NVIDIA's former roughly 95% market share is gradually being filled by local players.

The water is the enormous cost of building the ecosystem, and every loss is real money: R&D investment, software support, engineer-hours spent resolving compatibility issues. These aren't mismanagement but the "war tax" of establishing an independent ecosystem.

These three financial reports give the most honest picture of the battle for computing power: not an inspiring victory but a grinding, bloody front-line fight.

The Pivot Point

Eight years ago, we asked, “Can we survive?” Today, the question is different: “What is the cost of staying alive?”

The cost itself is genuine progress. China is no longer fighting just to survive but to build—an independent computing system supported by abundant industrial electricity, improved algorithms, local chips, and millions of developers. This is very different from Japan’s path. It’s a journey toward true independence.
