Radiation resistance, heat dissipation, and cost—how many more hurdles remain before we can send computing power into space?
At the 2026 Space Computing Industry Conference on April 3, the description of a “Fish Code” app captured a vision of one day making space computing universally accessible.
Liu Yangqi, an associate researcher at the Institute of Computing Technology of the Chinese Academy of Sciences, shared a scenario imagined by a student: a fisherman asks the app, “Where is the tuna?” Satellites overhead locate the fish with hyperspectral cameras, an on-board “smart brain” runs inference, and the communication link sends back an answer that includes the location and the fishing gear to use.
This seemingly sci-fi scenario is expected to accelerate into reality as “space computing” moves from concept to engineering.
At the conference, representatives from government, industry, academia, and research jointly discussed the difficulties and paths to “bring computing power to space.” In the view of multiple industry insiders, at present China’s path toward commercializing space computing faces multiple challenges, including key technologies and economic costs, and the industry is also seeking a breakthrough through technological innovation and model change.
“Compute, Connect, Heat, Power” — none of it is easy
What is space computing? Multiple industry insiders and experts said that space computing refers to building an integrated space information infrastructure that combines computing power, storage capacity, and transport capability, by relying on space technologies to deploy on-orbit computing systems, data storage systems, and high-speed data interconnection facilities.
Under traditional models, satellites must first send data back to Earth, and then Earth-based data processing centers parse it—what is referred to as “space sends, Earth computes.” In a space computing system, satellites become “computers with wings,” enabling real-time processing of on-orbit data and autonomous decision-making.
Li Jie, deputy director of the Cloud Computing and Digitalization Research Institute at the China Academy of Information and Communications Technology, described “three stages” of space-computing development: space sends and Earth computes; space sends and space computes; and space-based primary computing. At present, space computing is in the “space sends and space computes” stage, moving from concept validation into early engineering.
Since the second half of last year, space computing has attracted considerable attention. Its surge in popularity is driven, on the one hand, by the AI wave and factors such as the processing of massive data and a sharp increase in energy consumption in ground data centers; on the other hand, it is also due to breakthroughs in technical validation and the introduction of multiple supporting policies.
However, deploying computing power into space is not easy.
In an interview with media outlets including First Financial, Liu Jingjing, chief operating officer of Guoxing Aerospace, said that “compute, connect, heat, power” are the key challenges facing the industry. On computing, radiation-resistant high-performance computing chips and payloads must be developed. On communications, high-speed, stable inter-satellite and satellite-to-ground laser links must be established. On thermal management, heat-collection technologies for ultra-high heat-flux densities and heat dissipation across very large areas must be solved. On energy, large-scale new types of power-supply systems must be built.
Liu Yangqi elaborated on issues such as radiation-hardened computing chips and thermal management. Radiation, for example, can cause single-event upsets and single-event latch-up, directly corrupting data in chips; in addition, vacuum and extreme temperature swings cause material fatigue and performance drift.
He said that in a vacuum environment without air convection, conventional air-cooling heat dissipation methods are completely ineffective. Today, the power consumption of a high-performance AI chip can reach several hundred watts, and its heat flux density far exceeds that of traditional aerospace-grade chips; it can only rely on liquid-circulation heat dissipation with a more complex structure, which also brings new, system-level engineering challenges.
“From how the chip’s heat is extracted, to the choice between soft and hard thermal pads, to the microchannel design of the liquid-cooling plates, the long-term stability of the coolant, and the reliability of the circulation pump, each step is like walking on thin ice. It is a system-level scientific problem that demands extensive experimental validation.” He gave an example: an on-orbit computing project with 3P of computing power (three quadrillion operations per second) had its ground-tank tests repeatedly reset to zero by a microbubble almost invisible to the naked eye, stretching the work out over more than a year.
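The vacuum constraint described above can be made concrete: with no air convection, waste heat ultimately leaves a spacecraft only by radiation, so radiator area scales with chip power. A minimal sketch using the Stefan–Boltzmann law, where the chip power, radiator temperature, and emissivity are all assumed for illustration and are not figures from the article:

```python
# Back-of-envelope: radiator area needed to reject chip heat in vacuum.
# All parameter values below are illustrative assumptions.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def radiator_area(power_w, emissivity=0.9, t_radiator_k=320.0, t_sink_k=4.0):
    """Area (m^2) an ideal flat radiator needs to reject `power_w` to deep space."""
    flux = emissivity * SIGMA * (t_radiator_k**4 - t_sink_k**4)  # W/m^2
    return power_w / flux

# A single ~700 W AI accelerator (assumed) already needs on the order of:
print(f"{radiator_area(700):.2f} m^2 per chip")  # roughly 1.31 m^2
```

Even under these generous assumptions, a rack's worth of accelerators quickly translates into tens of square meters of radiator, which is why the article's speakers stress “heat dissipation over extremely large areas.”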
Liu Yangqi also believes that the application ecosystem in space has hardly started yet, and the construction of an ecosystem in the space information sector is urgently needed.
Breakthroughs via multiple technology routes
Facing harsh physical challenges and huge market prospects, explorers worldwide have shown multiple technology routes spanning system architecture, chips, energy, heat dissipation, and launch/transport.
In terms of system architecture, ZTE’s chief scientist Xiang Jiying summarized three main paths.
The first is the “satellite cluster” route explored by Google. Several satellites fly in formation at extremely close separations of a few hundred meters, in dawn–dusk orbits that never enter Earth’s shadow. The close spacing enables high-speed inter-satellite laser links resembling the internal network of a ground data center, and thus supports on-orbit training and inference of AI models. This approach demands ultra-precise formation control and carries a high technical barrier.
The second is the “distributed computing” route represented by Musk’s Starlink. It relies on tens of thousands of widely distributed communication satellites, each with relatively modest computing power. This architecture suits low-latency inference tasks, but it is difficult to support the massive data exchange and parameter synchronization that AI training requires; bandwidth and latency become the bottlenecks of the distributed system.
The third is Europe’s “space supercomputing center” concept that has remained on paper. Its idea is similar to building a “computing space station”: through multiple launches, assembling a large, centralized supercomputer in orbit.
Given China’s national conditions and industrial strengths, Xiang Jiying suggested following the second route, distributed computing: “The entry barrier is relatively lower, and the limits of any single satellite can be offset by an advantage in the number of satellites launched.”
On chips, industry insiders proposed three routes: lightly customized commercial chips, radiation-hardened dedicated chips, and space-native chips. Xiang Jiying noted that NVIDIA and Google both lightly customize ground chips, a route that is applicable in China as well.
Liu Yangqi also put forward a more forward-looking idea: it may be possible to design entirely new materials and devices by leveraging the space environment itself. Perhaps in the future, space computers should not be “radiation-hardened,” but rather “absorb radiation.”
On the heat dissipation side, active thermal control has been mentioned multiple times at the conference. For example, Galaxy Aerospace has validated a pump-driven heat dissipation system on a flat-panel satellite launched in 2023; the Institute of Computing Technology of the Chinese Academy of Sciences is working on system-level engineering issues such as microchannel design and pump design.
Building the industry ecosystem is also on the agenda. At the conference, the Space Computing Professional Committee of the computing power industry development forum was inaugurated. Billed as China’s first industry-wide professional coordination platform of its kind, the committee gathers strength from across the industrial chain, including academicians and experts, leading enterprises, research institutes, and financial institutions. Li Jie said its establishment will strengthen coordination between the computing power and aerospace industry chains and help build an industrial ecosystem integrating all factors.
How do you calculate the cost account?
The space computing industry needs multiple technical links to support development. This also means that the deployment cost of space computing is high.
How do you calculate the cost account? Song Zhengji, a researcher at the Beijing Institute of Spacecraft General Design (Aerospace Institute 501), has studied the question. He broke down the cost of putting computing power into space: launch and transport account for roughly 30%-40%, satellite manufacturing for 20%-30%, and space-environment adaptation (radiation hardening, heat dissipation, and so on), computing chips, and energy systems each take a substantial share of the rest. For a data center of the same 30-megawatt scale, space computing still costs about an order of magnitude more than building on the ground.
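The quoted shares can be laid out in a toy calculation. The tenfold ground-to-space multiplier and the 30%-40% and 20%-30% ranges come from the article; the midpoints chosen below and the size of the residual bucket are assumptions for illustration only:

```python
# Toy cost model for a 30 MW space data center, normalized so that an
# equivalent ground data center costs 1.0.
GROUND_COST = 1.0
SPACE_MULTIPLIER = 10  # "about one order of magnitude higher"
space_total = GROUND_COST * SPACE_MULTIPLIER

shares = {
    "launch/transport": 0.35,          # midpoint of the 30-40% range
    "satellite manufacturing": 0.25,   # midpoint of the 20-30% range
    "hardening, chips, energy": 0.40,  # remainder; internal split unspecified
}
for item, share in shares.items():
    print(f"{item}: {share * space_total:.1f}x the ground build cost")
```

Seen this way, even the smallest bucket of the space build exceeds the entire cost of the equivalent ground data center, which is why launch-cost reduction dominates the discussion that follows.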
Under these circumstances, when will space computing reach an inflection point? And how will it achieve a commercial closed loop?
Reducing rocket launch and transport costs is an industry consensus. In their presentations, representatives of Blue Arrow Aerospace, Xingji Honor, and others all spoke of mastering rocket-recovery technology to make rockets reusable.
“If we achieve industrialization and design a configuration whose first stage can be reused 20 times, the launch cost will fall to about 20,000 yuan per kilogram,” said Xie Hongjun, deputy general manager of Xingji Honor Group. He further predicted that solving two-stage reusability on the transport side could bring space-based costs level with ground-based ones.
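The arithmetic behind such projections is straightforward amortization: the fixed vehicle cost is spread across the number of flights. The vehicle cost, payload, and refurbishment figures below are invented for illustration and are not the article’s numbers; only the mechanism, not the magnitudes, is the point:

```python
def cost_per_kg(vehicle_cost, flights, payload_kg, refurb_per_flight=0.0):
    """Amortized launch cost per kilogram over a vehicle's service life.

    All inputs are hypothetical; refurbishment is charged once per flight.
    """
    total = vehicle_cost + refurb_per_flight * flights
    return total / (flights * payload_kg)

# Hypothetical 500M-yuan vehicle lifting 20 t to orbit:
single_use = cost_per_kg(500e6, flights=1, payload_kg=20e3)
reused_20x = cost_per_kg(500e6, flights=20, payload_kg=20e3,
                         refurb_per_flight=30e6)
print(f"expendable: {single_use:,.0f} yuan/kg")  # 25,000 yuan/kg
print(f"20 reuses:  {reused_20x:,.0f} yuan/kg")  # 2,750 yuan/kg
```

The per-kilogram figure falls roughly in proportion to the flight count until per-flight refurbishment, rather than the vehicle itself, dominates the total.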
Beyond transport costs, factors such as mass production cost reductions for perovskite photovoltaic modules and lower hardware costs for commercial chips are also important drivers for the development of space computing.
The industry inflection point does not seem far off. Some institutions also predict that by 2030, the global space computing market size will exceed one trillion US dollars.
Multiple industry insiders interviewed said that fields such as national security, the low-altitude economy, ocean monitoring, and information services urgently need space computing. The application scenarios likely to close a commercial loop first are concentrated where real-time requirements are extremely demanding and ground networks are hard to extend or too costly to build, for example Earth observation and remote sensing, including emergency security and environmental monitoring.
“Compared with ground computing centers, the difference of space computing lies in real-time capability and coverage,” added Xie Lina, deputy director of the data-center department at the Cloud Computing and Big Data Research Institute of the China Academy of Information and Communications Technology. Computing satellites, she said, can form seamless global coverage through laser-linked networks, process data directly in orbit, and send back only the high-value information, compressing response times for scenarios such as disaster early warning and resource monitoring.
Tianyin Space operates a commercial SAR (synthetic aperture radar) remote sensing satellite constellation. Its co-founder and CTO, Ren Weijia, believes computing power will define the second half of commercial space. The company has steadily increased its on-orbit computing power in recent years and is working with Beihang University to push it to roughly the 200 TOPS level. That would shift remote sensing services from response times measured in days to hours, and eventually to minutes, enabling effective disaster early warning.
“The stronger the space computing power, the wider the boundaries of applications will keep expanding, and the two are currently forming a positive-feedback loop.” Ren Weijia believes: “In the next five years, space computing power will change from a ‘luxury item’ into a standard infrastructure for global sensing networks.”
(Source: First Financial)