The debate over orbital data centers is intensifying.

Why Orbital Data Centers Are the Next Frontier for Space Compute
The author previously wrote about startup Lumen Orbit, backed by Y Combinator and Nvidia, which wants to move AI training data centers into orbit to escape Earth’s power and land constraints.
They note that training future frontier models (like hypothetical GPT‑6 or Llama 5) could push individual campus power demand toward gigawatt scale, approaching the size of the world’s biggest power plants.
Lumen Orbit’s basic idea
Lumen Orbit proposes multi‑kilometer solar arrays and radiative cooling structures in orbit, using near‑constant sunlight and vacuum conditions to cut energy and cooling costs over a 10‑year lifecycle.
Their white paper argues that, with cheaper reusable rockets, total cost of ownership for orbital data centers could eventually beat comparable ground facilities.
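The radiative-cooling claim can be sized with a back-of-envelope calculation using the Stefan-Boltzmann law. The sketch below is illustrative only: the emissivity, radiator temperature, and heat load are assumed values, not figures from Lumen Orbit's white paper.

```python
# Back-of-envelope radiator sizing for an orbital data center.
# All parameters are illustrative assumptions, not Lumen Orbit's figures.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_load_w, temp_k=300.0, emissivity=0.9):
    """Area needed to reject heat_load_w purely by radiation,
    ignoring absorbed sunlight and Earth IR (an optimistic bound)."""
    flux = emissivity * SIGMA * temp_k**4  # W per m^2, one-sided
    return heat_load_w / flux

# Rejecting 1 MW of server heat at a 300 K radiator temperature:
area = radiator_area_m2(1e6)
print(f"~{area:,.0f} m^2 of radiator per MW")  # roughly 2,400 m^2
```

Even under these optimistic assumptions, a gigawatt-scale facility would need millions of square meters of radiator, which is why the article stresses "multi-kilometer" structures.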
From niche concept to global debate
In the past, “orbital data centers” were a fringe topic in Chinese tech circles; by 2026 the Musk–Altman clash had turned them into a mainstream geopolitical and industrial question.
The author emphasizes that many analysts expect global data center electricity use to more than double by 2030, with AI as the main driver.
Why AI pushes people to look up at space
The article says there is growing consensus that AI’s bottleneck is shifting from algorithms and data to energy and infrastructure.
Some forecasts even suggest AI and data centers could eventually consume a very large share of global electricity if current trends continue, forcing consideration of radical ideas like orbital data-center infrastructure.
Non‑Musk players: Google’s Project Suncatcher
Space data centers are framed as a broad trend rather than any one company's exclusive idea: Google's Project Suncatcher plans small satellites carrying TPUs that would form an orbital AI constellation.
Google’s internal estimates reportedly suggest that by the mid‑2030s, cost per unit of compute in space could be comparable to Earth‑based centers.
Musk’s vision: an “orbital cloud” of a million satellites
According to filings with the U.S. FCC, SpaceX is seeking approval to deploy up to one million satellites dedicated to AI compute, far beyond today’s Starlink.
Satellites would fly in multiple orbits (including sun‑synchronous), powered by near‑continuous solar energy, linked optically to Starlink, and serving ground customers.
Musk’s vertically integrated stack
The author explains Musk’s claim of advantage: SpaceX rockets, Starlink connectivity, xAI as an anchor AI customer, and Tesla’s batteries, power electronics and humanoid robots for on‑orbit automation.
In Musk’s narrative, if Starship cuts launch costs by another order of magnitude and robots can perform in‑orbit maintenance, space compute could reach cost parity or even advantage within a few years.
Altman’s critique: why he calls it “ridiculous”
Sam Altman is quoted calling space data centers “ridiculous” in the current environment, while conceding they “might make sense one day.”
He argues orbital centers will not have meaningful, large‑scale impact this decade and offers three main reasons.
Altman’s three main objections
First, the economics: even with reusable rockets, launch costs per kilogram mean that delivering a kilowatt-hour from orbit still looks worse than simply building more terrestrial power and cooling capacity.
Second, maintenance and reliability: GPUs already fail frequently in ground data centers, and every failed board in orbit becomes lost capacity without very mature robotic servicing.
Third, key engineering gaps: most cutting‑edge 4 nm AI accelerators are not radiation‑hardened for space, while space‑qualified processes are far older and much less efficient.
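Altman's first objection can be made concrete with a toy cost comparison. Every number below (launch price per kilogram, system mass per kilowatt, terrestrial capex per kilowatt) is a hypothetical placeholder chosen only to show the structure of the argument, not a quote from SpaceX, Lumen Orbit, or any utility.

```python
# Toy comparison of orbital vs terrestrial capital cost per kW of
# compute power. All numbers are hypothetical placeholders.

def orbital_launch_capex_per_kw(launch_usd_per_kg, kg_per_kw):
    """Launch cost alone to put one kW of solar + radiator + server
    mass into orbit (hardware cost excluded)."""
    return launch_usd_per_kg * kg_per_kw

LAUNCH_USD_PER_KG = 1_500        # assumed reusable-rocket price
KG_PER_KW = 10.0                 # assumed panels + radiators + servers
TERRESTRIAL_USD_PER_KW = 10_000  # assumed ground power + cooling capex

orbital = orbital_launch_capex_per_kw(LAUNCH_USD_PER_KG, KG_PER_KW)
parity_launch_price = TERRESTRIAL_USD_PER_KW / KG_PER_KW

print(f"orbital launch capex: ${orbital:,.0f}/kW")
print(f"parity needs launch <= ${parity_launch_price:,.0f}/kg")
```

Under these assumed numbers orbit loses by a wide margin, which is Altman's point; Musk's counter-bet is that Starship pushes the launch price below the parity line while lighter solar and thermal hardware shrinks the mass per kilowatt.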
Expert middle ground: feasible, but not this decade at scale
Most industry and technical experts the author cites sit between Musk and Altman, leaning closer to Altman on timelines.
They generally agree the physics works, but think large‑scale, commercially viable orbital clusters are unlikely to be mainstream in the 2020s.
Technical and economic hurdles
Energy and aerospace consultants emphasize that true “supercomputer‑scale” orbital clusters need entirely new generations of ultra‑light, ultra‑efficient solar and thermal systems.
Semiconductor experts warn that without advanced radiation‑hardened processes, GPU lifetimes in orbit may be too short to beat terrestrial cost per lifecycle compute.
Path dependence of Earth data centers
Economists point out that more than 5 trillion dollars is expected to go into terrestrial data centers over the next decade, locking in strong path dependence.
Against that backdrop, space compute is likely to stay in the realm of prototypes and niche augmentation, not the main AI backbone.
Why Musk still might not be wrong
The author argues that many historically “irrational” infrastructure bets—subsea fiber, global CDNs, hyperscale cloud—eventually became normal once conditions aligned.
Musk’s real edge is controlling rockets, satellites, network and AI demand, plus operating the only true end‑to‑end experimental sandbox for orbital compute.
Emerging space‑compute race
SpaceX’s progress in cutting launch costs, plus announcements from other countries (including China) about space‑based AI centers, is creating political momentum for a “space compute race.”
This political and capital influx could itself accelerate technology maturation and change the economics over time.
The author’s policy takeaways for Hong Kong and Asia‑Pacific
First, AI policy should treat “energy and compute” as a first principle, not an afterthought behind data and algorithms.
For Hong Kong and Asia‑Pacific, that means planning cross‑border grids, green and nuclear power, and potential future access to orbital compute networks.
Treat space data centers as long‑term options
Second, the author suggests seeing space data centers as long‑term strategic options, not short‑term fixes.
Near term, expect testbeds like Lumen Orbit’s planned micro data‑center satellite and Google’s 2027 Suncatcher prototype, not massive production systems.
Think in terms of an infrastructure portfolio
Third, future AGI infrastructure will likely be a “portfolio”: more efficient chips, advanced cooled terrestrial super‑centers, cross‑border green and nuclear power, plus a small fraction of orbital nodes.
Even if orbital compute eventually supplies only 1–5 percent of total capacity, today’s debates are already shaping its regulatory and market framework.
Between “ridiculous” and “inevitable”: where should Hong Kong stand?
The author leans toward Altman’s realism for this decade: orbital data centers are very unlikely to rapidly reshape the AI energy map.
Yet in the longer term, as AI demand keeps rising and Earth‑based externalities mount, “looking up to the sun” may shift from seeming absurd to feeling inevitable.
The core question for the region
For Hong Kong and Asia‑Pacific, the key issue is not who is right in the Musk–Altman dispute.
It is how the region positions itself—regulator, collaborator, standards‑setter or fast‑follower—when today’s “crazy” orbital infrastructure proposals start becoming actual policy and capital‑market options.


