Originally published in Chinese on HK01 on 2026-03-14 | By Michael C.S. So | AiX Society

Last year, I wrote a column for HK01 about Lumen Orbit, introducing how this startup was attempting to use “space data centers” to solve AI’s energy and computing bottlenecks, and suggesting it could be one of the key pieces of infrastructure on the path to AGI. At the time, the concept still carried a whiff of “Web3-era” frontier experimentation — bold in its technical approach, but still far from mainstream attention.

A year later, the landscape has completely changed. Elon Musk has loudly announced his intention to bet on the largest orbital data center system in history, sending AI computing power into space. Meanwhile, OpenAI CEO Sam Altman, at a public event, used a single word to describe the idea: “ridiculous.” As someone who was among the first in this region to systematically introduce the space data center concept, I want to use this article to re-examine the arguments on both sides and explore: from the perspective of the AI industry and technology policy, how should we view this “space computing race”?

From My Lumen Orbit Feature to the Global Space Computing Battle

Let me rewind to my previous article about Lumen Orbit. Founded in 2024, Lumen Orbit is backed by investors including Y Combinator and NVIDIA. The company advocates moving data centers used for training large AI models into space. In their white paper, they pointed out that training next-generation models on the scale of GPT-6 or Llama 5 could require a single data center campus to draw power ranging from 100MW up to nearly 1GW — approaching the capacity of the world’s largest power plants — placing unsustainable pressure on terrestrial power grids and land resources.

Lumen Orbit’s proposed solution involves deploying multi-kilometer-scale solar panel arrays and heat dissipation modules in orbit, enabling data centers to run on uninterrupted, high-intensity solar power along perpetual-daylight orbital paths, while leveraging the vacuum of space for radiative cooling — eliminating the massive cooling towers and water consumption required on the ground. Their white paper estimates that, assuming further reductions in launch costs through reusable rockets, the total operational cost of an orbital data center over a 10-year lifecycle could be significantly lower than an equivalent ground-based facility.

At the time, this concept was still largely confined to tech-circle discussions in the Chinese-language world. But by 2026, following the public clash between Musk and Altman, “space AI data centers” have leapt from technical white papers to become a global issue tied to geopolitics and industrial strategy.

Why Is AI Making Everyone Look Skyward?

Regardless of whether you support space data centers, one point has become near-consensus: AI's energy and computing demands are rapidly approaching the limits of terrestrial infrastructure. Industry research predicts that if current large-model trajectories continue, data center electricity consumption could account for a substantial share of global electricity usage by the late 2020s; some projections go further, warning that without course corrections, AI-related facilities could consume a significant fraction of global electricity within two decades.

Under this pressure, “sending computing power into space” suddenly stops sounding like science fiction. Space offers several key advantages:

  • Nearly uninterrupted, high-intensity solar energy unaffected by day-night cycles or weather, promising dramatically lower marginal electricity costs.
  • Vacuum and low-temperature conditions that favor radiative cooling, reducing water consumption and land use.
  • Freedom from terrestrial land constraints, environmental reviews, and local political obstacles, allowing more flexible scaling.
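
To give a sense of the thermal side of these claims, here is a back-of-envelope Stefan-Boltzmann estimate. Every parameter value (radiator temperature, emissivity, two-sided panels) is my own illustrative assumption, not a figure from any company's white paper:

```python
# Back-of-envelope radiator sizing for an orbital data center.
# All parameter values are illustrative assumptions, not vendor figures.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(heat_w, temp_k=300.0, emissivity=0.9, sides=2):
    """Radiator area needed to reject `heat_w` watts purely by radiation,
    ignoring absorbed sunlight and Earth's infrared for simplicity."""
    flux = emissivity * SIGMA * temp_k ** 4  # W per m^2 of radiating surface
    return heat_w / (flux * sides)           # both faces of a flat panel radiate

# A 1 MW compute module (roughly one large ground-based data hall row):
area = radiator_area_m2(1e6)
print(f"~{area:,.0f} m^2 of two-sided radiator per MW")  # on the order of 1,200 m^2
```

Scaled linearly, a gigawatt-class facility would need more than a square kilometer of radiator on these assumptions, which is why the white papers talk about multi-kilometer structures rather than conventional satellite buses.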

It is worth emphasizing that space data centers are not exclusive to Lumen Orbit or Musk. Google’s Project Suncatcher, revealed publicly in 2025, similarly plans to use a constellation of small satellites equipped with TPUs to form an AI data center in space, with a prototype satellite launch planned for 2027. Internal estimates suggest that by the mid-2030s, the per-unit computing cost of space data centers could approach parity with ground-based facilities. From this perspective, the Lumen Orbit feature I wrote was merely about one of the early pioneers in this long-term trend.

Musk’s Vision: Building an “Orbital Cloud” with a Million Satellites

So what exactly is Musk proposing? According to an application filed with the US Federal Communications Commission (FCC) in early 2026, SpaceX is seeking approval to deploy up to one million satellites forming an unprecedented “orbital data center” constellation. This scale far exceeds the current Starlink network of roughly ten thousand satellites. Think of it as an AI-dedicated cloud supercomputing layer encircling the Earth.

According to the filing, these satellites would be placed at altitudes between 500 and 2,000 kilometers across various orbits, including low-inclination and sun-synchronous orbits. They would be powered by near-continuous solar energy to fuel onboard machine learning accelerators, and interconnected via optical links with Starlink to transmit computational results back to ground-based customers. Third-party analysis suggests that under SpaceX’s optimistic assumptions — launching one million metric tons of satellite payload annually with each ton providing approximately 100kW of computing power — newly added orbital computing capacity could reach 100GW per year, equivalent to dedicating roughly 20% of current US total electricity consumption exclusively to AI.
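
The filing's headline number follows from simple multiplication. The sketch below reproduces that arithmetic; the US consumption figure (roughly 4,000 TWh per year) is my own ballpark added for context, not part of the filing:

```python
# Reproduce the third-party capacity estimate cited for the SpaceX filing.
tons_per_year = 1_000_000  # metric tons of satellite payload launched annually
kw_per_ton = 100           # assumed compute power delivered per metric ton

added_capacity_gw = tons_per_year * kw_per_ton / 1e6  # kW -> GW
print(added_capacity_gw)   # 100 GW of new orbital compute per year

# Context: US electricity consumption is roughly 4,000 TWh/year,
# i.e. an average draw of 4,000e12 Wh / 8,760 h, or about 457 GW.
us_avg_draw_gw = 4_000e12 / 8_760 / 1e9
share = added_capacity_gw / us_avg_draw_gw
print(f"{share:.0%} of average US electric draw")  # close to the "roughly 20%" cited
```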

Underpinning this grand vision is Musk’s unique “vertically integrated computing ecosystem”:

  • SpaceX provides reusable heavy-lift rockets (Starship) and large-scale satellite manufacturing capability.
  • Starlink serves as the global backbone network, connecting orbital computing power to ground-based users.
  • xAI, as the AI client, absorbs the bulk of training and inference workloads, ensuring stable demand for the orbital data centers.
  • Tesla’s batteries, power electronics, and even Optimus robots are envisioned by Musk as key technology modules for future in-orbit maintenance and automated operations.

In his narrative, once Starship drives launch costs down by another order of magnitude and robots handle in-orbit maintenance, “moving computing to space” becomes not just feasible but could potentially achieve cost advantages within just two or three years.

Altman’s Response: Why I Also Understand His “Ridiculous” Remark

Compared to Musk’s optimism, Altman’s response was cool, even sharp. Speaking at an event in New Delhi, he said bluntly: “In the current environment, the idea of sending data centers into space is ridiculous.” While he added that “it might make sense one day,” his key point was clear: “Orbital data centers will not have a meaningful impact at scale within this decade.”

Based on the technical and economic data I have been tracking over the past year, Altman’s three core arguments are not hard to understand:

1. The Economics Don’t Add Up Yet

Even factoring in advances in reusable rockets, the cost of putting each kilogram of payload into orbit keeps the all-in cost of orbital capacity far above that of building new power and cooling infrastructure on the ground, especially when the two are compared on an "annual cost per kilowatt of supplied power" basis. Altman pointed out that a rough mental calculation of launch cost versus electricity cost makes it clear: "We're not there yet."
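
A rough version of that mental calculation can be written down. Every input here is an illustrative assumption on my part (launch price, satellite mass per kilowatt of compute, amortization period, grid price), chosen only to show the shape of the comparison, not to settle it:

```python
# Launch cost per kilowatt of orbital compute vs. grid electricity per kilowatt.
# All inputs are illustrative assumptions, not quoted figures.

def launch_cost_per_kw_year(usd_per_kg, kg_per_kw=10.0, lifetime_years=5.0):
    """Launch cost alone, amortized per kW of compute per year.
    Ignores satellite hardware, ground segment, and replacement launches."""
    return usd_per_kg * kg_per_kw / lifetime_years

# Running a 1 kW load on the ground all year at an assumed $0.08/kWh:
grid_usd_per_kw_year = 0.08 * 8760  # about $700 per kW-year

today = launch_cost_per_kw_year(usd_per_kg=1500)  # ballpark reusable-rocket pricing
target = launch_cost_per_kw_year(usd_per_kg=150)  # an aggressive future Starship target

print(today, target, grid_usd_per_kw_year)
```

On these assumptions, the launch bill alone is several times the electricity it displaces today, which is the substance of Altman's "we're not there yet"; but a tenfold drop in launch price flips the sign of the comparison, which is precisely the bet Musk is making.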

2. Maintenance and Reliability Are the Biggest Pain Points

In terrestrial data centers, high-end GPUs already fail frequently, requiring engineers to continuously replace them. Once you send this hardware into orbit, every failed board represents permanently lost computing power — unless we possess highly mature in-orbit repair robots and servicing capabilities. Current space robotics technology remains a long way from that goal.

3. Key Engineering Technologies Are Not Ready

Today’s most advanced AI accelerators use 4nm-class process nodes, but these chips are not hardened for high-radiation environments and are unsuitable for long-term orbital operation. By contrast, existing space-certified processes are typically at the 90nm level, with a massive gap in energy efficiency. Add to that the challenges of large-scale thermal management structures and system integration, and even from a “technical feasibility” standpoint, most experts believe commercially viable large-scale deployment is more likely in the 2030s — not the “two or three years” Musk suggests.

From Altman’s perspective, he would prefer this decade’s capital and engineering resources to be concentrated on three things: improving the energy efficiency of terrestrial AI chips, optimizing conventional data center design, and expanding nuclear and renewable energy — rather than prematurely betting on a high-risk, long-cycle space infrastructure gamble.

The Expert Middle Ground: Technically Feasible Does Not Mean Feasible This Decade

If we place Musk and Altman at the optimistic and pessimistic extremes, most technology and industry experts actually stand somewhere in the middle, leaning toward Altman’s side: “From a physics standpoint, space data centers are feasible, but the chances of mainstream-scale deployment within this decade are extremely low.”

Energy and space engineering consultants point out that powering a truly “supercomputing-class” AI cluster in orbit would require solar arrays and thermal management structures far beyond any existing commercial satellite, demanding an entirely new generation of ultra-lightweight, high-efficiency power and cooling technologies. Semiconductor experts caution that without mature radiation-hardened advanced process nodes, sending cutting-edge GPUs directly into orbit could severely compromise lifespan and reliability, meaning the per-unit computing cost over a lifecycle may not actually be lower than on the ground.

Economic analysis is equally sobering: over the next decade, global investment in terrestrial data centers is projected to exceed US$5 trillion, creating powerful path dependency and industrial inertia. By comparison, any space computing initiative will inevitably exist only as prototypes and edge supplements in the near term — not as mainstream infrastructure.

But That Does Not Mean Musk Is Necessarily Wrong

Even so, I find it hard to say definitively that Musk is wrong. Looking back at history, whenever technology and demand converge at a certain tipping point, infrastructure leaps that seemed “unreasonable” have a way of becoming reality — from undersea fiber optic cables to global CDNs to today’s cloud supercomputing centers.

Musk’s true advantage lies in:

  • He simultaneously controls launch vehicles, satellites, a global network ecosystem, and AI client demand — making him the only player in the world with a complete closed-loop testing ground.
  • SpaceX has already proven its ability to drive launch costs down to levels that traditional rockets cannot compete with. If Starship achieves its design launch frequency and reuse rate, the cost curve could drop dramatically once more.
  • As more countries (including China) announce plans to launch space-based AI data centers within five years, the political momentum behind a global “space computing race” is building, attracting more public and private capital and accelerating technology maturation.

In other words, if Altman’s role is to remind the industry “don’t forget gravity and accounting,” then Musk, with his extremely optimistic timelines and scale, is forcing regulators, investors, and engineering teams to confront a question ahead of schedule: when terrestrial power grids can no longer sustain AI, what other options do we have?

My Perspective: Bringing It Back to Hong Kong and the Region

As a columnist and long-term observer, I would summarize this debate, and place it in the context of Hong Kong and the broader region, as follows:

1. Treat “Energy and Computing Power” as the First Principle of AI Policy

Whether future computing power resides primarily on the ground or in space, AI’s critical bottleneck is shifting from “algorithms and data” to “energy and infrastructure.” If Hong Kong and the Asia-Pacific region hope to remain competitive in the AGI era, industrial policy cannot stop at “building a few local data centers.” We must simultaneously consider cross-border power grids, green and nuclear energy strategies, and how to connect to potential future space computing networks.

2. Treat Space Data Centers as a Long-Term Strategic Option, Not a Short-Term Silver Bullet

Within the foreseeable 5 to 10 years, space data centers are more likely to appear as pilot projects and prototypes — such as Lumen Orbit’s planned launch of a micro data center satellite in 2025 and Google’s Suncatcher prototype mission targeting 2027. For local governments, research institutions, and enterprises, the more important move is to get involved early in standards discussions and collaborative experiments, ensuring that when these systems mature, we have the technical and regulatory capability to connect.

3. Think About AGI Infrastructure as a “Combined Strategy,” Not a Single Approach

What will truly underpin AGI is likely not a single technology path but a combination: more energy-efficient chips, liquid-cooled and immersion-cooled terrestrial supercomputing centers, cross-border green and nuclear energy, plus a small portion of space computing nodes for specific workloads and tasks. Even if space data centers ultimately account for only 1 to 5% of the total, today’s discussions are already shaping their future regulatory and market frameworks.

Between “Ridiculous” and “Inevitable”: Where Should Hong Kong Stand?

Returning to the simple yet crucial question from the beginning: is Musk’s orbital data center a crazy gamble, or the inevitable future of AI infrastructure?

Based on current technical and economic conditions, I believe Altman’s judgment is closer to the reality of this decade — space data centers are unlikely to rewrite the AI energy landscape at scale in the near term. But from a longer-term structural perspective, as AI computing demand continues to grow exponentially, driving up terrestrial energy and environmental costs, “looking up at the sun” may one day transition from “ridiculous” to “inevitable.”

Situated in Hong Kong and the Asia-Pacific — a region that is both heavily dependent on imported energy and eager to break through in AI — what concerns me most is perhaps not “who is right and who is wrong,” but rather: as these seemingly outlandish infrastructure proposals gradually become real options for regulators and capital markets, are we prepared to play a role and claim a voice in shaping them? That is the core question I most hope to explore with readers through this article.
