Originally published in Chinese on HK01 on 2025-10-04 07:00 | By Michael C.S. So | AiX Society

Over the past two months, I have fully delegated the heavy lifting of my daily work to AI agents. These agents are not mere chatbots; they are tools capable of genuinely rolling up their sleeves and getting things done. From updating websites and maintaining LinkedIn profiles to filling out online forms, entering data, compiling reports, and even scheduling round-the-clock work agendas and research tasks, agents can handle it all automatically. It feels as though I suddenly have several assistants who never take a day off.

Starting in 2025, I witnessed firsthand the emergence of first-generation AI agents on the market. Although still in the experimental stage, they were already capable of performing many administrative and operational tasks that previously required human hands.

This process has made me rethink the concept of “division of labor.” We always assumed that “humans handle decisions, machines handle execution” was a natural boundary. But when agents can receive instructions on Slack or Teams, and even autonomously complete code modifications and submissions, that line begins to blur.

The “Toddler Stage”: Half-Baked but Full of Promise

According to the AI 2027 report (published by Daniel Kokotajlo and others on April 3, 2025, through the AI Futures Project at ai-2027.com), 2025 was described as the era of “toddler agents.” The research indicated that agents at the time scored approximately 65% on the OSWorld computer task benchmark — only slightly below a proficient human’s 70%. In other words, they were still not stable enough and required human oversight and verification.

Yet this state of “semi-trust” is precisely the testing ground for enterprises. It forces us to redesign our workflows and to define clearly which tasks can be outsourced to AI and which require permission controls and human oversight.

For me, this “semi-maturity” is not a bad thing. Enterprise transformation should not wait until AI is perfect before beginning. Instead, it should proceed through trial and experimentation within manageable risk parameters. This echoes the entrepreneurial spirit: there is never an ideal moment — the act of taking action is itself a form of learning.

Real-Life Examples: AI on the Front Lines

Here is an example: I used to need an assistant to curate news every day, helping me find information related to AI, healthcare, or finance. Now, a single AI agent can automatically crawl through dozens of news articles, research papers, and reports within thirty minutes, condense them into a three-page summary, and deliver it straight to my inbox. While it occasionally miscites sources or includes duplicates, the time saved far outweighs the cost of these minor errors.
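
To make the mechanics concrete, here is a minimal sketch of such a digest pipeline. The sources, keyword filter, and summarizer below are illustrative assumptions: a real agent would crawl live feeds, call an LLM to summarize, and send email, none of which is shown here.

```python
# Minimal sketch of a daily news-digest pipeline (hypothetical sources;
# a stubbed summarizer stands in for the agent's LLM and email calls).
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    url: str
    body: str

KEYWORDS = ("AI", "healthcare", "finance")  # the column's topics of interest

def fetch_articles() -> list[Article]:
    # Placeholder: a real agent would crawl RSS feeds or news APIs here.
    return [
        Article("AI agents enter the enterprise", "https://example.com/1", "..."),
        Article("Hospital ward staffing update", "https://example.com/2", "..."),
    ]

def relevant(article: Article) -> bool:
    # Naive keyword match; a real agent would do semantic filtering.
    text = f"{article.title} {article.body}".lower()
    return any(k.lower() in text for k in KEYWORDS)

def dedupe(articles: list[Article]) -> list[Article]:
    # The column notes duplicates slip through; deduplicating by URL is a
    # cheap guard that no longer requires a human's attention.
    seen: set[str] = set()
    unique = []
    for a in articles:
        if a.url not in seen:
            seen.add(a.url)
            unique.append(a)
    return unique

def summarize(articles: list[Article]) -> str:
    # Stub: in a real pipeline this is the agent's LLM summarization call.
    return "\n".join(f"- {a.title} ({a.url})" for a in articles)

if __name__ == "__main__":
    digest = summarize(dedupe([a for a in fetch_articles() if relevant(a)]))
    print(digest)  # a real agent would email this instead of printing
```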

Another scenario involves finance: an AI agent can directly open an accounting system, enter income and expense data, and generate simple reports. Although a human accountant still needs to perform the final review, the workflow has shifted from “manual data entry” to “supervisory review.” This is precisely the transition from “tool to agent” described in AI 2027.
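
That supervisory-review pattern is easy to sketch: the agent drafts entries, and nothing is committed until a human approves. The entry fields and approval flow below are illustrative assumptions, not the interface of any real accounting system.

```python
# Sketch of the "supervisory review" pattern: the agent drafts ledger
# entries; a human approves or rejects each one before it is posted.
from dataclasses import dataclass

@dataclass
class LedgerEntry:
    date: str
    description: str
    amount: float  # positive = income, negative = expense

def agent_draft_entries() -> list[LedgerEntry]:
    # Placeholder: a real agent would read invoices and receipts here.
    return [
        LedgerEntry("2025-10-01", "Consulting income", 12_000.0),
        LedgerEntry("2025-10-02", "Cloud hosting", -340.0),
    ]

def human_review(entries: list[LedgerEntry]) -> list[LedgerEntry]:
    # The accountant's job shifts from typing entries to approving them.
    approved = []
    for e in entries:
        answer = input(f"Post {e.description} ({e.amount:+,.2f})? [y/n] ")
        if answer.strip().lower() == "y":
            approved.append(e)
    return approved

if __name__ == "__main__":
    drafts = agent_draft_entries()
    posted = human_review(drafts)
    print(f"Posted {len(posted)} of {len(drafts)} drafted entries.")
```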

The Cost Curve’s “Scissors Effect”

The most striking passage I read was AI 2027’s observation that the costs of cutting-edge AI and consumer-grade AI are diverging in a “scissors effect.” Professional-grade agent systems may cost hundreds of dollars per month, but the cost of achieving “baseline capabilities” is plummeting: the research estimates a decline of roughly fifty-fold per year.
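
A back-of-the-envelope calculation shows how steep that curve is. The starting cost below is an illustrative assumption, not a figure from the report; only the fifty-fold annual decline comes from the source.

```python
# Cost to reach a fixed "baseline capability" under a roughly
# fifty-fold-per-year decline (starting cost is illustrative).
start_cost = 200.0  # hypothetical $/month for baseline capability today
for year in range(4):
    print(f"year {year}: ${start_cost / 50**year:,.4f}/month")
# year 0: $200.0000/month
# year 1: $4.0000/month
# year 2: $0.0800/month
# year 3: $0.0016/month
```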

What does this mean? It means the corporate structure of the future will operate on a “two-tier ledger”:

On one hand, companies will invest in expensive, cutting-edge AI for strategic and high-value tasks.

On the other hand, they will deploy low-cost, consumer-grade AI to permeate employees’ daily work.

This model simultaneously creates differentiation and economies of scale. From my perspective, this is not merely a question of cost allocation — it represents a shift in management philosophy: leadership must learn how to deploy resources in tiers, ensuring they neither miss the technological frontier nor fail to boost the productivity of their entire workforce.
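
A rough sketch of that tiered deployment might look like the following. The model names, per-task costs, and value threshold are all illustrative assumptions; the point is only the routing logic itself.

```python
# Sketch of the "two-tier ledger": route high-value, strategic tasks to an
# expensive frontier model and routine tasks to a cheap consumer-grade one.
from dataclasses import dataclass

FRONTIER = {"name": "frontier-model", "cost_per_task": 5.00}  # hypothetical
CONSUMER = {"name": "consumer-model", "cost_per_task": 0.01}  # hypothetical

@dataclass
class Task:
    description: str
    business_value: float  # e.g., estimated dollar impact

def route(task: Task, threshold: float = 1000.0) -> dict:
    # Strategic, high-value work goes to the frontier tier; everything
    # else goes to the cheap tier that permeates daily workflows.
    return FRONTIER if task.business_value >= threshold else CONSUMER

tasks = [
    Task("draft quarterly strategy memo", business_value=50_000),
    Task("triage inbound support email", business_value=5),
]
for t in tasks:
    tier = route(t)
    print(f"{t.description!r} -> {tier['name']} (${tier['cost_per_task']:.2f} per task)")
```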

Compute Power as National Strength: The OpenBrain Revelation

AI 2027 features a fictional company called “OpenBrain” that, by the end of 2025, had built the largest AI cluster in history: equivalent to 2.5 million H100 GPUs, costing $100 billion, consuming 2 GW of power, with plans to double capacity again in 2026.
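
Dividing those headline figures against each other gives a feel for the scale. The per-unit ratios below are derived from the numbers quoted above; they are not stated in the report itself.

```python
# Quick sanity check on the reported OpenBrain figures (derived ratios only).
gpus = 2.5e6   # H100-equivalents
capex = 100e9  # dollars
power = 2e9    # watts (2 GW)

print(f"capex per H100-equivalent: ${capex / gpus:,.0f}")   # $40,000
print(f"power per H100-equivalent: {power / gpus:,.0f} W")  # 800 W
```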

This is no longer merely an engineering feat — it is national-scale infrastructure. When AI progress becomes intertwined with electricity, fiber-optic networks, and supply chain security, the competition transcends “who has the better model” and becomes “who can afford this massive infrastructure.”

This reminds me of the nuclear arms race during the Cold War. The difference is that this time, the weapons are not missiles but GPUs, data centers, and energy supplies. In other words, the “national strength” of the future will be synonymous with compute power.

2027: Will AGI Really Arrive?

The report also cites predictions from leaders at OpenAI, DeepMind, Anthropic, and other companies, who broadly agree that AGI (Artificial General Intelligence) could emerge within five years. Sam Altman has publicly stated that OpenAI’s goal is “genuine superintelligence.”

Will 2027 truly be the year of AGI? I believe no one can answer this with certainty. But what is certain is that AGI is no longer just the stuff of science fiction — it is a future scenario that must be taken seriously.

Even if AGI does not arrive on schedule, the pace of AI adoption and penetration is already sufficient to transform the operating logic of society.

My Reflections: Designing Fault-Tolerant Systems

The greatest insight I gained from reading AI 2027 is the importance of “fault tolerance.” The real challenge is not whether AI can replace humans, but whether we can design systems that tolerate errors and adapt quickly — allowing us to keep moving forward even when AI proves unreliable.

This is perhaps a reminder for all businesses and individuals: do not expect AI to give us perfect answers. Instead, build an environment capable of “recovering quickly after failure.” That way, even when AI makes mistakes, we will not be left behind by the world.

The strongest impression AI 2027 left on me is that it transforms the future from a distant technological prophecy into an urgent to-do list that demands immediate action.

The core question of 2027 is not whether AGI will truly be born, but whether we are ready to meet it. The key to preparation is not finding the most correct answer, but designing fault-tolerant, rapidly adaptable systems that allow us to keep pressing forward — even when AI errs.

For me, the real challenge is this: how to leverage AI to ensure that neither I nor my business is rendered obsolete by the world. This is the most profound question posed by AI 2027, and the reason I wrote this column.
