By Michael So
Originally published on LinkedIn: https://www.linkedin.com/pulse/ai-2027-when-agents-become-my-everyday-infrastructure-michael-so-hp32c/
Source: HK01 Article – https://www.hk01.com/article/60282347
In the past two months, I have fully delegated the burdens of my daily work to AI agents. These agents are not mere chatbots, but tools that can truly “get things done”—updating websites, maintaining LinkedIn, filling out online forms, entering data, compiling reports, and even scheduling an entire workday and conducting research, all handled automatically. It feels like suddenly having several assistants who never ask for leave.
Since 2025, I have watched the first wave of AI agents reach the market. Although still experimental, they can already handle many administrative and operational tasks that previously required humans. This made me reconsider the concept of "division of labor." We used to assume that "humans make decisions, machines execute" was a natural boundary. Now that agents can take instructions in Slack or Teams, and even autonomously modify and submit program code, that boundary is fading.
Partially Mature and “Wobbly”
According to the “AI 2027” report (by Daniel Kokotajlo et al., published April 3, 2025 by the AI Futures Project), 2025 is called the “wobbly agent” stage. The study noted agents scored around 65% in the OSWorld computer task benchmark, just below skilled humans at 70%. In other words, agents are still not stable and require human supervision and review.
But this "semi-trust" state is precisely the trial ground for enterprises. It forces us to redesign our processes: which work can be delegated to AI, and which requires permissions and supervision, must all be clearly defined.
To me, “semi-maturity” is not a bad thing. Enterprise transformation shouldn’t wait for AI to be perfect. We should move forward and experiment within manageable risks. It’s akin to the spirit of entrepreneurship: There’s never an ideal time, action itself is learning.
Daily Examples: AI on the Frontline
For example, I once needed an assistant to organize daily news and find material relevant to AI, health, or finance. Now, one AI agent can comb through dozens of news pieces, research, and reports in thirty minutes, condense them into three-page summaries, and automatically deliver them to my inbox. While occasional misquotes or duplicates occur, the time saved far outweighs these small errors.
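The digest workflow described above can be sketched as a small pipeline. This is a minimal illustration only: the function names, topic tags, and sample articles are all hypothetical, and a real agent would call news APIs and a language model where this sketch uses simple stubs.

```python
# Hypothetical sketch of a daily news-digest pipeline:
# fetch -> filter by topic -> summarize -> assemble for delivery.
# In practice the summarizer would be an LLM call, not a truncation stub.

TOPICS = {"ai", "health", "finance"}  # the topics tracked in the column

def relevant(article: dict) -> bool:
    """Keep only articles tagged with at least one tracked topic."""
    return bool(TOPICS & set(article["tags"]))

def summarize(article: dict, max_words: int = 30) -> str:
    """Stub summarizer: truncate the body (an LLM call in a real agent)."""
    words = article["body"].split()
    return " ".join(words[:max_words])

def build_digest(articles: list) -> str:
    """Filter, summarize, and assemble the digest to be emailed."""
    picked = [a for a in articles if relevant(a)]
    sections = [f"- {a['title']}: {summarize(a)}" for a in picked]
    return "\n".join(sections)

# Toy input standing in for the dozens of pieces an agent would fetch.
articles = [
    {"title": "New LLM release", "tags": ["ai"], "body": "A new model shipped."},
    {"title": "Sports roundup", "tags": ["sports"], "body": "Match results."},
]
digest = build_digest(articles)
print(digest)
```

The point of the structure is the review step it enables: because the digest is assembled from discrete summaries, occasional misquotes or duplicates can be spot-checked item by item rather than re-reading the sources.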
Another scenario is finance: AI agents can directly log into accounting systems, input income and expenses, and create simple reports. Though a human accountant must still do the final review, the process has shifted from “manual entry” to “supervised checking.” This is the transition from “tool” to “agent” described in “AI 2027.”
Cost Curve “Scissors Effect”
What struck me most was the "scissors effect" the "AI 2027" report describes between the cost of cutting-edge AI and mass-market AI: the monthly fee for professional-grade agent systems may reach hundreds of dollars, while the cost of reproducing existing capabilities falls rapidly, reportedly by as much as fiftyfold per year.
What does this mean? In future enterprise structures, there will be a “double ledger”:
- Enterprises will invest in expensive, cutting-edge AI for strategic and high-value tasks.
- Meanwhile, low-cost mass AI will permeate daily work for employees.
This creates both differentiation and scalability advantages. For management, it’s not just about cost allocation but a shift in philosophy: Leadership must learn how to tier resource usage, leverage the technical frontier, and ensure collective work efficiency is elevated.
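The "double ledger" above is, in practice, a routing decision. The toy sketch below makes it concrete; the tier names, prices, and value thresholds are all illustrative assumptions, not figures from the report.

```python
# Illustrative two-tier routing: send high-value or high-risk work to an
# expensive frontier system, and routine work to a cheap commodity one.
# Names, costs, and the threshold are assumptions for the sake of the sketch.

FRONTIER = {"name": "frontier-agent", "cost_per_task": 2.00}
COMMODITY = {"name": "commodity-agent", "cost_per_task": 0.02}

def route(task: dict) -> dict:
    """Pick a tier from the task's estimated value and risk level."""
    if task["value"] >= 100 or task["risk"] == "high":
        return FRONTier if False else FRONTIER  # frontier tier
    return COMMODITY

tasks = [
    {"name": "board strategy memo", "value": 500, "risk": "high"},
    {"name": "daily data entry", "value": 5, "risk": "low"},
]
for t in tasks:
    print(f"{t['name']} -> {route(t)['name']}")
```

Management's job, in this framing, is choosing the threshold: where the expensive ledger ends and the cheap one begins.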
Computing Power as National Strength: The OpenBrain Lesson
“AI 2027” features a fictional company, “OpenBrain,” which established the largest AI cluster ever at the end of 2025—equivalent to 2.5 million H100 GPUs, costing $10 billion, consuming 2GW, and planning to double again by 2026.
This is no longer just an engineering feat but national infrastructure. As AI progress links to electricity, fiber networks, and supply chain security, competition has surpassed “who models better”—it’s about “who can afford the infrastructure.”
It reminds me of Cold War nuclear competition, but this time the weapons are GPUs, data centers, and energy supply. In short, future "national strength" will equal computing power.
2027: Will AGI Really Arrive?
The report cites leaders of OpenAI, DeepMind, and Anthropic, who agree that AGI (Artificial General Intelligence) may arrive within five years. Sam Altman of OpenAI has openly stated that their goal is "real superintelligence."
Will 2027 be the year of AGI? No one can say for sure. But it’s clear—AGI is no longer just science fiction, but a serious future option.
Even if AGI doesn’t arrive on time, AI’s speed of adoption is already enough to change society’s logic.
My Reflection After Reading: Designing Error-Tolerant Systems
The greatest insight I gained from “AI 2027” was the importance of error tolerance. The real challenge isn’t whether AI can replace humans, but whether we can design systems that tolerate mistakes and adapt quickly—so that when AI is unstable, we can still move forward.
This is a reminder to companies and individuals everywhere: don't expect perfect answers from AI. Instead, build environments that can recover quickly from failure, so that when AI makes errors, we are not the ones knocked out of the game.
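One common shape for such an error-tolerant design is a wrapper that retries an unreliable agent, validates its output, and escalates to a human rather than trusting a bad answer. The sketch below is a generic pattern, not a specific product's API; `agent` and `validate` are placeholders for real components.

```python
# Error-tolerant wrapper: retry, validate, escalate.
# The agent and validator here are stubs standing in for real systems.

def run_with_fallback(agent, validate, task, retries: int = 3):
    """Return (result, handled_by). Escalate to a human on repeated failure
    instead of silently accepting invalid output."""
    for _ in range(retries):
        result = agent(task)
        if validate(result):
            return result, "ai"
    return None, "human"  # flag for manual handling

# Demo: a flaky stub agent that only succeeds on its second attempt.
calls = {"n": 0}
def flaky_agent(task):
    calls["n"] += 1
    return task.upper() if calls["n"] >= 2 else ""

result, handled_by = run_with_fallback(flaky_agent, lambda r: bool(r), "invoice 42")
print(result, handled_by)  # the retry absorbs the first failure
```

The design choice is that failure is an expected path with a defined destination (the human queue), not an exception that halts the process.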
The deepest impression from “AI 2027” is that it turns the future from distant sci-fi into an immediate to-do list.
The core question for 2027 isn’t whether AGI arrives, but whether we’re ready for it. Preparation is not about finding the perfect answer—it’s about designing systems that tolerate error and adjust rapidly, so even when AI fails, we keep moving forward.
To me, the real task, for individuals and enterprises alike, is learning how to use AI so that we are not the ones replaced. That is the core question posed by "AI 2027," and why I wrote this column.

