Originally published in Chinese on HK01 on 2025-10-12 07:00 | By Michael C.S. So | AiX Society
In recent conversations with various AI companies, I have noticed a clear trend: everyone is talking about “AI Agents.” In China, this has already become one of the hottest topics, with startups and products revolving around intelligent agents. This signals a critical turning point — AI is no longer just an assistive tool for generating images or writing articles; it is beginning to “act on our behalf,” directly intervening in our work and daily routines.
This shift is exciting on one hand, because AI Agents can genuinely save people enormous amounts of time and boost efficiency. On the other hand, it is also cause for concern, because once AI can act autonomously, the risks and governance challenges it brings are no longer theoretical — they are urgent, real-world problems. The recently published report Preparing for AI Agent Governance addresses precisely this point, proposing a concrete research and policy action blueprint. Its focus is not on sweeping ethical debates, but on how to progressively build effective institutional tools in an uncertain environment.
Sandboxes and Testbeds: The First Step in Governance
The report’s first key recommendation is: “Don’t rush to legislate — rush to prepare.” And the core of preparation lies in sandboxes and testbeds.
The purpose of a sandbox is to allow AI Agents to operate within a controlled environment while regulators and researchers observe their behavior. This design enables society to gather evidence at low risk: which risks can be naturally corrected by market forces, and which require policy intervention.
For example, if you want to test an AI tax-filing Agent, you could first run it in a sandbox alongside the traditional process, comparing the accuracy and compliance of both. This way, errors do not directly impact the real world, but the government can identify problems before deployment.
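This parallel-run idea can be sketched in a few lines of Python. Everything below is an invented illustration: the `traditional_process` and `agent_process` functions and the flat 15% rate are stand-ins, not anything the report specifies.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FilingResult:
    tax_due: float
    compliant: bool

def traditional_process(income: float) -> FilingResult:
    # Baseline: a simplified flat-rate calculation standing in for the
    # existing filing process (illustrative numbers only).
    return FilingResult(tax_due=round(income * 0.15, 2), compliant=True)

def agent_process(income: float) -> FilingResult:
    # The AI Agent under test; here just a stub that mirrors the baseline.
    return FilingResult(tax_due=round(income * 0.15, 2), compliant=True)

def sandbox_compare(cases: list) -> float:
    """Run both pipelines on the same cases and report the agreement rate."""
    matches = sum(traditional_process(c) == agent_process(c) for c in cases)
    return matches / len(cases)

print(sandbox_compare([30_000.0, 55_000.0, 120_000.0]))  # 1.0 when outputs agree
```

The point of the design is that any disagreement shows up as an agreement rate below 1.0 inside the sandbox, before the Agent ever touches a real tax return.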
This “learn as you go” approach is far more effective at reducing institutional lag than waiting until full-scale deployment and then scrambling to patch things up after the fact.
Transparency and Oversight: The Black Box Is the Greatest Risk
Sandboxes provide a testing ground, but governance cannot stop at the preliminary stage. The report emphasizes that once AI Agents enter society, they must be equipped with transparency and oversight mechanisms.
This encompasses three dimensions:
- Comprehensive Logs: Every operation must leave a traceable record.
- Real-Time Monitoring: Anomalous behavior must be detected during operation — for example, the submission of a large volume of suspicious commands within a short time frame.
- Incident Reporting: When errors occur, there must be an obligation to report them externally, rather than concealing them.
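A minimal sketch of what a monitoring layer combining these three dimensions might look like. The `AgentMonitor` class, its thresholds, and its simple rate-based anomaly rule are illustrative assumptions, not the report's specification.

```python
from collections import deque
from typing import Optional

class AgentMonitor:
    """Append-only action log plus a simple rate-based anomaly check."""

    def __init__(self, max_actions_per_window: int = 5, window_s: float = 60.0):
        self.log = []                 # comprehensive log: every operation recorded
        self.recent = deque()         # timestamps inside the sliding window
        self.max_actions = max_actions_per_window
        self.window_s = window_s
        self.incidents = []           # incident reports for external disclosure

    def record(self, action: str, now: Optional[float] = None) -> bool:
        """Log an action; file an incident if the command rate is anomalous."""
        import time
        t = time.time() if now is None else now
        self.log.append((t, action))  # every operation leaves a traceable record
        self.recent.append(t)
        while self.recent and t - self.recent[0] > self.window_s:
            self.recent.popleft()
        if len(self.recent) > self.max_actions:  # real-time monitoring
            self.incidents.append(f"rate anomaly at t={t}: {action}")
            return False
        return True

m = AgentMonitor(max_actions_per_window=3, window_s=10.0)
for i in range(5):
    m.record(f"cmd-{i}", now=float(i))  # 5 commands in 5 seconds trips the rule
print(len(m.log), len(m.incidents))     # 5 2
```

Note that the log keeps every action regardless of whether it was flagged; the incident list is a separate, smaller record meant for mandatory external reporting.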
Without these mechanisms, an AI Agent is nothing more than a “black box” that cannot truly be held accountable. Governance must therefore examine not only the outcomes but the process that produced them.
Infrastructure and Standards: Agent IDs and Emergency Brakes
The report introduces another core concept: building infrastructure for AI Agents. The most representative element is the “Agent ID.”
Every AI Agent should have an identity marker that clearly records its developer, functional scope, and certification status. This would prevent the emergence of “unregistered Agents” — those of unknown origin where accountability is nearly impossible to establish.
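One way to picture such a registry is as a lookup keyed by Agent ID. The field names below (`developer`, `scope`, `certified`) are invented stand-ins for whatever schema a real identity scheme would define.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentID:
    agent_id: str      # unique identity marker
    developer: str     # accountable developer of record
    scope: tuple       # declared functional scope
    certified: bool    # certification status

REGISTRY = {}  # hypothetical central registry: agent_id -> AgentID

def register(record: AgentID) -> None:
    REGISTRY[record.agent_id] = record

def lookup(agent_id: str) -> AgentID:
    """Reject unregistered Agents of unknown origin outright."""
    if agent_id not in REGISTRY:
        raise PermissionError(f"unregistered agent: {agent_id}")
    return REGISTRY[agent_id]

register(AgentID("tax-bot-001", "ExampleCorp", ("tax_filing",), True))
print(lookup("tax-bot-001").developer)  # ExampleCorp
```

The governance value is in the failure path: any Agent not in the registry is refused before it acts, so accountability can always be traced back to a registered developer.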
At the same time, technical safeguards are needed:
- Circuit Breakers: When an Agent exhibits large-scale erroneous behavior, it can be immediately shut down.
- Rollback Mechanisms: After an error occurs, its effects can be reversed to prevent catastrophic cascading failures.
- Standardized APIs: Ensuring safe interoperability between different Agents, as well as between Agents and existing systems.
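The first two safeguards can be combined in one small sketch: a breaker that halts the Agent after repeated errors, plus an undo log that lets completed effects be reversed. The `CircuitBreaker` class, its error threshold, and the undo-log design are illustrative assumptions.

```python
class CircuitBreaker:
    """Trip after consecutive errors; keep an undo log for rollback."""

    def __init__(self, max_errors: int = 3):
        self.max_errors = max_errors
        self.errors = 0
        self.tripped = False
        self.undo_log = []  # compensating actions, newest last

    def execute(self, action, undo):
        if self.tripped:
            raise RuntimeError("circuit open: agent halted")
        try:
            result = action()
            self.undo_log.append(undo)  # record how to reverse this step
            self.errors = 0             # success resets the error streak
            return result
        except Exception:
            self.errors += 1
            if self.errors >= self.max_errors:
                self.tripped = True     # immediate shutdown on large-scale errors
            raise

    def rollback(self):
        """Reverse completed effects in last-in, first-out order."""
        while self.undo_log:
            self.undo_log.pop()()

state = []
cb = CircuitBreaker(max_errors=2)
cb.execute(lambda: state.append("tx1"), lambda: state.remove("tx1"))
cb.rollback()
print(state, cb.tripped)  # [] False
```

Reversing effects in LIFO order matters: later actions may depend on earlier ones, so unwinding them newest-first avoids leaving the system in an inconsistent intermediate state.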
These arrangements are somewhat like traffic rules: without traffic lights, license plates, and braking systems, even the most advanced vehicles would only bring chaos and danger.
Certification and Auditing: AI Agents Need “Licenses” Too
Another operational priority is introducing professional certification and auditing systems for AI Agents, similar to those for accountants and lawyers.
Different levels of Agents should correspond to different tiers of licensing:
- Entry-level Agents would only be permitted to handle simple, low-risk tasks.
- Advanced Agents would be authorized to operate in high-risk domains such as finance and healthcare.
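In code, this tiering reduces to a simple authorization check before an Agent is allowed to act in a domain. The risk levels, domain names, and tier names below are made up for illustration.

```python
# Hypothetical mapping of task domains to risk levels (higher = riskier)
RISK_LEVELS = {"scheduling": 1, "document_drafting": 2, "finance": 3, "healthcare": 3}

# Hypothetical license tiers: the maximum risk level each tier may handle
LICENSE_TIERS = {"entry": 1, "intermediate": 2, "advanced": 3}

def authorized(tier: str, domain: str) -> bool:
    """Entry-level Agents handle low-risk tasks; higher tiers unlock high-risk domains."""
    return LICENSE_TIERS[tier] >= RISK_LEVELS[domain]

print(authorized("entry", "scheduling"), authorized("entry", "finance"))  # True False
```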
Furthermore, these Agents should undergo independent third-party audits, with their performance records made public — including accuracy rates and error rates. This approach both prevents misuse and builds public trust.
Such a system is not designed to slow down innovation, but to keep innovation advancing on a controllable track.
Government Leverage: Driving Both Supply and Demand
The report reminds us that governance is not just about “restricting” — it also involves “promoting.” Policy can exert force on both the supply and demand sides:
- Demand side: Offering subsidies or tax incentives to encourage the use of certified AI Agents.
- Supply side: Supporting developers — for example, by opening datasets and funding research to lower the barriers to development.
This approach prevents AI Agents from becoming the exclusive domain of a few large enterprises and instead enables broader adoption across a wider range of applications.
International Coordination: Avoiding Governance Fragmentation
AI Agents are inherently cross-border in nature. A multinational corporation’s Agent, for instance, may have to comply simultaneously with regulations from the United States, the European Union, and other jurisdictions. If the requirements of these jurisdictions are incompatible, compliance costs skyrocket.
Therefore, the report emphasizes the importance of international coordination. At a minimum, there needs to be a degree of mutual recognition in transparency, certification, and safety standards to avoid fragmentation. Otherwise, the ultimate victims will be both developers and users.
Governance Must Start with Action
The greatest value of this research report lies in the clear operational blueprint it provides: sandbox experimentation, transparent monitoring, identity certification, professional auditing, supplemented by policy leverage and international coordination. These are not abstract discussions but institutional designs that can be launched immediately.
The era of AI Agents has arrived, and governance is no longer a question of “whether” but of “how to act.” When AI truly begins to “act on its own” on behalf of humans, society must simultaneously build the infrastructure needed for accountability and protection.


