Originally published in Chinese on HK01 on 2026-01-04 07:00 | By Michael C.S. So | AiX Society

AI Is Not an Efficiency Tool — It’s Rewriting How Enterprises Collaborate: From OpenAI’s The State of Enterprise AI 2025 to the Reality Gap in Hong Kong Businesses

When ChatGPT serves over 800 million users per week, the significance of that number goes far beyond “tech adoption.” What it truly represents is an irreversible flywheel effect: once consumers treat AI as an everyday tool, enterprises that still treat AI as an experimental project face a question that is no longer whether employees are willing to use it, but whether the organization itself is ready to be reshaped.

The State of Enterprise AI 2025 report points out that from the steam engine to the semiconductor, the key to general-purpose technologies creating enormous economic value has never been the moment the technology appeared — but whether enterprises can transform the underlying capability into use cases that are scalable, repeatable, and institutionalized. Enterprise AI has now officially entered this historical stage.

And yet, this is precisely where most enterprises are most likely to go wrong.

From “Are You Using AI?” to “How Deeply Are You Using It?”

The report is the first to use large-scale, real-world usage data to map out the actual state of enterprise AI. Its analytical foundation draws from two key sources: de-identified usage data from OpenAI’s more than one million enterprise customers, and a systematic survey of nearly 100 enterprises and 9,000 employees.

The findings reveal that the shift in enterprise AI adoption is not merely about “more people using it” — the way it is being used is undergoing a qualitative transformation.

Over the past year, weekly message volume on ChatGPT Enterprise grew roughly eightfold, and average usage per employee rose by approximately 30%. Even more telling is that usage of structured workflows — such as Projects and custom GPTs — surged nearly 19 times within a year. This indicates that enterprise employees are moving from ad hoc queries toward reusable, shareable, and standardizable AI work patterns.

At the same time, the average reasoning token consumption per enterprise grew approximately 320 times over 12 months, reflecting that more advanced reasoning models are no longer confined to experimentation — they are being systematically embedded into actual products, services, and internal processes. AI’s role is shifting from “assistive tool” to “infrastructure.”

AI Is Becoming the New “Collaboration Layer”

One shift in the report that particularly deserves management’s attention is the fundamental change in how ChatGPT Enterprise is positioned within organizations.

In leading enterprises, AI is no longer just “an assistant each employee uses individually” — it is gradually becoming a shared language for cross-departmental collaboration and internal processes. Through Projects, custom GPTs, and APIs, enterprises are beginning to embed institutional knowledge, rules, and processes into AI, enabling it to execute repeatedly and produce consistent outputs across teams.

This transformation effectively means that AI is becoming the enterprise’s “collaboration layer” — a digital intermediary between people and systems that converts work methods previously dependent on experience, word-of-mouth, and interpersonal rapport into processes that machines can understand, verify, and reuse.

This also explains why the growth rate of structured workflows far exceeds that of one-off queries. The real value lies not in asking more questions, but in whether a single successful approach can be turned into a capability the entire organization can use repeatedly.

My On-the-Ground Observation: Why Is It Always IT in the Meeting Room?

This shift toward a collaboration layer is still not widely understood in Hong Kong enterprises.

Over the past few months, as I introduced AI-centric internal collaboration models and “digital employee” frameworks to various companies in Hong Kong, the scene in nearly every meeting was strikingly consistent: the people sitting in the room were almost always programmers, system architects, and IT department heads. The questions they raised were highly technical — how to connect APIs, how to configure permissions, whether data stays on-premises, and whether it meets information security and regulatory requirements.

There is nothing wrong with these questions in themselves, but they reveal a deeper reality: in most Hong Kong enterprises, AI is still instinctively categorized as “IT’s business.”

The ones truly absent are often the operations directors, HR leaders, and business line managers — the people who decide every day how processes run, how performance is evaluated, and how departments collaborate. The result is that AI has been locked inside a room of technical discussions before it has even entered the operational core.

Why Handing AI to IT Is Often the First Mistake

The State of Enterprise AI report clearly shows that the enterprises truly ahead of the curve are not there because their IT teams are exceptionally strong, but because AI is treated as part of operational and collaborative capability — not as a standalone system tool.

The fundamental role of an IT department is to ensure stability, compliance, and risk control. But where AI truly creates value is in process restructuring, role redefinition, and the redistribution of decision-making authority. This is a transformation of management and organizational design — not another system integration project.

When AI is discussed only at the IT architecture level, it will only ever improve localized efficiency. Only when AI enters workflows, collaboration models, and performance logic does it begin to rewrite the enterprise itself.

AI Is Changing “Who Can Do What”

Another key finding of the report is that AI is not merely replacing labor — it is expanding role boundaries.

75% of surveyed employees said AI improved the speed or quality of their output, saving an average of 40 to 60 minutes per day, with heavy users saving over 10 hours per week. More importantly, the same 75% of users reported that AI enabled them to complete new tasks they previously could not perform.

Among non-technical roles, coding-related usage grew by approximately 36%. This signals that AI is narrowing the gap between “intent” and “execution” — professional barriers are no longer the biggest constraint. The real question is whether organizations allow capabilities to spread laterally.

A Reality Check for Management

OpenAI releases a new capability roughly every three days, but the report makes clear that what limits enterprises today is no longer model performance — it is whether the organization is ready to absorb these capabilities.

For Hong Kong enterprises, the real risk is not “adopting too slowly” but deploying in the wrong places. When AI is confined to the IT department, the enterprise is merely upgrading tools. When AI becomes part of the collaboration and operations layer, the enterprise is upgrading itself.

AI is not the next IT project — it replaces the enterprise’s operating system. This is not a sprint; it is an elimination race already underway.
