AI Does Not Reduce Work. It Intensifies It.

What a new study found about AI at work -- and what it means for how you design your systems right now

Seneca Bailey

2/23/2026
4 min read

A focused consultant thoughtfully reviewing organizational charts in a modern office.

Here is something worth sitting with before you read the rest of this article.

Think about the last month of work. Have the AI tools your organization deployed made your workday shorter? Or have they made it faster -- which is a different thing entirely?

If your honest answer is "faster, but not shorter," you are not alone. And according to new research out of UC Berkeley, that experience is not an accident. It is a pattern. And left unaddressed, it becomes a problem.

Someone had to say it.

What the Research Found

Researchers spent eight months embedded inside a two-hundred-person technology company studying what actually happened when employees started using AI tools at scale. AI use was not mandated -- employees adopted it voluntarily, because it worked. The results were genuinely impressive at first. Work moved faster. Tasks that used to take hours took minutes. People were getting more done.

But over the eight months, something else started happening alongside the productivity gains. Employees who adopted AI most enthusiastically began extending their hours -- not because anyone asked them to, but because there was always more they could do now. Role boundaries started blurring. Product managers were writing code. Researchers were picking up engineering tickets. The expanded capacity created expanded expectations, and the expanded expectations created expanded work.

By month six, burnout, cognitive strain, and decision fatigue had spiked among the earliest and most committed AI adopters. The people who had leaned in hardest were the ones hitting the wall first.

The researchers named it "workload creep": AI lowers the barrier to complex work so effectively that people take on more scope, more breadth, and more output than the job was previously designed to hold -- often without realizing it is happening, and without any corresponding change to how the work is governed, sequenced, or resourced.

Why This Is Not an AI Problem

I want to be direct about something: what the Berkeley study describes is not a failure of AI. It is a failure of work design.

AI is a mirror. It reflects the state of your organizational systems back at you -- amplified. If your systems for governing workload, clarifying decision rights, and sequencing priorities were well-designed before AI arrived, AI makes them more effective. If they were under-designed -- and in most organizations, they were -- AI accelerates the breaking.

This is not a new problem with a new cause. It is a familiar problem with a powerful new accelerant.

I have spent eighteen years working on the human and organizational side of technology adoption -- ERP modernizations, digital platform rollouts, operating model redesigns. The pattern the Berkeley researchers documented in eight months is a version of the same pattern I have watched play out across every major technology transformation I have been part of. The technology works. The adoption stalls. The people exhaust themselves. And everyone is surprised, even though the warning signs were visible from the beginning for anyone who was looking at the system rather than just the software.

AI is faster than ERP. The consequences arrive faster too.

The Part That Most Organizations Are Missing

Here is what the research team recommends, and I want to translate it from academic language into organizational design language, because the prescription is more specific than it might sound.

The researchers suggest three things: clear team norms for when to use AI and when to stop, deliberate "decision pauses" before high-stakes AI-assisted choices, and protected time that does not automatically fill with more prompts and tasks.

In organizational design terms, what they are describing is this:

Explicit governance of AI as a way of working, not just as a tool. Most organizations have AI policies covering security, privacy, and intellectual property. Almost none have AI norms covering workload, pace, sequencing, and recovery. Those are not technology questions. They are organizational design questions. They belong in the same conversation as how you design roles, how you structure meetings, and how you govern the pace at which change lands on your people.

Decision rights applied to AI-assisted work. When AI generates an output -- a draft, an analysis, a recommendation -- who reviews it, who refines it, and who owns the judgment call that follows? In most teams right now, the answer is unclear. Which means the person who ran the prompt absorbs all of it: the generation, the evaluation, the decision, and the accountability. That is not a workload reduction. That is a workload concentration.

Protected time treated as a design choice, not a personal responsibility. The Berkeley researchers noted that AI eliminates the natural friction points in the workday -- the small pauses that used to enforce rest and transition. When every gap becomes an opportunity for "one quick prompt," cognitive recovery disappears into the workflow. Organizations that are serious about sustainable AI adoption will design recovery into the system, not leave it to individual employees to carve out against cultural norms that reward constant output.

What This Looks Like When Organizations Get It Right

The organizations that are managing AI-era workload well are not the ones with the most restrictions on AI use. They are the ones that treated AI adoption as an organizational change program rather than a software rollout.

They asked the design questions before the tools went live: What work are we redesigning, not just automating? How do we want role boundaries to evolve, and who governs that evolution? What does sustainable pace look like in an AI-enabled workflow, and how do we build the norms to protect it?

These are not complicated questions. They are the same questions good organizations ask before any major technology transformation. The problem is that AI moved fast enough -- and was adopted widely enough, often without formal deployment -- that most organizations never stopped to ask them.

It is not too late. But the longer the governance gap stays open, the more the workload creep compounds.

Try This

Ask your team one question this week, and ask it honestly: since we started using AI tools more intensively, has your workday gotten shorter, the same, or longer?

If the honest answer is longer, you do not have an AI productivity problem. You have an AI practice problem. The tools are working. The system around them is not.

The good news is that system problems have system solutions. Clearer norms. Better decision rights. Deliberate workload governance. Protected pace. These are not exotic interventions. They are the same design choices that make any major transformation sustainable rather than exhausting.

AI will not fix broken work systems. But you can design work systems that make AI work the way it was supposed to.

This is the first article in the Unbroken Work AI series. Next: before your organization automates anything else, there is a more important question to ask first.