When AI Enters the Workplace, Who Is Still Thinking?
Last week, I noticed something unsettling about how work was moving through my team.
Documents were arriving faster, yet each one demanded more thinking and judgment from me, not less. I wasn't being asked to build on colleagues' ideas; I was being asked to reconstruct them from AI-generated drafts that looked finished but weren't.
Instead of contributing to analysis, design, or strategic direction, I found myself repeatedly turning polished-looking text into something that could actually be used. Each handoff added more surface — but not more meaning.
That was when I realized we weren't suffering from a shortage of productivity. We were dealing with something else entirely: workslop.
We weren't short on output. We were short on thinking.
What “workslop” means — and why it matters
As far as I understand it, “workslop” can be defined as AI-generated content that masquerades as good work but lacks the substance to meaningfully advance the task at hand.
Workslop looks legitimate: grammatically correct, structurally tidy, polished. But it doesn't contribute insight, judgment, or clarity. Worse, it often shifts the cognitive burden to whoever receives it.
HBR uses the term to describe how such output undermines professional credibility and trust within teams, because recipients tend to read it as a sign of insufficient effort or unclear judgment on the sender's part.
The HBR authors found that workslop:
- is common: over 40% of employees could recall receiving AI-generated work that directly harmed their productivity.
- is widely sent: more than half of respondents admitted to sending workslop to colleagues.
- weakens team trust: one in ten said that 50% or more of the AI-generated work they sent colleagues was “actually unhelpful, low effort, or low quality.”
Workslop isn't an issue of “lazy employees.” It is a system design failure in how organizations introduce and govern AI.
That description felt uncomfortably familiar, and I suspect it would in many modern workplaces.
AI isn't removing work — it's relocating thinking
The common narrative is that AI increases efficiency. In reality, what I see inside organizations is something subtler and more dangerous:
AI is removing the obligation to think.
In my workplace, when a problem appears, the default response from leadership is not “How should we approach this?” but “Can AI do this?”
Not in the sense of augmentation — but as substitution.
- Need a white paper? “Find an AI tool.”
- Need a framework? “Ask the model.”
- Need to make sense of messy inputs? “Feed them into something.”
Early in my onboarding, I asked my boss directly: “How do you see AI's role in daily tasks?”
Her answer was straightforward: “I value results — you can use whatever tools you find helpful in the process.”
This flexibility is well-intentioned, but it implicitly prioritizes output speed over clarity of thought. When the only performance signal is the finished artifact, not the reasoning behind it, people lean on the tools that produce artifacts fastest, even when those artifacts lack depth.
In one project, I was asked to help prepare several case studies that would later be assembled into a formal white paper. My initial responsibility was drafting the individual cases, and later, I was asked to contribute to the preface as well. Because the organization was still refining its positioning and narrative, some of these elements were evolving in parallel, which made alignment and synthesis especially important.
When the question of editing and structuring the full white paper came up — a task that would normally be handled by a dedicated design or editorial team — I noted that producing a high-quality document of this kind usually requires sufficient time for synthesis, iteration, and editorial judgment. The suggestion was that AI tools could assist with that process.
That moment crystallized everything.
A white paper is not just formatting. It is an interpretation.
It requires understanding the nuances of each case study, identifying overarching themes, and weaving them into a coherent narrative that reflects the organization's values and goals.
Outsourcing that to a machine is not efficient. It is abdication.
The cognition gap no one owns
Here is what most companies don't want to confront:
AI does not replace human labor. It replaces human cognition — unless someone actively preserves it.
When organizations push AI adoption without redesigning how thinking is done, three things happen:
- Judgment is no longer located anywhere. Outputs exist, but no one owns their quality.
- Responsibility is displaced. “The model produced it” becomes a psychological shield.
- People learn to perform compliance instead of thinking. Using AI becomes a checkbox, not a cognitive act.
The HBR research finds that when organizations issue vague mandates like “use AI everywhere,” employees respond with performative usage: generic prompts, low-effort outputs, pass-along artifacts that survive workload pressure but don't meaningfully advance work.
The result is an organization drowning in artifacts but starving for insight.
Where the hidden cognitive labor goes
For executives, workslop can remain an abstraction.
For people like me, it is concrete labor.
When AI-generated drafts pass through several hands, the work of making sense of them tends to concentrate at the end of the pipeline. People downstream are left to clarify, correct, and synthesize what earlier stages did not, even though that cognitive labor is rarely reflected in timelines or expectations.
The real bottleneck is system design
The HBR authors are right to point out that workslop is not about lazy employees.
It is about organizational design that fails to integrate human judgment and machine productivity.
Healthy AI-enabled systems have:
- Clear quality standards — not just “generate quickly,” but “generate meaningfully.”
- Explicit boundaries for human judgment — guidelines on what must be human-verified.
- Psychological safety to question outputs — norms that allow employees to raise concerns without penalty.
- Leadership that values reasoning as much as results — not just artifacts, but reasoning behind artifacts.
Without those, AI doesn't make companies smarter. It simply increases the volume of material — in other words, it creates noise without creating understanding.
And in noise, truth dies.
The real risk of AI in the workplace
The biggest danger is not that AI will take our jobs.
It is that organizations will slowly forget what human thinking is for.
When speed, volume, and visible AI usage become the primary signals of performance, the practices that actually create value — sense-making, synthesis, ethical judgment, narrative construction, and design reasoning — are quietly deprioritized. They still have to happen, but they are no longer planned for, protected, or rewarded.
Over time, that erodes an organization's cognitive infrastructure. You can produce more documents, more decks, more “outputs” — yet understand less and decide worse.
AI will keep getting better. The question is whether our systems will.
Because the future of work will not be decided by how powerful our models are, but by whether our workplaces are designed to keep human judgment in the loop, visible, and accountable.
So here is the question that keeps me uneasy: if AI can now generate almost any artifact on demand, what exactly are we still holding humans accountable for inside our organizations — speed, or sense? Output, or understanding?
Because how we answer that question will quietly determine whether AI becomes a tool that amplifies human intelligence — or a system that slowly teaches organizations to live without it.