AI Workslop Destroys $9M in Productivity


When I first wrote about Shadow AI, we kept hearing the same refrain from CTOs: their people were just experimenting with chat-based AI tools at work. No big deal.

They were wrong. That experimentation has evolved into what researchers writing in Harvard Business Review now call AI workslop, and it’s costing companies millions annually.

When 90% of employees are already using AI outside official channels, and 95% of enterprise AI pilots deliver no measurable return, the disconnect isn’t about adoption—it’s about ROI. Companies have plenty of AI activity. What they lack is AI that actually works.

Now, new research reveals just how dangerous unguided AI experimentation has become:

According to a recent study from Harvard Business Review, 40% of employees report receiving what researchers call AI “workslop” in the last month alone: “AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”

The scale is staggering: employees estimate that 15.4% of all content they receive at work now qualifies as AI workslop. It’s not just mediocre blog posts or awkward marketing copy. It’s strategy memos, internal updates, even “vision” documents churned out by AI. And it flows in every direction—40% occurs between peers, 18% flows upward from reports to managers, and 16% flows downward from leadership.

The phenomenon hits professional services and technology sectors hardest. Each incident of AI workslop costs an average of 1 hour and 56 minutes to deal with—translating to $186 per month per employee, or over $9 million annually in lost productivity for an organization of 10,000 workers.

What started as side experiments is now scaling into organizational noise: wasted hours, wasted trust, and wasted money.

When AI Gives You Everything, You Get Nothing

I learned this lesson the hard way while testing Paleotech’s research tool.

I wanted to use it at full power: every major LLM, broad searches, maximum results. I didn’t want to miss anything. The outputs looked impressive: detailed, well-formatted, professional. But when I tried to actually use them, I hit a wall.

Nothing felt wrong, but nothing felt right either. I’d spend hours judging each answer on gut instinct, second-guessing myself, running more searches to validate the first batch. The token costs added up, but the real cost was clarity. I was drowning in plausible-sounding content that didn’t actually advance my thinking or get me closer to my research goals.

That’s when I realized I was generating my own AI workslop: outputs that looked valuable but lacked the substance to help me make decisions. Without clear criteria for what “good” looked like, I couldn’t tell the difference between insight and noise.

It reminded me of the Cheshire Cat’s advice in Alice in Wonderland, often paraphrased as: “When you don’t know where you’re going, any road will take you there.” AI is powerful enough to take you down all the roads at once, so unless you’ve set the destination, you end up overwhelmed, not enlightened.

But here’s the key difference between my mistake and what’s happening across companies: I caught it. I stopped, reassessed, and fixed my approach. I used citations to validate information. I read key documents to ensure the AI was interpreting them correctly. I analyzed things more carefully, asking: is this really true?

That recovery process—stopping to validate, cross-checking sources, questioning outputs—is exactly what companies need to build into their AI workflows. But most aren’t. Instead, the same mistake I made temporarily is now happening at scale: leaders and employees alike, using AI without structure, flooding teams with outputs that look polished but create more work than they solve.

The Competence Penalty of Poor AI Use

Here’s what makes AI workslop particularly dangerous: it doesn’t just waste time. It erodes trust. HBR’s research found that employees who received AI-generated slop viewed the sender as less capable, less reliable, and less trustworthy.

This is the competence penalty: shortcuts that save time in the moment quietly chip away at your professional credibility.

And it flows both ways. While much AI workslop happens peer-to-peer, managers also send it downstream. When employees recognize that leadership is outsourcing thinking, trust crumbles. As HBR put it, “When organizational leaders advocate for AI everywhere all the time, they model a lack of discernment in how to apply the technology.”

The danger is cultural as much as operational. If leaders normalize shallow AI use, employees will mirror it. If leaders treat AI as a shortcut, teams will too. The result isn’t innovation; it’s organizational noise that makes everyone less effective.

Training in Discernment, Not Just Tools

That competence penalty—the erosion of trust from poor AI use—doesn’t fix itself. People need training not just in how to use AI tools, but in how to think with them. That means:

  • Critical analysis: Learning to spot gaps, missing context, and weak logic in AI outputs. Just because something looks polished doesn’t mean it’s useful.
  • Clear boundaries: Knowing when AI accelerates work and when it introduces risk—compliance-heavy communications, emotionally sensitive messages, or decisions requiring nuanced judgment.
  • Human oversight: Treating AI as a collaborator to refine ideas, not a source of ready-made truth. The best results come from iteration, not acceptance.
  • Discernment over speed: Asking “Does this actually solve the problem?” before accepting any output, no matter how impressive it looks.

In our work with clients, we’ve found that even a simple evaluation framework makes a dramatic difference. Before deploying an AI system, we help teams define: What does “good” look like for this specific task? How will we know if the output is actually useful? What would make us trust this answer enough to act on it (including human-in-the-loop review steps)?

Without those standards, every output looks plausible, but none can be trusted. With them, teams quickly develop the judgment to separate substance from slop.

Build Systems That Enforce Discernment

Training creates capability. But to sustain it, companies need organizational structures that align AI use with business strategy. For CEOs and CTOs, three priorities stand out:

Define where AI adds value… and where it doesn’t. Not every task benefits from AI. Identify where it accelerates work (summarization, analysis, ideation) versus where human judgment is non-negotiable (compliance, high-stakes decisions, emotionally sensitive communications).

Set evaluation standards before deployment. Even lightweight criteria help teams know what “good” looks like. For AI-enabled document intelligence systems, we use a simple testing rubric with our clients (see illustration): Is the answer accurate? Is it complete? Is it clear? How much effort was needed to get it? Along with pre-defining their mission-critical, must-get-right question types, these four criteria force teams to define success before we start generating outputs.

How to discern AI quality from AI workslop
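
To make the rubric concrete, here is a minimal sketch of how a team might encode it. The criteria mirror the four questions above; the class name, 1-to-5 scale, equal weighting, and passing threshold are illustrative assumptions for this example, not our production implementation.

```python
from dataclasses import dataclass

# Illustrative sketch: the criteria mirror the four rubric questions above.
# The 1-5 scale, equal weighting, and threshold are assumptions, not a standard.

@dataclass
class RubricScore:
    accuracy: int      # Is the answer accurate?
    completeness: int  # Is it complete?
    clarity: int       # Is it clear?
    effort: int        # How much effort to get this answer? (5 = very little effort)

def passes_rubric(score: RubricScore, threshold: float = 4.0) -> bool:
    """Return True only if the output clears the bar the team set before generating it."""
    criteria = [score.accuracy, score.completeness, score.clarity, score.effort]
    return sum(criteria) / len(criteria) >= threshold

# A polished-looking answer that misses key facts still fails the rubric.
draft = RubricScore(accuracy=2, completeness=3, clarity=5, effort=4)
print(passes_rubric(draft))  # False: it looks good, but it's workslop
```

The value isn’t in the code itself; it’s that the definition of “good” exists, and gets applied, before anyone forwards the output.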

Embed discernment in culture. If AI use is measured by speed alone, quality will always slip. If it’s measured by relevance and impact—fewer escalations, faster decisions, reduced rework—teams will adapt. (In fact, this is one reason why we built Generative AI Literacy (GAIL): to help teams develop this judgment through hands-on practice with their actual workflows, not just abstract AI concepts.)

The goal isn’t to stifle creativity. It’s to channel AI’s power into outputs that are reliable, useful, and aligned with what the organization actually needs.

AI Amplifies Thinking… For Better or Worse

AI has opened a frontier of abundance. Every path looks possible, every output looks polished. But without discernment, that abundance degenerates into AI workslop: content that appears useful but drains productivity, erodes trust, and clouds decision-making.

The solution is organizational, not individual. Leaders must model intentional AI use, employees must be trained in discernment, and companies must build guardrails that align AI use with strategy. None of these pieces works in isolation; together, they create the foundation for AI adoption that actually delivers value.

In fact, I wrote this article with AI assistance, but I didn’t accept everything it suggested. When it wanted to make the whole piece about a four-point rubric, I pushed back. When its early suggestions blamed leaders too heavily, I reframed the piece as a broader organizational challenge. When its edits to my draft repeated the same statistic twice, I caught the duplication and asked for a smoother flow.

That’s the point: you don’t have to agree with everything AI says. In fact, you shouldn’t. The real value comes from questioning, refining, and steering. Without that human discernment, you don’t get strategy… you get AI workslop.


AI doesn’t replace the hard work of thinking. It amplifies it. The companies that will succeed in getting value from AI are those that make discernment a core skill of their AI-empowered workforce.

