Your Team Is Using AI Wrong — And You’re Paying for It

What is workslop?

Workslop is low-quality AI-generated work content that appears polished but lacks real substance — memos, reports, and emails that offload cognitive effort onto the recipient rather than genuinely advancing a task. The term was popularized in the Harvard Business Review in 2025 and is one of the fastest-growing challenges facing HR and people leaders today.

Your employees don’t trust you with AI. They just haven’t told you yet.

That sounds harsh. But here is what the data actually shows: 76% of executives believe their employees feel enthusiastic about AI adoption. Only 31% of employees say they actually do. That is not a training gap. That is leaders and employees living in completely different realities.

And sitting right at the center of that gap is a problem that now has a name: workslop.

What Is Workslop — and Why Should You Care?

Workslop is AI-generated work content that looks polished and complete but lacks the substance to actually move anything forward. The memo that sounds authoritative but says nothing. The report that runs five pages and contains three ideas. The email that required a follow-up meeting just to figure out what it was asking.

Researchers from Stanford’s Social Media Lab and BetterUp coined the term and found that 40% of desk workers received something they would classify as workslop in a single month. It flows in every direction — between peers, from managers to reports, from employees to leadership.

And in most organizations, nobody is talking about it.

The Hidden Tax on Your Best People

Here is what actually happens when workslop circulates unchecked inside an organization:

Your least skilled people produce more output than ever. Your most skilled people spend more time cleaning it up than ever. And your highest performers — the ones who know what good looks like — quietly become unpaid editors for AI-generated mediocrity.

That is not productivity. That is a redistribution of frustration disguised as efficiency.

Research shows that 66% of employees who use AI at work have relied on AI output without evaluating it. Two-thirds of your workforce is potentially shipping work they have not critically reviewed — and someone downstream is paying the cost in rework, confusion, and eroded trust.

What AI Literacy Actually Looks Like

The standard organizational response to workslop is to call it a training problem and schedule another workshop. That will not fix it.

Real AI literacy is not a course. It is the consistent ability to do three things:

Direct with precision. A skilled AI user gives the model a role, a context, a format, and a constraint before asking for anything. They treat the prompt like a brief to a talented but inexperienced colleague — specific, clear, and complete. Without this, the model produces statistically average output. Which is another way of saying: exactly what everyone else is getting.

Judge the output. This is the skill most organizations skip entirely. Your people need to know what a good output looks like before they can recognize a bad one. If they cannot evaluate the result critically, they will ship whatever the model produces — and workslop spreads.

Iterate, not accept. The first response from any AI model is a draft, not a deliverable. The employees getting real value from these tools push back, refine, and add their own expertise on top. They treat AI as a thinking partner, not an answer machine.

The most practical intervention is not a training program. It is giving your people pre-built, role-specific prompts for their actual job tasks — so they are not starting from scratch every time, guessing at what to ask.
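To make "pre-built, role-specific prompts" concrete, here is a minimal sketch of what such a prompt library could look like. Everything in it is a hypothetical illustration — the roles, tasks, and template wording are invented for this example, not drawn from the research cited above. It simply encodes the role/context/format/constraint structure described earlier, so an employee starts from a complete brief instead of a blank box:

```python
# Minimal sketch of a role-specific prompt library.
# All roles, tasks, and template text are hypothetical examples.

PROMPT_LIBRARY = {
    ("hr", "policy_summary"): (
        "Role: You are an HR communications specialist.\n"
        "Context: Summarize the attached policy change for all employees.\n"
        "Format: One paragraph, then three bullet points of required actions.\n"
        "Constraint: Plain language, under 150 words, no legal jargon."
    ),
    ("finance", "variance_memo"): (
        "Role: You are a financial analyst writing for executives.\n"
        "Context: Explain this month's budget variance using the data provided.\n"
        "Format: A short memo opening with a one-line headline finding.\n"
        "Constraint: Cite specific figures; flag anything you are unsure of."
    ),
}

def build_prompt(role: str, task: str, details: str) -> str:
    """Look up the brief for a role/task pair and append task-specific details."""
    template = PROMPT_LIBRARY[(role, task)]
    return f"{template}\n\nDetails: {details}"
```

Note that the constraint line does double duty: it directs the model, and it gives whoever reviews the output an explicit, checkable standard — which is exactly the judgment step most teams skip.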

Why Workslop Is Really a Culture Problem

Here is what the research on workslop consistently shows: it is most common in environments where people feel psychologically unsafe.

Leaders are issuing vague directives to use AI while employees are overburdened and operating in cultures where admitting uncertainty or asking for help feels risky. When people cannot say “I do not know how to do this well” without consequence, they submit whatever the machine gave them and hope no one notices.

My research across 2.8 million employees shows that the organizations where AI is genuinely working are not the ones with the biggest technology budgets. They are the ones with cultures built on trust, mutual respect, and what I call emotional connectedness — the degree to which employees feel genuinely seen and valued by their organization.

When people feel safe, they flag poor outputs. They ask for help. They take ownership of quality rather than passing AI-generated noise along the chain. When they do not feel safe, workslop spreads — and nobody says anything about it.

AI does not fix a broken culture. It amplifies it.

Questions to Ask Before Your Next AI Initiative

Before you expand your AI tool stack, before you mandate another platform, before you roll out another AI transformation initiative — answer these questions honestly:

  • Do your people know the difference between a strong AI output and a mediocre one?
  • Do your highest performers feel safe enough to flag bad work when they see it?
  • Have you given your team role-specific prompts for their actual job tasks — or did you hand them a tool and wish them luck?
  • Is AI creating more work for your best people, or less?

If you cannot answer those questions confidently, you do not have an AI adoption problem. You have a culture and capability problem that AI just made visible.

The companies getting this right built the foundation first. The tools came second.

Find out where your culture stands before your next AI decision.


Frequently Asked Questions About Workslop

What is workslop?

Workslop is low-quality AI-generated work content that appears polished and professional but lacks real substance. The term was popularized in the Harvard Business Review and describes AI output that looks complete but offloads cognitive effort onto whoever receives it — creating rework rather than eliminating it.

Why is workslop a culture problem, not just a training problem?

Research from Stanford and BetterUp shows that workslop is most common in environments where employees feel psychologically unsafe. When admitting uncertainty or asking for help carries risk, people submit whatever AI produces rather than flagging poor output. That is a culture failure. Organizations with high-trust cultures see significantly less workslop because employees feel safe enough to maintain quality standards.

How do you stop workslop in your organization?

Stopping workslop requires building genuine AI literacy — the ability to direct AI with precision, evaluate outputs critically, and iterate rather than accept first results. It also requires role-specific prompt libraries so employees have a reliable starting point, and a culture where people feel safe enough to say the output isn’t good enough.

What is the hidden cost of workslop?

The primary hidden cost is a redistribution of effort from lower-skilled employees to your highest performers. When poor AI output circulates unchecked, the people who know what good looks like spend their time identifying and correcting work that should never have been submitted. Research shows 66% of employees who use AI at work rely on AI output without evaluating it — creating downstream rework that disproportionately burdens your best people.

What does workplace culture have to do with AI workslop?

Most Loved Workplace® research across 2.8 million employees shows that organizations with high-trust, emotionally connected cultures produce better AI outcomes. When employees feel seen and respected, they take ownership of quality — including AI output quality. Culture determines whether AI adoption succeeds or generates expensive noise.


About Louis Carter

Louis Carter is the Founder and CEO of Best Practice Institute (BPI) and Most Loved Workplaces®, a global research and certification organization helping companies build workplaces employees love. He is the creator of the Love of Workplace Index™, a research-based framework used to measure emotional connection between employees and their organizations and predict performance, retention, and culture outcomes. Carter is the author of more than a dozen books on leadership, talent development, and management best practices and has advised Fortune 500 companies, government agencies, and global organizations on leadership and culture transformation. He also hosted the Leader Show, a leadership interview series featured on Newsweek for five years, interviewing executives and leadership experts about leadership and the future of work. His work on workplace culture and leadership has been featured in major publications including Newsweek, The Wall Street Journal, and The Economist. Learn more in “How Louis Carter’s Most Loved Workplace Measures What Really Matters” (New York Business Now) and “Beyond Employer Branding: How Louis Carter Built the Global Standard for Workplace Culture” (NY Tech Media).
