Hi everyone, I’m Whitney, and I write Asana’s AI-focused LinkedIn newsletter, Work in Tandem.
Going into 2025, there was a lot of talk about an AI revolution at work. And in some ways, it happened: AI adoption surged, tools improved, and work moved faster.
But the more interesting story was messier, and more useful.
What I saw again and again is that AI doesn’t magically make work smoother. It amplifies whatever systems are already in place. Strong systems improve work, while messy ones turn it into polished chaos.
Here are three lessons I’m carrying into 2026, plus a few newsletter editions if you want to go deeper.
Lesson 1: AI doesn’t reduce busywork, it upgrades it (but there’s a fix)
This year, a lot of teams got faster at producing work. The quality, though, wasn’t always there.
We even gave this phenomenon a name: workslop, AI-generated output that looks finished but is often incomplete, incorrect, or missing context.
Workslop showed up everywhere: in our inboxes, decks, docs, and status updates. And the root cause was rarely bad AI. It was the same issues teams have always struggled with, just scaled up: unclear ownership, fuzzy handoffs, missing context, and no shared definition of “done.”
The teams that avoided this trap did a few important things. They invested in training, talked openly about where AI helps (and where it doesn’t), and made accountability clear.
They also invested in their people. Instead of treating AI like a mandate, they encouraged bottom-up adoption and empowered champions. When AI became part of the culture, not an add-on, the quality went up and the cleanup went down.
Go deeper:
Lesson 2: AI agents struggle when we ask them to work alone
The hype says AI agents will run projects, make decisions, and act independently. The reality is that most workplace success still depends on humans for context, tradeoffs, judgment, and small course corrections like, “Wait, that’s not the right stakeholder.”
What showed up again and again this year is that AI agents don’t fail because they’re incapable. They fail because they’re asked to operate without the context humans naturally bring to their work. When agents are dropped into work without clear goals, constraints, ownership, or checkpoints, they lose the plot and make confident mistakes.
AI agents work best as teammates, not tools. When humans stay accountable and give AI the context it needs, agents can actually move work forward.
Go deeper:
Lesson 3: AI impact comes from redesigning work, not adding tools
This was the clearest divider I saw all year.
Some teams treated AI like a pile of tools. Others treated it like infrastructure, redesigning workflows, rethinking ownership, and getting clear on how work moves before automation ever entered the picture.
The difference was obvious. Teams that bolted AI onto broken or fragmented workflows got faster outputs, but also more confusion. AI didn’t fix their work problems; it exposed (and compounded) them.
The teams that saw real impact did something harder first: They redesigned how work moves end to end, then layered in automation where it actually made sense.
Go deeper:
What I’m thinking about going into the new year
If 2025 was the year AI stopped being optional, 2026 is the year our systems have to catch up.