Hi everyone,
2025 taught me that no one is "ahead" in AI; we're all figuring it out in real time. The times when I had the biggest wins weren't when I was the most polished or prepared. They were when I gave myself permission to just start, stayed curious about what others were trying, and let myself fail fast.
About eight months ago, I began building AI-powered workflows inside my own Asana environment as an experiment.
No big strategy.
No approvals.
Just me trying to do my job better, learn my own product more deeply, and help my customers see real value.
The first workflow I built was straightforward: a five-minute form where clients described the value of Asana in their own words. Once submitted, Asana AI renamed and organized the task, pulled insights from their responses, wrote a clean value story, and multi-homed it into the relevant account plan.
This helped ensure that the value stories shared with me were in the customer's language, not scattered across notes or sitting in my head. It was visible, reusable, and easy for my account team to build on. Several teammates across our global org adopted the workflow. It was a clear, low-risk, high-impact win.
Learning in Public Means Owning the Missteps
But giving yourself permission to learn in public has a flip side: not all experiments will be good. Some will be messy. And learning in public means owning the missteps too. Sometimes the best thing you can do is take the "L," learn, and iterate.
That's what happened when I started building AI Teammates.
One teammate I built was called "Client Hero." I trained it on my client's earnings statements, my call notes, their marketing pushes, and our account team's success goals. The idea was simple: instead of hunting for information, I'd have a living teammate that knew everything and could help me draft plans, spot opportunities, or coach me on alignment. So I invited it into our account plan project.
But "Client Hero" quickly became too eager, jumping into comment threads it wasn't invited to. Its responses weren't wrong, but they weren't asked for, and they interrupted the flow of work.
My wake-up moment came when multiple cross-functional partners messaged me the same thing:
"Hey… can you please make your AI teammate stop?"
It was a humbling reminder that learning in public means sometimes learning publicly what not to do.
That micro-lesson reframed how I talk about AI with some of Asana's largest global customers.
The New Measure of Expertise: From Mastery to Curiosity
For most of my career, credibility came from mastery: being the person who knows.
But AI forced a shift: mastery isn't the starting condition anymore. Curiosity is.
I talk to CIOs, CAIOs, Chief Transformation Officers, executive sponsors, program leaders, engineers, and frontline operators across industries: finance, retail, tech, professional services, hospitality, logistics. And the throughline is always the same:
Everyone is building the plane while flying it.
Nobody I've spoken to, no matter how senior, has said, "We've fully figured out AI."
What they do say is:
- "We're testing fast."
- "We're learning out loud."
- "We don't have the full picture yet, but we can't wait."
- "We can't afford to be wrong, but we definitely can't afford to stand still."
Part of learning out loud meant that I had to let go of the idea that I needed to be an expert before experimenting. The truth is, AI is changing too quickly for any of us to "arrive." Expertise now is measured by the willingness to explore, to ask foundational questions, and to share learnings as you go.
Asking "naive" questions became one of the most important tools I had.
I found myself in rooms with global AI leaders, people building enterprise-wide strategies, designing governance models, standing up AI councils, deploying copilots, and setting multi-year transformation agendas. In the past, I might have tried to anticipate what they expected me to know and prepare accordingly. But in this season, I started saying:
- "Walk me through why you approached it that way."
- "What surprised you?"
- "Where did value actually show up?"
- "What are you still unsure about?"
And, internally at Asana, when I shared my own experiments, some polished, some incomplete, it gave others permission to share theirs too. People open up when you show your unfinished work.
Learning in public is a practice. And it's contagious.
A Call to Action for 2026
I recently walked a CIO through how we are building value-anchored, low-risk AI workflows: grounded use cases that use Asana to reduce ambiguity and increase clarity. She shared a blunt takeaway from an earlier, failed AI rollout to thousands of employees: she is now willing to replace any application that doesn't deliver a 3x ROI from AI within a year.
It reinforced what I'm seeing everywhere: even CIOs are learning in public. Everyone is navigating pivots. Nobody has this perfectly figured out.
Her comment also pointed to a deeper truth: people don't need more AI; they need meaningful AI that measurably advances their most important work.
If there's one message I want everyone, from ICs to CIOs, to walk away with, it's this:
The people who will go furthest with AI aren't the ones with the biggest budgets; they're the ones willing to build, test, learn, and course-correct out loud.