Hi,
I’m Ethan DeWaal, Strategy & Ops Lead for the AI Studio team at Asana. I’m back with a fresh AI Studio Tip for you. This is the first of a two-part guide on rolling out your first AI Studio use case:
Your AI Studio Workflow is Built—Now Let’s Make Sure People Actually Love Using It
So you’ve built your AI Studio workflow. The instructions are written, the actions are connected, and you’re pretty sure it’s going to revolutionize how your team works. But here’s the thing—there’s a big difference between a workflow that technically works and one that your team actually trusts and uses every day.
Before you send that “Hey team, check out this amazing new automation!” message, let’s make sure your workflow can handle the beautiful chaos of real work. This guide will walk you through testing your creation in a way that’s thorough but not overwhelming—think of it as a dress rehearsal before opening night.
Step 1: How Much Testing Do You Actually Need?
Let’s skip the complex matrices and math. Here’s the real question: What happens if this workflow messes up?
The Quick Gut Check
Ask yourself three questions:
- Who sees this? Just your team, or does it touch clients/executives?
- How hard is it to fix mistakes? Can you click “undo” or will you spend hours cleaning up?
- How often will this run? A few times a week or constantly throughout the day?
If you answered “just my team,” “easy to fix,” and “occasionally” → You need about 8 out of 10 test cases to work well. This is your friendly neighborhood automation.
If you got a mix of answers → Aim for 9 out of 10. Most workflows fall here.
If you answered “clients/executives,” “nightmare to fix,” or “constantly” → You want 19 out of 20 working perfectly. This is your mission-critical stuff.
Be honest with yourself. It’s way better to over-test now than to explain to your CEO why the AI just assigned them to review the office lunch menu.
Step 2: Gather Your Test Scenarios (Real Beats Synthetic Every Time)
First Choice: Use Your Real Work
The absolute best test data? Your actual past requests.
Go grab 10-15 real examples from the last month—the good, the bad, and the “what were they thinking?” requests. Copy them exactly as they were written, typos and all. These are gold because they capture how your team actually communicates, not how you think they should.
Backup Plan: Generate Smart Test Cases
No past examples? No problem. Maybe this is a brand new process, or your past requests contain sensitive information. Here’s where AI becomes your testing assistant (yes, using AI to test AI—we’ve gone full inception).
Here’s a prompt that actually works:
I’m testing a workflow that [describe what your workflow does in plain language].
Create 15 realistic test requests that my team might actually submit. Include:
- A few totally normal, boring requests
- Some where people forgot to include important details
- A couple from people who are clearly having a bad day
- Some that are way too detailed (the over-explainers)
- A few edge cases that might break things
- At least one that’s probably out of scope but someone will ask anyway
Write them like real people would—some casual, some formal, some confused.
Mix up the formats too—bullets, paragraphs, one-liners, novels.
For each one, tell me:
1. The request as someone would write it
2. What’s tricky about this test case
3. What should happen
The key here? You want variety. Real work is messy, and your workflow needs to handle that mess gracefully.
Cheat code: Include a screenshot of your request form or submission questions when you run the request-generation prompt.
Making Your Test Project
Create a new Asana project called something like “AI Studio Testing - [Your Workflow Name]” (yes, add an emoji—testing should feel a little fun).
Drop each test scenario in as a task.
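If you have a long list of scenarios (or you plan to repeat this for other workflows), you can script this step instead of pasting tasks by hand. Here’s a minimal sketch using the Asana REST API; the access token, workspace GID, project name, and scenario text below are all placeholder examples you’d swap for your own.

```python
import requests

# Assumptions: a personal access token and a workspace GID (placeholders here).
ASANA_TOKEN = "your-personal-access-token"
WORKSPACE_GID = "1234567890"
HEADERS = {"Authorization": f"Bearer {ASANA_TOKEN}"}
BASE = "https://app.asana.com/api/1.0"

# Your 10-15 scenarios, pasted exactly as people wrote them (typos and all)
test_scenarios = [
    {"name": "Totally normal request", "notes": "Paste the real request text here."},
    {"name": "Missing key details", "notes": "A request with no due date or owner mentioned."},
]

# Create the test project (in an Asana organization you may need to supply a team GID as well)
project_resp = requests.post(
    f"{BASE}/projects",
    headers=HEADERS,
    json={"data": {"name": "AI Studio Testing - [Your Workflow Name]", "workspace": WORKSPACE_GID}},
)
project_resp.raise_for_status()
project_gid = project_resp.json()["data"]["gid"]

# Drop each scenario in as its own task
for scenario in test_scenarios:
    resp = requests.post(
        f"{BASE}/tasks",
        headers=HEADERS,
        json={"data": {**scenario, "projects": [project_gid]}},
    )
    resp.raise_for_status()
    print(f"Created test task: {scenario['name']}")
```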
Step 3: Run Your Tests and Get the Experts Involved
The Testing Dance
Don’t run all 15 tests at once—that’s a recipe for confusion. Instead:
- Run 5 tests and see what happens
- Document everything (what actually happened vs. what you expected; see the tracking sketch after this list)
- Spot patterns (are certain types of requests consistently failing?)
- Fix the obvious stuff before moving to the next batch
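One lightweight way to handle the “document everything” and “spot patterns” steps is a small results log you update after each batch. This is just a sketch: the scenario names and outcomes are made up, and the threshold is whichever tier you picked in Step 1.

```python
# A scratch log of expected vs. actual outcomes, one entry per test run.
# The scenarios and outcomes below are made-up examples.
test_results = [
    {"scenario": "Normal project request", "expected": "Routed to design queue",
     "actual": "Routed to design queue", "passed": True},
    {"scenario": "Request missing a due date", "expected": "Asks the requester for a date",
     "actual": "Guessed a date", "passed": False},
]

passed = sum(1 for r in test_results if r["passed"])
pass_rate = passed / len(test_results)

# Pick the tier from Step 1: 0.80, 0.90, or 0.95
threshold = 0.90
print(f"{passed}/{len(test_results)} passed ({pass_rate:.0%}); target is {threshold:.0%}")

# Spot patterns: list the failures side by side so you can see what keeps breaking
for r in test_results:
    if not r["passed"]:
        print(f"FAILED: {r['scenario']} -> expected '{r['expected']}', got '{r['actual']}'")
```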
Getting Your Subject Matter Experts (SMEs) to Actually Give Feedback
Here’s the thing about SMEs—they’re busy. Don’t just forward them a bunch of test results and ask “Does this look good?”
Instead, make it easy for them:
"Hey [Name], I need 10 minutes of your expertise. I’m testing our new AI workflow and want to make sure it actually helps rather than creates more work. Can you look at these 3 examples and tell me:
- Would this output save you time or create confusion?
- What’s missing that you’d need to add manually?
- On a scale of ‘please no’ to ‘where has this been all my life,’ how does this feel?"
Pro tip: Show them the failures too. Sometimes a “failed” test reveals that your workflow is actually being smarter than your original plan.
Step 4: Fix What’s Broken (Without Losing Your Mind)
When tests fail—and they will—here’s your debugging playbook:
The Usual Suspects
Problem #1: Conflicting Instructions (The “But You Said…” Issue)
Your instructions say “always assign to Sarah” in one place and “distribute based on workload” in another. The AI is confused, and honestly, who can blame it?
Quick fix: Read your instructions out loud. If you hear yourself saying “always” or “never” multiple times about different things, you’ve found your problem.
Problem #2: The Edge Case Avalanche
One weird request breaks everything.
Quick fix: Add a catch-all: “If you’re not sure what to do, assign to [designated human] with a note about what’s unclear.”
Problem #3: Trying to Boil the Ocean
Your workflow is trying to do seventeen things perfectly.
Quick fix: Start with the core use case. Get that working reliably, then add complexity. Your V1 doesn’t need to handle every scenario—it just needs to handle the common ones well.
When to Wave the White Flag
If fewer than half your tests are passing after a few rounds of fixes, it’s time for some tough love: Your use case might be too ambitious for right now.
Consider:
- Breaking it into smaller automations
- Simplifying the decision logic
- Starting with a “human in the loop” version where the AI suggests but doesn’t act
There’s no shame in this. Better to have a simple workflow that works than a complex one that doesn’t.
Step 5: The Pre-Launch Checklist
Before you declare victory, make sure you can check these boxes:
- It works for the boring stuff - Your standard requests succeed almost every time
- It fails gracefully - When it doesn’t know what to do, it asks for help instead of guessing
- Your experts approve - The people who’ll actually use this think it’s helpful
- You can explain it - You can tell someone in one sentence what this workflow does and when to use it
- You know its limits - You’re clear about what it can and can’t handle
Setting Yourself Up for the Rollout
Here’s the thing—even a perfectly tested workflow can fail if people don’t trust it or understand when to use it. That’s why your testing phase should produce two things:
- A workflow that actually works
- Stories about how it works (your test successes become your demo examples)
Document a few of your favorite test cases where the workflow really shined. You’ll use these stories when you introduce the workflow to your team. “Remember that project request from last week that took three back-and-forths to clarify? Watch what happens when I run it through our new workflow…”
People trust stories more than statistics.
Your Testing Timeline (Yes, You Can Do This in a Day or Two)
- Morning Day 1: Gather your test scenarios (real or generated)
- Afternoon Day 1: Set up your test project, run your first 5 tests
- Morning Day 2: Fix obvious issues, run next 5 tests, get SME feedback
- Afternoon Day 2: Final fixes, run remaining tests
- Bonus Day 3: If needed, one more round of refinement
This isn’t a month-long QA process. It’s a focused push to make sure your automation is ready for prime time.
The Bottom Line
Testing isn’t about achieving perfection—it’s about building confidence. You want to know that your workflow will help more than it hurts, and when it does hit a snag, you understand why and how to fix it.
Once you’ve hit your success threshold and your SMEs are giving you thumbs up, you’re ready for the fun part: rolling this out to your team and managing the very human side of automation adoption. But that’s a story for the next guide.
For now, remember: Every hour you spend testing saves your team countless hours of confusion and cleanup. Your future self (and your teammates) will thank you.
Next up (coming soon) 
“How to Roll Out Your First AI Studio Use Case to Your Team — Part 2: Handling Change Management”
Want to be automatically notified when this and other AI Studio Tips go live?
Make sure to follow AI Studio Tips & Workflows!