How to monitor the quality of an AI teammate's answers (and improve it over time)

Hi,

AI teammates are exciting, but there’s no easy way to monitor the quality of their answers beyond the ones you receive yourself. Once your whole team starts using a teammate, visibility becomes tricky.

So we had an idea: update the teammate’s guidance and ask it to multi-home tasks it touches into a specific project where we can review the answers and even rate them. More importantly, we can then adjust the guidance to improve future answers.

At first, we did this quietly. But we realized that could feel misleading if someone used the teammate on a private task. So we made sure the guidance asks the teammate to tell the person when their task is being multi-homed into this audit project.

It’s been fascinating to watch how the team interacts with the teammate, and even better to have a way to actually improve the guidance behind the scenes.

Below is part of our guidance:

For each question, multi-home it into project XXXXXX and fill in the fields associated with that reference project. Also, in your comment, include this exact sentence formatted in italics: "NB: your question will be publicly stored in XXXXXX - feel free to remove it from that project if needed"

PS: credit goes to @Arthur_BEGOU for this idea


We are i.DO, an Asana Solutions Partner, and we document what we learn from using AI Teammates every day!


Thanks for sharing!
Love the idea @Arthur_BEGOU!


Thanks for sharing @Bastien_Siebman


To illustrate, it can look like this for our Teammate who works as a documentation and FAQ assistant.

We collect all answers, and we track:

  • Question category
  • Question complexity
  • Response quality

We actually ask the teammate to fill in the fields itself, and it works quite well, especially if you tell the teammate in a comment whether its answers are right or wrong (which is useful for improving it anyway, as it will save your feedback in its memory).
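
If you want to go one step further, you can export the audit project to CSV (Asana projects can be exported from the project menu) and summarize the ratings. Here is a minimal sketch in Python, assuming a hypothetical export file named teammate_audit.csv, custom-field columns named exactly "Question category" and "Response quality", and a simple Good/Bad quality scale - adjust the names and scale to match your own fields.

```python
import csv
from collections import defaultdict

# Hypothetical CSV export of the audit project; the file name and column
# names are assumptions and should match the custom fields you created.
AUDIT_EXPORT = "teammate_audit.csv"

totals = defaultdict(int)  # answers per question category
good = defaultdict(int)    # answers rated "Good" per category

with open(AUDIT_EXPORT, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        category = row.get("Question category") or "Uncategorized"
        quality = (row.get("Response quality") or "").strip().lower()
        totals[category] += 1
        if quality == "good":
            good[category] += 1

# Print the share of good answers per category, worst first,
# to spot where the guidance needs the most work.
for category in sorted(totals, key=lambda c: good[c] / totals[c]):
    share = good[category] / totals[category]
    print(f"{category}: {good[category]}/{totals[category]} good ({share:.0%})")
```

The categories with the lowest share of good answers are usually the ones where a line or two of extra guidance makes the biggest difference.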
