I was experimenting with Asana AI Studio’s auto-tagging capabilities to help categorize incoming tasks based on content. While the feature works decently for high-level topics, I’ve run into an issue where the AI frequently mislabels tasks that include industry-specific jargon or acronyms. For example, a task titled “Prepare QBR deck for SMB clients” often gets tagged under “Design” instead of “Client Management.”
This creates confusion for teams relying on tags for workflow filters.
I have tried refining the dataset by including more context in training examples and increasing the number of manually corrected entries, but the results are inconsistent.
Has anyone else faced this with AI Studio’s model training? Is there a recommended way to train the AI with better semantic understanding, especially in B2B or technical workflows?
I checked the related Asana Help Center documentation and found it quite informative. As part of troubleshooting, I also explored using ChatGPT to simulate user inputs and test the task-labeling behavior, which was surprisingly effective for catching edge cases!
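In case it helps anyone reproduce that kind of testing, here is a minimal sketch of what I mean, written outside AI Studio as a local harness: it sends a jargon-heavy task title to a general-purpose LLM via the OpenAI Python client and compares the tag it picks with and without a glossary in the prompt. The model name, tag list, and glossary entries are illustrative assumptions only, not anything from AI Studio itself.

```python
# Rough local harness (not AI Studio): probe how an LLM tags jargon-heavy
# task titles, with and without a glossary in the prompt. The tag list,
# model name, and glossary below are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TAGS = ["Design", "Client Management", "Engineering", "Finance"]
GLOSSARY = "QBR = Quarterly Business Review; SMB = small and medium-sized business clients"

def predict_tag(task_title: str, use_glossary: bool) -> str:
    system = f"Classify the task into exactly one of these tags: {', '.join(TAGS)}. Reply with the tag only."
    if use_glossary:
        system += f" Use this glossary to interpret jargon: {GLOSSARY}."
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": task_title},
        ],
    )
    return response.choices[0].message.content.strip()

# Edge case from my example: does the glossary change the predicted tag?
title = "Prepare QBR deck for SMB clients"
print("without glossary:", predict_tag(title, use_glossary=False))
print("with glossary:   ", predict_tag(title, use_glossary=True))
```

Swapping in your own tag list and glossary gives a quick signal about whether missing jargon definitions are the root cause before you touch the real rule.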
Would love to hear if others have found similar solutions or have suggestions on improving accuracy in AI-based auto-tagging.
Are you using Team Knowledge to define these terms at the team-level?
I’m not sure if AI Studio can access this (@Arthur_BEGOU Do you know?) but this seems the ideal/most logical place to define this. Both for people onboarding and AI.
An alternative is to define the jargon in a document and link or upload it to the AI Studio rule. (If you have SharePoint or Google Drive, I’d advise linking to prevent redundancy. That also means you can link the same document to multiple rules and keep it in sync with the latest version.)
Maybe try adding this to the prompt:
“Use JARGON.DOC as the leading reference for determining what the terminology used in a task means.”
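To make that concrete, a hypothetical JARGON.DOC doesn’t need to be elaborate; a few lines of definitions plus tagging hints may already help. Purely as an illustration (not from any real setup):

“QBR = Quarterly Business Review (recurring client review meeting) → usually Client Management”
“SMB = small and medium-sized business (a client segment, not a design term)”
“Deck = presentation slides; tag by the process it supports, not the artifact”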
Welcome to the forum, @gabrielladawn! Seconding JR’s second suggestion (and maybe his first, depending on the answer to his question). I’m guessing that the issue is less about acronyms/technical terminology (in your example, I would fully expect it to know what QBR and SMB mean) and more about needing to train it on the specifics of your process, which you would do in either an attached doc or the prompt itself (I’ve used both strategies in a similar use case with pretty good success).
I think this might be a case of simply giving the best prompting and context that you can, then iterating continually to prevent the mistakes you’ve already seen from recurring.
You may even want to enlist an LLM to help you with this (although there are many articles too). For example, you could ask ChatGPT these types of things for your specific prompt/context info, maybe adding even more org-specific detail:
How could I make this prompt more specific?
What context am I missing in this prompt to get a better response?
Can you suggest ways to structure this prompt more effectively?
What are some alternative phrasings I could try for this prompt?
(Sorry, I failed to note where I copied this from originally!)
As I’m doing the AI Studio foundations course, I found this overview. It indicates that the project name is, currently, the only thing AI Studio has access to at the project level.