Hi!
I am exploring the stories endpoint and want to clarify its use.
As far as I understand, it is only suited for a one-time download of all stories of a task (https://app.asana.com/api/1.0/tasks/{task_gid}/stories).
But:
- It is impossible to filter stories by the created_at attribute, so an incremental update on a task is also impossible (i.e., downloading only the stories whose created_at is newer than the last one we already have).
- The only way to get stories for all tasks in a project is to loop through ALL of them, and we have no way to check which tasks have new stories and which don't (a rough sketch of that full loop is below).
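To make the limitation concrete, here is a rough sketch (Python with requests; the token and project GID are placeholders) of the full download we would have to run on every sync:

```python
# A rough sketch (not production code) of the full download we would have to
# run on every sync: every task in the project, every story, every time.
# ASANA_TOKEN and PROJECT_GID are placeholders.
import requests

ASANA_TOKEN = "..."
PROJECT_GID = "..."
BASE = "https://app.asana.com/api/1.0"
HEADERS = {"Authorization": f"Bearer {ASANA_TOKEN}"}

def get_all(url, params=None):
    """Follow Asana's offset pagination and yield every item."""
    params = dict(params or {})
    while True:
        resp = requests.get(url, headers=HEADERS, params=params)
        resp.raise_for_status()
        body = resp.json()
        yield from body["data"]
        next_page = body.get("next_page")
        if not next_page:
            break
        params["offset"] = next_page["offset"]

# Full download: every task, every story, on every run.
for task in get_all(f"{BASE}/projects/{PROJECT_GID}/tasks", {"limit": 100}):
    stories = list(get_all(f"{BASE}/tasks/{task['gid']}/stories", {"limit": 100}))
    # ...store the stories locally...
```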
Thank you!
You are right, you have to fetch them one by one.
The first iteration can be long, but the following ones will be faster: I suggest using the "search" API to query for tasks in your selected projects (projects.any=xxx,yyy) that were modified after the last time you queried (modified_at.after=2024-07-19T23:18:45Z).
Then fetch stories only for those tasks to update your local copy.
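A minimal sketch of that incremental flow (Python with requests; the token and GIDs are placeholders, and the search endpoint returns at most 100 results per request, so page manually if you expect more modified tasks than that):

```python
# Incremental sync sketch: search for recently modified tasks, then fetch
# stories only for those tasks. All GIDs and the token are placeholders.
import requests

ASANA_TOKEN = "..."
WORKSPACE_GID = "..."
PROJECT_GIDS = "xxx,yyy"            # your selected projects
LAST_SYNC = "2024-07-19T23:18:45Z"  # when the previous run happened
BASE = "https://app.asana.com/api/1.0"
HEADERS = {"Authorization": f"Bearer {ASANA_TOKEN}"}

# 1. Find tasks in the selected projects that changed since the last run.
search = requests.get(
    f"{BASE}/workspaces/{WORKSPACE_GID}/tasks/search",
    headers=HEADERS,
    params={
        "projects.any": PROJECT_GIDS,
        "modified_at.after": LAST_SYNC,
        "limit": 100,  # search results are capped; page manually if needed
    },
)
search.raise_for_status()

# 2. Fetch stories only for those tasks and merge them into the local copy.
for task in search.json()["data"]:
    stories = requests.get(f"{BASE}/tasks/{task['gid']}/stories", headers=HEADERS)
    stories.raise_for_status()
    # ...update the local copy with stories.json()["data"]...
```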
You could also consider webhooks or events to know more precisely when a story is added. It may end up being more complex to implement but more efficient in terms of fetching story updates.
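For reference, registering a webhook for new stories on a project looks roughly like this (a sketch with placeholder GIDs and target URL; your receiving endpoint must also echo back the X-Hook-Secret header during Asana's handshake):

```python
# Sketch: register a webhook that fires when a story is added to a project.
# PROJECT_GID and TARGET_URL are placeholders.
import requests

ASANA_TOKEN = "..."
PROJECT_GID = "..."
TARGET_URL = "https://example.com/asana/webhook"  # your HTTPS receiver
HEADERS = {"Authorization": f"Bearer {ASANA_TOKEN}"}

resp = requests.post(
    "https://app.asana.com/api/1.0/webhooks",
    headers=HEADERS,
    json={
        "data": {
            "resource": PROJECT_GID,
            "target": TARGET_URL,
            "filters": [{"resource_type": "story", "action": "added"}],
        }
    },
)
resp.raise_for_status()
print(resp.json()["data"]["gid"])  # keep the webhook gid so you can delete it later
```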
@Dmytro_Horodetskyi,
When working with stories, also just keep in mind that because of Asana’s story consolidation, you won’t necessarily get ALL changes that occurred on a task from its stories.
Hey everybody! Thanks a lot for your insights.
Now I have enough information to proceed with our stories script.
Regarding webhooks and events:
- Webhooks. We are still researching whether they are suitable for our use case and application architecture.
- Events. This is the endpoint we are using right now, and we intend to switch from events to stories. The reason is that we have a lot of resources to track, and there are far too many events happening in them.
As a result, our synchronization tokens expire far too quickly: we have to run our scripts every 20 minutes to retrieve the data without interruptions, and we usually can't recover lost data because of this note in the docs:
Sync tokens may not be valid if you attempt to go “backward” in the history by requesting previous tokens, though re-requesting the current sync token is generally safe, and will always return the same results.
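For context, this is roughly what our polling loop does today (a sketch with placeholders for the token and resource GID); the 412 branch is where we lose history:

```python
# Sketch of our current events polling. On a 412, Asana hands back a fresh
# sync token, but any events between the old and new token are gone.
import requests

ASANA_TOKEN = "..."
RESOURCE_GID = "..."   # the project or task we are tracking
BASE = "https://app.asana.com/api/1.0"
HEADERS = {"Authorization": f"Bearer {ASANA_TOKEN}"}

def poll_events(sync_token=None):
    params = {"resource": RESOURCE_GID}
    if sync_token:
        params["sync"] = sync_token
    resp = requests.get(f"{BASE}/events", headers=HEADERS, params=params)
    if resp.status_code == 412:
        # Token missing or expired: we get a new token but no events.
        return [], resp.json()["sync"]
    resp.raise_for_status()
    body = resp.json()
    return body["data"], body["sync"]
```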
So can you please clarify if I am missing something here? Is the events endpoint not suitable for large amounts of data?
Hi @John_Baldo @Phil_Seeman! Sorry to bother you, but could you please take a look at my last post regarding events? Or should I move this discussion into a new post that looks at events in depth?
I only use webhooks; I haven’t used events so I can’t help there, sorry.
Hi, sorry, I'm not clear on what the question is.
Hi, @John_Baldo! Sorry for the confusion.
The main question was whether the events endpoint is suited to large amounts of data, or whether it can be optimized for that.
We have so many events that our synchronization tokens expire after 20 minutes, so this way of retrieving events is just not maintainable.