For those of you curious about why we use the Batch API — we use it a lot!
In Bridge24, we run xhr calls from the browser, and Chrome is limited to 6 simultaneous requests.
To make our app faster, when we query all tasks for project X, we only request "id" and "modified_at", and we compare against the local cache to decide which tasks need their other fields re-fetched, based on modified_at. (Not always safe when working with subtasks, but it works most of the time.)
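The cache-diff step can be sketched like this (names and shapes are illustrative, not Bridge24's actual code):

```typescript
// Minimal shape of the lightweight remote response: gid + modified_at only.
interface TaskStamp {
  gid: string;
  modified_at: string;
}

// Given the lightweight remote list and the local cache (gid -> modified_at),
// return the gids whose full data must be re-fetched: tasks that are missing
// from the cache or whose modified_at timestamp has changed.
function staleTaskIds(remote: TaskStamp[], cache: Map<string, string>): string[] {
  return remote
    .filter(t => cache.get(t.gid) !== t.modified_at) // missing or outdated
    .map(t => t.gid);
}
```

Only the tasks returned by this diff go through the expensive follow-up calls; everything else is served from IndexedDB.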
So, if I need additional data for 1000 tasks, that means between 2000 and 3000 extra calls to get all the "root" fields, plus stories, plus possible subtasks.
Yes, I could fetch all the root data in the initial call, but that query would be much slower than requesting only id + modified_at.
So we need to run 2000-3000 more calls to populate the local cache, which we keep in IndexedDB.
And the Asana API can be very slow!! Running those 3000 calls one by one can take up to 50 minutes. Depending on the hour of the day, some very simple calls take 2-3 seconds, while others take 150-250 ms, so we estimate an average of 1000 ms per call.
By running simultaneous requests, a maximum of 6, we can hope to get all of this done in about 10 minutes.
But our goal is to reach the API's rate limit, that is, 1500 calls per minute: 3000 calls in 2 minutes.
The only way we can reach that is by using the Batch API. We have an algorithm that counts how many calls we made in the last 60 seconds, and we adjust the size of each batch, between 1 and 10 actions, to avoid getting "429" responses while staying as close as possible to the 1500 limit!
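The idea behind that throttling algorithm can be sketched as a sliding-window counter plus an adaptive batch size (a simplified illustration under assumed limits of 1500 calls/minute and 10 actions per batch, not our production code):

```typescript
// Sliding 60-second window of completed API calls, used to size the next batch.
class RateWindow {
  private stamps: number[] = [];

  constructor(
    private limit = 1500,       // assumed per-minute API ceiling
    private windowMs = 60_000,  // sliding window length
  ) {}

  // Record `calls` completed calls (a batch of N counts as N calls).
  record(calls: number, now = Date.now()): void {
    for (let i = 0; i < calls; i++) this.stamps.push(now);
  }

  // Count only the calls that fall inside the last 60 seconds.
  callsInWindow(now = Date.now()): number {
    this.stamps = this.stamps.filter(t => now - t < this.windowMs);
    return this.stamps.length;
  }

  // Next batch size: between 1 and 10 actions, capped by the remaining
  // headroom under the per-minute limit so we avoid 429 responses.
  nextBatchSize(now = Date.now()): number {
    const headroom = this.limit - this.callsInWindow(now);
    return Math.max(1, Math.min(10, headroom));
  }
}
```

With full headroom the batches run at 10 actions each; as the window fills up toward 1500, the batch size shrinks, and old timestamps aging out of the window let it grow back.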
That's why the Batch API is so useful in our app. Also, we don't need to wait for any other data or run calls in order: we can query root fields + subtasks + stories of any task in any order, so that's not an issue.