What is the “headers” property in the response of the /batch endpoint?

Hi everyone, I just noticed that when we use the “/batch” endpoint, each item of the response has a “headers” property.

I had never seen it before, but I’m not sure if it’s new; maybe it has been there for years…

But the content is always exactly the same:

"headers": {
  "keys": [],
  "as_tuples": [],
  "empty": true,
  "traversable_again": true
}

What are they used for?
I don’t know if I can do anything with them, as they are always identical, even the “empty” property, which is true whether the response is empty or not!

I don’t have the answer, but I was curious to see what the benefit of the batch endpoint was. Apparently it counts towards the rate limit just like X separate calls, plus it can fail in the middle, forcing you to re-run part of the requests. Seems like a headache to me…

The other problem I have with it, and the reason I never use it, is that the order of execution within a batch is indeterminate. When I’m doing multiple updates, I almost always need them to run in a certain order.

For those of you who are curious about why we use the batch endpoint: we use it a lot!

In Bridge24, we run XHR calls from the browser. Chrome is limited to 6 simultaneous queries.

To make our app faster, when we query all tasks for project X, we only request “id” and “modified_at”, and we compare against the local cache to get the other fields, based on modified_at. (Not always safe when working with subtasks, but it works most of the time.)
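The cache-comparison step could be sketched like this (all names here are illustrative, not Bridge24’s actual code):

```typescript
// Delta sync: compare a lightweight listing (id + modified_at only)
// against the local cache, and return only the task ids whose data
// actually needs to be re-fetched from the API.

interface TaskStamp {
  gid: string;
  modified_at: string; // ISO 8601 timestamp, as returned by the Asana API
}

function staleTaskIds(
  listing: TaskStamp[],
  cache: Map<string, TaskStamp>
): string[] {
  return listing
    .filter((t) => {
      const cached = cache.get(t.gid);
      // Re-fetch when the task is unknown locally or its timestamp changed.
      return !cached || cached.modified_at !== t.modified_at;
    })
    .map((t) => t.gid);
}
```

Everything returned by `staleTaskIds` then needs the extra per-task calls described below; everything else is served from the cache.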

So, if I need to get additional data for 1000 tasks, I need to make between 2000 and 3000 additional calls to get all the “root” fields, plus stories, plus possible subtasks.
Yes, I could get all the root data on the initial call, but that query would be much slower than just requesting id + modified_at.

So, we need to run 2000-3000 more calls to load the local cache, which we keep inside IndexedDB.
And the Asana API can be very slow!! Running these 3000 calls one by one can take up to 50 minutes. Depending on the hour of the day, some very simple calls can take 2-3 seconds, while others take 150-250 ms, so we estimate an average of 1000 ms per call.

By running simultaneous requests, a maximum of 6, we can hope to get all of these done in an average of 10 minutes.
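A pool that respects the browser’s 6-connection cap could be sketched like this (the task functions stand in for the real XHR/fetch calls; this is not Bridge24’s actual implementation):

```typescript
// Run many async tasks with a fixed concurrency cap, mirroring the
// browser's ~6 simultaneous connections per host. Each worker pulls
// the next unstarted task until none remain.

async function runWithLimit<T>(
  tasks: (() => Promise<T>)[],
  limit: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;

  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++; // safe: no await between the check and the increment
      results[i] = await tasks[i]();
    }
  }

  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker)
  );
  return results;
}
```

With 3000 one-second calls and `limit = 6`, this works out to roughly 3000 / 6 = 500 sequential rounds, or about 8-10 minutes, matching the estimate above.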

But our goal is to reach the maximum limit of the API, that is, 1500 calls per minute: 3000 calls in 2 minutes.

The only way we can reach that is by using the Batch API. We have an algorithm that counts how many calls we made in the last 60 seconds, and we adjust the size of each batch, between 1 and 10 actions, to avoid getting “429” errors while staying as close as possible to the 1500 limit!
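The counting idea could be sketched as a sliding-window budget like this (the 1500/min cap and the 1-10 batch range come from the post; the sizing rule and all names are illustrative):

```typescript
// Sliding-window call counter: record a timestamp per call, count how
// many landed in the trailing 60 s, and size the next batch (1..10)
// so the total stays under the per-minute cap.

class CallBudget {
  private stamps: number[] = [];

  constructor(
    private readonly capPerMinute = 1500,
    private readonly now: () => number = Date.now // injectable for testing
  ) {}

  record(calls = 1): void {
    const t = this.now();
    for (let i = 0; i < calls; i++) this.stamps.push(t);
  }

  /** Calls made in the trailing 60-second window. */
  usedLastMinute(): number {
    const cutoff = this.now() - 60_000;
    this.stamps = this.stamps.filter((t) => t > cutoff); // drop expired stamps
    return this.stamps.length;
  }

  /** Batch size between 1 and 10 that keeps us under the cap. */
  nextBatchSize(): number {
    const remaining = this.capPerMinute - this.usedLastMinute();
    return Math.max(1, Math.min(10, remaining));
  }
}
```

Each batch of N actions counts as N calls against the budget, so after `record(n)` the next `nextBatchSize()` shrinks automatically as the window fills up.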

That’s why, in our app, the batch API is very useful. Also, we don’t need to wait for any other data or run in order: we can query root + subtasks + stories of any task in any order, so the indeterminate ordering is not an issue for us.
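For anyone who hasn’t used it: per the Asana API docs, a batch request bundles up to 10 actions into a single POST to /batch, each action naming a relative_path and method. A request grouping the root + stories + subtasks calls for one task might look like this (the gid is made up):

```json
{
  "data": {
    "actions": [
      { "relative_path": "/tasks/123", "method": "get" },
      { "relative_path": "/tasks/123/stories", "method": "get" },
      { "relative_path": "/tasks/123/subtasks", "method": "get" }
    ]
  }
}
```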


wow :scream: thanks for sharing

Yeah @Frederic_Malenfant thanks for that info - very useful to people, and interesting as well!