Node - handling stream errors

Hello,

I’m currently developing a Node application to automate a workflow in Asana through the API. I’m getting close to my goal, but lately I have been bothered by some unexpected “Server Error” failures while using client.events.stream.

var streamPRESSE = client.events.stream("1114493480415867", {
    periodSeconds: 1,
    continueOnError: true
});
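
Each stream is then wired up more or less like this (a simplified sketch; handleEvent stands in for my real processing):

streamPRESSE.on('data', function(event) {
    // React to the change detected in the project.
    handleEvent(event);
});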

I have 5 concurrent streams for 5 different projects. They are launched as soon as the app starts, and I listen to their respective 'data' events. But sometimes (I say sometimes but it is very inconsistent) I get this error for one of the streams:

Unhandled rejection Error: Server Error
    at ServerError.AsanaError (/Users/bruere.jb/Desktop/AutoSANA/AUTOSANA/node_modules/asana/lib/errors/error.js:4:11)
    at new ServerError (/Users/bruere.jb/Desktop/AutoSANA/AUTOSANA/node_modules/asana/lib/errors/server_error.js:5:14)
    at Request._callback (/Users/bruere.jb/Desktop/AutoSANA/AUTOSANA/node_modules/asana/lib/dispatcher.js:161:23)
    at Request.self.callback (/Users/bruere.jb/Desktop/AutoSANA/AUTOSANA/node_modules/request/request.js:185:22)
    at Request.emit (events.js:182:13)
    at Request.<anonymous> (/Users/bruere.jb/Desktop/AutoSANA/AUTOSANA/node_modules/request/request.js:1161:10)
    at Request.emit (events.js:182:13)
    at IncomingMessage.<anonymous> (/Users/bruere.jb/Desktop/AutoSANA/AUTOSANA/node_modules/request/request.js:1083:12)
    at Object.onceWrapper (events.js:273:13)
    at IncomingMessage.emit (events.js:187:15)
    at endReadableNT (_stream_readable.js:1094:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)

I understand that there was a problem on Asana’s side, but the issue is that it kills the stream. I thought the option “continueOnError” was meant to prevent this, so I use it on every stream, but it doesn’t. Also, I didn’t find any explanation of this option in the documentation, so I’m actually not sure I really understand its purpose.

Is there a reliable way to ensure that the streams continue running even after a server error, or at least to restart them as soon as they crash?

Last time I tried to run several threads in parallel in JavaScript I reached the rate limit and decided to only do sequential calls… That does not help you, but maybe this is the cause… Are you using the official Node client library?

Hello,

Yes, I’m using the official library.
I also had some issues with the rate limit before, but since then I have optimized my code by using the batch API as much as possible (thus limiting my calls to the API), as well as using a rate-limiting module and a multi-queuing architecture. Therefore I no longer get any rate limit errors.

The thing is that my application runs smoothly 99% of the time. But since I have 5 concurrent streams, if any of them stops due to an unhandled server error I have to restart the whole app manually. It also means that between the moment it crashed and the moment I noticed it crashed, I would have lost all the automation it was supposed to trigger during that time. That is why I’m looking for a way to ensure that if a stream crashes it can restart by itself.

Well, maybe my aim is a bit delusional but I would love to do that to make my app 100% autonomous :smile:


@Joe_Trollo any idea?

Hello,

@Joe_Trollo did you have time to check this issue? My app is now running on my production server and I can clearly see in the logs that the streams fail regularly with 500 errors:

{ Error: Server Error
    at ServerError.AsanaError (/home/asana/CDSANA/node_modules/asana/lib/errors/error.js:4:11)
    at new ServerError (/home/asana/CDSANA/node_modules/asana/lib/errors/server_error.js:5:14)
    at Request._callback (/home/asana/CDSANA/node_modules/asana/lib/dispatcher.js:161:23)
    at Request.self.callback (/home/asana/CDSANA/node_modules/request/request.js:185:22)
    at Request.emit (events.js:198:13)
    at Request.<anonymous> (/home/asana/CDSANA/node_modules/request/request.js:1161:10)
    at Request.emit (events.js:198:13)
    at IncomingMessage.<anonymous> (/home/asana/CDSANA/node_modules/request/request.js:1083:12)
    at Object.onceWrapper (events.js:286:20)
    at IncomingMessage.emit (events.js:203:15)
    at endReadableNT (_stream_readable.js:1129:12)
    at process._tickCallback (internal/process/next_tick.js:63:19)
  message: 'Server Error',
  stack:
    'Error: Server Error\n    at ServerError.AsanaError (/home/asana/CDSANA/node_modules/asana/lib/errors/error.js:4:11)\n    at new ServerError (/home/asana/CDSANA/node_modules/asana/lib/errors/server_error.js:5:14)\n    at Request._callback (/home/asana/CDSANA/node_modules/asana/lib/dispatcher.js:161:23)\n    at Request.self.callback (/home/asana/CDSANA/node_modules/request/request.js:185:22)\n    at Request.emit (events.js:198:13)\n    at Request.<anonymous> (/home/asana/CDSANA/node_modules/request/request.js:1161:10)\n    at Request.emit (events.js:198:13)\n    at IncomingMessage.<anonymous> (/home/asana/CDSANA/node_modules/request/request.js:1083:12)\n    at Object.onceWrapper (events.js:286:20)\n    at IncomingMessage.emit (events.js:203:15)\n    at endReadableNT (_stream_readable.js:1129:12)\n    at process._tickCallback (internal/process/next_tick.js:63:19)',
  status: 500,
  value: { errors: [ [Object] ] } }

The streams concerned are never the same, nor do they all fail at the same time, so those errors look completely random. Maybe you know a possible cause for those errors on Asana’s side?

Hi @JB_BRU,

I don’t want to get off track on this thread - my comment here has nothing to do with your core issue - but just to say, for anyone reading this thread later on: per the Asana docs, it seems using the Batch API doesn’t help at all in terms of Asana’s rate limiting.


Hi @Phil_Seeman,

No problem with getting a bit off track while waiting for my answer :wink: .
I agree with you, using the batch API doesn’t help with Asana’s rate limits. To be more precise, when I said that I “optimized” my code with batch, I meant that it helped me make it tidier.

I had multiple independent scripts running at the same time, and each of them could create/update tasks at different steps of its execution. Therefore I had issues when several of them were concurrently making calls to the API. My solution was to make my scripts “call independent”. Now each script only produces a payload of instructions (creations/updates of tasks), and those payloads are then executed by a single function which uses the batch API. This way I only have to control the rate of this one function instead of monitoring every call in my scripts.

While the batch API doesn’t help in terms of rate limits, it does help you keep your API calls in check, and thus better manage potential rate limit issues.
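
In code terms, the pattern looks roughly like this (a simplified sketch rather than my actual implementation; the dispatcher call is just one way of reaching POST /batch, and the names are illustrative):

// Scripts push "instructions" onto a queue instead of calling the API directly.
var pendingActions = [];

function enqueueAction(action) {
    // e.g. { method: 'post', relative_path: '/tasks', data: { name: 'New task' } }
    pendingActions.push(action);
}

// A single worker flushes the queue through the batch endpoint at a controlled
// rate; the batch API accepts up to 10 actions per request.
function flushQueue() {
    if (pendingActions.length === 0) {
        return;
    }
    var actions = pendingActions.splice(0, 10);
    client.dispatcher.post('/batch', { actions: actions })
        .catch(function(err) {
            console.error('Batch call failed:', err.message);
        });
}

setInterval(flushQueue, 1500);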

Ah, OK, got it - great explanation (and it sounds like a great design change) - thanks!

Hey @JB_BRU,

I’m not too familiar with this part yet, and I can do some more digging if we need to. But as far as I can tell, continueOnError is a lie!

That being said, I think you can do this yourself by listening to the error event.

.on('error', function(event) {
    restartThisStream();
});

Under the hood it’s using Node’s Readable stream class.

I’ll add a task to investigate this further, to see if this option was supposed to work at one point, or to see if there’s something I missed in this pass.


Hi @Ross_Grambo,

Thanks for your answer. If continueOnError is really useless, I think you should modify the example “event.js” in the official Node library on GitHub. It uses this option, and that made me think it was somewhat important. To avoid any confusion it would be safer to remove it from the event-stream example.

Concerning my issue, it seems we came up with the same solution. I listen to the error event to restart the streams, and it seems to be working fine. I still regularly get those 500 errors in my logs, but at least my app can restart its streams by itself.

Sounds good. I’ll take down that example now, and I’ll make a task to look into the 500s.

Are you getting an error phrase with the 500s? It should be in the body and look something like:

"errors": [
  {
    "message": "Server Error",
    "phrase": "6 sad squid snuggle softly"
  }
]

If so, it would be very helpful as it gives us the exact stack trace of the errors you’re receiving.

I’m happy you’re unblocked but I’m curious why you were getting these errors in the first place.

Best,
Ross

Hey @Ross_Grambo, here is the message I get every time:

{"errors":[{"message":"Oops! An unexpected error occurred while processing this request. The input may have contained something the server did not know how to handle. For more help, please contact api-support@asana.com and include the error phrase from this response.","phrase":"14 odd pandas stumble badly"}]}Status: 503\r\nCache-Control: private, no-store, max-age=0, no-cache, must-revalidate, post-check=0, pre-check=0\r\nContent-Type: application/json\r\nContent-Length: 39\r\nX-Asana-Content-String-Length: 39\r\n\r\n{"errors":[{"message":"Server Error"}]}SERIOUS: Exception reached the event loop. Don't just fix the immediate cause, fix the reason it wasn't caught too please!\n' }

It says something about the input being the possible cause, but I don’t think that’s possible as I’m only using client.events.stream with a project gid. Concerning the frequency of the 500s, I get between 2 and 4 per day out of 6 different streams. However, it seems that some streams never fail and some fail more often, but for no apparent reason, as they all stream projects using their gid. Furthermore, the errors mostly happen during the night, when no one on my team is using Asana.

I hope those details help you find the cause of the errors.

Best,
JB

Hmmm, actually I checked and I don’t always get the same phrase; here is another one I got:

{"errors":[{"message":"Oops! An unexpected error occurred while processing this request. The input may have contained something the server did not know how to handle. For more help, please contact api-support@asana.com and include the error phrase from this response.","phrase":"27 zany crickets lope calmly"}]}Status: 503\r\nCache-Control: private, no-store, max-age=0, no-cache, must-revalidate, post-check=0, pre-check=0\r\nContent-Type: application/json\r\nContent-Length: 39\r\nX-Asana-Content-String-Length: 39\r\n\r\n{"errors":[{"message":"Server Error"}]}SERIOUS: Exception reached the event loop. Don't just fix the immediate cause, fix the reason it wasn't caught too please!\n' }

Perfect! Thank you. It’s uniquely generated each time, so different phrases make sense.

I’ll use both of these and get back to you when I find the issue.

Hey @JB_BRU,

We found the error and the API team has a task to take a look. It’s nothing that you’re doing on your side.

In the meantime, if you are actually restarting the stream, you might be missing events when you get a 500, in a situation like this:

Start Stream (Gets token 'ABC')
*event 1, 2, 3 happen*
Stream Requests Events (Sends token 'ABC' -> Gets 1,2,3 & new token 'GSD')
*event 4, 5, 6 happen*
Stream Requests Events (Sends GSD, Gets 4,5,6 & new token TEW)
*event 7, 8 happen*
Stream Requests Events (Sends TEW, Gets a 500 error & no new token)
Start Stream (Gets New Token JKH)
*event 9, 10, 11 happen*
Stream Requests Events (Sends JKH, Gets 9, 10, 11 & a new token UIO)

In this scenario, you miss events 7 & 8 because you don’t get the events between token TEW and JKH.

You could fix this by ensuring that, when you create a new stream, it uses the crashed stream’s token instead (TEW in this example). This could be done by setting the syncToken of your new stream:

eventStream.syncToken = previousSyncToken
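
Sketched out (the names are illustrative, and this assumes the crashed stream object still exposes the last syncToken it held):

function startStream(projectGid, previousSyncToken) {
    var stream = client.events.stream(projectGid, { periodSeconds: 1 });
    if (previousSyncToken) {
        // Resume from where the crashed stream stopped instead of starting fresh.
        stream.syncToken = previousSyncToken;
    }
    stream.on('data', handleEvent); // your existing data handler
    stream.on('error', function(err) {
        console.error('Stream for ' + projectGid + ' died:', err.message);
        // Re-create the stream, carrying the old token over.
        startStream(projectGid, stream.syncToken);
    });
}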

But after looking more at the code, I think the correct way to handle this is:

.on('error', function(event) {
    // Do nothing.
});

It looks like if you catch the error, the stream stays alive and will keep polling, using the old token.
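
Put together, a minimal setup along those lines would look something like this (handleEvent is a placeholder for your own processing):

var stream = client.events.stream(projectGid, { periodSeconds: 1 });

stream.on('data', function(event) {
    handleEvent(event);
});

stream.on('error', function(err) {
    // With a listener attached, the emitted error counts as "handled", so the
    // stream stays alive and keeps polling with its existing sync token.
    console.error('Stream error:', err.message);
});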

You might already be doing one of these things, but I figured this thread is a good place for this anyway! :slight_smile:

Hello @Ross_Grambo,

Well, I had these concerns about missing some events due to the errors, but I thought nothing could be done about it, so thank you for your insights! I will test this right away and see if the streams stay up.
Still, I’m a bit confused about the “correct way” you suggest. I was previously catching the error that way, but the stream would still be killed:

streamEXAMPLE.on('error', function(err) {
    console.log(err);
});

Is your solution working because you use the “event” parameter instead of “err”? I don’t really understand how doing nothing with the event keeps the stream alive :thinking:


I just found an error in my logs, so I checked whether your solution worked or not, and it worked! Well, I still don’t understand why it’s working, but at this point it’s only a matter of curiosity; my problem is solved :+1:.

Once again, thank you for your answers and insights, they helped a lot!

Best,
JB


Awesome! Glad you’re up and running :slight_smile:

I found this solution because I read these lines in event_stream.js:

.catch(function(error) {
    // Failure - emit error. If we survive then the error was "handled"
    // and we'll continue to fetch events.
    me.emit('error', error);
    me._schedule();
  });

So it looks like it just emits the error (which would throw if nothing were listening), and as long as the app doesn’t break, it assumes the error was handled correctly and keeps going.
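
For what it’s worth, that’s standard Node EventEmitter behaviour rather than anything Asana-specific: emitting 'error' with no listener attached throws, while with any listener attached (even one that does nothing) the emit is just an ordinary event, so the me._schedule() call above still runs and polling continues. A tiny standalone illustration:

var EventEmitter = require('events');
var emitter = new EventEmitter();

emitter.on('error', function(err) {
    // Because this listener exists, the emit below is delivered as a normal event...
});

emitter.emit('error', new Error('boom')); // ...without the listener, this line would throw.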