Sending many requests from Node.js to an API causes error - javascript

I have more than 2000 users in my database. When I try to broadcast a message to all of them, it sends only about 200 requests before my server stops with an error like the one below:
{ Error: connect ETIMEDOUT 31.13.88.4:443
    at Object.exports._errnoException (util.js:1026:11)
    at exports._exceptionWithHostPort (util.js:1049:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1090:14)
  code: 'ETIMEDOUT',
  errno: 'ETIMEDOUT',
  syscall: 'connect',
  address: '31.13.88.4',
  port: 443 }
Sometimes I get a different error instead:
Error!: Error: socket hang up
This is my request :
function callSendAPI(messageData) {
  request({
    uri: 'https://graph.facebook.com/v2.6/me/messages',
    qs: { access_token: '#####' },
    method: 'POST',
    json: messageData
  }, function (error, response, body) {
    if (!error && response.statusCode == 200) {
      var recipientId = body.recipient_id;
      var messageId = body.message_id;
      if (messageId) {
        console.log("Successfully sent message with id %s to recipient %s",
          messageId, recipientId);
      } else {
        console.log("Successfully called Send API for recipient %s",
          recipientId);
      }
    } else {
      console.error("Failed calling Send API");
      console.log(error);
    }
  });
}
I have tried using setTimeout to make the API call wait for a while:
setTimeout(function () { callSendAPI(data) }, 200);
Has anyone faced a similar error?
EDITED
I'm using the Messenger Platform, which supports a high rate of calls to the Send API and is not limited to 200 calls.

You may be hitting Facebook API limits. To throttle the requests you should space each request out from the previous one. You didn't include the code where you iterate over all users, but I suspect you do it in a loop, and if you use setTimeout to delay every request by a flat 200ms, then all the requests still fire at essentially the same time as before - just 200ms later.
What you can do is:
You can use setTimeout and add a variable delay for every request (not recommended)
You can use Async module's series or parallelLimit (using callbacks)
You can use Bluebird's Promise.mapSeries or Promise.map with concurrency limit (using promises)
Option 1 is not recommended because it is still fire-and-forget (unless you add more complexity to it), and you still risk going over the limit with too much concurrency, because you only control when the requests start, not how many are outstanding at any given moment.
Options 2 and 3 are mostly the same but differ in using callbacks or promises. In your example you're using callbacks, but your callSendAPI doesn't take its own callback, which it should if you want option 2 to work - or, alternatively, it should return a promise if you want option 3 to work.
For more info see the docs:
https://caolan.github.io/async/docs.html#parallelLimit
https://caolan.github.io/async/docs.html#series
http://bluebirdjs.com/docs/api/promise.map.html
http://bluebirdjs.com/docs/api/promise.mapseries.html
Of course there are more ways to do it but those are the most straightforward.
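As a sketch of the idea behind option 3, here is a minimal concurrency-limited runner in plain promises (no Bluebird; this is an illustration of the technique, not the library itself, and `sendToUser` in the usage comment is a hypothetical promise-returning wrapper around callSendAPI):

```javascript
// Runs `worker` over `items`, keeping at most `limit` calls in flight.
function mapWithConcurrency(items, worker, limit) {
  const results = new Array(items.length);
  let next = 0;

  function runNext() {
    if (next >= items.length) return Promise.resolve();
    const i = next++;
    return Promise.resolve(worker(items[i])).then(res => {
      results[i] = res;
      return runNext(); // pick up the next item when this one finishes
    });
  }

  // Start `limit` chains; together they never exceed the limit.
  const starters = [];
  for (let k = 0; k < Math.min(limit, items.length); k++) {
    starters.push(runNext());
  }
  return Promise.all(starters).then(() => results);
}

// Hypothetical usage, assuming sendToUser returns a promise:
// mapWithConcurrency(users, sendToUser, 5).then(() => console.log('done'));
```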
Ideally, if you want to fully utilize a 200-requests-per-hour limit, you should queue the requests yourself and make them at intervals that correspond to that limit. Sometimes, if you haven't made many requests in the past hour, you won't need delays; sometimes you will. What you should really do is queue all requests centrally and empty the queue at intervals corresponding to the already-used portion of the limit, which you would have to track yourself - but that can be tricky.
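The central-queue idea can be sketched roughly like this (all names here are illustrative; in practice the interval would be derived from your own tracking of the rate limit, and sendFn would be something like callSendAPI):

```javascript
// A central queue drained at a fixed interval: one request per tick
// keeps the outgoing rate steady regardless of how fast items arrive.
function createRateQueue(sendFn, intervalMs) {
  const queue = [];
  const timer = setInterval(() => {
    if (queue.length === 0) return;
    sendFn(queue.shift());
  }, intervalMs);

  return {
    enqueue: item => queue.push(item),
    stop: () => clearInterval(timer),
    size: () => queue.length
  };
}
```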

It sounds like you are hitting a rate limit.
From the Facebook documentation:
Your app can make 200 calls per hour per user in aggregate.
You can check the dashboard to see if you are hitting the rate limit in these cases.

Related

response from server to all clients in multiple client and single server model

I have a Moxa E1212 connected to a system; for communication I use Modbus TCP (the jsmodbus package) and have multiple clients.
When a client sends a request to the Moxa (like turning on an LED), I need the other clients to be aware of it in real time, or the server to broadcast its response to all clients.
How can I implement something like this?
Currently I read the status every 200 ms in a loop to notice if there are any changes:
setInterval(function () {
  client.readCoils(start, count)
    .then(function (resp) {
      // console.log(resp)
      // socket.end()
    }).catch(function () {
      console.error(arguments)
      socket.end()
    })
}, 200)
Broadcast does not exist in the Modbus protocol; the only way for all clients to know about the coil state is to read it periodically, which seems to be what you are already doing.
The MQTT protocol would be a better fit for what you intend to do, but it seems this device does not support it.

Implementing Slack slash command delayed responses

I built a Slack slash command that communicates with a custom Node API and POSTS acronym data in some way, shape, or form. It either gets the meaning of an acronym or adds/removes a new acronym to a Mongo database.
The command works pretty well so far, but Slack occasionally returns a timeout error because it expects a response within 3 seconds. As a result, I'm trying to implement delayed responses. I'm not sure that I am implementing delayed responses properly for my Slack slash command & Node API.
This resource on Slack slash commands has information on delayed responses. The idea is that I want to send a 200 response immediately to let the Slack user know that their request has been processed. Then I want to send a delayed response to slackReq.response_url that isn't constrained by the 3-second time limit.
The Code
let jwt = require('jsonwebtoken');
let request = require('request');
let slackHelper = require('../helpers/slack');

// ====================
// Slack Request Body
// ====================
// {
//   "token":"~",
//   "team_id":"~",
//   "team_domain":"~",
//   "channel_id":"~",
//   "channel_name":"~",
//   "user_id":"~",
//   "user_name":"~",
//   "command":"~",
//   "text":"~",
//   "response_url":"~"
// }

exports.handle = (req, res) => {
  let slackReq = req.body;
  let token = slackReq.token;
  let teamId = slackReq.team_id;

  if (!token || !teamId || !slackHelper.match(token, teamId)) {
    // Handle an improper Slack request
    res.json({
      response_type: 'ephemeral',
      text: 'Incorrect request'
    });
  } else {
    // Handle a valid Slack request
    slackHelper.handleReq(slackReq, (err, slackRes) => {
      if (err) {
        res.json({
          response_type: 'ephemeral',
          text: 'There was an error'
        });
      } else {
        // NOT WORKING - Immediately send a successful response
        res.json({
          response_type: 'ephemeral',
          text: 'Got it! Processing your acronym request...'
        });

        let options = {
          method: 'POST',
          uri: slackReq.response_url,
          body: slackRes,
          json: true
        };

        // Send a delayed response with the actual acronym data
        request(options, err => {
          if (err) console.log(err);
        });
      }
    });
  }
};
What's Happening Right Now
Say I want to find the meaning of acronym NBA. I go on Slack and shoot out the following:
/acronym NBA
I then hit the 3-second timeout error: Darn – that slash command didn't work (error message: Timeout was reached). Manage the command at slash-command.
I send the request a few more times (2 to 4 times), and then the API finally returns everything at once:
Got it! Processing your acronym request...
NBA means "National Basketball Association".
What I Want to Happen
I go on Slack and shoot out the following:
/acronym NBA
I immediately get the following:
Got it! Processing your acronym request...
Then, outside of the 3-second window, I get the following:
NBA means "National Basketball Association".
I never hit a timeout error.
Conclusion
What am I doing wrong here? For some reason, that res.json() with the processing message isn't immediately being sent back. What can I do to fix this?
Thank you in advance!
Edit 1
I tried to replace the res.json() call with res.sendStatus(200).json(), but unfortunately, that only returned an 'OK' without actually processing the request.
I subsequently tried res.status(200).send({..stuff..}) but that resulted in the same problem I was having before.
I think res.json() sends a 200 automatically anyway, but it's just not responding fast enough for some reason.
Solution
I eventually figured this one out. I was implementing the delayed responses right all along.
Since I'm using the free plan for Heroku, the dyno that's hosting my app would go down after 30 minutes of inactivity. When the app went down, the first few requests would time out on Slack before properly responding to a request.
The solution to this is either 1) upgrade to a new plan that keeps the dyno active at all times, or 2) ping the app with a simple get request every 15 or so minutes, like so:
const intervalMins = 15;
setInterval(() => {
http.get("<insert app url here>");
console.log('Ping!');
}, intervalMin * 60000)
I decided to go with the latter option. I don't run into the issue of the dyno sleeping anymore. I'd check this article for more details.

NodeJS sending e-mails with a delay

I'm using Nodemailer to send mailings from my NodeJS / Express server. Instead of sending the mail directly, I want to wait 20 minutes before sending it. I think this feels more personal than sending the mail right away.
But I have no idea how to achieve this. I guess I don't need something like a NodeJS cron job (like the NodeCron package), or do I?
router.post('/', (req, res) => {
  const transporter = nodemailer.createTransport(smtpTransport({
    host: 'smtp.gmail.com',
    port: 465,
    auth: {
      user: 'noreply#domain.nl',
      pass: 'pass123'
    }
  }));

  const mailOptions = {
    from: `"${req.body.name}" <${req.body.email}>`,
    to: 'info#domain.nl',
    subject: 'Form send',
    html: `Content`
  };

  transporter.sendMail(mailOptions, (error, info) => {
    if (error) return res.status(500).json({ responseText: error });
    res.status(200).json({ responseText: 'Message send!' });
  });
});
My router looks as shown above. So when the POST is called, I want the request to wait 20 minutes. Instead of a cron job, I want to execute it just once, but with a bit of a delay. Any suggestions on how to do this?
Well, some folks may come here and tell you to use an external queue system and bla bla... but you could simply use plain old JavaScript to schedule the sending 20*60*1000 milliseconds into the future. :)
There's however a problem with your code: you're waiting for the mailer to succeed before sending the 200 'Message sent' response to the user. Call me a madman, but I'm pretty sure the user won't be staring at the browser window for 20 minutes, so you should answer as soon as possible and then schedule the mail. Modifying your code:
router.post('/', (req, res) => {
  const DELAY = 20 * 60 * 1000; // min * secs * milliseconds

  const transporter = nodemailer.createTransport(smtpTransport({
    host: 'smtp.gmail.com',
    port: 465,
    auth: {
      user: 'noreply#domain.nl',
      pass: 'pass123'
    }
  }));

  const mailOptions = {
    from: `"${req.body.name}" <${req.body.email}>`,
    to: 'info#domain.nl',
    subject: 'Form send',
    html: `Content`
  };

  res.status(200).json({ responseText: 'Message queued for delivery' });

  setTimeout(function () {
    transporter.sendMail(mailOptions, (error, info) => {
      if (error)
        console.log('Mail failed!! :(');
      else
        console.log('Mail sent to ' + mailOptions.to);
    });
  }, DELAY);
});
There are, however, several possible flaws in this solution. If you're expecting heavy traffic on that endpoint, you could end up with a large number of scheduled callbacks pending in memory. In addition, if something fails, the user of course won't be able to know.
If this is a big / serious project, consider using that cron-job package, or an external storage mechanism where you can queue these "pending" messages (Redis would do, and it's incredibly simple), and have a different process read tasks from there and perform the email sending.
EDIT: I saw some more things in your code.
1) You probably don't need to create a new transport inside your POST handler; create it outside and reuse it.
2) In addition to the problems mentioned, if your server crashes, no email will ever be sent.
3) If you still want to do it in a single Node.js app, instead of scheduling an email on every request to this endpoint, you'd be better off storing the email data (from, to, subject, body) somewhere and scheduling a function every 20 minutes that gets all pending emails, sends them one by one, and then reschedules itself to re-run 20 minutes later. This keeps your memory usage low. A server crash still loses all pending emails, but if you add Redis into the mix, you can simply grab all pending emails from Redis when your app starts.
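Point 3 could be sketched roughly like this, with an in-memory array standing in for Redis (all names here are made up; sendFn would be transporter.sendMail in the real app):

```javascript
// In-memory stand-in for a persistent store such as Redis.
const pendingEmails = [];

function queueEmail(mailOptions) {
  pendingEmails.push(mailOptions);
}

// Sends everything currently pending.
function drainPending(sendFn) {
  while (pendingEmails.length > 0) {
    sendFn(pendingEmails.shift());
  }
}

// The 20-minute cycle: drain, then re-run intervalMs later.
function startMailLoop(sendFn, intervalMs) {
  drainPending(sendFn);
  return setTimeout(() => startMailLoop(sendFn, intervalMs), intervalMs);
}
```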
Probably too much for an answer, sorry if it wasn't needed! :)
I think CharlieBrown's answer is correct and since I had two answers in my mind while reading the question, I thank him for simplifying my answer to be the alternative of his.
setTimeout is actually a good idea, but it has a drawback: if the server code stops for any reason (server restart, module installation, file management, etc.), the callbacks scheduled via setTimeout will never be executed and some users will not receive their emails.
If the problem above is serious-enough, then you might want to store scheduled emails to be sent in the database or into Redis and use a cron job to periodically check the email set and send the emails if there are some.
I think that either this answer or CharlieBrown's should suffice for you, depending on your preferences and needs.

How to timeout if connection not establish

Is there a default timeout value so that after a number of tries, if the connection is not established, I get a timeout from the socket.io API? In my application I try to connect to a Node.js server using socket.io, but if the connection cannot be established or the server is unreachable, I want to get some event after x number of tries so I can inform the user that there is a connection problem with the server. Instead, my client continuously tries to connect and prints the following exception to the console:
socket.io-1.3.5.js:2 GET https://chatapp.local:8898/socket.io/?EIO=3&transport=polling&t=1485528658982-172 net::ERR_CONNECTION_REFUSED
Here is my code:
socket = io(socketUrl, {'force new connection': true});

socket.on('connect', function () {
  uiHandler("socket.connect");
});
socket.on('error', function (err) {
  uiHandler("socket.error", {error: err});
});
socket.on('disconnect', function () {
  uiHandler("socket.disconnect");
});
socket.on('end', function () {
  uiHandler("socket.end");
});
How can I set a timeout if the connection is not established within 30 seconds? Any suggestions, please.
From what I read in the API docs, you can set the timeout value and the number of retries for each connection, so if you want to try for 30 seconds you basically have:
maxTime = timeout * reconnectionAttempts
Please note that there is a delay between retries (which defaults to 1000 ms) and a randomization factor. If you want total control over the duration before reporting a connection error to your users, you will have to tinker with these a little.
From the API docs you can also see that each time a connection attempt fails, an error is emitted as either connect_timeout or connect_error. If every available attempt fails, a reconnect_failed event is fired. Then you will be able to tell your user that something went wrong.
In a more general way you have several options to implement a control over an asynchronous process. Two come to mind immediately : promises and observables. You might want to explore them for a more general & extensible approach.
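As a sketch of the promise-based approach: a generic wrapper that rejects if an underlying operation (a connection attempt, say) hasn't settled within a deadline. Nothing here is socket.io-specific, and the names are illustrative:

```javascript
// Rejects with a timeout error if `promise` doesn't settle within `ms`.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((resolve, reject) => {
    timer = setTimeout(() => reject(new Error('connection timed out')), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// Hypothetical usage, assuming connectSomehow() returns a promise:
// withTimeout(connectSomehow(), 30000)
//   .catch(err => uiHandler('socket.error', { error: err }));
```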
Please feel free to ask in the comments if you want more details or if I haven't answered properly.

ElasticSearch AWS request Timeout

I have an ElasticSearch instance running in AWS which I connect to via the JavaScript client in a MeteorJS application. There was no issue creating mappings (indices and analyzers) or updating them.
The problem arises with index, update or delete requests to the instance. After serving about 200 requests, the ElasticSearch instance starts throwing request timeout errors with code 408. Initially I thought making many single requests was the cause, so I decided to do bulk pushes. Below is the snippet for the bulk push request:
var bulk = SearchService.ElasticQueue.splice(0, 1000);
console.log('Size: ', bulk.length);

if (bulk.length > 0) {
  EsClient.bulk({
    body: bulk
  }, function (error, response) {
    if (!error) {
      console.log(response);
    } else {
      console.log(error);
    }
  });
}
The SearchService.ElasticQueue is a form of queue, and a cron job runs frequently to fetch data from it and run bulk requests. I also tried reducing the number of documents per bulk request and increased requestTimeout in the connection config, but neither seems to help. I would appreciate any suggestions.
Thanks.
One way you can handle this is to pass:
wait_for_completion=false
This returns a task ID, and you can then pull the result using that task ID.
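The polling side could look something like this generic helper (illustrative sketch; with the JavaScript client, checkFn would be a call to the Tasks API with the returned task ID):

```javascript
// Calls checkFn every intervalMs until it reports completion,
// then resolves with whatever checkFn returned.
function pollUntilDone(checkFn, intervalMs) {
  return new Promise((resolve, reject) => {
    function tick() {
      checkFn().then(status => {
        if (status.completed) {
          resolve(status);
        } else {
          setTimeout(tick, intervalMs); // not done yet, try again later
        }
      }).catch(reject);
    }
    tick();
  });
}
```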
