Implementing Slack slash command delayed responses - javascript

I built a Slack slash command that POSTs acronym data to a custom Node API. It either gets the meaning of an acronym or adds/removes an acronym in a Mongo database.
The command works pretty well so far, but Slack occasionally returns a timeout error because it expects a response within 3 seconds. As a result, I'm trying to implement delayed responses, but I'm not sure I'm implementing them properly for my Slack slash command & Node API.
This resource on Slack slash commands has information on delayed responses. The idea is that I want to send a 200 response immediately to let the Slack user know that their request has been processed. Then I want to send a delayed response to slackReq.response_url that isn't constrained by the 3-second time limit.
The Code
let jwt = require('jsonwebtoken');
let request = require('request');
let slackHelper = require('../helpers/slack');

// ====================
// Slack Request Body
// ====================
// {
//   "token": "~",
//   "team_id": "~",
//   "team_domain": "~",
//   "channel_id": "~",
//   "channel_name": "~",
//   "user_id": "~",
//   "user_name": "~",
//   "command": "~",
//   "text": "~",
//   "response_url": "~"
// }

exports.handle = (req, res) => {
  let slackReq = req.body;
  let token = slackReq.token;
  let teamId = slackReq.team_id;

  if (!token || !teamId || !slackHelper.match(token, teamId)) {
    // Handle an improper Slack request
    res.json({
      response_type: 'ephemeral',
      text: 'Incorrect request'
    });
  } else {
    // Handle a valid Slack request
    slackHelper.handleReq(slackReq, (err, slackRes) => {
      if (err) {
        res.json({
          response_type: 'ephemeral',
          text: 'There was an error'
        });
      } else {
        // NOT WORKING - Immediately send a successful response
        res.json({
          response_type: 'ephemeral',
          text: 'Got it! Processing your acronym request...'
        });

        let options = {
          method: 'POST',
          uri: slackReq.response_url,
          body: slackRes,
          json: true
        };

        // Send a delayed response with the actual acronym data
        request(options, err => {
          if (err) console.log(err);
        });
      }
    });
  }
};
What's Happening Right Now
Say I want to find the meaning of acronym NBA. I go on Slack and shoot out the following:
/acronym NBA
I then hit the 3-second timeout error: "Darn – that slash command didn't work (error message: Timeout was reached). Manage the command at slash-command."
I send a request a few more times (2 to 4 times), and then the API finally returns, all at once:
Got it! Processing your acronym request...
NBA means "National Basketball Association".
What I Want to Happen
I go on Slack and shoot out the following:
/acronym NBA
I immediately get the following:
Got it! Processing your acronym request...
Then, outside of the 3-second window, I get the following:
NBA means "National Basketball Association".
I never hit a timeout error.
Conclusion
What am I doing wrong here? For some reason, that res.json() with the processing message isn't immediately being sent back. What can I do to fix this?
Thank you in advance!
Edit 1
I tried replacing the res.json() call with res.sendStatus(200).json(), but unfortunately that only returned an 'OK' without actually processing the request. (res.sendStatus(200) ends the response with the body 'OK', so the chained .json() never gets a chance to send anything.)
I subsequently tried res.status(200).send({..stuff..}), but that resulted in the same problem I was having before.
I think res.json() sends a 200 automatically anyway; it's just not responding fast enough for some reason.
Solution
I eventually figured this one out. I was implementing the delayed responses correctly all along.
Since I'm on Heroku's free plan, the dyno hosting my app goes to sleep after 30 minutes of inactivity. While the app was waking up, the first few requests would time out on Slack before it could respond.
The solution is to either 1) upgrade to a plan that keeps the dyno awake at all times, or 2) ping the app with a simple GET request every 15 minutes or so, like so:
const http = require('http');
const intervalMins = 15;

setInterval(() => {
  http.get("<insert app url here>");
  console.log('Ping!');
}, intervalMins * 60000);
I decided to go with the latter option, and I no longer run into the dyno-sleeping issue. I'd recommend checking this article for more details.

Related

How to constantly send messages from a Node.js server to my front end

How to constantly update my front end dashboard with new information from the back end.
I have been searching for a solution online, but couldn't find one.
I already know how to send static variables with EJS, but I can't figure out how to update my front end with new messages from the server.
I am working with Express for the server and EJS for templating, plus server-side JavaScript.
I want to constantly send messages to the user, something like "page 3 of 100...", "page 10 of 100...", and so forth. If you have experience with Node.js, kindly help me out. Thanks.
You could use long polling to solve your problem. Long polling works like this:
A request is sent to the server.
The server doesn't close the connection until it has a message to send.
When a message appears, the server responds to the request with it.
The browser makes a new request immediately.
Having a request already sent and a pending connection with the server is the standard state for this method; the connection is only re-established when a message is delivered.
If the connection is lost because of, say, a network error, the browser immediately sends a new request. A sketch of a client-side subscribe function that makes long requests:
async function subscribe() {
  let response = await fetch("/subscribe");

  if (response.status == 502) {
    // Status 502 is a connection timeout error,
    // may happen when the connection was pending for too long,
    // and the remote server or a proxy closed it
    // let's reconnect
    await subscribe();
  } else if (response.status != 200) {
    // An error - let's show it
    showMessage(response.statusText);
    // Reconnect in one second
    await new Promise(resolve => setTimeout(resolve, 1000));
    await subscribe();
  } else {
    // Get and show the message
    let message = await response.text();
    showMessage(message);
    // Call subscribe() again to get the next message
    await subscribe();
  }
}

subscribe();
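For completeness, here is a minimal sketch of what the matching server side could look like with Express. The /subscribe route matches the fetch above; the subscribers array and the broadcast() helper are hypothetical names for illustration:
const express = require('express');
const app = express();

// Responses waiting for the next message
let subscribers = [];

// Long-poll endpoint: hold the response open until a message arrives
app.get('/subscribe', (req, res) => {
  subscribers.push(res);
  // Drop the subscriber if the client disconnects
  req.on('close', () => {
    subscribers = subscribers.filter(s => s !== res);
  });
});

// Call this from your processing loop, e.g. broadcast('page 3 of 100...')
function broadcast(message) {
  subscribers.forEach(res => res.send(message));
  subscribers = []; // each client re-subscribes right away
}

app.listen(3000);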
Hope this helps!

How to notify HTTP client of the completion of a long task

I have a Node.js system that uploads a large number of objects to MongoDB and creates folders in Dropbox for each object. This takes around 0.5 seconds per object, so in situations where I have many objects it could take up to around a minute. What I currently do is notify the client that the array of objects has been accepted, using a 202 response code. However, how do I then notify the client of completion a minute later?
app.post('/BulkAdd', function (req, res) {
  issues = [];
  console.log(req.body);
  res.status(202).send({ response: "Processing" });
  api_functions.bulkAdd(req.body).then((failed, issues, success) => {
    console.log('done');
  });
});
bulkAdd: async function (req, callback) {
  let failed = [];
  let issues = [];
  let success = [];
  i = 1;
  await req.reduce((promise, audit) => {
    // return promise.then(_ => dropbox_functions.createFolder(audit.scanner_ui)
    let globalData;
    return promise.then(_ => this.add(audit)
      .then((data) => { globalData = data; return dropbox_functions.createFolder(data.ui, data); },
            (error) => { failed.push({ audit: audit, error: 'There was an error adding this case to the database' }); console.log(error); })
      .then((data) => { console.log(data, globalData); return dropbox_functions.checkScannerFolderExists(audit.scanner_ui); },
            (error) => { issues.push({ audit: globalData, error: 'There was an error creating the case folder in dropbox' }); })
      .then((data) => { return dropbox_functions.moveFolder(audit.scanner_ui, globalData.ui); },
            (error) => { issues.push({ audit: globalData, error: 'No data folder was found so an empty one was created' }); return dropbox_functions.createDataFolder(globalData.ui); })
      .then(() => success.push({ audit: globalData }), issues.push({ audit: globalData, error: 'Scanner folder found but items not moved' }))
    );
  }, Promise.resolve()).catch(error => { console.log(error); });
  return (failed, issues, success);
},
Well, the problem with making the client request wait is that it will time out after a certain period, or sometimes show an error with no response received.
What you can do is:
- Make a client request to the server to initiate the task, return 200 OK, and keep doing your task on the server.
- Write a file on the server after the insertion of every object, as a status.
- Read the file from the client every 5-10 seconds to check whether the server has finished creating the objects.
- Meanwhile, while the task is not completed on the server, show the status with a completion percentage or some animation.
Or simply implement webhooks or WebSockets to maintain communication. A sketch of the status-file approach is below.
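A minimal sketch of the status-file approach, assuming Express; the /status route, the status.json file name, and the per-object progress callback passed to bulkAdd are hypothetical additions for illustration:
const fs = require('fs');
const express = require('express');
const app = express();
app.use(express.json());

const STATUS_FILE = 'status.json'; // hypothetical status file

app.post('/BulkAdd', (req, res) => {
  // Accept the work immediately, as in the question
  res.status(202).send({ response: 'Processing' });
  const total = req.body.length;
  let done = 0;
  // Assumes bulkAdd is adapted to invoke a callback after each object is processed
  api_functions.bulkAdd(req.body, () => {
    done++;
    fs.writeFileSync(STATUS_FILE, JSON.stringify({ done, total, complete: done === total }));
  });
});

// The client polls this endpoint every 5-10 seconds
app.get('/status', (req, res) => {
  fs.readFile(STATUS_FILE, 'utf8', (err, data) => {
    if (err) return res.status(404).send({ error: 'No status yet' });
    res.send(JSON.parse(data));
  });
});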

iisnode encountered an error when processing the request. HRESULT: 0x6d HTTP status: 500 HTTP subStatus: 1013

I'm developing a webapp using ReactJS for the frontend and Express for the backend, and I'm deploying it to Azure.
To test whether my requests are going through, I wrote two different API requests.
The first one is very simple:
router.get('/test', (req, res) => {
  res.send('test was a success');
});
Then in the frontend I have a button which, when clicked, makes the request, and I get the response 'test was a success'. This works every time.
The second test is:
router.post('/test-email', (req, res) => {
  let current_template = 'reset';
  readHTMLFile(__dirname + '/emails/' + current_template + '.html', function (err, html) {
    let htmlSend;
    let template = handlebars.compile(html);
    let replacements = {
      name: req.body.name
    };
    htmlSend = template(replacements);
    let mailOptions = {
      from: 'email@email.com',
      to: 'someone@email.com',
      subject: 'Test Email',
      html: htmlSend
    };
    transporter.sendMail(mailOptions)
      .then(response => {
        res.send(response);
      })
      .catch(console.error);
  });
});
Then, once I've deployed the app, I make a call to each of these tests. The first one, as mentioned, always succeeds. The second one, which is supposed to send a very simple email, fails most of the time with the error "iisnode encountered an error when processing the request. HRESULT: 0x6d HTTP status: 500 HTTP subStatus: 1013". The strange thing is that every once in a while the email will send, but this happens very rarely. Most times the request takes exactly two minutes before responding with an error.
I should note that in development on localhost both tests work all the time with no issues whatsoever; it's only in production (deployed to Azure) that this happens.
I've been digging around for the last few days and came up with nothing. Any help or directions would be greatly appreciated.
I found out what the problem was. I'm using Gmail to send my test emails, and by default Gmail will block any attempt to use an account if it thinks the app making the request is not secure. This can be easily fixed by clicking the link they automatically send you when you make your first attempt. What is not immediately obvious is that when you go into production they add another level of security, which in this case I believe is a captcha; while you'll be able to send emails in development, as soon as you deploy your app this is no longer the case.
Anyway, after digging around a little more I found the option to disable the captcha, and now my emails send fine!
Link to that option: https://accounts.google.com/b/0/DisplayUnlockCaptcha
Hopefully this will help someone.

NodeJS sending e-mails with a delay

I'm using Nodemailer to send mailings from my Node.js / Express server. Instead of sending the mail directly, I want to wait 20 minutes before sending it. I think this feels more personal than sending the mail immediately.
But I have no idea how to achieve this. I guess I don't need something like a Node.js cron job such as the NodeCron package, or do I?
router.post('/', (req, res) => {
  const transporter = nodemailer.createTransport(smtpTransport({
    host: 'smtp.gmail.com',
    port: 465,
    auth: {
      user: 'noreply@domain.nl',
      pass: 'pass123'
    }
  }));

  const mailOptions = {
    from: `"${req.body.name}" <${req.body.email}>`,
    to: 'info@domain.nl',
    subject: 'Form send',
    html: `Content`
  };

  transporter.sendMail(mailOptions, (error, info) => {
    if (error) return res.status(500).json({ responseText: error });
    res.status(200).json({ responseText: 'Message send!' });
  });
});
My router looks as shown above. When the POST is called, I want the request to wait 20 minutes. Instead of a cron job, I want to execute the send just once, but with a bit of a delay. Any suggestions on how to do this?
Well, some folks may come here and tell you to use an external queue system and bla bla... But you could simply use plain old JavaScript to schedule the sending 20*60*1000 milliseconds into the future to get things started. :)
There's however a problem with your code: you're waiting for the mailer to succeed before sending the 200 - 'Message sent' response to the user. Call me a madman, but I'm pretty sure the user won't be staring at the browser window for 20 minutes, so you'll probably have to answer as soon as possible and then schedule the mail. Modifying your code:
router.post('/', (req, res) => {
  const DELAY = 20 * 60 * 1000; // min * secs * milliseconds

  const transporter = nodemailer.createTransport(smtpTransport({
    host: 'smtp.gmail.com',
    port: 465,
    auth: {
      user: 'noreply@domain.nl',
      pass: 'pass123'
    }
  }));

  const mailOptions = {
    from: `"${req.body.name}" <${req.body.email}>`,
    to: 'info@domain.nl',
    subject: 'Form send',
    html: `Content`
  };

  // Answer the user right away, then schedule the actual send
  res.status(200).json({ responseText: 'Message queued for delivery' });

  setTimeout(function () {
    transporter.sendMail(mailOptions, (error, info) => {
      if (error)
        console.log('Mail failed!! :(');
      else
        console.log('Mail sent to ' + mailOptions.to);
    });
  }, DELAY);
});
There are, however, several possible flaws in this solution. If you're expecting heavy traffic on that endpoint, you could end up with a pile of scheduled callbacks eating memory. In addition, if something fails, the user of course won't be able to know.
If this is a big / serious project, consider using that cron-job package, or an external storage mechanism where you can queue these "pending" messages (Redis would do, and it's incredibly simple) and have a different process read tasks from there and perform the email sending.
EDIT: I saw some more things in your code.
1) You probably don't need to create a new transport inside your POST handler; create it outside and reuse it.
2) In addition to the problems mentioned, if your server crashes no email will ever be sent.
3) If you still want to do it in a single Node.js app, instead of scheduling an email on every request to this endpoint, you'd be better off storing the email data (from, to, subject, body) somewhere and scheduling a function that gets all pending emails, sends them one by one, and then reschedules itself to re-run 20 minutes later. This will keep your memory usage low. A server crash would still lose all pending emails, but if you add Redis to the mix you can simply grab all pending emails from Redis when your app starts.
Probably too much for an answer, sorry if it wasn't needed! :)
I think CharlieBrown's answer is correct, and since I had two answers in mind while reading the question, I thank him for letting me simplify my answer down to the alternative to his.
setTimeout is actually a good idea, but it has a drawback: if the server code stops for any reason (server restart, module installation, file management, etc.), the callbacks scheduled with setTimeout will not be executed and some users will not receive emails.
If the problem above is serious enough, you might want to store scheduled emails in the database or in Redis and use a cron job to periodically check the set of pending emails and send any that are due.
I think that either this answer or CharlieBrown's should suffice for you, depending on your preferences and needs.
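A minimal sketch of that Redis-backed approach, assuming the ioredis and node-cron packages; the pending_emails key and the five-minute schedule are illustrative choices, and transporter is the Nodemailer transport from the question:
const Redis = require('ioredis');
const cron = require('node-cron');
const redis = new Redis();

// Queue an email: store it in a sorted set, scored by the time it becomes due
async function queueEmail(mailOptions, delayMs) {
  const dueAt = Date.now() + delayMs;
  await redis.zadd('pending_emails', dueAt, JSON.stringify(mailOptions));
}

// Every 5 minutes, send everything whose due time has passed
cron.schedule('*/5 * * * *', async () => {
  const due = await redis.zrangebyscore('pending_emails', 0, Date.now());
  for (const raw of due) {
    try {
      await transporter.sendMail(JSON.parse(raw)); // transporter as in the question
      await redis.zrem('pending_emails', raw);     // remove only after a successful send
    } catch (err) {
      console.log('Send failed, will retry on the next run:', err);
    }
  }
});
Because an email is only removed after it is actually sent, a crash between runs does not lose anything that was already queued.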

How to handle the Storage Queue using the WebJobs

I just started using Azure for both my mobile and my web development.
I am using Node.js on the Azure backend, with Mobile Services and Web Apps.
Here is the situation: I am using the Azure Storage Queue, and I use a WebJob from my Web App to handle the queue. The messages in the queue are sent out to specific users via Notification Hubs (push notifications).
So the queue can hold 50,000 or more messages, all of which are used to push a message out to users one by one. I tried handling the queue with a WebJob scheduled at a 2-minute interval. I know a WebJob won't run two instances while a scheduled run is still in progress.
Initially I wanted to use a continuously running WebJob, but it goes to "pending restart" once the script finishes. My assumption was that a continuous WebJob would run the script over and over in an endless loop until it hit an exception or something went wrong. That assumption was wrong: it restarts by itself once the whole script succeeds. I know the restart delay can be adjusted to less than 60 seconds, but I am not sure that helps, since I also have a lot of async operations.
My script loops over the 50,000 or more user messages, sends each push via the Azure Node.js package, and on return deletes the message so it no longer appears in the queue. So there are several async operations in each iteration of the loop.
Everything works fine, except that the WebJob only executes for a maximum of 5 minutes and then runs again on the next schedule. With 1,000 messages in the queue everything works, but when the count goes up to 5,000 and above the time is not sufficient, so some of the async operations never complete and those messages are not deleted.
Is there a way to extend the 5-minute execution time, or a better way to handle Storage Queues? I looked into the WebJobs SDK, but it is limited to C# and Visual Studio; I am on Mac OS X and JavaScript, so I cannot use it.
Please advise; I have spent a lot of time figuring out the best way to handle the Storage Queue with WebJobs, but it does not seem to serve the purpose once the messages grow in number and the async operations have to fit into only 5 minutes of execution time. I do not have any VMs at the moment; I only use PaaS in Azure.
According to your description:
"All these messages are used to push out the message to the user one by one"
"it will run 50,000 or more users messages in the loop"
So your requirement is to send each message in the queue to a user, and right now you fetch all the messages in the queue at once, even when the count can exceed 50,000, and then loop over them for further operations?
If there is any misunderstanding, feel free to let me know.
In my opinion, you could get the top message of the queue one at a time and send it to your user; that would remarkably reduce the processing time, and it can be done in a continuous WebJob. You can refer to How To: Peek at the Next Message to see how to peek at the message at the front of a queue without removing it. A rough sketch of peeking follows.
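A rough sketch of peeking with the azure-storage Node package, using the same queueSvc and property names as the snippet below:
// Peek at the front message without removing it from the queue
queueSvc.peekMessages('myqueue', { numOfMessages: 1 }, function (error, messages) {
  if (!error && messages.length > 0) {
    console.log('Next message: ' + messages[0].messagetext);
  }
});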
Update
Since you mentioned that you also have a Web App in Node.js in your project architecture, I considered whether you could leverage a continuous WebJob in the Web App to get one message at a time and send it to Notification Hubs.
Here is my test code snippet:
var azureStorage = require('azure-storage'),
    azure = require('azure'),
    accountName = '<accountName>',
    accountKey = '<accountKey>';

var queueSvc = azureStorage.createQueueService(accountName, accountKey);
var notificationHubService = azure.createNotificationHubService('<notificationhub-name>', '<connectionstring>');

queueSvc.getMessages('myqueue', { numOfMessages: 1 }, function (error, result, response) {
  if (!error) {
    // Message text is in result[0].messagetext
    var message = result[0];
    console.log(message.messagetext);
    var payload = {
      data: {
        msg: message.messagetext
      }
    };
    notificationHubService.gcm.send(null, payload, function (error) {
      if (!error) {
        // Notification sent
        console.log('notification sent');
        queueSvc.deleteMessage('myqueue', message.messageid, message.popreceipt, function (error, response) {
          if (!error) {
            console.log(response);
            // Message deleted
          } else {
            console.log(error);
          }
        });
      }
    });
  }
});
Details refer to How to use Notification Hubs from Node.js And https://github.com/Azure/azure-storage-node/blob/master/lib/services/queue/queueservice.js#L727
Update 2
Taking the idea from the Service Bus demo on GitHub, I modified the code above, which greatly improves the efficiency.
Here is the code snippet, for your information:
var queueName = 'myqueue';

function checkForMessages(queueSvc, queueName, callback) {
  queueSvc.getMessages(queueName, function (err, message) {
    if (err) {
      if (err === 'No messages to receive') {
        console.log('No messages');
      } else {
        console.log(err);
        // callback(err);
      }
    } else {
      callback(null, message[0]);
      console.log(message);
    }
  });
}

function processMessage(queueSvc, err, lockedMsg) {
  if (err) {
    console.log('Error on Rx: ', err);
  } else {
    console.log('Rx: ', lockedMsg);
    var payload = {
      data: {
        msg: lockedMsg.messagetext
      }
    };
    notificationHubService.gcm.send(null, payload, function (error) {
      if (!error) {
        // Notification sent
        console.log('notification sent');
        console.log(lockedMsg);
        console.log(lockedMsg.popreceipt);
        queueSvc.deleteMessage(queueName, lockedMsg.messageid, lockedMsg.popreceipt, function (err2) {
          if (err2) {
            console.log('Failed to delete message: ', err2);
          } else {
            console.log('Deleted message.');
          }
        });
      }
    });
  }
}

var t = setInterval(checkForMessages.bind(null, queueSvc, queueName, processMessage.bind(null, queueSvc)), 100);
I set the loop time to 100 ms in setInterval; with that, it can process almost 600 messages per minute in my test.
The various configuration settings for WebJobs are explained on this wiki page. In your case you should increase the WEBJOBS_IDLE_TIMEOUT value, the time in seconds after which a triggered job is aborted if it hasn't produced any output. The WEBJOBS_IDLE_TIMEOUT setting needs to be configured in the portal app settings, not via the app.config file.
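If you'd rather not click through the portal, the same app setting can also be applied with the Azure CLI; the app and resource group names here are placeholders:
# Raise the idle timeout for triggered WebJobs to one hour (3600 seconds)
az webapp config appsettings set \
  --name <app-name> \
  --resource-group <resource-group> \
  --settings WEBJOBS_IDLE_TIMEOUT=3600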
