NodeJS and MongoDB Query Slow - javascript

So I am running into an issue where I am using NodeJS with Express for API calls. I am fetching all documents in a collection using:
export async function main(req, res) {
  try {
    const tokens = await tokenModel.find({}).lean();
    res.json(tokens);
  } catch (err) {
    res.status(500).json({ message: err.message });
    console.log('err', err.message);
  }
  console.log('Get Data');
}
Now this request works great and returns the data I need. The problem is that I have over 10K documents; on a PC it takes about 10 seconds to return that data, and on a mobile phone it takes over 45 seconds. I know the phone's network matters, but is there any way I can speed this up? Nothing I have tried works. I keep reading that lean() is the option to use, and I am already using it with no success or improvement.

Well, it's slow because you are returning all 10k results.
Do you actually need all 10k results? If not, you should consider filtering so you only return the results you actually need.
If you do need all of them, I suggest implementing pagination, where you return results in batches (50 per page, for example).
In addition, if you only use some of the fields from the documents, you should tell MongoDB to return only those fields and not all of them. That also improves performance, since less data is transferred over the network.
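As a rough (untested) sketch of both ideas with Mongoose, using the tokenModel from the question — the page/limit query parameters and the selected field names are just examples, not something from your code:

// Sketch: pagination plus field projection with Mongoose.
// `page`/`limit` and the projected field names are hypothetical.
export async function main(req, res) {
  try {
    const page = Math.max(parseInt(req.query.page, 10) || 1, 1);
    const limit = Math.min(parseInt(req.query.limit, 10) || 50, 200);

    const tokens = await tokenModel
      .find({})
      .select('name symbol price')   // return only the fields the client uses
      .skip((page - 1) * limit)      // skip the previous pages
      .limit(limit)                  // cap the page size
      .lean();

    res.json(tokens);
  } catch (err) {
    res.status(500).json({ message: err.message });
  }
}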

Related

Node Express server terminates eight hours after inactivity

I have written a small backend application with Node Express.
Its purpose is to retrieve data from a MySQL database and send the resulting rows as a JSON-formatted string back to the requesting client.
app.get(`${baseUrl}/data`, (req, res) => {
  console.log("Get data");
  getDataFromDatabase((error, data) => {
    if (error) {
      return res.json({status: CODE_ERROR, content: error});
    } else {
      return res.json({status: CODE_SUCCESS, content: data});
    }
  });
});
Inside the getDataFromDatabase() method a simple SELECT statement is sent to the DB and it receives a status code plus content. In case of success, the content would be a JSON of returned rows, otherwise information about the MySQL error - again in JSON format.
Basically this code works fine. There are a few other methods which were built the same way but don't cause the following problem:
After running this code on a server, I found that the process always dies exactly eight hours after the last call of the above method. The method can be called dozens of times, the problem occurs only after inactivity.
A quick and dirty workaround, due to a lack of time, was to simply create a cronjob which kills the process and re-runs the application every six hours. However, the new process also gets killed eight hours after the last request was sent to the previous process.
While writing this question, I checked again for any differences between my methods. I found the following, here a snippet of getDataFromDatabase():
if (error) {
  callback(error, null);
}
However, a method getOtherDataFromDatabase() has got a return keyword before its callback:
if (!error) {
  return callback(null, data);
}
So, is the return keyword making a difference here? Is there some kind of unfinished asynchronous code which terminates after a timeout? I've got no exceptions in my console output, the process dies silently.
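To illustrate what I mean, here is a hypothetical sketch (the SELECT and the connection object are placeholders, not my actual code) of the difference I am describing:

// Without `return`, execution continues past the error branch,
// so the callback can end up being invoked a second time.
connection.query('SELECT ...', (error, rows) => {
  if (error) {
    callback(error, null);   // no `return`: the line below still runs
  }
  callback(null, rows);
});

// With `return`, the function exits after reporting the error,
// so the callback fires exactly once per query.
connection.query('SELECT ...', (error, rows) => {
  if (error) {
    return callback(error, null);
  }
  return callback(null, rows);
});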

Firebase: Returning a Batch Write from a Transaction

I am having issues where firebase is not updating the stock values in my database correctly on some occasions. I am using FieldValue.increment() and it works most of the time, but doesn't update maybe 1% of the time for some reason, even though there was no error message from firebase. I was looking at the firebase documentation and it seems that I need to make my writes to the database idempotent in the case of retries or fails. I was thinking about using transactions to check the database if a change has occurred before updating, but my code is currently writing to multiple collections using batch writes.
I was wondering if it is possible to return a batch write commit from a transaction in Firebase? I know you can do the writes inside of the transaction, but would there be any issues if you created a batch and then, based on your read operation for the transaction, you either commit or don't commit the batch of writes? It seems to be working OK when I run the program, but I'm worried there may be potential edge cases I'm not seeing. Here is an example of what I am talking about...
const batch = db.batch();
const ref1 = db.collection('references').doc('referenceId1');
const ref2 = db.collection('references').doc('referenceId2');
batch.update(ref1, {completed: true});
batch.set(ref2, ...)
return db.runTransaction((transaction) => {
  return transaction.get(ref1).then((doc) => {
    const document = doc.data();
    if (!document.completed) {
      return batch.commit();
    }
  });
})
.then(function() {
  console.log("Transaction successfully committed!");
}).catch(function(error) {
  console.log("Transaction failed: ", error);
});
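For reference, doing the same writes directly inside the transaction would look roughly like this (a sketch using the same ref1/ref2; the field values are placeholders):

// Sketch, for comparison: perform the writes via the transaction itself
// instead of committing a separate batch, so they stay under the
// transaction's retry semantics. ref1/ref2 are the hypothetical refs above.
return db.runTransaction((transaction) => {
  return transaction.get(ref1).then((doc) => {
    const document = doc.data();
    if (!document.completed) {
      transaction.update(ref1, { completed: true });
      transaction.set(ref2, { /* ... */ });
    }
  });
})
.then(() => console.log("Transaction successfully committed!"))
.catch((error) => console.log("Transaction failed: ", error));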

How can I make mongoose fail when executing a find query

Hi everyone, I'm writing Mocha unit tests for my server. How can I get an error from a mongoose find query? I've tried closing the connection before executing, but nothing fires.
User.find({}, (err, result) => {
  if (err) {
    // I want to get here
  }
  return done(result);
});
The following DO NOT WORK with mongoose, at least for now (5.0.17):
Closing the mongoose connection is one way to test it, in addition to setting a proper timeout on the find request.
const request = User.find({});
request.maxTime(1000);
request.exec()
.then(...)
.catch(...);
or
User.find({}, { maxTimeMS: 1000 }, (err, result) => {
  if (err) {
    // I want to get here
  }
  return done(result);
});
EDIT after further research:
After trying it myself, it seems that I never get an error from the request.
Changing the request's maxTime or the connection parameters auto_reconnect, socketTimeoutMS, and connectTimeoutMS does not seem to have any effect. The request still hangs.
I've found this Stack Overflow answer saying that all requests are queued when mongoose is disconnected from the database. So we won't get any timeout from there.
A solution I can recommend, and one I use in my own project for another reason, is to wrap the mongoose request in a class of my own, so I can check the connection state and throw an error myself when the database is disconnected.
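A rough sketch of that wrapper, assuming a mongoose connection (the class name and error message are just illustrative):

// Sketch: fail fast when mongoose is not connected, instead of letting
// the query sit in mongoose's buffer.
const mongoose = require('mongoose');

class SafeModel {
  constructor(model) {
    this.model = model;
  }

  find(query) {
    // readyState 1 means "connected"; anything else means the query
    // would just be buffered, so reject immediately.
    if (mongoose.connection.readyState !== 1) {
      return Promise.reject(new Error('Database is not connected'));
    }
    return this.model.find(query).exec();
  }
}

// Usage: const users = new SafeModel(User); users.find({}).catch(...);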
In my opinion, the best way to test your error handling is to use a mock. More information in this previous Stack Overflow topic.
You can mock the mongoose connection and API to drive your test (raise errors, etc.).
Libraries:
sinonjs
testdouble
I solved it like below. Here is the solution:
sinon.stub(User, 'find');
User.find.yields(new Error('An error occurred'), undefined);
With this code the stub returns an error. #ormaz #grégory-neut Thanks for the help.

Send thousands of SMS with Twilio

I would like to send ~50,000 SMS with Twilio, and I was just wondering if my requests are going to crash if I loop through a phone number array of this size. The fact is that Twilio only allows 1 message for each request, so I have to make 50,000 of them.
Is it possible to do it this way or do I have to find another way?
50,000 seems like a lot, but I have no idea how many requests I can make.
phoneNumbers.forEach(function(phNb) {
  client.messages.create({
    body: msgCt,
    to: phNb,
    from: ourPhone
  })
  .then((msg) => {
    console.log(msg.sid);
  });
});
Thanks in advance
Twilio developer evangelist here.
API Limits
First up, a quick note on our limits. With a single number, Twilio has a limit of sending one message per second. You can increase that by adding more numbers, so 10 numbers will be able to send 10 messages per second. A short code can send 100 messages per second.
We also recommend that you don't send more than 200 messages on any one long code per day.
Either way I recommend using a messaging service to send messages like this.
Finally, you are also limited to 100 concurrent API requests. It's good to see other answers here talking about making requests sequentially rather than all at once; firing them all asynchronously will eat up the memory on your server, and you will also start to find requests being turned down by Twilio.
Passthrough API
We now have an API that allows you to send more than one message with a single API call. It's known as the passthrough API, as it lets you pass many numbers through to the Notify service. You need to turn your numbers into "bindings" and send them via a Notify service, which also uses a messaging service for number pooling.
The code looks a bit like this:
const Twilio = require('twilio');
const client = new Twilio(accountSid, authToken);
const service = client.notify.services('ISXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX');

service.notifications
  .create({
    toBinding: [
      JSON.stringify({
        binding_type: 'sms',
        address: '+15555555555',
      }),
      JSON.stringify({
        binding_type: 'facebook-messenger',
        address: '123456789123',
      }),
    ],
    body: 'Hello Bob',
  })
  .then(notification => {
    console.log(notification);
  })
  .catch(error => {
    console.log(error);
  });
The only drawbacks in your situation are that every message needs to be the same and the request needs to be less than 1 megabyte in size. We've found that typically means about 10,000 numbers, so you might need to break up your list into 5 API calls.
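A rough sketch of that chunking (assuming the service object from the snippet above, plus the phoneNumbers array and msgCt message body from your question; the 10,000 chunk size is the rough figure mentioned):

// Sketch: split the list into chunks of ~10,000 numbers and send one
// Notify request per chunk.
async function sendInChunks() {
  const chunkSize = 10000;
  for (let i = 0; i < phoneNumbers.length; i += chunkSize) {
    const chunk = phoneNumbers.slice(i, i + chunkSize);
    await service.notifications.create({
      toBinding: chunk.map(number =>
        JSON.stringify({ binding_type: 'sms', address: number })
      ),
      body: msgCt,
    });
  }
}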
Let me know if that helps at all.
There are two factors here.
You need to consider Twilio API usage limits.
Performing 50,000 parallel HTTP requests (which is what your code actually does) is not a good idea: you will have memory problems.
Twilio SMS limits change based on source and destination.
You have two solutions:
Perform 50k HTTP requests sequentially
// forEach does not await async callbacks, so use a plain for...of loop
// to keep the requests truly sequential
async function sendAll() {
  for (const phNb of phoneNumbers) {
    try {
      let m = await client.messages.create({
        body: msgCt,
        to: phNb,
        from: ourPhone
      });
      console.log(m.sid);
    } catch (e) {
      console.log(e);
    }
  }
}
sendAll();
Perform 50k http requests concurrently with concurrency level
This is quite easy to do with the awesome Bluebird sugar functions. However, the twilio package uses native promises, so you can use the async module's mapLimit method for this purpose.
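For example, a rough sketch with async.mapLimit (assuming the same phoneNumbers, msgCt and ourPhone as above; the concurrency level of 10 is just an illustrative value):

const async = require('async');

// Send at most 10 requests at a time; mapLimit calls the iteratee for
// every number but never runs more than `limit` of them concurrently.
async.mapLimit(phoneNumbers, 10, async (phNb) => {
  const msg = await client.messages.create({
    body: msgCt,
    to: phNb,
    from: ourPhone
  });
  return msg.sid;
}, (err, sids) => {
  if (err) {
    console.log(err);
  } else {
    console.log(`Sent ${sids.length} messages`);
  }
});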
You send your requests asynchronously because of the non-blocking forEach body calls, which I guess is fastest for the client. But the question is: does Twilio allow such a load from a single source? That needs to be tested... And if not, you should build some kind of request queue, e.g. promise-based, something like:
function sendSync(index = 0) {
  if (index === phoneNumbers.length) {
    return;
  }
  client.messages.create({
    body: msgCt,
    to: phoneNumbers[index],
    from: ourPhone
  })
  .then(function(msg) {
    console.log(msg.sid);
    sendSync(index + 1);
  })
  .catch(function(err) {
    console.log(err);
  });
}
sendSync();
Or if you like async/await –
async function sendSync() {
  for (let phNb of phoneNumbers) {
    try {
      let msg = await client.messages.create({
        body: msgCt,
        to: phNb,
        from: ourPhone
      });
      console.log(msg);
    } catch (err) {
      console.log(err);
    }
  }
}
sendSync();

Waiting for MongoDB findOne callback to complete before finishing app.get()

I'm relatively new to Javascript and I am having trouble understanding how to use a MongoDB callback with an ExpressJS get. My problem seems to be if it takes too long for the database search, the process falls out of the app.get() and gives the webpage an "Error code: ERR_EMPTY_RESPONSE".
Currently it works with most values, either finding the value or properly returning a 404 Not Found, but there are some cases where it hangs for a few seconds before returning ERR_EMPTY_RESPONSE. In the debugger, it reaches the end of the app.get(), where it returns ERR_EMPTY_RESPONSE, and after that the findOne callback finishes and goes to the 404, but by then it is too late.
I've tried using async and introducing waits with no success, which makes me feel like I am using app.get and findOne incorrectly.
Here is a general version of my code below:
app.get('/test', function (req, res) {
  var value = null;
  if (req.query.param)
    value = req.query.param;
  else
    value = defaultValue;

  var query = {start: {$lte: value}, end: {$gte: value}};
  var data = collection.findOne(query, function (err, data) {
    if (err) {
      res.sendStatus(500);
    }
    else if (data) {
      res.end(data);
    }
    else {
      res.sendStatus(404);
    }
  });
});
What can I do to have the response wait for the database search to complete? Or is there a better way to return a database document from a request? Thanks for the help!
You should measure how long the db query takes.
If it's slow (>5 sec) and you can't speed it up, then it might be a good idea to decouple it from the request by using some kind of job framework.
Return a redirect to the URL where the job status/result will be available.
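A rough sketch of that pattern, with an in-memory job map standing in for a real job framework (the routes and the jobs object are hypothetical; collection and query are the ones from the question):

// Sketch: run the slow query in the background and let the client poll
// a status URL instead of holding the original request open.
const jobs = {};
let nextJobId = 1;

app.get('/test', function (req, res) {
  const id = nextJobId++;
  jobs[id] = { status: 'pending', result: null };

  // Kick off the query without waiting for it here.
  collection.findOne(query, function (err, data) {
    jobs[id] = err
      ? { status: 'failed', result: err.message }
      : { status: 'done', result: data };
  });

  // Point the client at the URL where the result will show up.
  res.redirect('/test/status/' + id);
});

app.get('/test/status/:id', function (req, res) {
  const job = jobs[req.params.id];
  if (!job) return res.sendStatus(404);
  res.json(job);
});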
I feel silly about this, but I completely ignored the fact that when using http.createServer(), I had a timeout of 3000 ms set. I misunderstood what this timeout was for, and this is what was causing my connection to close prematurely. Increasing this number allowed my most stubborn queries to complete.
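For anyone hitting the same thing, a minimal sketch of where that timeout lives (the 120000 ms value is just an example, and app is the Express app from the question):

const http = require('http');

const server = http.createServer(app);
// How long an idle socket may sit before Node closes the connection.
server.setTimeout(120000);
server.listen(3000);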
