What would cause "Request timed out" in parse.com cloud code count?

One of my cloud functions is timing out occasionally. It seems to have trouble with counting, although there are only around 700 objects in the class. I would appreciate any tips on how to debug this issue.
The cloud function works correctly most of the time.
Example error logged:
E2015-02-03T02:21:41.410Z] v199: Ran cloud function GetPlayerWorldLevelRank for user xl8YjQElLO with:
Input: {"levelID":60}
Failed with: PlayerWorldLevelRank first count error: Request timed out
Is there anything that looks odd in the code below? The time out error is usually thrown in the second count (query3), although sometimes it times out in the first count (query2).
Parse.Cloud.define("GetPlayerWorldLevelRank", function(request, response) {
    var query = new Parse.Query("LevelRecords");
    query.equalTo("owner", request.user);
    query.equalTo("levelID", request.params.levelID);

    query.first().then(function(levelRecord) {
        if (levelRecord === undefined) {
            response.success(null);
        }
        // if the player has a record, work out his ranking
        else {
            var query2 = new Parse.Query("LevelRecords");
            query2.equalTo("levelID", request.params.levelID);
            query2.lessThan("timeSeconds", levelRecord.get("timeSeconds"));

            query2.count({
                success: function(countOne) {
                    var numPlayersRankedHigher = countOne;

                    var query3 = new Parse.Query("LevelRecords");
                    query3.equalTo("levelID", request.params.levelID);
                    query3.equalTo("timeSeconds", levelRecord.get("timeSeconds"));
                    query3.lessThan("bestTimeUpdatedAt", levelRecord.get("bestTimeUpdatedAt"));

                    query3.count({
                        success: function(countTwo) {
                            numPlayersRankedHigher += countTwo;
                            var playerRanking = numPlayersRankedHigher + 1;
                            levelRecord.set("rank", playerRanking);
                            // The SDK doesn't allow an object that has been changed
                            // to be serialized into a response. This disables the
                            // dirty check so the modified object can be returned.
                            levelRecord.dirty = function() { return false; };
                            response.success(levelRecord);
                        },
                        error: function(error) {
                            response.error("PlayerWorldLevelRank second count error: " + error.message);
                        }
                    });
                },
                error: function(error) {
                    response.error("PlayerWorldLevelRank first count error: " + error.message);
                }
            });
        }
    });
});

I don't think the issue is in your code. As the error message states, the request times out: either the Parse API doesn't respond within the timeout period, or the network causes the timeout. Each .count() call makes an API request, which can then fail to connect or time out.
Apparently more people have this issue: https://www.parse.com/questions/ios-test-connectivity-to-parse-and-timeout-question. It doesn't seem possible to increase the timeout, so the suggestion in this post states:
For that reason, I suggest setting a NSTimer prior to executing the
query, and invalidating it when the query returns. If the NSTimer
fires before being invalidated, ask the user if they want to keep
waiting for the results to come back, or show them a message
indicating that the request is taking a long time to complete. This
gives the user the chance to wait more if they know their current
network conditions are not ideal.
When you are dealing with networks, especially on mobile platforms, you need to be prepared for network hiccups. So, as the post suggests: offer the user the option to try again.
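Translated to JavaScript, a minimal sketch of that timer idea (withTimeout and the 10-second value are illustrative assumptions, not part of the Parse SDK):
// Minimal sketch: race a query against a timer so the caller can react
// (retry, or tell the user) instead of waiting indefinitely.
function withTimeout(promise, timeoutMs) {
    return new Promise(function(resolve, reject) {
        var timer = setTimeout(function() {
            reject(new Error("Request timed out on the client side"));
        }, timeoutMs);
        promise.then(function(result) {
            clearTimeout(timer);
            resolve(result);
        }, function(error) {
            clearTimeout(timer);
            reject(error);
        });
    });
}

// Usage: offer the user a retry if the count takes longer than 10 seconds.
withTimeout(query2.count(), 10000).then(function(count) {
    // proceed with the ranking calculation
}, function(error) {
    // show a "this is taking longer than usual" prompt and offer a retry
});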

Related

Using recursion in try catch block in JavaScript

I have a nodejs script which creates dynamic tables and views for the temperature recorded for the day. Sometimes it does not create tables if the temperature is not in the normal range. For this I decided to use try/catch and call the function recursively. I am not sure if I have done it correctly, or if there is a better way to call the con.query method so that the tables get created. This is the first time I've encountered this problem in nodejs.
To start with, you have to detect errors and only recurse when there are specific error conditions. If the problem you're trying to solve is one specific error, then you should probably detect that specific error and only repeat the operation when you get that precise error.
Then, some other recommendations for retrying:
Retry only a fixed number of times. It's a sysop's nightmare when some server code gets stuck in a loop banging away over and over on something and just getting the same error every time.
Retry only on certain conditions.
Log every error so you or someone running your server can troubleshoot when something is wrong.
Retry only after some delay.
If you're going to retry more than a few times, then implement a back-off delay so it gets longer and longer between retries.
Here's the general idea for some code to implement retries:
const maxRetries = 5;
const retryDelay = 500;

function execute_query(query, callback) {
    let retryCntr = 0;

    function run() {
        con.query(query, function(err, result, fields) {
            // Placeholder condition: substitute the specific error codes
            // that are actually worth retrying in your environment.
            const retryable = err && (err.code === 'ETIMEDOUT' || err.code === 'ECONNRESET');
            if (retryable) {
                ++retryCntr;
                if (retryCntr <= maxRetries) {
                    console.log('Retrying after error: ', err);
                    setTimeout(run, retryDelay);
                } else {
                    // too many retries, communicate back error
                    console.log(err);
                    callback(err);
                }
            } else if (err) {
                console.log(err);
                // communicate back error
                callback(err);
            } else {
                // communicate back result
                callback(null, result, fields);
            }
        });
    }
    run();
}
The idea behind retries and back-offs, if you're going to do lots of retries, is that retry algorithms can lead to what are called avalanche failures. The system gets a little slow or a little too busy and starts to produce a few errors. Your code then starts to retry over and over, which creates more load, which leads to more errors, so more code starts to retry, and the whole thing fails with lots of code looping and retrying in what is called an avalanche failure.
So, instead, when there's an error you have to make sure you don't inadvertently overwhelm the system and potentially make things worse. That's why you implement a short delay, why you cap the number of retries, and why you may even implement a back-off algorithm that lengthens the delay between retries each time. All of this allows a system suffering from some error-causing perturbation to eventually recover on its own rather than being pushed to the point where everything fails.
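To illustrate the back-off idea, a minimal sketch (the base delay, growth factor, and cap are assumed values):
// Minimal exponential back-off sketch: the delay doubles on each retry,
// capped so a long outage doesn't produce absurd waits.
const baseDelay = 500;   // ms before the first retry (assumed value)
const maxDelay = 30000;  // upper bound on any single wait (assumed value)

function backoffDelay(retryCntr) {
    return Math.min(baseDelay * Math.pow(2, retryCntr), maxDelay);
}

// In the retry branch above, replace the fixed delay with:
//   setTimeout(run, backoffDelay(retryCntr));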

Waiting for a response from app.post in another app.post

I'm developing a back-end for a mobile application using express.js for my API.
For this mobile application, users sign in using their mobile numbers: an OTP code is sent to their phones, and they need to send the OTP they received back to the server for verification and validation.
When users first attempt to sign in, they POST their mobile number to the server; after a bunch of processing, an OTP is sent to them through an SMS gateway.
Now while this request is still ongoing, I need to wait for the users to send the OTP through a POST request to another route, verify it, and then proceed on with the appropriate steps in the first, ongoing POST request.
After some searching on the net, I eventually decided to wrap the app.post method for the verifyOTP route in a function that creates and returns a new promise, and then resolve or reject it after verification. This worked wonderfully the first time I performed the operation after restarting the server, but that's it. It only works the first time; on consecutive attempts, none of the new promises that should be created are resolved or rejected, and the first request to the sign-in route remains waiting.
I tried a bunch of things like making the function wrapping the verifyOTP route async, and creating promises inside the route instead of wrapping it in one, but still no use. Can you help me?
For the sake of finding a solution for this problem, I've simplified the process and did a simulation of the actual situation using this code, and it simulates the problem well:
This is to simulate the first request:
app.get("/test", async function(req, res) {
console.log("Test route\n");
var otpCode = Math.floor(Math.random() * (9999 - 2)) + 1;
var timestamp = Date.now();
otp = {
code: otpCode,
generated: timestamp
};
console.log("OTP code sent: " + otpCode + "\n");
console.log("OTP sent.\n");
res.end();
/* verifyOTP().then(function() {
console.log("Resolved OTP verification\n\n");
res.end();
}).catch(function() {
console.log("Bad\n\n");
res.end();
});*/
});
This is the verifyOTP route:
var otp;

app.post("/verifyOTP", function(req, res) {
    console.log("POST request - verify OTP request\n");
    var msg;
    if ((Date.now() - otp.generated) / 1000 > 30) {
        msg = "OTP code is no longer valid.";
        res.status(403).json({
            error: msg
        });
    } else {
        // req.body.otp arrives as a string; coerce before the strict comparison
        var submitted = Number(req.body.otp);
        if (submitted !== otp.code) {
            msg = "OTP code is incorrect.";
            res.status(403).json({
                error: msg
            });
        } else {
            msg = "Verified.";
            res.end();
        }
    }
    console.log(res.statusCode + " - " + res.statusMessage + "\n");
    console.log(msg + "\n");
});
Just to mention, this isn't the only place in my server where I need OTP verification, although the implementation of what happens after the verification varies. Therefore, I'd appreciate it if the solution could still keep the code reusable for multiple instances.
Well, after some more research on my own, I discarded the use of Promises for this use case altogether, and instead used RxJS' Observables.
It solved my problem pretty much the way I wanted, although I had to make some slight modifications.
For those who stumble upon my question looking for a solution for the same problem I faced:
Promises can only be resolved or rejected once. As far as I can tell, unless the promise's executor function finishes running, you can't create a new one with the same code (please correct me if I'm wrong on this; it's based only on my own observations and guesswork), and unless you create a brand new promise, you can't resolve it again.
In this case, we are making a promise out of a listener, so unless you delete the listener, the function wrapped inside the promise won't finish running (I think), and you won't get to create a new promise.
Observables, on the other hand, can be reused as many times as you want, see this for a comparison between Promises and Observables, and this for a nice tutorial that will help you understand Observables and how to use them. See this for how to install RxJS for node.
However, be warned: for some reason, once you subscribe to an observable, the variables used in the function passed to observable.subscribe() remain the same; they don't get updated with every new request you make to the observer route. So unless you find a way to pass the variables that change into the observer.next() function inside the observable definition, you will get the wrong results.
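For illustration, a minimal sketch of the Observable approach (assuming RxJS 6-style imports; the shared otpVerifications Subject and the simplified check are hypothetical, and matching concurrent users would need a key such as the mobile number, which is omitted here):
const { Subject } = require('rxjs');
const { first } = require('rxjs/operators');

// Shared stream of verification results; every verifyOTP request pushes into it.
const otpVerifications = new Subject();

app.post('/verifyOTP', function (req, res) {
    const ok = Number(req.body.otp) === otp.code; // simplified check
    otpVerifications.next({ ok: ok });
    res.status(ok ? 200 : 403).end();
});

app.get('/test', function (req, res) {
    // ... generate and send the OTP here ...
    // Wait for exactly one verification event, then respond.
    otpVerifications.pipe(first()).subscribe(function (result) {
        res.end(result.ok ? 'Verified.' : 'Bad');
    });
});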

IndexedDB and large amount of inserts on Angular app

I'm struggling with responses of 20-50k JSON objects from the server, which I should insert into our IndexedDB datastore.
The response is iterated with forEach and every single row is inserted individually. Calls with responses of fewer than 10k rows work fine and are inserted within a minute or so. But when the amounts get larger, the database goes unresponsive after a while and returns this error message:
"db Error err=transaction aborted for unknown reason"
I'm using the Dexie wrapper for the database and an Angular wrapper for Dexie called ngDexie.
var deferred = $q.defer();
var progress = 0;

// make the call
$http({
    method: 'GET',
    headers: headers,
    url: '/Program.API/api/items/getitems/' + user
}).success(function (response) {
    // parse response
    var items = angular.fromJson(response);
    // loop each item
    angular.forEach(items, function (item) {
        // insert into db
        ngDexie.put('stuff', item).then(function () {
            progress++;
            $ionicLoading.show({
                content: 'Loading',
                animation: 'fade-in',
                template: 'Inserting items to db: ' + progress
                    + '/' + items.length,
                showBackdrop: true,
                maxWidth: 200,
                showDelay: 0
            });
            if (progress == items.length) {
                setTimeout(function () {
                    $ionicLoading.hide();
                }, 500);
                deferred.resolve(items);
            }
        });
    });
}).error(function (error) {
    $log.error('something went wrong');
    $ionicLoading.hide();
});

return deferred.promise;
Do I have the wrong approach with dealing with the whole data in one chunk? Could there be better alternatives? This whole procedure is only done once when the user opens up the site. All help is greatly appreciated. The target device is tablets running Android with Chrome.
Since you are getting an unknown error, something is going wrong with I/O. My guess is the db underneath has trouble handling the amount of data. Try to split it up in batches with a maximum of 10k each.
A transaction can fail for reasons not tied to a particular IDBRequest. For example due to IO errors when committing the transaction, or due to running into a quota limit where the implementation can't tie exceeding the quota to a particular request. In this case the implementation MUST run the steps for aborting a transaction using the transaction as transaction and the appropriate error type as error. For example if quota was exceeded then QuotaExceededError should be used as error, and if an IO error happened, UnknownError should be used as error.
You can find this in the specs.
Another possibility: do you have any indexes defined on the object store? For every index you have, that index needs to be maintained with every insert.
If you are inserting many new records, I would suggest using add, which was added for performance reasons. See the documentation here:
https://github.com/FlussoBV/NgDexie/wiki/ngDexie.add
I had problems with massive bulk inserts (100,000 - 200,000 records). I've solved all my IndexedDB performance problems using bulkPut() from the Dexie library. It has this important feature:
Dexie has a kick-ass performance. Its bulk methods take advantage of
a not well known feature in indexedDB that makes it possible to store
stuff without listening to every onsuccess event. This speeds up the
performance to a maximum.
Dexie: https://github.com/dfahlander/Dexie.js
BulkPut() -> http://dexie.org/docs/Table/Table.bulkPut()
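For illustration, a minimal sketch of batched inserts with bulkPut() (the database name, schema, and 10k batch size are assumptions):
// Minimal sketch: split the parsed response into batches and insert each
// batch with bulkPut(), chaining so only one transaction runs at a time.
var db = new Dexie('appDb');           // assumed database name
db.version(1).stores({ stuff: 'id' }); // assumed schema/primary key

function insertInBatches(items, batchSize) {
    batchSize = batchSize || 10000; // 10k per the suggestion above
    var batches = [];
    for (var i = 0; i < items.length; i += batchSize) {
        batches.push(items.slice(i, i + batchSize));
    }
    return batches.reduce(function (chain, batch) {
        return chain.then(function () { return db.stuff.bulkPut(batch); });
    }, Dexie.Promise.resolve());
}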

Parse Cloud Code Ending Prematurely?

I'm writing a job that I want to run every hour in the background on Parse. My database has two tables. The first contains a list of Questions, while the second lists all of the user/question agreement pairs (QuestionAgreements). Originally my plan was to have the client count the QuestionAgreements itself, but I'm finding that this results in a lot of requests that could really be done away with, so I want this background job to run the count and then update a field directly on Question with it.
Here's my attempt:
Parse.Cloud.job("updateQuestionAgreementCounts", function(request, status) {
    Parse.Cloud.useMasterKey();
    var query = new Parse.Query("Question");
    query.each(function(question) {
        var agreementQuery = new Parse.Query("QuestionAgreement");
        agreementQuery.equalTo("question", question);
        agreementQuery.count({
            success: function(count) {
                question.set("agreementCount", count);
                question.save(null, null);
            }
        });
    }).then(function() {
        status.success("Finished updating Question Agreement Counts.");
    }, function(error) {
        status.error("Failed to update Question Agreement Counts.");
    });
});
The problem is, this only seems to be running on a few of the Questions, and then it stops, appearing in the Job Status section of the Parse Dashboard as "succeeded". I suspect the problem is that it's returning prematurely. Here are my questions:
1 - How can I keep this from returning prematurely? (Assuming this is, in fact, my problem.)
2 - What is the best way of debugging cloud code? Since this isn't client side, I don't have any way to set breakpoints or anything, do I?
status.success is called before the asynchronous success calls of count are finished. To prevent this, you can use promises here. Check the docs for Parse.Query.each.
Iterates over each result of a query, calling a callback for each one. If the callback returns a promise, the iteration will not continue until that promise has been fulfilled.
So, you can chain the count promise:
agreementQuery.count().then(function (count) {
    question.set("agreementCount", count);
    question.save(null, null);
});
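Since each() only continues after a promise returned from its callback resolves, a minimal sketch of the full chain could look like this (it processes the questions serially):
query.each(function (question) {
    var agreementQuery = new Parse.Query("QuestionAgreement");
    agreementQuery.equalTo("question", question);
    // Returning the chained promise makes each() wait for both the
    // count and the save before moving on to the next question.
    return agreementQuery.count().then(function (count) {
        question.set("agreementCount", count);
        return question.save();
    });
}).then(function () {
    status.success("Finished updating Question Agreement Counts.");
}, function (error) {
    status.error("Failed to update Question Agreement Counts.");
});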
You can also use parallel promises to make it more efficient.
There are no breakpoints in Cloud Code, which makes Parse really hard to debug. The only way is logging your variables with console.log.
I was able to utilize promises, as suggested by knshn, to make my code complete before calling status.success.
Parse.Cloud.job("updateQuestionAgreementCounts", function(request, status) {
    Parse.Cloud.useMasterKey();
    var promises = []; // Set up a list that will hold the promises being waited on.
    var query = new Parse.Query("Question");
    query.each(function(question) {
        var agreementQuery = new Parse.Query("QuestionAgreement");
        agreementQuery.equalTo("question", question);
        agreementQuery.equalTo("agreement", 1);
        // Make sure that the count finishes running first!
        promises.push(agreementQuery.count().then(function(count) {
            question.set("agreementCount", count);
            // Make sure that the object is actually saved first!
            promises.push(question.save(null, null));
        }));
    }).then(function() {
        // Before exiting, make sure all the promises have been fulfilled!
        Parse.Promise.when(promises).then(function() {
            status.success("Finished updating Question Agreement Counts.");
        });
    });
});

Strange issue with socket.on method

I am facing a strange issue with calling socket.on methods from the JavaScript client. Consider the code below:
for (var i = 0; i < 2; i++) {
    var socket = io.connect('http://localhost:5000/');
    socket.emit('getLoad');
    socket.on('cpuUsage', function(data) {
        document.write(data);
    });
}
Here I am listening for the cpuUsage event which is emitted by the socket server, but for each iteration I am getting the same value. This is the output:
0.03549148310035006
0.03549148310035006
0.03549148310035006
0.03549148310035006
Edit: server-side code. Basically I am using the node-usage library to calculate CPU usage:
socket.on('getLoad', function (data) {
    usage.lookup(pid, function(err, result) {
        cpuUsage = result.cpu;
        memUsage = result.memory;
        console.log("Cpu Usage1: " + cpuUsage);
        console.log("Cpu Usage2: " + memUsage);
        /* socket.emit('cpuUsage', result.cpu);
        socket.emit('memUsage', result.memory); */
        socket.emit('cpuUsage', cpuUsage);
        socket.emit('memUsage', memUsage);
    });
});
Whereas on the server side, I am getting different values for each emit and socket.on. It seems very strange to me that this is happening. I tried setting data = null after each socket.on call, but it still prints the same value. I don't know what phrase to search for, so I posted here. Can anyone please guide me?
Please note: I am basically Java developer and have a less experience in Javascript side.
You are making the assumption that when you use .emit(), a subsequent .on() will wait for a reply, but that's not how socket.io works.
Your code basically does this:
it emits two getLoad messages directly after each other (which is probably why the returning value is the same);
it installs two handlers for a returning cpuUsage message being sent by the server;
This also means that each time you run your loop, you're installing more and more handlers for the same message.
Now I'm not sure what exactly it is you want. If you want to periodically request the CPU load, use setInterval or setTimeout. If you want to send a message to the server and want to 'wait' for a response, you may want to use acknowledgement functions (not very well documented, but see this blog post).
But you should assume that for each type of message, you should only call socket.on('MESSAGETYPE', handler) once during the runtime of your code.
EDIT: here's an example client-side setup for a periodic poll of the data:
var socket = io.connect(...);

socket.on('connect', function() {
    // Handle the server response:
    socket.on('cpuUsage', function(data) {
        document.write(data);
    });
    // Start an interval to query the server for the load every 30 seconds:
    setInterval(function() {
        socket.emit('getLoad');
    }, 30 * 1000); // milliseconds
});
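And a minimal sketch of the acknowledgement-function approach mentioned above (an illustration of socket.io's ack callbacks, not code from the original post):
// Client: pass a callback as the last argument to emit(); socket.io
// invokes it when the server acknowledges the message.
socket.emit('getLoad', function (cpu) {
    document.write(cpu);
});

// Server: the acknowledgement callback arrives as the handler's last argument.
socket.on('getLoad', function (ack) {
    usage.lookup(pid, function (err, result) {
        ack(err ? null : result.cpu);
    });
});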
Use this line instead:
var socket = io.connect('iptoserver', {'force new connection': true});
Replace iptoserver with the actual ip to the server of course, in this case localhost.
Edit: that is, if you want to create multiple clients. Otherwise you have to place the initialization of the socket variable before the for loop.
I suspected the call returns the average CPU usage since process startup, which seems to be the case here. Checking the node-usage documentation page (average-cpu-usage-vs-current-cpu-usage), I found:
By default CPU Percentage provided is an average from the starting
time of the process. It does not correctly reflect the current CPU
usage. (this is also a problem with linux ps utility)
But If you call usage.lookup() continuously for a given pid, you can
turn on keepHistory flag and you'll get the CPU usage since last time
you track the usage. This reflects the current CPU usage.
The documentation also gives an example of how to use it:
var usage = require('usage');

var pid = process.pid;
var options = { keepHistory: true };
usage.lookup(pid, options, function(err, result) {
    // with keepHistory, result.cpu reflects usage since the previous lookup
});
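For completeness, a minimal sketch of how that option could be wired into the server-side handler from the question (the surrounding socket.io setup is an assumption):
// Minimal sketch: answer each getLoad request with the current CPU usage
// by keeping lookup history between calls (assumed socket.io wiring).
var usage = require('usage');
var pid = process.pid;

io.on('connection', function (socket) {
    socket.on('getLoad', function () {
        usage.lookup(pid, { keepHistory: true }, function (err, result) {
            if (err) return;
            // keepHistory makes each lookup report usage since the last
            // call instead of the average since process startup.
            socket.emit('cpuUsage', result.cpu);
            socket.emit('memUsage', result.memory);
        });
    });
});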
