Proper way to separate a piece of code as a background process - javascript

I have a bullCollections setup where I save some information about messages, for example:
try {
    let bullPayload = {
        type: 'message',
        payload: {
            messages: messagesForPreProcessingData,
            sessionID: this.data['sessionID'],
            socketID: parseInt(this.data['socketID']),
        },
    };
    await bullConnections[accumulatorQueue].add(bullPayload, {
        removeOnComplete: true,
    });
} catch (err) {
    logErrors(err);
}
The code works fine, but I was asked to change the logic here: according to some statistics, the messages take too long to show up for some users (on bad Wi-Fi). I came up with a solution where the front end calculates the time taken from server to client, and if it took longer than 400ms, a new request is sent so that the backend knows the messages took a long time to load.
I made a timeout like this
saveBullPayloadWithTimeout(key, timeDuration, bullPayLoad, messages, events) {
    let redis = this.data.dbRedisConfigur.dataRedis;
    return new Promise((resolve, reject) => {
        setTimeout(() => {
            redisConnections[redis].get(key, (err, result) => {
                if (err) {
                    reject(err);
                } else {
                    if (result) {
                        redisConnections[redis].del(key, (err, result) => {
                            if (result != 1) {
                                logErrors({ message: 'CANNOT DELETE KEY' });
                            }
                        });
                    } else {
                        bullConnections[accumulator].add(bullPayLoad, {
                            removeOnComplete: true,
                        });
                        console.log('AFTER');
                        this.updateCurrentMessages(messages, events);
                    }
                }
            });
        }, timeDuration);
    });
}
So this piece of code waits for 5 seconds before deciding whether to insert the message or not. During those 5 seconds, the backend waits for a second request; if a second request has been made, it saves data to Redis. After the 5 seconds it checks that data: if it exists, the message won't be saved, otherwise it will be.
Does timeout affect the performance, because the backend will handle millions of users?
Is there any better way to separate this as a background process?

Does timeout affect the performance, because the backend will handle millions of users?
Timeouts, themselves, probably won't affect performance much. But your specific use of them will, because all it does is delay the process by timeDuration and then run it on the main thread, and you don't have anything in there (as far as I can tell) to cancel a previous one if a subsequent request is made that supersedes it.
Is there any better way to separate this as a background process?
setTimeout doesn't do its work as a background process. It doesn't even do it on a background thread. It's done on the same main thread that scheduled the timer. Using setTimeout just delays starting the work, it doesn't make the work happen on a different thread.
If you want something done in a different process, you'll need to spawn a child process.
If you want something done on a different thread, you'll need to spawn a worker thread.

Related

Using recursion in try catch block in JavaScript

I have a nodejs script which creates dynamic tables and views for the temperature recorded for the day. Sometimes it does not create tables if the temperature is not in the normal range. For this I decided to use try/catch and call the function recursively. I am not sure if I have done it correctly or if there is another way to call the con.query method so that the tables get created. I encountered this problem for the first time in nodejs.
To start with, you have to detect errors and only recurse when there are specific error conditions. If the problem you're trying to solve is one specific error, then you should probably detect that specific error and only repeat the operation when you get that precise error.
Then, some other recommendations for retrying:
Retry only a fixed number of times. It's a sysop's nightmare when some server code gets stuck in a loop banging away over and over on something and just getting the same error every time.
Retry only on certain conditions.
Log every error so you or someone running your server can troubleshoot when something is wrong.
Retry only after some delay.
If you're going to retry more than a few times, then implement a back-off delay so it gets longer and longer between retries.
Here's the general idea for some code to implement retries:
const maxRetries = 5;
const retryDelay = 500;

function execute_query(query, callback) {
    let retryCntr = 0;

    function run() {
        con.query(query, function(err, result, fields) {
            // isRetryable() is a placeholder: check err.code for the
            // specific transient errors you want to retry on
            if (err && isRetryable(err)) {
                ++retryCntr;
                if (retryCntr <= maxRetries) {
                    console.log('Retrying after error: ', err);
                    setTimeout(run, retryDelay);
                } else {
                    // too many retries, communicate back error
                    console.log(err);
                    callback(err);
                }
            } else if (err) {
                console.log(err);
                // communicate back error
                callback(err);
            } else {
                // communicate back result
                callback(null, result, fields);
            }
        });
    }
    run();
}
The idea behind retries and back-offs, if you're going to do lots of retries, is that naive retry algorithms can lead to what are called avalanche failures. The system gets a little slow or a little too busy and starts producing a few errors. Your code then retries over and over, which creates more load, which leads to more errors, so more code starts to retry, and the whole thing then fails with lots of code looping and retrying: an avalanche failure.
So, instead, when there's an error you have to make sure you don't inadvertently overwhelm the system and potentially just make things worse. That's why you implement a short delay, that's why you implement max retries and that's why you may even implement a back-off algorithm to make the delay between retries longer each time. All of this allows a system that has some sort of error causing perturbation to eventually recover on its own rather than just making the problem worse to the point where everything fails.
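As a sketch, the back-off delay could look like this (the names are illustrative; you would plug backoffDelay(retryCntr) into the setTimeout call in the retry code above):

```javascript
// Exponential back-off: each retry waits twice as long as the previous
// one, capped at maxDelay so waits don't grow without bound.
const baseDelay = 500;  // ms before the first retry
const maxDelay = 8000;  // upper bound on any single wait

function backoffDelay(retryCntr) {
  // retryCntr is 1 for the first retry, 2 for the second, and so on.
  return Math.min(baseDelay * Math.pow(2, retryCntr - 1), maxDelay);
}

// Successive retries then wait 500, 1000, 2000, 4000, 8000, 8000, ... ms.
```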

Socketio Get number of clients in room

I would like to ask for your help. I'm having a hard time with this function. It's supposed to check if the room has 0 or 1 clients inside, and then gives information back about whether another client can join the room or not (with a max of 2 users per room).
I'm out of ideas about getting the number of clients in the room. I've checked the site and there were quite a few answers about this topic, working with earlier versions of socket.io. Now I've come to this function:
io.in(room).clients((err, clients) => {
    console.log(clients.length);
});
It works and logs the right amount of clients inside the room but I have no idea how can I return that value to the outer function.
The var user consists of a whole JSON and I've been wondering if there is a quicker way to return the length of the array without digging into JSON.
There's the outer function:
function isRoomFree(room) {
    var user = io.in(room).clients((err, clients) => {
        console.log(clients.length);
    });
    //console.log(user);
    if (user < 2)
        return true;
    else
        return false;
}
Is there any way to do that? I'm kinda new to the js, socketio and node.js
Your function isRoomFree(room) is essentially synchronous, meaning that you call it and you wait for the result, however io.in(room).clients is asynchronous, meaning that you don't know when the result will arrive.
Mixing the two presents a challenge.
What you need to do is change your function to become async. I suggest you become familiar with the concept.
function isRoomFree(room, callback) {
    io.in(room).clients((err, clients) => {
        if (clients.length < 2)
            callback(true);
        else
            callback(false);
    });
}
Use it like this:
isRoomFree(room, function(status) {
    if (status)
        console.log("free");
    else
        console.log("not free");
    //continue your program logic inside the callback
});
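If you'd rather avoid callbacks entirely, the same callback-style clients() API (socket.io 2.x) can be wrapped in a Promise; this is a sketch assuming io is the server instance from the question:

```javascript
// Wrap the callback-style clients() lookup in a Promise so callers can
// use async/await instead of passing callbacks around.
function countClients(io, room) {
  return new Promise((resolve, reject) => {
    io.in(room).clients((err, clients) => {
      if (err) reject(err);
      else resolve(clients.length);
    });
  });
}

async function isRoomFree(io, room) {
  const count = await countClients(io, room);
  return count < 2; // free while fewer than 2 clients are inside
}
```

Note that socket.io 3+ replaced the clients() callback with a Promise-based API, so on newer versions you wouldn't need a wrapper like this.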

Best practice for multiple AJAX API calls that require a response from the previous call?

I'm working on an internal page that allows a user to upload a CSV with resources and dates, and have the page add all the scheduling information for these resources to our management software. There's a pretty decent API for doing this, and I have a working model, but it seems...kludgy.
For each resource I have to start a new session, then create a new reservation, then add resources, then confirm that the reservation isn't blocked, then submit the reservation. Most of the calls return a variable I need for the next step in the process, so each relies on the previous ajax call.
Currently I'm doing this via nested ajax calls similar to this:
$.ajax('startnewsession').then(() => $.ajax('createreservation').then(() => /* etc etc */));
While this works, I feel like there has to be an easier, or more "proper" way to do it, both for cleaner code and for adaptability.
What you're doing is correct, assuming you can't change the API you are communicating with.
There's really no way of getting around some sort of nested ajax calls if you need the response data of the previous one for the next one. Promises (.then), however, make it a bit prettier than plain callbacks.
The proper solution (if possible) would of course be to implement your API in such a way that it would require less roundtrips from the client to the server. Considering there's no user input in between each of these steps in the negotiation process for creating a reservation, your API should be able to complete the entire flow for creating a reservation, without having to contact the client until it needs more input from the user.
Just remember to do some error handling between each of the ajax calls in case they should fail - you don't want to start creating the following up API calls with corrupt data from a previously failed request.
var baseApiUrl = 'https://jsonplaceholder.typicode.com';
$.ajax(baseApiUrl + '/posts/1')
    .then(function(post) {
        $.ajax(baseApiUrl + '/users/' + post.userId)
            .then(function(user) {
                console.log('got name: ' + user.name);
            }, function(error) {
                console.log('error when calling /users/', error);
            });
    }, function(error) {
        console.log('error when calling /posts/', error);
    });
Short answer: usually I build chains like this:
ajaxCall1.then(
response => ajaxCall2(response)
).then(
response => ajaxCall3(response)
)
I'm trying to avoid using when. Usually I (and I bet you too) have one ajax call (for a form submit), sometimes two chained ajax calls. For example, if I need to get data for a table, I first query for the total row count, and if the count is greater than 0, I make another call for the data. In this case I'm using:
function getGridData() {
    var count;
    callForRowsCount().then(
        (response) => {
            count = response;
            if (count > 0) {
                return callForData();
            } else {
                return [];
            }
        }
    ).then(response => {
        pub.fireEvent({
            type: 'grid-data',
            count: count,
            data: response
        });
    });
}
The publisher triggers the event, and all my components get updated.
In some really rare cases I need to use when, but that is always bad design. It happens when I need to load a pack of additional data before the main request, or when the backend doesn't support bulk updates and I need to send a pack of ajax calls to update many database entities. Something like this:
var whenArray = [];
if (require1) {
    whenArray.push(ajaxCall1);
}
if (require2) {
    whenArray.push(ajaxCall2);
}
if (require3) {
    whenArray.push(ajaxCall3);
}
// use a regular function here: an arrow function has no arguments object
$.when.apply($, whenArray).then(function() {
    loadMyData(arguments);
});
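With native Promises the same conditional fan-out can use Promise.all instead of $.when. This sketch stubs ajaxCall1..3 with Promise-returning placeholders (jQuery's $.ajax already returns a thenable, so real calls drop in directly):

```javascript
// Stand-ins for the answer's ajaxCall1..3; any Promise-returning calls work.
const ajaxCall1 = () => Promise.resolve('one');
const ajaxCall2 = () => Promise.resolve('two');
const ajaxCall3 = () => Promise.resolve('three');
const require1 = true, require2 = false, require3 = true;

const calls = [];
if (require1) calls.push(ajaxCall1());
if (require2) calls.push(ajaxCall2());
if (require3) calls.push(ajaxCall3());

// Promise.all resolves with the results in the same order the calls
// were pushed, once every pending call has finished.
Promise.all(calls).then((results) => console.log(results));
```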

Parse Cloud Code Ending Prematurely?

I'm writing a job that I want to run every hour in the background on Parse. My database has two tables. The first contains a list of Questions, while the second lists all of the user/question agreement pairs (QuestionAgreements). Originally my plan was just to have the client count the QuestionAgreements itself, but I'm finding that this results in a lot of requests that really could be done away with, so I want this background job to run the count and then update a field directly on Question with it.
Here's my attempt:
Parse.Cloud.job("updateQuestionAgreementCounts", function(request, status) {
    Parse.Cloud.useMasterKey();
    var query = new Parse.Query("Question");
    query.each(function(question) {
        var agreementQuery = new Parse.Query("QuestionAgreement");
        agreementQuery.equalTo("question", question);
        agreementQuery.count({
            success: function(count) {
                question.set("agreementCount", count);
                question.save(null, null);
            }
        });
    }).then(function() {
        status.success("Finished updating Question Agreement Counts.");
    }, function(error) {
        status.error("Failed to update Question Agreement Counts.");
    });
});
The problem is, this only seems to be running on a few of the Questions, and then it stops, appearing in the Job Status section of the Parse Dashboard as "succeeded". I suspect the problem is that it's returning prematurely. Here are my questions:
1 - How can I keep this from returning prematurely? (Assuming this is, in fact, my problem.)
2 - What is the best way of debugging cloud code? Since this isn't client side, I don't have any way to set breakpoints or anything, do I?
status.success is called before the asynchronous success calls of count are finished. To prevent this, you can use promises here. Check the docs for Parse.Query.each.
Iterates over each result of a query, calling a callback for each one. If the callback returns a promise, the iteration will not continue until that promise has been fulfilled.
So, you can chain the count promise by returning it from the each callback:
return agreementQuery.count().then(function (count) {
    question.set("agreementCount", count);
    return question.save(null, null);
});
You can also use parallel promises to make it more efficient.
There are no breakpoints in cloud code, which makes Parse really hard to debug. The only way is logging your variables with console.log.
I was able to utilize promises, as suggested by knshn, to make it so that my code would complete before running success.
Parse.Cloud.job("updateQuestionAgreementCounts", function(request, status) {
    Parse.Cloud.useMasterKey();
    var promises = []; // Set up a list that will hold the promises being waited on.
    var query = new Parse.Query("Question");
    query.each(function(question) {
        var agreementQuery = new Parse.Query("QuestionAgreement");
        agreementQuery.equalTo("question", question);
        agreementQuery.equalTo("agreement", 1);
        // Make sure that the count finishes running first!
        promises.push(agreementQuery.count().then(function(count) {
            question.set("agreementCount", count);
            // Make sure that the object is actually saved first!
            promises.push(question.save(null, null));
        }));
    }).then(function() {
        // Before exiting, make sure all the promises have been fulfilled!
        Parse.Promise.when(promises).then(function() {
            status.success("Finished updating Question Agreement Counts.");
        });
    });
});

How do I execute a long-running routine based on a timed database poll in Node?

I have a routine that polls a database to look for work, and if it finds work there, it should execute it. It can only execute 1 (one) work order at a time, and this work-order could take anywhere from 5 seconds to several minutes to run. During this time it should not poll the database for more work, but wait until the current work is done.
I was thinking of using setTimeout to accomplish this, by doing the work in the timeout-event, and setting a new timeout at the end of the function. But I don't know if this is the best way to do it. Is there a "best practice" for these things?
It's an ok way! Some code, maybe that helps:
(function poll() {
    fetchJob(onJob);

    function onJob(err, job) {
        if (err) throw err;
        if (job) return execute(job, poll);
        setTimeout(poll, 1000);
    }
}());
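The same loop reads more directly with async/await. This is a sketch assuming Promise-returning versions of fetchJob and execute (the callback versions above can be wrapped with util.promisify):

```javascript
// Build a poll loop: run one job at a time, and only check the database
// again after the current job has completely finished.
function makePoller(fetchJob, execute, idleMs = 1000) {
  return async function poll() {
    const job = await fetchJob();
    if (job) {
      await execute(job); // long-running work; no polling happens meanwhile
      return poll();      // done, immediately look for the next job
    }
    // No work found: try again after a delay. unref() lets the process
    // exit if nothing else is keeping the event loop alive.
    setTimeout(poll, idleMs).unref();
  };
}
```

Calling makePoller(fetchJob, execute)() kicks it off; errors from fetchJob or execute reject the promise returned by the current call, so add a catch in real code.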
