I'm looking to implement a solution where I can query my MongoDB database (via Mongoose) on a regular interval and then store the results to serve to my clients.
I'm assuming this will reduce my response time when my users pull the collection.
I attempted to implement this by creating an empty global object, then writing a function that queries the db and stores the results in that global object. At the end of the function I set a 60-second setTimeout that runs the function again. I call this function once, when the server controller is first loaded on app startup.
I then set my clients up so that when they requested the collection, it would first look to see if the global object exists, and if so return that as the response. I figured this would cut my 7-10 second queries down to < 1 sec.
In my novice thinking I assumed that, Node.js being 'single-threaded', something like this could work quite well - but it just seemed to eat up all my RAM and cause fatal errors.
Am I on the right track with my thinking or is it better to query the db every time people pull the collection?
Here is the code in question:
var allLeads = {};
var getAllLeads = function() {
    allLeads = {};
    console.log('Getting All Leads...');
    Lead.find().sort('-lastCalled').exec(function(err, leads) {
        if (err) {
            console.log('Error getting leads');
        } else {
            allLeads = leads;
        }
    });
    setTimeout(function() {
        getAllLeads();
    }, 60000);
};
getAllLeads();
Thanks in advance for your assistance.
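A note on the snippet above: allLeads is reset to an empty object before the asynchronous query returns, so clients that hit the cache during that window get nothing back, and the next timer fires whether or not the current query has finished. A variant that swaps the cache in a single assignment and schedules the next poll from inside the callback might look like this (a minimal sketch, assuming the same Lead model):

var allLeads = [];
var getAllLeads = function() {
    Lead.find().sort('-lastCalled').exec(function(err, leads) {
        if (err) {
            console.log('Error getting leads');
        } else {
            allLeads = leads; // swap in the fresh results in one step
        }
        // schedule the next poll only after this one completes
        setTimeout(getAllLeads, 60000);
    });
};
getAllLeads();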
I'm using a Firebase database and a simple function with the new FieldValue.increment to increment a counter, but instead of incrementing anything it adds an "operand" field that never changes.
My function is super simple:
exports.updateCounters = functions.https.onRequest((req, res) => {
    // grab the parameters
    const username = req.query.username;
    var updateObject = {};
    updateObject[username] = admin.firestore.FieldValue.increment(1);
    admin.database().ref('counterstest').update(updateObject);
});
When I deploy and call this function I would expect to see
countertest: {
    myusername: 1
}
but I see
countertest: {
    myusername: {
        operand: 1
    }
}
instead, and operand: 1 never increments even if I call my function multiple times.
Can somebody point out what error I'm making here?
Thank you!
FieldValue.increment() is a feature of Cloud Firestore, but you're apparently trying to apply it to Realtime Database. This is not going to work - they are different databases, and Realtime Database doesn't support atomic increments like this.
What you're actually doing here is writing the JSON representation of the returned FieldValue object to Realtime Database. Apparently, internally, the FieldValue object has a property called "operand" which contains the value to increment by.
I know this post is a couple of years old, but database.ServerValue.increment works with the 11.X firebase-admin Realtime Database like so:
admin.database().ref('counterstest').update({ myusername: database.ServerValue.increment(1) })
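For context, a minimal sketch of the whole corrected function under that approach (assuming firebase-admin v11, that admin.initializeApp() has already been called, and adding a res.send() so the HTTPS function terminates cleanly):

const functions = require('firebase-functions');
const admin = require('firebase-admin');

exports.updateCounters = functions.https.onRequest(async (req, res) => {
    const username = req.query.username;
    const updateObject = {};
    // ServerValue.increment() is the Realtime Database counterpart
    // of Firestore's FieldValue.increment()
    updateObject[username] = admin.database.ServerValue.increment(1);
    await admin.database().ref('counterstest').update(updateObject);
    res.send('ok'); // end the request so the function does not hang
});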
Dear all,
I've been working with JS for some weeks and now I need a bit of clarification. I have read a lot of sources, and a lot of Q&As here as well, and this is what I've learned so far.
Everything below is in connection with Node.js and Socket.io
Use of globals in Node.js "can" be done, but is not best practice; in short: DON'T DO IT!
With sockets, everything is handled per event: a call comes in and gets served, with hardly any memory of the previous call, so no "kept" variables unless you store them yourself.
OK, I built up a chat example: multiple users, all served with broadcast, but no private messages, for example.
Fairly simple and fairly OK. But now I'm stuck and can't wrap my head around the next step.
Lets say:
I need to act on the request
Like a request: "To all users whose name is BRIAN"
In my head I imagined:
1.
Custom object USER - defined globally on Node.js
function User(socket) {
    this.name = null; // set once the client identifies itself
    this.socket = socket;
}
2.
Then hold an ARRAY of these globally
var users = [];
and on newConnection, create a new User, pass on its socket and store in the array for further action with
users.push(new User(socket));
3.
And on a Socket.io request that wants to contact all BRIANs do something like
for (var i = 0; i < users.length; i++) {
    if (users[i].name == "BRIAN") {
        // Emit to users[i].socket
    }
}
But after trial and error, debugging, googling, and reading, apparently this is NOT how something like this should be done, and somehow I can't find the right way to do it, or at least see/understand it. Can you please help me, point me in a good direction, or propose a best practice here? That would be awesome :-)
Note:
I don't want to store the data in a DB (that is the next step); I want to work on the fly.
Thank you very much for your inputs
Oliver
First of all, please don't put users in a global variable; better to put them in a module and require it wherever needed. You can do it like this:
users.js
var users = {
    _list: {}
};

users.create = function(data) {
    this._list[data.id] = data;
};

users.get = function(user_id) {
    return this._list[user_id];
};

users.getAll = function() {
    return this._list;
};

module.exports = users;
and somewhere in your implementation
var users = require('./users');
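For illustration, the module could be populated from a connection handler like this (a hedged sketch; the user_data event and its name field are the same assumptions used in the namespace example below, and the socket id doubles as the user id):

io.on('connection', function(client_socket) {
    client_socket.on('user_data', function(data) {
        if (data != undefined) {
            // keep the socket alongside the name so we can emit to it later
            users.create({ id: client_socket.id, name: data.name, socket: client_socket });
        }
    });
});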
For your problem where you want to send to all users with name "BRIAN",
I can see two good ways to do this.
First.
When a user connects to the socket.io server, have the user join a socket.io room named after him/her.
so it will look like this:
var custom_namespace = io.of('/custom_namespace');
custom_namespace.on('connection', function(client_socket) {
    // assuming here is where you send data from frontend to server
    client_socket.on('user_data', function(data) {
        // assuming you have sent a valid object with a "name" parameter,
        // let the client socket join the room
        if (data != undefined) {
            client_socket.join(data.name); // here is the trick
        }
    });
});
now, if you want to send to all people with name "BRIAN", you can achieve it by doing this
io.of('/custom_namespace').to('BRIAN').emit('some_event', some_data);
Second.
By saving the data in the users module and filtering it with the lodash library.
sample code
var _lodash = require('lodash');
var Users = require('./users');

var all_users = Users.getAll();
var socket_ids = [];

var users_with_name_brian = _lodash.filter(all_users, { name: "BRIAN" });
users_with_name_brian.forEach(function(user) {
    socket_ids.push(user.socket.id); // collect socket ids, not names
});
now instead of emitting one by one per iteration, you can do it like this in socket.io (each socket id is also a room, and newer socket.io versions accept an array of rooms):
io.of('/custom_namespace').to(socket_ids).emit('some_event', some_data);
Here is the link for lodash documentation
I hope this helps.
I am facing a strange issue with calling socket.on methods from the JavaScript client. Consider the code below:
for (var i = 0; i < 2; i++) {
    var socket = io.connect('http://localhost:5000/');
    socket.emit('getLoad');
    socket.on('cpuUsage', function(data) {
        document.write(data);
    });
}
Here I am basically listening for a cpuUsage event which is emitted by the socket server, but for each iteration I get the same value. This is the output:
0.03549148310035006
0.03549148310035006
0.03549148310035006
0.03549148310035006
Edit: Here is the server-side code; basically I am using the node-usage library to calculate CPU usage:
socket.on('getLoad', function(data) {
    usage.lookup(pid, function(err, result) {
        cpuUsage = result.cpu;
        memUsage = result.memory;
        console.log("Cpu Usage1: " + cpuUsage);
        console.log("Cpu Usage2: " + memUsage);
        /*socket.emit('cpuUsage', result.cpu);
        socket.emit('memUsage', result.memory);*/
        socket.emit('cpuUsage', cpuUsage);
        socket.emit('memUsage', memUsage);
    });
});
On the server side, however, I am getting different values for each emit and socket.on. I find this very strange. I tried setting data = null after each socket.on call, but it still prints the same value. I don't know what phrase to search for, so I'm posting here. Can anyone please guide me?
Please note: I am primarily a Java developer and have less experience on the JavaScript side.
You are making the assumption that when you use .emit(), a subsequent .on() will wait for a reply, but that's not how socket.io works.
Your code basically does this:
it emits two getLoad messages directly after each other (which is probably why the returned value is the same);
it installs two handlers for a returning cpuUsage message being sent by the server;
This also means that each time you run your loop, you're installing more and more handlers for the same message.
Now I'm not sure what exactly it is you want. If you want to periodically request the CPU load, use setInterval or setTimeout. If you want to send a message to the server and want to 'wait' for a response, you may want to use acknowledgement functions (not very well documented, but see this blog post).
But you should assume that for each type of message, you should only call socket.on('MESSAGETYPE', handler) once during the runtime of your code.
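For completeness, a minimal sketch of the acknowledgement style mentioned above (reusing the question's getLoad event; usage and pid are assumed to be set up as in the server code):

// client: pass a callback as the last argument to emit()
socket.emit('getLoad', function(cpu) {
    document.write(cpu);
});

// server: the callback arrives as the last argument of the handler
socket.on('getLoad', function(ack) {
    usage.lookup(pid, function(err, result) {
        ack(err ? null : result.cpu);
    });
});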
EDIT: here's an example client-side setup for a periodic poll of the data:
var socket = io.connect(...);
socket.on('connect', function() {
// Handle the server response:
socket.on('cpuUsage', function(data) {
document.write(data);
});
// Start an interval to query the server for the load every 30 seconds:
setInterval(function() {
socket.emit('getLoad');
}, 30 * 1000); // milliseconds
});
Use this line instead:
var socket = io.connect('iptoserver', {'force new connection': true});
Replace iptoserver with the actual ip to the server of course, in this case localhost.
Edit: that is, if you want to create multiple clients.
Otherwise you have to place the initialization of the socket variable before the for loop.
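A sketch of that single-client variant, with the handler registered once outside the loop:

var socket = io.connect('http://localhost:5000/');

// register the handler once, then emit as often as needed
socket.on('cpuUsage', function(data) {
    document.write(data);
});

for (var i = 0; i < 2; i++) {
    socket.emit('getLoad');
}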
I suspected the call returns the average CPU usage since the process started, which seems to be the case here. Checking the node-usage documentation (average-cpu-usage-vs-current-cpu-usage), I found:
By default CPU Percentage provided is an average from the starting time of the process. It does not correctly reflect the current CPU usage. (this is also a problem with linux ps utility)

But If you call usage.lookup() continuously for a given pid, you can turn on keepHistory flag and you'll get the CPU usage since last time you track the usage. This reflects the current CPU usage.
An example of how to use it is also given:

var usage = require('usage');

var pid = process.pid;
var options = { keepHistory: true };
usage.lookup(pid, options, function(err, result) {
    // result.cpu now reflects usage since the previous lookup
});
I have a set of records that I would like to update sequentially in perpetuity. Basically:
Get least recently updated record
Update record
Set date of record to now (aka. send it to the back of the list)
Back to step 1
Here is what I was thinking using Firebase:
// update record function
var updateRecord = function() {
    // get least recently updated record
    firebaseOOO.limit(1).once('value', function(snapshot) {
        var key = _.keys(snapshot.val())[0];
        /*
         * do 1-5 seconds of non-Firebase processing here
         */
        snapshot.ref().child(key).transaction(
            // update record
            function(data) {
                return updatedData;
            },
            // update priority after commit (would like to do it in transaction)
            function(error, committed, snap2) {
                snap2.ref().setPriority(snap2.val().dateUpdated);
            }
        );
    });
};

// listen whenever priority changes (aka. new item needs processing)
firebaseOOO.on('child_moved', function(snapshot) {
    updateRecord();
});

// kick off the whole thing
updateRecord();
Is this a reasonable thing to do?
In general, this type of daemon is precisely what was envisioned for use with the Firebase NodeJS client. So, the approach looks good.
However, in the on() call it looks like you're dropping the snapshot that's being passed in on the floor. This might be application specific to what you're doing, but it would be more efficient to consume that snapshot in relation to the once() that happens in the updateRecord().
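In other words, a minimal change would be to pass the snapshot along rather than re-querying (a sketch; updateRecord would then read its key and value from the child snapshot it receives instead of issuing its own once('value') query):

firebaseOOO.on('child_moved', function(snapshot) {
    updateRecord(snapshot); // consume the snapshot the listener already has
});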
I would like jobs.create to fail if an identical job is already in the system. Is there any way to accomplish this?
I need to run the same job every 24 hours, but some jobs could take even more than 24 hours, so I need to be sure that the job isn't already in the system (active, queued, or failed) before adding it.
UPDATED:
OK, I'm going to simplify the problem to be able to explain it here.
Let's say I have an analytics service and I have to send a report to my users once a day. Completing these reports sometimes (just a few cases, but it is a possibility) takes several hours, even more than a day.
I need a way to know which jobs are currently running, to avoid duplicates. I couldn't find anything in the kue API to list the currently running jobs. I also need some kind of event fired when more jobs are needed, so I can then call my getMoreJobs producer.
Maybe my approach is wrong; if so, please let me know a better way to solve my problem.
This is my simplified code:
var kue = require('kue'),
    cluster = require('cluster'),
    numCPUs = require('os').cpus().length;

numCPUs = CONFIG.sync.workers || numCPUs;

var jobs = kue.createQueue();

if (cluster.isMaster) {
    console.log('Starting master pid:' + process.pid);

    jobs.on('job complete', function(id) {
        kue.Job.get(id, function(err, job) {
            if (err || !job) return;
            job.remove(function(err) {
                if (err) throw err;
                console.log('removed completed job #%d', job.id);
            });
        });
    });

    function getMoreJobs() {
        console.log('looking for more jobs...');
        getOutdateReports(function(err, reports) {
            if (err) return setTimeout(getMoreJobs, 5 * 60 * 60 * 1000);
            reports.forEach(function(report) {
                jobs.create('reports', {
                    id: report.id,
                    title: report.name,
                    params: report.params
                }).attempts(5).save();
            });
            setTimeout(getMoreJobs, 60 * 60 * 1000);
        });
    }

    // Create the jobs
    getMoreJobs();

    console.log('Starting ', numCPUs, ' workers');
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
    cluster.on('death', function(worker) {
        console.log('worker pid:' + worker.pid + ' died!'.bold.red);
    });
} else {
    // Process the jobs
    console.log('Starting worker pid:' + process.pid);
    jobs.process('reports', 20, function(job, done) {
        // completing my work here
        veryHardWorkGeneratingReports(function(err) {
            if (err) return done(err);
            return done();
        });
    });
}
The answer to one of your questions is that Kue puts the jobs that it pops off of the redis queue into "active", and you'll never get them unless you look for them.
The answer to the other question is that your distributed work queue is the consumer, not the producer, of tasks. Mingling them like you have is okay, but it's a muddy paradigm. What I've done with Kue is to make a wrapper for Kue's JSON API, so that a job can be put into the queue from anywhere in the system. Since you seem to have a need to shovel jobs in, I suggest writing a separate producer application that does nothing but fetch external jobs and stick them into your Kue work queue. It can monitor the work queue for when jobs are running low and load a batch in, or, what I would do, is have it shovel jobs in as fast as it can and spool up multiple instances of your consumer application to process the load more quickly.
To reiterate: your separation of concerns isn't very good here. You should have a producer of tasks that's completely separate from your task consumer app. This gives you more flexibility, ease of scaling (just fire up another consumer on another machine and you're scaled!), and overall ease of code management. You should also allow, if possible, whoever is giving you these tasks that you "go looking for" to access your Kue server's JSON API directly, instead of you going out and finding them. The job producer can then schedule its own tasks with Kue.
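As an illustration of that producer/consumer split, a standalone producer could push jobs through Kue's JSON API (the express app exposed as kue.app) rather than touching the queue directly. A hedged sketch, assuming the API is listening on localhost:3000 and reusing the report payload from the question:

var http = require('http');

// hypothetical producer helper: POST a job to Kue's JSON API
function enqueueReport(report) {
    var body = JSON.stringify({
        type: 'reports',
        data: { id: report.id, title: report.name, params: report.params },
        options: { attempts: 5 }
    });
    var req = http.request({
        host: 'localhost',
        port: 3000,
        path: '/job',
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(body)
        }
    }, function(res) {
        res.resume(); // drain the response
    });
    req.end(body);
}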
Look at https://github.com/LearnBoost/kue.
In the json.js script, check lines 64-112. There you'll find methods that return an object containing jobs, also filtered by type, state, or id range (jobRange(), jobStateRange(), jobTypeRange()).
Scrolling down the main page to the JSON API section, you'll find examples of the returned objects.
How to call and use those methods, you know much better than I do.
jobs.create() will fail if you pass an unknown keyword. I would create a function that checks the current jobs in a forEach loop and returns a keyword, then call that function instead of a literal keyword in the jobs.create() parameters.
The information available through those methods in json.js may also help you create that "moreJobsToDo" event.
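To make that concrete, here is a minimal sketch of a duplicate check before creating a job, using kue's programmatic kue.Job.rangeByType() (the same data the JSON API's range endpoints expose; createIfNotRunning and the report shape are illustrative):

var kue = require('kue');
var jobs = kue.createQueue();

// hypothetical helper: only enqueue a report if no active job has its id
function createIfNotRunning(report) {
    kue.Job.rangeByType('reports', 'active', 0, -1, 'asc', function(err, activeJobs) {
        if (err) return console.error(err);
        var duplicate = activeJobs.some(function(job) {
            return job.data.id === report.id;
        });
        if (duplicate) return;
        jobs.create('reports', {
            id: report.id,
            title: report.name,
            params: report.params
        }).attempts(5).save();
    });
}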