Live updating Node.js server - javascript

I want to design a live-updating Node.js Express server, perhaps with a particular route, say /update, which loads a new configuration file. My only concern right now is that the server could be in any state when the update happens. If I load a new configuration file while a request is mid-flight, the request could start under one configuration and finish under another. The only way I can think of to prevent this is to take down the server for at least a minute (keep the process alive, but reject all incoming requests), update it, and put it back online, but then that's not really hot reloading or live updating, is it?
How can I somehow trick the JS event loop so that the config file only gets loaded once all requests have completed and delay any new requests until after the config is loaded?
One algorithm would be:
1. Set a flag "starting re-configuration".
2. The flag prevents any new requests from being processed (using Express middleware).
3. Check that all current requests are completed (I can't think of anything better than a polling loop here).
4. Once that check passes, load the new configuration.
5. Once the configuration is loaded, switch off the flag from (1).
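Something like this, perhaps (a rough sketch; loadNewConfiguration is a hypothetical synchronous loader):
const express = require('express');
const app = express();

let reconfiguring = false; // step 1 flag
let inFlight = 0;

// Step 2: middleware that rejects new requests while re-configuring.
app.use((req, res, next) => {
  if (reconfiguring && req.path !== '/update') {
    return res.status(503).send('Updating, retry shortly');
  }
  inFlight++;
  res.on('finish', () => inFlight--);
  next();
});

app.post('/update', (req, res) => {
  reconfiguring = true;
  // Step 3: poll until all other in-flight requests have completed.
  const timer = setInterval(() => {
    if (inFlight > 1) return; // 1 = this /update request itself
    clearInterval(timer);
    loadNewConfiguration(); // step 4 (hypothetical)
    reconfiguring = false; // step 5
    res.send('updated');
  }, 100);
});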

Disclaimer: I have not tried this in production. In fact, I have not tried this at all. While I believe the idea is sane, there may be hidden pitfalls along the road that I am not currently aware of.
There is one thing that many Node.js developers tend to forget/not fully realise:
There can always be only one JavaScript statement executed at a time.
It does not matter that you can do async I/O or schedule a function to run later. No matter how hard you try, all the JS code that you write executes in a single thread, with no parallelism. Only the underlying implementation (which is completely out of our control) can do things in parallel.
This helps us, because as long as our update procedure is synchronous, no other JS code (e.g. a client request handler) can run in the middle of it.
Configuration live-patching
The solution to prevent configuration change mid-request is rather simple:
Each request gets its own copy of the application's configuration.
If your application's configuration lives in a JavaScript object, you simply clone that object for each new request. This means that even if you suddenly change the configuration, it will only be applied to new incoming requests.
There are tons of modules for cloning (even deep cloning) objects, but since I believe mine is best I will use this opportunity for some small self-promotion - semantic-merge.
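For illustration, a minimal sketch of the per-request snapshot, using Node's built-in structuredClone rather than any particular module (file names are hypothetical):
const fs = require('fs');
const express = require('express');
const app = express();

let config = JSON.parse(fs.readFileSync('./config.json', 'utf8'));

// Give every request its own deep copy of the current configuration.
app.use((req, res, next) => {
  req.config = structuredClone(config); // structuredClone needs Node 17+
  next();
});

// Re-reading the file is synchronous, and only requests that arrive
// afterwards will see the new configuration.
app.post('/update', (req, res) => {
  config = JSON.parse(fs.readFileSync('./config.json', 'utf8'));
  res.send('configuration reloaded');
});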
Code live-patching
This is a bit trickier, but should be generally possible with enough effort.
The trick here is to first remove/unregister current Express routes, clear Node's require cache, require the updated files again and re-register route handlers. Now Express will finish all pending requests using the old code (this is because Node cannot remove these old functions from memory as long as the code contains active object references - which it does - req and res) and use the newly required modules/routes for new incoming requests. Old code should get released from memory as soon as there are no more requests that started with the old code.
You must not use require anywhere during request processing, otherwise you risk the same problem as with changing configuration mid-request. You can of course use require in a module-level scope because that will be executed when the module itself is required, thus being synchronous.
Example:
// app/routes/users.js (or something)

// This is okay, because it is executed only once - when users.js
// itself is required
var path = require('path')

// This function is something you would put into app.use()
module.exports = function usersRoute (req, res, next) {
  // Do not use require() here! It will be executed per-request!
}
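A minimal sketch of this idea, assuming all routes live in a single ./routes.js that exports a registration function (hypothetical layout); instead of unregistering individual routes it delegates to a swappable Router:
var express = require('express')
var app = express()

function buildRouter () {
  // Drop the cached module so require() loads the updated file
  delete require.cache[require.resolve('./routes.js')]
  var registerRoutes = require('./routes.js') // hypothetical routes module
  var router = express.Router()
  registerRoutes(router)
  return router
}

var currentRouter = buildRouter()

// Delegate to whatever router is current, so it can be swapped atomically
app.use(function (req, res, next) { currentRouter(req, res, next) })

app.post('/update', function (req, res) {
  // Synchronous swap; requests already in flight keep the old router
  currentRouter = buildRouter()
  res.send('code reloaded')
})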

I think that instead of polling the server you can use a WebSocket.
That way, when there's a change in the config file you mentioned, the server can 'emit' a message to the users so they refresh their data.
If you are using Node.js and Express, this will help you:
Socket.io with NodeJS
The server will wait for a signal from some user (or anything else) and emit the signal to all the users, so they get the new data.
Node.js:
var express = require('express');
var app = express();
var server = require('http').createServer(app);
var io = require('socket.io')(server);
var port = process.env.PORT || 3000;

server.listen(port, function () {
  console.log('Server listening at port %d', port);
});

app.use(express.static("/PATH_TO_PROJECT"));

io.on('connection', function (socket) {
  socket.on('someone update data', function (data) {
    // broadcast to every other connected client
    socket.broadcast.emit('data updated', { params: "x" });
  });
});
Meanwhile, the client will be listening for any change:
View.js:
var socket = io();

socket.on('data updated', function (data) {
  liveUpdate(data);
});
I hope I understood correctly what you asked

This is a good problem to solve.
A possible solution could be:
Derive the controllers on each path from a parent controller. The parent controller can turn a flag ON (a property / file) when a request arrives and turn it OFF when the response is sent back.
Now subclass this parent controller for each Express endpoint facing the front end. If you make a request to '/update', the update controller would know through the flag whether the server is busy, and send back a reply saying whether the update was successful or not.
For update failures the front end could post back to the '/update' endpoint with some back-off scheme, as in the sketch below.
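A minimal sketch of this scheme (hypothetical names; a counter is used instead of a single ON/OFF flag so that overlapping requests are tracked correctly):
let inFlight = 0;

class ParentController {
  handle(req, res) {
    inFlight++; // "flag" ON when a request arrives
    res.on('finish', function () { inFlight--; }); // OFF when the response is sent
    this.process(req, res);
  }
  process(req, res) { /* overridden by each endpoint's subclass */ }
}

class UpdateController extends ParentController {
  process(req, res) {
    if (inFlight > 1) { // 1 = this /update request itself
      return res.json({ updated: false, busy: true });
    }
    // safe to apply the update here, then:
    res.json({ updated: true });
  }
}

// Wiring it to an endpoint:
// app.get('/update', (req, res) => new UpdateController().handle(req, res));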
This scheme might work for you ...

Related

The relationship between front end and middleware

I have a front end application to which I would like to return results from an Express backend. Let's just call those results country and city for this reference.
I have done a bunch of searching, but I cannot find any solid resources on the relationship between the front end and middleware. Yes, I know what these things are, and the order in which they should flow, but the confusion sits with:
Do I need to connect my front end and middleware? How so?
If I am already connected to my backend from the front end, would I also have to connect to middleware?
How would I return the country and city from the middleware and/or express backend?
Any other info you think would be helpful for the greater dev community who is still learning would be beneficial.
While you could return data from a middleware, it's probably not what you are trying to do. A middleware is a piece of code that is executed between the time the request is received by your backend and the time the resource is fetched. In a middleware you could do things such as check if a user has access to a certain resource, or authenticate a user by some sort of credential passed with the request.
Either way, the way you would typically make requests from your front-end to your backend is via an XmlHttpRequest. Those requests are usually asynchronous, so their use will not block the whole page while they are being executed. There are many ways to create an XmlHttpRequest. The native JavaScript way is kinda ugly, so I would suggest using the fetch API instead. You could also go with a third-party library if you need to do more complex stuff. I personally like axios, but this is up to you.
To give you a better understanding of what Express is doing: it's basically an infinite loop that waits for HTTP requests. You define routes that execute functions which return data.
Here is a basic example. Note that this script is executed via Node.js:
// myserver.js
const express = require('express')
const app = express()

app.get('/cities', (req, res) => {
  const cities = [] /** somehow get all the cities **/
  res.json(cities)
})

/** the rest of the server... **/
/** For example, the route for Countries **/

app.listen(3000) // serve the API on localhost:3000
In the previous example, we've built a basic server that listens on the URL localhost:3000/cities and executes a function when this URL is fetched. That function fetches all the cities and returns them as JSON.
In your frontend, you would need to make an XmlHttpRequest that calls this URL, to get the server to execute the function, which will return the data. Phew... I hope I did not lose you there.
A typical example would be a simple call using the fetch api.
Please note that this script is executed in the browser.
// myclient.js
async function fetchAllCities() {
  const response = await fetch('http://localhost:3000/cities');
  const cities = await response.json(); // parse the JSON body
  console.log(cities);
}

// just for fun, we add a click listener on a button and call the function defined above.
document.getElementById('myButton').addEventListener('click', async function() {
  // we fetch the cities when we click on the button!
  await fetchAllCities();
});
In the previous example, I am using the fetch function to call the url we declared in our Express server.
I'm also using async/await, which can be a little tricky, but it just means: wait for the data to be there before going forward.
I highly suggest reading on the subject. Here are some references.
How do I return the response from an asynchronous call?
Understanding async/await on NodeJS.
Await from MDN
I hope this brief overview of XmlHttpRequest helped you understand the basics of how an API works.
Middleware is used to help the back-end do its job in processing incoming requests. It does not exist separately from the back-end; it's part of the back-end. For example, you might have middleware that checks to see if an incoming request is properly authorized/authenticated before the route can be handled by its regular route handler.
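For example, a minimal sketch of such an authorization-checking middleware (the check itself is hypothetical):
function requireAuth(req, res, next) {
  if (!req.headers.authorization) {
    return res.status(401).send('Not authorized'); // stop; the route handler never runs
  }
  next(); // hand the request on to the route handler
}

app.get('/secret', requireAuth, (req, res) => {
  res.send('only reached when requireAuth() calls next()');
});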
Do I need to connect my front end and middleware? How so?
No. Your front-end sends requests to the back-end. The back-end may or may not use middleware to service the request. That's entirely up to the implementation in the back-end and what it needs to do for any given request.
If I am already connected to my backend from the front end, would I also have to connect to middleware?
No. You don't separately connect to middleware. You connect to your back-end and the back-end may or may not use middleware to do its job (something the front-end will have no knowledge of).
How would I return the country and city from the middleware and/or express backend?
You would have to show more details about what you're actually trying to return back from a request, but a common data format is JSON, so you could construct a JavaScript object with your desired response and then send it back to the client as the response to the incoming request using either res.json(someObj) or res.send(someObj) (both do the same thing if someObj is a JavaScript object).
For example:
app.get("/getsomething", (req res) => {
// do some processing here to get cityResult and countryResult
// construct object to send back to client
const obj = { city: cityResult, country: countryResult};
// send this object as JSON back the the client as the response to this
// incoming request
res.json(obj);
});

How to make express Node.JS reply a request during heavy workload?

I'm creating a Node.js web processor. Its processing takes ~1 minute. I POST to my server and get the status by using GET.
This is my simplified code:
// Configure Express
const express = require('express');
const app = express();
app.listen(8080);

app.post('/clean', async function(req, res, next) {
  // start process
  let result = await worker.process(data);
  // Send result when finished
  res.send(result);
});

// reply with the status when asked
app.get('/clean', async function(req, res, next) {
  res.send(worker.status);
});
The problem is: the server is working so hard on the POST /clean process that GET /clean is not replied to in time.
All GET /clean requests are replied to only after the worker finishes its task and frees the processor to respond to the request.
In other words, the application is unable to respond during the workload.
How can I get around this situation?
Because node.js runs your Javascript as single threaded (only one piece of Javascript ever running at once) and does not time slice, as long as your worker.process() is running its synchronous code, no other requests can be processed by your server. This is why worker.process() has to finish before any of the http requests that arrived while it was running get serviced. The node.js event loop is busy until worker.process() is done, so it can't service any other events (like incoming http requests).
These are some of the ways to work around that:
Cluster your app with the built-in cluster module so that you have a bunch of processes that can either work on worker.process() code or handle incoming http requests.
When it's time to call worker.process(), fire up a new node.js process, run the processing there and communicate back the result with standard interprocess communication. Then, your main node.js process stays ready to handle incoming http requests near-instantly as they arrive.
Create a work queue of a group of additional node.js processes that run jobs that are put in the queue and configure these processes to be able to run your worker.process() code from the queue. This is a variation of #2 that bounds the number of processes and serializes the work into a queue (better controlled than #2).
Rework the way worker.process() does its work so that it can do a few ms of work at a time, then return back to the message loop so other events can run (like incoming http requests) and then resume its work afterwards for a few more ms at a time. This usually requires building some sort of stateful object that can do a little bit of work at a time each time it is called, but is often a pain to program effectively.
Note that #1, #2 and #3 all require that the work be done in other processes. That means that the process.status() will need to get the status from those other processes. So, you will either need some sort of interprocess way of communicating with the other processes or you will need to store the status as you go in some storage that is accessible from all processes (such as redis) so it can just be retrieved from there.
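As an illustration of option #2, a minimal sketch using the built-in child_process module (the worker script name and message shape are hypothetical):
const { fork } = require('child_process');

app.post('/clean', (req, res) => {
  // Run the heavy work in a separate Node.js process so this process's
  // event loop stays free to answer GET /clean immediately.
  const child = fork('./heavy-worker.js'); // hypothetical worker script
  child.send(req.body); // hand the job to the child over IPC
  child.once('message', (result) => res.send(result)); // reply when the child is done
});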
There's no working around the single-threaded nature of JS short of converting your service to a cluster of processes or using something experimental like Worker Threads.
If neither of these options work for you, you'll need to yield up the processing thread periodically to give other tasks the ability to work on things:
function workPart1() {
  // Do a bunch of stuff
  setTimeout(workPart2, 10);
}

function workPart2() {
  // More stuff
  setTimeout(workPart3, 10); // etc.
}

Node.js API to spawn off a call to another API

I created a Node.js API.
When this API gets called I return to the caller fairly quickly. Which is good.
But now I also want the API to call or launch a different API or function or something that will go off and run on its own. Kind of like calling a child process with child.unref(). In fact, I would use child.spawn() but I don't see how to have spawn() call another API. Maybe that alone would be my answer?
Of this other process, I don't care if it crashes or finishes without error.
So it doesn't need to be attached to anything. But if it does remain attached to the Node.js console then icing on the cake.
I'm still thinking about how to identify & what to do if the spawned process somehow gets caught up running for a really long time. But I'm not ready to cross that bridge yet.
Your thoughts on what I might be able to do?
I guess I could child.spawn('node', [somescript])
What do you think?
I would have to explore if my cloud host will permit this too.
You need to specify exactly what the other spawned thing is supposed to do. If it is calling an HTTP API, you should not launch a new process from Node.js to do that. Node is built to run HTTP requests asynchronously.
The normal pattern, if you really need some stuff to happen in a different process, is to use something like a message queue, the cluster module, or other messaging/queue between processes that the worker will monitor, and the worker is usually set up to handle some particular task or set of tasks this way. It is pretty unusual to be spawning another process after receiving an HTTP request since launching new processes is pretty heavy-weight and can use up all of your server resources if you aren't careful, and due to Node's async capabilities usually isn't necessary especially for things mainly involving IO.
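For instance, going back to the HTTP case, a minimal sketch of calling another API asynchronously without spawning anything (the URL is hypothetical):
const http = require('http');
const express = require('express');
const app = express();

app.put('/test', function (req, res) {
  // Fire off the other API call and reply to the caller immediately.
  http.get('http://other-service.example/api/task', function (apiRes) {
    apiRes.resume(); // drain the response; we don't care about the body
  }).on('error', function () { /* per the question, failures are ignored */ });

  res.sendStatus(200);
});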
This is from a test API I built some time ago. Note I'm even passing a value into the script as a parameter.
router.put('/test', function (req, res, next) {
  var u = req.body.u;
  var cp = require('child_process');
  // Note: the option is 'detached' (not 'detach'); combined with
  // stdio: 'ignore' and unref(), the child runs independently of the parent.
  var c = cp.spawn('node', ['yourtest.js', '"' + u + '"'], { detached: true, stdio: 'ignore' });
  c.unref();
  res.sendStatus(200);
});
The yourtest.js script can be just about anything you want it to be. But I found I learned more by first treating the script as a Node.js console app. FIRST get your yourtest.js script to run without error by manually running/testing it from your console's command line: node yourtest.js yourparametervalue. THEN integrate it into the child.spawn().
var u = process.argv[2];
console.log('f2u', u);

function f1() {
  console.log('f1-hello');
}

function f2() {
  console.log('f2-hello');
}

// Wait 3 seconds before executing f2(). I do this just for troubleshooting:
// you can watch node.exe open and then close in Task Manager if node.exe
// is running long enough.
setTimeout(f2, 3000);
f1();

How to close the socket server in the node-rtsp-stream module in Node.js

I am using the onvif Node.js module and the node-rtsp-stream module to convert an RTSP stream to an image stream that can be used in PhoneGap (RTSP streams are not supported in PhoneGap, I think).
Here I am using Express, so whenever I send a request to /livestreaming it works fine the first time, but on the next load it tries to create another instance on the same port number, which creates an issue. Is there a way to check whether the server is running, close it on every request, and start it again, so that we don't get the 'port already in use' error? If there is a better way than this, please let me know.
Below is the code which I tried.
app.get('/livestreaming', function (req, res) {
  if (cam !== null) {
    cam.getStreamUri({ protocol: 'RTSP' }, function (err, stream) {
      newsocket = new Stream({
        name: 'mysoc',
        streamUrl: stream.uri,
        wsPort: 8888
      });
    });
  } else {
    res.json({ "error": "connect to camera" });
  }
});
The node-rtsp-stream library does not provide any way to check if the port is already in use, nor any way to close the socket server.
So, from my point of view you have two options:
Try connecting to your socket server's port to see if it is available, for example by doing a ping, and only launch a new Stream in case it is not.
Since the node-rtsp-stream library is very simple, and I have already worked with it, you can add this code right after your newly created stream:
newsocket.wsServer.on('error', function() {
  newsocket.mpeg1Muxer.stream.kill();
});
So, where does this come from? If you take a look at the library, you will find that wsServer is your socket server and mpeg1Muxer is the stream opened with your camera. Because of the 'already in use' error, the server won't launch, but you should also kill the ffmpeg process. This way, if the server is already running, nothing happens, and if not, it will launch.
This last solution is a little bit tricky, but I think it will work.
Hope it helps

Node.js Routes: Adding route handlers to an already instantiated http server

How would I add route handlers to an http server that already exists and has been instantiated?
All the routers I've found (including express) seem to require that they be passed into the http.createServer() method.
For example with express:
var server = http.createServer(app);
My main criteria:
Add routes to an existing server the way something like sockjs does it.
Be agnostic to whatever router is already being used (if there is one)
Not rely on an existing router "app" object to add the routes (creating a new one using a routing library would be fine).
Example: passing server into SockJS
var http_server = http.createServer(); // agnostic
sockjs_server.installHandlers(http_server, options);
http_server.listen(...);
The way it's done in the sockjs source seems quite cryptic... but I think it involves traversing existing handlers and overwriting them with a custom router/handlers.
Thanks so much for any help!
Well, an http server is nothing but an EventEmitter. It has a request event which is the one that handles the requests coming from the clients.
So, one thing you can do is make a wrapper function around the current handler function. For instance, let's assume the existence of some Express application:
var express = require('express');
var app = express();

app.get('/', function(req, res){
  res.send('Hello World!');
});

var server = app.listen(8080);
So, now, you can simply go over the list of currently registered request listeners in the server, remove the old listener functions and wrap them in a new one that handles your request the way you want. For instance, you can now create your own router and determine through which pipeline to send a request depending on its contents (i.e. path, content type, accepted languages, etc).
server.listeners('request').forEach(function(listener){
  server.removeListener('request', listener);
  server.on('request', function(req, res){
    console.log('Before');
    listener(req, res);
    console.log('After');
  });
});
In the example above you can see that I run a couple of console.log statements around the actual execution of the listener function. In this case the listener function is actually the Express main request handler. The express handler represents a pipeline, and by creating this new wrapper function you just added a new pipe at the beginning of the pipeline.
This technique would allow you to handle the request first and decide whether you want to send your own response and terminate the request here, or send it through a different pipeline or send the request down the express pipeline (i.e. the old listener function available to your new request handler closure).

Categories

Resources