Run process remotely from an http page - javascript

I'm using OBS-WebSocket.js as a component (served from my web server) to switch OBS scenes through OBS WebSockets. This approach is compatible with all platforms: I load the main HTML page from the server and switch scenes by clicking a button.
Now I'd like a way to kill and reload OBS from that same HTTP page in case it hangs. Is there a similar way to start a process on the server? I mean:
a process running on the server, listening on a TCP port and ready to start a batch file (or Python script)
a .js file invoked from the HTTP page, able to send the proper command to that listening port.
Does something like this already exist? Thank you a lot

You can set up a simple Express server that listens for HTTP POST requests on e.g. /restart, and use the child_process module to spawn a process and later kill it.
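For example, a minimal sketch; the OBS executable path, the port, and the route name are assumptions you'd adapt to your setup:

var express = require('express');
var spawn = require('child_process').spawn;

var app = express();
var obs = null;

function startObs() {
  // path is an assumption; point this at your OBS binary, or at a batch/python wrapper
  obs = spawn('C:\\Program Files\\obs-studio\\bin\\64bit\\obs64.exe', [], {
    cwd: 'C:\\Program Files\\obs-studio\\bin\\64bit'
  });
}

app.post('/restart', function (req, res) {
  if (obs) obs.kill(); // a truly hung GUI app on Windows may need taskkill /F instead
  startObs();
  res.send('OBS restarted');
});

startObs();
app.listen(5050);

From the HTML page you would then call something like fetch('http://yourserver:5050/restart', { method: 'POST' }); note that if the page is served from a different origin you will also need to send CORS headers from this server.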

Related

Listening on running server

How can I listen for a server that is running on port 3000? I want to show a notification to the client when my server is switched off, but I don't know how to check whether the server is running. Can somebody tell me how to listen for server events?
I read about socket.io; is it a solution?
Thanks for any help!
Do you have control of the server, as in: are you in control of the infrastructure the server runs on and responsible for turning it off and on again? If so, a separate socket-based server can be set up that connects with the client to let it know whether the main server is on or off. You can program a notification using this socket-based server to inform the client.
If you don't have control of the server at all, you can poll from the client side to find out whether the server is available. You can also wrap all your client calls with a check that the server is available, and if the request times out, set the notification accordingly.
Please note that unless the server pushes notifications via sockets, there is no clean way for the client to instantly know that the server is down, short of very frequent polling, which in turn may add a lot of extra load on the server.
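A minimal client-side polling sketch; the /ping route and the showNotification/hideNotification helpers are hypothetical placeholders:

function checkServer() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', 'http://localhost:3000/ping');
  xhr.timeout = 2000; // treat a slow answer as "down"
  xhr.onload = function () { hideNotification(); };
  xhr.onerror = xhr.ontimeout = function () { showNotification('server is down'); };
  xhr.send();
}
setInterval(checkServer, 5000); // mind the extra load mentioned above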

How to create a server that is only internal to the app in node / npm or modify any response body from outgoing requests

I am trying to develop a node app and require a server that can only be used by the app internally.
I have tried instantiating a server without listening to the port, but can't do anything with it from that point forwards:
let http = require("http");
http.createServer(function (req, res) {
  // custom code
}); // .listen() is never called, so nothing can ever reach this server
This app is being built with NW.js, and I need to intercept any outgoing requests (including file resources: CSS, JS, images, etc.) and modify the response, but I am not having any success with it except when I use a server for this purpose.
The problem is that it then becomes possible to open that server in any browser, and I want it to be usable only inside the app; alternatively, I need another way to intercept the app's outgoing requests so that the response body can be modified.
I have tried Keith's suggestion of using a service worker to intercept requests, but in my case I could not load a service worker from the local environment into a live environment (for example, running a local sw file in a stackoverflow page), so that suggestion ended there.
Stdob's suggestion of using a proxy ended up being redundant and more troublesome than my original attempt.
In the end I went with my original attempt as follows:
Using chrome.webRequest.onBeforeRequest (Chrome API) and a local node server.
The server is created on an arbitrary port to reduce the risk of hitting a port that is already in use.
The chrome API redirects all connections to the local server (e.g. url: http://127.0.0.1:5050), and the server then handles the requests as needed, returning the requested files modified or intact.
Last step: add a unique header with a unique value that only the app knows, so that the server cannot be accessed from outside the app.
It is not the best solution, ideally I would prefer to have something like Firefox's webRequest.filterResponseData, but until Chrome implements that, this will have to do.
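For reference, a rough sketch of that setup; the port, header name, and token here are illustrative, not the actual values used:

// in the NW.js app: redirect everything except the local server itself
chrome.webRequest.onBeforeRequest.addListener(function (details) {
  if (details.url.indexOf('http://127.0.0.1:5050/') === 0) return {}; // avoid a redirect loop
  return { redirectUrl: 'http://127.0.0.1:5050/?url=' + encodeURIComponent(details.url) };
}, { urls: ['<all_urls>'] }, ['blocking']);

// attach the shared secret so outside browsers can be rejected
chrome.webRequest.onBeforeSendHeaders.addListener(function (details) {
  details.requestHeaders.push({ name: 'X-App-Token', value: 'secret-only-the-app-knows' });
  return { requestHeaders: details.requestHeaders };
}, { urls: ['http://127.0.0.1:5050/*'] }, ['blocking', 'requestHeaders']);

// in the local node server
var http = require('http');
http.createServer(function (req, res) {
  if (req.headers['x-app-token'] !== 'secret-only-the-app-knows') {
    res.writeHead(403);
    return res.end();
  }
  // fetch the ?url= target, modify the body as needed, and return it
}).listen(5050, '127.0.0.1');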

Send message from Node.js Server to Client via function call

I want to send messages from my server to my client when a function is called. Using the code from this answer, messages can be successfully sent from server to client every second.
I am building an application that runs node in the background. Ideally, I would like to be able to click a button that calls a function in the node server.js file, which takes a parameter and sends that message to the client. The function in question would look like this:
function sendToClient(message) {
  clients[0].emit('foo', message); // was `msg`, which is undefined here
}
This would send the passed in message to the first client. How can I go about this?
Alternatively: after you run node server.js in a terminal, is there a way to call a function from the server file from that terminal? If so, that could be a possible solution.
The best way to send messages from server to client right now is using webSockets. The basic concept is this:
Client A loads web page from server B.
Client A runs some javascript that creates a webSocket connection to server B.
Server B accepts that webSocket connection and the socket stays open for the duration of the life of the web page.
Server B registers event handlers to handle incoming messages from the web page.
Client A registers event handlers to handle incoming messages from the server.
At any point in time, the server can proactively send data to the client page and it will receive that data.
At any point in time, the client may send data to the server and it will receive that data.
A popular node.js library that makes webSocket support pretty easy is socket.io. It has both client and server support so you can use the same library for both ends of the connection. The socket.io library supports the .emit() method mentioned in your question for sending a message over an active webSocket connection.
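A minimal sketch of both ends with socket.io; the port and event names here are arbitrary:

// server.js
var io = require('socket.io')(3000);
io.on('connection', function (socket) {
  socket.emit('foo', 'hello from the server'); // server -> client
  socket.on('bar', function (data) {           // client -> server
    console.log('client says:', data);
  });
});

// client page (after loading /socket.io/socket.io.js)
var socket = io('http://localhost:3000');
socket.on('foo', function (msg) { console.log(msg); });
socket.emit('bar', 'hello from the client');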
You don't directly call functions from client to server. Instead, you send a message that triggers the server to run some particular code or vice versa. This is cooperative programming where the remote end has to be coded to support what you're asking it to do so you can send it a message and some optional data to go with the message and then it can receive that message and data and execute some code with that.
So, suppose you wanted the server to tell the client anytime a temperature changed so that the client could display the updated temperature in their web page (I actually have a Raspberry Pi node.js server that does exactly this). In this case, the client web page establishes a webSocket connection to the server when the page loads. Meanwhile, the server has its own process that monitors temperature changes. When it sees that the temperature has changed some meaningful amount, it sends a temperature-change message to each connected client with the new temperature data. The client receives that message and data and then uses it to update its UI to show the new temperature value.
The transaction could go the other way too. The client could have a matrix of information that it wants the server to carry out some complicated calculation on. It would send a message to the server with the type of calculation indicated in the message type and then send the matrix as the data for the message. The server would receive that message, see that this is a request to do a particular type of calculation on some data, it would then call the appropriate server-side function and pass it the client data. When the result was finished on the server, it would send a message back to the client with the result. The client would receive that result and then do whatever it needed to with the calculated result.
Note, if the transactions are only from client to server with a response then coming back from the server, a webSocket is not needed for that type of transaction. That can be done with just an Ajax call. Client makes ajax call to the server, server formulates a response and returns the response. Where webSockets are most uniquely useful is if you want to initiate the communication from the server and send unsolicited data to the client at a time that the server decides. For that, you need some continuous connection between client and server which is what a webSocket is designed to be.
It appears there may be more to your question about how to communicate from a C# server to your node.js server so it can then notify the client. If this is the case, then since the node.js server is already a web server, I'd just add a route to it so the C# server can pass data over a simple http request, which the node.js server can then use to notify the appropriate client via the above-described webSocket connection. Depending upon your security needs, you may want to implement some level of security so that the http request can only be sent locally from your C# server, not from the outside world.
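That route could look something like this; a sketch only, where the route name, payload shape, and the clients map are assumptions:

// assumes app.use(require('body-parser').json()) and a `clients` map
// populated in io.on('connection', ...)
app.post('/notify-client', function (req, res) {
  // crude local-only guard; adapt to your security needs
  if (req.ip !== '127.0.0.1' && req.ip !== '::1' && req.ip !== '::ffff:127.0.0.1') {
    return res.sendStatus(403);
  }
  var socket = clients[req.body.clientId];
  if (socket) socket.emit('foo', req.body.message);
  res.sendStatus(socket ? 200 : 404);
});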
In order to send a command to a client via the console there are two options, single-process or multi-process:
Single Process
When the command is run from the console, a temporary socket.io server starts listening on a port.
Once the client connects, send it the message.
Disconnect and stop the console app.
The reason this works is that socket.io clients are always trying to connect to the server. As long as the browser is open, they will try to connect. So even if the server only comes on for a few seconds, it should connect and receive messages. If the client is not running then simply create a timeout that will stop the console app and inform the user that it failed to broadcast the command.
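A sketch of that console app; the port and event name are arbitrary and must match whatever the page connects to:

// broadcast.js -- run as: node broadcast.js "restart"
var io = require('socket.io')(3000);
var giveUp = setTimeout(function () {
  console.log('no client connected, giving up');
  process.exit(1);
}, 5000);

io.on('connection', function (socket) {
  clearTimeout(giveUp);
  socket.emit('command', process.argv[2]);
  setTimeout(function () { process.exit(0); }, 500); // let the message flush first
});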
While this approach is very easy, it's neither robust nor efficient. For small projects it would work, but you'll have better luck with the next approach:
Multi-Process
This approach is much more reliable and expandable, and it is simply better architecture. Here's the basic summary:
Spin up a stand-alone server that connects with clients.
Create a very similar console node app that will send a message to the server to forward on to clients.
Console app completes but the main server stays up and running.
This technique is just interprocess communication. Luckily you already have Socket.IO on the primary server, so your console app just needs to be another socket.io client. Check out this answer on how to implement that.
The downside to this is that you must secure that socket communication. Maybe you can restrict it to localhost connections only, so that access to the server machine is required to send the run-command message (you absolutely don't want web clients executing code on other web clients).
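As a sketch, the console app is just another socket.io client, and the primary server checks where the connection came from; the event name and the exact address check are assumptions:

// console app
var socket = require('socket.io-client')('http://localhost:3000');
socket.on('connect', function () {
  socket.emit('run-command', process.argv[2]);
  socket.disconnect();
});

// primary server: only trust localhost for this event
io.on('connection', function (socket) {
  socket.on('run-command', function (cmd) {
    var addr = socket.handshake.address;
    if (addr !== '127.0.0.1' && addr !== '::1' && addr !== '::ffff:127.0.0.1') return;
    io.emit('command', cmd); // forward to the web clients
  });
});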
Overall it comes down to the needs of your project. If this is a quick little experiment you want to try out, then just go single-process. But if you will be hosting an express server (other web servers are available) that needs to be running anyway, then multi-process is the way to go!
Example
I've created a simple example of this process using only Socket.io. Instructions to run it all are in the readme.
Implementations
In order to have C# (app) -> Node.js (server) -> Browser (client) communication, I would do one of the following:
Use redis as a message queue (add items to the queue with the app, consume with the server, which sends commands to client).
Live on the wild side and merge your NodeJS and C# runtimes with Edge.js. If you can run NodeJS from C# you will be able to send messages from your app to the server (messages are then handled by the server, just like any other socket.io client-server model).
Easier, but still kinda hacky: use System.Diagnostics.Process to start the console tool explained in the Multi-Process section. This simply runs the process with arbitrary parameters. Not very robust, but worth considering; simple means harder to break (and again, messages are then handled by the server, just like with any other socket.io client-server model).
I would create a route for sending the message, taking the message from POST parameters. From the CLI you can use curl, or call it from anywhere really:
// assumes JSON body parsing middleware, e.g. app.use(require('body-parser').json())
app.post('/create', function (req, res) {
  var data = req.body;
  if (data.type && data.content && data.listeners) {
    notify(data);
  }
  res.sendStatus(200);
});

var notify = function (notification) {
  ns_mynamespace.in(notification.listeners.users)
    .emit("notification", {
      id: notification.id,
      title: 'hello', text: notification.content
    });
};
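For instance, from the CLI, with field names matching the checks above and assuming JSON body parsing is enabled:

curl -X POST http://localhost:3000/create -H "Content-Type: application/json" -d '{"type":"alert","content":"hello","listeners":{"users":"room1"}}'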

Node.js / Nginx : Request times out at Nginx before processing finishes at node.js

I have a Nginx/Node.js server setup. The basic process is that I upload a file to nginx from a browser which forwards it to node.js. Node.js processes the file and returns an output to the browser via nginx.
While it works fine for most files, it breaks when the uploaded file is too large. After receiving a file, my node process starts working on it, but before it finishes, the request times out at nginx. Node keeps running and successfully completes the processing, but the result is of no use to me.
What can I do to solve my problem? Increase the nginx timeout (but to what limit)? Try to speed up my processing at node (I am already trying that, but I can only do so much)? Try using the node module socket.io (I don't know how, and I am not even sure whether it will work)?
Thanks in advance!
Your node process shouldn't wait for the processing to be done before it returns the response. Process the files in background workers by queuing them. You could even go one step further and upload directly to something like Amazon S3 without touching your stack at all, then queue the file for processing.
Decoupling is the solution. There are just different ways to implement it.
Edit:
Here's a concrete example of how I handle file uploads:
The node server instance stores the file in S3 using https://github.com/aws/aws-sdk-js
I then queue a processing command using https://github.com/learnboost/kue (other popular options include RabbitMQ)
The server responds with a success message.
Now as soon as it's idling, a second process (the worker) fetches the file from S3 and processes it.
On the client you can poll for the current processing state (and even show a progress bar, if your worker is able to calculate and store progress) and tell the user when it's done. You can optionally use something like socket.io instead of polling and allow both the server process and the worker process to push messages to the client (again, this could be decoupled by using something like redis pub/sub or RabbitMQ to store the messages, which a third "chat" process then sends out to the clients).
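A sketch of the queue part with kue; the job type, data fields, and the upload middleware are assumptions:

// web process: respond as soon as the job is queued
var kue = require('kue');
var queue = kue.createQueue();

app.post('/upload', function (req, res) {
  // assume earlier middleware stored the file in S3 and exposed its key
  queue.create('process_file', { s3Key: req.s3Key }).save(function (err) {
    if (err) return res.sendStatus(500);
    res.json({ status: 'queued' });
  });
});

// worker process (a separate script)
queue.process('process_file', function (job, done) {
  // fetch job.data.s3Key from S3, run the processing, then
  done();
});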

nodejs without stopping

Is there a way I can make nodejs reload every time it serves a page?
I want to do this during the dev cycle so I can avoid having to shut down and start up on each code change.
Edit: Try nodules and their require.reloadable() function.
My former answer was about why not to reload the process of Node.js and does not really apply here. But I think it is still important, so I leave it here.
Node.js is evented IO and crafted specifically to avoid multiple threads or processes. The famous C10k problem asks how to serve 10 thousand clients simultaneously. This is where threads don't work very well. Node.js can serve 10 thousand clients with only one thread. If you were to restart Node.js each time you would severely cripple Node.js.
What does evented IO mean?
To take your example: serving a page. Each time Node.js is about to serve a page, a callback is called by the event loop. The event loop is inherent to each Node.js application and starts running after initializations have completed. Node.js on the server-side works exactly like client-side Javascript in the browser. Whenever an event (mouse-click, timeout, etc.) happens, a callback - an event handler - is called.
And on the server side? Let's have a look at a simple HTTP server (source code example taken from Node.js documentation)
var http = require('http');
http.createServer(function (request, response) {
response.writeHead(200, {'Content-Type': 'text/plain'});
response.end('Hello World\n');
}).listen(8124);
console.log('Server running at http://127.0.0.1:8124/');
This first loads the http module, then creates an HTTP server and tells it to invoke the inner function starting with function (request, response) every time an HTTP request comes in, then makes the server listen to port 8124. This completes almost immediately so that console.log will be executed thereafter.
Now Node.js event loop takes over. The application does not end but waits for requests. And voilà, each request is answered with Hello World\n.
In summary: don't restart Node.js, but let its event loop decide when your code has to be run.
Found Nodemon, exactly what I wanted: https://github.com/remy/nodemon
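If it helps: install it with npm install -g nodemon, then run nodemon server.js instead of node server.js and it will restart the process whenever a watched file changes.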
I personally use spark2 (the fork) and will switch to Cluster as soon as I find the time to test it. Among other things, those two will listen for file changes and reload the server when appropriate, which seems to be what you're looking for.
There are many apps for doing this, but I think the best is Cluster, since you get zero downtime for your server. You can also set up multiple workers with it, or manually start/stop/restart/show stats with the CLI REPL functionality.
No. Node.js should always run. I think you may have misunderstood the concept of nodejs. In order for nodejs to serve pages, it has to be its own server. Once you start a server instance and start listening on ports, it will run until you close it.
Is it possible that you've called a function to close the server instead of just closing a given stream?
