Non-blocking data transfer between separate C and JavaScript processes

The Goal
I have two processes running:
Node.js server which handles external communication (to other computers)
C code which runs in an infinite loop doing real-time processing
The node.js server will receive data from an external computer, send it to the C code (which is already running), wait for a reply, and send that reply back to the external computer.
HOWEVER, the C code cannot wait for input from the node.js server. It needs to run continuously, using the most recent data from the JS code (updating those values each time new data is received). In other words, the C code needs non-blocking input from the node.js server. The node.js server should get an event (bound to a callback) each time the C code periodically sends information back.
What I have done
The node.js code already handles external IO correctly (it's fairly complete)
The C code in isolation does the task I want, just without getting data from the node.js code
I am now trying to combine them so that the C code gets its sensor data from the node.js code (as I described above).
How I think I can do it
My current best guess is to use sockets (I think they can be non-blocking?). I've never done this before, though, so before I spend a lot of time learning how sockets work and writing a bunch of code, I want to make sure this is a method that might actually work. So I'd like to hear suggestions on the easiest way to achieve what I described. If it does turn out to be sockets, then at least I know I'm not wasting my time. Thanks for any suggestions!
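For reference, a minimal sketch of what the socket approach might look like on the Node.js side, using a Unix domain socket that the C program connects to. The socket path and the newline-delimited JSON framing are assumptions, not anything from the question, and the C side would keep its own reads non-blocking (e.g. with poll() or O_NONBLOCK) so its loop never stalls:

```javascript
// Sketch of the Node.js side: a Unix domain socket server the C process connects to.
const net = require('net');

const SOCKET_PATH = '/tmp/realtime.sock'; // hypothetical path

const server = net.createServer((cSocket) => {
  // Data arriving from the C process fires this event; bind your callback here.
  cSocket.on('data', (chunk) => {
    console.log('reply from C process:', chunk.toString());
    // ...forward the reply to the external computer here...
  });

  // Whenever new sensor data arrives from the external computer, push it to C.
  // write() does not block the Node event loop; the C side polls its end of the
  // socket so its real-time loop keeps running with the latest values.
  function sendToC(sensorData) {
    cSocket.write(JSON.stringify(sensorData) + '\n');
  }

  // Example only: send a dummy update once a second (placeholder for real input).
  setInterval(() => sendToC({ value: Math.random() }), 1000);
});

server.listen(SOCKET_PATH, () => console.log('listening on', SOCKET_PATH));
```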

Related

How do I divide tasks between frontend and backend properly?

I have an idea to make something similar to Workflowy but with some new features.
(Workflowy is basically a note-taking app which beautifully organises all your notes as an endless tree)
At first, I implemented the logic in Python. It works in a terminal by printing notes line by line and then waiting for a command.
Is it a good idea to keep all the logic on the server and use JS only to render items and send commands to the server?
For instance, if I want to move an entire folder into another folder, there are two ways of doing this:
Way 1: In Python, which receives a command from JS such as 'move folder x to folder y', processes it, and sends back a result to render.
Way 2: In JS, which then has to understand the whole folder structure and logic. In this case, the app uses the server only for storing data.
I have a feeling that way 2 (using JS for all the logic and Python only for saving data) is more appropriate, but that means I would have to rewrite everything from scratch.
Is way 1 also reasonable?
Many thanks in advance!
It depends on the application you are making.
For example, suppose you want to display thousands of records in an HTML page and the data is stored in a JSON file. If the server sends the HTML and JSON files to the client, and a client-side script then reads the JSON and renders it into the page, it can be slow, because the client device may not be as powerful as the server.
So for performance, do the heavy tasks on the server side. This can cost a bit more bandwidth, because the client never holds the data in a processed form, so every new operation on the data means another request to the server.
In the opposite case you save bandwidth at the price of somewhat lower performance, because the heavy tasks run on the client side.
It also depends on which devices your clients are using.
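For concreteness, way 1 from the question could look roughly like this on the client side: the browser only sends a command and renders whatever the server returns. The /command endpoint, the payload shape, and the render() helper are hypothetical, just to illustrate the split:

```javascript
// Way 1 sketch: server-side Python owns the tree logic; JS only sends commands and renders.
async function moveFolder(folderId, targetId) {
  const response = await fetch('/command', {             // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ action: 'move', folder: folderId, target: targetId }),
  });
  const updatedTree = await response.json(); // server does the logic, returns the new tree
  render(updatedTree);                       // render() is assumed; it just redraws the outline
}
```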

How to use pdfkit npm in async manner

I have written an application in Node.js which takes input from the user and generates PDF files based on a few templates.
I am using the pdfkit npm module for this purpose. My application is running in production, but it is very slow. Below are the reasons:
The problem I am facing:
It works in a synchronous manner. To give an example: suppose a request comes in to generate a PDF; the application starts processing and, after processing, returns the response with the generated PDF URL. But if multiple requests come to the server, it processes them one by one (synchronously).
All requests in the queue have to wait until the previous one is finished.
Most of the time my application returns a Timeout or Internal Server Error.
Why I cannot change the library:
I have written 40 templates in JS for pdfkit, and each template is 1000-3000 lines.
If I change the library, I have to rewrite those templates for the new one.
It would take many months to rewrite and test everything properly.
The solution I am using now:
I now manage a queue: once a request comes in, it gets queued and an acknowledgement message is sent back to the user.
Why this solution is not feasible:
The user should be given a valid PDF URL when the request succeeds, but with the queue approach the user only gets a confirmation message, and the PDF is processed later in the queue.
What kind of solution I am seeking:
Is there any way to make this application multi-threaded/asynchronous, so that it can handle multiple requests at a time without blocking resources?
Please save my life.
I hate to break it to you, but doing computation in the order tasks come in is a pretty fundamental part of node. It sounds like loading these templates is a CPU-bound task, and since Node is single-threaded, it knocks these off the queue in the order they come in.
On the other hand, any framework would have a similar problem. Node being single-threaded means it's actually very efficient, because it doesn't lose cycles to context switching.
How many PDF-generations can your program handle at once? What type of hardware are you running this on? If it's failing on a few requests a second, then there's probably a programming fix.
For node, the more things you can make asynchronous the better. For example, any time you're reading a file in, it should be asynchronous.
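For instance, if a template or other asset is read from disk while building a PDF, the non-blocking form would look roughly like this (the file name here is just an example):

```javascript
const fs = require('fs');

// Blocking: ties up the event loop until the whole file has been read.
// const template = fs.readFileSync('templates/invoice.js', 'utf8');

// Non-blocking: other requests can be served while the read is in flight.
fs.readFile('templates/invoice.js', 'utf8', (err, template) => {
  if (err) return console.error(err);
  // ...continue building the PDF with the template here...
});
```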
Can you post the code for one of your PDF-creating request functions?

Machine Learning on the web, PHP connection with Python

I'm currently working on an ML project with geo data...
On my web page I expose the model parameters of the machine learning algorithm to the user, then I send those to an Apache server where PHP receives the parameters. In JS I use Ajax to make the POST request.
My ML algorithm is written in Python; right now it uses the argparse library to read the parameters that PHP sends (after verification) as a command through the exec() function.
I have 2 problems with this:
If the ML model takes time to calculate the results, the exec() function doesn't wait for them and returns null after some time, but if it's fast everything is OK. I already have set_time_limit(0); in my PHP file.
On my local machine it doesn't take nearly as long to calculate results as it does on the server, even though the server has better hardware, so I don't know what is going on there.
PHP 7.0.15
Python 2.7
Server Apache/2.4.18 (Ubuntu 16.04.1 LTS)
Also, is there a better way to do this?
I'd like to suggest another approach.
Whenever you call your algorithm from the command line, you have a bootstrapping cost: importing libraries (such as numpy), loading datasets, etc. Then you perform your calculations, return a response and clear the memory. So the next time you need another result, you have to go through the whole process again.
I'd suggest embedding that algorithm inside a small Flask application (you already know Python and how to use PHP for the web, so it shouldn't be that hard).
Since your Python web server will have all the libraries and datasets already loaded, it will be much faster at answering your requests.
And you can access it from PHP by making an HTTP request with curl (which has a timeout!).
I think this would be much easier and scalable.
Just my two cents!
I ended up opening a Python socket and connecting PHP through it; the process is still slower than on my local machine, but the timeout is no longer a problem.

Node.js chat without Socket.IO

I just started learning Node.js, and as I was learning about the fs.watchFile() method, I wondered whether a chat website could be efficiently built with it (and fs.writeFile()), as opposed to, say, Socket.IO, which is stable but, I believe, not 100% reliable (it relies on several fallbacks, including Flash).
fs.watchFile could perhaps also be used to keep chat histories quite simply (since JSON would be used on the spot).
The chat files could be formatted in JSON in such a way that only the last chatter's message is brought up to the DOM (or whatever makes it efficient to 'fetch' messages when the file gets updated).
I haven't tried it yet as I still need to learn more about Node, and even more to be able to compare it with Socket.IO, but what's your opinion about it? Could it be an efficient/stable way of doing chats?
fs.watchFile() can be used to watch changes to a file in the local filesystem (on the server). This will not solve your need to update all clients' chat messages in their browsers. You'll still need Web Sockets, AJAX or Flash for that (or socket.io, which handles all of those).
What you could typically do in the client is try Web Sockets first. If the browser does not support them, try XMLHttpRequest. If that fails, fall back to Flash. It's a lot of programming to do, and it has to be handled by the node.js server as well. Socket.io does all of that for you.
Also, socket.io is pretty stable. The fallback to Flash is not due to its instability but due to the lack of browser support for better solutions (like Web Sockets).
Storing chat logs in flat JSON files is not a good idea, because if you are going to manipulate them you would have to parse and serialize entire JSON objects, which becomes very slow as the size of the object increases. The watch methods of the filesystem module also don't work on all operating systems.
You also can't compare Node.js to Socket.IO because they are entirely different things. Socket.IO is a Node module for realtime transport between the browser and the server. What you need depends on what you're doing. If you need chat history, then you should be using a database such as MongoDB or MySQL. Watching files for changes is not an efficient approach; you should just send messages as they are received.
In conclusion: no, using fs.watchFile() and fs.writeFile() is a very bad idea, because race conditions would occur due to concurrent file writes, and besides, fs.watchFile() uses polling to check whether a file has changed. You should instead use Socket.IO, push messages to the other clients, and store them in a database as they are received.
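A minimal sketch of the Socket.IO approach this answer recommends might look like the following (the event name and port are arbitrary, and writing the message to a database is left out):

```javascript
const http = require('http');
const socketio = require('socket.io');

const server = http.createServer();
const io = socketio(server);

io.on('connection', (socket) => {
  socket.on('chat message', (msg) => {
    // Push the message to every connected client as soon as it is received.
    // This is also the point where you would insert it into MongoDB/MySQL.
    io.emit('chat message', msg);
  });
});

server.listen(3000);
```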
You can also use the long polling method, using JavaScript setTimeout and setInterval.
Long polling basically works on the Ajax request/response timing: the server holds the request and responds only after a certain time (say, 50 seconds) if there is no notification or message, or responds sooner with data if there is. On the client side, as soon as the client gets a response, its JavaScript immediately makes another request for new updates and waits for the next response. This cycle continues for as long as the server is running.
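A rough sketch of the client half of long polling, assuming a hypothetical /poll endpoint and a showMessage() helper:

```javascript
// Client-side long polling: the request stays open until the server has a
// message or its timeout (~50 s) elapses, then we immediately poll again.
function poll() {
  const xhr = new XMLHttpRequest();
  xhr.open('GET', '/poll'); // hypothetical endpoint
  xhr.onload = () => {
    if (xhr.status === 200 && xhr.responseText) {
      showMessage(JSON.parse(xhr.responseText)); // showMessage() is assumed
    }
    poll(); // start the next long poll right away
  };
  xhr.onerror = () => setTimeout(poll, 5000); // back off briefly on errors
  xhr.send();
}
poll();
```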

Moving node.js server javascript processing to the client

I'd like some opinions on the practical implications of moving processing that would traditionally be done on the server to be handled instead by the client in a node.js web app.
Example case study:
The user uploads a CSV file containing a year's worth of their bank statement entries. We want to parse the file, categorise each entry and calculate cumulative values for each category, so that we can store the newly categorised statement in a db and display spending analysis to the user.
The entries are categorised by matching strings in the descriptions. There are many categories and many entries and it takes a fair amount of time to process.
In our node.js server, we can happily free up the event loop whilst waiting for network responses and so on, but if there is any data crunching or similar processing, the server will be blocked from responding to requests, and this seems unavoidable.
Traditionally, the CSV file would be passed to the server; the server would process it, save the result in the db, and send back the output of the processing.
It seems to make sense, given our single-threaded node.js server, for this processing to be handled by the browser, with the output displayed and then sent to the server to be stored. Of course the client will have to wait while this is done, but their processing will not prevent the server from responding to requests from other clients.
I'm interested to see if anyone has experience building apps using this model.
So, the question is: are there any issues in getting browsers, rather than the server, to handle, wherever possible, any processing that would block the event loop? Is this a good/sensible/viable approach to node.js application development?
I don't think trusting client-processed data is a good idea.
Instead you should look into creating a work queue that a separate process listens on, separating the CPU intensive tasks from your node.js process handling HTTP requests.
My proposed data flow would be (a rough code sketch follows the list):
HTTP upload request
App server (save raw file somewhere the worker process can access)
Notification to 'csv' work queue
Worker processes uploaded csv file.
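A minimal sketch of that flow, assuming Node's built-in child_process module as the hand-off mechanism and a placeholder parseCsv() helper; a real deployment would more likely use a proper job queue or message broker:

```javascript
// app.js — the HTTP-facing process: saves the upload, then notifies the worker.
const { fork } = require('child_process');

const worker = fork('./csv-worker.js'); // long-lived worker process

// Called from your upload handler after the raw file has been written to disk.
function enqueueCsv(savedPath, userId) {
  worker.send({ path: savedPath, userId }); // non-blocking notification
}

worker.on('message', ({ userId, entries }) => {
  // e.g. mark the statement as processed and notify the user
  console.log(`processed ${entries} entries for user ${userId}`);
});
```

```javascript
// csv-worker.js — does the CPU-heavy parsing off the web server's event loop.
const fs = require('fs');

// Placeholder for the real parsing/categorising logic.
function parseCsv(path) {
  return fs.readFileSync(path, 'utf8').split('\n').filter(Boolean);
}

process.on('message', ({ path, userId }) => {
  const entries = parseCsv(path);
  // ...categorise the entries and save them to the database here...
  process.send({ userId, entries: entries.length });
});
```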
Although perfectly possible, simply shifting the processing to the client machine does not solve the basic problem.
Now the client's event loop is blocked, preventing the user from interacting with the browser. Browsers tend to detect this problem and stop execution of the page's script altogether. Something your users will certainly hate.
There is no way around either delegating or splitting up the work-load.
Using a second process (for example a 2nd node instance) for doing the number crunching server-side has the added benefit of allowing the operating system to use a 2nd CPU core. Ideally you run as many Node instances as you have CPU cores in the server and balance your work-load between them. Have a look at the diode module for some inspiration on how to implement multi-process communication in node.
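As an illustration of the one-instance-per-core idea, here is a minimal sketch using Node's built-in cluster module (the port is arbitrary):

```javascript
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // Fork one worker per CPU core; the OS schedules them across cores.
  os.cpus().forEach(() => cluster.fork());
} else {
  // Each worker runs its own event loop, so one blocked worker
  // does not stall requests handled by the others.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(3000);
}
```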
