I'm working with Node.js and I'm still familiarizing myself with it.
Given the structure of my system, I have two Node.js servers running on different machines.
The user/browser sends a request to the first server, which returns to the browser a JSON file located on that first machine.
The first server also updates this JSON file every 5-10 seconds by sending a request to the second server, which returns another JSON file whose data overwrites the file on the first server, so the next user/browser request gets updated data.
The second machine also runs a Node.js server, but it only serves the requests coming from the first server.
I use this structure because, for security reasons, I don't want the user to know about the second server (anyone could see a redirect with dev tools).
These two events run asynchronously, since the browser requests may arrive at different times from the event that updates the JSON file.
My question is: how can I update the JSON file on the first server? Is there a Node.js library I can use to request the new JSON file from the second server?
I make the browser-to-first-server request via AJAX and everything works properly, but AJAX only works on the client side, so I'm not sure how to do the same for the first-to-second server request.
Any help would be appreciated.
Something like the following is what I'm expecting:
setInterval(function () {
  // make a request to server 2
  // receive the JSON file
  // use 'fs' to overwrite the JSON file on server 1
}, 5000);
You can either use the built-in http/https modules in Node.js or use something like the request module:

var fs = require('fs');
var request = require('request');

request('http://second-server.example/url/for/json', function (error, response, body) {
    // the host above is a placeholder for your second server
    if (!error && response.statusCode == 200) {
        // write the body (the fresh JSON) to the file system
        fs.writeFile('data.json', body, function (err) {
            if (err) console.error(err);
        });
    }
});
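Tying that back to the setInterval skeleton from the question, a minimal sketch using only the built-in http module and fs could look like this (the URL, port and file name are just placeholders):

var http = require('http');
var fs = require('fs');

// every 5 seconds, fetch the fresh JSON from the second server and overwrite the local copy
setInterval(function () {
  http.get('http://second-server.example:3000/data.json', function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
      // overwrite the local file so the next browser request gets the new data
      fs.writeFile('data.json', body, function (err) {
        if (err) console.error(err);
      });
    });
  }).on('error', console.error);
}, 5000);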
Instead of operating both as web (HTML) servers, I strongly advise connecting to the second one using sockets. This way you can pass information/changes back and forth whenever an event happens. Here's an example of using sockets in Node.js.
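For instance, a rough sketch with the built-in net module (the host, port and message format here are only assumptions; socket.io would work just as well):

// on the second server: a plain TCP server that sends the latest JSON on request
var net = require('net');
var server = net.createServer(function (socket) {
  socket.on('data', function () {
    socket.write(JSON.stringify({ updatedAt: Date.now() }) + '\n');
  });
});
server.listen(9000);

// on the first server: connect, ask for the data, and handle whatever comes back
var client = net.connect({ host: 'second-server.example', port: 9000 }, function () {
  client.write('get\n');
});
client.on('data', function (chunk) {
  console.log('received:', chunk.toString());
});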
Please forgive me if my question sounds naive. I researched on Google and several forums, but couldn't find anything that is clear.
Here is my dilemma,
Step 1 -> Node.js Server is listening
Step 2 -> User on page '/new-users'. (POST, '/signup-controller')
Step 3 (& maybe Step 4) -> I'd like to know what happens here, before the server decides where to take the data.
On step 1, was the server listening to the local storage to see if any new requests are there?
Or, does it ‘directly’ receive the request in step 3?
I've always been under the impression that servers just listen for changes, meaning they don't literally 'receive' req or res data.
Thanks a lot for reading my question and I look forward to any feedback.
EDIT: to clarify, does the client walk up to the server directly and hand over the data hand-to-hand, or does the client store the data in some 'locker' or 'location', and the server notices a filled locker, which triggers the subsequent events?
No, it will directly receive the request data. If you are using a framework like Express in Node, you can use middleware to validate or check the request data and then move forward.
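For example, a minimal sketch of such a middleware, assuming Express 4.16+ (the route and field names are just examples):

var express = require('express');
var app = express();

app.use(express.json()); // parse JSON request bodies

// middleware that checks the incoming data before the route handler runs
function validateSignup(req, res, next) {
  if (!req.body || !req.body.email) {
    return res.status(400).json({ error: 'email is required' });
  }
  next(); // data looks fine, move on to the actual handler
}

app.post('/signup-controller', validateSignup, function (req, res) {
  // at this point the data has already been validated by the middleware
  res.json({ ok: true });
});

app.listen(8080);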
The server only listens for a request, not for a response.
When it gets a request (req), it operates on that request and, based on it, must deliver a response (res) with data, files, an error... whatever.
The server receives a POST or GET (depending on the METHOD attribute in the FORM tag). If you want to implement some logic to decide where to put the data, it should be done by the server, analyzing the data. Hidden input tags (type="hidden") could help supply info, like a hidden input saying "NEW" or "EDIT" and the "ID", for example.
Using an AJAX method instead lets you negotiate with the server before the final POST.
hth.
Ole K Hornnes
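As a small sketch of that idea, assuming Express with a body parser already set up (the 'mode' and 'id' field names are just examples):

app.post('/signup-controller', function (req, res) {
  // a hidden <input type="hidden" name="mode" value="NEW"> (or "EDIT") in the form
  // tells the server which action to take
  if (req.body.mode === 'EDIT') {
    // update the existing record identified by req.body.id
  } else {
    // create a new record from the submitted data
  }
  res.redirect('/new-users');
});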
On step 1, was the server listening to the local storage to see if any new requests are there?
No, the server is not listening to local storage; it is listening on the server port, waiting for requests.
does it ‘directly’ receive the request in step 3?
The server will receive it when the client sends a request; in your case, that's step 2.
The data from the form is formatted into an HTTP request and sent over the network to the server directly. The server receives it from the network, puts it into memory (RAM), and calls your handler.
A TCP connection (that HTTP is built on) transmits sequences of bytes - that's why it is called a stream-oriented transport. This means you get the bytes in the same order you've sent them. An HTTP request is just a piece of text which looks similar to this:
POST /signup-controller HTTP/1.1
Host: localhost:8080
Content-Type: application/json
Content-Length: 17

{"hello":"world"}
Note the blank line between the headers and the body. This gap is what allows Node.js (and HTTP servers in general) to quickly determine that the request is meant for localhost:8080/signup-controller using the POST method, without looking at the rest of the message! If the body was much larger (a real monster of a JSON), it would not make a difference, because the headers are still just a few short lines.
Thus, Node.js only has to buffer that part until the blank line (formally, \r\n\r\n) in memory. It gets to that point and it knows to call the HTTP request handler function that you've supplied. The rest - after the line break - is then available in the req object as a Readable Stream.
Even though there is some amount of buffering involved at each step (at the client, in switches, at intermediate routers, in the server's kernel, and finally in the server process), the communication is "direct" - one process on one host communicates with another process on another host, without involving the disk at any point.
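To see this in plain Node.js, here's a minimal sketch (the port is arbitrary):

var http = require('http');

http.createServer(function (req, res) {
  // the request line and headers have already been parsed at this point
  console.log(req.method, req.url, req.headers['content-type']);

  // the body arrives afterwards on the req object as a readable stream
  var chunks = [];
  req.on('data', function (chunk) { chunks.push(chunk); });
  req.on('end', function () {
    var body = Buffer.concat(chunks).toString();
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ receivedBytes: body.length }));
  });
}).listen(8080);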
I need to add a file-generator REST API endpoint to a web app. So far I've come up with the following idea:
the client sends the file parameters to the endpoint
the server receives the request and, using AMQP, sends the parameters to a dedicated service
the dedicated service creates the file, puts it into a server folder, and sends a response saying the file was created, along with the file name
the endpoint sends a response to the client with the file
I'm not sure it's a good idea to keep a REST request open on the server for that long, but I still don't want to use email with a generated link, or sockets.
Do I need to set a timeout on the request so it will not be dropped after a long wait?
As far as I know, the maximum timeout for a REST API call is 120 seconds. If it takes the service more time than that to create a file, then I need to use sockets, is that right?
The way I've handled similar is to do something like this:
Client sends request for file.
Server adds this to a queue with a 'requested' state, and responds (to the client) almost immediately with a response which includes a URL to retrieve the file.
Some background thread/worker/webJob/etc is running in a separate process from the actual web server and is constantly monitoring the queue - when it sees a new entry appear it updates the queue to a 'being generated' state & begins generating the file. When it finishes it updates the queue to a 'ready' state and moves on...
When the server receives a request to download the file (via the URL it gave the client), it can check the status of the file on the queue. If it's not ready, it can give a response indicating this. If it IS ready, it can simply respond with the file contents.
The client can use the response to the initial request to re-query the URL it was given after a suitable length of time, or repeatedly query it every couple of seconds, whichever is most suitable.
You need some way to store the queue that is accessible easily by both parts of the system - a database is the obvious one, but there are other things you could use...
This approach avoids either doing too much on a request thread or having the client 'hanging' on a request whilst the server is compiling the file.
That's what I've done (successfully) in these sorts of situations. It also makes it easy to add things like lifetimes to the queue, so a file can automatically 'expire' after a while too...
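A rough sketch of the server side of this pattern, assuming Express and using an in-memory object in place of the shared queue/database, just to show the shape:

var express = require('express');
var crypto = require('crypto');
var app = express();
app.use(express.json());

var jobs = {}; // stand-in for the shared queue/database

app.post('/files', function (req, res) {
  var id = crypto.randomUUID();
  jobs[id] = { state: 'requested', params: req.body, content: null };
  // a separate worker process would pick this entry up and generate the file;
  // respond to the client right away with the URL to poll
  res.status(202).json({ statusUrl: '/files/' + id });
});

app.get('/files/:id', function (req, res) {
  var job = jobs[req.params.id];
  if (!job) return res.status(404).end();
  if (job.state !== 'ready') return res.json({ state: job.state });
  res.type('text/plain').send(job.content); // the generated file
});

app.listen(8080);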
The project that I am working on receives a request whose main part consists of data coming from a database. Upon receiving it, my system parses all the data, concatenates the needed information to form a query, and then inserts that data into my local database using the query.
It works fine with no issues at all, except that it takes too long to process when the request has over 6,000,000 characters and over 200,000 lines (or maybe fewer, but still large numbers).
I have tested this with my system used as a server (the intended production setup) and with Postman as well, but both drop the connection before the final response is built and sent. I have also verified that although the connection drops, my system still processes the data all the way to the query, and even sends its intended response. But since the connection dropped somewhere in the middle of processing, the response is ignored.
Is this about the connection timeout in Node.js?
Or the limit in app.use(bodyParser.json({limit: '10mb'}))?
I really only see one way around this; I have done something similar in the past. Allow the client to send as much as you need/want. However, instead of having the client wait around for some undetermined amount of time (at which point it may time out), send an immediate response that basically says "we got your request and we're processing it".
Now for the not-so-great part, but it's the only way I've ever solved this type of issue: in your "processing" response, send back some sort of id. The client can then check once in a while whether its request has finished by sending you that id. On the server end, you store the result for the client under the id you gave them. You'll have to make a few decisions about things like how long a result is kept around and whether it can be requested more than once.
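On the client side, the polling loop could be as simple as this sketch (the '/process' URLs, the 'state' field and the 3-second interval are all assumptions):

// submit the large payload, then poll with the id from the immediate response
async function submitAndWait(payload) {
  const submitted = await fetch('/process', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(payload)
  });
  const { id } = await submitted.json();

  // check every few seconds until the server reports the job is done
  while (true) {
    const statusRes = await fetch('/process/' + id);
    const status = await statusRes.json();
    if (status.state === 'done') return status;
    await new Promise(function (resolve) { setTimeout(resolve, 3000); });
  }
}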
I've got an app based on PhantomJS that works like this:
1. I'm running a PHP script that gets data from my database (Postgres) as an array,
2. Then via shell_exec I'm running a PhantomJS script, passing the array from (1) as an argument,
3. In PhantomJS I'm processing the data, checking each domain's WHOIS and collecting its expiration date. The result is an array that I store in a file,
4. Finally, PhantomJS runs a PHP script that reads the data from the stored file and saves it in my database.
I'm wondering if there is a better option. Maybe doing everything in the PhantomJS script? Maybe there is a JS client for Postgres?
I'd change the workflow starting from step 3 and start saving data right away (PhantomJS is no stranger to crashing, so it may not always get to step 4).
You could send the data via an AJAX or POST request to an endpoint of your own. It could be another PHP script available over HTTP, even if it's on localhost, so you'd do another page.open to it and send the data.
An even more reliable approach: after processing the data, execute a local PHP script, feeding it the data via the CLI (or save the data to a file like before and pass the file path to the script).
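For the POST-to-your-own-endpoint approach, a minimal PhantomJS sketch could look like this (the endpoint URL and the 'data' parameter name are assumptions, and the sample result is just a placeholder):

// inside the PhantomJS script, after collecting the WHOIS results
var page = require('webpage').create();
var results = [{ domain: 'example.com', expires: '2025-01-01' }]; // placeholder data
var endpoint = 'http://localhost/save-whois.php'; // your own PHP endpoint

page.open(endpoint, 'post', 'data=' + encodeURIComponent(JSON.stringify(results)), function (status) {
  if (status !== 'success') {
    console.log('POST failed, keeping the file fallback');
  }
  phantom.exit();
});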
I just started watching some Node tutorials and I wanted help understanding the response and request streams that I get from http.createServer(). Response and request are streams, so does that mean that Node.js sends and receives data in chunks?
For example, if I called
res.write("test1");
res.write("test2");
res.end();
would it only write both of those things when I call end(), or would it flush to the stream and send them to the client as and when I call write()?
Another example to elaborate on my question: if I had a txt file with a lot of plaintext data and I set up a read stream that pipes data from that file to the res object, would it pipe that data in chunks, or would it wait until everything is in the buffer?
I guess my question also applies to the request object. For instance, is the body of the request built up packet by packet and streamed to the server, or is it all sent at once and Node just chooses to make us access it through a stream?
Thanks a lot!
The first time response.write() is called, it will send the buffered header information and the first chunk of the body to the client. The second time response.write() is called, Node.js assumes data will be streamed, and sends the new data separately. That is, the response is buffered up to the first chunk of the body.
(See the full documentation for response.write() in the Node.js HTTP docs.)
So basically, if you .write() a small piece of data, it may be buffered until there's a complete chunk or .end() is called. If the .write() already has the size of a chunk, it will be transmitted immediately.
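To see the streaming behaviour yourself, here's a small sketch (the file name and port are arbitrary):

var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  if (req.url === '/file') {
    // pipes the file to the response chunk by chunk; the whole file is
    // never read into memory at once
    fs.createReadStream('big.txt').pipe(res);
  } else {
    res.write('test1'); // headers + this first chunk are sent right away
    setTimeout(function () {
      res.write('test2'); // sent later as a separate chunk (chunked transfer encoding)
      res.end();
    }, 1000);
  }
}).listen(8080);

Requesting / with curl -N should show 'test1' arrive about a second before 'test2', while /file streams the text file to the client chunk by chunk instead of loading it all into memory first.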