Node.js HTTP Module: Response + Request - javascript

I just started watching some Node tutorials and I wanted help understanding the response and request streams that I get from http.createServer(). Response and request are streams, so does that mean that Node.js sends and receives data in chunks?
For example, if I called
res.write("test1");
res.write("test2");
res.end();
would it only write both those things when I call end() or would it flush to the stream and send to the client making the request as and when I call write()?
Another example to elaborate on my question: if I had a txt file with a lot of plaintext data, and I set up a read stream that pipes data from that file to the res object, would it pipe that data in chunks, or only once everything is in the buffer?
I guess my question also applies to the request object. For instance, is the body of the request built up packet by packet and streamed to the server, or is it all sent at once, and Node just chooses to make us use a stream to access it?
Thanks a lot!

The first time response.write() is called, it will send the buffered header information and the first chunk of the body to the client. The second time response.write() is called, Node.js assumes data will be streamed, and sends the new data separately. That is, the response is buffered up to the first chunk of the body.
So basically, if you .write() a small piece of data, it may be buffered until there's a complete chunk or .end() is called. If the .write() is already the size of a chunk, it will be transmitted immediately.
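To make that concrete, here's a minimal sketch (the file name data.txt and the port are made-up placeholders): each res.write() hands its chunk to the socket as soon as it can, and pipe() forwards the file chunk by chunk instead of waiting for the whole file to be read.
const http = require('http');
const fs = require('fs');

http.createServer(function (req, res) {
    if (req.url === '/file') {
        // pipes data.txt to the response chunk by chunk;
        // nothing waits for the whole file to be buffered
        fs.createReadStream('data.txt').pipe(res);
    } else {
        res.write('test1'); // headers + first chunk go out here
        res.write('test2'); // subsequent chunks are sent separately
        res.end();          // signals the response is complete
    }
}).listen(8080);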

On form submit, does the server ‘directly’ receive req or listen to changes in a particular place?

Please forgive me if my question sounds naive. I researched on Google and several forums, but couldn’t find anything that is clear.
Here is my dilemma,
Step 1 -> Node.js Server is listening
Step 2 -> User on page ‘/new-users’ submits the form. (POST, ‘/signup-controller’)
Step 3 (& maybe Step 4) -> I’d like to know what happens here, before the server decides where to take the data.
On step 1, was the server listening to the local storage to see if any new requests are there?
Or, does it ‘directly’ receive the request in step 3?
I’ve always been under the impression that servers just listen to changes. Meaning it does not literally ‘receive’ req or res data.
Thanks a lot for reading my question and I look forward to any feedback.
EDIT: to clarify, does the client walk up to the server directly and hand over the data, hand to hand, or does the client store the data at some ‘locker’ or ‘location’, and the server notices a filled locker, hence triggering the subsequent events?
No, it will directly receive the request data. If you are using a framework like Express in Node, you can then use middleware to validate or check the request data and move forward.
The server only listens for a request, not for a response.
When it finds a request (req), it operates with this request and, based on that, must deliver a response (res) with data, files, an error.. whatever..
The server receives a POST or GET (depending on the METHOD attribute in the FORM tag). If you want to implement some logic to decide where to put the data, it should be done by the server, analyzing the data. Hidden input tags (type="hidden") can assist by supplying info, like a hidden input tag saying "NEW" or "EDIT" and the "ID", for example.
Using an AJAX method instead lets you negotiate with the server before the final POST.
hth.
Ole K Hornnes
On step 1, was the server listening to the local storage to see if any new requests are there?
No, the server is not listening to local storage; it is listening on the server port, waiting for a request.
does it ‘directly’ receive the request in step 3?
The server receives it when the client sends a request, in your case, at step 2.
The data from the form is formatted into an HTTP request and sent over the network to the server directly. The server receives it from the network, puts it into memory (RAM), and calls your handler.
A TCP connection (that HTTP is built on) transmits sequences of bytes - that's why it is called a stream-oriented transport. This means you get the bytes in the same order you've sent them. An HTTP request is just a piece of text which looks similar to this:
POST /signup-controller HTTP/1.1
Host: localhost:8080
Content-Type: application/json
Content-Length: 17

{"hello":"world"}
Note the blank line between the headers and the body. This gap is what allows Node.js (and HTTP servers in general) to quickly determine that the request is meant for localhost:8080/signup-controller using the POST method, without looking at the rest of the message! If the body was much larger (a real monster of a JSON), it would not make a difference, because the headers are still just a few short lines.
Thus, Node.js only has to buffer that part until the blank line (formally, \r\n\r\n) in memory. It gets to that point and it knows to call the HTTP request handler function that you've supplied. The rest - after the line break - is then available in the req object as a Readable Stream.
Even though there is some amount of buffering involved at each step (at the client, in switches, at intermediate routers, in the server's kernel, and finally in the server process), the communication is "direct" - one process on one host communicates with another process on another host, without involving the disk at any point.
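If you want to see that directness for yourself, here is a sketch that uses Node's built-in net module to hand-write the request above as raw bytes (host and port are taken from the example; a Connection: close header is added so the server hangs up when done, and any HTTP server listening there would parse it the same way):
const net = require('net');

const socket = net.connect(8080, 'localhost', function () {
    // the same bytes a browser would send for this POST
    socket.write(
        'POST /signup-controller HTTP/1.1\r\n' +
        'Host: localhost:8080\r\n' +
        'Content-Type: application/json\r\n' +
        'Content-Length: 17\r\n' +
        'Connection: close\r\n' +
        '\r\n' +                 // the blank line that ends the headers
        '{"hello":"world"}'
    );
});

// the server's response also arrives as a plain stream of bytes
socket.on('data', function (chunk) {
    console.log(chunk.toString());
});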

Node JS - file generator architecture

Need to add a file generator REST API endpoint to a web app. So far I've come up with the following idea:
client sends file parameters to endpoint
server receives request and using AMQP sends parameters to dedicated service
dedicated service creates a file, puts it into a server folder and sends a response that the file was created, with the file name
endpoint sends response to client with file
I'm not sure if it's a good idea to keep the REST request open on the server for so long. But I still don't want to use email with a generated link, or sockets.
Do I need to set a timeout on the request so it will not be declined after a long wait?
As far as I know the maximum timeout is 120 sec for a REST API call. If it takes more time for the service to create the file, then I need to use sockets, is that right?
The way I've handled similar is to do something like this:
Client sends request for file.
Server adds this to a queue with a 'requested' state, and responds (to the client) almost immediately with a response which includes a URL to retrieve the file.
Some background thread/worker/webJob/etc is running in a separate process from the actual web server and is constantly monitoring the queue - when it sees a new entry appear it updates the queue to a 'being generated' state & begins generating the file. When it finishes it updates the queue to a 'ready' state and moves on...
when the server receives a request to download the file (via the URL it gave the client), it can check the status of the file on the queue. If not ready, it can give a response indicating this. If it IS ready, it can simply respond with the file contents.
The client can use the response to the initial request to re-query the URL it was given after a suitable length of time, or repeatedly query it every couple of seconds, whichever is most suitable.
You need some way to store the queue that is accessible easily by both parts of the system - a database is the obvious one, but there are other things you could use...
This approach avoids either doing too much on a request thread or having the client 'hanging' on a request whilst the server is compiling the file.
That's what I've done (successfully) in these sorts of situations. It also makes it easy to add things like lifetimes to the queue, so a file can automatically 'expire' after a while too...
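Here is a rough sketch of that queue-and-poll flow in Express (the route names, the in-memory jobs object, and the setTimeout standing in for the background worker are all made up for illustration, not a production design):
const express = require('express');
const crypto = require('crypto');
const app = express();

const jobs = {}; // in-memory stand-in for the shared queue

app.post('/files', function (req, res) {
    const id = crypto.randomBytes(8).toString('hex');
    jobs[id] = { state: 'requested', content: null };
    // hand off to the background worker (simulated here with a timer)
    setTimeout(function () {
        jobs[id].state = 'ready';
        jobs[id].content = 'generated file contents';
    }, 10000);
    // respond immediately with the URL the client should poll
    res.status(202).json({ url: '/files/' + id });
});

app.get('/files/:id', function (req, res) {
    const job = jobs[req.params.id];
    if (!job) return res.status(404).end();
    if (job.state !== 'ready') return res.status(202).json({ state: job.state });
    // the file is ready; respond with its contents
    res.type('text/plain').send(job.content);
});

app.listen(8080);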

Send data in chunks with nodejs

I'm quite new to Node.js and I'm working on a backend for an Angular 4 application. The problem is that the backend is quite slow to produce the whole data for the response, and I'd like to send data over time, as soon as it's available. I was reading about RxJS, but I can't really figure out how to use it in Node. Can you please help me?
Maybe you are looking for a way to stream the data
Express
Normally you respond with res.send(data); it can be called only once.
If you are reading and sending a large file, you can stream the file data as it is read with res.write(chunk), and on the 'end' event of the file read, call res.end() to end the response.
EDIT: As you state, what you want is to stream each chunk as soon as it is available, so you can use the res.flush() command between writes (just flush after res.write(chunk)).
It would be much faster in your case, but the overall compression will be much less efficient.
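For reference, res.flush() comes from the compression middleware, and the pattern looks roughly like this sketch (the route and the one-second interval standing in for a slow backend are made up):
const express = require('express');
const compression = require('compression');
const app = express();

app.use(compression()); // this is what provides res.flush()

app.get('/slow-data', function (req, res) {
    let count = 0;
    const timer = setInterval(function () {
        // send each piece as soon as it is available
        res.write('chunk ' + count + '\n');
        res.flush(); // force the compressed buffer out to the client
        if (++count === 5) {
            clearInterval(timer);
            res.end();
        }
    }, 1000);
});

app.listen(8080);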

Body-parser in Express JS

I have been messing around with Express.js for a while now and I have come across something called body-parser. According to my research, body-parser allows us to read POSTed content from our HTTP request.
My question is: is it absolutely necessary to use body-parser when using Express.js?
If it is not necessary, then what are the advantages of using it, and if body-parser is not used, what needs to be done to make sure the POSTed content can be read?
Thank you in advance.
Let’s try to keep this least technical.
Let’s say you are sending HTML form data to a Node.js server, i.e. you made a request to the server. The server file would receive your request under a request object. Now by logic, if you console.log this request object in your server file, you should see your form data somewhere in it, which could then be extracted. But whoa! You actually don’t!
So, where is our data? How will we extract it if it’s not plainly present in the request?
The simple explanation is that HTTP sends your form data in bits and pieces which are intended to be assembled as they reach their destination. So how do you extract your data? Use something called “body-parser”, which does this for you.
body-parser parses your request and converts it into a format from which you can easily extract the relevant information you need. First of all, require the following in your app.js file:
var bodyParser = require('body-parser')
and add the following line to make it work
app.use(bodyParser.urlencoded({extended: true}));
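Once that middleware is in place, the parsed fields show up on req.body in your route handlers, along these lines (this continues from the app.use line above; the /signup route and field names are made-up examples):
app.post('/signup', function (req, res) {
    // body-parser has already assembled and decoded the form data
    console.log(req.body.username, req.body.password);
    res.send('got it');
});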
You can make use of the events on('data') and on('end') to extract the payload. Whenever a POST request is sent, data arrives as a stream of bits. When the stream comes in the on('data') event is fired, and you can start collecting the stream into a buffer. When the stream ends (all the data has been received) the on('end') event is fired, that is when you can start using the data that you just collected.
You need to include StringDecoder (string_decoder is a module that comes built into Node), and create a decoder instance:
const StringDecoder = require('string_decoder').StringDecoder;
const utf8_decoder = new StringDecoder('utf8');
You need to have this piece of code running as middleware.
var buffer = "";
request.on('data', function (data_stream) {
    // collect the incoming data stream into a buffer
    buffer = buffer + utf8_decoder.write(data_stream);
});
request.on('end', function () {
    // flush whatever bytes remain in the decoder
    buffer = buffer + utf8_decoder.end();
    // add the buffer to the request object so it can be accessed elsewhere
    request.payload = buffer;
});
This is probably the best way if you decide not to use any external libraries.

NodeJS servers requests

I'm working with NodeJS and I'm still familiarizing myself with it.
Given the structure of my system I have two NodeJS servers running in different machines.
The user/browser sends a request to the first server, which returns to the browser a JSON file that is located in this first machine.
This first server also updates this JSON file every 5-10 seconds by sending a request to the second server, which returns another JSON file whose data overwrites the JSON file on the first server, so the next user/browser request gets updated data.
This second server also runs Node.js, but it only serves the requests coming from the first server.
I have this structure since I don't want the user to know about the second server for security reasons (Anyone could see the redirection with any dev tools).
These two events are executed asynchronously, since the browser requests may arrive at different times from the event updating the JSON file.
My question is: How can I update the JSON file in the first server? I wonder if there's a NodeJS library I can use for requesting the new JSON file to the second server.
I make the browser-to-first-server request via AJAX and everything works properly, but AJAX only works on the client side, so I'm not really sure how to do this for the first-to-second server request.
Any help would be appreciated.
Something like what I'm expecting is the following:
setInterval(function () {
    // make request to server 2
    // receive JSON file
    // use 'fs' to overwrite the JSON from server 1
}, 5000);
You can either use the built-in http/https modules in Node.js or use something like request:
var request = require('request');

request('/url/for/json', function (error, response, body) {
    if (!error && response.statusCode == 200) {
        // write body to the file system
    }
});
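Putting that together with your setInterval sketch, something along these lines should work (the second server's URL and the file name are placeholders):
var request = require('request');
var fs = require('fs');

setInterval(function () {
    // ask server 2 for the fresh JSON
    request('http://second-server:8080/data.json', function (error, response, body) {
        if (!error && response.statusCode == 200) {
            // overwrite the local copy that server 1 serves to browsers
            fs.writeFile('data.json', body, function (err) {
                if (err) console.error(err);
            });
        }
    });
}, 5000);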
Instead of operating both as web (html) servers, I strongly advise connecting to the 2nd using sockets... This way you can pass information/changes back and forth whenever an event happens. Here's an example of using sockets in node.js
