Socket.io client socket taking long to respond

I am using socket.io to send/receive messages between client and server. The server has a Redis instance that stores data and responds with it within milliseconds.
Some of the sockets take very long to return the data (a stringified large JSON object) even though it is sent from the server side almost immediately. I am therefore looking for suggestions that address the following concerns:
Is it normal for socket.io to take this long to emit a long string?
How can I know which method or socket is doing the emit that takes long?
Any further suggestions on how to improve performance?
Help is really appreciated.
UPDATE:
I tried using Webdis to deliver the response to the client without having to go through the server to get the Redis results. However, although the response appears in a console.log in about 1 second (the same as DOMContentLoaded), the websocket still takes about 20s and shows 0 bytes transferred.

It seems like you've hit issues that others have seen as well with large file uploads using socket.io:
Node.JS, Socket.IO & large XML files: extreme performance loss?
One possible course of action is to try file streaming in socket.io:
https://gist.github.com/companje/eea17988257a10dcbf04
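A minimal sketch of that approach, assuming the socket.io-stream package and an existing socket.io setup (the 'big-json' event name and the file path are illustrative, not from the question):

const fs = require('fs');
const ss = require('socket.io-stream');

// server side, assuming an existing socket.io server instance `io`
io.on('connection', (socket) => {
  const stream = ss.createStream();
  // stream the payload instead of emitting one huge string
  ss(socket).emit('big-json', stream, { name: 'payload.json' });
  fs.createReadStream('payload.json').pipe(stream);
});

// client side: consume the stream chunk by chunk
ss(socket).on('big-json', (stream, data) => {
  let json = '';
  stream.on('data', (chunk) => { json += chunk; });
  stream.on('end', () => console.log('received', data.name, json.length, 'chars'));
});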
As for:
How can I know which method or socket is doing the emit that takes long?
You can always include a timestamp in the data the server emits, then compute the time difference on the client and log which handler the slow message came from.
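A minimal sketch of that measurement, assuming an existing socket.io connection (the 'payload' event name and bigJsonString are illustrative):

// server: attach a send timestamp to each emit
socket.emit('payload', { sentAt: Date.now(), body: bigJsonString });

// client: compute how long the message took to arrive
// (comparing Date.now() across machines assumes reasonably synchronized clocks)
socket.on('payload', (msg) => {
  console.log(`'payload' arrived after ${Date.now() - msg.sentAt} ms`);
});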

Related

On form submit, does the server ‘directly’ receive req or listen to changes in a particular place?

Please forgive me if my question sounds naive. I researched on Google and several forums, but couldn't find anything that is clear.
Here is my dilemma:
Step 1 -> Node.js server is listening
Step 2 -> User on page '/new-users' submits (POST, '/signup-controller')
Step 3 (& maybe Step 4) -> I'd like to know what happens here, before the server decides where to take the data.
On step 1, was the server listening to the local storage to see if any new requests are there?
Or does it 'directly' receive the request in step 3?
I've always been under the impression that servers just listen for changes, meaning they do not literally 'receive' req or res data.
Thanks a lot for reading my question and I look forward to any feedback.
EDIT: To clarify, does the client walk up to the server directly and hand over the data, hand to hand, or does the client store the data at some 'locker' or 'location' and the server notices a filled locker, triggering the subsequent events?
No, it directly receives the request data. If you are using a framework like Express in Node, you can use middleware to validate or check the request data before moving forward, as sketched below.
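A minimal sketch, assuming Express (the validation rule and responses are illustrative; only the /signup-controller route comes from the question):

const express = require('express');
const app = express();

app.use(express.json()); // parse JSON bodies before handlers run

// middleware that inspects the data before the real handler sees it
function validateSignup(req, res, next) {
  if (!req.body || !req.body.email) {
    return res.status(400).send('email is required');
  }
  next(); // data looks fine, move on to the handler
}

app.post('/signup-controller', validateSignup, (req, res) => {
  res.send('signed up');
});

app.listen(8080);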
The server only listens for a request, not for a response.
When it receives a request (req), it operates on that request and, based on it, must deliver a response (res) with data, files, an error... whatever.
The server receives a POST or GET (depending on the METHOD attribute in the FORM tag). If you want to implement some logic to decide where to put the data, it should be done by the server, analyzing the data. Hidden input tags (type="hidden") could assist by supplying info, like a hidden input saying "NEW" or "EDIT" and the "ID", for example.
Using an AJAX method instead lets you negotiate with the server before the final POST.
hth.
Ole K Hornnes
On step 1, Was the server listening to the local storage to see if any new requests are there?
No, the server is not listening to local storage; it is listening on the server port, waiting for requests.
does it ‘directly’ receive the request in step 3?
The server receives the request when the client sends it; in your case, that is step 2.
The data from the form is formatted into an HTTP request and sent over the network to the server directly. The server receives it from the network, puts it into memory (RAM), and calls your handler.
A TCP connection (that HTTP is built on) transmits sequences of bytes - that's why it is called a stream-oriented transport. This means you get the bytes in the same order you've sent them. An HTTP request is just a piece of text which looks similar to this:
POST /signup-controller HTTP/1.1
Host: localhost:8080
Content-Type: application/json
Content-Length: 17

{"hello":"world"}
Note the blank line between the headers and the body. This gap is what allows Node.js (and HTTP servers in general) to quickly determine that the request is meant for localhost:8080/signup-controller using the POST method, without looking at the rest of the message! If the body was much larger (a real monster of a JSON), it would not make a difference, because the headers are still just a few short lines.
Thus, Node.js only has to buffer that part until the blank line (formally, \r\n\r\n) in memory. It gets to that point and it knows to call the HTTP request handler function that you've supplied. The rest - after the line break - is then available in the req object as a Readable Stream.
Even though there is some amount of buffering involved at each step (at the client, in switches, at intermediate routers, in the server's kernel, and finally in the server process), the communication is "direct" - one process on one host communicates with another process on another host, without involving the disk at any point.
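A minimal sketch of that flow, using Node's built-in http module (the port and reply text are illustrative):

const http = require('http');

http.createServer((req, res) => {
  // method, URL, and headers are available immediately,
  // before any of the body has been read
  console.log(req.method, req.url, req.headers['content-type']);

  // the body arrives afterwards, as a Readable Stream
  let body = '';
  req.on('data', (chunk) => { body += chunk; });
  req.on('end', () => {
    res.end('received ' + Buffer.byteLength(body) + ' bytes');
  });
}).listen(8080);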

Detect complete data received on 'data' Event in Net module of Node.js

Currently, when we send large data over TCP, the 'data' event receives it in chunks. Is there a way to detect that all the data has been received and return the complete data?
TCP is just a continuous stream of data. There's no start or end to a given transmission at the TCP level unless you close the connection and use the closed connection as a signal that the data is finished.
So, if you want to know where a given chunk of data starts and stops, you need to design a way, within the data you send, to know that (essentially build your own little mini wire format or protocol). There are a zillion different ways to do that, and that's one of the reasons we have so many different protocols built on top of TCP. The two simplest schemes are to:
Send the length of your packet and then send that many bytes. The recipient reads the length and knows that when it has read that many bytes, it has the whole chunk (see the sketch just after this list).
Use some sort of delimiter that won't appear in the actual data. For example, some simple protocols use a linefeed as a delimiter: you send a bunch of text and then terminate it with a linefeed. The recipient reads data until it gets a linefeed, which tells it that it has a complete chunk of the data. There are many possible delimiters, depending on the type of data.
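A minimal sketch of the first scheme (length-prefixed framing) over Node's net module; the 4-byte header size, event handling, and port are assumptions, not part of the answer above:

const net = require('net');

// sender: prefix each message with a 4-byte big-endian length
function sendFrame(socket, payload) {
  const body = Buffer.from(payload);
  const header = Buffer.alloc(4);
  header.writeUInt32BE(body.length, 0);
  socket.write(Buffer.concat([header, body]));
}

// receiver: accumulate chunks and extract every complete frame
const server = net.createServer((socket) => {
  let buffered = Buffer.alloc(0);
  socket.on('data', (chunk) => {
    buffered = Buffer.concat([buffered, chunk]);
    while (buffered.length >= 4) {
      const len = buffered.readUInt32BE(0);
      if (buffered.length < 4 + len) break; // frame not complete yet
      const message = buffered.subarray(4, 4 + len).toString();
      console.log('complete message:', message);
      buffered = buffered.subarray(4 + len);
    }
  });
});
server.listen(9000);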
Other protocols such as webSocket or socket.io have message-based paradigms built into them doing this work for you. You send a message at one end and then receive a whole message at the other end.
Some options are more or less appropriate depending on the type of data (text/binary) you're sending, its length, and its character (whether there are possible delimiters that won't appear in the actual data).

Node.js Request drops before Response is received

The project I am working on receives a request whose main and/or largest part consists of data coming from a database. Upon receiving it, my system parses all the data, concatenates the needed information to form a query, and then inserts that data into my local database using the mentioned query.
It works fine, with no issue at all, except that it takes too long to process when the request has over 6,000,000 characters and over 200,000 lines (or maybe fewer, but still large numbers).
I tested this with my system used as a server (the supposed setup in production), and with Postman as well, but both drop the connection before the final response is built and sent. I have already seen that, although the connection drops, my system still proceeds with processing the data, all the way through the query and even until it sends its supposed response. But since the request was dropped somewhere in the middle of the processing, the response is ignored.
Is this about connection timeout in nodejs?
Or limit in 'app.use(bodyParser.json({limit: '10mb'}))'?
I really only see one way around this; I have done something similar in the past. Allow the client to send as much as you need/want. However, instead of having the client wait around for some undetermined amount of time (at which point the client may time out), send an immediate response that is basically "we got your request and we're processing it".
Now the not-so-great part, but it's the only way I've ever solved this type of issue: in your "processing" response, send back some sort of id. The client can then check once in a while whether its request has finished by sending you that id. On the server end, you store the result for the client under the id you gave them. You'll have to make a few decisions about things like how long a response id is kept around and whether it can be requested more than once.
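A minimal sketch of that accept-then-poll pattern, assuming Express; the route names, the in-memory job store, and processLargeRequest are all illustrative (crypto.randomUUID requires Node 14.17+):

const express = require('express');
const crypto = require('crypto');
const app = express();
app.use(express.json({ limit: '50mb' }));

const jobs = {}; // id -> { done, result }; a real setup might use Redis

// placeholder for the parsing/insert work described in the question
async function processLargeRequest(body) {
  return { rowsInserted: 0 };
}

app.post('/import', (req, res) => {
  const id = crypto.randomUUID();
  jobs[id] = { done: false, result: null };
  res.status(202).json({ id }); // answer immediately, before the heavy work
  processLargeRequest(req.body).then((result) => {
    jobs[id] = { done: true, result };
  });
});

// the client polls this with the id it was given
app.get('/import/:id', (req, res) => {
  const job = jobs[req.params.id];
  if (!job) return res.status(404).end();
  res.json(job);
});

app.listen(3000);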

Send data in chunks with nodejs

I'm quite new to Node.js and I'm working on a backend for an Angular 4 application. The problem is that the backend is quite slow to produce the whole data for the response, and I'd like to send the data over time, as soon as it's available. I was reading about RxJS, but I can't really figure out how to use it in Node. Can you please help me?
Maybe you are looking for a way to stream the data.
Express
Normally you respond with res.send(data); it can be called only once.
If you are reading and sending a large file, you can stream the file data while it is being read with res.write(chunk), and on the 'end' event of the file read, call res.end() to end the response.
EDIT: As you state, what you want is to stream each chunk as soon as it is available, so you can use res.flush() between writes (just flush after each res.write(chunk)).
It will be much faster in your case, but the overall compression will be much less efficient.
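A minimal sketch, assuming Express with the compression middleware (which is what provides res.flush); the route and the slow chunk producer are illustrative:

const express = require('express');
const compression = require('compression');
const app = express();
app.use(compression());

// placeholder for the slow, piece-by-piece data production
async function produceNextChunk(i) {
  return `chunk ${i}\n`;
}

app.get('/slow-data', async (req, res) => {
  res.setHeader('Content-Type', 'text/plain');
  for (let i = 0; i < 5; i++) {
    res.write(await produceNextChunk(i));
    res.flush(); // push this chunk to the client right away
  }
  res.end();
});

app.listen(3000);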

Meteor DDP is sending more than one message even after only one update

I'm having some problems with DDP at the moment. Everything is pretty much working perfectly except for one issue. I have a collection where I am observing the changes. When I initially subscribe to it, it sends all the data down through added... (I get that). However, when I update the same collection by adding a new record, observeChanges is called twice, and then the whole collection is sent once again instead of just the delta (i.e. the newly added record).
Is there any reason why this could be happening? The code is pretty much like the standard count example, except that when I console.log inside the added function it prints out the same id twice, then a few seconds later sends the whole data set back down. There is only one client connected, so it isn't another client.
When I debug the client I can see that the records are sent twice. The client is an Android implementation of DDP.
Any help would be greatly appreciated.
Sounds like the subscription is getting stopped and started again; it may be a reconnect issue. I would recommend using the ddp-analyzer-proxy to see what's being sent over the wire.
There are two java implementations of DDP clients that I know of:
https://github.com/kenyee/android-ddp-client (built on his java-ddp-client)
https://github.com/sailorgeoffrey/ddp-client-java
Are you using one of those?
