I've been experiencing some freezes in my web application, so I decided to measure the time between the packets that arrive on the client. They are sent from the server at a consistent rate: every 100ms.
However, on the client there is sometimes a 700ms gap between packets. I did some testing on the server side and the packets are sent consistently, every 100-110ms.
When this freeze happens, the client doesn't receive any packets for around 700ms and then receives 7 packets all at once.
Is this a connection issue or an issue with socket.io itself? I am using socket.io 2.0.3.
The socket isn't sending much data. This even happens when it is serving just a single client.
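For reference, the measurement on the client looks roughly like this (simplified; the event name "data" is illustrative):

var last = Date.now();

socket.on("data", function (packet) {
    var now = Date.now();
    console.log("gap since previous packet:", now - last, "ms");
    last = now;
});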
My first problem with socket.io was while building a simple chat app: when I sent too many messages it froze for around 500ms, and sometimes sent two identical messages during the freeze. I eventually figured out it was a problem with my database connection and the way I was rendering the messages on the client. Yours may be the same (or may not).
It would also help if you shared your code so more experienced users can help you.
I recommend you create another application with a simple socket request similar to yours; if it runs correctly, you should check your own code.
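A minimal test app can be as small as this (a sketch; the port and event name are illustrative):

// server.js — a minimal socket.io echo server
const io = require("socket.io")(3000);

io.on("connection", function (socket) {
    socket.on("echo", function (msg) {
        socket.emit("echo", msg);    // send the same payload straight back
    });
});

// client — fire a request and log the reply
var socket = io("http://localhost:3000");
socket.emit("echo", "hello");
socket.on("echo", function (msg) { console.log("got:", msg); });

If this round trip stays consistently fast while your real app stalls, the problem is likely in your own code rather than in socket.io or the connection.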
Related
I have not been able to get an answer to this anywhere online. I want to remove possible jitter from my nodejs server. I am using socket.io to create connections to node.
If a user goes to a specific part of my website, a connection is started. However, if the user refreshes the site too quickly and often, the connection is created very frequently, and issues arise with my server.
While I realize this could be solved a couple of different ways, I am hoping a server-side solution is out there. Meaning: whenever a user connects, make sure the user stays connected for at least 5 seconds, then move on; otherwise, disconnect the user. Thanks for any insight!
First off, a little background. With a default configuration, when a socket.io connection starts, it first makes 2-5 http requests and then, once it has established the "logical" connection, it tries to establish a connection using the webSocket transport. If that is successful, it keeps that webSocket connection as a long-lasting connection and sends socket.io packets over it.
If the client refreshes in the middle of the transition to a webSocket connection, it creates a period of unknown state on the server: the server isn't sure whether the user is still mid-transition to a lasting webSocket connection, has left entirely, is having some sort of connection issue, or is just refreshing. You can easily end up with the server thinking there are multiple connections from the same user, all in the process of being confirmed. It can be a bit messy if your server is sensitive to that kind of thing.
The quickest thing you can do is to force the connection process to go immediately to the webSocket transport. You can do that in the client by adding an option to your connection code:
let socket = io(yourURL, {transports: ["websocket"]});
You can also configure the server to only accept webSocket connections if you're trying to protect against any other types of connections besides just those from your own web pages.
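On the server side that would look something like this (a minimal sketch; "server" is assumed to be your existing http server instance):

// Server side: accept only the webSocket transport
const io = require("socket.io")(server, {
    transports: ["websocket"]
});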
This will then go through the usual webSocket connection process, which starts with a single http request that is then "upgraded" to the webSocket protocol. One connection, one socket. The server will know right away whether the user is or isn't connected. And, once they've switched over to the webSocket protocol, the server will know immediately if the user hits refresh, because the browser will close the webSocket immediately.
The "start with http first" feature in socket.io is largely present because in the early days of webSockets, there were some browsers that didn't yet support them and some network infrastructure (like corporate proxies) that didn't always support webSocket connections. The browser issue is completely gone now. All browsers in use support webSocket connections. I don't personally have any data on the corporate proxies issues, but I don't ever hear about any issues with people using webSockets these days so I don't think that's much of an issue any more either.
So, the above change will get you a quick, confirmed connection and get rid of the confusion around whether a user is or isn't connected early in the connection process.
Now, if you still have users who are messing things up by rapid refresh, you probably need to implement some protection on your server for that. If you cookie each user that arrives on your server, you could create some middleware that keeps track of how many page requests in some recent time interval have come from the browser with this cookie, and just return an error page that explains they can't make requests that quickly. I would implement this at the web page level, not the webSocket level, as that gives users better feedback to stop hitting refresh.
If it's really a refresh you're trying to protect against, and not general navigation on your site, then you can keep a record of a combination of cookie and URL; if you see even two of those within a few seconds, return the error page instead of the expected content. If you redirect to an error page, it forces a more conscious action to go back to the right page before they can get to the content.
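A sketch of that middleware idea, assuming Express with cookie-parser (the cookie name, paths, limits, and in-memory store are all illustrative, not from the original answer):

const express = require("express");
const cookieParser = require("cookie-parser");
const crypto = require("crypto");

const app = express();
app.use(cookieParser());

// In-memory record of recent request times per (visitor, URL) pair.
// Illustrative only: a real deployment needs expiry and likely a shared store.
const recent = new Map();

app.use(function (req, res, next) {
    if (req.path === "/too-fast") return next();   // don't rate-limit the error page itself

    let visitorId = req.cookies.visitorId;
    if (!visitorId) {
        visitorId = crypto.randomBytes(16).toString("hex");
        res.cookie("visitorId", visitorId);        // cookie each user that arrives
    }

    const key = visitorId + "|" + req.path;        // combination of cookie and URL
    const now = Date.now();
    const times = (recent.get(key) || []).filter(t => now - t < 5000);
    times.push(now);
    recent.set(key, times);

    // even two hits on the same URL within a few seconds triggers the error page
    if (times.length >= 2) return res.redirect("/too-fast");
    next();
});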
I have an application written in node.js with a timer function. Whenever a second has passed, the server sends the new time value to every connected client. While this works perfectly fine on localhost, it's very choppy when hosted online. Clients won't update immediately and the value will sometimes jump two or three seconds at a time.
I discovered, however, that if I repeatedly send the timer data to the clients (using setInterval), it runs perfectly, with no delay, from anywhere.
Does anyone have any idea why this might be the case? It doesn't make sense to me that sending the same data more often would fix the issue; if anything, shouldn't it be slower? I was thinking I could use this approach and have the client notify the server when it has updated, but this seems unnecessary and inefficient.
I'm very new to node.js but this has got me stumped. Any insight would be greatly appreciated.
Where are you hosting it? Does it support websockets? Some hosts do not support/allow them. My guess is that your host is not allowing websockets and socket.io is falling back to the polling transport.
In your browser, you can find the websocket connection and inspect it in developer tools:
How do you inspect websocket traffic with Chrome Developer Tools?
If the first request is not successfully upgraded to a websocket via the 101 Switching Protocols http status, you'll see the polling requests recur in the developer tools.
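If you'd rather check from code than from the network panel, the socket.io client also exposes the underlying engine.io transport (a small sketch; the URL is illustrative):

var socket = io("https://your-host.example.com");

socket.on("connect", function () {
    // "polling" or "websocket"
    console.log("initial transport:", socket.io.engine.transport.name);

    socket.io.engine.on("upgrade", function () {
        // fires when the connection is successfully upgraded to a webSocket
        console.log("upgraded to:", socket.io.engine.transport.name);
    });
});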
I am consuming streaming data provided by a vendor over socket.io, using the following code:
var socket = io.connect('https://streamer.vendor-company.com/');
var subscription = ['sub1', 'sub2', 'sub3', 'sub4'];

// socket.io buffers emits made before the connection completes, so this is safe here
socket.emit('SubAdd', { subs: subscription });

socket.on("m", function (message) {
    console.log(message);

    // the message type is the token before the first "~" delimiter
    var messageType = message.substring(0, message.indexOf("~"));

    if (messageType == someMessageType) {
        dataUnpack(message);
    } else if (messageType == otherMessageType) {
        anotherDataUnpack(message);
    }
});
The methods dataUnpack and anotherDataUnpack perform some processing on the received message and display the result on the webpage. The subscription array here may contain around 45 subscriptions.
I want to know the effect on my website's performance. Does socket.io have some way of not flooding the client, or are there serious performance considerations? Is socket.io designed for such usage?
Updates
This is different from Too many on-connection events with Socket.io, does it hurt?, as mine is about client-side JavaScript/jQuery and the linked question is about node.js.
The server is not under my control. Looking at jfriend00's answer, it seems that if I have 50 subscriptions and I get around 20-30 messages per second, I need to handle this on the client side. If so, at what rate of incoming messages should I start to worry? And if possible, are there any techniques/strategies for handling a high rate of incoming messages?
Does socket.io have some way of not flooding the client?
No. If your server sends a message to the client, socket.io delivers it. It's all up to your server how many messages it sends and socket.io's job is to deliver every one of them you tell it to send.
or are there serious performance considerations?
If you send a ton of messages to a ton of clients, that will potentially take a lot of processing and bandwidth. socket.io is just a messaging layer on top of webSocket, which is a layer on top of TCP. So, if you send a socket.io message from server to a client, or from server to all clients, it's one or more TCP packets being sent to the client(s). The only way to not flood the client is for your server to not send it more than it wants or can handle.
Is socket.io designed for such usage?
Socket.io is designed to reliably deliver messages from server to client or client to server. It just does what you tell it. If you tell it to deliver 1000 messages, that's what it will do.
If you have concerns about too many messages being sent to the client, then you need to modify your server to control that. For example, you might decide that a client should not be notified more than once every 5 seconds (for efficiency reasons). To implement that, you'd need an extra layer (of your own design) on the server. That is not something that socket.io has built-in features for.
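A rough sketch of what such a layer could look like on a server you control (the function and event names are illustrative):

// A per-socket throttle: buffer outgoing updates and flush them
// at most once per interval, keeping only the latest value per key.
function createThrottledSender(socket, intervalMs) {
    var pending = {};                        // latest message per subscription key

    var timer = setInterval(function () {
        if (Object.keys(pending).length > 0) {
            socket.emit("batch", pending);   // one combined message instead of many
            pending = {};
        }
    }, intervalMs);

    socket.on("disconnect", function () { clearInterval(timer); });

    return function queue(subKey, message) {
        pending[subKey] = message;           // newer data simply replaces older
    };
}

// Usage: var send = createThrottledSender(socket, 5000); send("sub1", msg);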
If you're getting 20-30 messages per sec per subscription and each client has 50 subscriptions, that's 1000-1500 messages per second per client. That is indeed a lot and probably not sustainable at either the client side or the server side, especially on the server-side as you get lots of clients all doing the same thing. At 1000 messages per second, you have to process a message in under 1ms in order to keep from falling behind.
There are no special techniques for handling a high rate of incoming messages other than being extremely careful to limit what you do when each message arrives. For example, you would not want to touch the browser DOM on each message. Perhaps you would queue the messages and modify the DOM in a batch, only once per second, as in the sketch below. Further advice would require seeing what you're trying to do with these messages.
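A client-side batching sketch (illustrative; renderMessage and the "feed" element stand in for your own rendering code):

var queued = [];

socket.on("m", function (message) {
    queued.push(message);                    // cheap: just buffer the raw message
});

// Touch the DOM only once per second, processing everything buffered so far.
setInterval(function () {
    if (queued.length === 0) return;
    var batch = queued;
    queued = [];
    var html = batch.map(renderMessage).join("");   // renderMessage: your own formatter
    document.getElementById("feed").insertAdjacentHTML("beforeend", html);
}, 1000);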
The better path would be to find a way to limit the number of messages being sent to each client: by being smarter about what you subscribe to, by finding ways to configure the subscription to not send you so much, or by creating an intervening server of your own that can be smarter about what it sends to each client.
I wrote a chat application using NodeJs and Socket.io. It works fine, but at the moment there is nothing stopping a malicious user from flooding the server with a large number of messages. What would be the best way to avoid this kind of situation?
BTW, I did quite a bit of research, and currently it seems the only option would be tracking the frequency of messages a user sends to the server and, if it's over a certain threshold, disconnecting the socket or simply ignoring the messages. Is there a better way?
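Something along these lines is what I have in mind (the event name and thresholds are illustrative):

io.on("connection", function (socket) {
    var timestamps = [];                     // times of this socket's recent messages

    socket.on("chat message", function (msg) {
        var now = Date.now();
        // keep only the messages from the last 10 seconds
        timestamps = timestamps.filter(function (t) { return now - t < 10000; });
        timestamps.push(now);

        if (timestamps.length > 20) {        // more than 20 messages in 10 seconds
            socket.disconnect(true);         // or simply ignore the message instead
            return;
        }
        io.emit("chat message", msg);
    });
});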
I have browser client Javascript which opens a WebSocket (using socket.io) to request a long-running process start, and then gets a callback when the process is done. When I get the callback, I update the web page to let the user know the process has completed.
This works ok, except on my iPad when I switch to another app and then come back (it never gets the callback, because I guess the app is not online at the time). I'm assuming the same thing will happen on a laptop or other computer that sleeps while waiting for the callback.
Is there a standard way (or any way) to deal with this scenario? Thanks.
For reference, if you want to see the problem page, it is at http://amigen.perfectapi.com/
There are a couple of things to consider in this scenario:
Detect the app going offline/online
See: Online and offline events.
When your app detects the online event after the computer wakes up you can get any information that you've missed.
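In modern browsers that can be as simple as this (a minimal sketch; fetchMissedMessages and showConnectionWarning are placeholders for your own logic):

window.addEventListener("online", function () {
    // the network is back (or the machine woke up): catch up on missed state
    fetchMissedMessages();     // placeholder: your own "give me everything since X" call
});

window.addEventListener("offline", function () {
    showConnectionWarning();   // placeholder: tell the user they're offline
});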
For older web browsers you'll need to do this in a cleverer way. At Pusher we've added a ping-pong check between the client and server. If the client doesn't receive a ping within a certain amount of time, it knows there's a connection problem. If the server sends a ping and doesn't get a pong back within a certain time, it knows there's a problem.
A ping-pong mechanism is defined in the WebSocket spec, but a way of sending a ping or pong hasn't been exposed on the WebSocket API as yet.
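You can, however, implement the same idea at the application level on top of socket.io (a sketch; it assumes the server periodically emits a custom "ping-check" event, and the event names and timings are illustrative):

// Client: expect a "ping-check" from the server at least every 30 seconds.
var lastPing = Date.now();

socket.on("ping-check", function () {
    lastPing = Date.now();
    socket.emit("pong-check");               // tell the server we're alive
});

setInterval(function () {
    if (Date.now() - lastPing > 30000) {
        // no ping for too long: assume the connection is dead and reconnect
        socket.close();
        socket.open();
    }
}, 5000);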
Fetching missed information
Most realtime servers only deliver messages to connected clients. If a client isn't connected, maybe due to a temporary network disturbance or because its computer has been asleep for a while, then that client will miss the message.
Some frameworks do provide access to messages through a history/cache. For those that don't, you'll need to detect the problem (as above) and then fetch any missed messages. A good way to do this is to include a timestamp or sequence ID with each message, so you can make a call to your web server saying "give me all messages since X".
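A sketch of that catch-up call, assuming each message carries a sequence ID and your web server exposes a replay endpoint (the endpoint, event, and field names are illustrative):

var lastSeq = 0;

socket.on("update", function (msg) {
    lastSeq = msg.seq;          // remember the newest sequence ID we've processed
    render(msg);                // placeholder for your own message handler
});

function fetchMissedMessages() {
    // ask the web server for everything missed while asleep/offline
    fetch("/messages?since=" + lastSeq)
        .then(function (res) { return res.json(); })
        .then(function (messages) {
            messages.forEach(function (msg) {
                lastSeq = msg.seq;
                render(msg);
            });
        });
}

This pairs naturally with the online-event sketch above: call fetchMissedMessages() whenever the client detects it has come back online.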