Hi, I understand that in long polling you keep the connection to the server open until you get a response back from the server, and then poll again and wait for the next response. However, I don't seem to understand how to code it. There is this code below which uses long polling, but I don't seem to get it:
(function poll() {
    $.ajax({
        url: "server",
        dataType: "json",
        timeout: 30000,
        success: function (data) {
            // update page based on data
        },
        complete: poll
    });
})();
But how is the connection kept open here? I understand that the poll function is fired again once the response from the server is received. But how is the connection kept open?
Edit 1: It would be great if someone could also explain what timeout actually does here.
The client cannot force the server to keep the connection open. The server is simply not closing it. Ordinarily, the server would at some point say "that's it, there's no more content here, bye". In long polling, the server simply never does so and keeps the client waiting for more data, which it trickles out little by little as updates come in. That's long polling.
On the client side it is possible to check occasionally for the data that has already been received while the request has not finished; that way data can occasionally be sent from the server over the same open connection. In your case this is not being done; the success callback will only fire when the request has finished. It's basically a cheap form of long polling in which the server keeps the client waiting for an event, sends data about this event and then closes the connection. The client takes that as the trigger, processes the data, then reconnects to the server to wait for the next event.
I think what makes this confusing is that the discussion focuses on the client-side programming.
Long-polling is not strictly a client-side pattern, but requires the web server to keep the connection open.
Background: the client wants to be notified by the web server when something occurs or becomes available, for example: let me know when a new email arrives, without me having to go back and ask every few seconds.
Client opens a connection to a specific URL on the web server.
Server accepts the connection, opens a socket and dispatches control to whatever server-side code handles this connection (say a servlet or JSP in Java, or a route in RoR or Node/Express).
Server code waits until the event occurs or the information becomes available. For example, when an email arrives, it checks whether any of the "waiting connections" are for that particular inbox; if so, it responds with the appropriate data.
Client receives the data, does its thing, then starts another request to poll (see the sketch below).
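To make that concrete, here is a minimal sketch of that server-side flow in Node/Express. The route path, the inboxEvents emitter and the 30-second cutoff are illustrative assumptions, not part of the answer above:

const express = require('express');
const EventEmitter = require('events');

const app = express();
// elsewhere, the mail subsystem would call: inboxEvents.emit('mail:' + user, mail)
const inboxEvents = new EventEmitter();

app.get('/poll/:user', (req, res) => {
    const eventName = 'mail:' + req.params.user;

    // Safety valve: if nothing happens within 30 s, answer with an empty
    // result so the client can reconnect (also keeps proxies happy).
    const timer = setTimeout(() => {
        inboxEvents.removeListener(eventName, onMail);
        res.json({ mail: null });
    }, 30000);

    function onMail(mail) {
        clearTimeout(timer);
        res.json({ mail: mail }); // only now does the request finish: that's the "long" part
    }

    inboxEvents.once(eventName, onMail);
});

app.listen(3000);

Note that nothing special keeps the connection open; the server simply has not responded yet.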
I was looking to do something with staggered data results, where some would come back right away but the last few might come back 10-15 seconds later. I created a quick little jQuery hack and it's kinda doing what I want (still not sure if it makes sense to use it, though):
(function ($) {
    if (typeof $ !== 'function') return;

    $.longPull = function (args) {
        var opts = $.extend({ method: 'GET', onupdate: null, onerror: null,
                              delimiter: '\n', timeout: 0 }, args || {});
        opts.index = 0;

        var req = $.ajaxSettings.xhr();
        req.open(opts.method, opts.url, true);
        req.timeout = opts.timeout;
        req.onabort = opts.onabort || null;
        req.onerror = opts.onerror || null;
        req.onloadstart = opts.onloadstart || null;
        req.onloadend = opts.onloadend || null;
        req.ontimeout = opts.ontimeout || null;

        // Fires repeatedly while the response is still streaming in.
        req.onprogress = function () {
            try {
                // Split everything received so far into delimiter-separated records.
                var a = String(req.responseText).split(opts.delimiter);
                for (var i = opts.index; i < a.length; i++) {
                    try {
                        var data = JSON.parse(a[i]); // the last record may still be incomplete
                        if (typeof opts.onupdate === 'function') opts.onupdate(data, i);
                        opts.index = i + 1; // remember what has already been consumed
                    } catch (fx) { /* incomplete JSON: wait for more data */ }
                }
            } catch (e) { /* ignore */ }
        };

        req.send(opts.data || null);
    };
})(jQuery);
Largely untested but it seemed to do what you had in mind. I can think of all sorts of ways it could go wrong, though ;-)
$.longPull({ url: 'http://localhost:61873/Test', onupdate: function(data) { console.log(data); }});
As requested, here is some pseudo NodeJS code:
function respond_to_client(res, session, cnt)
{
    // context: res is the object we use to respond to the client
    // session: just some info about the client, irrelevant here
    // cnt: initially 0

    // nothing to tell the client, let's long poll
    if (nothing_to_send(res, session))
    {
        if (cnt < MAX_LONG_POLL_TIME)
        {
            // call this function again in 100 ms, increase the counter
            setTimeout(function () { respond_to_client(res, session, cnt + 1); }, 100);
        }
        else
        {
            // Counter too high.
            // We have nothing to send and we have kept the connection open for too long;
            // close it. The client will open another.
            close_connection(res);
        }
    }
    else
    {
        // The client will consume the data we sent,
        // then quickly send another request.
        send_what_we_have(res);
        close_connection(res);
    }
    return;
}
You can't see how it works from that code alone, because the actual difference from a regular request is made on the server.
The JavaScript just makes a regular request, but the server doesn't have to respond to it immediately. If the server doesn't have anything worth returning (i.e. the change that the browser is waiting for hasn't happened yet), the server just waits, which keeps the connection open.
If nothing happens on the server for some time, either the client side will time out and make a new request, or the server can choose to return an empty result just to keep the flow going.
The connection is not kept open all the time. It is closed automatically when the response is received and the server closes the connection. In long polling the server is simply not supposed to send back data immediately. On ajax complete (i.e. when the server closes the connection), a new request is sent to the server, which opens a new connection again and starts waiting for a new response.
As was mentioned, the long-polling process is handled not only by the client side but mainly by the server side; and not only by the server script (in the case of PHP) but by the server itself, which must not close the "hanging" connection on a timeout.
FWIW, WebSockets use a constantly open connection to the server, which makes it possible to receive and send data without closing the connection.
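As a quick illustration of that difference (the URL is a placeholder), the browser's WebSocket API looks like this:

// one persistent, bidirectional connection; no reconnecting between messages
const ws = new WebSocket('wss://example.com/updates'); // placeholder URL

ws.onopen = () => ws.send('subscribe');            // client -> server
ws.onmessage = (event) => console.log(event.data); // server -> client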
I guess no one has properly explained why we need the timeout in the code. From the jQuery Ajax docs:
Set a timeout (in milliseconds) for the request. This will override any global timeout set with $.ajaxSetup(). The timeout period starts at the point the $.ajax call is made; if several other requests are in progress and the browser has no connections available, it is possible for a request to time out before it can be sent
The timeout option indeed doesn't delay the next execution for X seconds; it only sets a maximum duration for the current call. A good article about timeouts: https://mashupweb.wordpress.com/2013/06/26/you-should-always-add-timeout-to-you-ajax-call-in-jquery/
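To see concretely what the timeout does in the snippet from the question, here is the same poll loop with the complete callback's arguments spelled out; jQuery passes a status string ("success", "timeout", "error", ...) as the second argument. The one-second back-off on errors is my own addition, not part of the original snippet:

(function poll() {
    $.ajax({
        url: "server",
        dataType: "json",
        timeout: 30000, // abort this request after 30 s with textStatus "timeout"
        success: function (data) {
            // update page based on data
        },
        complete: function (jqXHR, textStatus) {
            if (textStatus === "success" || textStatus === "timeout") {
                poll(); // reconnect immediately: the normal long-poll cycle
            } else {
                setTimeout(poll, 1000); // back off briefly on real errors
            }
        }
    });
})();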
Related
I'd like to ask a question about how to close a websocket client when it is offline or has switched networks.
When I try to close the socket in those two cases in Chrome, after I call websocket.close() I don't receive the onclose event for a long time (around 60 s); only then does it finally arrive.
After checking the readyState, I found that during those 60 s the state is 2 (CLOSING), not yet 3 (CLOSED).
So I'd like to know whether there are any steps I missed when calling websocket.close() in the offline/switched-network condition, given that it works fine when the network is normal.
What's your back-end framework?
If you are trying to handle a client whose network suddenly went offline, there are two ways you can try to close the websocket from the client, as follows.
Kindly refer to the source code here.
Using the JS offline event handler
If we would like to detect that the user went offline, we simply call the websocket close function inside the offline event handler.
front-end
function closeWebSocket() {
    websocket.close();
}

$(window).on('beforeunload offline', event => {
    closeWebSocket();
});
back-end (WebSocketServer)
@OnClose
public void onClose(Session session) {
    CURRENT_CLIENTS.remove(session.getId());
}
Using a ping interval on the client side and decreasing the websocket timeout on the server side
If the websocket server doesn't receive any message within a specific time, a timeout occurs. We can use this mechanism to close the session when the client stops sending pings because it has gone offline.
front-end
// send a ping to the server every 3 seconds,
// i.e. more often than the server's 5-second idle timeout configured below
const keepAlive = function (timeout = 3000) {
    if (websocket.readyState === websocket.OPEN) {
        websocket.send('ping');
    }
    setTimeout(keepAlive, timeout);
};
back-end (WebSocketConfig)
@Bean
public ServletServerContainerFactoryBean createWebSocketContainer() {
    ServletServerContainerFactoryBean container = new ServletServerContainerFactoryBean();
    // close the session when no message (including pings) arrives within 5 seconds
    container.setMaxSessionIdleTimeout(5000L);
    return container;
}
How do I constantly update my front-end dashboard with new information from the back end?
I have been searching for a solution online, but couldn't stumble on any.
I already know how to send static variables with EJS, but I can't figure out how to update my front end with new messages from the server.
I am working with Express for the server and EJS for templating, plus server-side JavaScript.
I want to constantly send messages to the user, something like "page 3 of 100...", "page 10 of 100..." and so forth. If you have experience with Node.js, kindly help me out. Thanks.
You could use long polling to solve your problem. Long polling works like this:
A request is sent to the server.
The server doesn't close the connection until it has a message to send.
When a message appears, the server responds to the request with it.
The browser makes a new request immediately.
The situation where the browser has sent a request and holds a pending connection to the server is standard for this method; the connection is re-established only when a message is delivered.
If the connection is lost, because of, say, a network error, the browser immediately sends a new request. A sketch of a client-side subscribe function that makes such long requests:
async function subscribe() {
    let response = await fetch("/subscribe");

    if (response.status == 502) {
        // Status 502 is a connection timeout error,
        // which may happen when the connection was pending for too long
        // and the remote server or a proxy closed it.
        // Let's reconnect.
        await subscribe();
    } else if (response.status != 200) {
        // An error - let's show it
        showMessage(response.statusText);
        // Reconnect in one second
        await new Promise(resolve => setTimeout(resolve, 1000));
        await subscribe();
    } else {
        // Get and show the message
        let message = await response.text();
        showMessage(message);
        // Call subscribe() again to get the next message
        await subscribe();
    }
}

subscribe();
Hope this helps!
I implemented a simple chat for my website where users can talk to each other, with ExpressJS and Socket.io. I added a simple protection from a DDoS attack that can be caused by one person spamming the window, like this:
if (RedisClient.get(user).lastMessageDate > currentTime - 1 second) {
    return error("Only one message per second is allowed")
} else {
    io.emit('message', ...)
    RedisClient.set(user).lastMessageDate = new Date()
}
I am testing this with this code:
setInterval(function() {
    $('input').val('message ' + Math.random());
    $('form').submit();
}, 1);
It works correctly when the Node server is always up.
However, things get extremely weird if I turn off the Node server, run the code above, and then start the Node server again a few seconds later. Suddenly, hundreds of messages are inserted into the window and the browser crashes. I assume this is because while the Node server is down, Socket.io saves all the client emits, and once it detects the server is online again, it pushes all of those messages at once, asynchronously.
How can I protect against this? And what exactly is happening here?
Edit: if I use Node in-memory storage instead of Redis, this doesn't happen. I am guessing that's because the server gets flooded with READs and many READs happen before RedisClient.set(user).lastMessageDate = new Date() finishes. I guess what I need is an atomic READ/SET? I am using this module for connecting to Redis from Node: https://github.com/NodeRedis/node_redis
You are correct: this happens because messages queue up on the client and then flood the server.
When the server receives the messages, it receives them all at once, and their handling is not synchronized: each socket.on("message", ...) invocation runs independently of the others.
So even if your Redis server only has a latency of a few milliseconds, all these messages are received before the first timestamp is written, and everything always goes to the else condition.
You have the following options.
Use a rate-limiter library like this library. It is easy to configure and has multiple configuration options.
If you want to do everything yourself, use a queue on the server. This will take up memory on your server, but you'll achieve what you want: instead of handling every message immediately, it is put into a queue. A new queue is created for every new client, and the queue is deleted after its last item has been processed.
(Update) Use MULTI + WATCH to get transactional behavior, so that a transaction fails if another client modified the watched key in the meantime.
The pseudo-code for the queue option will be something like this:
let queue = {};

let queueHandler = user => {
    // drain this user's queue one message at a time
    while (queue[user].length > 0) {
        let messageObject = queue[user].shift();
        // your redis push logic here
    }
    delete queue[user];
};

let pushToQueue = messageObject => {
    let user = messageObject.user;
    if (!queue[user]) {
        queue[user] = [messageObject];
    } else {
        queue[user].push(messageObject);
    }
    queueHandler(user);
};

socket.on("message", pushToQueue);
UPDATE
Redis supports optimistic locking through the WATCH command, which is used together with MULTI. Using this, you can watch a key; if any other client modifies that key before your transaction executes, the transaction is aborted.
From the redis client README:
Using multi you can make sure your modifications run as a transaction, but you can't be sure you got there first. What if another client modified a key while you were working with its data? To solve this, Redis supports the WATCH command, which is meant to be used with MULTI:
var redis = require("redis"),
    client = redis.createClient({ ... });

client.watch("foo", function (err) {
    if (err) throw err;

    client.get("foo", function (err, result) {
        if (err) throw err;

        // Process result
        // Heavy and time consuming operation here

        client.multi()
            .set("foo", "some heavy computation")
            .exec(function (err, results) {
                /**
                 * If err is null, it means Redis successfully attempted
                 * the operation.
                 */
                if (err) throw err;

                /**
                 * If results === null, it means that a concurrent client
                 * changed the key while we were processing it and thus
                 * the execution of the MULTI command was not performed.
                 *
                 * NOTICE: Failing an execution of MULTI is not considered
                 * an error. So you will have err === null and results === null
                 */
            });
    });
});
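As a side note, for a simple one-message-per-second rule you can avoid WATCH/MULTI entirely: Redis's SET command accepts the NX flag (only set if the key does not exist) and PX (expire after N milliseconds) together, which makes the whole check-and-set a single atomic command. A sketch along the lines of the question's code (the key name and the error reporting are my own assumptions):

// try to claim a one-second slot for this user; Redis guarantees atomicity
client.set('ratelimit:' + user, '1', 'PX', 1000, 'NX', function (err, reply) {
    if (err) throw err;
    if (reply === 'OK') {
        io.emit('message', message); // the slot was free: deliver the message
    } else {
        // reply is null: the key already existed, i.e. this user sent a message
        // less than a second ago, so reject this one
        socket.emit('error_message', 'Only one message per second is allowed');
    }
});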
Perhaps you could extend your client-side code to prevent data being sent while the socket is disconnected. That way, you stop the library from queuing messages while the socket is disconnected (i.e. the server is offline).
This could be achieved by checking to see if socket.connected is true:
// Only allow data to be sent to the server when the socket is connected
function sendToServer(socket, message, data) {
    if (socket.connected) {
        socket.send(message, data);
    }
}
More information on this can be found in the docs: https://socket.io/docs/client-api/#socket-connected
This approach will prevent the built-in queuing behaviour in all scenarios where a socket is disconnected, which may not be desirable; however, it should protect against the problem you are noting in your question.
Update
Alternatively, you could use a custom middleware on the server to achieve throttling behaviour via socket.io's server API:
/*
    Server side code
*/
io.on("connection", function (socket) {
    // Add custom throttle middleware to the socket when connected
    socket.use(function (packet, next) {
        var currentTime = Date.now();

        // If the socket has a previous timestamp, check that enough time has
        // elapsed since the last message was processed
        if (socket.lastMessageTimestamp) {
            var deltaTime = currentTime - socket.lastMessageTimestamp;

            // If not enough time has elapsed, throw an error back to the client
            if (deltaTime < 1000) {
                next(new Error("Only one message per second is allowed"));
                return;
            }
        }

        // Update the timestamp on the socket, and allow this message to be processed
        socket.lastMessageTimestamp = currentTime;
        next();
    });
});
I have a NodeJS app that does some computation, and I'd like to fill a progress bar on the client (AngularJS) showing the amount of computation done. For now I do something like this:
Server side:
var compute_percent = 0;

router.post('/compute', function(req, res) {
    myTask.compute1(function(result) {
        compute_percent = 33;
        myTask.compute2(function(result2) {
            compute_percent = 66;
            myTask.compute3(function(result3) {
                compute_percent = 100;
                res.json(result3);
            });
        });
    });
});

router.get('/compute_percent', function(req, res) {
    res.json(compute_percent);
});
Client side:

angular.module('myApp').controller('myCtrl',
    function($scope, $interval, $http) {
        $interval(function() {
            $http.get('/compute_percent').success(function(result) {
                $scope.percent = result;
            });
        }, 500);
    });
What I don't like is that I end up making a lot of requests (if I want the progress bar to be accurate), and almost all of them are useless (the state didn't change server side).
How can I invert this code, having the server send a message to the listening client when the computing state changes?
You have 3 possibilities:
Standard push
By using sockets or anything else that can communicate both ways, you can do fast information exchange. It will use many messages to update the progress bar, but it's pretty fast and can support tons of requests per second.
Long polling
The browser sends a request, and the server does not respond immediately; instead it waits for an event to occur before responding. The browser then applies the update and sends another request that will again be put on hold.
With this technique you only get an update when the server wants to send one. But if you want great accuracy, this will still produce a lot of requests.
Pushlet
The client sends only one request, and the server fills the response with JavaScript that updates the progress bar.
The response is streamed to keep the connection open, and JavaScript updates are sent when needed.
The response will look like:
set_progress(0);
// nothing for X seconds
set_progress(10);
// nothing for X seconds
set_progress(0);
....
Compared to the others, you still send the same amount of information, but in one request, so it's lighter.
The easiest to implement is long polling, I think.
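For the progress-bar case above, a long-polling version of the /compute_percent route could look roughly like this (a sketch only: it reuses compute_percent and router from the question, while setProgress, the last query parameter and the 25-second cutoff are my own additions):

var waiting = []; // responses of clients currently long-polling for progress

// call this from the compute steps instead of assigning compute_percent directly
function setProgress(percent) {
    compute_percent = percent;
    waiting.forEach(function (res) { res.json(compute_percent); });
    waiting = [];
}

router.get('/compute_percent', function (req, res) {
    if (compute_percent !== parseInt(req.query.last, 10)) {
        res.json(compute_percent); // progress already changed: answer right away
    } else {
        waiting.push(res); // hold the response until setProgress() fires
        setTimeout(function () {
            var i = waiting.indexOf(res);
            if (i !== -1) { waiting.splice(i, 1); res.json(compute_percent); }
        }, 25000); // give up after 25 s so the client can reconnect
    }
});

The client then passes the last value it saw (e.g. /compute_percent?last=33) and only gets an answer when the value differs or the cutoff expires.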
I have a use case where my HTTP requests cache an intermediate result on the server.
If the cache is not present, the request builds it by querying another server.
These requests are fired in succession (in a loop) via AJAX to the Node server, and the number of requests can range from 50 to 500.
The problem:
Since the requests are made in a loop and the cache is not present yet, the first few all try to build the cache, and sometimes subsequent requests find a half-built cache, which returns wrong results.
I can circumvent this problem with polling:
(function next() {
    if (!wait) {
        fs.readFile(cacheFile, function (err) {
            if (err) {
                wait = true;
                createCache(); // sets wait = false;
            } else {
                useCache();
            }
        });
    } else {
        setTimeout(next, waitTime);
    }
})();
My Query:
Can the requests be halted without polling, and continue only after the first request has completed the cache building process?
Yes, it is possible in combination with Futures/Promises. You can take this one.
Outside of the route scope, define var cachePromise; then you can use something like this:
if (!cachePromise) {
    cachePromise = require('future').create();
    buildCache(function() {
        cachePromise.fulfill();
    });
}
cachePromise.when(next); // this triggers the next route in the middleware stack
Put the code in the route stack before the route that returns the result and you are good to go. Thanks.
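For reference, the same idea works with native Promises and no extra library (a sketch under the same assumptions: buildCache builds the cache and invokes its callback when done):

let cachePromise = null;

function ensureCache() {
    if (!cachePromise) {
        // only the first request triggers the build; all others share the promise
        cachePromise = new Promise(function (resolve, reject) {
            buildCache(function (err) {
                if (err) { cachePromise = null; reject(err); }
                else { resolve(); }
            });
        });
    }
    return cachePromise;
}

// Express-style middleware: every request waits for the cache, then proceeds
app.use(function (req, res, next) {
    ensureCache().then(function () { next(); }, next);
});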