In terms of efficiency, which is better: maintaining one single long connection with the server, or sending multiple short requests?
I have a web application with multiple interfaces. On one of them, when the user submits data on a web page, it is sent to the server by an ajax query straight away, but the server cannot produce the required answer for some 3-5 seconds. I see two options: the first is to keep querying the server for the answer every second; the second is to have the server side keep the connection open until it can answer.
Which way is more efficient for the server and which for the client? The design should allow a large number of users.
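For illustration, here is a minimal client-side sketch of the two options described above, assuming a hypothetical /check endpoint that returns the answer once it is ready (handleAnswer is just a stand-in):

function handleAnswer(data) {
  console.log('answer', data);           // stand-in for the real handling
}

// Option 1: poll every second until the server has the answer.
function pollForAnswer() {
  fetch('/check')                        // hypothetical endpoint
    .then(function (res) { return res.json(); })
    .then(function (data) {
      if (data.ready) {
        handleAnswer(data);
      } else {
        setTimeout(pollForAnswer, 1000); // ask again in one second
      }
    });
}

// Option 2: long-polling - one request that the server holds open
// for the 3-5 seconds it needs, then answers directly.
function longPollForAnswer() {
  fetch('/check?wait=true')              // hypothetical "hold until ready" flag
    .then(function (res) { return res.json(); })
    .then(handleAnswer);
}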
Let's look at this problem as the process of searching for a game in Hearthstone.
I have two websocket connections. I tag websockets with user tokens so I am able to differentiate users easily.
On the client I can search for a game by pressing a "Play" button. The client then sends a request to find a game, which on the backend side results in up to 10 repeated attempts, one every 10 seconds.
If no game is found, an error 'No game found' is returned to the client.
But the problem is what happens when the game WAS found.
The find-game requests from the two different hosts run concurrently.
So, if a game is found for the first one, the websocket sends the game id to both users. But the second user's find-game request is still inside its 10-second timeout.
So after 10 seconds the backend makes another attempt, also finds a game, and the websocket again sends data to both users.
Should I control this on the client side by simply ignoring the websocket data if a game is already in progress, or is it better to handle that on the backend somehow?
Forgot to add: I can only access my database from the backend via an API, so I have to make HTTP requests from it.
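If the client-side route is taken, a minimal sketch of ignoring duplicate notifications could look like this (the existing websocket is assumed to be in a variable called socket, and the message shape and startGame function are assumptions):

var inGame = false;

socket.onmessage = function (event) {
  var msg = JSON.parse(event.data);   // assumed shape: { type: 'gameFound', gameId: ... }
  if (msg.type === 'gameFound') {
    if (inGame) {
      return;                         // already matched - ignore the duplicate notification
    }
    inGame = true;
    startGame(msg.gameId);            // assumed game-start handler
  }
};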
So, I have an ajax POST from the client that asks for some information.
I am thinking about letting a PHP file first perform a fast task that sends back just a number, and after that perform a background task that takes a bit more time and sends back an array.
I wondered if it's possible to send the two responses separately, each as soon as it is ready.
If it is, how would I handle it on the frontend with JavaScript?
Typical HTTP can only send responses in a Request->Response manner.
What you may be looking for is WebSockets, which keeps a connection open for further messaging from the server.
https://developer.mozilla.org/en-US/docs/Web/API/WebSockets_API
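For example, the server could push the quick number first and the slower array later over the same socket. A minimal client-side sketch (the endpoint, the message shapes and the showNumber/showArray functions are assumptions):

var socket = new WebSocket("ws://example.com/socket");   // assumed endpoint

socket.onopen = function () {
  socket.send(JSON.stringify({ action: "start" }));      // kick off both tasks
};

socket.onmessage = function (event) {
  var msg = JSON.parse(event.data);
  if (msg.type === "count") {
    showNumber(msg.value);      // fast result, arrives first
  } else if (msg.type === "result") {
    showArray(msg.items);       // slower result, arrives a few seconds later
  }
};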
I would like to show a list of connected users without using WebSockets.
I thought about using the HTTP header Connection: keep-alive to get persistent connections.
Then, when clients leave the website, they would run a listener handler on the beforeunload event in order to notify the server that a client is about to leave the list.
But how is the server able to notify the rest of the connected clients to update their lists? (Remember, without using WebSockets, and if possible, without making clients poll the server at any interval.)
So using the Connection: keep-alive header means that the browser and server will carry out multiple HTTP requests/responses over one TCP connection instead of opening and closing a TCP connection for each HTTP request. But this still doesn't allow the server to push data whenever it wants. For the server to respond with anything, the client still needs to make a request. So it isn't really related to real-time push events.
and if possible, without making clients asking any interval to server
This isn't really possible. Like I said, a server cannot send data to a client over http unless the client first requested it.
So you either have to make interval requests for the user list, or you can "simulate" pushing from the server with HTTP long-polling.
The basic idea is that the server never "finishes" its response to a client request, but sends its response in chunks, where those chunks are really treated on the client side as separate pieces of data. This solution is hacky and has a lot of cons, but either way, HTTP long-polling more or less simulates pushing data in real time.
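As an illustration, a client-side long-polling loop might look like this (the /users endpoint, its response shape and the renderUserList function are assumptions):

function pollUserList() {
  fetch('/users?longpoll=true')        // server holds this request open until the list changes
    .then(function (res) { return res.json(); })
    .then(function (users) {
      renderUserList(users);           // assumed rendering function
      pollUserList();                  // immediately re-open the "pending" request
    })
    .catch(function () {
      setTimeout(pollUserList, 5000);  // back off a little on errors
    });
}

pollUserList();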
I need to refresh a part of my view without refreshing the whole page.
On my index.html page I have three panels, each of which shows the number of tickets with a given status. I need to refresh these numbers every time a ticket is created or updated. I used Java with Spring Boot and Thymeleaf to build my application.
This is my view:
This is the way I'm doing it now:
model.addAttribute("resolvedTickets", atendimentoService.findAllTicketsByStatus(STATUS_RESOLVED).size());
I have tried to use WebSockets but I can't figure out how to wire this up and refresh the panels.
In a standard web interaction, the client (i.e. your web browser) sends a request to your server. Your server receives the request, and sends back the information to show in your browser and then terminates the connection.
WebSockets are a way to create a persistent, two-way connection between the client and the server, but it requires cooperation from both. A lot of shared servers don't allow WebSockets, so you first have to make sure your server is capable of providing WebSockets. (I see from your screenshot that you're running on Heroku, which should have no problem running WebSockets.)
On the server side, you need to set up handling for incoming WebSocket requests. I don't know what language you've coded your server in, so I can't provide any guidance, but there are plenty of libraries that do the server-side part of WebSockets in most languages.
On the client side, you need to set up your WebSocket client. MDN has a great guide on WebSockets that explains what you'll need to do. Basically, all you'll have to do is listen for incoming messages and increment your counter.
// Connect to the server's WebSocket endpoint and bump the counter
// every time a message (i.e. a new or updated ticket) arrives.
var count = 0;
var exampleSocket = new WebSocket("ws://example.com/socket");

exampleSocket.onmessage = function (event) {
    count++;
    document.getElementById('myTicketCounter').innerHTML = count;
};
For some things, WebSockets are overkill. If you find that this is too much work for too little reward, you can also just set up an AJAX call to fire every few minutes that pings another page on your server and returns the number of tickets and updates accordingly. It won't be instantaneous, but if you don't need down-to-the-second resolution, it'll probably suffice. You can adjust the interval to be as long or as short as you want (to an extent; bombarding your server with constant requests will slow you down a bit).
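A minimal polling sketch, assuming a hypothetical /tickets/resolved/count endpoint that returns a plain number:

function refreshTicketCount() {
  fetch('/tickets/resolved/count')     // hypothetical endpoint returning a plain number
    .then(function (res) { return res.text(); })
    .then(function (count) {
      document.getElementById('myTicketCounter').innerHTML = count;
    });
}

// Poll once a minute; adjust the interval to taste.
setInterval(refreshTicketCount, 60000);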
I want to handle a lot of (> 100k/sec) POST requests from JavaScript clients with some kind of service server. Not much of this data will be stored, but I have to process all of it, so I cannot spend all of my server power on serving requests alone. All the processing needs to be done in the same server instance, otherwise I'll need to use a database for synchronization between servers, which will be slower by orders of magnitude.
However, I don't need to send any data back to the clients, and they don't even expect it.
So far my plan has been to create a few proxy server instances that buffer the requests and send them on to the main server in bigger packs.
For example, let's say that I need to handle 200k requests/sec and each server can handle 40k. I can split the load between 5 of them. Then each one will buffer requests and send them on to the main server in packs of 100. This results in 2k requests/sec on the main server (however, each message will be 100 times bigger, which probably means around 100-200 kB). I could even send them to the main server using UDP to decrease the amount of resources needed (then I'd need only one socket on the main server, right?).
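For illustration, a rough sketch of the buffering idea on one proxy instance might look like this in Node.js (the pack size, path and main-server address are assumptions):

const http = require('http');

const PACK_SIZE = 100;
let buffer = [];

function flush() {
  if (buffer.length === 0) return;
  const pack = JSON.stringify(buffer);
  buffer = [];
  // Forward one big request instead of 100 small ones.
  const req = http.request({ host: 'main-server', port: 8080, method: 'POST', path: '/pack' });
  req.end(pack);
}

http.createServer((req, res) => {
  let body = '';
  req.on('data', chunk => { body += chunk; });
  req.on('end', () => {
    buffer.push(body);
    if (buffer.length >= PACK_SIZE) flush();
    res.writeHead(204);   // nothing to send back to the client
    res.end();
  });
}).listen(80);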
I'm just wondering whether there is another way to speed things up, especially since, as I said, I don't need to send anything back. I also have full control over the JavaScript clients, but unluckily JavaScript cannot send data over UDP, which would probably be the solution for me (I don't even care if 0.1% of the data gets lost).
Any ideas?
Edit in response to the answers given so far.
The problem isn't the server being too slow at processing events from the queue, or putting events into the queue itself. In fact I plan to use the disruptor pattern (http://code.google.com/p/disruptor/), which has been shown to process up to 6 million requests per second.
The only problem I can potentially have is the need to keep 100, 200 or 300k sockets open at the same time, which none of the mainstream servers can handle. I know some custom solutions are possible (http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3), but I'm wondering whether there is a way to make even better use of the fact that I don't have to reply to clients.
(For example, some way to embed part of the data in the initial TCP packet and handle TCP packets as if they were UDP. Or some other kind of magic ;))
Write a single, fast function (probably in C) that receives all the requests, behind a very fast server (like nginx). The only job of this function is to store the requests in a very fast queue (like Redis, if you have enough RAM).
In another process (or on another server), pop items off the queue and do the real work, processing the requests one by one.
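A minimal sketch of that split using Redis as the queue (Node.js with the ioredis client; the queue name and the handle function are assumptions):

const Redis = require('ioredis');

// Web-facing side: just push the raw request body onto a list and return.
const producer = new Redis();
function enqueue(requestBody) {
  producer.rpush('incoming', requestBody);   // fire and forget, no response needed
}

// Worker side (separate process or server): pop and do the real work one by one.
const consumer = new Redis();
async function work() {
  while (true) {
    const [, body] = await consumer.blpop('incoming', 0);   // block until an item arrives
    handle(body);                                            // assumed processing function
  }
}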
If you have control of the clients, as you say, then your proxy server doesn't even need to be an HTTP server, because you can assume that all of the requests are valid.
You could implement it as a non-HTTP server that simply sends back a 200, reads the client request until it disconnects, and then queues the requests for processing.
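A bare-bones sketch of such a server in Node.js (the port and the in-memory array standing in for the real queue are assumptions):

const net = require('net');

const queue = [];   // stand-in for whatever queue the requests are handed off to

net.createServer(socket => {
  // Answer immediately with a fixed 200 so the browser is satisfied,
  // then keep reading whatever the client sends until it disconnects.
  socket.write('HTTP/1.1 200 OK\r\nContent-Length: 0\r\n\r\n');
  socket.on('data', chunk => queue.push(chunk));
  socket.on('error', () => {});   // drop broken connections silently
}).listen(8080);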
I think what you're describing is an implementation of a Message Queue. You also will need something to hand off these requests to whatever queue you use (RabbitMQ is quite good, there are many alternatives).
You'll also need something else running which can do whatever processing you actually want on the requests. You haven't made that very clear, so I'm not too sure exactly what would be right for you. Essentially the idea is that incoming requests are dumped as quickly and simply as possible into the queue by your web server, and then the web server is free to go back to serving more requests. When the system has some resources, it uses them to process the queue, but when it's busy the queue just keeps growing.
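A small sketch of that hand-off with RabbitMQ via the amqplib client (the queue name and the handleRequest function are assumptions):

const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('requests');

  // Web server side: dump the request into the queue and move on.
  function enqueue(body) {
    ch.sendToQueue('requests', Buffer.from(body));
  }

  // Consumer side: work through the queue whenever resources are free.
  ch.consume('requests', msg => {
    handleRequest(msg.content.toString());   // assumed processing function
    ch.ack(msg);
  });
}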
Not sure what platform you're on, but you might want to look at something like Lighttpd for serving the POSTs. You might (if same-domain restrictions don't shoot you down) get away with running Lighttpd on a subdomain of your application (so post.myapp.com). Failing that, you could put a proper load balancer in front of your web servers altogether (so all requests go to www.myapp.com and the load balancer decides whether to forward them to the web server or the queue processor).
Hope that helps
Consider using MongoDB for persisting your requests; its fire-and-forget write mechanism can help your servers respond faster.
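For example, with the Node.js MongoDB driver an unacknowledged write can be requested by setting the write concern to 0 (the database, collection and document shape here are assumptions):

const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  const requests = client.db('app').collection('requests');

  // w: 0 means "unacknowledged": the driver does not wait for the
  // server to confirm the write, so the request handler returns sooner.
  await requests.insertOne({ body: '...' }, { writeConcern: { w: 0 } });
}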