Does using jQuery.get effectively double the ping time? - javascript

Suppose I have some script myScript.js that uses jQuery.get() to retrieve a small piece of data from the server. Suppose also that my ping time is horrible at 1500ms. Does using jQuery.get effectively double the ping time to 3000ms?
Or is there async magic that allows some sort of parallel processing? The reason I'm asking is that we use jQuery.get() fairly liberally and I'm wondering if it is an area we need to look at optimizing.
Edit: double compared to somehow rearranging things to just load all the data upon the initial page load and bypassing jQuery.get altogether.

Ping time is usually server-related, whereas jQuery is all client side. So the answer is no, it doesn't affect your ping time.
If you're asking whether using jQuery.get (or ajax in general) can make your client side slower, then the answer is that yes, the more JS you have, the slower the client generally gets if you're trying to process a lot of things, since everything pretty much runs on the same thread. However, by default these ajax requests are asynchronous, so until the server sends the response back the thread is usually idling anyway.
I'd suggest you open your page in Chrome and use the developer tools to see the network usage. That will tell you exactly how much time is taken 'waiting' on the server.
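If you want a rough number from inside the page itself, a minimal sketch like this one (the /api/data URL is just a placeholder) logs how long a single request takes from the client's point of view:
// Rough client-side timing for one request; "/api/data" is a hypothetical endpoint.
var start = performance.now();
jQuery.get("/api/data")
  .done(function (data) {
    // Includes DNS, TCP handshake, server processing and transfer time.
    console.log("Round trip took " + (performance.now() - start).toFixed(0) + " ms");
  })
  .fail(function (jqXHR, textStatus) {
    console.log("Request failed: " + textStatus);
  });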

If you break down a request, you can get an idea of what latency you can expect.
Every TCP connection begins with a three-way handshake:
SYN (client to server)
SYN-ACK (server to client)
ACK (client to server)
If the request fits in the size of one TCP packet (~1500 bytes), it can be sent together with the last part of the handshake to optimize the network flow.
The response might be sent in just one packet as well (depending on its size). Once sent, both sides engage in a connection termination which takes two pairs of FIN-ACK sequences unless the connection is kept alive. At this point I'm not entirely sure whether the server can send FIN together with the last response packet.
So, in the best-case scenario you can expect at least 2x the ping time, but more likely 3-4x.
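If you want to see where those round trips actually go for a given request, the Resource Timing API breaks a fetch down into DNS, connect and wait phases. A rough sketch (the URL filter is a placeholder, and cross-origin resources report zeros unless the server sends a Timing-Allow-Origin header):
// Inspect the timing phases of an already-fetched resource.
performance.getEntriesByType("resource").forEach(function (e) {
  if (e.name.indexOf("/api/data") === -1) return;   // hypothetical URL to inspect
  console.log("DNS lookup:", (e.domainLookupEnd - e.domainLookupStart).toFixed(1), "ms");
  console.log("TCP connect:", (e.connectEnd - e.connectStart).toFixed(1), "ms");
  console.log("Waiting (TTFB):", (e.responseStart - e.requestStart).toFixed(1), "ms");
  console.log("Download:", (e.responseEnd - e.responseStart).toFixed(1), "ms");
});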

Related

How do I receive a variable from Python Flask in JavaScript?

I've seen how to make a POST request from JavaScript to get data from the server, but how would I do this the other way around? I want to trigger a function in the Flask server that will then dynamically update the variable on the JavaScript side to display. Is there a way of doing this in an efficient manner that does not involve periodic iteration? I'm using an API and I only want the API to be called once to update.
There are three basic options for you:
Polling - With this method, you would periodically send a request to the server (maybe every 5 seconds for example) and ask for an update. The upside is that it is easy to implement. The downside is that many requests will be unnecessary. It sounds like this isn't a great option for you.
Long Polling - This method means you would open a request with the server and leave the request open for a long period of time. When the server gets new information it will send a response and close the request - after which the client will immediately open up a new "long poll" request. This eliminates some of the unnecessary requests of regular polling, but it is a bit of a hack, as HTTP was meant for a reasonably short request-response cycle. Some PaaS providers only allow a 30-second window for this to occur, for example.
Web Sockets - This is somewhat harder to set up, but ultimately is the best solution for real-time server-to-client (and vice versa) communication. A socket connection is opened between the server and client and data is passed back and forth whenever either party would like to do so. JavaScript has full web socket support now and Flask has some extensions that can help you get this working. There are even great third-party managed solutions like Pusher.com that can give you a working concept very quickly.
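For the web socket route, the browser side is only a few lines. A minimal sketch, assuming a hypothetical /updates endpoint and a JSON message format (the Flask side would need one of its web socket extensions to expose such an endpoint):
// Open a socket and react whenever the server pushes data.
var socket = new WebSocket("ws://" + location.host + "/updates");   // hypothetical endpoint
socket.onmessage = function (event) {
  var payload = JSON.parse(event.data);   // assumes the server sends JSON
  console.log("Server pushed:", payload);
};
socket.onclose = function () {
  console.log("Connection closed; consider reconnecting with a backoff.");
};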

Prevent recursive calls of XmlHttpRequest to server

I've been googling for hours for this issue, but did not find any solution.
I am currently working on this app, built on Meteor.
Now the scenario is, after the website is opened and all the assets have been loaded in the browser, the browser constantly makes recursive XHR calls to the server. These calls are made at a regular interval of 25 seconds.
This can be seen in the Network tab of the browser console. See the pending request in the last row of the image.
I can't figure out from where it originates, and why it is invoked automatically even when the user is idle.
Now the question is, How can I disable these automatic requests? I want to invoke the requests manually, i.e. when the menu item is selected, etc.
Any help will be appreciated.
[UPDATE]
In response to Jan Dvorak's comment:
When I type "e" in the search box, the list of events whose names start with the letter "e" is displayed.
The request goes with all valid parameters and the Payload like this:
["{\"msg\":\"sub\",\"id\":\"8ef5e419-c422-429a-907e-38b6e669a493\",\"name\":\"event_Coll_Search_by_PromoterName\",\"params\":[\"e\"]}"]
And this is the response, which is valid.
a["{\"msg\":\"data\",\"subs\":[\"8ef5e419-c422-429a-907e-38b6e669a493\"]}"]
The code for this action is posted here
But in the case of the automatic recursive requests, the request goes without the payload and the response is just the letter "h", which is strange, isn't it? How can I get rid of this?
Meteor has a feature called Live page updates:
"Just write your templates. They automatically update when data in the database changes. No more boilerplate redraw code to write. Supports any templating language."
To support this feature, Meteor needs to do some server-client communication behind the scenes.
Traditionally, HTTP was created to fetch dead data: the client tells the server it needs something, and it gets something back. There is no way for the server to tell the client something on its own. Later, a need arose to push data to the client, and several alternatives came into existence:
polling:
The client makes periodic requests to the server. The server responds with new data or says "no data" immediately. It's easy to implement and doesn't use many resources. However, it's not exactly live. It can be used for a news ticker, but it's not exactly good for a chat application.
If you increase the polling frequency, you improve the update rate, but the resource usage grows with the polling frequency, not with the data transfer rate. HTTP requests are not exactly cheap. One request per second from multiple clients at the same time could really hurt the server.
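As a concrete illustration, plain polling is little more than a timer (the /updates endpoint, the 5-second interval and the {messages: [...]} response shape are assumptions for illustration):
// Naive polling: ask the server for news every 5 seconds.
setInterval(function () {
  fetch("/updates")
    .then(function (res) { return res.json(); })
    .then(function (data) {
      if (data.messages && data.messages.length > 0) {
        console.log("New messages:", data.messages);
      }
    });
}, 5000);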
hanging requests:
The client makes a request to the server. If the server has data, it sends them. If the server doesn't have data, it doesn't respond until it does. The changes are picked up immediately, no data is transferred when it doesn't need to be. It does have a few drawbacks, though:
If a web proxy sees that the server is silent, it eventually cuts off the connection. This means that even if there is no data to send, the server needs to send a keep-alive response anyway to make the proxies (and the web browser) happy.
Hanging requests don't use up (much) bandwidth, but they do take up memory. Today's servers can handle many concurrent TCP connections, so it's less of an issue than it used to be. What does need to be considered is the amount of memory associated with the threads holding on to these requests - especially when the connections are tied to specific threads serving them.
Browsers have hard limits on the number of concurrent requests per domain and in total. Again, this is less of a concern now than it was before. Thus, it seems like a good idea to have one hanging request per session only.
Managing hanging requests feels kinda manual as you have to make a new request after each response. A TCP handshake takes some time as well, but we can live with a 300ms (at worst) refractory period.
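In its simplest form, that manual management looks roughly like this (the endpoint, the 25-second server-side wait and the response shape are assumptions; a real implementation also needs smarter error handling):
// Hanging request loop: re-open the request as soon as the previous one resolves.
function poll() {
  fetch("/updates?wait=25")                // hypothetical long-poll endpoint
    .then(function (res) { return res.json(); })
    .then(function (data) {
      if (data.messages) {
        console.log("Server pushed:", data.messages);
      }
      poll();                              // immediately open the next hanging request
    })
    .catch(function () {
      setTimeout(poll, 3000);              // back off briefly on network errors
    });
}
poll();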
Chunked response:
The client creates a hidden iFrame with a source corresponding to the data stream. The server responds with an HTTP response header immediately and leaves the connection open. To send a message, the server wraps it in a pair of <script></script> tags that the browser executes when it receives the closing tag. The upside is that there's no connection reopening but there is more overhead with each message. Moreover, this requires a callback in the global scope that the response calls.
Also, this cannot be used with cross-domain requests as cross-domain iFrame communication presents its own set of problems. The need to trust the server is also a challenge here.
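Roughly sketched, the client side of this technique is a global callback plus a hidden iframe pointed at the stream (the /stream URL and the callback name are invented for illustration):
// The callback must be global so the scripts streamed into the iframe can reach it.
window.handleStreamMessage = function (msg) {
  console.log("Server pushed:", msg);
};
var frame = document.createElement("iframe");
frame.style.display = "none";
frame.src = "/stream";                     // hypothetical streaming endpoint
document.body.appendChild(frame);
// The server keeps the response open and emits one chunk per message, e.g.:
// <script>parent.handleStreamMessage({"text": "hello"});</script>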
Web Sockets:
These start as a normal HTTP connection but they don't actually follow the HTTP protocol later on. From the programming point of view, things are as simple as they can be. The API is a classic open/callback style on the client side and the server just pushes messages into an open socket. No need to reopen anything after each message.
There still needs to be an open connection, but it's not really an issue here with the browser limits out of the way. The browser knows the connection is going to be open for a while, so it doesn't need to apply the same limits as to normal requests.
These seem like the ideal solution, but there is one major issue: IE < 10 doesn't support them. As long as IE8 is alive, web sockets cannot be relied upon. Also, the native Android browser and Opera Mini are out as well (ref.).
Still, web sockets seem to be the way to go once IE8 (and IE9) finally dies.
What you see are hanging requests with a timeout of 25 seconds that are used to implement the live update feature. As I already said, the keep-alive message ("h") is used so that the browser doesn't think it's not going to get a response. "h" simply means "nothing happens".
Chrome supports web sockets, so Meteor could have used them with a fallback to long requests, but, frankly, hanging requests are not at all bad once you've got them implemented (sure, the browser connection limit still applies).

How to reduce server "Wait" time?

I am trying to optimize my site's speed and I'm using the great tool at pingdom.com. Right now, over 50% of the time it takes to load the page is "Wait" time, as shown in the screenshot below. What can I do to reduce this? Also, how typical is this figure? Are there benchmarks on this? Thanks!
EDIT:
Ok.. let me clarify a few things. There are no server-side scripts or database calls going on. Just HTML, CSS, JS, and images. I have already done some things like pushing JS to the end of the body tag to get parallel downloads. I am aware that main.html and templates.html are adding to the overall wait time by being loaded synchronously after js.js downloads; that's not the problem. I am just surprised at how much "wait" time there is for each request. Does server distance affect this? What about being on a shared server, does that affect the wait time? Is there any low-hanging fruit to remedy those issues?
The most common reason for this in the case of Apache is the use of reverse DNS lookups. What this means is that the server tries to figure out what the name of your machine is each time you make a request. This can take several seconds, and that explains why you have a long WAIT time and then a very quick load, because the issue is not bandwidth.
The obvious solution for this is to disable HostnameLookups in /etc/httpd/conf/httpd.conf:
HostnameLookups Off
However... this is usually NOT enough. The fact is that in many cases, Apache still does a reverse lookup even when you have disabled hostname lookups, so you need to take a careful look at each line of your Apache config. In particular, one of the most common reasons for this is logging. By default on many Red Hat / CentOS installations, the log format includes %h, which stands for "hostname" and requires Apache to do a reverse lookup. You can see this here:
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %b" common
You should change those %h entries to %a (the client IP address) to solve this problem.
If you have multiple server requests which the page is waiting on, you can make sure that those server requests are sent asynchronously in parallel so that you are not serializing them.
The slowest possible way to fetch multiple requests is to send one request, wait for its response, send the next request, wait for its response, etc... It's usually much faster to send all requests asynchronously and then process all responses as they arrive. This shortens the total wait time to the longest wait time for any single request rather than the cumulative wait time of all requests.
If you are only making one single request, then all you can do on the client side of things is to make sure that the request is sent to the server as early as possible in the page loading sequence, so that other parts of the page can be doing their business while the request is processing, getting the initial request started sooner (and thus finishing sooner).
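For example, with jQuery you can fire several requests at once and act when they have all returned, rather than chaining them one after another (the URLs here are placeholders):
// Parallel requests: total wait ≈ the slowest single round trip,
// instead of the sum of all three.
jQuery.when(
  jQuery.get("/part1"),
  jQuery.get("/part2"),
  jQuery.get("/part3")
).done(function (r1, r2, r3) {
  // Each rN is an array: [data, statusText, jqXHR]
  console.log("All three responses arrived:", r1[0], r2[0], r3[0]);
});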
The wait time, also known as time to first byte is how long it takes for the server to send the first byte from when the connection is initiated. If this is high, it means your server has got to do a lot of work to render the page before sending it. We need more information about what your site is doing to render the page.
TTFB is directly influenced by the "physical" distance between browser and server. A CDN proxy is the best way to shorten said distance. This, coupled with native caching capabilities, will help provide a swifter response by loading the cached object from the nearest POP (point of presence) location.
The effect will depend on user geo-location and CDN's spread. Still, you can expect significant improvement, 50%-70% or more.
Speaking from experience, I've seen cases in which 90% of the content was cached and delivered directly from a proxy placed on a different continent, on the other side of the globe.
This is an issue with the server... According to Pingdom, "The web browser is waiting for data from the server" is what defines the "Wait" time.
There isn't much you can do from the JavaScript or client-side code end to fix this.

Is there any good trick for a server to handle more requests if I don't have to send any data back?

I want to handle a lot of (> 100k/sec) POST requests from JavaScript clients with some kind of service server. Not much of this data will be stored, but I have to process all of it, so I cannot spend my whole server's power on serving requests only. All the processing needs to be done in the same server instance, otherwise I'll need to use a database for synchronization between servers, which will be slower by orders of magnitude.
However, I don't need to send any data back to the clients, and they don't even expect it.
So far my plan has been to create a few proxy server instances which will be able to buffer the requests and send them to the main server in bigger packs.
For example, let's say that I need to handle 200k requests/sec and each server can handle 40k. I can split the load between 5 of them. Then each one will buffer requests and send them on to the main server in packs of 100. This will result in 2k requests/sec on the main server (however, each message will be 100 times bigger - which probably means around 100-200 kB). I could even send them to the main server using UDP to decrease the amount of needed resources (then I need only one socket on the main server, right?).
I'm just wondering if there is another way to speed things up, especially since, as I said, I don't need to send anything back. I have full control over the JavaScript clients too, but unluckily JavaScript is unable to send data using UDP, which would probably be the solution for me (I don't even care if 0.1% of the data is lost).
Any ideas?
Edit in response to answers given me so far.
The problem isn't with the server being too slow at processing events from the queue, or with putting events into the queue itself. In fact I plan to use the disruptor pattern (http://code.google.com/p/disruptor/), which has been shown to process up to 6 million requests per second.
The only problem which I potentially can have is the need to keep 100, 200 or 300k sockets open at the same time, which cannot be handled by any of the mainstream servers. I know some custom solutions are possible (http://www.metabrew.com/article/a-million-user-comet-application-with-mochiweb-part-3) but I'm wondering if there is a way to make even better use of the fact that I don't have to reply to clients.
(For example, some way to embed part of the data in the initial TCP packet and handle TCP packets as if they were UDP. Or some other kind of magic ;))
Make a single, fast function (probably in C) that gets all requests, behind a very fast server (like nginx). The only job of this function is to store the requests in a very fast queue (like Redis, if you've got enough RAM).
In another process (or server), pop items off the queue and do the real work, processing requests one by one.
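A very rough sketch of that shape in Node.js, using an in-memory array instead of Redis purely to illustrate the accept-fast / process-later split (none of this is tuned for 100k requests/sec):
// Accept-fast front end: acknowledge immediately, queue the body, process in batches.
const http = require("http");
const queue = [];
http.createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => { body += chunk; });
  req.on("end", () => {
    queue.push(body);     // cheap: just remember the payload
    res.writeHead(204);   // nothing to send back to the client
    res.end();
  });
}).listen(8080);
// Stand-in for the separate worker process: drain the queue in batches.
setInterval(() => {
  const batch = queue.splice(0, 100);
  if (batch.length > 0) {
    console.log("processing", batch.length, "requests");   // real work goes here
  }
}, 50);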
If you have control of the clients, as you say, then your proxy server doesn't even need to be an HTTP server, because you can assume that all of the requests are valid.
You could implement it as a non-HTTP server that simply sends back a 200, reads the client request until it disconnects, and then queues the requests for processing.
I think what you're describing is an implementation of a Message Queue. You also will need something to hand off these requests to whatever queue you use (RabbitMQ is quite good, there are many alternatives).
You'll also need something else running which can do whatever processing you actually want on the requests. You haven't made that very clear, so I'm not too sure exactly what would be right for you. Essentially the idea will be that incoming requests are dumped as quickly and simply as possible into the queue by your web server, and then the web server is free to go back to serving more requests. When the system has some resources, it uses them to process the queue, but when it's busy the queue just keeps growing.
Not sure what platform you're on, but you might want to look at something like Lighttpd for serving the POSTs. You might (if same-domain restrictions don't shoot you down) get away with having Lighttpd running on a subdomain of your application (so post.myapp.com). Failing that, you could put a proper load balancer in front of your web servers altogether (so all requests go to www.myapp.com and the load balancer decides whether to forward them to the web server or the queue processor).
Hope that helps
Consider using MongoDB for persisting your requests; its fire-and-forget mechanism can help your servers respond faster.

Javascript 1 second apart ajax requests? Resource usage?

I have a long-term goal of eventually creating a chat of some sort by any means, but for now I'd like to just have a simple one with some MySQL and ajax calls.
To make the chat seem instant, I'd like to have the ajax request interval as short as possible. I get the feeling that if it's a second or lower, it's going to bog down the browser, the user's internet connection, or my server.
Assuming the server doesn't return anything, how much bandwidth and cpu/memory would the client use with constant, one second apart ajax calls?
edit: I'm still open to suggestions on how I can do a chat server. Anything that's possible with free hosting from x10 or 000webhost. I've been told of Heroku but I have no clue how to use it.
edit: Thanks for the long polling suggestion, but that uses too much cpu on the servers.
One technique that can be used is to use a long-running ajax request. The client asks if there's any chat data. The server receives the request. If there's chat data available, it returns that data immediately. If there is no chat data, it hangs onto the request for some period of time (perhaps two minutes) and if some chat data appears during that two minutes, the web request returns immediately with that data. If the full two minutes elapses and no chat data is received, then the ajax call returns with no data.
The client can then immediately issue another request to wait another two minutes for some data.
To make these "long" http requests work, you just need to make sure that your underlying ajax call has a timeout set for longer than the time you've set it for on the server.
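In jQuery terms that just means setting the timeout option higher than the server's hold time, for example (the URL is made up; 150 seconds comfortably exceeds the two-minute window described above):
// Client-side timeout deliberately longer than the server-side wait.
jQuery.ajax({
  url: "/chat/poll",        // hypothetical long-poll endpoint
  timeout: 150000,          // milliseconds
  success: function (data) {
    console.log("Chat data:", data);
    // issue the next long-poll request here
  },
  error: function (jqXHR, textStatus) {
    console.log("Poll ended without data:", textStatus);
    // retry after a short pause so errors don't hammer the server
  }
});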
On the server, you need an efficient mechanism for waiting for data, probably involving semaphores or something like that, because you don't want to be polling internally in the server either.
Doing it this way, you can get near-instantaneous response on the client while only making 30 requests an hour.
To be friendly to the battery of a laptop or mobile device, you need to be sensitive to when your app isn't actually being used (browser not displayed, not the current tab, etc...) and stop the requests during that time.
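The Page Visibility API is one way to do that pausing; a rough sketch (the one-second interval and the empty request body are placeholders for your actual polling code):
// Pause the chat requests while the tab is hidden, resume when it becomes visible again.
var pollTimer = null;
function startPolling() {
  if (!pollTimer) {
    pollTimer = setInterval(function () { /* send the ajax request here */ }, 1000);
  }
}
function stopPolling() {
  clearInterval(pollTimer);
  pollTimer = null;
}
document.addEventListener("visibilitychange", function () {
  if (document.hidden) { stopPolling(); } else { startPolling(); }
});
startPolling();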
As to your other questions, repeated ajax calls (as long as they are spaced at least some small amount of time apart) don't really use much in the way of CPU or memory. They may use battery if they keep the computer from going into an idle mode.
