I have a few web servers in different locations and would like to load my JavaScript files from the fastest (nearest?) server. For example, I would expect users in Location A to get their files from servers in that location, while users from Location B would get their files from other servers, ideally servers in Location B, but that is not necessary.
I have found how to load JavaScript files conditionally, and I think that is a good start. I just need a way to find which source is best (fastest response).
Thanks,
Just use a CDN if you want that minimal performance advantage; the difference would only be a few milliseconds.
There is a list of CDNs at http://jquery.com/download/#using-jquery-with-a-cdn
The only advantage of using a CDN is that the user may have already downloaded the jQuery library from another website, so the jQuery library is reused from its cache.
If you are encountering performance problems, try profiling the website and checking the amount of time each resource takes to run or load.
This isn't really a problem the client should solve. You should put your server behind a proxy that balances the load. If the proxy's bandwidth isn't enough, then I think you're out of luck. A quick-and-dirty alternative is to call Math.random() on the client side and choose the server based on that; it should balance the load fairly evenly.
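For illustration, a minimal sketch of that quick-and-dirty approach; the hostnames and script path are placeholders:

    // Pick one of the mirrors at random on the client and load the
    // script from it. Hostnames and the script path are placeholders.
    var servers = [
        'https://a.example.com',
        'https://b.example.com',
        'https://c.example.com'
    ];
    var base = servers[Math.floor(Math.random() * servers.length)];
    var script = document.createElement('script');
    script.src = base + '/js/app.js';
    document.head.appendChild(script);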
If you were to measure the response time of the mirror servers, you would just introduce more load. Let's say we had a way to determine the response time. You would either request the file from all servers, meaning you just made everything worse, or you would wait for server1 and, if it didn't respond in time, move on to server2. But by doing that you still put load on server1.
Also, pinging a server is not a real indicator of its available capacity. The server might be able to respond quickly because a ping response is short and requires no real I/O, whereas requesting a file may mean reading from the disk.
I am primarily a front-end developer/designer, however recently, I've been exploring end to end solutions. Yesterday I finished a TODO application using the mean stack and would like to start exploring deployment options to my VPS.
That being said, I've been advised to use nginx as a reverse proxy for serving up static resources. Unfortunately, I'm getting stuck on some simple questions.
What are some examples of static resources?
What factors define static resources?
What are examples of non-static resources?
Lastly, are there any weird edge-cases I should be aware of?
Sorry about the noobness of this question.
In this case, a static resource refers to one that is not generated with code on the fly, meaning that its contents won't change from request to request.
Images, JavaScript, CSS, etc., are all candidates for this. Basically, you set a large cache time for these resources, and your Nginx servers can keep a copy on disk (or in Redis or something similar) so that they are ready to return to the client without hitting your application servers.
It's important to remember to use versioned file names when setting large cache times. header-image-20140608.png, for example, means you can deploy a later version without worrying about the old one still being in the cache.
A static resource is something that isn't generated dynamically.
An example of a static resource is an image. It's the same for each and every request. It's a file on the filesystem that doesn't require any processing - you simply tell nginx send this file as-is to the user.
An example of a dynamic resource is json data specific to the user requesting it (it has to be generated specifically for that user).
With a dynamic resource there is also often your own domain-specific code being executed, a request to the database, etc.
The reason nginx should serve static content is that it excels at serving such content concurrently - it was designed exactly for this.
If you are using Ruby/Python/node.js/Java etc., you can also serve static resources through those processes (just call File.open() and start streaming the data), but it will be much slower and will also lower the number of simultaneous dynamic requests you can serve.
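Here's a rough sketch of what that looks like in Node.js; the file path, content type, and port are assumptions:

    // Serve a static file from the application process itself: open the
    // file and stream it out. It works, but every download occupies the
    // process, which is exactly the work nginx is better at.
    var http = require('http');
    var fs = require('fs');

    http.createServer(function (req, res) {
        res.writeHead(200, { 'Content-Type': 'image/png' });
        fs.createReadStream('./public/logo.png').pipe(res);
    }).listen(8080);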
A static resource is a resource that will not change frequently, so it can be cached on the client's browser until needed, preventing load on the web server and making the site load faster on the client end.
Some examples are: images, JavaScript, CSS.
A dynamic resource is content that changes between requests, mainly data that is specific to a user or item.
To make sure your static data reduces the load on your server and performs quickly on the client end, you need to take care of various server-specific configurations, such as enabling compression of JS files and setting the proper cache headers for images.
When you change a file's content, make sure you prevent the browser from picking up the old static content from its cache: attach a timestamp to the URLs of these static resources, which will ensure the updated resource is loaded when needed.
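A minimal sketch of that cache-busting trick, assuming a version value your build bumps on each deploy:

    // Append a version (or build timestamp) to the static URL so a
    // changed file bypasses the stale cached copy. The value of
    // `version` is an assumption; use whatever your build produces.
    var version = '20140608';
    var link = document.createElement('link');
    link.rel = 'stylesheet';
    link.href = '/css/site.css?v=' + version;
    document.head.appendChild(link);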
Static resources are resources that don't change and do not involve server-side code.
This typically means images, CSS, and sometimes client-side JavaScript.
We're relaunching our website, and the new site is being hosted on a dedicated server at SoftLayer.
We're also using Rackspace to host our documents etc. However...
My colleague has also put the website's JavaScript & CSS files onto Rackspace, as well as the site's images. This, he says, will mean quicker page loading times (and also save money on bandwidth).
Now, I am not sure about the bandwidth charges, but in terms of page loading, is this correct?
I would assume that documents hosted on the same server would be quicker (same nameservers, etc.) than documents fetched from an external provider, but I may be very wrong.
The website won't be used internationally, but we have ~10,000 members which would use our interactive site.
I would appreciate your comments / discussions regarding this.
Thanks! :)
This has been discussed N times here and there. As you pointed out, you save bandwidth and, if used wisely, it will boost your loading performance. That is the good part. However, when you use a CDN, you should take into account that:
There is an extra external server response time implied for each GET request.
The client may end up blocking the CDN provider.
Ideally, CDN servers should be located near your main user base to get the quickest response possible.
You have to be careful about how many requests per page you make.
I am working on a file upload system which will store individual parts of large files on more than one server. So the distribution of a 1GB file will look something like this:
Server 1: 0-128MB
Server 2: 128MB-256MB
Server 3: 256MB-384MB
... etc
The intention of this is to allow for redundancy (each part will exist on more than one server), security (no one server has access to the entire file), and cost (bandwidth expenses are distributed).
I am curious if anyone has an opinion on how I might be able to "trick" web browsers into downloading the various parts all in one link.
What I had in mind was something like:
Browser is linked to Server 1, which provides a content-size of the full file
Once 128MB is served, Server 1 will intentionally close the connection
Hopefully, the browser will try to resume the download, requesting it from Server 1
Server 1 provides a 3XX redirect to Server 2
Browser continues downloading from Server 2
I don't know for certain that my example works, as I haven't tested it yet. I was curious whether anyone has other solutions.
I'd like to make the whole process as easy as possible (ideally requiring no work beyond a simple download). I don't want the users to have to use another program (i.e., cat'ing the files together). I'd also like to avoid a proxy server, since it would incur extra bandwidth costs.
As far as I'm aware, there is no JavaScript solution for writing a file; if there were one, that would be great.
AFAIK this is not possible using the HTTP protocol alone. You could probably use a custom browser extension, but it would depend on the browser. Another alternative is to create a Java applet that downloads the file from the different servers; the applet can accept the URLs of the different servers as parameters.
To save the generated file:
https://stackoverflow.com/a/4551467/329062
That solution stores the file in memory, though, so it won't work with very large files.
You can download the partial files into a JS variable using JSONP. That will also let you get around the same-origin policy.
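A very rough sketch of what that could look like, assuming each server is willing to wrap a base64-encoded chunk in a JSONP callback (the endpoint, callback name, and payload shape are all made up here):

    // Each <script> tag sidesteps the same-origin policy; the server is
    // assumed to respond with onChunk({ index: 0, base64: "..." }).
    var chunks = [];
    function onChunk(data) {
        chunks[data.index] = data.base64; // reassemble in index order later
    }
    function loadChunk(url) {
        var s = document.createElement('script');
        s.src = url + '?callback=onChunk';
        document.head.appendChild(s);
    }
    loadChunk('https://server1.example.com/file/part0');
    loadChunk('https://server2.example.com/file/part1');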
JavaScript's security model will only allow you to access data from the same origin the JavaScript came from - i.e., not multiple servers.
If you are going to have the file bits on multiple servers, you will need the user to load the web page, fetch the bits, and finally stick them together in the correct order. If you can get all your users to do this (correctly), you are a better man than I.
It's possible to do this in modern browsers over standard HTTP.
You can use XHR2 with CORS to download the file chunks as ArrayBuffers, then merge them using the Blob constructor and use createObjectURL to hand the merged file to the user.
However, I suspect that browsers will keep these objects in RAM, so it's probably a bad idea to use this for large files.
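A minimal sketch of the idea, assuming both servers send the appropriate CORS headers (the URLs and filename are placeholders):

    // Download each chunk as an ArrayBuffer, merge them with the Blob
    // constructor, and hand the result to the user via an object URL.
    function fetchChunk(url) {
        return new Promise(function (resolve, reject) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', url);
            xhr.responseType = 'arraybuffer';
            xhr.onload = function () { resolve(xhr.response); };
            xhr.onerror = reject;
            xhr.send();
        });
    }

    Promise.all([
        fetchChunk('https://server1.example.com/file.part1'),
        fetchChunk('https://server2.example.com/file.part2')
    ]).then(function (buffers) {
        var blob = new Blob(buffers); // chunks merge in array order
        var a = document.createElement('a');
        a.href = URL.createObjectURL(blob);
        a.download = 'file.bin'; // suggested filename for the download
        document.body.appendChild(a);
        a.click();
    });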
Is it possible to detect HTTP cache hits in order to calculate a cache hit rate?
I'd like to add a snippet of code (JavaScript) to an HTML page that reports (via AJAX) whether a resource was served from the client's local cache or fetched from the server. I'd then compile some stats to get some insight into the effects of my cache tuning. I'm particularly interested in hit rates for the very first page of a user's visit.
I thought about using access logs, but that seems imprecise (bots) and cumbersome. Additionally, it wouldn't work for resources from different servers (especially Google's AJAX Libraries API, e.g. jquery.min.js).
Any non-JavaScript solution would be much appreciated too, though.
There might be an easier way, but you could build a test where JavaScript records the time and then loads the element; when the onload event fires, compare the times. You would have to test to see what the exact difference between loading from cache and loading from the server is. Alternatively, for a whole lot of items, have the JavaScript record the time first, then record the onload events of everything else as it loads onto the page; this may be less accurate, though.
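Something like this sketch, where the 50 ms threshold, image URL, and reporting endpoint are all assumptions you'd calibrate yourself:

    // Record a timestamp, load the resource, and guess "cache hit" if
    // the elapsed time is below a threshold measured by prior testing.
    var start = Date.now();
    var img = new Image();
    img.onload = function () {
        var elapsed = Date.now() - start;
        var likelyCached = elapsed < 50;
        var xhr = new XMLHttpRequest(); // report the result via AJAX
        xhr.open('POST', '/cache-stats');
        xhr.setRequestHeader('Content-Type', 'application/json');
        xhr.send(JSON.stringify({ src: img.src, ms: elapsed, cached: likelyCached }));
    };
    img.src = '/images/header.png';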
I am creating a web application using JSP, Struts, EJB, and Servlets. The application is a combined CRM and accounting package, so the database is very large. In order to make execution faster, I want to prevent round trips to the database.
For that purpose, what I want to do is create some temporary XML files on the client machine and use them whenever required. How can I do this, given that JavaScript does not permit me to do so? Is there any way of doing this? Or is there any other solution I can adopt to make my application faster?
You do not have unfettered access to the client file system to create a temporary file on the client. The browser sandbox prevents this for very good reasons.
What you can do, perhaps, is make some creative use of caching in the browser. jQuery's data method is an example of this. TIBCO General Interface makes extensive use of a browser cache for XML data. Their code is open source and you could take a look to see how they've implemented their browser cache.
If the database is large and you are attempting to store large files, the browser is likely not going to be a great place for that data. If, however, the information you want to store is fairly small, using an in-browser cache may accomplish what you'd like.
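For the fairly-small case, here is a hedged sketch of an in-browser cache using jQuery's data method; the endpoint and cache key are assumptions:

    // Keep an AJAX response in memory with jQuery's .data() so repeated
    // lookups during the session skip the round trip to the server.
    function getCustomers(callback) {
        var cached = $(document).data('customers');
        if (cached) { callback(cached); return; }
        $.get('/customers.xml', function (xml) {
            $(document).data('customers', xml); // cache for later calls
            callback(xml);
        });
    }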
You should be caching on the web server.
As you've no doubt realised by now, there is a very limited set of things you can do on the client machine from a web app (e.g., write a cookie).
You can make your application use the browser plugin Google Gears, which gives you real client-side storage.
Apart from that, remember there is a sizeable overhead for every single request; if needed, you can easily pack a few hundred kB into one response, but far-away users might only be able to execute a few requests per second. Try to keep the number of requests down, even if it means adding overhead in the form of more data.
@justkt Actually, there is no good reason not to allow a web application to store data. Indeed, the HTML5 specification includes a database similar to the one offered by Google Gears; browser support is just a bit too sporadic to rely on that feature.
If you absolutely want to cache it on the client, you can create the file on your server and have your web app retrieve it. That way the browser will fetch it and keep it in the client cache.
But keep in mind that this could be a pain for the client if the file is large enough.