My JavaScript application has to load many small text files from my web server asynchronously (there are about 200 files of roughly 5 kB each). I know that downloading one large file is much faster than downloading many tiny files, but I cannot predict which files will be requested (the client decides at runtime), and I have a huge number of files like this.
How can I speed up the transfer of those files?
I thought about concatenating requested files with PHP. Is that a good idea?
"I thought about concatenating requested files with PHP. Is that a good idea?"
We do the same thing in production with a Java servlet and it works quite well. But to get it right we had to cache the concatenated files rather than reading them from disk on every request; the file I/O has a lot of overhead.
Here's a list of PHP cache tools. From even a cursory look at the XCache docs, you should be able to write a PHP script that collects the requested individual files, concatenates them, and stores the result in memory to be reused as a cached resource.
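As a rough illustration of the client side of that idea, here is a sketch in JavaScript. It assumes a hypothetical concat.php endpoint that accepts a comma-separated list of file names and returns their contents joined together; the endpoint name and parameter are not from the answer above, just placeholders.

    // Hypothetical bundle request: one round trip instead of ~200.
    // Assumes a server-side endpoint (e.g. concat.php) that accepts a
    // comma-separated "files" parameter and returns the files concatenated.
    async function loadBundle(fileNames) {
      const url = '/concat.php?files=' + encodeURIComponent(fileNames.join(','));
      const response = await fetch(url);
      if (!response.ok) {
        throw new Error('Bundle request failed: ' + response.status);
      }
      return response.text(); // one response body to split or parse client-side
    }

    // Usage:
    // loadBundle(['a.txt', 'b.txt', 'c.txt']).then((text) => { /* split/parse */ });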
Related
As part of performance tuning, GTmetrix suggests that I enable gzip compression and leverage browser caching for the Pinterest, Twitter and Facebook JS files. These settings are usually applied on the server the files are served from, and I am not able to find out how to request these companies to make these files gzipped and have them cached.
Please help me get these files gzipped and cached.
Thanks in advance for any help.
Unfortunately, you cannot gzip external resources. Unless you have code on your website that actually points to those JS/CSS files, you can't do anything with them at all. If your code does point to those files, you could do the following instead:
Copy them over to your server and change your code so that it points to your server.
Create a cron job on your server that checks those external files for changes. If there are differences, copy them over to your server again (see the sketch below).
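Purely as an illustration of the second point, here is a minimal Node.js script you could run from cron to mirror the external files locally. The URLs and destination paths are assumptions, not something prescribed above.

    // Hypothetical mirror script (Node.js), run from cron.
    // It downloads each external file and overwrites the local copy so your
    // own server can serve (and gzip) it. URLs and paths are placeholders.
    const fs = require('fs');
    const https = require('https');

    const files = [
      { url: 'https://platform.twitter.com/widgets.js', dest: './static/widgets.js' },
      { url: 'https://assets.pinterest.com/js/pinit.js', dest: './static/pinit.js' },
    ];

    for (const { url, dest } of files) {
      https.get(url, (res) => {
        if (res.statusCode !== 200) {
          console.error(`Failed to fetch ${url}: ${res.statusCode}`);
          res.resume(); // discard the body
          return;
        }
        res.pipe(fs.createWriteStream(dest)); // overwrite the local copy
      }).on('error', (err) => console.error(err.message));
    }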
What do you mean by "how to request these companies to make these files gzipped and have them cached"?
While it's generally better to serve files gzipped, you shouldn't take it as an absolute rule. There is probably a better reason for these companies to serve the files as they do than your getting a higher score on GTmetrix. Perhaps they prefer to spend more of their high-quality servers' bandwidth to spare users the CPU cost of decompressing the files. Perhaps your resources are images and GTmetrix isn't handling them properly enough to make a useful suggestion (gzipping images is redundant and can even backfire).
Beyond the obvious fact that you have no control over the headers of external files, an attempt to work around that could cause caching problems, leading to something worse than a performance issue. And these external resources from big companies are almost certainly served with very low latency anyway.
I ran into a dilemma lately as I was exploring the various plugins for gulp. One of them was gulp-gzip, and until then I had never thought about compressing my files. I got gulp-gzip to work correctly and spit out gzipped versions of my HTML, CSS and JS files, but then, what next?
I googled around and found that most articles talk about configuring the server to send gzipped versions of the content automatically to the client upon request. But I don't quite understand the purpose of gzipping locally.
So, my questions are:
Can I serve gzipped content I get from gulp-gzip without configuring my server?
If yes, how should I proceed -- what should I name my gzipped files as? Should I keep the .gz extension and link to my CSS and JS files using the same?
If yes, can I test it locally by linking to the same .gz files?
If no, what is the purpose of gulp-gzip in a development environment if the server can be configured to do it automatically?
Most servers have an option to serve statically pre-compressed files if a *.gz version exists, i.e. when a user requests foo.css, the server will check whether foo.css.gz exists and use it (see the sketch below).
It requires server support (the server has to set appropriate HTTP headers), so it won't work with the file:// protocol and may not work on every server.
In URLs you have to refer to the base filename (do not link to .gz directly).
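To make that behaviour concrete, here is a minimal sketch of the check in Node.js with no framework. The directory layout, port, and missing Content-Type handling are simplifications for illustration, not how any particular server implements it.

    // Serve foo.css.gz (with Content-Encoding: gzip) when it exists and the
    // client accepts gzip; otherwise fall back to the plain file.
    const fs = require('fs');
    const http = require('http');
    const path = require('path');

    const root = path.join(__dirname, 'public'); // assumed static root

    http.createServer((req, res) => {
      // Strip the query string and leading "../" segments (very rough sanitisation).
      const safePath = path.normalize(req.url.split('?')[0]).replace(/^(\.\.[\/\\])+/, '');
      const filePath = path.join(root, safePath);
      const acceptsGzip = /\bgzip\b/.test(req.headers['accept-encoding'] || '');

      if (acceptsGzip && fs.existsSync(filePath + '.gz')) {
        // A real server would also set Content-Type and caching headers.
        res.writeHead(200, { 'Content-Encoding': 'gzip', 'Vary': 'Accept-Encoding' });
        fs.createReadStream(filePath + '.gz').pipe(res);
      } else if (fs.existsSync(filePath)) {
        res.writeHead(200);
        fs.createReadStream(filePath).pipe(res);
      } else {
        res.writeHead(404);
        res.end('Not found');
      }
    }).listen(8080);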
Compressing files ahead of time may be better:
You can use a higher compression level (e.g. the maximum gzip level, or the Zopfli compressor), which would be too slow to do in real time on the server; see the gulpfile sketch after this list.
Compressing ahead of time saves CPU time of the server, because it doesn't have to dynamically compress files when they're requested.
Just be careful when you deploy files to the server to update both *.css and *.css.gz at the same time; otherwise you may be surprised to sometimes see an old version of the file.
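For the higher-compression-level point, a gulpfile sketch might look like the following. The gzipOptions name is how the gulp-gzip plugin documents passing options through to zlib, and level 9 is simply zlib's maximum, so treat the details as an assumption about the plugin rather than a definitive recipe.

    // gulpfile.js sketch: pre-compress build output at maximum gzip level,
    // producing foo.css.gz next to foo.css for the server to pick up.
    const gulp = require('gulp');
    const gzip = require('gulp-gzip'); // assumed to be installed

    gulp.task('compress', () =>
      gulp.src('dist/**/*.{html,css,js}')
        .pipe(gzip({ gzipOptions: { level: 9 } })) // 9 = slowest, smallest output
        .pipe(gulp.dest('dist'))
    );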
All my scripts are compressed and minified using UglifyJS.
The size of the file app.min.js is 982.1 kB, but when I run the Node server and open the app in the browser, the download stops at 502 kB and, after a while, goes no further.
I don't know what happened there. Is there some limit on JavaScript file size around 502 kB?
What am I missing?
I think this article may help you; it is all about a Node.js server serving static content, and it recommends using nginx for that purpose.
If you have to serve the files from the Node.js server, then you should keep all files as small as possible. There is no need, for example, to minify library files such as jQuery, since they are already minified; only your own scripts need minifying. You can also combine all library files into one JavaScript file, for example libs.min.js, and the rest of your scripts into another file such as script.min.js.
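If it helps, a gulp sketch of that combining step could look like this, with gulp-concat and gulp-uglify assumed to be installed and the file paths purely illustrative.

    // gulpfile.js sketch: bundle already-minified libraries as-is, and
    // minify + bundle your own scripts separately.
    const gulp = require('gulp');
    const concat = require('gulp-concat'); // assumed plugin
    const uglify = require('gulp-uglify'); // assumed plugin

    gulp.task('libs', () =>
      gulp.src(['node_modules/jquery/dist/jquery.min.js' /* , other libs */])
        .pipe(concat('libs.min.js'))      // just join them, they are minified already
        .pipe(gulp.dest('public/js'))
    );

    gulp.task('scripts', () =>
      gulp.src('src/js/**/*.js')
        .pipe(concat('script.min.js'))
        .pipe(uglify())                   // minify only your own code
        .pipe(gulp.dest('public/js'))
    );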
We have a large number of people (10k+) who return to my clients' sites on a regular basis to use a web app we built, improve, and host for them. We have been making fairly frequent backward-incompatible updates to the web app's JavaScript as our app has improved and evolved. During deployments, the JavaScript is minified and concatenated into one file, loaded in the browser by require.js, and uploaded to and hosted on Amazon S3. The file name and URL currently don't change at all during updates. This last week we deployed a major refactor of the web app and got a few (though not many) reports back that the app had stopped working for some people, particularly in Firefox. It seemed like a caching issue. We were able to see it initially in a few browsers in testing, but it seemed to go away after a refresh or two.
It dawned on me that I really don't know what browser-caching ramifications deploying a new version of a javascript file (with the same name) on S3 will have and whether this situation warrants cache-busting or manipulating S3's headers or anything. Can someone help me get a handle on this? Are there actions I should be taking during deployments to ensure that browsers will immediately get the new version of a javascript file? If not, we run the risk of the javascript and the server API being out of sync and failing, which I think happened here.
Not sure if it matters, but the site's server runs Django and the app and DB are deployed to Heroku. Static files are deployed to S3 using S3Boto via Django's collectstatic command.
This depends a lot on the behaviour of S3 and the headers it sends when requesting files on S3. As you experienced, browsers will show different caching behaviour - so the best option is to use unique filenames.
I would suggest using cache-buster hashes - that way you can be sure that the new file always gets requested by browsers, and you can use long cache-lifetime headers if you host the files on your own server.
You can, for example, create an MD5 hash of your minified file and append it to the name (like mycss-322242fadcd23.css). Or you could use the revision number from your source control system. You have to use the cache buster in all links to this file, but you can normally do this easily in the templates where you embed your static resources. Depending on your application, you could probably use this Django plugin, which should do this work for you.
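As a rough sketch of the hash idea in a Node.js build step (the file names here are only examples, not your actual deployment layout):

    // Append an MD5 content hash to the bundle name so each deploy gets a
    // unique URL, then record the mapping for templates to use.
    const crypto = require('crypto');
    const fs = require('fs');

    const source = 'build/app.min.js'; // example input
    const contents = fs.readFileSync(source);
    const hash = crypto.createHash('md5').update(contents).digest('hex').slice(0, 12);

    const hashedName = `app.min.${hash}.js`;
    fs.copyFileSync(source, `build/${hashedName}`);

    // Templates read this manifest so every <script> tag points at the hashed
    // file, which can then be served with a long cache lifetime.
    fs.writeFileSync('build/manifest.json', JSON.stringify({ 'app.min.js': hashedName }, null, 2));
    console.log('Deployed bundle as', hashedName);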
I want to upload a bunch of image files to a directory that I've set up on my ISP's free hosting service. It's something like http://home.ISPname.net/~username/subdir.
I want my Javascript code to be able to get a directory listing and then preload whatever it finds.
But is getting such a listing even possible? My impression is that it isn't.
I suspect I will instead have to rename my files 00000.jpg and upward and attempt to detect which files exist by trying to load them.
FYI, I know that my ISP does not support using the FTP protocol to get a directory listing.
Thanks for any help.
Under the assumption that your JavaScript code runs in your pages and not on your server, then no: there is no directory-listing API available to JavaScript in a web browser, other than a server-side API accessible via HTTP that you would create yourself. If the directory full of files is on the server, then it's going to have to be some server-side code that delivers the directory listing anyway. You could write such code in the server-side programming environment of your choice (including a server-side JavaScript solution, if that's what you want and if such a thing is possible at your ISP). As Pekka notes, it may be possible to simply enable directory browsing on your server, though that's generally a fairly low-level service that delivers some sort of HTML page, and parsing through that might be somewhat painful (compared to what you could get from a tailor-made service).
Another, simpler thing you could do would be to upload a manifest file along with the other image files. In other words, create the directory listing in some easy-to-digest form, and maintain it separately as a simple file to be fetched.
JavaScript does not support directory listing in a direct way, but you can create a directory-dumper PHP file on the server and fetch its output via AJAX.
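Whichever way the listing is produced (a static manifest file as suggested above, or a PHP script that dumps the directory contents), the client side could look roughly like this. The listing URL and the JSON response shape are assumptions for the sketch.

    // Fetch a JSON listing (e.g. ["a.jpg", "b.jpg", ...]) and preload the images.
    // The listing URL and response format are assumptions.
    async function preloadImages(listingUrl, baseDir) {
      const response = await fetch(listingUrl);
      if (!response.ok) {
        throw new Error('Could not load listing: ' + response.status);
      }
      const fileNames = await response.json();
      return fileNames.map((name) => {
        const img = new Image();
        img.src = baseDir + name; // assigning src triggers the preload
        return img;
      });
    }

    // Usage:
    // preloadImages('/~username/subdir/manifest.json', '/~username/subdir/');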