XHR2 - Downloading Files from Web Server - javascript

Is it possible to download files with the help of AJAX requests via XHR2? I am able to upload files to the server, but I cannot find a way to download them.
PHP Version 5.4.12
Apache/2.4.4 (Win64) PHP/5.4.12
I need to find a solution to be able to download very large files (up to 4GB). To save server memory, the files need to be downloaded in chunks. I need to monitor download progress too, to provide feedback especially on large files.
I have tried many things, but nothing works well: I run out of PHP memory, it takes very long for the download to start, I cannot monitor the progress, cURL is very slow, x-sendfile provides no download progress, or I cannot run two PHP scripts at once (one that sends the data and another that monitors the progress).
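For reference, here is a minimal client-side sketch with XHR2: setting responseType to 'blob' and listening to onprogress gives a progress indicator, though it buffers the whole response in browser memory, so on its own it won't cover the 4 GB case. The /download.php endpoint and file name are assumptions, not part of the question:

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/download.php?file=big.zip', true); // hypothetical endpoint
    xhr.responseType = 'blob';                           // browser collects the body as a Blob

    xhr.onprogress = function (e) {
        if (e.lengthComputable) {
            console.log('Downloaded ' + Math.round(100 * e.loaded / e.total) + '%');
        }
    };

    xhr.onload = function () {
        if (xhr.status === 200) {
            // hand the finished Blob to the user as a "save as" download
            var url = URL.createObjectURL(xhr.response);
            var a = document.createElement('a');
            a.href = url;
            a.download = 'big.zip';
            document.body.appendChild(a);
            a.click();
            URL.revokeObjectURL(url);
        }
    };

    xhr.send();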

Related

Efficient way to read large binary data in an HTML page?

I'm attempting to write a web page that loads upwards of a gigabyte of binary data. The page is run directly off the local disk, not from a web server.
I tried encoding the data as base64 and embedding it through a script file, but the browser slowed to a crawl and eventually crashed with out-of-memory errors. I then tried encoding the data as a Uint8Array, but it ran into the same problem. I've also tried breaking the binary data down into multiple files and retried both methods on them, but it reached the point where I had over 100 script files and the browser still ran into out-of-memory errors.
So far, the only way to load the data into memory is to use the File API. However, it's not intuitive to have to select these files every time in order to use the web page.
Are there any other methods for reading large binary data into a web page without having to use the File API? I can't use AJAX/fetch because this page is run directly off the local disk.
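For reference, a sketch of the File API route mentioned above, reading the user-selected file in slices so the data never has to be embedded in a script file; the input element id and the slice size are illustrative:

    var input = document.getElementById('fileInput'); // <input type="file">

    input.addEventListener('change', function () {
        var file = input.files[0];
        var chunkSize = 16 * 1024 * 1024; // 16 MB per slice
        var offset = 0;

        function readNext() {
            if (offset >= file.size) return;
            var reader = new FileReader();
            reader.onload = function () {
                var bytes = new Uint8Array(reader.result);
                // ...process this slice of the binary data...
                offset += chunkSize;
                readNext();
            };
            reader.readAsArrayBuffer(file.slice(offset, offset + chunkSize));
        }

        readNext();
    });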

Why gulp-gzip and can I serve gzipped content without configuring the server?

I ran into a dilemma lately as I was exploring the various plugins for gulp. One of them was gulp-gzip, and until then I had never thought about compressing my files. I got gulp-gzip to work correctly and spit out gzipped versions of my HTML, CSS and JS files (a minimal gulpfile sketch follows the questions below), but then, what next?
I googled around and found that most articles talk about configuring the server to send gzipped versions of the content automatically to the client upon request. But I don't quite understand the purpose of gzipping locally.
So, my questions are:
Can I serve gzipped content I get from gulp-gzip without configuring my server?
If yes, how should I proceed -- what should I name my gzipped files? Should I keep the .gz extension and link to my CSS and JS files using it?
If yes, can I test it locally by linking to the same .gz files?
If no, what is the purpose of gulp-gzip in a development environment if the server can be configured to do it automatically?
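For context, a gulpfile along these lines is enough to produce the .gz copies described above (the source and destination paths are illustrative):

    // gulpfile.js sketch: write a foo.css.gz next to each built asset
    var gulp = require('gulp');
    var gzip = require('gulp-gzip');

    gulp.task('compress', function () {
        return gulp.src('dist/**/*.{html,css,js}')
            .pipe(gzip())              // appends .gz to each file name
            .pipe(gulp.dest('dist'));
    });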
Most servers have an option to serve statically pre-compressed files if a *.gz version exists, i.e. when a user requests foo.css, the server will check whether foo.css.gz exists and use it.
It requires server support (the server has to set appropriate HTTP headers), so it won't work with the file:// protocol and may not work on every server.
In URLs you have to refer to the base filename (do not link to .gz directly).
Compressing files ahead of time may be better:
You can use a higher compression level (e.g. the maximum gzip level or the Zopfli compressor), which would be too slow to do in real time on the server.
Compressing ahead of time saves CPU time of the server, because it doesn't have to dynamically compress files when they're requested.
Just be careful when you deploy files to the server to update both *.css and *.css.gz at the same time, otherwise you may be surprised to sometimes see an old version of the file.
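As a rough illustration of the server support described above, here is a sketch in Node/Express; the static directory and the file-type route are assumptions, and most servers (e.g. nginx's gzip_static option) can do the same thing with configuration alone:

    var express = require('express');
    var fs = require('fs');
    var path = require('path');

    var app = express();

    app.get(/\.(css|js|html)$/, function (req, res, next) {
        var gzPath = path.join(__dirname, 'public', req.path + '.gz');
        var acceptsGzip = /\bgzip\b/.test(req.headers['accept-encoding'] || '');

        if (acceptsGzip && fs.existsSync(gzPath)) {
            res.setHeader('Content-Encoding', 'gzip');
            res.setHeader('Vary', 'Accept-Encoding');
            res.type(path.extname(req.path));      // Content-Type of the original file, not .gz
            fs.createReadStream(gzPath).pipe(res); // serve the pre-compressed copy
        } else {
            next();                                // fall through to the plain file
        }
    });

    app.use(express.static(path.join(__dirname, 'public')));
    app.listen(3000);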

Multiple file uploading using javascript, jquery plugins

I'm developing a website in which I have to upload files to the server. There are many file upload controls out there, but none of them serves my purpose: I want to upload, let's say, 1000 files, but I want to do it in chunks of 200 files so that server calls are kept to a minimum. In the scenario described above, 5 calls would be made to the server. I have looked into Plupload and Dropzone; each of them makes a separate call to the server per file, i.e. 10 files means 10 server calls. Is there any file upload control that serves this purpose, or any option in the above-mentioned controls that I can make use of?
Look into Fine Uploader, and perhaps use Amazon S3 as a server-side solution:
http://fineuploader.com/
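If no existing control fits, batching the selected files by hand is not much code. Here is a sketch using FormData and fetch; the /upload endpoint, the field name and the batch size are assumptions, not features of Plupload or Dropzone:

    // Upload `files` (a FileList or array) in groups of `batchSize` per request.
    function uploadInBatches(files, batchSize) {
        var batches = [];
        for (var i = 0; i < files.length; i += batchSize) {
            batches.push(Array.prototype.slice.call(files, i, i + batchSize));
        }

        // send the batches one after another
        return batches.reduce(function (prev, batch) {
            return prev.then(function () {
                var form = new FormData();
                batch.forEach(function (file) {
                    form.append('files[]', file, file.name);
                });
                return fetch('/upload', { method: 'POST', body: form });
            });
        }, Promise.resolve());
    }

    // 1000 selected files with a batch size of 200 -> 5 server calls
    // uploadInBatches(document.getElementById('picker').files, 200);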

JavaScript - Load many tiny files

My JavaScript application has to load many tiny text files from my web server asynchronously (there are about 200 files of roughly 5 kB each). I know that downloading one large file is far faster than downloading many tiny files, but I cannot predict which files are going to be loaded (the client makes the requests) and I have tons of files like this.
How can I speed up the transfer of those files?
I thought about concatenating requested files with PHP. Is that a good idea?
"I thought about concatenating requested files with PHP. Is that a good idea?"
We do the same thing in production with a servlet in Java, and it works quite well. But to get it right we had to cache the concatenated files rather than read them on each request; the file I/O has a lot of overhead.
Here's a list of PHP cache tools. Based on a cursory look at the docs for XCache, you should be able to write a PHP file that collects all of your individual files, concatenates them, and then stores the result in memory to be used as a resource.
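The shape of that concatenate-and-cache approach, sketched here in Node rather than PHP (the ?files= query format, the data directory and the plain-text content type are all assumptions):

    var http = require('http');
    var fs = require('fs');
    var path = require('path');
    var url = require('url');

    var cache = {}; // key: sorted file list -> concatenated body kept in memory

    http.createServer(function (req, res) {
        var names = (url.parse(req.url, true).query.files || '').split(',');
        var key = names.slice().sort().join('|');

        if (!cache[key]) {
            // concatenate once; later requests for the same set are served from memory
            cache[key] = names.map(function (name) {
                return fs.readFileSync(path.join(__dirname, 'data', path.basename(name)), 'utf8');
            }).join('\n');
        }

        res.writeHead(200, { 'Content-Type': 'text/plain' });
        res.end(cache[key]);
    }).listen(8080);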

How to check image file size through PHP before uploading to Amazon S3?

I just had a question about checking image file size. How do you do that in PHP (server-side), or even through JavaScript, before the image gets uploaded straight to S3? In addition, for personal reasons, I cannot use HTML5 to achieve this.
I am aware of solutions such as first uploading to my own server to do the checking there, and then uploading to S3. But that uses loads of bandwidth, so it is out of the solution set.
Thank you very much.
If you want to check the file size before the file is sent to S3, and you are sending it directly to S3 without sending it to a server you control first, your only option is to check the size property of each File or Blob you are going to send to your S3 bucket. This is possible in all browsers that support the File API, which is pretty much all browsers other than IE9 and older. For older browsers, you cannot test sizes client-side.
I'm not sure why you mentioned PHP in your question, and, at the same time, said you do not want to send the files to a server you control first. If you want to check the size of a file before sending it to S3 via some PHP, then of course you will have to send the file to your server first, as PHP is a server side language. If you do indeed not want to send files to your server, you can of course send the files directly to S3 via the browser, and check the size first if the browser supports the File API. This is generally what happens in Fine Uploader, a library I maintain that supports direct uploads to S3 from the browser without sending the files to a server you control first.
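For browsers that do support the File API, the check itself is just a comparison against File.size before the upload starts; a sketch, with the input id and the 5 MB limit as illustrative values:

    var MAX_BYTES = 5 * 1024 * 1024; // example limit
    var input = document.getElementById('imagePicker');

    input.addEventListener('change', function () {
        var files = Array.prototype.slice.call(input.files);
        var tooBig = files.filter(function (f) { return f.size > MAX_BYTES; });

        if (tooBig.length > 0) {
            alert('Too large: ' + tooBig.map(function (f) { return f.name; }).join(', '));
            return;
        }
        // ...proceed with the direct-to-S3 upload (e.g. via Fine Uploader)...
    });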
