I'm working on a page that has streaming audio and many slides. The audio plays and the slides are loaded on demand during the event. This works fine during regular events, but fails during large corporate events, where the HTTP download of a slide by 1,000 users spikes bandwidth and temporarily saturates the network, causing the audio to skip or cut out.
I want to preload all the images when the user opens the page, but I was wondering if it is also possible to rate-limit this download. I'd love to limit the preload of the images to a specific KB/s. Is this possible client side? If not, what would be a good option server or client side?
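For what it's worth, one rough client-side approximation (a sketch only; the slide URLs and target rate are placeholders, and this paces downloads rather than truly capping throughput) is to preload the slides one at a time and sleep between requests so the average rate stays near a target KB/s:

function preloadThrottled(urls, targetKBps) {
    var i = 0;
    (function next() {
        if (i >= urls.length) return;
        var started = Date.now();
        var xhr = new XMLHttpRequest();
        xhr.open('GET', urls[i++]);
        xhr.responseType = 'blob';
        xhr.onload = function () {
            // Delay the next request so that downloadedKB / totalSeconds <= targetKBps on average.
            var kb = xhr.response ? xhr.response.size / 1024 : 0;
            var elapsed = (Date.now() - started) / 1000;
            setTimeout(next, Math.max(0, (kb / targetKBps - elapsed) * 1000));
        };
        xhr.send();
    })();
}
// e.g. preloadThrottled(['slides/1.png', 'slides/2.png'], 50); // ~50 KB/s average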
Client-side validation for limiting file size is possible in JavaScript using the File APIs.
An excellent tutorial is here: Reading files in JavaScript using the File APIs.
One final point: you should not rely on client-side validation alone. You must also use server-side validation.
UPDATE (for client-side validation only):
With JS, you can try something like this:
if (typeof FileReader !== "undefined") {
var size = document.getElementById('myfile').files[0].size;
// check file size
}
(This may not be supported by all browsers. Only recent versions of WebKit-based browsers support it.)
Or, with the File API, you can try this:
<input type="file" id="fileInput" />
var size = document.getElementById("fileInput").files[0].size; // bytes; read this inside a change handler, after the user has picked a file
I'm working on a website where the users can solve a practice exam.
My client wants to include a div next to the exam with a PDF viewer.
The idea is that the user can select a PDF file from his local machine and it will be loaded into the PDF viewer next to the exam.
I made it work using pdf.js and pdfobject.
The problem is that both options require the PDF file to be uploaded to our server.
Due to intellectual property law, that is unacceptable to our client.
They want the PDF file to be embedded without uploading it to our server, so it has to be loaded directly from the user's machine.
I found that it can't be done. All the plugins I tried require a virtual URL and will not accept a physical one (file:///local/path/file.pdf).
I don't want to just tell my client "it can't be done". I would like a technical explanation of why it can't be done.
If somebody knows a way to make it work, even better!
This is a security policy. Issues it defends against include:
Allowing sites to detect your operating system by checking default installation paths
Allowing sites to exploit system vulnerabilities (e.g., C:\con\con in Windows 95/98)
Allowing sites to detect browser preferences or read sensitive data
You could try loading it in an <iframe> with the URL of the PDF as the src, but it will likely fail because of the client's browser's security policy, so this is brittle at best.
Maybe you can use HTML5 localStorage to "upload" the file to a safe location ("safe" as far as the browser is concerned).
PDFJS.getDocument() supports "a typed array (Uint8Array) already populated with data or parameter object." So store the file bytes locally, read the document back as bytes, and pass them to pdf.js.
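A minimal sketch of that flow, assuming an older pdf.js build that exposes the global PDFJS (newer builds expose pdfjsLib instead) and a <canvas id="pdfCanvas"> next to the exam; here the bytes go straight from FileReader to pdf.js, with localStorage only needed if the file should persist:

document.getElementById('pdfInput').addEventListener('change', function () {
    var file = this.files && this.files[0];
    if (!file) return;
    var reader = new FileReader();
    reader.onload = function () {
        // The PDF never leaves the user's machine: pdf.js receives the raw bytes.
        var data = new Uint8Array(reader.result);
        PDFJS.getDocument(data).then(function (pdf) {
            return pdf.getPage(1);
        }).then(function (page) {
            var viewport = page.getViewport(1.0);
            var canvas = document.getElementById('pdfCanvas');
            canvas.width = viewport.width;
            canvas.height = viewport.height;
            page.render({ canvasContext: canvas.getContext('2d'), viewport: viewport });
        });
    };
    reader.readAsArrayBuffer(file);
});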
Related:
https://hacks.mozilla.org/2012/02/saving-images-and-files-in-localstorage/
You might be able to do this, if the browser supports it. Take a look at the FileReader object. If the user's browser supports that, you can allow the user to select a file and reference that file in code. Something like this works for me in Firefox:
$('input[type=file]').change(function () {
    if (window.FileReader) {
        var input = this;
        if (input.files && input.files[0]) {
            var reader = new FileReader();
            reader.onload = function (e) {
                // Point the adjacent iframe at the file's base64 data URI.
                $(input).next('iframe').attr('src', e.target.result);
            };
            reader.readAsDataURL(input.files[0]);
        }
    }
});
So if you have an <input type="file"> followed by an <iframe>, this will let the user select a file; once selected, it is loaded into the iframe as a base64-encoded data URI.
My program Precomp can be used to further compress already compressed file formats like GIF, PNG, PDF, ZIP and more. Roughly summarized, it does this by decompressing the compressed streams, recompressing them and storing the differences between the expected compressed stream and the actual compressed stream. As an example, this rotating earth picture from Wikipedia is compressed from 1429 KB to 755 KB. The process is lossless, so the original GIF file can be restored.
The algorithm for the GIF file format can be isolated and implemented relatively easily, so I was thinking about a proof-of-concept implementation in JavaScript. The web server would send a compressed version of the GIF file (.pcf ending, essentially a bzip2-compressed file of the GIF image contents) and the client would decompress the data, recompress it to GIF, and display it. The following things would have to be done:
The web site author would have to compress his GIF files using the standard version of Precomp and serve these instead of the GIF files, together with a JavaScript for the client-side recompression.
The client would decompress the bzip2-compressed file; this could be done using one of the existing bzip2 JavaScript implementations.
The client would recompress the image content into the original GIF file.
The process trades bandwidth against CPU usage on the client side.
Now my questions are the following:
Are there any general problems with the process of loading a different file and "converting" it to GIF?
What would you recommend to display before the client side finishes (image placeholder)?
What do I have to do to make sure the .pcf file is cached? The bandwidth savings would be useless if it doesn't get cached.
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Would it be possible to display interlaced GIFs as intended (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
Let's begin by answering your specific questions. Code example below.
Q&A
Are there any general problems with the process of loading a different file and "converting" it to GIF?
The main problem is complexity. You are effectively writing a browser addon, like those for JPEG 2000.
If you write real browser addons, each major browser does it differently, and addon formats change occasionally, so you have to actively maintain them.
If you write a JS library, it will be easier to write and maintain, but it will be unprivileged and suffer from limitations such as cross-origin restrictions.
What would you recommend to display before the client side finishes (image placeholder)?
Depends on what your format can offer.
If you encode the image dimension and a small thumbnail early, you can display an accurate place-holder pretty early.
It is your format, after all.
What do I have to do to make sure the .pcf file is cached? The bandwidth savings would be useless if it doesn't get cached.
Nothing different from other files.
Configure the Expires and Cache-Control headers on the server side and they will be cached.
A cache manifest and prefetch can also be used.
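For example, a long-lived cache policy on the .pcf responses might look like this (values are illustrative):

Cache-Control: public, max-age=31536000
Expires: Thu, 31 Dec 2037 23:59:59 GMT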
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
This is tricky. When JavaScript is disabled, you can only replace elements, not attributes.
This means you cannot create an image somewhere that points to the .pcf file and ask the browser to rewrite the src attribute when JS is unavailable.
I think the best solution to support no-JS is outputting the images with document.write, using <noscript> as a fallback:
<noscript>
<img src=demo.gif width=90>
</noscript><script>
loadPcf("demo.pcf","width=90")
</script>
(Some library or framework may make you consider <img src=demo.gif data-pcf=demo.pcf>.
This will not work for you, because browsers will preload 'demo.gif' before your script kicks in, causing additional data transfer.)
Alternatively, browser addons are unaffected by "disable JS" settings, so if you make addons instead then you don't need to worry about it.
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Perhaps. You can code a user interface and store the preference in a cookie or in localStorage.
Then you can detect the preference and switch the logic in server code (if cookie) or in client code.
If you are doing addons, all browsers provide a reliable preference mechanism.
The problem is that, again, every browser does it differently.
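A minimal sketch of the cookie/localStorage route (the names are illustrative):

// Store and read the user's bandwidth-vs-CPU preference on the client.
function setPcfPreference(mode) {
    localStorage.setItem('pcfMode', mode); // 'bandwidth' or 'cpu'
}
function getPcfPreference() {
    return localStorage.getItem('pcfMode') || 'bandwidth'; // default: save bandwidth
}
// e.g. if (getPcfPreference() === 'cpu') { /* skip recompression, load plain GIFs */ }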
Would it be possible to display interlaced GIFs as supposed (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
If you hand the browser a partial image, it may think the image is corrupted and refuse to show it.
In that case you would have to implement your own GIF decoder AND encoder so that you can hand the browser a complete placeholder image, just to be safe.
(new) Can I decode an image loaded from another site?
I must also repeat the warning that non-addon JS image decoding does not work with cross-origin images.
This means all .pcf files must be on the same server, same port, and same protocol as the site using them.
For example, you cannot share images across multiple sites or use optimisations like domain sharding.
Code Example
Here is a minimal example that creates an <img>, loads a GIF, halves its width, and puts the result back into the <img>.
To support placeholder or progressive loading, listen to onprogress instead of/in addition to onload.
<!DOCTYPE html><html><head><meta charset="UTF-8"><script>
function loadPcf( file, attr ) {
var atr = attr || '', id = loadPcf.autoid = 1 + ~~loadPcf.autoid;
document.write( '<img id=pcf'+id+' ' + atr + ' />' );
var xhr = new XMLHttpRequest();
xhr.responseType = 'arraybuffer'; // IE 10+ only, sorry.
xhr.onload = function() { // I am loading gif for demo, you can load anything.
var data = xhr.response, img = document.querySelector( '#pcf' + id );
if ( ! img || ! ( data instanceof ArrayBuffer ) ) return;
var buf = new DataView( data ), head = buf.getUint32(0), width = buf.getUint16(6,1);
if ( head !== 1195984440 ) return console.log( 'Not a GIF: ' + file ); // 'GIF8' = 1195984440
// Modify data, image width in this case, and push it to the <img> as gif.
buf.setInt16( 6, ~~(width/2), 1 );
img.src = URL.createObjectURL( new Blob( [ buf.buffer ], { type: "image/gif" } ) );
};
xhr.open( 'GET', file );
xhr.send();
}
</script>
<h1>Foo<noscript><img src=a.gif width=90></noscript><script>loadPcf("a.gif","width=90")</script>Bar</h1>
If you don't need <noscript> compatibility (and thus prevent Facebook/Google+ from seeing the images when the page is shared), you can put the .pcf file in the <img> src and use JS to handle them en masse; then you don't need to call loadPcf for each image, and the HTML becomes much simpler.
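A rough sketch of that en-masse variant; decodePcf is hypothetical and stands in for the bzip2-decompress + GIF-recompress step:

// Rewrite every <img> whose src points at a .pcf file once it is decoded.
Array.prototype.forEach.call(document.querySelectorAll('img[src$=".pcf"]'), function (img) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', img.src);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
        var gifBytes = decodePcf(xhr.response); // hypothetical: returns the restored GIF as a Uint8Array
        img.src = URL.createObjectURL(new Blob([gifBytes], { type: 'image/gif' }));
    };
    xhr.send();
});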
How about <video>?
What you envisioned is mostly doable, in theory, but perhaps you should reconsider.
Judging from the questions you ask, it will be quite difficult for you to define and pull off your vision smoothly.
It is perhaps better to encode your animation in WebM and use <video> instead.
Better browser support: back to IE 9 - just add H.264 to make it a dual-format video. (You need IE 10+ to modify binary data anyway.)
Size: movies have many, many options and tricks to minimise size, and what you learn can be reused in the future.
Progressive: <video> has had some tech for adaptive streaming, and hopefully it will stabilise soon.
JavaScript: <video> does not depend on JavaScript.
Future-proof: both WebM and H.264 will be supported by many programs, long after you stop working on your special format.
Cost-effective: creating a low-bandwidth option with smaller or lower-quality media is easier and more reliable than a custom format. This is why Wikipedia and YouTube offer their media in different resolutions.
For non-animations, PNG can also be colour-indexed and 7z-optimised (keeping the PNG format).
An icon-sized indexed PNG is often smaller than the same GIF.
Or perhaps your vision (as described on the pcf website) is the capability to encode many different files, not only GIF.
That would be more like supporting a new network protocol, and is likely beyond the scope of humble JavaScript. (e.g., how are you going to handle PDF downloads or streaming?)
First, a little background:
I apologize ahead of time for the long-winded nature of this preface; however, it may assist in providing an alternate solution that is not specific to the nature of the question.
I have an ASP.NET MVC application that uses embedded WinForms UserControls. These controls provide "ink-over" support to Tablet PCs through the Microsoft.Ink library. They are an unfortunate necessity due to an IE8 corporate standard; otherwise, HTML5 Canvas would be the solution.
Anyway, an image URL is passed to the InkPicture control through a <PARAM>.
<object VIEWASEXT="true" classid="MyInkControl.dll#MyInkControl.MyInkControl"
id="myImage" name="myImage" runat="server">
<PARAM name="ImageUrl" value="http://some-website/Content/images/myImage.png" />
</object>
The respective property in the UserControl takes that URL, calls a method that performs an HttpWebRequest, and the returned image is placed in the InkPicture.
public Image DownloadImage(string url)
{
    // Open a connection using the default credentials.
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.AllowWriteStreamBuffering = true;
    request.Credentials = CredentialCache.DefaultCredentials;

    // 'using' guarantees the response and stream are cleaned up even on failure;
    // the original catch (Exception ex) { throw ex; } only destroyed the stack trace, so it is gone.
    using (WebResponse response = request.GetResponse())
    using (Stream stream = response.GetResponseStream())
    {
        // Buffer the bytes in memory first: GDI+ requires the stream backing
        // an Image to remain available for the lifetime of that Image.
        var buffer = new MemoryStream();
        stream.CopyTo(buffer);
        buffer.Position = 0;
        return Image.FromStream(buffer);
    }
}
Problem
This works, but there's a lot of overhead in this process that significantly delays my webpage from loading (15 images taking 15 seconds... not ideal). Doing Image img = new Bitmap(url); in the UserControl does not work in this situation because of FileIO permission issues (full trust or not, I have been unsuccessful in eliminating that issue).
Initial Solution
Even though using canvas is not a current option, I decided to test a solution with it. I load each image in JavaScript, then use canvas and toDataURL() to get the base64 data. Then, instead of passing the URL to the UserControl and having it do all the leg work, I pass the base64 data as a <PARAM> instead. The control quickly converts that data back to the image.
That 15 seconds for 15 images is now less than 3 seconds. Thus began my search for an image-to-base64 solution that works in IE7/8.
Here are some additional requirements/restrictions:
The solution cannot have external dependencies (i.e. $.getImageData).
It needs to be 100% encapsulated so it can be portable.
The source and quantity of images are variable, and they must be in URL format (base64 data up front is not an option).
I hope I've provided sufficient information and I appreciate any direction you're able to give.
Thanks.
You can use any of the FlashCanvas, fxCanvas or excanvas libraries, which simulate canvas using Flash or VML in old Internet Explorer versions. I believe all of these provide the toDataURL method from the Canvas API, allowing you to get an encoded representation of your image.
After extensive digging around (I'm in the same fix) I believe this is the only solution, short of writing a PHP script that you can send the image to. The problem with that of course is that there isn't a way to send images to the PHP script unless any of these three conditions is true:
The browser supports typed arrays (Uint8Array)
The browser supports sendAsBinary
The image is being uploaded by someone via a form (in which case it can be sent to a PHP script that responds with the base64 encoding)
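For reference, the canvas round-trip described above looks roughly like this where canvas is available natively (for old IE, a Flash/VML shim that actually implements toDataURL, such as FlashCanvas, would have to be loaded first):

function imageToBase64(url, callback) {
    var img = new Image();
    img.onload = function () {
        // Draw the image into an off-screen canvas, then read it back as base64.
        var canvas = document.createElement('canvas');
        canvas.width = img.width;
        canvas.height = img.height;
        canvas.getContext('2d').drawImage(img, 0, 0);
        callback(canvas.toDataURL('image/png')); // "data:image/png;base64,..."
    };
    img.src = url; // must be same-origin, or toDataURL will throw a security error
}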
I have had trouble researching or otherwise trying to figure out how (if it's even possible) to get binary image data, using JavaScript/jQuery, from an HTML input element of type file.
I'm using WebMatrix (C#), but it may not be necessary to know that, if the purposes of this question can be answered using JavaScript/jQuery alone.
I can take the image, save it in the database (as binary data), and later show the pic on the page from that binary data, after posting. This does, however, leave me without a pic preview before uploading, for which I am almost certain I must use AJAX.
Again, this may not even be possible, but as long as I can get the binary image data, I believe I can push it to the server with AJAX and process the image the same way I would if I were taking it from a database (note that I don't save the image files themselves using GUIDs and all that, I just save the binary data).
If there is an easier way to show a pic preview using the input element, that would work fine too, of course, as the whole idea behind this is to show a pic preview before they hit the submit button (or at least create that illusion).
UPDATE:
I do not consider this a duplicate of another question, because my real question is:
How can I get image data from an input type "file", with JavaScript/jQuery?
If I can just get the data (in the right format) back to the server, I should be able to work with it there, and then return it with AJAX (although, I am absolutely no AJAX expert).
There is, according to the research I have done, NO WAY to get picture previews in all IE versions using only JavaScript (getting the full file path is seen by them as a potential security risk). I could ask my users to add the site to their trusted sites, but you don't usually ask users to tamper with those kinds of settings (not to mention that the quickest way to make your site seem suspicious is to ask users to add it to their trusted sites list. That's like sending an email asking for a password: "Just trust me! I'm soooo safe!" :)
Short answer: use the jQuery Form plugin; it supports AJAX-like form submits even for file uploads.
tl;dr
Thumbnail preview on popular websites is usually done in a number of steps; basically, the website does the following:
Upload the raw image
Resize and optimise the image for data storage
Generate a temporary link to that file (usually stored in a server-maintained HTTP session)
Send it back to the user to enable a 'preview'
Actually store the image after the user confirms it
A few bad solutions are:
Most modern browsers have options to enable script access to local files, but you don't usually ask your users to tinker with those low-level settings.
Earlier Internet Explorer (ah... yes, it's a shame) and ancient versions of modern browsers exposed the full file path as the 'value' of the file input box, which you could use directly to generate an <img> tag. (Nowadays it is replaced by some c:/fakepath/... thing.)
Use Adobe Flash to mimic the file selection panel; it can properly read local files. But passing the data back into JavaScript is another topic...
Hope this helps. ;)
UPDATE
I actually came across a situation that requires a preview before uploading, so I'd like to also put it here. As far as I can recall, there were no transitional versions of modern browsers that masked the real file path but did not yet implement FileReader, but feel free to correct me if I'm wrong. This solution should cater for most browsers, as long as they are supported by jQuery.
// 1. Listen to change event
$(':file').change(function() {
// 2. Check for File API support (the files property)
if (!this.files) {
// 2.1. Old enough to assume a real path
setPreview(this.value);
}
else {
// 2.2. Read the file content.
var reader = new FileReader();
reader.onload = function() {
setPreview(reader.result);
};
reader.readAsDataURL(this.files[0]);
}
});
function setPreview(url) {
// Do preview things.
$('.preview').attr('src', url);
}
In my application, I need to calculate the size of the file in client side (before uploading to server).
I want to restrict the file being uploaded if it doesn't satisfy my requirements, such as file size and file type.
Is it possible to achieve this in client side using JavaScript?
In the vast majority of current browsers, there is no way to accomplish this with pure JS. Some of the newer HTML5 file tools may allow for this, but their support is limited.
You will need to go with a Flash-based uploader tool to get this data before upload. Check out YUI Uploader to get you started.
I would suggest you implement it this way:
Take whatever precautions you can on the server to limit the upload size (each server technology handles it a bit differently). Always do this, as client-side controls can be bypassed
Use a standard File Upload input element to start
Progressively enhance it using something like the YUI Uploader or Uploadify. That way, users with Flash get immediate feedback on files that don't match, and normal uploads still work since the file is also checked on the server.
This isn't practically possible (maybe it is with some Java+ActiveX exploit in IE) before actually submitting the file with a POST request to the server.
In browsers with support for HTML5's File API, you now can.
This is true for up-to-date browsers; not surprisingly, IE is excluded.
$('input[type=file]').change(function () {
    var files = this.files; // jQuery's .attr('files') no longer works; use the DOM property (or .prop('files'))
    var MAXFILESIZE = 500000; // bytes
    if (files) {
        // <input type="file" multiple> can yield several entries
        for (var i = 0; i < files.length; i++) {
            // We can also validate files[i].type or files[i].name vs an expected file type
            if (files[i].size > MAXFILESIZE) {
                alert('File "' + files[i].name + '" is too big.');
            }
        }
    }
});
The future says "hello"! :)