Is there a reason PNG and JPG images would be embedded in a JavaScript file like this:
// Template/Image data
var LOGO = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAADgAAAA4 etc";
var BACKGROUND = "data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIA etc";
If I remove these portions and call files stored on a server instead, will there be a performance penalty or something? The only thing I can think of is Apache serving extra requests for those images, but I'm not even sure it works that way. Is there anything else?
It's mostly convenience, and it avoids preloading the images. Since no additional request has to be sent to the server to display the image, the image is displayed as soon as you set this value as the src attribute.
In terms of the amount of data downloaded, this technique avoids the overhead of additional requests, but the total size downloaded can be a bit larger, since base64 encoding inflates the image data by roughly a third. In applications where you have lots of such images, preloading could be the better approach.
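For illustration, a minimal sketch of how such an embedded image is used (the variable name and truncated data are placeholders):
var LOGO = "data:image/png;base64,..."; // full base64 payload omitted here

// The browser decodes the data URI in place; no HTTP request is made.
var img = document.createElement("img");
img.src = LOGO; // displayed immediately, no round trip to the server
document.body.appendChild(img);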
A request to a separate image file takes longer than showing the image from the embedded data, so your page saves some requests :)
Related
I was wondering: is there any script that, before the page loads, accesses all images in a specific folder and parses them to get their height and width so I could use them later on? This all needs to be done before loading anything into the webpage. Is there a way?
The only conceivable way I can think of is to run a custom server, either a plain static server (even in Python) or, better, in Node.js, and have it do one of the following:
Send the client a list of all of the file names in the folder, then on the client side hold off the rest of the page load with JavaScript until each picture has been loaded (via AJAX, or by setting it as the source of an img element), waiting for each onload event and reading the width and height.
Do everything on the server side: read the list of images, read each file's raw binary data with fs.readFileSync, and use a basic image-processing library (for example pngjs for Node.js) to get the width and height of each image; once the full array of image data is ready, respond to the request for the page (e.g. call response.end if you are using createServer).
I'm not going to write up a full code example at this time, but let me know if more clarification is needed; a rough sketch of the client-side variant follows.
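A minimal sketch of that client-side approach, assuming the server exposes a JSON list of file names at a hypothetical /images/list endpoint:
// Hypothetical endpoint returning something like ["a.png", "b.jpg"]
fetch('/images/list')
  .then(function (res) { return res.json(); })
  .then(function (names) {
    // Load every image and wait for its onload before continuing.
    return Promise.all(names.map(function (name) {
      return new Promise(function (resolve, reject) {
        var img = new Image();
        img.onload = function () {
          resolve({ name: name, width: img.naturalWidth, height: img.naturalHeight });
        };
        img.onerror = reject;
        img.src = '/images/' + name;
      });
    }));
  })
  .then(function (dimensions) {
    // All sizes are known before the rest of the page is built.
    console.log(dimensions);
  });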
I'm Developing a WordPress plugin for customers to design custom T-Shirts, with the option of uploading their own images. The plugin takes several screenshots and emails them to a print department.
In JavaScript I convert the screenshots to base64 data, which is then sent via AJAX to a PHP file; this creates a folder for the customer's design, creates the images from the data and stores the screenshots in there.
Most screenshots/base64 data send across just fine; for example, just adding in text creates no problems. However, if the user uploads an image and it's scaled up too much, it causes various errors (sometimes a 400 error, sometimes 404 and sometimes 500).
Running this through my local setup on Windows with Wamp, it's fine. I can upload images and scale them to 12x with no issues. However when I try this with the live site, I get the above problems if I scale any of the images past 4x, and with most images this happens if I even try to scale them up at all past 1x.
The resolution/file size of the image seems to have an effect, though not in an obvious way. I can send a huge plain red square, or a normal image at 1x scale.
At first I thought this was a POST data limit issue, except the live site's POST limit is double what I had set on my WAMP setup, which doesn't have this problem.
Also, and even stranger: I tested uploading the image but replacing the base64 data with simple characters (so the scaled-up image exists in the page but its base64 data isn't sent via POST), and I still have the same issue. So I don't think it's a simple POST limit issue.
I cannot for the life of me find a solution to this; any help would be hugely appreciated.
Figured out a way around it; I'll give my solution in case anyone else has the same issue and comes across this post.
Basically I converted my base64 image data to a blob and appended that to a newly created FormData object. I found that also appending my nonce and action (amended to work with the admin-ajax way of using AJAX) to the FormData helped deal with most issues on the JavaScript side. In the AJAX request I set processData and contentType to false.
As for the PHP side, I set a variable equal to the specific $_FILES array element I just sent. I used file_get_contents() on that variable (i.e. the blob data), and wrapped that in file_put_contents() to actually write the image.
That's the quick version. If anyone wants a more detailed explanation let me know.
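A minimal sketch of the JavaScript side, where the base64ToBlob helper, the action/nonce names and the ajaxUrl variable are placeholders rather than the actual plugin code:
// Illustrative helper: decode a base64 data URI into a Blob.
function base64ToBlob(dataUri) {
  var parts = dataUri.split(',');
  var mime = parts[0].match(/:(.*?);/)[1];
  var binary = atob(parts[1]);
  var bytes = new Uint8Array(binary.length);
  for (var i = 0; i < binary.length; i++) bytes[i] = binary.charCodeAt(i);
  return new Blob([bytes], { type: mime });
}

var formData = new FormData();
formData.append('screenshot', base64ToBlob(screenshotDataUri), 'design.png');
formData.append('action', 'save_design'); // hypothetical admin-ajax action name
formData.append('nonce', myNonce);        // nonce made available to the script

jQuery.ajax({
  url: ajaxUrl,        // typically admin-ajax.php
  type: 'POST',
  data: formData,
  processData: false,  // let the browser build the multipart body itself
  contentType: false,
  success: function (res) { console.log(res); }
});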
I am working on a Chrome Extension, and I wanted to know if there was a way to know the amount of data which has been downloaded while loading a page.
For example, if the user activates the extension and goes to google.com, I want to show them the size of that page.
Is there a way to do this?
Thanks!
There are a multitude of ways you could determine the size of a page using just JavaScript.
You could manually calculate the size of the page by counting the amount of characters in the page and in scripts. The example script below calculates the size of an ASCII-encoded HTML document (so not including pictures, scripts from URLs, etc.). I'm not too sure how accurate or fast this is, so don't quote me on it.
Example script:
var html = document.getElementsByTagName('HTML')[0].outerHTML, // get all HTML as a string
    sizeKB = html.length / 1024; // assuming the page is encoded in ASCII, each char is one byte, 1024 bytes = 1 KB
For determining size of images, this question could help: Determining image file size + dimensions via Javascript?
You could load the results using another service like Google's PageSpeed Insights or Pingdom. You could try to load the services in an iframe in your background page and use content scripts to input the URL, extract the site's statistics and send them to the popup. I'm sure plenty of other services could help you do the same with AJAX calls, although I don't know of any.
Using AJAX and jQuery, you could determine the size of all the assets in the page and add them together: Get size of file requested via ajax . Used correctly, this could fetch all of the files from the cache, so it wouldn't use more of the network. But it might be a bit slow for pages with a lot of non-inline scripts, stylesheets, and images. A sketch of this idea follows.
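A minimal sketch of that idea, using HEAD requests so nothing is re-downloaded (note that Content-Length can be missing or misleading for chunked or compressed responses, and cross-origin assets may not expose it):
// Read the Content-Length header of one asset without downloading its body.
function getAssetSize(url) {
  return fetch(url, { method: 'HEAD' }).then(function (res) {
    var len = res.headers.get('Content-Length');
    return len ? parseInt(len, 10) : 0; // 0 if the header is unavailable
  });
}

// Example: sum the sizes of all images on the page.
Promise.all(Array.from(document.images).map(function (img) { return getAssetSize(img.src); }))
  .then(function (sizes) {
    var totalKB = sizes.reduce(function (a, b) { return a + b; }, 0) / 1024;
    console.log('Approximate image payload: ' + totalKB.toFixed(1) + ' KB');
  });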
Using the chrome.webRequest API, you could read the 'Content-Length' header to determine the file size. I haven't tested this script either, so let me know how it works. Make sure to have a fallback if the header is missing!
Example script:
chrome.webRequest.onHeadersReceived.addListener(
  function (details) {
    var fileSize;
    details.responseHeaders.forEach(function (header) {
      // Header names are case-insensitive, so normalise before comparing.
      if (header.name.toLowerCase() === 'content-length')
        fileSize = parseInt(header.value, 10);
    });
    if (!fileSize) // if the Content-Length header is missing, fall back to another way of calculating file size
      fallBackGetFileSize(details);
  },
  { urls: ["http://*/*"] },
  ["responseHeaders"]
);
My program Precomp can be used to further compress already compressed file formats like GIF, PNG, PDF, ZIP and more. Roughly summarized, it does this by decompressing the compressed streams, recompressing them and storing the differences between the expected compressed stream and the actual compressed stream. As an example, this rotating earth picture from Wikipedia is compressed from 1429 KB to 755 KB. The process is lossless, so the original GIF file can be restored.
The algorithm for the GIF file format can be isolated and implemented relatively easily, so I was thinking about a proof-of-concept implementation in JavaScript. This would lead to the web server sending a compressed version of the GIF file (.pcf ending, essentially a bzip2 compressed file of the GIF image contents) and the client decompressing the data, recompressing it to GIF and displaying it. The following things would have to be done:
The web site author would have to compress his GIF files using the standard version of Precomp and serve these instead of the GIF files, together with a JavaScript for the client-side recompression.
The client would decompress the bzip2 compressed file; this could be done using one of the existing bzip2 JavaScript implementations.
The client would recompress the image content into the original GIF file.
The process is a trade-off of bandwidth against CPU usage on the client side.
Now my questions are the following:
Are there any general problems with the process of loading a different file and "converting" it to GIF?
What would you recommend to display before the client side finishes (image placeholder)?
What do I have to do to make sure the .pcf file is cached? Bandwidth savings would be useless if it doesn't get cached.
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Would it be possible to display interlaced GIFs as intended (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
Let's begin by answering your specific questions. Code example below.
Q&A
Are there any general problems with the process of loading a different file and "converting" it to GIF?
The main problem is complication. You are effectively writing a browser addon, like those for JPEG2000.
If you are writing real browser addons, each major browser does it differently and changes its addon format occasionally, so you have to actively maintain your addons.
If you are writing a JS library, it will be easier to write and maintain, but it will be unprivileged and suffer from limitations such as cross-origin restrictions.
What would you recommend to display before the client side finishes (image placeholder)?
Depends on what your format can offer.
If you encode the image dimension and a small thumbnail early, you can display an accurate place-holder pretty early.
It is your format, after all.
What do I have to do to make sure the .pcf file is cached? Bandwidth savings would be useless if it doesn't get cached.
Nothing different from other files.
Configure the Expires and Cache-Control header on server side and they will be cached.
Manifest and prefetch can also be used.
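For instance, a hypothetical sketch with a plain Node.js server (header values are illustrative; your actual server setup will differ):
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  if (req.url.slice(-4) === '.pcf') {
    res.writeHead(200, {
      'Content-Type': 'application/octet-stream',
      // Let the browser keep the .pcf file for a week; adjust to taste.
      'Cache-Control': 'public, max-age=604800'
    });
    fs.createReadStream('.' + req.url).pipe(res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);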
Is there a way to display the original GIF if JavaScript is deactivated, but avoid loading the GIF if JavaScript is activated?
This is tricky. When JavaScript is disabled, you can only replace elements, not attributes.
This means you cannot create an image somewhere that points to the .pcf files and ask the browser to rewrite the src attribute when JS is unavailable.
I think the best solution to support no JS is outputting the images with document.write, using noscript as a fallback:
<noscript>
<img src=demo.gif width=90>
</noscript><script>
loadPcf("demo.pcf","width=90")
</script>
(Some library or framework may make you consider <img src=demo.gif data-pcf=demo.pcf>.
This will not work for you, because browsers will preload 'demo.gif' before your script kicks in, causing additional data transfer.)
Alternatively, browser addons are unaffected by "disable JS" settings, so if you make addons instead then you don't need to worry about it.
Can I give the users a way to configure the behaviour? E.g. on mobile devices, some might avoid bandwidth, but others might want less CPU usage.
Perhaps. You can code a user interface and store the preference in a cookie or in localStorage.
Then you can detect the preference and switch the logic in server code (if using a cookie) or in client code.
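A tiny sketch of the client-side switch (the key and mode names are arbitrary):
// Read the stored preference, defaulting to the bandwidth-saving mode.
var mode = localStorage.getItem('pcf-mode') || 'bandwidth';

function setPcfMode(newMode) { // e.g. 'bandwidth' or 'cpu'
  localStorage.setItem('pcf-mode', newMode);
}

if (mode === 'cpu') {
  // Load the plain GIFs directly (more bandwidth, less CPU).
} else {
  // Load the .pcf files and recompress on the client (less bandwidth, more CPU).
}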
If you are doing addons, all browsers provide reliable preference mechanism.
The problem is that, again, every browser does it differently.
Would it be possible to display interlaced GIFs as intended (going from a rough version to the final image)? This would require updating the image content multiple times at different stages of recompression.
If you hand browsers a partial image, they may think the image is corrupted and refuse to show it.
In that case you have to implement your own GIF decoder AND encoder so that you can hand the browser a complete placeholder image, just to be safe.
(new) Can I decode image loaded from another site?
I must also repeat the warning that non-addon JS image decoding does not work with cross-origin images.
This means all .pcf files must be on the same server, same port, and same protocol as the site using them.
For example you cannot share images for multiple sites or do optimisations like domain sharding.
Code Example
Here is a minimal example that creates an <img>, loads a GIF, halves its width, and puts the result back into the <img>.
To support placeholder or progressive loading, listen to onprogress instead of/in addition to onload.
<!DOCTYPE html><html><head><meta charset="UTF-8"><script>
function loadPcf( file, attr ) {
var atr = attr || '', id = loadPcf.autoid = 1 + ~~loadPcf.autoid;
document.write( '<img id=pcf'+id+' ' + atr + ' />' );
var xhr = new XMLHttpRequest();
xhr.responseType = 'arraybuffer'; // IE 10+ only, sorry.
xhr.onload = function() { // I am loading gif for demo, you can load anything.
var data = xhr.response, img = document.querySelector( '#pcf' + id );
if ( ! img || ! ( data instanceof ArrayBuffer ) ) return; // note the parentheses: instanceof binds tighter than !
var buf = new DataView( data ), head = buf.getUint32(0), width = buf.getUint16(6,1);
if ( head !== 1195984440 ) return console.log( 'Not a GIF: ' + file ); // 'GIF8' = 1195984440
// Modify data, image width in this case, and push it to the <img> as gif.
buf.setInt16( 6, ~~(width/2), 1 );
img.src = URL.createObjectURL( new Blob( [ buf.buffer ], { type: "image/gif" } ) );
};
xhr.open( 'GET', file );
xhr.send();
}
</script>
<h1>Foo<noscript><img src=a.gif width=90></noscript><script>loadPcf("a.gif","width=90")</script>Bar</h1>
If you don't need <noscript> compatibility (and thus prevent Facebook/Google+ from seeing the images when the page is shared), you can put the pcf file in the <img> src and use JS to handle them en masse, so that you don't need to call loadPcf for each image; this will also make the HTML much simpler. A rough sketch follows.
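A hedged sketch of that en-masse approach, assuming a hypothetical decodePcf function that turns the downloaded bytes back into a GIF Blob:
// Find every <img> whose src points at a .pcf file and replace it with the rebuilt GIF.
document.addEventListener('DOMContentLoaded', function () {
  var images = document.querySelectorAll('img[src$=".pcf"]');
  Array.prototype.forEach.call(images, function (img) {
    var xhr = new XMLHttpRequest();
    xhr.responseType = 'arraybuffer';
    xhr.onload = function () {
      // decodePcf is hypothetical: bunzip2 + recompress to GIF, returning a Blob.
      img.src = URL.createObjectURL(decodePcf(xhr.response));
    };
    xhr.open('GET', img.src);
    xhr.send();
  });
});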
How about <video>?
What you envisioned is mostly doable, in theory, but perhaps you should reconsider.
Judging from the questions you ask, it will be quite difficult for you to define and pull off your vision smoothly.
It is perhaps better to encode your animation in WebM and use <video> instead.
Better browser support, back to IE 9 - just add H.264 to make it a dual-format video. (You need IE 10+ to modify binary data.)
Size: Movies have many, many options and tricks to minimise size, and what you learn can be reused in the future.
Progressive: <video> has had some tech for adaptive video, and hopefully it will stabilise soon.
JavaScript: <video> does not depend on JavaScript.
Future-proof: Both WebM and H.264 will be supported by many programs, long after you stop working on your special format.
Cost-effective: Creating a low-bandwidth option with smaller or lower-quality media is easier and more reliable than a custom format; this is why Wikipedia and YouTube offer their media in different resolutions.
For non-animations, PNG can also be colour-indexed and 7z-optimised (keeping the PNG format).
An icon-sized indexed PNG is often smaller than the same GIF.
Or perhaps your vision (as described on the pcf website) is the capability to encode many different files, not only GIF.
That would be more like supporting a new network protocol, and is likely beyond the scope of humble JavaScript. (E.g. how are you going to handle PDF download or streaming?)
Background
I'm working on an internal project that basically can generate a video on the client side, but since there are no JavaScript video encoders I'm aware of, I'm just exporting each frame individually. I need to avoid uploading to the server; this is all happening on the client side.
Implementation
I'm using this FileSaver.js (more specifically, Chrome's webkit FileSystem API) to save a large number of PNGs generated by an HTML5 canvas. I set Chrome to automatically download to a specific folder, so when I hit 'Save' it just takes off and saves something like 20 images per second. This works perfectly for my purposes.
If I could use JSZip to compress all these frames into one file before offering it to the client to save, I would, but I haven't even tried because there's just no way the browser will have enough memory to generate ~8000 640x480 PNGs and then compress them.
Problem
The problem is that after a certain number of images, every file downloaded is empty. Chrome even starts telling me in the download bar that the file is 0 bytes. Repeated on the same project with the same export settings, the empty saves start at exactly the same time. For example, with one project, I can save the first 5494 frames before it chokes. (I know this is an insanely large number, but I can't help that.) I tried setting a 10ms delay between saves, but that didn't have any effect. I haven't tried a larger delay because exporting takes a very long time as it is.
I checked the blob.size and it's never zero. I suspect it's exceeding some quota, but there are no errors thrown; it just silently fails to either write to the sandbox or copy the file to the user-specified location.
Questions
How can I detect these empty saves? Prevent them? Is there a better way to do this? Am I just screwed?
EDIT: Actually, after debugging FileSaver.js, I realized that it's not even using webkitRequestFileSystem; it returns when it gets here:
if (can_use_save_link) {
    object_url = get_object_url(blob);
    save_link.href = object_url;
    save_link.download = name;
    if (click(save_link)) {
        filesaver.readyState = filesaver.DONE;
        dispatch_all();
        return;
    }
}
So, it looks like it's not even using the FileSystem API, and therefore I have no idea how to empty the storage before it's full.
EDIT 2: I tried moving the "if (can_use_save_link)" block to inside the "writer.onwriteend" function, and changing it to this:
if (can_use_save_link) {
save_link.href = file.toURL();
save_link.download = name;
click(save_link);
}else{
target_view.location.href = file.toURL();
}
The result is I'm able to save all 8260 files (about 1.5GB total) since it's now using storage with a quota. Before, the files didn't show up in the HTML5 FileSystem because I assume you didn't need to put them there if the anchor element supported the 'download' attribute.
I was also able to comment out the code that appends ".download" to the filename, and I had to provide an empty anonymous function as an argument to both instances of "file.remove()".
Use JSZip; it won't use too much memory if you disable compression (which is the default). To disable compression explicitly anyway, pass compression: "STORE" when calling zip.generate().
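A minimal sketch of that (JSZip 2.x-style generate(); the frames array and file names are hypothetical, assumed to hold one ArrayBuffer per PNG):
var zip = new JSZip();

// With STORE the frames are only concatenated, not deflated,
// so CPU and memory overhead stay low.
frames.forEach(function (frameBuffer, i) {
  zip.file('frame' + i + '.png', frameBuffer);
});

var archive = zip.generate({ type: 'blob', compression: 'STORE' });
saveAs(archive, 'frames.zip'); // saveAs from FileSaver.js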
I ended up modifying FileSaver.js (see "EDIT 2" in the original post).