Is there some performance benefit using base64 encoded images? - javascript

For a small image, what is the benefit (if any) in loading time of using a base64-encoded image in a JavaScript file (or in a plain HTML file)?
$(document).ready(function(){
    var imgsrc = "../images/icon.png";
    var img64 = "P/iaVYUy94mcZxqpf9cfCwtPdXVmBfD49NHxwMraWV/iJErLmNwAGT3//w3NB";
    $('img.icon').attr('src', imgsrc); // Type 1
    $('img.icon').attr('src', 'data:image/png;base64,' + img64); // Type 2 base64
});

The benefit is that you have to make one less HTTP request, since the image is "included" in a file you have made a request for anyway. Quantifying that depends on a whole lot of parameters such as caching, image size, network speed, and latency, so the only way is to measure (and the actual measurement would certainly not apply to everyone everywhere).
I should mention that another common approach to minimizing the number of HTTP requests is using CSS sprites to put many images into one file. This is arguably an even more efficient approach, since it also results in fewer bytes being transferred (base64 bloats the byte size by a factor of about 1.33).
Of course, you do end up paying a price for this: decreased convenience of modifying your graphics assets.
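To put a number on that 1.33 factor for your own assets, here is a minimal Node.js sketch; "icon.png" is a placeholder path, not a file from the question.
const fs = require('fs');

const raw = fs.readFileSync('icon.png');       // the binary image
const encoded = raw.toString('base64');        // what you would inline in the page

console.log('raw bytes:    ', raw.length);
console.log('base64 chars: ', encoded.length);
console.log('overhead:     ', (encoded.length / raw.length).toFixed(2)); // ~1.33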

Without inlining, you need to make multiple server requests. Let's say you download a contrived bit of HTML such as:
<img src="bar.jpg" />
You already needed to make a request to get that HTML: a TCP/IP connection was created and negotiated, the HTML was downloaded, and the connection was closed. This happens for every file you download.
So off your browser goes to create a new connection and download that jpg, even though its entire contents may be no more than a tiny blob like P/iaVYUy94mcZxqpf9cfCwtPdXVmBfD49NHxwMraWV/iJErLmNwAGT3//w3NB
The time to transfer that tiny bit of text was massive, not because of the file download, but simply because of the negotiation to get to the download part.
That's a lot of work for one image, so you can inline the image with base64 encoding. This doesn't work in legacy browsers, mind you, only modern ones.
The same idea behind base64 inline data is why we've done things like the Closure Compiler (trading download size against execution time) and CSS sprites (get as much data from one request as we can, without being too slow).
There's other uses for base64 inline data, but your question was about performance.
Be careful not to conclude that HTTP overhead is so massive that you should only ever make one request; that's just silly. You don't want to go overboard and inline all the things, just really trivial bits. It's not something you should be using in a lot of places. Separation of concerns is good, so don't start abusing this because you think your pages will be faster (they'll actually be slower, because the download of that single file becomes massive, and your page won't start pre-rendering until it's done).
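As a sketch of that "only inline the really trivial bits" advice, here is a hypothetical Node.js build step that inlines an image as a data URI only when it is below a size threshold; the 2 KB cutoff and the paths are illustrative assumptions, not recommendations from the answer.
const fs = require('fs');

const INLINE_LIMIT = 2 * 1024; // bytes; tune by measuring, as the answer says

function imgTag(path, publicUrl) {
  const raw = fs.readFileSync(path);
  if (raw.length <= INLINE_LIMIT) {
    // small asset: spend ~33% more bytes to save a round trip
    return '<img src="data:image/png;base64,' + raw.toString('base64') + '" />';
  }
  // larger asset: keep it as a separate, cacheable request
  return '<img src="' + publicUrl + '" />';
}

console.log(imgTag('../images/icon.png', '../images/icon.png'));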

It saves you a request to the server.
When you reference an image through the src attribute, the browser loads the page and then makes an additional request to fetch the image.
When you use a base64-encoded image, that extra round trip is avoided.

Related

"Streaming" compression/decompression by chunk

I'm working on a JavaScript compression library. I've already created a relatively fast DEFLATE compressor and decompressor, but they require the data to be fully loaded in memory prior to use. I don't think adding streaming support for the compression should be too difficult; I can just compress whatever data is available into a set of full blocks, append the result to the output stream, and avoid setting the BFINAL marker until the final chunk is passed. However, an issue arises with decompression streams.
Since my code currently needs to read a full chunk to generate an output and does not preserve state, the best I can do is guess how long a chunk is and hope that I reach the end, because if I don't, I've wasted multiple CPU cycles reading the headers, Huffman codes, and length/literal and distance codes of the unfinished block, and I'll have to do that all over again. This seems to be an awful way of solving the problem, and I'm wondering if there's any other way to do this that doesn't involve rewriting my code to preserve state.
Current decompression code
Nope, you'll need to preserve state. Or be prepared to read the entire stream into memory. There is nothing that prevents a deflate stream from consisting of a single long deflate block.
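If you can rely on the platform instead of your own inflate code, the built-in DecompressionStream (modern browsers, recent Node versions) already preserves inflate state across chunks. A minimal sketch, assuming the input is an array of Uint8Array chunks of raw DEFLATE data; note that 'deflate-raw' (no zlib wrapper) is newer than 'deflate'/'gzip', so check support first.
async function inflateChunks(chunks) {
  const ds = new DecompressionStream('deflate-raw');
  const writer = ds.writable.getWriter();
  const reader = ds.readable.getReader();

  const output = [];
  const drain = (async () => {
    for (;;) {
      const { value, done } = await reader.read();
      if (done) break;
      output.push(value); // decompressed Uint8Array pieces, in order
    }
  })();

  for (const chunk of chunks) {
    await writer.write(chunk); // each chunk is a Uint8Array of compressed bytes
  }
  await writer.close();
  await drain;
  return output;
}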

how to solve data loading issue in Chrome

All:
I wonder how I can make Chrome handle about 4GB of data loaded into it? My use case is:
The front end starts and tries to download a 3GB JSON file and run some calculations on it, but Chrome always crashes.
Any solution for this? Thanks
When you work with large data, the typical optimization rule is:
Don't read all the data at once, and don't save all the data at once.
If your code allows you to perform the calculations step by step, split your JSON into small parts (for example, 50MB each), as in the sketch below.
Of course, this works more slowly, but the approach keeps memory usage bounded.
This optimization rule is useful not only for JavaScript in the browser, but for various languages and platforms.
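A minimal sketch of that idea, assuming the server supports HTTP Range requests and the data is newline-delimited JSON (one record per line); the URL, the 50MB chunk size, and handleRecord() are illustrative assumptions.
async function processInChunks(url, chunkSize = 50 * 1024 * 1024) {
  let offset = 0;
  let remainder = '';
  for (;;) {
    const res = await fetch(url, {
      headers: { Range: 'bytes=' + offset + '-' + (offset + chunkSize - 1) },
    });
    if (res.status === 416) break;               // requested range is past the end
    const text = remainder + await res.text();   // assumes ASCII-safe chunk boundaries
    const lines = text.split('\n');
    remainder = lines.pop();                     // keep the possibly partial last line
    for (const line of lines) {
      if (line.trim()) handleRecord(JSON.parse(line)); // process, then let it be GC'd
    }
    offset += chunkSize;
    if (res.status !== 206) break;               // server ignored Range and sent everything
  }
  if (remainder.trim()) handleRecord(JSON.parse(remainder));
}

function handleRecord(record) {
  // placeholder: do the real calculation here; the point is never holding all 3GB at once
}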

PHP - file_get_contents() reading chunks gets slower with time

I'm making some tests to store large files locally with IndexedDb API, and I'm using PHP with JSON (and AJAX on Javascript's side) to receive the file data.
Actually, I'm trying to get some videos and, to do so, I use the following PHP code:
$content_ar['content'] = base64_encode(file_get_contents("../video_src.mp4", false, NULL, $bytes_from, $package_size));
return json_encode($content_ar);
I know base64_encode will deliver about 1/3 more data than the original, but that's not the problem right now, as it's the only way I know to retrieve binary data without losing it on the way.
As you can see, I specify from which byte it has to start to read and how many of them I want to retrieve. So, on my JS side, I know how much of the file I have already stored and I ask the script to get me from actual_size to actual_size + $package_size bytes.
What I'm seeing is that the script seems to run more slowly as time goes by, depending on the file size. I'm trying to understand what happens there.
I've read that file_get_contents() stores the file contents in memory, so with big files it could be a problem (that's why I'm reading it in chunks).
But seeing that it gets slower with big files (and over time), could it be that it's still loading the whole file into memory and then delivering the chunk I asked for? As if it loads everything and then returns only the part I requested?
Or is it just reading everything up to $bytes_from + $package_size (which would explain why it gets slower over time, as that offset increases)?
If it's any of the above, is there a way to make it run more efficiently and improve performance? Maybe I have to do something before or after to free memory?
EDIT:
I've made a screenshot showing the difference (in ms) between the moment I make the call to get the file bytes I need and the moment I receive the AJAX response (before I do anything with the received data, so JavaScript has no impact on the performance). Here it is:
As you can see, it's increasing with every call.
I think the problem is the time it takes to get to the initial byte I need. It does not load the whole file into memory, but seeking to the first byte to read is slow, so as the starting offset increases, each call takes more time.
EDIT 2:
Could it have something to do with the fact that I'm JSON-encoding the base64 content? I've been running some performance tests, and I've seen that setting $content_ar['content'] = strlen(base64_encode(file...)) completes in far less time (when, theoretically, it's doing the same work).
However, if that's the case, I still cannot understand why the slowness increases over time. Encoding the same number of bytes should take the same amount of time, shouldn't it?
Thank you so much for your help!
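A minimal client-side sketch, assuming your server can serve video_src.mp4 with HTTP Range requests: fetching each chunk as a Blob avoids the base64 + JSON step entirely, and a Blob can go straight into IndexedDB. The saveChunk() helper, the URL, and the sizes are hypothetical.
async function downloadChunk(url, from, size) {
  const res = await fetch(url, {
    headers: { Range: 'bytes=' + from + '-' + (from + size - 1) },
  });
  return res.blob(); // binary data, no encoding overhead
}

async function downloadAll(url, totalSize, packageSize) {
  for (let from = 0; from < totalSize; from += packageSize) {
    const chunk = await downloadChunk(url, from, packageSize);
    await saveChunk(from, chunk); // hypothetical wrapper around an IndexedDB put()
  }
}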

How do you send and receive the same image between the client and the server?

I'm trying to implement a bandwidth test, and it looks like the most conventional way to do this is basically to transmit one or more images back and forth between a client and a server and see what the upload and download times are. In fact, this answer already covers the idea of getting a download time.
At this point though, I'm not too sure how to make the process go both ways, with or without using the other answer. Even after adding debugging statements, I haven't found where the picture's data is stored in the other answer's code. And if I try to start from a clean slate, a lot of the API information I'm finding on the Internet / Stack Overflow about sending images back and forth has very little explanation or elaboration, so I'm not able to use it to put the two together. Furthermore, some experiments I have put together, which sometimes involved other platforms, seemed to really throttle bandwidth usage, as well as scale the delay improperly with the images' sizes.
Using JavaScript, how do you transmit the same image back and forth between the client and server in such a way that you can accurately use the delay and the image's size to measure bandwidth? How do you make this work both ways with the same image, without throttling, and without interaction from the user?
EDIT
I could try posting things I've tried, but it's hard for it to be meaningful. A lot of it was in Flash. When I started using JavaScript, I began to experiment a little along these lines:
$.get('http://ip address/test.php?filename=XXX&data=LONG STRING OF DATA REPRESENTING THE DATA TO BE SAVED, PROBABLY LESS THAN 2K IN SIZE, AND ALSO YOU SHOULD ESCAPE OR AT LEAST URI-ENCODE IT', function(data) {
alert('here')
eval(data);
});
The PHP file being:
<?php
echo "response=here";
?>
And I used the PHP file both for Flash and for JavaScript. I also used Adobe Media Server with Flash. But going from a 1MB file to a 32MB file while using Flash/PHP, Flash would only scale the delay by 10 times, nowhere near 32. It also seemed to throttle bandwidth usage at least when paired with the AMS, and maybe even when it was paired with the PHP file.
I was about to convert the JavaScript code to pass the actual image in to the PHP file...but I can't get to it. Even when I do things like:
for (var s in download) {
alert(s + ": " + download[s]);
}
download being the object that downloaded the image in the JavaScript (see the linked answer for the code), I'm not seeing anything useful. download.children.length is 0 and so on. I'm also reluctant to trust that the results aren't throttling bandwidth usage, like the Flash experiments did, without further confirmation; maybe the image has to be passed in using one type of API call or another to get it to really work right?
In essence, I'm really looking for good API information. Other stuff I saw just wasn't elaborate enough to connect the dots with.
2ND EDIT
One of the pitfalls I've run into is using POST to download the images. I'm running into a lot of difficulty in getting IIS 7 to allow POST to download arbitrary file types (namely jpgs) with "unusual" binary characters and allow them to be more than 2MB in size.
Why don't you send some text using $.post?
E.g.:
Generate some big text:
var someText = '0123456789';
for (var i = 0; i <= 10000; i++)
{
    someText += '0123456789'; // builds roughly 100KB of payload
}
then post it to the server:
var start = new Date();
$.post('yourUrl', {data:someText}, function(){
var end = new Date();
//this will show the bytes per second upload bandwidth
alert(someText.length / ((end - start)/1000));
});
To make the result more reliable, you can run this, for example, 10 times and take the average value.
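For the download direction, a similar sketch along the same lines: fetch a file of known size from your own server and time it. The '/testfile.bin' path and its size are illustrative; the file should be big enough that latency doesn't dominate the measurement, and the cache-buster query string keeps the browser from serving it from cache.
function measureDownload(url, sizeInBytes, done) {
  var start = new Date();
  $.get(url + '?nocache=' + start.getTime(), function () {
    var end = new Date();
    // bytes per second download bandwidth
    done(sizeInBytes / ((end - start) / 1000));
  });
}

measureDownload('/testfile.bin', 5 * 1024 * 1024, function (bps) {
  alert(bps);
});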

Do spaces/comments slow Javascript down?

I was wondering, do whitespace and comments slow down JavaScript? I'm doing a brute-force attack which takes some time (30 seconds). Removing whitespace does not show a significant increase in speed, but I figure the browser still has to parse more.
So, is it of any use to remove unnecessary whitespace and comments to speed the whole thing up?
People usually use minifiers to reduce the SIZE of the script, to improve download speed, rather than to make any difference in parsing speed.
Whitespace and comments have little effect on how long it takes a browser to execute the script; the parser does need to skip over whitespace and comments, but with current computing power that cost is so minute it would be impossible to notice any impact.
SIZE however is still important even with the large bandwidth available in our broadband world.
Whitespaces and comments increase the size of the JavaScript file, which slows down the actual downloading of the file from the server - minification is the process of stripping unnecessary characters from a JavaScript file to make it smaller and easier to download.
However, since you mention a brute force attack, the bottleneck is probably not the download. Try using a profiler to find what slows you down.
There is always a point in minifying, combining and gzipping your assets, to ease server load.
Minifying is the act you refer to, of stripping away unnecessary whitespace and comments, to make the download speed smaller.
Combining will most likely show an even greater increase in page rendering speed; it is the act of merging all your javascript files into one, and all your css files into one (it can also be done for most images, but that task requires some more work). This is done to reduce the number of requests the browser has to make towards your server in order to display the page.
GZipping is the act of further compressing the data, in a zipped format, for the browsers that indicate they'll accept such data. This further reduces size, but adds some extra workload at both ends. You're likely to see a net gain from it.
Depending on what environment you're working in, there are different components that'll help you with this, that usually covers all of the above in one go.
The time your code takes to download from the server has a direct effect on how long the page takes to render. JavaScript is blocking, meaning that a JS block will prevent any further rendering until the block has executed entirely. As such, where you put your javascript files (i.e. at which point in the rendering process they'll be requested), how many requests it takes for them to be completely downloaded, and how much data there is to download, will have an impact on your page load as the user perceives it.
Once the browser has parsed your code, be it javascript, css or html, it'll have created internal representations of the part it needs to keep remembering, and the actual formatting will no longer affect it.
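As a rough illustration of the size argument (not of execution speed), here is a small Node.js sketch comparing a script's raw size with its gzipped size; 'app.js' is a placeholder, and minification itself would be a separate step run before this.
const fs = require('fs');
const zlib = require('zlib');

const source = fs.readFileSync('app.js');
const gzipped = zlib.gzipSync(source);

console.log('raw bytes:    ', source.length);
console.log('gzipped bytes:', gzipped.length);
// Execution speed is unchanged either way; only the transfer size shrinks.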
I don't think whitespace in JS code slows down its execution. As far as I understand, a JavaScript interpreter strips all comments and redundant whitespace before processing. It can influence download time, and thus the loading time of a web page, however.
Take a look here for a bit of extra information.
It has little to no impact on actual processing speed, however...
Smaller size => less bandwidth => less costs => ??? => profit!
