Do spaces/comments slow JavaScript down?

I was wondering, do whitespace and comments slow down JavaScript? I'm doing a brute force attack which takes some time (30 seconds). Removing whitespace does not show a significant increase in speed, but I assume the browser still has to parse more.
So, is it of any use to remove unnecessary whitespace and comments to speed the whole thing up?

People usually use minifiers to reduce the SIZE of the script, to improve download speed, rather than to make any difference in how fast the script is parsed.
Whitespace and comments will have little effect on how long it takes a browser to execute: the parser does need to check whether something is whitespace or a comment, but with current computing power the cost is so minute that it would be impossible to notice any impact.
SIZE however is still important even with the large bandwidth available in our broadband world.

Whitespaces and comments increase the size of the JavaScript file, which slows down the actual downloading of the file from the server - minification is the process of stripping unnecessary characters from a JavaScript file to make it smaller and easier to download.
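For illustration, here is roughly what a minifier does to a small, made-up function:
// Before minification: readable, but every character has to be downloaded.
function calculateTotalPrice(unitPrice, quantity, taxRate) {
    // apply tax to the subtotal
    var subtotal = unitPrice * quantity;
    return subtotal * (1 + taxRate);
}
// After minification: same behaviour, far fewer bytes on the wire.
// function calculateTotalPrice(n,t,r){return n*t*(1+r)}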
However, since you mention a brute force attack, the bottleneck is probably not the download. Try using a profiler to find what slows you down.

There is always a point in minifying, combining and gzipping your assets, to ease server load.
Minifying is the act you refer to, of stripping away unnecessary whitespace and comments to make the download smaller.
Combining will most likely show an even greater increase in page rendering speed; it is the act of merging all your javascript files into one, and all your css files into one (it can also be done for most images, but that task requires some more work). This is done to reduce the number of requests the browser has to make towards your server to be able to display the page.
GZipping is the act of further compressing the data, in a zipped format, to the browsers that indicate that they'll accept such data. This further reduces size, but adds some extra work load at both ends. You're likely to see a net gain from it.
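As a rough sketch of the gzip part, assuming a Node/Express setup with the compression middleware installed (adapt to whatever server you actually run):
// Sketch: gzip responses from a Node/Express server.
// Assumes the express and compression npm packages are installed.
var express = require('express');
var compression = require('compression');
var app = express();
app.use(compression());            // compress responses for clients that accept gzip
app.use(express.static('public')); // serve the minified, combined assets
app.listen(3000);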
Depending on what environment you're working in, there are different components that'll help you with this, and they usually cover all of the above in one go.
The time your code takes to download from the server has a direct effect on how long the page takes to render. JavaScript is blocking, meaning that a JS block will prevent any further rendering until the block has executed entirely. As such, where you put your javascript files (i.e. at which point in the rendering process they'll be requested), how many requests it takes for them to be completely downloaded, and how much data there is to download, will have an impact on your page load as it appears to the user.
Once the browser has parsed your code, be it javascript, css or html, it'll have created internal representations of the parts it needs to remember, and the actual formatting will no longer affect it.

I don't think whitespace in JS code slows down its execution. As far as I understand, a JavaScript interpreter strips all comments and redundant whitespace before processing. It can influence download time and thus the loading time of a web page, however.
Take a look here for a bit of extra information.

It has little to no impact on actual processing speed, however...
Smaller size => less bandwith => less costs => ??? => profit!

Related

Why does node.js suddenly use less memory?

I have a 25MB JSON file that I "require" when my app starts up. Initially, it seems that the node.js process takes up almost 200MB of memory.
But if I leave it running and come back to it, Activity Monitor reports that it is using only 9MB, which makes no sense at all! At the very least, it should be a few MB more, since even a simple node.js app that does almost nothing (acting like a server) uses 9MB.
The app seems to work fine - it is a server that provides search suggestions from a word list of 220,000 words.
Is Activity Monitor wrong?
Why is it using only 9MB, but initially used ~200MB when the application started up?
Since it's JavaScript, things that are no longer being used are removed by the garbage collector (GC), freeing memory. Everything (or many things) may have been loaded into memory at the start, and items that were no longer needed were then removed from memory by the GC. A process often uses more memory while it's working and releases some afterwards; for example, temporary data structures can be used during processing but are no longer needed when the process is done.
It's also possible that items in memory were swapped out and written to disk temporarily (and may be retrieved later). This swapping is done by your OS and tends to be used more on programs that reserve a lot of memory.
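Rather than relying on Activity Monitor alone, you can also ask the Node process itself what it is using; a small sketch using the standard process.memoryUsage() API:
// Log what the Node process itself reports, instead of trusting Activity Monitor.
// rss = resident set size; heapUsed = memory actually held by JS objects.
setInterval(function () {
    var mem = process.memoryUsage();
    console.log('rss: ' + (mem.rss / 1048576).toFixed(1) + ' MB, heapUsed: ' +
                (mem.heapUsed / 1048576).toFixed(1) + ' MB');
}, 10000);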
How much memory it takes to load the file depends on a number of factors.
What text encoding is being used to store the file? JavaScript uses UTF-16 internally, so if that's not what's being used on disk, the size may be different. If the file is in UTF-32, for example, then the in-memory UTF-16 version will be smaller unless it's full of astrals. If the file is in UTF-8, then things are reversed: the in-memory version will be larger unless it's full of astrals. But for now, let's just assume that they're about the same size, either because they use the same encoding or the pattern of astrals just happens to make the file sizes more or less the same.
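If you want to see the encoding effect concretely, Node's Buffer.byteLength makes the comparison easy (a quick illustration, not part of your app):
// Same strings, different encodings (run in Node).
var ascii  = 'hello';              // plain ASCII text
var astral = '\uD83D\uDE00';       // one astral-plane character (a surrogate pair)
console.log(Buffer.byteLength(ascii, 'utf8'));     // 5  bytes as UTF-8
console.log(Buffer.byteLength(ascii, 'utf16le'));  // 10 bytes as UTF-16
console.log(Buffer.byteLength(astral, 'utf8'));    // 4  bytes as UTF-8
console.log(Buffer.byteLength(astral, 'utf16le')); // 4  bytes as UTF-16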
You're right that it takes at least 25MB to load the file (assuming that encodings don't interfere). The semantics of the JSON API being what they are, you need to have the whole file in memory as a string, so the app will take up at least that much memory at that time. That doesn't count whatever the parser needs to run, so you need at least 34MB: 25 for the file, 9 for Node, and then whatever your particular app uses for itself.
But your app doesn't need all of that memory all the time. Depending on how you've written the app, you're probably destroying your references to the file at some point.
Because of the semantics of JSON, there's no way to avoid loading the whole file into memory, which takes 25MB because that's the size of the file. There's also no way to avoid taking up whatever memory the JSON parser needs to do its work and build the object.
But depending on how you've written the app, there probably comes a point when you no longer need that data. Either you exit the function that you used to load the file, or you assign that variable to something else, or any of a number of other possibilities. However it happens, JavaScript reclaims memory that's not being used anymore. This is called garbage collection, and it's popular among so-called "scripting languages" (though other programming languages can use it too).
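As a sketch of that idea (the file name and loading style are made up; you mention using require, which behaves similarly for this purpose), load the file inside a function so that only the parsed object survives:
// Sketch: keep the 25MB string local to a function so it becomes garbage
// as soon as the function returns; only the parsed object is kept.
var fs = require('fs');
function loadWordList() {
    var raw = fs.readFileSync('words.json', 'utf8'); // hypothetical file name
    return JSON.parse(raw);                          // raw is unreachable after return
}
var words = loadWordList();
// From here on nothing references the raw string, so the garbage collector
// is free to reclaim that memory whenever it runs.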
There's also the question of text representation versus in-memory representation. Strings require about the same amount of space in memory as on disk, unless you change the encoding, but Numbers and Booleans are another matter entirely. In JavaScript, all Numbers are 64-bit floating-point numbers, so if most of your numbers on disk are more than four characters long, then the in-memory representation will be smaller, possibly by quite a bit. Note that I said characters, not digits: it's true that digits are characters, but +, -, e, and . are characters too, so -1e0 takes up twice as much space as -1 when written as text, even though they represent the same value in memory. As another example, 3.14 takes up as much space as 1000 as text (and they happen to take up the same amount of space in memory: 64 bits each). But -0.00000001 and 100000000 take up much less space in memory than on disk, because the in-memory representation is smaller. Booleans can be even smaller: different engines store them in different ways, but you could theoretically do it in as little as one bit. That's a far cry from the 8 bytes it takes to store "true", or 10 to store "false".
So if your data is mostly about Numbers and Booleans, then the in-memory representation stands to get a lot smaller. If it's mostly Strings, then not so much.
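A trivial way to see the Number point is to compare the text length of a few literals with the fixed 8 bytes a Number occupies (purely illustrative):
// Compare a value's length as text with the 8 bytes it takes as a Number.
var samples = ['-1e0', '-1', '3.14', '1000', '-0.00000001', '100000000'];
samples.forEach(function (text) {
    console.log(text + ': ' + text.length + ' chars as text, parses to ' +
                Number(text) + ' (always 8 bytes as a Number in memory)');
});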

Does obfuscated javascript slow a browser down?

I have a script which is obfuscated and begins like this:
var _0xfb0b=["\x48\x2E\x31\x36\x28\x22\x4B\x2E
...it continues like that for more than 435,000 chars (the file is 425kB), and it ends like this:
while(_0x8b47x3--){if(_0x8b47x4[_0x8b47x3]){_0x8b47x1=_0x8b47x1[_0xfb0b[8]](
new RegExp(_0xfb0b[6]+_0x8b47x5(_0x8b47x3)+_0xfb0b[6],_0xfb0b[7]),
_0x8b47x4[_0x8b47x3]);} ;} ;return _0x8b47x1;}
(_0xfb0b[0],62,2263,_0xfb0b[3][_0xfb0b[2]](_0xfb0b[1])));
My question is: isn't it way harder for a browser to execute that compared to a non-obfuscated script, and if so, how much time am I probably losing because of the obfuscation? Especially older browsers like IE6, which are really not that performant in JS, must spend a lot more time on that, right?
It certainly does slow down the browser more significantly on older browsers (specifically when initializing), but it definitely slows it down even afterwards. I had a heavily obfuscated file that took about 1.2 seconds to initialize, unobfuscated in the same browser and PC was about 0.2 seconds, so, significant.
It depends on what the obfuscator does.
If it primarily simply renames identifiers, I would expect it to have little impact on performance unless the identifier names it used were artificially long.
If it scrambles control or data flow, it could have arbitrary impact on code execution.
Some control flow scrambling can be done with only constant overhead.
You'll have to investigate the method of obfuscation to know the answer to this. Might be easier to just measure the difference.
The obfuscation you're using seems to just store all string constants in one array and reference them from the places in the code where they originally appeared. The strings are obfuscated into the array but still come out as strings. (Try console.log(_0xfb0b) to see what I mean.)
It does, definitely, slow down the code INITIALIZATION. However, once that array has been initialized, the impact on the script is negligible.
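To illustrate the pattern, here is a made-up miniature of that kind of obfuscation (not your actual script):
// Readable version:
//   document.getElementById('greeting').innerHTML = 'Hello';
// Obfuscated version: every string literal is pulled into one array...
var _0xabc = ['getElementById', 'greeting', 'innerHTML', 'Hello'];
// ...and looked up by index wherever it was originally used.
document[_0xabc[0]](_0xabc[1])[_0xabc[2]] = _0xabc[3];
// Building the array costs something once, at initialization; after that the
// lookups are plain array indexing, which is cheap.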

How to test performance of website parts?

I've been given a job to test a website that is struggling with its performance. In detail, I should pick out different parts of the document and check out their waiting->load->finished states. Since I'm familiar with Firebug, I've tested many sites as a whole. But now I need to know when the rendering of a particular DIV starts, when it is finished, and how long it waited beforehand. The goal is to find out which part of the website took how long until it was painted.
I doubt you'll be able to measure individual parts of a page they way you want. I would approach this by removing parts of the page, measuring the subsetted page, and inferring from those measurements which parts are slowest.
Keep in mind that this sort of logic may not be correct. For example, you may have a page with two parts. You may measure the two parts independently by creating subsetted pages. The times of the two parts added together will not equal the time for the total. And one part seeming slower than the other doesn't mean that when combined, the "slow" part is responsible for the bulk of the time. Browsers are very complicated machines, and they don't always operate the way you imagine.
AFAIK, the speed of painting a div is not something you should worry about. If there is some server-side language involved, then I would suggest assigning a variable to the current time before a portion starts and comparing it to the time right after the portion ends. Subtracting them gives you the time it took to work that portion out.
If there is javascript involved, then I would suggest Chrome dev tools' Timeline panel. It shows everything, from css recalculation and painting of the style/div to ajax/(if used) db queries.
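For the timing suggestion above, a minimal sketch using console.time (renderSpecialDiv is a hypothetical stand-in for the part of the page you care about):
// Time one portion of the work with console.time / console.timeEnd.
console.time('special-div');     // start the timer before the portion
renderSpecialDiv();              // hypothetical function that builds the DIV
console.timeEnd('special-div');  // logs the elapsed milliseconds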
As you are familiar with Firebug, you can use the HttpWatch tool to record the exact request and response times of all the HTTP requests made by your browser.
So when the rendering of a particular DIV starts, this tool will capture the request and response times involved.
http://www.httpwatch.com/
All the best!

Is there any line limitation in javascript?

Is there any limitation on the number of lines a JavaScript file can have?
10^121 / 8
Since 10^121 is the maximum number of bits in the universe, and presumably you would use an 8-bit encoding for your JavaScript, then even if the whole universe were filled with nothing but your blank JavaScript file, there could be no more than 1.25e120 lines in it.
TL;DR No there is no limit.
Nothing official, but the larger the file, the more the browser needs to download, parse and execute.
In fact, a common practice is to join multiple javascript files into one so only one browser connection gets tied up with downloading javascript. This is normally part of minification (which can include other steps such as renaming variables to short ones, removing whitespace etc...).
I don't think there's an actual line number limitation for a javascript file, but obviously the number of lines and amount of javascript code you have can greatly affect performance.
So, the fact that you're asking this at all might be a reason to optimize and examine the code itself. Perhaps splitting out certain code functions that aren't needed on every page into different files could ease the load.
Actually there is: if you are running in IE, you will find that once a script executes more than 5,000,000 statements, IE thinks it may be stuck in an endless loop and a popup will prompt the user to either kill the script or continue...
Nope :-)
The only limitation is the memory of the computer it is running on, or the software running the Javascript. There are no such limitations in the design of Javascript.
However...
If you have tens of thousands of lines of code, you may wish to evaluate your design and refactor a lot of it as it can be a sign of badly designed code.

How can I estimate browser's Javascript capabilities?

I serve a web page which makes the client do quite a lot of Javascript work as soon as it hits. The amount of work is proportional to the amount of content, which varies a lot.
In cases where there is a huge amount of content, the work can take so long that clients will issue their users with one of those "unresponsive script - do you want to cancel it?" messages. In cases with hardly any content, the work is over in the blink of an eye.
I have included a feature where, in cases where the content is larger than some value X, I include a "this may take a while" message to the user which is displayed before the hard work starts.
The trouble is choosing a good value for X since, for this particular page, Chrome is so very much faster than Firefox which is faster than IE. I'd like to warn all users when appropriate, but avoid putting the message up when it's only going to be there for 100ms since this is distracting. In other words, I'd like the value for X to also depend on the browser's Javascript capabilities.
So does anyone have a good way of figuring out a browser's capabilities? I'm currently considering just explicitly going off what the browser is, but that seems hacky, and there are other factors involved I guess.
If the data is relatively homogeneous, one method might be to have a helper function that checks how long a particular subset of the data has taken to go through, and make a conservative estimate of how long the entire set will take.
From there, decide whether to display the message or not.
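A rough sketch of that idea (items, processItem and the threshold are placeholders for your own data and work; note the sample items get processed during the estimate):
// Process a small sample, extrapolate to the full set, and warn if the
// estimate is above the threshold (e.g. 1000 ms).
function shouldWarn(items, processItem, thresholdMs) {
    var sampleSize = Math.min(50, items.length);
    var start = Date.now();
    for (var i = 0; i < sampleSize; i++) {
        processItem(items[i]);
    }
    var elapsed = Date.now() - start;
    var estimatedTotal = (elapsed / sampleSize) * items.length;
    return estimatedTotal > thresholdMs;
}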
This may not be where you want to go, but do you have a good idea why the javascript can take so long? Is it downloading a bunch of content over the wire or is the actual formatting/churning on the browser the slow part?
You might even be able to do something incrementally so that, while the whole shebang takes a long time, users see content 'build' and thus don't have to be warned.
Why not just let the user decide what X is? (e.g. like those "display 10 | 20 | 50 | 100" per page choosers) Then you don't have to do any measurement/guesswork at all; you can let them make the optimal latency / information content tradeoff.
This is somewhat misleading; usually when one discusses a browser's JS capabilities, it's referring to the actual abilities of the browser, such as does it support native XMLHTTP? Does it support ActiveX? etc.
Regardless, there is no way to reliably deduce the processing power or speed of a browser. One might think that you could run some simple stress-tests, compute the result, and compare it to a list of past performances to see where the current user's browser ranks, and possibly use this information to arrive at an estimated time. The problem here is that these calculations can be influenced by activity in the browser or elsewhere on the OS; for instance, you run your profiling script, the user's AV scanner starts up because it's 5pm, and what normally might take 2s takes 20s.
One thing to ask yourself is: does this processing have to take place right NOW? As n8wrl and Beska alluded to, you might need to code your own method whereby you break up the work into chunks and then operate on them one at a time using something like setTimeout(). This gives the engine time to 'breathe' and thus hopefully avoids the 'unresponsive script' warnings. Each of these chunks could also be used to update a progress bar (or similar) that gives the user some indication that work is being done.
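A minimal sketch of that chunking approach (items, processItem and updateProgressBar are placeholders):
// Work through the items in chunks, yielding between chunks so the browser
// can repaint and the "unresponsive script" dialog never appears.
function processInChunks(items, processItem, chunkSize) {
    var index = 0;
    function nextChunk() {
        var end = Math.min(index + chunkSize, items.length);
        for (; index < end; index++) {
            processItem(items[index]);
        }
        updateProgressBar(index / items.length); // placeholder progress UI
        if (index < items.length) {
            setTimeout(nextChunk, 0);            // yield to the browser
        }
    }
    nextChunk();
}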
Or you could take the approach GMail does - they flash a very small, red "Loading..." text area in the corner of the window. Sometimes it's there for a few seconds, sometimes it's not there long enough to read it. Other times it blinks on and off several times. But you know when it's doing something.
Lastly, also on the point of incrementally 'building' the page, you could inspect the source of Chrome's new tab page. Note: you can't view this using "view source"; instead, choose the "javascript console" option (while on the new tab page) and then look at the HTML source there. There should be a comment that explains their general strategy, like so:
<!-- This page is optimized for perceived performance. Our enemies are the time
taken for the backend to generate our data, and the time taken to parse
and render the starting HTML/CSS content of the page. This page is
designed to let Chrome do both of those things in parallel.
1. Defines temporary content callback functions
2. Fires off requests for content (these can come back 20-150ms later)
3. Defines basic functions (handlers)
4. Renders a fast-parse hard-coded version of itself (this can take 20-50ms)
5. Defines the full content-rendering functions
If the requests for content come back before the content-rendering functions
are defined, the data is held until those functions are defined. -->
Not sure if that helps, but I think it does give insight into how some of the big players handle challenges such as this.
