Should I pre-allocate an array or grow as I go - javascript

I have been trying to understand JavaScript better over the last few months, and it has definitely made me want to get a Computer Science degree. Anyway, I have sadly run into two related but conflicting pieces of advice about JavaScript.
According to this article, one should not prefill an array, but instead grow it as needed.
That sounded great and all, until I ran into another article on Wikipedia stating that doing so would be slow.
I am thinking of putting together some games from scratch, and being the code vegan that I am, I plan on putting performance at the forefront of my efforts. So is it better to have an array that grows or one that is pre-allocated? In HTML5 game development it is advised to use things such as object pools, which I tend to create using arrays.

Rough guidelines:
A pre-allocated array is better. If you change its size with, for instance, push or pop, the JS engine needs to do a lot of additional work.
Even an oversized array is far better than one whose size changes often, and you should operate on fixed-size arrays whenever you can.
Here you can find more information regarding this.
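As an illustration, here is a minimal sketch of the two fill strategies (the values written are arbitrary, just to give every slot content):

```javascript
// Growing as you go: the engine may have to reallocate and copy
// as the array expands past its current capacity.
function fillByPush(n) {
  const arr = [];
  for (let i = 0; i < n; i++) arr.push(i * 2);
  return arr;
}

// Pre-allocating: the length is known up front. Note that new Array(n)
// starts out as a "holey" array in some engines until every slot is
// written, so fill it completely before using it.
function fillPreallocated(n) {
  const arr = new Array(n);
  for (let i = 0; i < n; i++) arr[i] = i * 2;
  return arr;
}
```

Both produce the same result; whether the pre-allocated version is actually faster depends on the engine, so profile with your real workload.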
Array performance depends heavily on the JS engine implementation:
Because JavaScript is a specification, not an implementation, different browsers ship different JavaScript engines. These engines are updated regularly to improve speed, and each takes a different approach to optimization.
When optimizing, there are often trade-offs between the speed of certain features, including how arrays are created and manipulated. This means the guidelines above might not be 100% accurate, because one JS engine can have behaviour that another lacks, causing differences in the speed of array creation and manipulation techniques.
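Since the question mentions object pools: a minimal pool over a fixed-size array might look like the sketch below (the `Pool` API and the `factory` callback are invented for illustration). The backing array never changes size, which fits the fixed-size advice above.

```javascript
// A fixed-size object pool: objects are recycled instead of being
// allocated and garbage-collected during gameplay.
class Pool {
  constructor(size, factory) {
    this.items = new Array(size);
    for (let i = 0; i < size; i++) this.items[i] = factory();
    this.free = size; // number of items still available
  }
  acquire() {
    // Hand out an item, or null if the pool is exhausted.
    return this.free > 0 ? this.items[--this.free] : null;
  }
  release(item) {
    // Return an item to the pool for reuse.
    this.items[this.free++] = item;
  }
}
```

A game would acquire a bullet or particle each frame and release it when it leaves the screen, so the array backing the pool stays the same size for the whole session.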

Does the linqjs library have performance advantages over using a for loop?

I'm making an HTML5 canvas game where I have a really big multidimensional array, and I need to select a couple of things out of it which I'm going to use further or remove.
Now I see three different ways to do this:
Using a for loop and iterating the whole array.
Using the Array.prototype.filter() method.
Using the https://linqjs.codeplex.com library.
Now, I want to know which method is the fastest.
(Please correct me if I made any English mistakes; it's not my native language.)
Performance will differ between browsers and JavaScript engines, so the best thing to do is actually use some performance tools and profile them all. Most modern browsers have excellent profiling tools built into the developer console.
That said, certain javascript engines optimize for different situations. Many of them have done a lot of tuning to make their loops run quickly and, for the most part, for loops can be seen as reasonably predictable. A for loop also has the advantage that you can break out of it early if, say, you only need to find a single item and that item happens to occur sooner in the array.
A filter, on the other hand, is even more predictable, since the browser knows it will need to inspect every element in the array. This allows the engine to make certain assumptions and break up the work differently, possibly even parallelize it, though most engines won't do that automatically.
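To make the difference concrete, here is a sketch with invented game data: the for loop can stop at the first match, while filter always visits every element and collects all matches.

```javascript
const entities = [
  { id: 1, hp: 10 },
  { id: 2, hp: 0 },
  { id: 3, hp: 5 },
  { id: 4, hp: 0 },
];

// For loop: can return as soon as one match is found.
function findFirstDead(list) {
  for (let i = 0; i < list.length; i++) {
    if (list[i].hp <= 0) return list[i];
  }
  return null;
}

// filter: inspects every element and returns all matches.
const allDead = entities.filter((e) => e.hp <= 0);
```

If you only ever need the first match, the early-exit loop does strictly less work; if you need all matches, the two approaches visit the same elements and the difference comes down to engine internals.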
As for LINQjs, the performance is going to be equal at best, and more likely worse. The reason is that the library generally does more work than running a loop or calling filter; the additional logic, conversions, and so on add overhead.
The reason someone might choose LINQjs is not performance but usability: they want a familiar syntax for data transformations and consider their development time more valuable than the software's execution time.

Is there a tangible performance benefit to narrowing the scope of document.getElementsByClassName?

I have a web application that makes heavy use of document searches and I'm wondering if there is a performance benefit to justify converting document.getElementsByClassName to a narrower search such as $container.getElementsByClassName?
I know there won't be a significant advantage but I'm wondering if there's any tangible benefit at all.
As a note, I'm not using any libraries or dependencies so everything is native js.
I also understand that it will vary based upon the number of DOM nodes and a lot of other factors as well. Let's assume the average web application.
Thanks!
If you are referencing the same class element over and over again, then caching the container would be beneficial performance-wise. If you only need to interact with that class once, then I wouldn't say there will be any real performance benefit.
From my experience - yes, there is, especially on large pages.
Another benefit from narrowing the scope would be improved readability. It's basically the same thing as "global vs local variables".
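The narrowing-plus-caching pattern can be sketched as below. In a real page `container` would come from something like document.getElementById; here a tiny stub stands in for a DOM element so the sketch runs anywhere (note the stub returns a plain array, whereas the real method returns a live HTMLCollection).

```javascript
// Stand-in for a DOM container element with three children.
const container = {
  children: [
    { className: 'score' },
    { className: 'hud' },
    { className: 'hud' },
  ],
  getElementsByClassName(cls) {
    return this.children.filter((c) => c.className === cls);
  },
};

// Narrow the search to the container and cache the result, instead of
// calling document.getElementsByClassName on every access.
const hudElements = container.getElementsByClassName('hud');
```

The win comes from two things: the engine walks a smaller subtree, and repeated lookups are replaced by one cached reference.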

Removing jQuery for performance reasons justified?

I am on a new project and our job is to rewrite an e-commerce website that is having performance problems on mobile devices.
We are rewriting the JavaScript based on a more object-oriented/modular architecture, which I think is great! However, my team lead said we should remove all the jQuery calls and replace them with plain JavaScript such as domElem.querySelectorAll(query), which has better performance. I understand jQuery does some kind of caching in the background which can create memory issues.
I am a little sceptical of this, firstly because it seems like a case of 'premature optimization' (we should find the bottlenecks before we rewrite anything), and secondly because I haven't found anything saying that jQuery has significant performance problems.
The current website does have a lot of overlapping DOM branch queries, which I think creates a lot of redundancy. That is, there is too much querying happening, and in our new architectural approach we are restricting each object/module to fewer, more targeted DOM queries, which is great. That does need to be rewritten.
But whether or not we use domElem.querySelector(query) or $(domElem).find(query), I can't see there as being much of a difference. Is my thinking right?
Some tests are done here (check the other revisions as well). A good, detailed discussion of the pros and cons of using jQuery versus plain JavaScript can be found here.
I also want to point out that jQuery doesn't do any caching of selectors.
The thing we often forget, because we use JavaScript frameworks all the time, is that jQuery is not a framework.
Obviously, if you do the exact same one-operator action using the jQuery $ object and using a direct DOM method like getElementById, the latter will be noticeably faster, as jQuery itself is written in JavaScript and does a lot of work in the background.
However, nothing (except code readability) prevents you, as a developer, from combining jQuery with plain JavaScript: use plain JavaScript wherever possible, and reserve jQuery for the functions that provide complex functionality and would take time to write and optimize from scratch. There are a lot of those in jQuery: browser-independent CSS handling, object serialization, and lots of other useful things.
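For the simple operations, the mapping from jQuery to plain DOM code is usually direct. The sketch below lists a few common equivalents in comments and exercises the plain-DOM side on a tiny stub element, so it runs outside a browser:

```javascript
// Common one-operator jQuery calls and their native equivalents:
//   $('#id')         -> document.getElementById('id')
//   $(el).text(s)    -> el.textContent = s
//   $(el).hide()     -> el.style.display = 'none'

// Stand-in for a DOM element (a real one would come from the document).
const el = { textContent: '', style: {} };

el.textContent = 'hello'; // instead of $(el).text('hello')
el.style.display = 'none'; // instead of $(el).hide()
```

None of these native calls need cross-browser shims in modern browsers, which is exactly why they are the cases where dropping jQuery costs nothing.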
It depends on the application but usually performance troubles are related to badly-designed algorithms, not the use of jQuery.
In any case, if your application does a lot of DOM manipulation, it may be worthwhile to rewrite it using plain JavaScript and test. Keep the library; just don't use it for simple operations you can easily write without it.
If your application is heavily-reliant on jQuery functions with complex functionality, removing it is out of the question.
I myself use this combined approach: everything simple is written in plain JavaScript, with jQuery functions for the stuff that is difficult to implement.
Also, a good place to dig around if the app has performance troubles is the DOM manipulation. Those operations are very heavy compared to almost everything else in JavaScript. You may be able to cut down on time by rolling several operations into one, building finished objects with one constructor instead of creating empty ones and assigning properties one by one, and so on.
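The last point, building finished objects in one go, can be sketched in plain JavaScript (the sprite fields are invented for illustration):

```javascript
// Assigning properties one by one: most engines transition the object's
// hidden "shape" on every assignment.
function makeSpriteIncrementally() {
  const s = {};
  s.x = 0;
  s.y = 0;
  s.frame = 0;
  return s;
}

// One object literal: the final shape is known to the engine immediately.
function makeSpriteAtOnce() {
  return { x: 0, y: 0, frame: 0 };
}
```

Both return equivalent objects; the literal form simply gives the engine more to work with up front. As with everything here, whether the difference is measurable depends on the engine and the workload.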
Sorry, if the answer is a bit vague but it's difficult to be precise in this case without seeing the code and running tests.
Let me quote Uncle Bob on this discussion: "Architecture is about intent, we have made it about frameworks and details."
Premature optimizations need to be considered carefully:
They often result in architectural decisions that are not easily reversible.
They introduce code optimizations that are usually specific to the problems they solve, which makes the code less abstract and thus harder to maintain, and more complicated and thus prone to more bugs.
They tend to be prejudiced rather than objective, sometimes made without any real comparison to the alternatives.
The problems they try to solve tend to be overestimated, sometimes to the point of being non-existent.
I'm not a big expert on web development, but if possible you should always push this kind of decision as late as you can through separation of concerns and good abstraction.
For example, over the parts that generate the JavaScript code you can have an abstraction such as a JavaScriptWriter, and use different frameworks behind it. This way you can use jQuery at the beginning, test the system, and only then replace the parts you know to be inefficient.
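One way to sketch that idea (the JavaScriptWriter name comes from the answer; the interface and factory below are invented): callers depend only on a small facade, so a jQuery-backed implementation can later be swapped for a native one without touching them.

```javascript
// A native-DOM implementation of a tiny "writer" facade. A jQuery-backed
// twin would expose the same two methods, so either can be injected.
function makeNativeWriter(doc) {
  return {
    find: (selector) => Array.from(doc.querySelectorAll(selector)),
    setText: (el, text) => { el.textContent = text; },
  };
}
```

Because the rest of the code only ever calls `find` and `setText`, measuring and replacing the inefficient implementation becomes a local change rather than a rewrite.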

Testing and comparing performance of jQuery plugins

There are a ton of jQuery plugins out there, so when I find two or more plugins that do the same thing I want to ensure I choose the best one.
I generally go for file size as a first indicator of performance, but I was wondering what else I can do to test and compare the performance of these plugins.
Could I use jsperf.com and throw the plugins I want to compare in there? Would the results be a good indicator of performance?
Any other suggestions?
Thanks.
For measuring performance there's a great free tool: DynaTrace Ajax Edition 3 ( http://ajax.dynatrace.com/ajax/en/ ).
In your case you should look at the JavaScript part: http://ajax.dynatrace.com/ajax/en/content/c-javascript-dom-tracing.aspx
It gives you the ability to measure the performance of specific methods and so on. I've used it in the past to optimize a huge mega drop-down menu running in IE7, and it helped a lot!
File size is not a good indicator of runtime performance. If you know JavaScript very well, you could look at the code and form an opinion of how well written it is, but file size itself tells you little about how the code runs.
If performance really matters to you, then the only way to make a meaningful decision is to measure the exact operations you care about. jsPerf is one very useful tool for setting up a performance comparison between two different ways of accomplishing a task. But you have to be very careful when designing performance tests so that you are really measuring the right thing, and jsPerf can only measure some kinds of operations.
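A minimal harness for that kind of measurement might look like the sketch below. It only shows the shape of a fair comparison (same data, same work counted), not the warm-up handling and statistical care a tool like jsPerf applies:

```javascript
// Time `fn` over `iters` iterations and return elapsed milliseconds.
// Real benchmarks also need warm-up runs and repeated samples, since
// JIT compilation kicks in partway through the first runs.
function bench(fn, iters = 100000) {
  const start = Date.now();
  for (let i = 0; i < iters; i++) fn(i);
  return Date.now() - start;
}
```

Comparing `bench(pluginAWay)` against `bench(pluginBWay)` on the operation you actually perform in production is far more informative than any file-size heuristic.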
More specific advice would depend upon the specific plug-in and what types of operations it does that you care the most about.
For reference, jQuery may sometimes be the fastest and most convenient way to write code, but often isn't the fastest executing code for sections of code where speed really matters. For example, document.getElementById("test").value = "foo" is 5-7x faster than $("#test").val("foo") as illustrated here: http://jsperf.com/jquery-vs-plain-javascript.
Generally the best indicator of quality, aside from reading the code and testing it yourself, is the number of people using the plugin, along with the comments and reviews you read about it. File size is of little consequence unless it's extreme.
Read what others have said, check what the documentation says about known issues, and make an educated guess.
Sign up for Sauce Labs, which lets you test the most prominent browser/OS combinations, and see how the plugins perform in every browser you care about.
(A free account gives you 45 minutes of testing per month.)

Using multiple Javascript frameworks in a project?

Is it good or okay to have several frameworks in a project, or is it bad because it gets cluttered (= a mess) and the loading times maybe get longer? Does an extra 100 KB or so really matter? Or should you stick with one?
It's generally better to pick one thing and stick with it, for a number of reasons:
Fewer dependencies.
Lower complexity.
Easier to maintain.
Faster loading times.
No likelihood of dependency conflicts (i.e. jQuery can't conflict with your other Javascript framework if you only have one).
Lower debugging times.
It's true that an extra ~50 KB these days probably isn't going to kill anybody on a personal site or blog. The benefit comes when you scale to larger sizes and all those extra 50 KB hits start eating into your bottom line. But even on a small site, the savings to your sanity from figuring out problems quicker will easily be worth it.
That said, if you just need one part of a specific piece of a Javascript framework, the larger ones are often split into logical chunks, so that you can just take the piece you need rather than the entire framework. Most are rich enough these days that if framework X has a feature you want, but you're using framework Y, you can pretty much just map the source code for that feature directly into framework Y.
If you can load the JavaScript libraries from a shared repository (such as a CDN) where they will be cached on the first visit, then I don't really see a problem with it.
But for uniformity's sake I would go with one JavaScript library.
That said, if you really have a strong reason to use two, go ahead, as long as they are cached on the first visit.
Just to add to John's answer: one could choose, for example, jQuery, which allows you to use it without clashing with other libraries. I remember that I once had trouble trying to combine Prototype and MooTools because I wanted some things from the former and some from the latter, but that was long ago and it may have been solved since.
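The mechanism jQuery offers for this is its noConflict() method. The sketch below reimplements the pattern itself on a plain object so it can run anywhere (the library name and install function are invented):

```javascript
// The noConflict pattern: on load, the library remembers whatever `$`
// pointed to; calling noConflict() restores it and hands the library
// back by reference.
function installLibrary(globalObj) {
  const previous = globalObj.$;
  const lib = {
    name: 'myLib',
    noConflict() {
      globalObj.$ = previous;
      return lib;
    },
  };
  globalObj.$ = lib;
  return lib;
}
```

This is how two libraries that both want the `$` shorthand can coexist: one of them gives the global back and is used through a differently-named reference instead.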
In any case, I've found that it's easier to maintain a single library, and there are few real differences between them. It comes down more to the way each one maps documents and applies things to their elements, which causes differences in response time, but the goal is the same.
Good luck with your choice :)
PS. If you gzip your assets and set ETags correctly on the stuff you serve in your web apps, the difference between loading one or two JS frameworks is not reeeeaaally important (unless you're Facebook, Meebo or the like!). I've found the Yahoo! recommendations helpful:
http://developer.yahoo.com/performance/rules.html
I wouldn't run two frameworks unless there was really no alternative. They often aren't written to play well with others; there are lots of potential ways they can cause unwanted interactions.
For example with jQuery, a lot depends on the user-data key identifiers the library adds as attributes to element nodes. If you start messing with the structure of the document (cloning, adding/removing nodes and so on) from another framework, or even with plain DOM methods, those identifiers aren't as unique as jQuery expects; it can easily get confused and end up failing to call event handlers, or tripping over errors. This sort of thing isn't fun to debug.
