What is a javascript/jquery map?

I'm currently reading a book about building single page web applications. The application is currently in its very early stages, but the author incorporated two functions into the shell code, stateMap and jqueryMap. stateMap is for placing dynamic information shared across the module...I think I understand that. However, the jqueryMap is used to "cache jquery collections. This function should be in almost every shell and feature module we write. The use of the jqueryMap cache can greatly reduce the number of jQuery document traversals and improve performance."
Is anyone familiar with this technique? Can you explain this further?

Even elementary DOM lookups in JavaScript, by element ID or class name, take the browser considerable time, especially if the document is large.
Consider the following:
<div id="my-window"> ... </div>
A 'normal' jQuery way to locate the above DIV element by its ID would be as follows:
var $my_window = $('#my-window'); // expensive document traversal!
(Note that $('...') always represents a collection in jQuery, even if there is only a single element that matches the selector.)
If over the lifetime of your page you need to refer to that DIV multiple times in your script, such repeated lookups consume extra CPU cycles, causing your page to appear slow and ultimately degrading end user experience. What the author is suggesting is to perform such expensive lookups only once and store the results in a local cache of sorts, which is just a JavaScript object holding a bunch of references. Those references can then be reused as needed very quickly:
var jqueryMap = {}, setJqueryMap;
setJqueryMap = function () {
  jqueryMap = {
    $my_window: $('#my-window'),
    // store other references here
  };
};
All you need to do is call the setJqueryMap function once when your module loads. Then you can refer to the desired element (or elements) by their 'cached' references:
setJqueryMap();
...
jqueryMap.$my_window // do something with the element
That way the repeated traversals are avoided, making your script perform much faster.

Related

How does V8 call DOM functions?

I am studying the V8 sources.
I have spent three weeks, but I couldn't find out how V8 calls DOM functions.
For example:
<script>
document.writeln("Hello V8");
</script>
I want to know the call sequence for the DOM's writeln() function.
Could you explain this or give me some hints?
You could check the V8HTMLDocumentCustom.cpp file, where the .writeln function is found:
void V8HTMLDocument::writelnMethodCustom(const v8::FunctionCallbackInfo<v8::Value>& args)
{
    HTMLDocument* htmlDocument = V8HTMLDocument::toNative(args.Holder());
    htmlDocument->writeln(writeHelperGetString(args), activeDOMWindow()->document());
}
As you can see, there are several headers included; some of those includes lead you to other headers, where you find files like V8DOMConfiguration.h.
V8DOMConfiguration.h has some comments:
class V8DOMConfiguration {
public:
    // The following Batch structs and methods are used for setting multiple
    // properties on an ObjectTemplate, used from the generated bindings
    // initialization (ConfigureXXXTemplate). This greatly reduces the binary
    // size by moving from code driven setup to data table driven setup.
What I gather from this is that Chrome's V8 creates "wrapper worlds" of objects, recreating the DOM for each of them, and then just passes data to the active window.
I'm not well versed in V8, however this is a starting point. Maybe someone with a deeper knowledge of it can explain it better.
Update
As @Esailija points out, the V8 engine has no DOM available when run outside a browser. The DOM is part of WebKit/Blink, which is where the linked references point. Once the browser has built the DOM, V8 objects are matched with DOM tree elements. There's a related question about this here: V8 Access to DOM
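As a quick illustration of that boundary, here is a plain JavaScript sketch (nothing V8-specific is assumed): document is a host object that the embedder installs, so on the bare engine it simply is not there.
// `document` is provided by the embedder (Blink/WebKit), not by V8 itself;
// in a bare V8 shell such as d8 the global object has no DOM bindings.
if (typeof document === 'undefined') {
  // running on the bare engine: only the language is available
} else {
  document.writeln('Hello V8'); // routed through the embedder's binding code
}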

Options to link DOM node to (in-browser) domain object: is direct reference OK?

For a single-page app: I want each of my DOM nodes to have a reference to a single (in-browser) domain object. Is it OK to just store a direct reference like this:
var myDomainObject = ...;
var DOMNode = document.getElementById("myId");
DOMNode.domain_object = myDomainObject;
Is this safe, repeatable? Can the browser do mysterious things with added-on JavaScript properties?
Thanks.
From an experiential standpoint: I've attached data directly to nodes and never had a problem. From a specification standpoint, my interpretation is that doing so is not necessarily recommended, but is safe to the extent that the custom properties don't conflict with anything else.
In Common Infrastructure - Extensibility, the recommendation for authors (that's you) is to use only [data-*] attributes:
Authors can include data for inline client-side scripts or server-side site-wide scripts to process using the data-*="" attributes. These are guaranteed to never be touched by browsers, and allow scripts to include data on HTML elements that scripts can then look for and process.
And the requirement for a valid user-agent is to leave anything in the DOM it doesn't recognize.
User agents must treat elements and attributes that they do not understand as semantically neutral; leaving them in the DOM (for DOM processors), and styling them according to CSS (for CSS processors), but not inferring any meaning from them.
So, my suggestion, in line with the W3C's aim of avoiding conflicts, is to create objects that refer to DOM elements rather than tagging things onto the DOM. But if you really must attach something to an element, rest assured that user agents are required to leave it there, and it would be wise to use those data-* attributes.
(I personally don't use them and tend to slap objects and values onto whatever's most convenient at the time. But, I may be jaded by about 15 years of hacks and "feature detection" for the non-compliance of the user agents. Even now, I don't think IE supports the data-* standard ... )
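For illustration, here is a minimal sketch of both suggestions, assuming an element with id "myId" exists; the names (domainObjects, data-domain-key) are mine, not from the spec or from any library.
var myDomainObject = { title: 'example' };

// Suggestion 1: keep the link in your own object, which points at the DOM
// element, instead of tagging the element itself.
var domainObjects = {
  myId: {
    node: document.getElementById('myId'),
    data: myDomainObject
  }
};

// Suggestion 2: if something must live on the element, use a data-*
// attribute (string values only), which browsers promise not to touch.
document.getElementById('myId').setAttribute('data-domain-key', 'myId');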
I've done it before to store custom events, and it works fine in every browser I've tried. But yes, I'd say it's dangerous; do it only if there are no other alternatives and you must pass this info with the element.
At least, ideally, create your own namespace so as not to pollute the already polluted object:
var myDomainObject = ...;
var DOMNode = document.getElementById("myId");
DOMNode.My = {};
DOMNode.My.domain_object = myDomainObject;
Edit: Just wanted to see how many methods and properties a regular div might have, and it has 136 (in Chrome). http://jsbin.com/abecaq/1/edit
While it wouldn't cause any issues on a small site, I would avoid doing this for a few reasons...
You are opening yourself up to memory leaks by linking references to many different JS objects
Any new DOM nodes would have to have this link attached
I would recommend using a global variable, or ideally a variable at the highest necessary scope, that can be referenced wherever needed. A sketch of that approach follows.
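Here is a minimal sketch of that recommendation, assuming elements carry ids; all the names are illustrative.
// One lookup table at the highest necessary scope: nothing non-standard is
// attached to the nodes, and newly created nodes need no linking step.
var domainObjects = {};

function linkDomainObject(id, obj) {
  domainObjects[id] = obj;
}

function domainObjectFor(node) {
  return domainObjects[node.id];
}

linkDomainObject('myId', { name: 'my domain object' });
var obj = domainObjectFor(document.getElementById('myId'));
In environments with ES2015 support, a WeakMap keyed by the node itself would avoid the need for ids and let entries be collected along with their elements, but a plain object like the above is the tool these answers had in mind.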

Identify javascript closures with developer tools

I am currently developing a website that is pure javascript and relies heavily on the jQuery & jQuery UI libraries (this site is not intended for use by a general public, hence progressive enhancement is not a strict requirement for this project). I am encountering a significant memory leak on executing the following code:
oDialogBox = $("<div>...</div>");
/* Add useful things to the dialog box here */
oDialogBox.appendTo("body");
oDialogBox.dialog({
  /* Other dialog box settings here */
  close: function(event, ui) {
    oDialogBox.dialog("destroy");
    oDialogBox.remove();
    oDialogBox = null;
  }
});
At any given time in this dialog box, I am creating, removing and modifying a large number of instances of jQuery UI buttons, multiselects (per the Multiselect widget created by Eric Hynds) and on click event handlers. According to jQuery UI documentation, calling .remove() on oDialogBox should result in all child widgets being unbound and deleted. Yet my detached DOM tree shows a significant number of garbage elements that the GC isn't collecting.
It is highly likely I have missed a large set of closures that need to be finished off safely. How do I do the following:
1) How do I identify which closures are keeping a given detached DOM object alive (either in Firefox or Chrome)?
2) Assuming the complete set of closures is identified, does anything beyond nulling the variable need to be done to assure marking the DOM element for garbage collection?
3) I have also noticed my list of arrays stored by the page is giant and contains references to DOM elements not being gathered by the GC. Is there a documented best practice for cleaning arrays from javascript and allowing all elements to be marked for deletion? (Note: this is a current prime suspect for the source of the memory leak)
I'm afraid that I don't have a great answer for #1. I haven't found any really good tools for this myself, even given how good the development tools have become over the last few years. The best advice I can give is to always keep things in the smallest scope you possibly can. If things don't escape, it's generally easier to simply figure out where the references must be.
As to #2, there can be further concerns. If the object referenced by variable v1 closes over the free variables of some function, removing v1 will not be enough to make them eligible for garbage collection if another variable v2 closes over v1 in some other function. So I guess if you really mean the "complete set of closures", then you should be all set. But this might get hairy. Again, if most objects have references only in narrow scopes, these problems are much less severe. A sketch of such a chain follows.
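In this minimal sketch (all names are illustrative), the detached dialog node stays reachable through a registered handler's closure, so clearing one variable elsewhere is not enough:
function setup() {
  var node = document.getElementById('dialog');

  function onBodyClick() {
    // closes over `node`, so the detached subtree stays reachable
    console.log('last dialog was', node && node.id);
  }
  document.body.addEventListener('click', onBodyClick);

  node.parentNode.removeChild(node); // detached, but NOT collectible

  // To actually free it, every path must be cut:
  // document.body.removeEventListener('click', onBodyClick);
  // node = null;
}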
For #3, what sorts of arrays are you discussing? If it's jQuery collections, then perhaps you simply have too many of them around. The only reason I know for them to stay around for a long time is to bind event handlers to them, and that is almost always better handled by event delegation on parent elements (see the sketch below). If it's your own custom arrays, do you really have a good reason to store references to them in arrays that last for any substantial length of time? I've rarely found one.
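A minimal jQuery sketch of that delegation (the selectors are hypothetical): one handler on a stable parent replaces one handler, and one cached collection, per child.
// Instead of binding one handler per row and keeping each collection alive:
//   $('#list .row').on('click', handler);

// ...delegate once to a parent that outlives the rows:
$('#list').on('click', '.row', function () {
  // `this` is the .row that was clicked, even if it was added later
  $(this).toggleClass('selected');
});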

Is it okay to use data-attributes to store Javascript 'state'

I often use data-attributes to store configuration that I can't mark up semantically, so that the JS will behave in a certain way for those elements. Now this is fine for pages where the server renders them (dutifully filling out the data-attributes).
However, I've seen examples where the JavaScript writes data-attributes to save bits of data it may need later. For example, posting some data to the server: if it fails to send, the data is stored in a data-attribute and a retry button is provided. When the retry button is clicked, it finds the appropriate data-attribute and tries again.
To me this feels dirty and expensive as I have to delve into the DOM to then dig this bit of data out, but it's also very easy for me to do.
I can see 2 alternative approaches:
One would be to take advantage of the scoping of an anonymous JavaScript function to keep a handle on the original bit of data, although this may not be possible and could perhaps lead to too much "magic".
Two, keep an object lying around that keeps a track of these things. Instead of asking the DOM for the contents of a certain data-attribute I just query my object.
I guess my assumptions are that the DOM should not be used to store arbitrary bits of state, and instead we should use simpler objects that have a single purpose. On top of that I assume that accessing the DOM is more expensive than a simpler, but specific object to keep track of things.
What do other people think with regards to, performance, clarity and ease of execution?
Your assumptions are very good! Although it's allowed and perfectly valid, it's not a good practice to store data in the DOM. Sure, it's fine if you only have one input field, but as the application grows you end up with a jumbled mess of data everywhere... and as you mentioned, the DOM is SLOW.
The bigger the app, the more essential it is to separate your concerns:
DOM Events -> trigger JS functions -> access Data (JS object, JS API, or AJAX API) -> process results (API call or DOM Change)
I'm a big fan of creating an API to access JS data, so you can also trigger new events upon add, delete, get, and change. A rough sketch follows.
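This is a minimal sketch of such a data API, assuming jQuery for the custom events; the store/get/set names and the event names are invented for illustration.
// A tiny data store that owns the state and announces changes,
// so the DOM only ever *reflects* data instead of holding it.
var store = (function () {
  var data = {};
  return {
    get: function (key) { return data[key]; },
    set: function (key, value) {
      data[key] = value;
      $(document).trigger('store:change', [key, value]);
    }
  };
}());

// UI code subscribes instead of rummaging through data-attributes:
$(document).on('store:change', function (event, key, value) {
  if (key === 'pendingPost') {
    $('#retry-button').toggle(value !== null);
  }
});

store.set('pendingPost', { url: '/api/save', body: '...' });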

javascript constructs to avoid?

I have been writing a JS algorithm. It's blazing fast in Chrome and dog slow in FF. In the Chrome profiler I spend <10% in a method; in FF the same method is 30% of the execution time. Are there JavaScript constructs to avoid because they are really slow in one browser or another?
One thing I have noticed is that things like simple variable declaration can be expensive if you do it enough. I sped up my algorithm noticeably by not doing things like
var x = y.x;
dosomething(x);
and just doing
dosomething(y.x);
for example.
As you've found, different things are issues in different implementations. In my experience, barring doing really stupid things, there's not much point worrying about optimizing your JavaScript code to be fast until/unless you run into a specific performance problem when testing on your target browsers. Such simple things as the usual "count down to zero" optimization (for (i = length - 1; i >= 0; --i) instead of for (i = 0; i < length; ++i)) aren't even reliable across implementations. So I tend to stick to writing code that's fairly clear (because I want to be nice to whoever has to maintain it, which is frequently me), and then worry about optimization if and when.
That said, looking through the Google article that tszming linked to in his/her answer reminded me that there are some performance things that I tend to keep in mind when writing code originally. Here's a list (some from that article, some not):
When you're building up a long string out of lots of fragments, surprisingly you usually get better performance if you build up an array of the fragments and then use the Array#join method to create the final string (a sketch follows this list). I do this a lot if I'm building a large HTML snippet that I'll be adding to a page.
The Crockford private instance variable pattern, though cool and powerful, is expensive. I tend to avoid it.
with is expensive and easily misunderstood. Avoid it.
Memory leaks are, of course, expensive eventually. It's fairly easy to create them on browsers when you're interacting with DOM elements. See the article for more detail, but basically, hook up event handlers using a good library like jQuery, Prototype, Closure, etc. (because that's a particularly prone area and the libraries help out), and avoid storing DOM element references on other DOM elements (directly or indirectly) via expando properties.
If you're building up a significant dynamic display of content in a browser, innerHTML is a LOT faster in most cases than using DOM methods (createElement and appendChild). This is because parsing HTML into their internal structures efficiently is what browsers do, and they do it really fast, using optimized, compiled code writing directly to their internal data structures. In contrast, if you're building a significant tree using the DOM methods, you're using an interpreted (usually) language talking to an abstraction that the browser then has to translate to match its internal structures. I did a few experiments a while back, and the difference was about an order of magnitude (in favor of innerHTML). And of course, if you're building up a big string to assign to innerHTML, see the tip above: best to build up fragments in an array and then use join.
Cache the results of known-slow operations, but don't overdo it, and only keep things as long as you need them. Keep in mind the cost of retaining a reference vs. the cost of looking it up again.
I've repeatedly heard people say that accessing vars from a containing scope (globals would be the ultimate example of this, of course, but you can do it with closures in other scopes) is slower than accessing local ones, and certainly that would make sense in a purely interpreted, non-optimized implementation because of the way the scope chain is defined. But I've never actually seen it proved to be a significant difference in practice. (Link to simple quick-and-dirty test) Actual globals are special because they're properties of the window object, which is a host object and so a bit different than the anonymous objects used for other levels of scope. But I expect you already avoid globals anyway.
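As promised above, here is a minimal sketch of the first and fifth tips combined (the markup is invented for illustration): collect the fragments in an array, join once, and make a single innerHTML assignment.
var items = ['one', 'two', 'three'];
var parts = [];

for (var i = 0; i < items.length; ++i) {
  parts.push('<li>', items[i], '</li>');
}

// One join and one innerHTML assignment, instead of repeated string
// concatenation or many createElement/appendChild calls.
document.getElementById('list').innerHTML =
  '<ul>' + parts.join('') + '</ul>';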
Here's an example of #6. I actually saw this in a question related to Prototype a few weeks back:
for (i = 0; i < $$('.foo').length; ++i) {
  if ($$('.foo')[i].hasClass("bar")) { // I forget what this actually was
    $$('.foo')[i].setStyle({/* ... */});
  }
}
In Prototype, $$ does an expensive thing: It searches through the DOM tree looking for matching elements (in this case, elements with the class "foo"). The code above is searching the DOM three times on each loop: First to check whether the index is in bounds, then when checking whether the element has the class "bar", and then when setting the style.
That's just crazy, and it'll be crazy regardless of what browser it's running on. You clearly want to cache that lookup briefly:
list = $$('.foo');
for (i = 0; i < list.length; ++i) {
  if (list[i].hasClass("bar")) { // I forget what this actually was
    list[i].setStyle({/* ... */});
  }
}
...but taking it further (such as working backward to zero) is pointless; it may be faster on one browser and slower on another.
Here you go:
http://code.google.com/intl/zh-TW/speed/articles/optimizing-javascript.html
I don't think this is really a performance issue, but it's something to avoid for sure unless you really know what's happening:
var a = something.getArrayOfWhatever();
for (var element in a) {
  // aaaa! no!! please don't do this!!!
}
In other words, using the for ... in construct on arrays should be avoided. Even when iterating through object properties it's tricky, as the sketch below shows.
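A minimal sketch of the pitfall and the safer alternatives, assuming nothing beyond plain JavaScript:
var a = ['x', 'y', 'z'];
a.extra = 'surprise';

// for...in walks enumerable property NAMES, not values, picks up the
// stray 'extra' property, and guarantees no particular order:
for (var key in a) {
  console.log(key); // "0", "1", "2", "extra"
}

// For arrays, index explicitly:
for (var i = 0; i < a.length; ++i) {
  console.log(a[i]); // "x", "y", "z"
}

// For objects, guard against inherited enumerable properties:
var someObject = { name: 'value' };
for (var prop in someObject) {
  if (Object.prototype.hasOwnProperty.call(someObject, prop)) {
    // safe to use someObject[prop]
  }
}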
Also, my favorite thing to avoid is omitting var when declaring local variables!
