Inside a module I'm writing (it's kind of a slider / timeline interface component) I've got a method that updates the controls, which are a set of clickable elements along the bottom that are updated on click and when the user scrolls.
I'm doing the following to attach classes to the items up until the active one. While the approach I'm using works, it feels very inefficient as I'm looping over a set of DOM elements each time.
updateTimeLine : function(pos, cb) {
    var p = pos;
    var timeline = $('.timer').toArray();
    if (p > 15)
        p = 15;
    $.each(timeline, function(index, value) {
        var that = $(this);
        if (index >= p) {
            if (that.children('span').hasClass('active'))
                that.children('span').removeClass('active');
        } else {
            that.children('span').addClass('active');
        }
    });
    if (cb && typeof(cb) === "function") {
        cb();
    }
    return this;
},
Is there a better way to do this? If so, how?
Is this a good use case for something like the observer pattern? I don't fully get that pattern, having not spent any time with it yet, so if it is, I'd really like to know how to apply it properly.
Observer patterns notify subscribed objects by looping through the subscribers and invoking each one's listener when a relevant change occurs. Because of that, you'd probably end up using $.each anyway. I think what you have is equally efficient.
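For illustration only, here's a bare-bones publish/subscribe sketch (the hub and its names are made up, not part of your module) showing that notifying is itself a loop over the listeners:
// a minimal, hypothetical event hub
var timelineEvents = {
    listeners: [],
    subscribe: function (fn) {
        this.listeners.push(fn);
    },
    notify: function (pos) {
        // notification still iterates over every subscriber
        for (var i = 0; i < this.listeners.length; i++) {
            this.listeners[i](pos);
        }
    }
};

// each control could register a listener...
timelineEvents.subscribe(function (pos) { /* toggle 'active' on one item */ });
// ...and updateTimeLine would boil down to timelineEvents.notify(pos);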
If you feel bad about iterating over the DOM each time, consider this: no algorithm can update each DOM element without iterating over them. Caching the DOM array would theoretically improve performance, but my money says the browser's already doing that. Try it yourself on this jsperf...
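For what it's worth, the cached variant that perf test would compare looks roughly like this (a sketch, assuming the module can hold the span set between calls and that the elements exist when it's built):
// cached once, e.g. when the module initialises
var $timelineSpans = $('.timer').children('span');

// the method then only flips classes on the cached set
updateTimeLine : function (pos, cb) {
    var p = Math.min(pos, 15); // cap at 15 as before
    $timelineSpans.each(function (index) {
        $(this).toggleClass('active', index < p);
    });
    if (typeof cb === "function") {
        cb();
    }
    return this;
},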
I can't seem to find a definitive answer for this. Consider the following:
var dupe = false;
$(".syndUID").sort(function(a, b) {
    return a - b;
}).each(function(i, el) {
    alert($(this).val() + " - " + $(this).next().val());
    if ($(this).val() == $(this).next().val()) {
        dupe = true;
        return;
    }
});
This code is an attempt to find duplicate values in a set of inputs with the class syndUID. They are scattered about a form, so they're not next to each other in the DOM.
next().val() is always undefined, though. Am I using the wrong function? How do I simply peek ahead to the next element? I have access to the index, but I don't know how to even make use of it.
EDIT:
After reading the comments and answers I realized there really is no proper iterator in jQuery, which seems really stupid to me since it provides each(). I also had another bug with the above code. Here is the final solution I used that works:
// duplicate check
var dupe = false;
var prevVal = undefined;
$(".syndUID").sort(function(a, b) {
    return $(a).val() - $(b).val();
}).each(function() {
    if ($(this).val() == prevVal) {
        dupe = true;
        return false;
    }
    prevVal = $(this).val();
});
For anyone who finds this via google, the answers provided by others may be a decent alternative solution, but for my needs I found this sufficed.
You can do something like $('.syndUID')[i+1] to regrab the list, focusing on that element (and optionally turning it back into a jQuery object)
Using [i+1] will return a DOM element, but you can use .eq(i+1) to return a jQuery object. Or if you hate your future developers, wrap the DOM element in $() because screw readability!
As Andreas stated -- regrabbing the list is wasting memory. Before the loop, cache the object into a variable with var element = $('.syndUID') so you're not iterating the DOM so much.
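Putting those two comments together, a sketch of peeking ahead against a cached set (this still assumes the values have been sorted if you want adjacent comparison to catch every duplicate):
var $fields = $(".syndUID"); // grab the set once, outside the loop
var dupe = false;
$fields.each(function (i) {
    var $next = $fields.eq(i + 1); // empty jQuery object on the last element
    if ($next.length && $(this).val() == $next.val()) {
        dupe = true;
        return false; // break out of each()
    }
});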
next() grabs the next sibling - not the next ancestor or parent.
For example, if we have an unordered list, next() works like this:
var a = $('#listone .first').next().text();
console.log(a); // outputs "Second"
<ul id="listone">
    <li class="first">First</li>
    <li class="second">Second</li>
</ul>
<ul id="listtwo">
    <li class="third">Third</li>
    <li class="fourth">Fourth</li>
</ul>
So, if you are trying to grab the next parent, .val() applied to .next() won't work. I don't know what you're outputting to (table, UL, DIV, etc.) so it is hard to give you a working solution - but that is likely your problem. You are not traversing the DOM properly.
Well, you have your sorted array already, maybe instead of trying to do this with .each() you just use a simple for loop?
var elemArray = $(".syndUID").sort(function(a, b) {
    return $(a).val() - $(b).val();
});
for (var i = 0; i < elemArray.length - 1; i++) {
    if (elemArray[i].value == elemArray[i + 1].value) {
        //do something with comparison
    }
}
However, this will only check for a duplicate in the next syndUID element in the array. In order to do a complete search you would need to check each element against every other element in the array. That would involve a nested loop, which makes the function O(n²).
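As an aside, if the values are usable as object keys, a lookup table finds duplicates in a single pass without sorting at all. This is a sketch of that alternative, not the approach above:
var seen = {};
var dupe = false;
$(".syndUID").each(function () {
    var val = $(this).val();
    if (seen.hasOwnProperty(val)) {
        dupe = true;
        return false; // stop at the first duplicate
    }
    seen[val] = true;
});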
I'm using a JavaScript loop (using setInterval) that runs through a list of search results, highlighting the search term by adding a CSS-styled <span> around search hits as it goes. I'm using setInterval like this to release control of the browser while it does this.
In Chrome and Firefox this works well - even with a setInterval parameter of 10-20ms; and the user has full control of the browser (i.e. scrolling, clicking links etc.) while the results are rapidly highlighted:
mylooper = setInterval(function() {
    // my functionality is here
}, 15); // 15ms
Unfortunately, when using the dreaded IE8, the browser locks up and takes a really long time to add the <span>'s and style the search results. It also takes a long time just to load the page in the first place - shortened a great deal when this script is removed.
So far I've tried:
changing the interval values (I've read that IE8 doesn't detect intervals of sub 15ms);
using setTimeout instead of setInterval;
removing the interval to check that this is in fact what is causing the slow-down (it is!); and
swearing about Internet Explorer a lot. The loop itself looks like this:
var highlightLoop;
var index = 0;
highlightLoop = setInterval(function () {
    var regex = RegExp(regexPhrase, "gi"); // regexPhrase created elsewhere
    var searchResults = resultElements.eq(index).get(0); // run through resultElements, which contains all the nodes with search results in them
    findAndReplaceDOMText( // a function that does the searching and inserting of styling
        regex,
        searchResults,
        function (fill, matchIndex) {
            called = true;
            var span = document.createElement("span");
            span.className = "result-highlight";
            span.innerHTML = fill;
            return span;
        }
    );
    if (index == resultElements.length || searchTermUpdated == true) { // stop the interval loop when the search term changes or we reach the end of results - variable set elsewhere
        searchTermUpdated = false;
        clearInterval(highlightLoop); // stop the loop
    }
    index++;
}, 50); // 50ms does not improve performance.
Any advice on workarounds for this kind of javascripting in IE would be massively appreciated. Thanks all.
I believe you may be able to improve the performance by tweaking findAndReplaceDOMText, and maybe its callback too. I suppose findAndReplaceDOMText appends the element returned by the callback to the DOM from within a loop over all matches. If it's doing that inside a loop, try to move the DOM update outside the loop and apply all the changes at once. That should result in better performance, as repainting the page after each DOM update is expensive.
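I can't see inside findAndReplaceDOMText, so this is only a sketch of the general idea, with made-up `matches` and `container` variables: build the new nodes detached from the document, then touch the live DOM once.
// hypothetical: `matches` is an array of matched strings, `container` the element to update
var fragment = document.createDocumentFragment();
for (var i = 0; i < matches.length; i++) {
    var span = document.createElement("span");
    span.className = "result-highlight";
    span.innerHTML = matches[i];
    fragment.appendChild(span);
}
// one append, one reflow, instead of one per match
container.appendChild(fragment);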
Try this recursive approach instead:
get a list of all elements to be acted upon into array X (one time cost)
while the array X has length, keep repeating the next actions
shift the first element off the array
process the single element
start this process again with the new array X (now n - 1 elements long) on a setTimeout
The code looks like this in general
function processArray(array) {
    var element = array.shift();
    processElement(element);
    if (array.length) {
        setTimeout(function() { processArray(array); }, 15);
    }
}
There might be something else to be done with this recursion, but it works fairly well in all browsers and never blocks, because you're only initiating the repeat when the last one has had time to finish.
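Applied to the highlighting question it would be kicked off roughly like this (resultElements is from your own code, and processElement is a placeholder that would wrap your findAndReplaceDOMText call):
var queue = resultElements.toArray(); // one-time snapshot of the result nodes

function processElement(el) {
    // your per-element findAndReplaceDOMText(...) call goes here
}

processArray(queue); // starts the self-scheduling loop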
I have a game I'm creating where lights run around the outside of a circle, and you must try and stop the light on the same spot three times in a row. Currently, I'm using the following code to loop through the lights and turn them "on" and "off":
var num_lights = 20;
var loop_speed = 55;
var light_index = 0;
var prevent_stop = false; // If true, prevents user from stopping light
var loop = setTimeout(startLoop, loop_speed);

function startLoop() {
    prevent_stop = false;
    $(".light:eq(" + light_index + ")").css("background-color", "#fff");
    light_index++;
    if (light_index >= num_lights) {
        light_index = 0;
    }
    $(".light:eq(" + light_index + ")").css("background-color", "red");
    loop = setTimeout(startLoop, loop_speed);
}

function stopLoop() {
    clearTimeout(loop);
}
For the most part, the code seems to run pretty well, but if I have a video running simultaneously in another tab, the turning on and off of the lights seems to chug a bit. Any input on how I could possibly speed this up would be great.
For an example of the code from above, check out this page: http://ericditmer.com/wheel
When optimizing, the first thing to look at is not doing twice anything you only need to do once. Looking up an element in the DOM can be expensive, and you definitely know which elements you want, so why not pre-fetch all of them and avoid doing that lookup multiple times?
What I mean is that you should
var lights = $('.light');
So that you can later just say
lights.eq(light_index).css("background-color", "red");
Just be sure to do the first thing in a place which keeps lights in scope for the second.
EDIT: Updated per comment.
I would make a global array of your selector references, so the selector doesn't have to be executed every time the function is called. I would also consider swapping class names rather than attributes.
Here's some information on jQuery performance:
http://www.componenthouse.com/article-19
EDIT: that article is quite old though and jQuery has evolved a lot since. This is more recent: http://blog.dynatrace.com/2009/11/09/101-on-jquery-selector-performance/
You could try storing the light elements in an array instead of using a selector each time. Class selectors can be a little slow.
var elements = $('.light');

function startLoop() {
    prevent_stop = false;
    $(elements[light_index]).css('background-color', '#fff');
    ...
}
This assumes that the elements are already in their intended order in the DOM.
One thing I will note is that you have used a setTimeout() and really just engineered it to behave like setInterval().
Try using setInterval() instead. I'm no js engine guru but I would like to think the constant reuse of setTimeout has to have some effect on performance that would not be present using setInterval() (which you only need to set once).
Edit:
Courtesy of Diodeus, a related post to back up my statement:
Related Stack Question - setTimeout() vs setInterval()
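Concretely, only the scheduling lines of your loop would need to change; a sketch (reusing the cached lights set from the earlier answer):
var loop = setInterval(startLoop, loop_speed); // scheduled once

function startLoop() {
    prevent_stop = false;
    lights.eq(light_index).css("background-color", "#fff");
    light_index++;
    if (light_index >= num_lights) {
        light_index = 0;
    }
    lights.eq(light_index).css("background-color", "red");
    // no re-scheduling here any more
}

function stopLoop() {
    clearInterval(loop); // clearInterval instead of clearTimeout
}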
OK, this includes some "best practice" improvements; whether it really optimizes the execution speed should be tested. At least you can proclaim you're now coding ninja style lol
// create a helper function that lends the Array reverse function to reverse the
// order of a jQuery set. It's an object by default, not an array, so using it
// directly would fail
$.fn.reverse = Array.prototype.reverse;

var loop,
    loop_speed = 55,
    prevent_stop = false,
    // prefetch a jQuery set of all lights and reverse it to keep the right
    // order when iterating backwards (small performance optimization)
    lights = $('.light').reverse();

// this named function executes as soon as it's initialized
// I wrapped everything into a second function, so the variable prevent_stop is
// only set once at the beginning of the loop
(function startLoop() {
    // keep variables always in the scope they are needed
    // changed the iteration to count down, because checking for 0 is faster
    var num_lights = light_index = lights.length - 1;
    prevent_stop = false;
    // This is an auto-executing, self-referencing function
    // which avoids the 55ms delay when starting the loop
    loop = setInterval((function() {
        // work with css-class changing rather than css manipulation
        lights.eq( light_index ).removeClass('active');
        // if not 0, decrement; else reset to num_lights
        light_index = (light_index) ? --light_index : num_lights;
        lights.eq( light_index ).addClass('active');
        // returns a reference to this function so it can be executed by setInterval()
        return arguments.callee;
    })(), loop_speed);
})();

function stopLoop() {
    clearInterval(loop);
}
Cheers neutronenstern
I'm writing a little cached function in a plugin / library. It takes an HTMLElement and returns a Decorator.
return function _cache(elem) {
    if (elem.id === "") {
        elem.id = PLUGIN_NAME + "_" + uid++;
    }
    if (cache[elem.id] === void 0) {
        cache[elem.id] = _factory(elem);
    }
    return cache[elem.id];
}
Here I'm storing some expensive operation in a cache keyed by the id of the HTMLElement. This is an O(1) lookup, but it uses the "bad practice" of setting elem.id and having a side effect.
The alternative would be an O(N) lookup on the cache:
return function _cache(elem) {
    for (var i = 0, ii = cache.length; i < ii; i++) {
        var o = cache[i];
        if (o.elem == elem) return o.data;
    }
    var ret = _factory(elem);
    cache.push({ elem: elem, data: ret });
    return ret;
}
But this means that my cached expensive method doesn't have any side effects on the HTMLElement.
Question:
Is this "side effect" innocent and is it worth doing for the optimization on my decorator?
Real Code:
Gist of plugin template where I use this snippet
Edit:
I'm clearly too tired and forgot data-foo exists. Here's how it should be implemented
var attr = "data-" + PLUGIN_NAME + "-cache";

return function _cache(elem) {
    var val = elem.getAttribute(attr);
    if (val === null || val === "") {
        val = PLUGIN_NAME + "_" + uid++;
        elem.setAttribute(attr, val);
    }
    if (cache[val] === undefined) {
        cache[val] = _factory(elem);
    }
    return cache[val];
}
Instead of using the id, use data-x - that's what it was created for.
id has a specific meaning, and it is confusing to see it automagically generated (even if properly documented, which it nearly never is). You also risk a slight chance of overriding an existing id.
Is this "side effect" innocent
No, clearly. Is it going to interact well with other scripts on the page? Don't know... that depends what it's for and what other kinds of scripts you expect it to be combined with. You can never make a ‘plugin’ that won't ever fail when interacting with other plugins and scripts, but by keeping the side-effects to a minimum you can at least try to minimise it.
Note that id is not a unique identifier. Although there should be only one element with a given ID in a document at one particular time, (a) multiple elements might be created with the same ID and inserted into the document sequentially (perhaps one element replacing another with the same ID), and (b) people still do use duplicate IDs even though it's wrong. Either would cause your cache to collect old, no-longer-used elements and return them inappropriately.
It is unfortunate that there is no JavaScript function to get a scalar/hashable unique identifier for an arbitrary object; the only way to obtain object identity is to ===-compare against other objects.
Another common way forward is to add an arbitrary new property to the node (‘expando’ in IE terms), with a randomised really-unique ID. Expandos aren't guaranteed by standard to work, but it has worked in all browsers back to day one and is commonly used.
This is how for example jQuery identifies elements uniquely, and if you are writing a plugin for jQuery you might try taking advantage of that—jQuery.expando holds the name of the arbitrary expando property being used for this purpose... or, sticking within the documented featureset, data() could be used to add your own metadata to the element including another unique ID of your own.
Expandos do have some unpleasant side-effects, including accidentally treating them as attributes in IE<9 (which can't tell the difference between properties and attributes), but if you're using jQuery anyway you probably don't have anything to lose.
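A sketch of the data()-based variant, reusing PLUGIN_NAME, uid, cache and _factory from the question (the key name is arbitrary):
return function _cache(elem) {
    var $elem = $(elem);
    var key = $elem.data(PLUGIN_NAME + "-cache-key");
    if (key === undefined) {
        // first time we've seen this element: assign it a new key
        key = uid++;
        $elem.data(PLUGIN_NAME + "-cache-key", key);
    }
    if (cache[key] === undefined) {
        cache[key] = _factory(elem);
    }
    return cache[key];
}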
is it worth doing for the optimization on my decorator?
Depends how many you're expecting to have on a page. Comparing each item to each other item is an O(n²) operation; tolerable (and probably preferable given the side-effects) if n is low, but quickly getting unmanageable as n grows.
This seems like something neat that might be "built into" jQuery, but I think it's still worth asking.
I have a problem that can easily be solved by iterating through all the children of an element. I've recently discovered I need to account for cases where I have to go a level or two deeper than the "one level" (just calling .children() once) I'm currently doing.
jQuery.each(divToLookAt.children(), function(index, element) {
    //do stuff
});
This is what I'm currently doing. To go a second layer deep, I run another loop after the "do stuff" code for each element.
jQuery.each(divToLookAt.children(), function(index, element) {
    //do stuff
    jQuery.each(jQuery(element).children(), function(indexLevelTwo, elementLevelTwo) {
        //do stuff
    });
});
If I want to go yet another level deep, I have to do this all over again.
This is clearly not good. I'd love to declare a "level" variable and then have it all take care of. Anyone have any ideas for a clean efficient jQueryish solution?
Thanks!
This is an awesome question because of the levels deep catch. Check out the fiddle.
Converted this to a plugin.
Activate
$('#div').goDeep(3, function(deep){ // $.fn.goDeep(levels, callback)
    // do stuff on `this`
});
Plugin
$.fn.goDeep = function(levels, func){
    var iterateChildren = function(current, levelsDeep){
        func.call(current, levelsDeep);
        if (levelsDeep > 0)
            $.each(current.children(), function(index, element){
                iterateChildren($(element), levelsDeep - 1);
            });
    };
    return this.each(function(){
        iterateChildren($(this), levels);
    });
};
This question is awesome :-)
If you know your DOM is not too gigantic, you could just find all the descendants and filter out the ones that don't qualify:
var $parent = $('#parent');
var $childrenWithinRange = $parent.find('*').filter(function() {
    return $(this).parentsUntil('#parent').length < yourMaxDepth;
});
After that, the jQuery instance "$childrenWithinRange" would be all the child nodes of that parent <div> that are within some maximum depth. If you wanted exactly that depth, you'd switch "<" to "===". I may be off by one somewhere.
You should be able to just do it with the all-selector(docs), the child-selector(docs) and multiple-selector(docs) like this:
Example: http://jsfiddle.net/mDu9q/1/
$('#start > *,#start > * > *,#start > * > * > *').doSomething();
...or if you only wanted to target the children 3 levels deep, you could do this:
Example: http://jsfiddle.net/mDu9q/2/
$('#start > * > * > *').doSomething();
Both of these selectors are valid for querySelectorAll, which means a big performance boost in supported browsers.
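If you want the depth to be a variable, the selector string can be built in a loop. A sketch (childrenToDepth is a made-up name, and doSomething() is the same placeholder as above):
function childrenToDepth(parentSelector, levels) {
    var parts = [];
    var path = parentSelector;
    for (var i = 0; i < levels; i++) {
        path += " > *";   // one more level per pass
        parts.push(path); // "#start > *", "#start > * > *", ...
    }
    return $(parts.join(", "));
}

// selects the same set as the first example above
childrenToDepth("#start", 3).doSomething();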
The question sounds like the answer could be XPath. I'm not well informed about the browser support, but in XPath you only need to create a path like
/*/*/*/*
https://developer.mozilla.org/en/introduction_to_using_xpath_in_javascript
(works in FF,Chrome,Safari,Opera)
http://msdn.microsoft.com/en-us/library/aa335968%28v=vs.71%29.aspx
(didn't try it yet)
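A sketch of the DOM side of that (document.evaluate is the non-IE API from the first link; the container id is just an example):
var parent = document.getElementById('start'); // example container
// every element exactly three levels below the context node
var result = document.evaluate(
    './*/*/*',
    parent,
    null,
    XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
    null
);
for (var i = 0; i < result.snapshotLength; i++) {
    var el = result.snapshotItem(i);
    // do stuff with el
}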
var lvlFunc = function(elmt, depth) {
    if (depth > 0) {
        elmt.children().each(function(i, e){
            // do stuff on the way down
            lvlFunc($(this), depth - 1);
            // do stuff on the way out
        });
        // do stuff
    }
};
lvlFunc(divToLookAt, 3);
Make sure that you put your "do stuff" code in the right location if it matters which order the "stuff" is performed in.