I'm learning about event delegation, and I understand that if I create something like the following, it could be expensive in terms of memory:
bigElementsArray.addEventListener("click", e => {
});
But what if I point to the same function in memory? That should create only one handler:
const sameFunctionRef = e => {};
bigElementsArray.addEventListener("click", sameFunctionRef);
So why is this a problem, too?
Aside: In your first code snippet I assume you're iterating over an array and adding event listeners (I'm happy to be corrected, but I'm not aware of a way, without some JS trickery, of adding event listeners to an array of elements).
The issue here isn't creating a bunch of new functions (that is an issue), but creating a bunch of event listeners. Having a bunch of event listeners is expensive because the browser needs to maintain them all, and you need to be sure to remove the listeners if elements get deleted, to avoid memory leaks.
A better approach is to use event delegation. The basic idea is to take advantage of event bubbling: have a single event handler on a parent element, which then takes action depending on the event target.
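For illustration, a minimal sketch, assuming a #list container with <li> children (neither is from the question):

// One listener on the parent handles clicks on every current and
// future <li> inside it, thanks to event bubbling.
document.querySelector("#list").addEventListener("click", e => {
  const item = e.target.closest("li");
  if (item) {
    console.log("clicked:", item.textContent);
  }
});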
Related is this answer which demonstrates the performance tradeoffs well: Does adding too many event listeners affect performance?
Related
Let's say I have a bunch of click events, and one or a few of them are on the document object.
Which is better for performance: a click listener on each element, or this:
document.addEventListener('click', (e) => {
  if (e.target === firstObject) { firstFunction(e); }
  if (e.target === secondObject) { secondFunction(e); }
  if (e.target === thirdObject) { thirdFunction(e); }
});
Neither is "better." They each have their place in your toolkit.
A single delegated handler is more complex in that you have to do the kind of dispatch you're doing in your example (often using closest or matches), but has the advantage that if you're adding/removing elements you want to act on, you don't have to juggle event handlers.
Directly-assigned handlers are simpler (at least on elements that aren't added/removed), can prevent propagation, and let you keep your code more modular, more in keeping with the single responsibility principle.
Use the one that makes the most sense in a given context.
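As a rough sketch of that kind of dispatch (the container, class names, and handlers here are assumptions):

container.addEventListener("click", e => {
  // closest() matches clicks on the element itself or any descendant
  const editBtn = e.target.closest(".edit");
  if (editBtn) { handleEdit(e, editBtn); return; }

  const deleteBtn = e.target.closest(".delete");
  if (deleteBtn) { handleDelete(e, deleteBtn); }
});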
I think an event listener for each element is better when possible, and makes sense in terms of code quality. There are some cases, though, where a document-level event listener is needed (for example, to emulate click-outside behaviour).
That being said, here are some of the reasons that make a listener per element the better solution:
Event propagation is handled for you by the browser. If you decide to have only one event handler for the whole document, and you want to react to events on elements that are nested inside each other, then you will need to handle propagation yourself. That is to say, you need to manage the order in which functions run yourself, and you will end up with either a complex generic solution or specific, imperative, verbose code with a lot of if/else statements.
Easier-to-read code. This is even more true with recent web frameworks like React, Angular, etc. For example, suppose you want a listener for clicks on the document: where should that code reside, in which file, and which component should own it?
Removal of event listeners is handled for you by the browser APIs: the browser gives you a direct way to remove a listener (see the sketch after this list). If you decide to go with a global event listener, you have to handle de-registering the per-element behaviour yourself.
Your code will be harder to refactor and easier to break later, because you are coupling your document (or container) event listener to your components' internals. That is, if you decide to change the structure of those components later, your document-based event listener will probably break. This depends a lot on how you identify the target of clicks: if you identify targets by class names or other attributes, those attributes might change later for unrelated reasons like styling.
And if you depend on ids, for example, you might eventually get unexpected results: what happens if you add a listener for an element with a given id, remove that element, and then later add another element with the same id?
You miss out on the development tooling provided by browsers: devtools can show you the listeners attached to each element, which a single document-based listener can't give you.
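For reference, a sketch of the built-in add/remove pairing mentioned above (the element and handler names are made up):

function onItemClick(e) {
  console.log("item clicked");
}

const item = document.querySelector("#item");
item.addEventListener("click", onItemClick);

// later, before the element is removed from the page:
item.removeEventListener("click", onItemClick);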
It's better if you add them one by one, because then you can remove each listener whenever it's finished with. Moreover, you have more control over each event.
From the Polymer documentation about event listeners:
Use automatic node finding and the convenience methods listen and unlisten.
this.listen(this.$.myButton, 'tap', 'onTap');
this.unlisten(this.$.myButton, 'tap', 'onTap');
The listener callbacks are invoked with this set to the element instance.
If you add a listener imperatively, you need to remove it imperatively. This is commonly done in the attached and detached callbacks. If you use the listeners object or annotated event listeners, Polymer automatically adds and removes the event listeners.
Questions:
Why is it important to only add listeners for elements in the local DOM after attached(), and to then remove them in detached()?
Aren't event listeners deleted automatically when the observed DOM object is destroyed?
Would this also apply to when you listen to events for elements in your light DOM?
Basically, it's just best practice. Older browsers don't handle removal of old events correctly, and if functions hold scope references they can cause memory leaks. I guess it's a convention along the lines of "better safe than sorry."
Polymer only removes event listeners it added itself. If you add event listeners yourself (imperatively) you need to remove them yourself.
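For instance, a minimal sketch of the imperative pattern in a Polymer 1.x element (the element and handler names are made up):

Polymer({
  is: 'my-element',

  attached: function () {
    // added imperatively here, so it must be removed imperatively
    this.listen(this.$.myButton, 'tap', 'onTap');
  },

  detached: function () {
    this.unlisten(this.$.myButton, 'tap', 'onTap');
  },

  onTap: function (e) { /* ... */ }
});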
Code (and any data it closes over) might not be garbage collected if event listeners are still referencing it.
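A sketch of how that can happen (the names are illustrative):

function attach(el) {
  const bigData = new Array(1000000).fill(0);

  // The listener's closure captures bigData, so it stays reachable
  // (and uncollectable) until the listener is removed or el itself
  // becomes unreachable.
  el.addEventListener("click", () => console.log(bigData.length));
}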
I'm trying to speed up my event registration. Can anyone tell me which will take the least processing time:
$('#myElement').find('select.foo').on('click', 'option', handler1);
$('#myElement').find('select.bar').on('click', 'option', handler2);
or
$('#myElement').on('click', 'select.foo option', handler1);
$('#myElement').on('click', 'select.bar option', handler2);
I agree with the commenter who said "run some jsPerf tests". The first rule of optimizing is "don't prematurely optimize". Why are you optimizing? Are you having performance problems? If you are, are you sure you've isolated it to this code? That's what profiling will tell you. If it is this code, then you can figure out what method provides more performance.
I suspect that the first version will have higher performance because it will attach the event handlers to the element(s) that are closest to the event generation. The second version attaches the event handlers to elements higher in the DOM tree, so the events will have to propagate before they are caught, and then the event handler has to run a filter to see if the events come from matching elements.
Another way to look at this is that the first version identifies the elements that need listeners at page load time (doing the work then) and the second version identifies the elements that will respond to events as the events occur (spreading the work out and potentially creating more work -- possibly for good reason; see below).
Be very careful, though: often, the second approach is used when elements are inserted dynamically. It's the easiest way to solve the problem of event handlers not being attached to dynamically-added elements. So if you do have dynamically-added elements, then the second version might be your best option, performance considerations notwithstanding.
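A sketch of why that is (the markup details are assumptions):

// Delegated: the 'select.foo option' selector is matched at event
// time, so options inserted later are covered too.
$('#myElement').on('click', 'select.foo option', handler1);

// This new option still triggers handler1:
$('#myElement').find('select.foo').append('<option>new</option>');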
Attaching the listeners to the same object makes the act of attaching them take less time, but the process of catching and handling the events will be slower.
Attaching the listeners closer to the target takes more time, but handlers will fire quicker when the event occurs.
Thanks to this question for the answer - Should all jquery events be bound to $(document)?
Problem: I need to bind any number of event handlers to any number of elements (DOM nodes, window, document) dynamically at runtime, and I need to be able to update event bindings for dynamically created (or destroyed) nodes during the lifetime of my page. There are three options that I can see for tackling this problem:
I) Event delegation on window
II) Direct event binding on each node
III) Event delegation on common ancestors (which would be unknown until runtime and would potentially need to be recalculated when the DOM is altered)
What is the most efficient way of doing this?
A little context
I am working on a set of pages that need analytics tracking for user events (clicks, scrolling, etc.) and I want to be able to easily configure these event handlers across a bunch of pages without needing to write a script to handle the event binding for each instance. Moreover, because I may have the need to track new events in the future, or to track events on elements that are dynamically added to/removed from the page, I need to be able to account for changes in the DOM that occur during the lifetime of the page.
As an example of what I'm currently considering, I would like to create a function that accepts a config object that allows the programmer to specify default handlers for each event, and allow them to override them for specific elements:
Analytics.init({
  // default handlers for each event type
  defaultHandlers: {
    "click": function(e) { ... },
    "focus": function(e) { ... }
  },
  // elements to listen to
  targetElements: {
    // it should work with non-DOM nodes like 'window' and 'document'
    window: {
      // events for which the default handlers should be called
      useDefaultHandlers: ['click'],
      // custom handler
      "scroll": function(e) { ... }
    },
    // it should work with CSS selectors
    "#someId": {
      useDefaultHandlers: ['click', 'focus'],
      "blur": function(e) { ... }
    }
  }
});
Sources
SO: Should all jQuery events be bound to document?
SO: How to find the nearest common ancestors of two or more nodes
jQuery docs: $.fn.on()
I usually delegate events on the document.documentElement object because:
It represents the <html> element on the page, which holds all the HTML tags the user can interact with.
It is available for use the moment JavaScript starts executing, negating the need for a window load or DOM ready event handler
You can still capture "scroll" events
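For example, in the analytics context of this question, something like the following sketch (the data-track attribute and sendAnalytics helper are hypothetical):

document.documentElement.addEventListener("click", e => {
  // only react to clicks on (or inside) tracked links
  const link = e.target.closest("a[data-track]");
  if (link) {
    sendAnalytics("click", link.dataset.track); // hypothetical helper
  }
});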
As for the efficiency of event delegation: the more nodes the event must bubble through, the longer it takes. However, we're talking ~1 to 2 ms of difference -- maybe. It's imperceptible to the user. It's usually the processing of a DOM event that introduces a performance penalty, not the bubbling of the event from one node to another.
I've found the following things negatively affect JavaScript performance in general:
The more nodes you have in the document tree, the more time consuming it is for the browser to manipulate it.
The greater the number of event handlers on the page the more JavaScript slows down, though you would need 100s of handlers to really see a difference.
Mainly, #1 has the biggest impact. I think trying to eke out a performance boost in event handling is a premature optimization in most cases. The only case I see for optimizing event handling code is when you have an event that fires multiple times per second (e.g. "scroll" and "mousemove" events). The added benefit of event delegation is that you don't have to clean up event handlers on DOM nodes that will become detached from the document tree, allowing the browser to garbage collect that memory.
(From the comments below) wvandell said:
The performance costs of event delegation have little to do with the actual 'bubbling' of events ... there is a performance hit incurred when delegating many selectors to a single parent.
This is true; however, let's think about perceived performance. Delegating many click events won't be noticeable to the user. If you delegate an event like scroll or mousemove, which can fire upwards of 50 times per second (leaving 20 ms to process each event), then the user can perceive a performance issue. This comes back to my argument against premature optimization of event handler code.
Many click events can be delegated with no problem on a common ancestor, such as document.documentElement. Would I delegate a "mousemove" event there? Maybe. It depends on what else is going on and if that delegated "mousemove" event feels responsive enough.
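For those high-frequency events, one standard mitigation (not specific to delegation) is to do the expensive work at most once per animation frame; a sketch, where handleMove is a hypothetical handler:

let pending = null;
document.documentElement.addEventListener("mousemove", e => {
  if (pending === null) {
    requestAnimationFrame(() => {
      handleMove(pending); // hypothetical expensive handler
      pending = null;
    });
  }
  pending = e; // keep only the most recent event for this frame
});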
After my previous question I have this one, which might be better.
I need to add a lot of items to the page, and I see that sometimes appendChild+fragment is faster than innerHTML. Anyway, now I need to know the fastest way to add elements and attach event listeners too.
One way I see is to listen on the window object and then filter.
Pros:
Only add the listener once, then never again
No memory trap if you forget to remove event listeners before removing elements, as the event is added on the window object
others?
Cons:
Maybe slower?
Slower, as we need to filter the items and will be listening for everything every time... maybe too slow at that point, I don't know.
The other way I know is to listen on the created element.
But with innerHTML, I think only the window-object listener approach works.
Any other opinions?
thanks
The best practice for handling "multiple" event handlers on "many" elements is event delegation, which is basically what you described.
Create a listener on the closest shared parent (document.body will of course work for any element, but maybe there is a closer common parent node below it).
Performance should not be the issue there. It's far worse to create something like 200 event handler functions instead of one.
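A sketch combining both ideas from the question, a DocumentFragment for fast insertion plus one delegated listener (the names are assumptions):

const container = document.querySelector("#list");
const fragment = document.createDocumentFragment();

// build all items off-DOM, then insert them in one operation
for (let i = 0; i < 1000; i++) {
  const li = document.createElement("li");
  li.textContent = "Item " + i;
  fragment.appendChild(li);
}
container.appendChild(fragment);

// one delegated listener instead of 1000 individual ones
container.addEventListener("click", e => {
  const li = e.target.closest("li");
  if (li) console.log("clicked", li.textContent);
});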