I have an object, and when that object is instantiated, it attaches a click event handler to the <body>. (The process of attaching happens within that object's definition)
This object is instantiated when the URL is changed (when the user navigates to another page).
There is always exactly one object of this type 'per page', and as previously noted, it is re-instantiated when the page changes, and the old object no longer exists.
The attaching process looks like this:
var doc = $(document.body);
doc.off('click');
doc.on('click', function () {
    do_stuff();
});
I am using this because I noticed that if I simply attach the event handler, omitting the .off(), the handler fires more and more times on a single click as I navigate through the site (because it gets attached/registered again with every instantiation of that object).
Now, I could move this attachment process somewhere else, for example into the code section where the instantiation occurs, so that it doesn't depend on that object and the handler is guaranteed to be attached only once. But that would deprive me of access to some local variables, and I would have to make them accessible to that code section.
My question is: does this cost a lot performance-wise? I have noticed some posts here on Stack Overflow emphasizing that this is not optimal, but most of their examples showed the .off()/unbinding happening inside the .on()/binding.
IMPORTANT NOTE: I am using backbone.js. It is a 'one-page site'. The objects are basically views and their instantiation occurs in the router.
In short, no, there's no meaningful performance penalty to using off. Now I won't swear on a stack of bibles that it's impossible for off to cause a performance issue, but I will say that in 99 out of 100 (maybe more like 999 in 1,000 or 9,999 in 10,000) real-world cases you will never have to worry about off causing a performance problem.
To put it another way, off won't ever cause a noticeable performance slow-down unless you do something really crazy with it, or have a really crazy site that inadvertently does something really crazy with it.
NOT calling off on the other hand can cause lots of issues, performance-related and otherwise.
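As a rough sketch (not from the original post) of how to avoid the blanket .off('click') in this Backbone setup: namespacing the event scopes the removal to this view's own handler, and removing the old view before creating the new one keeps handlers from piling up. The MyView and currentView names here are made up for illustration.

// Hypothetical sketch: namespace the body handler so .off() only removes
// this view's binding, and tear the old view down when the route changes.
var MyView = Backbone.View.extend({
    initialize: function () {
        $(document.body).on('click.myview', function () {
            do_stuff(); // still has access to the view's closure
        });
    },
    remove: function () {
        $(document.body).off('click.myview'); // detach only our handler
        return Backbone.View.prototype.remove.apply(this, arguments);
    }
});

// In the router: dispose of the previous view before instantiating the next.
if (this.currentView) { this.currentView.remove(); }
this.currentView = new MyView();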
TLDR Below
JS Fiddle To Demo
I've been deeply involved in recreating the tools that form the foundations of premier JS libraries in order to improve my skills. Currently I'm working on functional data binding à la Angular.
The idea of data binding is to take data and bind it to elements so that, when the data is manipulated, every subscribed element changes accordingly. I've gotten it to work, but one thing I hadn't considered going in was the issue of innerHTML vs. value. Depending on the element, you need to change one or the other (in the demo above you'll see that I had to single out the button element in a conditional statement because it has both, but that's kind of a fringe case).
The issue is that in order to capture a span tag update I needed an event to trigger, and the easiest one to work with for text boxes/textareas was 'keyup'.
In my function, then, if you pass in an element with no value property, we assume you're going to be updating innerHTML, and we set up an observer to detect whether the element ever mutates; if it does, the observer dispatches a 'keyup' event.
// If the element has no value property, fall back to innerHTML and
// synthesize a 'keyup' whenever its child nodes mutate.
if (watchee.value == void(0)) {
    var keyUpEvent = new Event('keyup');
    var observer = new MutationObserver(function (mutations) {
        mutations.forEach(function (mutation) {
            watchee.dispatchEvent(keyUpEvent);
        });
    });
    observer.observe(watchee, {
        childList: true
    });
}
Now it may just be my paranoia, but it seems like I might be tunneling into a can of worms by faking 'keyup' on an element that doesn't natively have that support.
TLDR:
I'm curious whether there's an alternative way to make, e.g., a span tag reactive other than faking a 'keyup'/'keydown'/'change' event. For instance, is there a way I can make my own pure event (by pure I mean not reliant on other events) that checks whether innerHTML or value has changed and then performs a function? I know this is probably possible with a timer, but I feel like that might hurt performance.
EDIT: just an aside. In the demo, the function called hookFrom works by taking a DOM node and returning a function that takes the receiving DOM node, and it keeps returning a function that will take additional receiving DOM nodes:
hookFrom(sender)(receiver);
hookFrom(sender)(receiver)(receiver2);
hookFrom(sender)(receiver)(receiver2)(receiver3)(receiver4)...(receiver999)...etc
JS Fiddle To Demo (same as above)
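For readers without access to the fiddle, here is a rough sketch (not the author's actual demo code) of how a chaining binder like hookFrom could be structured; the value/innerHTML fallback mirrors the conditional described above.

// Hypothetical sketch of a chainable hookFrom: each call binds one more
// receiver to the sender's 'keyup' and returns the same binder function.
function hookFrom(sender) {
    function bindReceiver(receiver) {
        sender.addEventListener('keyup', function () {
            var value = (sender.value !== undefined) ? sender.value : sender.innerHTML;
            if (receiver.value !== undefined) {
                receiver.value = value;
            } else {
                receiver.innerHTML = value;
            }
        });
        return bindReceiver; // enables hookFrom(a)(b)(c)...
    }
    return bindReceiver;
}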
There is nothing inherently wrong with creating a similar event on a DOM node that doesn't natively have that functionality. In fact this happens in a lot of cases when trying to polyfill functionality for separate browsers and platforms.
The only issue with doing this sort of DOM magic is that it can cause redundancy with other events. For instance, the example given in this article, https://davidwalsh.name/dont-trigger-real-event-names, shows how a newly minted event using an existing event name can cause problems.
The advice is useful, but of negligible concern in this specific case. The code adds the same functionality across text boxes, divs, spans, etc., and they are all intentionally handled the same way; if the event were to bubble up to another handler, that would be intentional and planned.
In short: There is a can of worms that one can tunnel into while faking already explicitly defined event names, but in this case, the code is fine!
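That said, if reusing 'keyup' ever becomes a concern, one way to sidestep the collision the article warns about is to dispatch a custom event name from the observer and have subscribers listen for that instead. A minimal sketch (the 'datachange' name is made up here):

// Minimal sketch: emit a custom 'datachange' event instead of faking 'keyup',
// so native keyboard handlers are never triggered by DOM mutations.
var observer = new MutationObserver(function () {
    watchee.dispatchEvent(new CustomEvent('datachange', {
        detail: { content: watchee.value !== undefined ? watchee.value : watchee.innerHTML }
    }));
});
observer.observe(watchee, { childList: true, characterData: true, subtree: true });

watchee.addEventListener('datachange', function (e) {
    console.log('new content:', e.detail.content);
});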
I get wordy sometimes: tl;dr: read the bold text.
The motivation behind deprecating Mutation Events is well understood; their efficacy in achieving many types of tasks is questionable.
However, today, I have discovered a use for them that is highly dependent on those very same undesired properties.
I will first present the question, and then present the reasons that lead me to the question, because the question will be absurd without it.
Is it possible to use the new Mutation Observers in a way that we can have the VM stop at the instant of the change (like the DOM3 Mutation Events do), rather than report it to me after the fact?
Basically, the very thing that makes the Mutation Observer performant and "reasonable" is its asynchronicity, which means (necessarily, it seems) throwing away the stack, pushing a mutation record to a list, and delivering the list to qualified Observers at the next tick or several ticks later.
What I am after is precisely that stack trace of the DOM3 Mutation Event. I really, really hope this will work, but basically the Mutation Event callback (which I am allowed to write) will have a stack trace that leads me back to the actual code that created the element I'm listening for. So in theory I'd write a Mutation Event handler like this:
// NOT in an onload cb
$("div#haystack").on('DOMNodeInserted', function (evt) {
    if (is_needle(evt.target)) {
        report(new Error().stack); // please, Chrome, tell me what code created the needle
    }
});
This gives me the golden answer.
It seems that Mutation Observers will make it impossible to extract this information. What, then, am I to do once Mutation Events are completely taken out? They have been deprecated for a while now.
Now, to explain a little better the actual circumstances, and why this matters.
I have been trying to kill a bug which I describe here: I have built a full-DOM serializer which nicely spits back out every element that exists on the webpage, and in comparing them, the broken page and the working page are identical. I have tested this and it is pretty thorough. It captures every little thing that's different: whatever hovery thing my mouse happens to be over, the CSS class that consequently gets set will be reflected in the HTML dump. Any text of any form on the page will show up if you search for it (provided it doesn't span across elements). All inline JS (and, more importantly, all differences between inline JS) is present.
I have then gone on to verify that the broken page is missing several event handlers. So none of the clickable items respond to hover or clicks, and therefore no useful work can be done on the interactive form. This is not known to be the only problem, but it does fully explain the behavior. Given that the DOM has no differences in inline JS that would explain the difference in behavior, it must be the case that either the content of the linked resources or the invisible properties of elements (event handlers being in this category) is causing the difference in behavior.
Now I know which elements are supposed to have handlers, but I know not where in the comically large code base (ballpark: 200K lines of JS all loaded as one resource, assembled by several M lines of Perl serverside code) lies the code that assigns the events.
I have tried JS methods for watching modifications of object properties, such as this one (there are many, but they all work on the same principle of installing setters and getters), which works the first time and then breaks the app afterward. Apparently assigning setters and getters causes the system to stop functioning. It's not clear to me how I can take that approach of watching property assignments to the point where I can get a list of code locations that touch a specific element. It might be feasible, but surely not if I can only fire it once and it breaks everything thereafter.
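For context, the setter/getter technique referred to above generally looks something like the following simplified sketch (the real helpers vary), which also hints at why it can interfere with code expecting a plain data property:

// Simplified sketch of the watch-a-property-via-accessors idea: every write
// to obj[prop] now runs through this setter, so you can log a stack trace.
function watchProperty(obj, prop) {
    var current = obj[prop];
    Object.defineProperty(obj, prop, {
        get: function () { return current; },
        set: function (value) {
            console.trace('assignment to ' + prop, value); // where did this come from?
            current = value;
        },
        configurable: true
    });
}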
So watching variables with JS is out.
I might be able to manually instrument jQuery itself, so that when my is_needle() succeeds on the element processed by jQuery, I log all event-related functions performed by jQuery on that element. This is dreadful, and I will resort to this if my Mutation Observer approach fails.
There are yet more ways to skin the cat, of course. I could use the handy getEventListeners() on my target element when it is working to get the list of event listener functions that are on it, then look at the code there, search the code base to find those functions, and then analyze the code to find out all the places where those functions are inserted into event handlers. That is actually pretty straightforward.
Now I know which elements are supposed to have handlers, but I know not where in the comically large code base (ballpark: 200K lines of JS all loaded as one resource, assembled by several M lines of Perl serverside code) lies the code that assigns the events.
Have you considered simply instrumenting .addEventListener function calls one way or another, e.g. via debugger breakpoints or by modifying the DOM element prototype to replace it with a wrapper method? This would be browser-specific but should be sufficient for your debugging needs.
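A rough sketch of the wrapper idea (browser-specific, debugging only, and assuming the browser exposes EventTarget.prototype; is_needle is the asker's own predicate):

// Debug-only sketch: wrap addEventListener so every registration on a
// matching element logs a stack trace pointing at the registering code.
var realAdd = EventTarget.prototype.addEventListener;
EventTarget.prototype.addEventListener = function (type, listener, options) {
    if (this instanceof Element && is_needle(this)) {
        console.trace('addEventListener("' + type + '") on needle element');
    }
    return realAdd.call(this, type, listener, options);
};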
You also might want to try Firefox's tracer, available in nightlies I think. It basically records function execution without the need for breakpoints or instrumented code.
How do I completely unbind inline javascript events from their HTML elements?
I've tried:
undelegating the event from the body element
unbinding the event from the element
and even removing the event attribute from the HTML element
To my surprise at least, only removing the onchange attribute (.removeAttr('onchange')) was able to prevent the event from firing again.
<input type="text" onchange="validateString(this)"></input>
I know this is possible with delegates and that's probably the best way to go, but just play along here. This example is purely hypothetical just for the sake of proposing the question.
So the hypothetical situation is this:
I'm writing a javascript validation library that has javascript events tied to input fields via inline HTML attributes like so:
<input type="text" onchange="validateString(this)"></input>
But I'd like to make the library a little better by unbinding my events, so that people working with this library in a single-page application don't have to manage my event handlers, and so that they don't have to clutter their code by wiring up input events to functions in my hypothetical validation library... whatever. None of that's true, but it seems like a decent use case.
Here's the "sample" code of Hypothetical Validation Library.js:
http://jsfiddle.net/CoryDanielson/jwTTf/
To test, just type in the textbox and then click elsewhere to fire the change event. Do this with the web inspector open and recording on the Timeline tab. Highlight the region of the timeline corresponding to when you fired the change event (fire it multiple times) and you'll see the event listener count (in the window below) increase by 100 on each change event. If managed and removed properly, each event listener would be removed before a new input is rendered, but I have not found a way to do that with inline JavaScript events.
What that code does is this:
onChange, the input element triggers a validation function
That function validates the input and colors the border if successful
Then after 1 second (to demonstrate the memory leak) the input element is replaced with identical HTML 100 times in a row without unbinding the change event (because I don't know how to do that; that's the problem here). This simulates changing the view within a single-page app. It creates 100 new event listeners in the DOM, which is visible through the web inspector.
Interesting note: $('input').removeAttr('onchange') will actually prevent the onchange event from firing in the future, but it does not garbage-collect the event listener/DOM bookkeeping that is visible in the web inspector.
This screenshot is after the change event fires 3 times. Each time, 100 new DOM nodes are rendered with identical HTML, and I've attempted to unbind the onchange event from each node before replacing the HTML.
Update: I came back to this question and just did a quick little test using the JSFiddle to make sure that the answer was valid. I ran the 'test' dozens of times and then waited -- sure enough, the GC came through and took care of business.
I don't think you have anything to worry about. Although the memory can no longer be referenced and will eventually be garbage collected, it still shows up in the Web Inspector memory window. The memory will be garbage collected when the GC decides to garbage collect it (e.g., when the browser is low on memory or after some fixed time). The details are up to the GC implementer. You can verify this by just clicking the "Collect Garbage" button at the bottom of the Web Inspector window. I'm running Chrome 23 and after I enter text in your validation box about 5 or 6 times, the memory usage comes crashing down, apparently due to garbage collection.
This phenomenon is not specific to inline events. I saw a similar pattern just by repeatedly allocating a large array and then overwriting the reference to that large array, leaving lots of orphaned memory for GC. Memory ramps up for a while, then the GC kicks in and does its job.
My first suggestion would have been to use off('change'), but it seems you've already tried that. It's possible that the reason it's not working is that the handler wasn't attached with .on('change'). I don't know too much about how jQuery handles listeners like this internally, but try attaching with .on('change', function () {...}) or .bind('change', function () {...}) instead.
I am creating mouse click events and trying to dispatch them to some node several times in a row. For that I am using the same MouseEvent object, and for some reason this approach does not work. Yet when I create the event manually each time, the system works. Does anybody know the reason for this behavior?
I've tried changing the timeStamp, but the problem still occurs. I can solve the problem as I mentioned before, but I am interested in how MouseEvent and the corresponding dispatching and handling subsystems really work. The MouseEvent specification that I've found on the MDC pages seems to lack a lot of information.
Thanks for the help!
This is actually a security mechanism, dispatching an event that has been dispatched before isn't allowed. An event always has additional data associated with it, for example whether it comes from a trusted source (user's keyboard rather than JavaScript code). Some attacks (mostly against MSIE because it had mutable event objects) were using this - they caught a trusted event, changed it and dispatched it again elsewhere (changing might not always be required, dispatching it at a different element is enough for some attacks). In the end disallowing redispatching of events turned out to be the best solution. After all, this functionality isn't really required: creating a new event object with identical properties (minus hidden data) isn't exactly hard.
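In practice that means building a fresh event object per dispatch; a minimal sketch (the target id is made up):

// Minimal sketch: construct a new MouseEvent for every dispatch instead of
// reusing one object across dispatches.
function fireClick(node) {
    var evt = new MouseEvent('click', { bubbles: true, cancelable: true, view: window });
    node.dispatchEvent(evt);
}

var target = document.getElementById('target');
fireClick(target);
fireClick(target); // works repeatedly because each call creates a new event

(In older browsers the same thing is done with document.createEvent('MouseEvents') and initMouseEvent.)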
Pretty much all the security issues in this area were related to the file input control. Some time ago Firefox decided to change the file input UI radically and disallow entering the file name directly. I wonder whether this change made redispatching of events a non-issue. I doubt that anybody will be willing to risk opening this can of worms again however.
I think the reason you can't reuse the same MouseEvent object is because the event system maintains some internal state in the event objects so they can implement things like bubbling and cancelling. You may just have to stick with creating distinct event objects.
Reading Document Object Model Events may give you a better understanding of how the DOM event system works.
Without knowing what you have now, I'll just go on assumptions.
Make an event function:
function clickHandler(event) {
//do something
}
Attach it:
obj.onclick = clickHandler;
And you can do this multiple times to multiple objects.
Morning,
When using Element#observe(), is it necessary to call Element#stopObserving() to completely get rid of the event handler?
Or will some inbuilt mechanism realize that the handler is no longer necessary when the Element gets removed in any way (.update() on a parent, not just .remove()) ?
I'm updating large dynamic lists, with several bindings per entry, every now and then. Are there drawbacks to using something like:
ul.update('');
data.each(.. ul.insert(X); X.bind(..); ..);
Thanks!
If an element is no longer part of the DOM, then garbage collection will probably deal with its handlers, but that will depend on the browser.
I would suggest you not worry about what is out of your control and look at using Event.on() instead.
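As a sketch of what the Event.on() suggestion can look like in Prototype 1.7 (the list id and selector are made up): the handler lives on the <ul> and uses delegation, so entries can be rebuilt with update()/insert() without re-binding anything, and the returned handler can be stopped explicitly if ever needed.

// Sketch: one delegated handler on the <ul>; freshly inserted <li> entries
// are covered automatically, so rebuilding the list leaks nothing.
var entryClicks = $('myList').on('click', 'li.entry', function (event, element) {
    // element is the matched <li>, even if it was inserted after binding
});

// Later, if the whole list goes away for good:
entryClicks.stop();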