Suppose, for example, I have a spreadsheet and want to attach handlers to cells to handle a user's interactions. Typically, you would attach the handlers to the cell component itself, but if you are creating thousands of cells, this seems rather inefficient.
If I have a component hierarchy where SheetComponent -> CellComponents, how can I attach a single handler to SheetComponent to handle each cell interaction? When the user interacts, I would like access to the CellComponent's props in order to identify which cell was clicked.
DOM events, when using React, are synthetic: they are all handled at the document level and then dispatched to the appropriate handlers. So, for DOM events, a handler hasn't actually been attached to each element for each event. However, there is still overhead that can't be avoided, as your components need to be notified that an event has occurred.
For communicating to the SheetComponent from a CellComponent, you could simply add your own event as a property: onCellActivated, for example. If you have more than a couple, then instead of having lots of events that need to be wired up, you could have a single generic event with an argument indicating the type of event:
onCellEvent({ eventType: 'activation', cell: this })
That pattern starts to look more like an Action Creator/Action following the Flux pattern (without the Dispatcher and Store of course).
Instead of relying on your own events, you could potentially use a Flux/Action/Dispatcher pattern to relay key events to a centralized location (like the SheetComponent) and handle the interaction from there. With the dispatched action, you would include the key information about the cell where the interaction took place.
But, you might want to test the performance of a more traditional event model first as this might add extra complexity with little actual benefit.
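The generic cell-event idea described above can be sketched in plain JavaScript. This is a simplified model of the pattern, not actual React code; the shapes of the cell object and the action payload are assumptions for illustration.

```javascript
// Sketch: each cell forwards interactions to one handler owned by the sheet,
// using a single generic event with an eventType argument.
function makeCell(row, col, onCellEvent) {
  return {
    row,
    col,
    activate() {
      // One generic event instead of many separately wired-up events.
      onCellEvent({ eventType: 'activation', cell: this });
    },
  };
}

const received = [];
function handleCellEvent(action) {
  // The "sheet" identifies the cell from the action payload.
  received.push(`${action.eventType}:${action.cell.row},${action.cell.col}`);
}

const cell = makeCell(2, 3, handleCellEvent);
cell.activate();
console.log(received); // ['activation:2,3']
```

In React, `onCellEvent` would simply be passed down to each cell as a prop, and the sheet would own the single handler.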
You might also want to take some inspiration from Facebook's FixedDataTable component, written for React. It's designed to handle thousands of rows of data efficiently.
Let's say I have a bunch of click events, and one or a few of them are on the document object.
Which is better for performance: a click handler on each element, or this?
document.addEventListener('click', (e) => {
  if (e.target === firstObject) { firstFunction(e); }
  if (e.target === secondObject) { secondFunction(e); }
  if (e.target === thirdObject) { thirdFunction(e); }
});
Neither is "better." They each have their place in your toolkit.
A single delegated handler is more complex in that you have to do the kind of dispatch you're doing in your example (often using closest or matches), but has the advantage that if you're adding/removing elements you want to act on, you don't have to juggle event handlers.
Directly-assigned handlers are simpler (at least on elements that aren't added/removed), can prevent propagation, and let you keep your code more modular, more in keeping with the single responsibility principle.
Use the one that makes the most sense in a given context.
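The dispatch step of a delegated handler, as mentioned above, is the part that adds complexity. A rough sketch follows; the handler table and the `targetId` field are made-up names, and the dispatch logic is kept as a pure function so it is easy to follow outside a browser (in a real page you would typically match with `event.target.closest(...)` or `matches(...)`).

```javascript
// Sketch: one delegated handler dispatching to per-element functions.
const handlers = {
  firstObject: (e) => `first:${e.type}`,
  secondObject: (e) => `second:${e.type}`,
};

function dispatch(event) {
  // In a real page: document.addEventListener('click', dispatch);
  // here, targetId stands in for identifying event.target.
  const handler = handlers[event.targetId];
  return handler ? handler(event) : null;
}

console.log(dispatch({ type: 'click', targetId: 'firstObject' })); // 'first:click'
console.log(dispatch({ type: 'click', targetId: 'unknown' }));     // null
```

The `null` branch is the delegated model's advantage: elements added or removed later need no handler bookkeeping, only an entry in the table.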
I think an event listener for each element is better where possible, and makes sense in terms of code quality. There are some cases, though, where a document event listener is needed (for example, to emulate click-outside behaviour).
That being said, here are some of the reasons that make an event listener per element the better solution:
Event propagation is handled for you by the browser. If you decide to have only one event handler for the whole document, and you want event listeners on elements that are nested inside each other, then you will need to handle propagation yourself. That is to say, you need to manage the order in which functions run, and you will end up with either a complex generic solution or specific, imperative, verbose code with a lot of if/else statements.
Easier-to-read code. This is even more true for recent web frameworks like React, Angular, etc. For example, suppose you want a listener for clicks on the document: where should that code reside, in which file, and which component should own it?
Removal of event listeners is handled for you by the browser APIs: the browser gives you a way to remove event listeners. If you decide to go with a global event listener, then you have to handle removing event listeners yourself.
Your code will be harder to refactor and easier to break later, because you are coupling your document (or container) event listener to your components' internals. If you decide to change the structure of those components later, your document-based event listener will probably break. This depends a lot on how you identify the target of clicks: for example, if you identify targets by class names or other attributes, those attributes might change later for reasons like styling.
And if you depend on ids, you might eventually get unexpected results: what happens, for example, if you add a listener for an element with a given id, remove that element, and later add another element with the same id?
You miss out on the development tooling provided by browsers. Browsers can show you the listeners attached to an element; with a document-based event listener, you won't be able to see that.
It's better if you add them one by one, because then you can remove each event listener as soon as it has finished its job. Moreover, you have more control over each event.
I have a page that uses a Kendo MVVM approach for two different elements, one providing file search results, the other a document upload facility.
The problem I am encountering is to do with the change event that both elements use. It seems that when one control fires a change event, it is picked up by the other control, which processes the event and passes it on, at which point it is picked up by the first control's change handler again, which processes it and passes it back. As you might expect, after around 1,500 repetitions of this cycle I see an Uncaught RangeError: Maximum call stack size exceeded message as the JavaScript engine runs out of stack space.
At first I thought the problem was that the container of the second model was contained within the first, but even if they are completely separate on the page it seems as though the problem still shows up, so now I'm wondering whether the problem is related to the event being global to the page.
It seems that anything I do in my event handler in terms of trying to stopPropagation or stopImmediatePropagation - or even to set the event to null altogether - makes no difference to this behaviour. Tracing the call stack I can see it looping through Kendo's trigger call then through the event binding on my object and jQuery's dispatch loops that lead it back to Kendo, where it triggers the event handler on the other observable object.
Removing my bindings does not affect the problem, the change event is still bounced back and forth between Kendo and jQuery in the same way, it just doesn't run through my code.
The answer here was not a direct consequence of Kendo itself, so it would have been hard to answer from the question as I set it.
Inside the Observable container that was raising this error, I was using Isotope for layout. The step I had missed was that I had a relationship like this:
Parent [Observable]
-> Container
-> Child
-> Child
-> Child
One of the things that Isotope brings to the party is that for each item in the child collection, it adds a reference to its parent object.
When the child is Observable that creates a structure like this:
Parent [Observable]
-> Container <--┐
-> Child ---|
-> Child ---|
-> Child ---┘
This is an ideal situation for events to be propagated from child to parent, but because the properties in question were being automagically added by the libraries in question it was very hard to troubleshoot.
The solution was to remove the Container layer from the Observable model - it didn't need to trigger anything on change and so I wrapped it in a simple getContainer() closure and used that everywhere I was previously using it as a property. This protected it from the Observable object, breaking the circular reference without harming the functionality.
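The closure approach described above can be sketched roughly as follows. The Observable and Isotope details are simplified away here; `createModel` and the model's properties are illustrative names, not the actual application code.

```javascript
// Instead of exposing the container as a property on the observable model
// (where library-added back-references can create a cycle), hide it in a
// closure so it is reachable but not part of the observable object itself.
function createModel(containerElement) {
  const model = {
    title: 'sheet',            // observable properties stay on the model
    getContainer() {
      return containerElement; // the container lives only in the closure
    },
  };
  return model;
}

const container = { id: 'container', children: [] };
const model = createModel(container);

console.log(model.getContainer() === container); // true
console.log(Object.keys(model));                 // ['title', 'getContainer']
```

Because the container is not an enumerable property of the model, an observable wrapper that walks the model's properties never sees it, so the circular reference never forms.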
It may also be relevant that as far as I can tell the initiating event was a DOM change event rather than one of Kendo's own events. The problem may have been avoidable by using a custom Kendo namespace but that would have been a significant change in a complex application and guaranteed to cause a lot of side effects.
I'm having an architecture difficulty with an application designed in Backbone.
I've got cascaded, hierarchical views, i.e. the root view has header, middle and footer views. Each of them consists of some lower-level views, e.g. the header view consists of tabs, preferences and login/logout views. It's just view aggregation.
I also have a configuration model, which has several attributes and is loaded via AJAX (a standard Backbone fetch). The model attributes are displayed in the interface using popups, menus etc. to let the user choose his settings. When the user changes a setting, potentially many parts of the app will have to re-render. The configuration model holds "state" properties (e.g. the property currentPeriod selects one of the periods that were fetched via AJAX).
Inside views, I use listenTo(this.model, 'change:currentPeriod', this.render) to make this view re-render when anything is changed in the configuration.
I set all my default state attributes inside model::parse. The problem is that if I have 10 attributes to set (after parse is over) and probably each of them will trigger some events, many of them will be run multiple times (which is not what I want).
I was looking for a possibility to set current state attributes inside parse with the {silent:true} option - then no events would be triggered. I hope some of you already had the same problem and there exists an easy solution. Thanks in advance!
You can either fire all events "onSet"/"onChange" or none; in other words, you can pass silent: true, or not, but it's a binary choice. You can't say "set foo, and by the way only fire off this event, not that one".
If you want that level of control I'd recommend using silent: true and then manually triggering the events you do want.
If that doesn't work for you, I'd recommend changing how you bind your events, so that you only bind a given event once; that way it won't repeat. And if that doesn't work, you can just make your render method work even if it's run multiple times; that way the event can trigger render multiple times, but it won't hurt anything.
During fetch the reference to options remains the same between parse and set, so you could change the value of options.silent and the changes will carry over.
See this fiddle for an example of this working.
One way to do this would be to create a proxy (a bare Backbone.Events object) and have your views listen to it. The proxy object would listen to all on the model and simply queue up the events fired by the model (eliminating duplicative events) until the model fires an "I'm done" event (which you'd trigger at the end of parse); then the proxy would fire off all the queued events and flush the queue.
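That proxy idea can be sketched in plain JavaScript as follows. The event names and the flush trigger are assumptions; a real implementation would extend Backbone.Events and flush when the model signals it has finished parsing.

```javascript
// Queue model events, dropping duplicates, until an "I'm done" signal
// flushes them to the real listeners in one pass.
function createEventProxy() {
  const queue = new Set();       // Set eliminates duplicate event names
  const listeners = {};
  return {
    on(name, fn) {
      (listeners[name] = listeners[name] || []).push(fn);
    },
    notify(name) {               // called for every event the model fires
      queue.add(name);
    },
    flush() {                    // called at the end of parse
      for (const name of queue) {
        (listeners[name] || []).forEach((fn) => fn(name));
      }
      queue.clear();
    },
  };
}

const proxy = createEventProxy();
const calls = [];
proxy.on('change:currentPeriod', () => calls.push('render'));

proxy.notify('change:currentPeriod');
proxy.notify('change:currentPeriod'); // duplicate: queued only once
proxy.flush();

console.log(calls); // ['render'] — render ran once, not twice
```

Views then listen to the proxy instead of the model, so setting ten attributes in parse triggers at most one render per distinct event.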
I am building a library which augments the standard JS input events.
This means firing off a great many events for multiple touches at input sampling rate (60Hz) in the browsers of ARM devices.
I have looked at this jsperf and it produces about 250,000 ops/sec on my 1.7 GHz Sandy Bridge i5, and I will test my iPhone 5 and Nexus 7 on there shortly.
My question is will an unlistened-to event be processed quickly?
Also is there a way to skip processing for generating an event if I know the event is not being listened to?
I think that jsperf muddies the waters when it comes to dispatching and handling events, because event listeners are also added and removed on every test loop iteration. It sounds like your use cases involve high-frequency dispatching and handling of events, but comparatively low demands for adding and removing event handlers.
I put together a jsperf that focuses on wrapping a native event with a custom event and then dispatching the custom event. The test scenarios are based on:
Presence or absence of a listener for the custom event
Immediate vs lazy initialization of data associated with the custom event
The impact of the above factors when dealing with "light" vs "heavy" initialization demands
To test the "heavy" vs "light" initialization demands, each custom event creates an array of either 10 or 1000 random numbers.
Regarding lazy initialization of custom event data:
When a listener was present, the lazily init'd event was usually a bit slower. For "light" data it was sometimes as low as 0.8x the speed of the immediately init'd event.
Without a listener, the lazily init'd data was usually faster for both "light" and "heavy" data. For "heavy" data, it was generally 2x-10x faster.
My question is will an unlistened-to event be processed quickly?
In everything I've seen, an unlistened-to event is always processed faster than an event that has an associated handler. But I think this will only have a large impact if the event handlers themselves are rather slow and costly. Also, the higher the cost of creating the custom event, the less this will matter if the custom event is created either way.
Also is there a way to skip processing for generating an event if I know the event is not being listened to?
Two things come to mind:
Expose the knowledge of whether the event is being listened to or not to the process that generates and dispatches the event, and then have that process skip creating the event if it knows nothing is listening.
It sounds like the code that generates the custom event will, at some point, listen for a native event and then create one or more custom events based on it. In this scenario, you could ignore the native event until a listener for your custom event has been added, and then ignore the native event again once all listeners for the custom event have been removed.
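Both suggestions amount to the same guard, sketched below. All names here (`createEmitter`, `emitFromNative`, `buildCount`) are made up for illustration; the point is checking the listener count before paying for payload construction.

```javascript
// Only build the (potentially expensive) custom event payload
// when at least one listener is registered.
function createEmitter() {
  const listeners = [];
  let built = 0; // how many payloads were actually constructed
  return {
    addListener(fn) { listeners.push(fn); },
    removeListener(fn) {
      const i = listeners.indexOf(fn);
      if (i !== -1) listeners.splice(i, 1);
    },
    emitFromNative(nativeEvent) {
      if (listeners.length === 0) return;  // skip the work entirely
      built += 1;
      const custom = { type: 'custom', detail: nativeEvent }; // lazy init
      listeners.forEach((fn) => fn(custom));
    },
    get buildCount() { return built; },
  };
}

const emitter = createEmitter();
emitter.emitFromNative({ x: 1 }); // nobody listening: no payload built
emitter.addListener((e) => {});
emitter.emitFromNative({ x: 2 }); // listener present: payload built
console.log(emitter.buildCount);  // 1
```

At a 60 Hz input rate, skipping construction when nothing is listening saves both the allocation and the dispatch walk on every sample.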
I am in the process of creating a huge web application, with a JavaScript based UI, and many events generated continuously.
To avoid bad performance due to the huge number of event listeners needed, I of course opted to use a single event listener to catch all the events generated by the children elements (event bubbling).
The problem is, this application is designed in such a way that one or more modules can be loaded into the main JavaScript library I'm coding (which is responsible for controlling the UI and every other aspect of the program). Of course every module should be completely independent from each other, so you can choose which methods to load, without affecting the general functionality of the library, only adding or removing features.
Since every module can operate on different DOM elements, I need at least a single event listener for each module, since two modules can listen for events generated by HTML elements placed in different DOM branches.
http://jsfiddle.net/YRejF/2/
In this fiddle, for example, the first button makes the first paragraph trigger an event, and its parent catches it. The second button makes the second paragraph fire the event, but the div listening for the same event won't catch it, because it isn't fired from one of its descendants.
So my question is: is it possible to have a single event listener that can also hear events triggered by elements that are not its descendants (elements placed anywhere on the page)?
I was thinking about having a JS object, or a DOM node, which stores the data of the element that triggered the event, together with the event itself; then a generic event would be fired on the global event listener (no matter where it's placed in the DOM), which would read that data to discover which element generated which event and act accordingly.
Any help or suggestion about better ways of achieving this?
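The idea in the previous paragraphs could be sketched roughly like this: a shared bus object that any element's native handler can publish to, regardless of where the element sits in the DOM. All the names here (`bus`, `item:selected`, `sourceId`) are made up for illustration.

```javascript
// A minimal publish/subscribe bus: modules subscribe by event name,
// and any DOM handler can publish with data identifying the source element.
const bus = {
  subscribers: {},
  on(name, fn) {
    (this.subscribers[name] = this.subscribers[name] || []).push(fn);
  },
  emit(name, data) {
    (this.subscribers[name] || []).forEach((fn) => fn(data));
  },
};

const log = [];
// A module listens without caring where in the DOM the source lives.
bus.on('item:selected', (data) => log.push(`module saw ${data.sourceId}`));

// Any element's native click handler would call this:
bus.emit('item:selected', { sourceId: 'paragraph-2' });

console.log(log); // ['module saw paragraph-2']
```

Because the bus is a plain object rather than a DOM node, it sidesteps bubbling entirely: events reach it regardless of the publisher's position in the tree.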
jQuery has a special binder for this kind of case: live(). It lets all events bubble to the document and then handles them accordingly. However, if you use a div or other container for different panels etc., using delegate() may make more sense. Don't worry too much about the number of bound elements. Believe me, it will run just as well with 50 binds or 10 delegates as it will with 1 live.