UI elements freeze despite webworker - javascript

I am working on a simple sorting algorithm demo app. I have a few sorting algorithms written in JavaScript, and an HTML page to try them out. I have tested the algorithms separately and they work fine.
You can input your own array, or a random one can be generated for you. If the array is very large, generating or sorting it would hang the UI, so I added a web worker that performs these operations in a separate thread.
The app logs what it is doing. Before the web worker, when working with really large arrays the log would appear all at once, only after the process had completed; thanks to the web worker, log entries now appear as soon as each individual step is completed. Also, the page remains scrollable, whereas without the worker everything would freeze until the job was done.
Having delegated the computationally heavy work to the web worker, I was expecting elements on the page to remain fully interactive, but this doesn't seem to be the case. All buttons and other controls on the page become unresponsive anyway. To be exact, they stay interactive for a very short while and then freeze. For example, if I mouse over a button, the cursor will change shape, but then it stays frozen in that shape no matter where I move it, and if I click, the click won't register until after the web worker has done its job. This is a problem, because I have 'stop' buttons the user can click to stop the worker if it is taking too long to generate or sort an array. As things stand, the stop buttons freeze, and clicking them only takes effect after the worker has finished anyway, effectively making them useless.
How can I make sure my controls on the page remain usable? Thanks!

So, I ran into what I think is the same problem, and was having a bear of a time with it for days. Thankfully I ran across this: https://nolanlawson.com/2016/02/29/high-performance-web-worker-messages/
For some reason, it seems that on both Chrome and Firefox, when a web worker posts a message, the main thread freezes while the browser is packing up (serializing) the data for transfer.
If you use JSON to fully serialize all of your objects when posting messages, everything seems to run much, much more smoothly.
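To illustrate, here is a minimal sketch of that pattern, assuming a hypothetical `sort-worker.js` file and a made-up message shape: the worker stringifies its result before posting, and the main thread parses it back.

```javascript
// sort-worker.js (hypothetical file name)
self.onmessage = function (e) {
  const request = JSON.parse(e.data);               // incoming message is a JSON string
  const sorted = request.array.slice().sort(function (a, b) { return a - b; });
  // Post a plain string back instead of letting the browser clone a large object graph.
  self.postMessage(JSON.stringify({ type: 'sorted', array: sorted }));
};

// main.js
const worker = new Worker('sort-worker.js');
worker.onmessage = function (e) {
  const message = JSON.parse(e.data);               // parse on arrival, when convenient
  if (message.type === 'sorted') {
    console.log('first items:', message.array.slice(0, 10));
  }
};
worker.postMessage(JSON.stringify({ array: [5, 3, 1, 4, 2] }));
```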

Related

How games delete an object in the database but animate it out at the same time

I am wondering how async actions like animations work when you do things such as deleting an item, so I came up with the following thought process and would like feedback on whether it makes sense from an industry best-practice perspective.
There are two layers:
The reactive layer.
The rendering layer.
The reactive layer occurs instantly and can be done with traditional event dispatching.
This is where you create and delete data and it all happens instantaneously.
The state machine gets notified of these instant reactive changes.
Then the state machine "transitions". This process occurs over a period of time, assuming there are some async things that occur (animations, network requests, etc.). This is what people mean when they say "action queue".
Then the rendering layer picks up items off the action queue and renders them. This way there is a sort of delayed reaction to the underlying, instant reactive layer.
My question is if the reactive layer needs to handle async as well. For example, deleting something.
Say an item is deleted, and you want to animate it out. There are a few ways to do this:
Queue up a delete action, then animate it out first. When animation is complete, then do the actual delete. If animation is interrupted (cancel the delete), then the delete is never performed.
Delete the item instantly (reactive layer). The animation layer keeps a reference to the item around, so it can still run its animation even though the item has already been deleted from the global store. If the animation is cancelled, you would have to do an "undo" sort of thing, which is more complex.
If (1) is the way to go, then there is no reactive layer, and everything is implemented with a sort of action queue in mind. This makes it harder to make reusable code because everything is tied to the action-queue idea.
If (2) is the way to go, then there are two copies of the data, the global copy, and the local copy, which is kept in sync asynchronously. This makes it easier to have reusable code but makes it more complicated to reason about.
Wondering if any of this makes sense, and if any of these approaches is better practice in the industry (or if there is an alternative I didn't address that makes more sense).
Put another way, there are two main ways to do it:
Eager: Delete it now, okay everyone we've deleted it, time to come back inside. Then wait for everything to trickle back in to call it "done".
Cautious: Announce "we are going to delete it", then wait until everything is done and has come back inside to actually do the delete.
Wondering how games and apps and such handle this sort of thing.
Update
After thinking about it a different way, maybe it would be more like the swaying of kelp in the ocean:
https://www.youtube.com/watch?v=gIeLCzR8EgA
By that I mean, there is a base layer that is immediate (does the create/delete), then there are some intermediate layers with copies of all the transactions that occurred (create/delete), that the rendering layer uses to animate create/delete. So the original data is always in sync, but the rendering layer uses a sort of older version of the data with a chain of all the changes taking place.
reactive layer -> transaction layer -> rendering layer.
Another option is flagging the item as deleted, and only actually doing the delete once the animations are complete, but that seems hacky.
Update
Another version:
Reactive version
Always has the latest data. (a)
Rendering version
Has the last data (b), plus a chain of changes leading to (a).
As rendering completes, it applies the changes to (b), so eventually it is like (a).
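As a rough sketch of that model (every name here is hypothetical, and this is only one way to express the "chain of changes" idea, not an established pattern): the reactive layer mutates the data immediately and pushes a transaction, and the rendering layer consumes the chain at its own pace.

```javascript
// Reactive layer: mutates the canonical data immediately and records a transaction.
const state = { items: new Map() };
const pendingTransactions = [];           // the "chain of changes" not yet rendered

function deleteItem(id) {
  const item = state.items.get(id);
  state.items.delete(id);                 // (a): the latest data, updated instantly
  pendingTransactions.push({ type: 'delete', item: item });
}

// Rendering layer: works from its own copy (b) and consumes the chain over time.
const renderedItems = new Map();          // (b): what is currently on screen

function renderTick() {
  const tx = pendingTransactions.shift();
  if (tx && tx.type === 'delete') {
    animateOut(tx.item, function () { renderedItems.delete(tx.item.id); });
  }
  requestAnimationFrame(renderTick);
}

function animateOut(item, done) {
  setTimeout(done, 300);                  // stand-in for a real fade/slide animation
}

requestAnimationFrame(renderTick);
```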
From my experience as a Unity game developer, the right decision is somewhere in the middle: just as physics in a game engine is approximated, something that feels almost perfect is, to the player, practically the same as perfect.
The explanation is that the more realistic and true-to-life your goal, the more resources you need, not only CPU and GPU but money too.
After this strange preamble, I vote for the flag option. The method applied is not much different from a normal garbage collector: you mark an item as no longer useful, and when the garbage collector comes along it frees the memory used by that object, so the space can be reused.
The same process is used by a lot of engines, and the destroy operation happens before the rendering calculations.
The final goal is always to reach a good in-between solution.
It is possible to force instant destruction at any moment of the engine's computation process, but this solution is always discouraged.
With animations, the way you typically do it is to Destroy (set the flag to be destroyed) at the end of the animation or in the last few frames.
On top of that, you can use some tricks to make the fade more enjoyable (like particles).
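Since the original question is tagged JavaScript, here is a minimal sketch of that flag-then-sweep idea; all names are made up and the details will differ from what any particular engine does.

```javascript
const gameObjects = new Set();

function destroy(obj) {
  obj.markedForDestroy = true;            // flag only; nothing is removed yet
}

// Runs once per frame, before rendering, like an engine's cleanup pass.
function sweep() {
  for (const obj of gameObjects) {
    // deathAnimationFinished is assumed to be set by your animation code
    // when the fade-out (or last few frames) completes.
    if (obj.markedForDestroy && obj.deathAnimationFinished) {
      gameObjects.delete(obj);            // actual removal happens here
    }
  }
}

function frame() {
  sweep();
  // ...update and render the remaining objects...
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```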
The real problem is when you have to destroy objects over the network (multiplayer games). In this case you have to establish which machine is the most authoritative and let it calculate physics and interactions; that machine is always the server, or at least the host (depending on the game type).
I know the question was tagged as a JavaScript question, but I couldn't resist answering.
I'm also including a page from the Unity documentation about the Destroy function, which explains how they manage removing items from a game environment:
https://docs.unity3d.com/ScriptReference/Object.Destroy.html
Have a nice day, and good coding!

List with 3500+ items is loaded via ajax a small bunch at a time. It stops updating in the UI, but I see the li tags being appended to the DOM

I have a use case where thousands of items are shown to the user in a list. They are loaded a small batch at a time; I see the network traffic going in and out, and I see the data getting loaded. I see the DOM getting bigger, but the list itself in the UI stops updating (Chrome).
When I examine it, I see thousands of items in the markup, and when I select the items through the console and count them, I get the proper number. But on the page itself, I don't see these items displayed. The list uses drag-and-drop to move items from it into another list (and load additional data about them).
I'm not using jquery.datatables at the moment, but I've been meaning to migrate to it for a long time. I can't find a good example, though; everything I see uses pagination to split the data, but what if that is not an option?
How can I pinpoint what it is that is preventing the items from display? The number of entries will vary between 500 and 20 000.
Never mind, everything works as intended, duh. I was stupid and missed something very obvious: things had "display: none" for a very good reason which I had totally forgotten about (it has to do with the core logic of the application). Next time hit me with a stick so I remember to pay more attention.
I'm not sure what you meant by saying the 'DOM is getting bigger' but you 'don't see items get displayed'.
Typically JS has a main thread that handles functions/callbacks as well as view refreshes. So if your operation is blocking, the view will not be refreshed.
As for pagination not being an option: you can consider a DOM lazy-loading mechanism where you only put what should be in the current viewport into the DOM. As the user scrolls, you calculate the scroll position dynamically to add/remove items to/from the DOM. One thing to remember is that you typically need to define a fixed height for your rows so you can do the calculation. This lazy-loading approach is a common way of solving this type of problem and is widely used by frameworks such as GXT, angular-grid, etc.
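A minimal sketch of that lazy-loading idea, assuming a fixed row height, an `items` array of objects with a `name` field already in memory, and a scrollable element with a made-up id of `list`:

```javascript
const ROW_HEIGHT = 24;                              // fixed row height makes the math simple
const container = document.getElementById('list'); // the scrollable element (hypothetical id)
container.style.position = 'relative';

// A tall spacer keeps the scrollbar proportional to the full item count.
const spacer = document.createElement('div');
spacer.style.height = items.length * ROW_HEIGHT + 'px';
container.appendChild(spacer);

// Only the rows inside the viewport actually live in the DOM.
const viewport = document.createElement('ul');
viewport.style.position = 'absolute';
viewport.style.top = '0';
viewport.style.left = '0';
container.appendChild(viewport);

function renderVisible() {
  const first = Math.floor(container.scrollTop / ROW_HEIGHT);
  const count = Math.ceil(container.clientHeight / ROW_HEIGHT) + 1;
  viewport.style.top = first * ROW_HEIGHT + 'px';
  viewport.innerHTML = '';                          // drop the rows that scrolled away
  items.slice(first, first + count).forEach(function (item) {
    const li = document.createElement('li');
    li.style.height = ROW_HEIGHT + 'px';
    li.textContent = item.name;
    li.draggable = true;                            // drag-and-drop still works per row
    viewport.appendChild(li);
  });
}

container.addEventListener('scroll', renderVisible);
renderVisible();
```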

Runaway jQuery - Page runs slower over time

We have a timer that removes the top item in an unordered list and moves it to the bottom of the list. Each item has images, custom fonts, rollovers, etc.
For some reason, the longer the page runs, the slower it gets. You can notice the latency when hovering over the ribbons. The ribbons are supposed to turn red on hover, but when it slows down you'll notice it can take several seconds to see the hover state.
I have no idea why this is happening. I believe we're properly cleaning everything up, but something is obviously wrong.
Here's the page in question...
http://gmfg.trailerparkinteractive.com/
Let me know if I can provide any additional detail.
It seems you have a memory leak, and here's how you can detect one with the browser's memory profiling tools.
It seems one of your scripts is allocating and deallocating a lot of memory over a short period of time.
Drilling further into the retaining tree, we find that some HTML element nodes are being removed from the DOM but not released; they are still referenced from JavaScript, so they linger as detached nodes.
My advice is: try running your site with different scripts disabled, and retest with this method to get a rough idea of which plugin is doing it.
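Purely as an illustration of the kind of code that produces detached nodes (the selector, interval, and variable names below are made up, not taken from your page):

```javascript
// Assumes jQuery is loaded.

// Leaky version: removed elements stay referenced from JavaScript,
// so they accumulate as "detached" DOM nodes and memory grows over time.
var removedRibbons = [];
function rotateLeaky() {
  var $top = $('#ribbon-list li').first();
  removedRibbons.push($top);                   // reference kept forever
  $top.remove();
  $('#ribbon-list').append($top.clone(true));  // clone(true) also duplicates handlers/data
}

// Safer version: move the existing node instead of cloning, and keep no references.
function rotateFixed() {
  var $top = $('#ribbon-list li').first();
  $('#ribbon-list').append($top);              // append() relocates a node already in the DOM
}

setInterval(rotateFixed, 5000);                // wire the timer to whichever version you use
```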

Force HTML5 canvas to redraw while doing heavy JavaScript processing?

This question is related to this older one, but I wanted to be sure I had the right answer before I started making major changes to my code.
I'm working on a very computation-intensive JavaScript program that needs to constantly update an image in an HTML5 canvas to draw an animation. As written now, the code draws all the frames of the animation in a tight loop without returning control to the browser, meaning that what's ultimately displayed is just the final frame. I'm pretty sure the only way to fix this is to split the animation code into smaller pieces that can be called reentrantly through a timeout event. Is this correct? Or is there a way to force the canvas to display its contents at a certain point even in the middle of a tight JavaScript loop?
I'm pretty sure the only way to fix this is to split the animation code into smaller pieces that can be called reentrantly through a timeout event. Is this correct?
This is correct.
JavaScript is single-threaded, so there's no way for anything else to happen while your logic is being executed. Your only choice is to "emulate" threading by splitting your logic up into micro-chunks to be executed on timeouts.
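A minimal sketch of that chunking approach, assuming a hypothetical `drawFrame(i)` function that renders a single frame of your animation to the canvas:

```javascript
const totalFrames = 1000;                 // however many frames your animation has
let frame = 0;

function drawNextFrame() {
  drawFrame(frame);                       // your existing per-frame drawing code
  frame++;
  if (frame < totalFrames) {
    // Returning to the browser between frames lets it actually paint the canvas
    // and handle input; the timeout picks up where we left off.
    setTimeout(drawNextFrame, 0);         // or requestAnimationFrame(drawNextFrame)
  }
}

drawNextFrame();
```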
You could use web workers for this if they are available. Just do all the calculation you need in the web worker and post the result back when it's done. When the message comes back, you can simply refresh the image.
Calculations will be done in the background and your page only blocks while updating the image.
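A rough sketch of that split, assuming the per-frame pixel math can live in a separate worker file; the file name, message shape, and frame count below are all placeholders:

```javascript
// frame-worker.js (hypothetical file name): heavy per-frame computation off the main thread.
self.onmessage = function (e) {
  const width = e.data.width, height = e.data.height, frame = e.data.frame;
  const pixels = new Uint8ClampedArray(width * height * 4);
  for (let i = 0; i < pixels.length; i += 4) {
    pixels[i] = (i / 4 + frame) % 256;     // stand-in for the real per-pixel computation
    pixels[i + 3] = 255;                   // opaque alpha
  }
  self.postMessage({ frame: frame, pixels: pixels }, [pixels.buffer]); // transfer, don't copy
};

// main.js: only touches the canvas when a finished frame arrives.
const canvas = document.getElementById('canvas');   // hypothetical canvas element
const ctx = canvas.getContext('2d');
const worker = new Worker('frame-worker.js');
const totalFrames = 100;                   // demo value

worker.onmessage = function (e) {
  const image = new ImageData(e.data.pixels, canvas.width, canvas.height);
  ctx.putImageData(image, 0, 0);           // the page stays responsive between frames
  if (e.data.frame + 1 < totalFrames) {
    worker.postMessage({ width: canvas.width, height: canvas.height, frame: e.data.frame + 1 });
  }
};
worker.postMessage({ width: canvas.width, height: canvas.height, frame: 0 });
```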

I have a couple thousand javascript objects which I need to display and scroll through. What are my options?

I'm working off of designs which show a scrollable box containing a list of a user's "contacts".
Users may have up to 10,000 contacts.
For now assume that all contacts are already in memory, and I'm simply trying to draw them. If you want to comment on the question of how wise it is to load 10k items of data in a browser, please do it here.
There are two techniques I've seen for managing a huge list like this inside a scrollable box.
Just Load Them All
This seems to be how gmail approaches displaying contacts. I currently have 2k contacts in gmail. If I click "all contacts", I get a short delay, then the scrollable box at the right begins to fill with contacts. It looks like they're breaking the task into chunks, probably separating the DOM additions into smaller steps and putting those steps into timeouts in order to not freeze the entire interface while the process completes.
pros:
Simple to implement
Uses native UI elements the way they were designed to be used
Google does it, it must be ok
cons:
Not totally snappy: there is some delay involved, even on my development machine running Firefox. There will probably be quite a lot of delay for a user on a slower machine running IE6
I don't know what sort of limits there are in how large I can allow the DOM to grow, but it seems to me there must be some limit to how many nodes I can add to it. How will an older browser on an older machine react if I ask it to hold 10k nodes in the DOM?
Draw As Needed
This seems to be how Yahoo deals with displaying contact lists. They create a scrollable box, put a super-tall, empty placeholder inside it, and draw contacts only when the user scrolls to reveal them.
pros:
DOM nodes are drawn only as needed, so there's very little delay while loading, and much less risk of overloading the browser with too many DOM nodes
cons:
Trickier to implement, and more opportunity for bugs. For example, if I scroll quickly in the yahoo mail contact manager as soon as the page loads, I'm able to get contacts to load on top of one another. Of course, bugs can be worked out, but obviously this approach will introduce more bugs.
There's still the potential to add a huge number of DOM nodes. If the user scrolls slowly through the entire list, every item will get drawn, and we'll still end up with an enormous DOM
Are there other approaches in common use for displaying a huge list? Any more pros or cons with each approach to add? Any experience/problems/success using either of these approaches?
I would chunk up the DOM-writing into handle-able amounts (say, 25 or 50), then draw the chunks on demand. I wouldn't worry about removing the old DOM elements until the amount drawn gets quite large.
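A minimal sketch of that kind of chunked DOM writing (the chunk size and names are arbitrary): append one batch, then schedule the next batch with a timeout so the browser can paint and handle input in between.

```javascript
function appendInChunks(contacts, listElement, chunkSize) {
  let index = 0;

  function appendChunk() {
    const fragment = document.createDocumentFragment();
    const end = Math.min(index + chunkSize, contacts.length);
    for (; index < end; index++) {
      const li = document.createElement('li');
      li.textContent = contacts[index].name;
      fragment.appendChild(li);
    }
    listElement.appendChild(fragment);      // one DOM write per chunk
    if (index < contacts.length) {
      setTimeout(appendChunk, 0);           // let the browser paint between chunks
    }
  }

  appendChunk();
}

// e.g. appendInChunks(allContacts, document.getElementById('contact-list'), 50);
```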
I would divide the contacts into chunks, and keep a sort of view buffer alive that changes which chunks are written to the DOM as the user scrolls through the list. That way the total number of dom elements never rises above a certain threshold. Could be fairly tricky to implement, however.
Using this method you can also dynamically modify the size of chunks and the size of the buffer, depending on the browser's performance (dynamic performance optimization), which could help out quite a bit.
It's definitely non-trivial to implement, however.
The bug you see in Yahoo may be due to absolutely positioned elements: if you keep your CSS simple and avoid absolutely/relatively positioning your contact entries, it shouldn't happen.
