Should I use transform instead of redraw? - javascript

I have several charts that I redraw every time I zoom/pan using d3 brushes.
But when I have tons of rendered elements, redrawing starts to get a little slow.
Instead of redrawing all elements every time I move my brush, I was wondering whether it's feasible to transform (translate) the already drawn elements, and only redraw when I need to update my data.
I think it would increase my visualization performance a lot when panning left/right, wouldn't it?
Any insights?

In general, the less you touch the DOM, the better your performance will be. The details are browser- and platform-specific, but at a very high level this is the pecking order of performance (ordered from most expensive to least):
Creating and removing DOM elements.
Modifying properties of existing DOM elements.
In-memory JavaScript (that is, not involving the DOM at all, e.g. array iteration).
So if you can get the result you want by simply modifying a targeted subset of existing elements with a transform attribute, I would guess you will be much better off.
Of course, it's impossible to say anything with certainty without seeing the actual code and use case.
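For illustration, here is a minimal sketch of what that could look like with d3, assuming the marks live inside a single <g> group and that panning only shifts them horizontally (the selector and function names are made up, not taken from your code):

// Hedged sketch: one attribute change on the group repositions all existing
// marks instead of re-binding the data and redrawing every element.
var chartGroup = d3.select("#chart g.marks"); // hypothetical selector

function onBrushPan(pixelOffset) {
    chartGroup.attr("transform", "translate(" + (-pixelOffset) + ",0)");
}

function onDataUpdate(newData) {
    // A full redraw is reserved for when the data itself changes.
    redrawChart(newData); // hypothetical existing redraw routine
}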

Related

Is there any way to make a web page slightly glow without CSS box-shadow?

I'm trying to code a tough interface feature performantly
I'm trying to code an interface to which I apply a broad, faint glow like this (dark example, sorry):
What I've tried so far
I've considered outer-glow, filter, and canvas solutions, but none of them seem to offer a viable, performant result:
outer-glow: Would technically do exactly what I want.
BUT severely damages performance, especially with wide glows.
filters: I only know of the blur filter for doing this, except blur isn't really glow - it would blur edges severely. It would technically do exactly what I want IF I were to duplicate the DOM, blur the duplicate, place that in front of the actual DOM, apply low opacity, pointer-events:none, and keep it in sync with the non-blurred DOM via JavaScript.
BUT doubling the DOM and replicating all events within the DOM would not only be a big mess, it would likely damage performance in a serious way.
canvas: I could write the DOM to an overlayed low opacity canvas and blur that.
BUT (unless there's a way to read the whole window render and feed that into canvas directly) this involves a complex, faulty process (security roadblocks and limited support for the functionality) of rendering all the DOM elements as SVG, and I would still have to keep that canvas version of the DOM in sync with the actual DOM, which would be a mess.
My question
So I'm out of ideas. Is there any way for me to make my interface glow, in a performant way?
I'm not very hopeful on this, but I did want to reach out and see if something I hadn't thought of with filters, canvas, or some other trick would make this glow effect possible in a web application while preserving performance.
This is probably not the route you were really looking for, but it would produce the result you are trying to get. It would certainly be the most performant approach you could take, but of course you have to update the design.
The design update approach
You could approach this from the design side rather than a coding one. Rather than coding something, you would figure out what the final target colors are for your elements and modify your CSS. You would potentially need to modify some images as well.

Reflow/Layout performance for large application

I am using GWT to build an HTML application where the performance is acceptable in general.
Sometimes it loads many objects into the DOM and the application becomes slow. I used the Chrome Developer Tools profiler to see where that time was spent (under Chrome, once the app is compiled, i.e. no GWT overhead), and it is clear that the methods getAbsoluteLeft()/getBoundingClientRect() consume most of this time.
Here is the implementation used under Chrome (com.google.gwt.dom.client.DOMImplStandardBase):
private static native ClientRect getBoundingClientRect(Element element) /*-{
    return element.getBoundingClientRect && element.getBoundingClientRect();
}-*/;

@Override
public int getAbsoluteLeft(Element elem) {
    ClientRect rect = getBoundingClientRect(elem);
    return rect != null ? rect.getLeft()
        + elem.getOwnerDocument().getBody().getScrollLeft()
        : getAbsoluteLeftUsingOffsets(elem);
}
This makes sense to me: the more elements in the DOM, the more time it may take to calculate absolute positions. But it is frustrating, because sometimes you know only a subpart of your application has changed, yet those methods still take time to calculate absolute positioning, probably because they unnecessarily recheck a whole bunch of DOM elements. My question is not necessarily GWT-oriented, as this is a browser/JavaScript problem:
Is there any known solution to improve the GWT getAbsoluteLeft / JavaScript getBoundingClientRect problem for applications with a large DOM?
I did not find any clues on the internet, but I thought about solutions like:
reducing the number of calls to those methods :-) ...
isolating parts of the DOM in an iframe, in order to reduce the number of elements the browser has to evaluate to get an absolute position (although it would make it difficult for components to communicate ...)
in the same vein, there might be some CSS property (overflow, position?) or some HTML element (like an iframe) which tells the browser to skip a whole part of the DOM, or simply helps the browser get absolute positions faster
EDIT:
Using the Chrome Timeline debugger, and doing a specific action while there are a lot of elements in the DOM, I get on average:
Recalculate style: nearly zero
Paint: nearly 1 ms
Layout: nearly 900 ms
Layout takes 900 ms via the getBoundingClientRect method. This page lists all the methods that trigger layout in WebKit, including getBoundingClientRect ...
As I have many elements in the DOM that are not impacted by my action, I assume layout is recalculating the whole DOM, whereas paint is able to narrow its scope through CSS properties/the DOM tree (I can see this through MozAfterPaintEvent in Firebug, for example).
Apart from grouping the methods that trigger layout and calling them less often, any clues on how to reduce the time spent in layout?
Some related articles :
Minimizing browser reflow
I finally solved my problem: getBoundingClientRect was triggering a whole layout event in the application, which was taking a long time because of heavy CSS rules.
In fact, layout time is not directly proportional to the number of elements in the DOM. You could draw hundreds of thousands of them with light styling and layout would take only 2 ms.
In my case, I had two CSS selectors and a background image which were matching hundreds of thousands of DOM elements, and that was consuming a huge amount of time during layout. By simply removing those CSS rules, I reduced the layout time from 900 ms to 2 ms.
The most basic answer to your question is to use lazy evaluation, also called delayed evaluation. The principle is that you only evaluate a new position when something it depends upon has changed. It generally requires a fair amount of code to set up but is much cleaner to use once that's done. You'd make one assignment to something (such as a window size) and then all the new values propagate automatically, and only the values that need to propagate.
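As a rough sketch of that lazy-evaluation idea (plain JavaScript rather than GWT, with made-up names), you might cache the expensive layout reads and invalidate the cache only when a dependency such as the window size changes:

// Positions are only recomputed when something they depend on has changed.
var positionCache = new Map(); // element -> cached bounding rect
var cacheValid = false;

function getCachedLeft(element) {
    if (!cacheValid) {
        positionCache.clear();
        cacheValid = true;
    }
    var rect = positionCache.get(element);
    if (!rect) {
        rect = element.getBoundingClientRect(); // one forced layout per element
        positionCache.set(element, rect);
    }
    return rect.left + document.body.scrollLeft;
}

// Invalidate only when a dependency changes, instead of re-reading every time.
window.addEventListener("resize", function () { cacheValid = false; });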

Should I use multiple canvases (HTML 5) or use divs to display HUD data?

I am in the process of making a game where the health bar (animated) and some other info are represented visually, like icons showing the number of bombs the player has, etc. This can be done either in canvas (by making another canvas for the HUD info that sits over the main canvas), or with many divs and spans using absolute positioning. This is my first time making a browser-based game, so if any experienced people see this, tell me what you recommend. I would like to know which method would be faster.
The game will also be running on mobile devices. Thanks!
There is no straightforward answer, and I suggest you do FPS testing with different browsers to see how it plays out for your use case. If you do not wish to go that in-depth, I suggest you simply draw the elements inside the canvas, and if you need to hide them, leave the drawHUD() call out of your rendering loop.
For an HTML HUD overlay on <canvas>, the following factors should be considered:
Can the web browser compositor do hardware-accelerated <canvas> properly if there are DOM elements on top of the canvas
HTML / DOM manipulation will always be slower than <canvas> operations, due to the inherent complexity of dealing with DOM elements
<canvas> pixel space stays inside the <canvas>, and it might be difficult to get pixel-perfect alignment between canvas content and elements drawn outside the canvas itself
HTML offers many more formatting options for text than canvas fillText() - is HTML formatting necessary?
Use the canvas. Use two canvases if you want, one overlaid on the other, but use the canvas.
Touching the DOM at all is slow. Making the document redo its layout because the size of DOM elements changed is very slow. Dealing with the canceling (or not) of even more events because there are DOM items physically on top of the canvas can be a pain, so why bother dealing with that?
If your HUD does not update very often, then the fastest thing to do would be drawing it to an in-memory canvas when it changes, and then always drawing that canvas to the main canvas when you update the frame. That way your drawHUD method will look exactly like this:
function drawHUD() {
    // This is what gets called every frame
    // one call to drawImage = simple and fast
    ctx.drawImage(inMemoryCanvas, 0, 0);
}
and of course updating the HUD information would be like:
function updateHUD() {
    // This is only called if information in the HUD changes
    inMemCtx.clearRect(0, 0, width, height);
    inMemCtx.fillRect(blah);
    inMemCtx.drawImage(SomeHudImage, x, y);
    var textToDraw = "Actually text is really slow and if there's " +
        "often repeated lines of text in your game you should be " +
        "caching them to images instead";
    inMemCtx.fillText(textToDraw, x, y);
}
Since HUDs often contain text I really do urge caching it if you're using any. More on text performance here.
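As a hypothetical illustration of that text caching (the line height, baseline, and example string are assumptions, not taken from the answer above), a line of text can be rendered once to its own small canvas and then blitted with drawImage on every frame:

// Cache of text rendered to small offscreen canvases, keyed by font + text.
var textCache = {};

function getTextImage(text, font) {
    var key = font + "|" + text;
    if (!textCache[key]) {
        var c = document.createElement("canvas");
        var cctx = c.getContext("2d");
        cctx.font = font;
        c.width = Math.ceil(cctx.measureText(text).width);
        c.height = 24; // assumed line height for this sketch
        cctx.font = font; // resizing the canvas resets the context state
        cctx.fillText(text, 0, 18);
        textCache[key] = c;
    }
    return textCache[key];
}

// In the frame loop: ctx.drawImage(getTextImage("Bombs: 3", "16px sans-serif"), x, y);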
As others have said, there is no universally best approach, as it depends on the specifics of what you need to render, how often, and possibly what messaging needs to happen between graphical components.
While it is true that DOM reflows are expensive, this blanket warning is not always applicable. For instance, using position:fixed elements avoids triggering reflows for the page (though not necessarily within the element, if there are non-fixed children). Repaint is (correct me if this is wrong) expensive because it is pixel pushing, and so it is not intrinsically slower than pushing the same number of pixels to a canvas. It can be faster for some things. What's more, each has certain operations that have performance advantages over the other.
Here are some points to consider:
It's increasingly possible to use WebGL-accelerated canvas elements on many A-grade browsers. This works fine for 2D, with the advantage that drawing operations are sent to the GPU, which is MUCH faster than the 2D context. However this may not be available on some target platforms (e.g., at the time of this writing, it is available in iOS Safari but not in the iOS UIWebView used if you target hybrid mobile applications.) Using a library to wrap canvas can abstract this and use WebGL if its available. Take a look at pixi.js.
Conversely, the DOM has CSS3 animations/transitions which are typically hardware-accelerated by the GPU automatically (with no reliance on WebGL). Depending on the type of animation, you can often get much faster results this way than with canvas, and often with simpler code.
Ultimately, as a rule in software performance, understanding the algorithms used is critical. That is, regardless of which approach used, how are you scheduling animation frames? Have you looked in a profiler to see what things take the most time? This practice is excellent for understanding what is impacting performance.
I've been working on an app with multiple animations, and have implemented each component both as DOM and as canvas. I was initially surprised that the DOM version was more performant than the canvas (wrapped with KineticJS) version, though I now see that this was because all the animated elements were position:fixed and animated using CSS (under the hood via jQuery UI), thereby getting GPU performance. However, the code to manage these elements felt clunky (in my case, ymmv). Using a canvas approach allows more pixel-perfect rendering, but then it loses the ability to style with CSS (which technically allows pixel-perfect rendering as well but may be more or less complex to achieve).
I achieved a big speed-up by throttling the most complex animation to a lower framerate, which for my case is indistinguishable from the 60fps version but runs smooth as butter on an older iPad 2. Throttling required using requestAnimationFrame and clamping calls to be no more frequent than the desired framerate. This would be hard to do with CSS animations on the DOM (though again, those are intrinsically faster for many things). The next thing I'm looking at is syncing multiple canvas-based components to the same requestAnimationFrame loop (possibly independently throttled, or a round-robin approach where each component gets a set fraction of the framerate, which may work okay for 2-3 elements). (Incidentally, I have some GUI controls like sliders that are not locked to any framerate, as they should be as close to 60fps as possible and are small/simple enough that I haven't seen performance issues with them.)
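A minimal throttling sketch along those lines (the target frame rate and the drawing function are made up): the requestAnimationFrame callback simply returns until enough time has passed.

var targetFps = 30;
var frameInterval = 1000 / targetFps;
var lastFrameTime = 0;

function animate(timestamp) {
    requestAnimationFrame(animate);
    if (timestamp - lastFrameTime < frameInterval) {
        return; // clamp: never redraw more often than the target frame rate
    }
    lastFrameTime = timestamp;
    drawComplexAnimation(); // hypothetical expensive canvas drawing
}

requestAnimationFrame(animate);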
I also achieved a huge speed boost by profiling and seeing that one class in my code that had nothing to do with the GUI was having a specific method called very often to calculate a property. The class in question was immutable, so I changed the method to memoize the value and saw the CPU usage drop in half. Thanks Chrome DevTools and the flame chart! Always profile.
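The memoization itself can be as simple as the following sketch (the class and property are invented for illustration); caching is only safe here because the object is immutable.

function Particle(x, y) {
    this.x = x;
    this.y = y;
    this._distance = null; // cache slot for the derived property
}

Particle.prototype.distanceFromOrigin = function () {
    if (this._distance === null) {
        this._distance = Math.sqrt(this.x * this.x + this.y * this.y);
    }
    return this._distance; // safe to cache because x and y never change
};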
Most of the time, the number of pixels being updated will tend to be the biggest bottleneck, though if you can do it on the GPU you have effectively regained all the CPU for your code. DOM reflows should be avoided, but this does not mean avoid the DOM. Some elements are far simpler to render using the DOM (e.g. text!) and may be optimized by the browser's (or OS's) native code more than canvas. Finally, if you can get acceptable performance for a given component using either approach (DOM or canvas), use the one that makes the code simplest for managing that type of component.
Best advice is to try isolated portions in the different approaches, run with a profiler, use techniques to over-draw or otherwise push the limits to see which approach can run fastest, and do NOT optimize before you have to. The caveat to this rule is the question you are asking: how do I know in advance which technical approach is going to allow the best performance? If you pick one based on assuming the answer, you are basically prematurely optimizing and will live with the arbitrary pain this causes. If instead you are picking by rapid prototyping or (even better) controlled experiments that focus on the needs of your application, you are doing R&D :)
BrowserQuest displays its HUD using HTML elements, which has the benefit that you don't have to worry about redrawing etc. (and performance will be pretty good, given that the entire browser engine is optimized to render the DOM fast).
They (BrowserQuest) also use several layered canvas elements for different game elements. I don't know the exact structure, but I guess that which canvas an element is displayed on depends on how often it needs to be redrawn.

Javascript performance problems with too many dom nodes?

I'm currently debugging an AJAX chat that just endlessly fills the page with DOM elements. If you have a chat going for, like, 3 hours, you will end up with god knows how many thousands of DOM nodes.
What are the problems related to extreme DOM Usage?
Is it possible that the UI becomes totally unresponsive (especially in Internet Explorer)?
(And related to this question is of course the solution, if there are any solutions other than manual garbage collection and removal of DOM nodes.)
Most modern browsers should be able to deal pretty well with huge DOM trees. And "most" usually doesn't include IE.
So yes, your browser can become unresponsive, either because it needs too much RAM (-> swapping) or because its renderer is just overwhelmed.
The standard solution is to drop elements, say after the page has 10'000 lines worth of chat. Even 100'000 lines shouldn't be a big problem. But I'd start to feel uneasy for numbers much larger than that (say millions of lines).
[EDIT] Another problem is memory leaks. Even though JS uses garbage collection, if you make a mistake in your code and keep references to deleted DOM elements in global variables (or in objects referenced from a global variable), you can run out of memory even though the page itself contains only a few thousand elements.
Just having lots of DOM nodes shouldn't be much of an issue (unless the client is short on RAM); however, manipulating lots of DOM nodes will be pretty slow. For example, looping through a group of elements and changing the background color of each is fine if you're doing this to 100 elements, but may take a while if you're doing it on 100,000. Also, some old browsers have problems when working with a huge DOM tree--for example, scrolling through a table with hundreds of thousands of rows may be unacceptably slow.
A good solution to this is to buffer the view. Basically, you only show the elements that are visible on the screen at any given moment, and when the user scrolls, you remove the elements that get hidden, and show the ones that get revealed. This way, the number of DOM nodes in the tree is relatively constant, but you don't really lose anything.
Another similar solution to this is to implement a cap on the number of messages that are shown at any given time. This way, any messages past, say, 100 get removed, and to see them you need to click a button or link that shows more. This is sort of what Facebook does with their profiles, if you need a reference.
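A bare-bones sketch of that cap (the container id and the limit are assumptions): keep at most a fixed number of chat nodes in the DOM and drop the oldest ones as new messages arrive.

var MAX_MESSAGES = 100;
var chatLog = document.getElementById("chat-log"); // hypothetical container

function appendMessage(text) {
    var node = document.createElement("div");
    node.textContent = text;
    chatLog.appendChild(node);

    // Remove old messages so the node count stays roughly constant.
    while (chatLog.children.length > MAX_MESSAGES) {
        chatLog.removeChild(chatLog.firstChild);
    }
}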
Problems with extreme DOM usage can boil down to performance. DOM scripting is very expensive, so constantly accessing and manipulating the DOM can result in poor performance (and a poor user experience), particularly when the number of elements becomes very large.
Consider HTML collections such as document.getElementsByTagName('div'), for example. This is a query against the document and it will be reexecuted every time up-to-date information is required, such as the collection's length. This could lead to inefficiencies. The worst cases will occur when accessing and manipulating collections inside loops.
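For example (illustrative only), caching the live collection's length up front avoids re-querying it on every iteration, which can matter in older engines:

var divs = document.getElementsByTagName('div');

// Touches the live collection on every iteration.
for (var i = 0; i < divs.length; i++) {
    divs[i].style.backgroundColor = 'red';
}

// Caches the length (or copy the collection into an array) before looping.
for (var j = 0, len = divs.length; j < len; j++) {
    divs[j].style.backgroundColor = 'red';
}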
There are many considerations and examples, but like anything it depends on the application.

Performance of setting img src to unchanged value?

If I have an img tag like
<img src="example.png" />
and I set it via
myImg.src = "example.png";
to the same value again, will this be a no-op, or will browsers unnecessarily redraw the image? (I'm mainly interested in the behaviour of IE6-8, FF3.x, Safari 4-5 and Chrome.)
I need to change many (hundreds of) images at once, and manually comparing the src attribute might be a little superfluous - as I assume the browser already does this for me?
Don't assume the browser will do it for you. I am working on a project of similar scale which requires hundreds of (dynamic-loading) images, with speed as the top priority.
Caching the 'src' property of every element is highly recommended. It is expensive to read and set hundreds of DOM element properties, and even setting src to the same value may cause reflow or paint events.
[Edit] The majority of sluggishness in the interface was due to all my loops and processes. Once those were optimized, the UI was very snappy, even when continuously loading hundreds of images.
[Edit 2] In light of your additional information (the images are all small status icons), perhaps you should consider simply declaring a class for each status in your CSS. Also, you might want to look into using cloneNode and replaceChild for a very quick and efficient swap.
[Edit 3] Try absolutely-positioning your image elements. It will limit the amount of reflow that needs to happen, since absolutely-positioned elements are outside of the flow.
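A sketch of that src caching (not a drop-in; it assumes each status icon has a unique id): remember the last value assigned to each image and skip the DOM write when nothing changed.

var srcCache = {}; // keyed by image id

function setImageSrc(img, newSrc) {
    if (srcCache[img.id] === newSrc) {
        return; // avoid touching the DOM at all
    }
    srcCache[img.id] = newSrc;
    img.src = newSrc;
}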
When you change a bunch of elements at once, you're usually blocking the UI thread anyway, so only one redraw after the JavaScript completes is happening, meaning the per-image redraw really isn't a factor.
I wouldn't double check anything here, let the browser take care of it, the new ones are smart enough to do this in an efficient way (and it's never really been that much of a problem anyway).
The case you'll see here is new images loading and reflowing the page as they load; that's what's expensive. Existing images are very minor compared to that cost.
I recommend using CSS Sprite technique. More info at: http://www.alistapart.com/articles/
You can use an image that contains all the icons. Then instead of changing the src attribute, you update the background property.
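A hypothetical sprite-based swap (the offsets and status names are invented): every icon lives in one sprite image set as the element's background, and a status change only moves the background-position.

var ICON_OFFSETS = { ok: "0 0", warning: "-16px 0", error: "-32px 0" };

function setStatusIcon(el, status) {
    // el is assumed to have background-image: url(sprite.png) in the stylesheet.
    el.style.backgroundPosition = ICON_OFFSETS[status];
}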
